
Attunity Integration Suite

AIS User Guide and Reference Version 5.1


AIS5100

March 2008

AIS User Guide and Reference, Version 5.1
AIS5100
Copyright © March 2008, Attunity Ltd. All rights reserved.
Primary Authors: David Goldman, Andre Liss, Jeanne Wiegelmann

Contributors: Yishai Hadas, Dror Harari, Tzachi Nissim, Adeeb Massad, Costi Zaboura, Sami Zeitoun, Gadi Farhat, Arie Kremer

The Programs (which include both the software and documentation) contain proprietary information; they are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright, patent, and other intellectual and industrial property laws. Reverse engineering, disassembly, or decompilation of the Programs, except to the extent required to obtain interoperability with other independently created software or as specified by law, is prohibited.

The information contained in this document is subject to change without notice. If you find any problems in the documentation, please report them to us in writing. This document is not warranted to be error-free. Except as may be expressly permitted in your license agreement for these Programs, no part of these Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose.

If the Programs are delivered to the United States Government or anyone licensing or using the Programs on behalf of the United States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the Programs, including documentation and technical data, shall be subject to the licensing restrictions set forth in the applicable Attunity license agreement, and, to the extent applicable, the additional rights set forth in FAR 52.227-19, Commercial Computer Software - Restricted Rights (June 1987). Attunity Ltd., 70 Blanchard Road, Burlington, MA 01803.

The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently dangerous applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup, redundancy and other measures to ensure the safe use of such applications if the Programs are used for such purposes, and we disclaim liability for any damages caused by such use of the Programs.

Attunity is a registered trademark of Attunity Ltd and/or its affiliates. Other names may be trademarks of their respective owners.

The Programs may provide links to Web sites and access to content, products, and services from third parties. Attunity is not responsible for the availability of, or any content provided on, third-party Web sites. You bear all risks associated with the use of such content. If you choose to purchase any products or services from a third party, the relationship is directly between you and the third party. Attunity is not responsible for: (a) the quality of third-party products or services; or (b) fulfilling any of the terms of the agreement with the third party, including delivery of products or services and warranty obligations related to purchased products or services. Attunity is not responsible for any loss or damage of any sort that you may incur from dealing with any third party.

Contents
Send Us Your Comments ............ li

Preface ............ liii
Audience ............ liii
Organization ............ liv
Related Documentation ............ liv
Conventions ............ liv

What's New ............ lix


Continuous CDC ............ lix
Bulk Data Performance Improvements ............ lix
64-Bit Support for AIS Server on Windows ............ lix
64-Bit Thin ODBC Clients ............ lx
New ODBC DSN Setup Wizard ............ lx
Enhancements for the ADO.NET Client ............ lx
Update Index Statistics Improvements ............ lx
Better Support for Oracle Stored Procedures ............ lxi
Support for Exact Arithmetic ............ lxi
Improvements to Attunity Studio ............ lxi

Part I Getting Started with AIS

1 Introducing the Attunity Integration Suite


AIS Overview ............ 1-1
AIS Use ............ 1-2
    SQL Connectivity ............ 1-2
    Application Connectivity ............ 1-2
    Adapters ............ 1-3
    Change Data Capture ............ 1-3
Attunity Integration Suite Supported Systems and Resources ............ 1-3
    Operating Systems ............ 1-3
    Data Sources and Adapters ............ 1-4
    Interfaces ............ 1-4

2 Setting up Attunity Connect, Stream, and Federate


Overview ............ 2-1
Opening Attunity Studio ............ 2-2
Setting up Machines ............ 2-3
    Using an Offline Design Machine to Create Attunity Definitions ............ 2-4
Administration Authorization ............ 2-5
License Management ............ 2-7
    Registering a Product ............ 2-7
    Viewing License Information ............ 2-8
Importing and Exporting XML in Attunity Studio ............ 2-9

3 Binding Configuration
Binding Configuration Overview ............ 3-1
    Server Binding ............ 3-1
    Client Binding ............ 3-2
Setting up Bindings in Attunity Studio ............ 3-2
    Adding Bindings ............ 3-2
    Editing Bindings ............ 3-3
        Setting the Binding Environment in Attunity Studio ............ 3-3
        Defining Remote Machines in a Binding ............ 3-4
Binding Syntax ............ 3-6
    <remoteMachines> Statement ............ 3-7
    <remoteMachine> Statement ............ 3-7
    <adapters> Statement ............ 3-8
    <adapter> Statement ............ 3-8
        <config> Statement ............ 3-8
    <datasources> Statement ............ 3-9
    <datasource> Statement ............ 3-9
        <config> Statement ............ 3-11
Sample Binding ............ 3-11
Environment Properties ............ 3-12
    Debug ............ 3-13
    General ............ 3-14
    Language ............ 3-15
    Modeling ............ 3-16
    ODBC ............ 3-17
    OLE DB ............ 3-17
    Optimizer ............ 3-18
    Parallel Processing ............ 3-20
    Query Processor ............ 3-20
    Temp Features ............ 3-22
    Transaction ............ 3-23
    Tuning ............ 3-23
    XML ............ 3-24
    Languages ............ 3-25
        ARA (Arabic) ............ 3-25
        ENG (English) ............ 3-26


        FR (French) ............ 3-26
        GER (German) ............ 3-26
        GREEK (Greek) ............ 3-26
        HEB (Hebrew) ............ 3-26
        JPN (Japanese) ............ 3-26
        KOR (Korean) ............ 3-26
        SCHI (Simple Chinese) ............ 3-26
        SPA (Spanish) ............ 3-27
        TCHI (Traditional Chinese) ............ 3-27
        TUR (Turkish) ............ 3-27
    Sample Environment Properties ............ 3-27

4 Setting up Daemons
Daemons ............ 4-1
Defining Daemons at Design Time ............ 4-1
    Adding a Daemon ............ 4-2
    Editing a Daemon ............ 4-3
        Control ............ 4-3
        Logging ............ 4-5
        Security ............ 4-8
        Administering Selected User Only Lists ............ 4-9
Reloading Daemon Configurations at Runtime ............ 4-10
    Editing Daemon Configurations ............ 4-10
Checking the Daemon Status ............ 4-10
    Checking the Daemon Status with Attunity Studio ............ 4-10
Starting and Stopping Daemons ............ 4-11
    Starting a Daemon in Attunity Studio ............ 4-11
    Shutting Down a Daemon in Attunity Studio ............ 4-11
Sample Daemon Configuration ............ 4-11
Adding and Editing Workspaces ............ 4-12
    Adding a Workspace ............ 4-12
    Editing a Workspace ............ 4-15
        General ............ 4-16
        Server Mode ............ 4-19
        Security ............ 4-22
    Selecting a Binding Configuration ............ 4-24
    Disabling a Workspace ............ 4-25
    Setting Workspace Authorization ............ 4-25

5 Managing Metadata
Data Source Metadata Overview ............ 5-1
Importing Metadata ............ 5-2
    Importing Metadata Using an Attunity Studio Import Wizard ............ 5-2
    Importing Metadata Using a Standalone Utility ............ 5-3
Managing Metadata ............ 5-3
Using Attunity Metadata with AIS Supported Data Sources ............ 5-4


    Extended Native Data Source Metadata ............ 5-4
    Native Metadata Caching ............ 5-4
Procedure Metadata Overview ............ 5-5
Importing Procedure Metadata Using the Import Wizard ............ 5-5
Procedure Metadata Statements ............ 5-6
    The <procedure> Statement ............ 5-6
        Syntax ............ 5-6
        <procedure> Attributes ............ 5-7
    The <parameters> Statement ............ 5-8
        Syntax ............ 5-8
    The <dbCommand> Statement ............ 5-8
        Syntax ............ 5-8
    The <fields> Statement ............ 5-9
        Syntax ............ 5-9
    The <field> Statement ............ 5-9
        Syntax ............ 5-9
        <field> Attributes ............ 5-10
    The <group> Statement ............ 5-10
        Syntax ............ 5-10
        <group> Attributes ............ 5-11
    The <variant> Statement ............ 5-11
        Variant without selector ............ 5-11
        Variant with selector ............ 5-12
        ADD Syntax ............ 5-13
        Usage Notes ............ 5-13
        Resolving Variants in Attunity Studio ............ 5-13
    The <case> Statement ............ 5-14
        Syntax ............ 5-14
        <case> Attributes ............ 5-14
ADD Supported Data Types ............ 5-15
ADD Syntax ............ 5-23
    The <table> Statement ............ 5-24
        Syntax ............ 5-24
        Table Attributes ............ 5-25
    The <dbCommand> Statement ............ 5-29
        Syntax ............ 5-29
        Examples ............ 5-29
    The <fields> Statement ............ 5-29
        Syntax ............ 5-30
    The <field> Statement ............ 5-30
        Syntax ............ 5-30
        Example ............ 5-30
        Field Attributes ............ 5-30
    The <group> Statement ............ 5-35
        Syntax ............ 5-35
        Example ............ 5-35
        Group Attributes ............ 5-35


    The <variant> Statement ............ 5-38
        Variant without selector ............ 5-38
        Variant with selector ............ 5-39
        Usage Notes ............ 5-40
        Resolving Variants in Attunity Studio ............ 5-40
        Variant Attributes ............ 5-40
    The <case> Statement ............ 5-41
        Syntax ............ 5-41
        Case Attributes ............ 5-41
    The <keys> Statement ............ 5-42
        Syntax ............ 5-42
        Example ............ 5-43
    The <key> Statement ............ 5-43
        Syntax ............ 5-43
        Key Attributes ............ 5-43
    The <segments> Statement ............ 5-46
        Syntax ............ 5-46
    The <segment> Statement ............ 5-46
        Syntax ............ 5-46
        Segment Attributes ............ 5-46
    The <foreignKeys> Statement ............ 5-47
        Syntax ............ 5-47
    The <foreignKey> Statement ............ 5-47
        Syntax ............ 5-48
        Example ............ 5-48
        foreignKey Attributes ............ 5-48
    The <primaryKey> Statement ............ 5-49
        Syntax ............ 5-49
    The <pKeySegments> Statement ............ 5-49
        Syntax ............ 5-49
        Example ............ 5-49
        pKeySegment Attributes ............ 5-50

6 Working with Metadata in Attunity Studio


Overview ............ 6-1
Managing Data Source Metadata ............ 6-1
    General Tab ............ 6-2
    Columns Tab ............ 6-4
        Column Definition Section ............ 6-5
        Column Properties ............ 6-6
    Indexes Tab ............ 6-8
        Table Information ............ 6-8
        Properties ............ 6-9
    Statistics Tab ............ 6-9
        Table ............ 6-10
        Columns ............ 6-10
        Indexes ............ 6-11


        Update Button ............ 6-11
    Modelling Tab ............ 6-12
Importing Data Source Metadata with the Attunity Import Wizard ............ 6-14
    Starting the Import Process ............ 6-14
    Selecting the Input Files ............ 6-15
    Applying Filters ............ 6-17
    Selecting Tables ............ 6-19
    Import Manipulation ............ 6-19
        Import Manipulation Screen ............ 6-20
        Field Manipulation Screen ............ 6-21
    Metadata Model Selection ............ 6-27
    Import the Metadata ............ 6-29
Working with Application Adapter Metadata ............ 6-30
    Adapter Metadata General Properties ............ 6-31
    Adapter Metadata Schema Records ............ 6-32
        Editing an Existing Schema Definition ............ 6-34
    Adapter Metadata Interactions ............ 6-34
        Editing an Existing Interaction ............ 6-35
        Interaction Advanced Tab ............ 6-36
Working with Procedure Metadata ............ 6-36
    Manually Creating Procedure Metadata ............ 6-37
    Managing Procedure Metadata ............ 6-37
    Importing Procedure Metadata ............ 6-37

7 Handling Arrays
Overview of Handling Arrays ............ 7-1
Representing Metadata ............ 7-1
Methods of Handling Arrays ............ 7-5
    Columnwise Normalization ............ 7-5
    Virtual Tables ............ 7-7
    Virtual Views ............ 7-10
    Sequential Flattening (Bulk Load of Array Data) ............ 7-11
    ADO/OLE DB Chapters ............ 7-14
        Chapter Handling in Query and Database Adapters ............ 7-16
    XML ............ 7-17

8 Using SQL
Overview of Using SQL ............ 8-1
Batching SQL Statements ............ 8-1
Hierarchical Queries ............ 8-2
    Generating Hierarchical Results Using SQL ............ 8-3
    Accessing Hierarchical Data Using SQL ............ 8-4
        Examples ............ 8-4
    Flattening Hierarchical Data Using SQL ............ 8-5
        Using an Alias ............ 8-6
        Examples ............ 8-6
    Using Virtual Tables to Represent Hierarchical Data ............ 8-9

        Creating Virtual Tables ............ 8-10
    Hierarchical Queries From an Application ............ 8-10
        Drill-down Operations in an ADO Application ............ 8-11
        Drill-down Operations in an ODBC Application ............ 8-11
        ODBC Drill-down Operations Using RDO ............ 8-12
        ODBC Drill-down Operations Using C ............ 8-14
        Drill-down Operations in a Java Application ............ 8-16
Copying Data From One Table to Another ............ 8-17
Passthru SQL ............ 8-17
    For a Specific SQL Statement ............ 8-18
        Via ADO ............ 8-18
        Via RDO and DAO ............ 8-19
    For all SQL During a Session ............ 8-20
        Via ADO/OLE DB ............ 8-20
        Via ODBC ............ 8-20
    Passthru Queries as Part of an SQL Statement ............ 8-21
Writing Queries Using SQL ............ 8-22
    Writing Efficient SQL ............ 8-22
Locking Considerations ............ 8-23
    Locking Modes ............ 8-23
        Optimistic Locking ............ 8-23
        Pessimistic Locking ............ 8-23
        No Locking ............ 8-23
    ODBC Locking Considerations ............ 8-23
    ADO Locking Considerations ............ 8-24
Managing the Execution of Queries over Large Tables ............ 8-24
Optimizing Outer Joins ............ 8-25
    Limitations ............ 8-26
    Query Optimization ............ 8-26
    Property ............ 8-26
        Changing the Property ............ 8-26

9 Working with Web Services


Web Services Overview ............ 9-1
Preparing to use Web Services ............ 9-1
    Web Services Prerequisites ............ 9-1
    Setting up Attunity Studio to Work with Web Services ............ 9-1
Deploying an Adapter as a Web Service ............ 9-2
    Connection Information for the Axis Servlet ............ 9-3
    Define a new Web Service for an Adapter ............ 9-3
        The General Tab ............ 9-5
        The Pooling Tab ............ 9-5
        The Map Tab ............ 9-6
    Select the Interactions ............ 9-7
    Summary Window ............ 9-8
Undeploying Web Services ............ 9-9
Viewing Web Services ............ 9-9


Logging Web Service Activities ............ 9-10
    Changing the Log File Location ............ 9-10
    Changing the Error Message Level ............ 9-10
    Changing the Error Message Format ............ 9-11

Part II Attunity Connect

10 Introduction to Attunity Connect


Overview of Attunity Connect ............ 10-1
Logical Architecture ............ 10-1
System Components and Concepts ............ 10-1
    Data Engine ............ 10-2
        Query Optimizer ............ 10-2
        Data Sources ............ 10-3
        Interfaces and APIs ............ 10-4
        Transaction Support ............ 10-4
    Application Engine ............ 10-5
        Application Adapters ............ 10-5
        Interfaces and APIs ............ 10-5
        Events ............ 10-6
    Attunity Server ............ 10-6
    Attunity Studio ............ 10-6
        Design Time ............ 10-6
        Runtime ............ 10-6
    Metadata Repository ............ 10-7
        System Repository ............ 10-7
        Data Source Repositories ............ 10-7
    Attunity Configuration Model ............ 10-7
        Daemons ............ 10-7
        Client Communication Software ............ 10-8
        Server Communication Software ............ 10-8

11 Attunity Integration Suite Architecture Flows


Overview ............ 11-1
Data Source Architecture ............ 11-1
    Data Source ............ 11-2
    Query Engine Flow ............ 11-3
        Making a Request between Two Relational Databases ............ 11-4
        Making a Request between Two Non-Relational Databases ............ 11-5
        Making a Request between a Relational Data Source and a Non-Relational Data Source ............ 11-5
Application Adapter ............ 11-6
Change Data Capture (CDC) Flow ............ 11-7
Database and Query Adapter ............ 11-8


12 Implementing a Data Access Solution


Overview ............ 12-1
Setting Up AIS for Data Access ............ 12-2
Installing AIS ............ 12-2
    Install AIS on the Backend ............ 12-2
    Install the Attunity Server Software ............ 12-2
    Install Attunity Studio ............ 12-2
Configuring the System for Data Access (Using Studio) ............ 12-3
    Configure the Machines ............ 12-3
    Configure User Profiles ............ 12-3
    Configure the Binding ............ 12-3
    Configure the Data Sources in the Binding ............ 12-3
    Set Up the Data Source Metadata ............ 12-3
Supported Interfaces ............ 12-4
Data Access Flow ............ 12-5
Data Source Metadata ............ 12-5
Configuring the System for Data Access Using XML ............ 12-6
    Set up the Environment in Attunity Studio ............ 12-6
    Expose the XML Field ............ 12-6
    Set up the Data Source and Import the Metadata ............ 12-7
    Prepare an Input XML Structure ............ 12-7
Setting up the Query and Database Adapter for XML Operations ............ 12-7
    Using the Query Adapter for XML Operations ............ 12-7
    The Input XML Record ............ 12-8

13 Setting up Data Sources and Events with Attunity Studio


Data Sources ............ 13-1
    Adding Data Sources ............ 13-1
    Configuring Data Source Advanced Properties ............ 13-2
    Testing a Data Source ............ 13-4
    Creating a Data Source Shortcut (Optional) ............ 13-4
    Testing Data Source Shortcuts (Optional) ............ 13-7
Events ............ 13-8
    Adding Event Queues ............ 13-8
    Defining Metadata for Event Queues ............ 13-9

14 Procedure Data Sources


Procedure Data Sources Overview ............ 14-1
Configuring the Procedure Data Source ............ 14-1
    Adding Procedure Data Sources ............ 14-2
        Configuring Data Source Advanced Properties ............ 14-3
    Defining a Shortcut to a Procedure on Another Machine ............ 14-5
    Defining the Procedure Metadata ............ 14-6


15 Implementing an Application Access Solution


Overview ............ 15-1
Setting up AIS for Application Access ............ 15-2
    Installing System Components ............ 15-2
    Configuring the System for Application Access (Using Studio) ............ 15-3
Supported APIs ............ 15-3
Application Access Flow ............ 15-4
Defining the Application Adapter ............ 15-4
ACX Protocol ............ 15-5
Transaction Support ............ 15-6
Generic and Custom Adapters ............ 15-6
    Developing an Application Adapter in AIS ............ 15-7
        MyApp: An Application Adapter Example ............ 15-7

16 Setting Up Adapters
Setting up Adapters Overview ............ 16-1
Working with Adapters ............ 16-1
    Adding Application Adapters ............ 16-1
    Configuring Application Adapters ............ 16-2
    Testing Application Adapters ............ 16-3

17 Application Adapter Definition


Overview ............ 17-1
The adapter Element ............ 17-2
The interaction Element ............ 17-3
The schema Element ............ 17-4
The enumeration Element ............ 17-5
The record Element ............ 17-5
    Defining Hierarchies ............ 17-6
The variant record Element ............ 17-6
    Variant without Selector ............ 17-6
    Variant with Selector ............ 17-7
The field Element ............ 17-7

Part III Attunity Stream

18 What is the Attunity Stream CDC Solution


CDC Solution Overview ............ 18-1
The Attunity Stream CDC Architecture ............ 18-2
    The Staging Area ............ 18-4
    Handling Before and After Images ............ 18-4
    Tracking Changes - Auditing ............ 18-5
    Security Considerations ............ 18-5
What Can Be Captured? ............ 18-5


19 Implementing a Change Data Capture Solution


Overview ............ 19-1
Setting up AIS to Create a Change Data Capture ............ 19-2
    Installing System Components ............ 19-2
    Configuring the System for Change Data Captures (Using Studio) ............ 19-2
    Generated CDC Components ............ 19-3
    Handling Arrays Defined in the Source Data ............ 19-3
CDC System Architecture ............ 19-4
CDC Adapter Definition ............ 19-5
    CDC Agent Metadata Definition Description ............ 19-6
        Interactions ............ 19-6
        Schema ............ 19-6
CDC Streams ............ 19-6
Transaction Support ............ 19-7
Troubleshooting ............ 19-8

20 SQL-Based CDC Methodologies


Overview ............ 20-1
    Components ............ 20-2
        CDC Agent ............ 20-2
        SQL-based Change Router ............ 20-3
        Staging Area Server ............ 20-3
Configuration Parameters ............ 20-3
SQL Access to Change Events ............ 20-4
    Change Tables ............ 20-5
    The STREAM_POSITION Table ............ 20-7
Reading the Change Tables ............ 20-7
    Reading Change Tables Continuously ............ 20-8
Referential Integrity Considerations ............ 20-11
Monitoring the Change Data Capture ............ 20-13
    Service Context Table ............ 20-13
    Monitoring the Status ............ 20-14
Error Handling ............ 20-16
    Determining the Change Router Error Behavior ............ 20-16
    CONTROL_TABLE ............ 20-16
    subscribeAgentLog Configuration Properties ............ 20-17
Performance Considerations ............ 20-17
    Memory Parameters ............ 20-17
    Latency ............ 20-18
Capacity Planning ............ 20-18
    Storage ............ 20-18
    Memory ............ 20-19
    Network ............ 20-19
    Processing ............ 20-20
Applying Metadata Changes ............ 20-20
    Design-time Metadata Change Procedure ............ 20-21


    Production Metadata Change Procedure ............ 20-21
Migration from XML-based CDC ............ 20-21

21 Creating a CDC with the Solution Perspective


Using the Solution Perspective ............ 21-1
    Using Views in the Solution Perspective ............ 21-1
Getting Started Guide ............ 21-2
    Creating a New Project ............ 21-2
    Opening an Existing Project ............ 21-4
    Opening Recent Projects ............ 21-5
Project Guide ............ 21-5
    Design Wizard ............ 21-5
    Implementation Guide ............ 21-8
        Machine ............ 21-9
        Data Source ............ 21-10
        Metadata ............ 21-10
        CDC Service ............ 21-19
        Access Service Manager ............ 21-20
        Stream Service ............ 21-23
    Deployment Guide ............ 21-30
Troubleshooting ............ 21-35

Part IV Attunity Federate

22 What is Attunity Federate


Overview ............ 22-1
The Data Engine ............ 22-1
    Query Optimizer ............ 22-2
    Performance Tuning Tools ............ 22-2
    Front End APIs ............ 22-2
    Data Source Drivers ............ 22-3
Base Services ............ 22-4
    Attunity Integration Suite ............ 22-4
        AIS Installation Wizards ............ 22-4
    Attunity Internal Storage (The Repository) ............ 22-4
        The System Repository ............ 22-4
        Data Source Repository ............ 22-4
    Attunity Studio: Configuration and Management GUI ............ 22-5
    Client/Server Communication and the Attunity Daemon ............ 22-5
        Daemons ............ 22-5
        Client Communication Software ............ 22-5
        Server Communication Software ............ 22-6

23 Using a Virtual Database


Virtual Database Overview ............ 23-1
Defining a Virtual Database ............ 23-2


Metadata Considerations ............ 23-3
Defining Tables ............ 23-3
Creating Synonyms ............ 23-3
Defining Stored Procedures ............ 23-4
Creating Views ............ 23-6
Using a Virtual Database ............ 23-8

24 Segmented Data Sources


Overview ............ 24-1
Creating Segmented Data Sources ............ 24-1
    Adding a Data Source to a Binding ............ 24-1
    Creating a Data Source Shortcut ............ 24-2
    Adding a Segmented Data Source to a Binding ............ 24-3
Environmental Properties for Segmented Data Sources ............ 24-4
Using a Segmented Data Source ............ 24-4

Part V Attunity Studio

25 Working with the Attunity Studio Workbench


Workbench Overview ............ 25-1
Using Workbench Parts ............ 25-2
    Welcome Screen ............ 25-2
    Main Screen ............ 25-3
    Main Menu Bar ............ 25-4
    Working with Perspectives ............ 25-4
        Solution Perspective ............ 25-4
        Design Perspective ............ 25-4
        Runtime Manager Perspective ............ 25-5
        Selecting a Perspective ............ 25-5
    Working with Views ............ 25-5
        Customizing Views ............ 25-6
        Getting Started View ............ 25-6
        Error Log View ............ 25-7
        Configuration View ............ 25-8
        Metadata View ............ 25-8
Workbench Icons ............ 25-9
    General ............ 25-9
    Actions ............ 25-10
    Objects ............ 25-12
    Manipulation ............ 25-15
Setting Attunity Studio Preferences ............ 25-16
    Studio ............ 25-16
        Configuration ............ 25-18
        Metadata ............ 25-19
        Runtime Manager ............ 25-20
    Keys ............ 25-21


        Default Keyboard Shortcuts ............ 25-23

Part VI Operation and Maintenance

26 AIS Runtime Tasks from the Command Line


Overview ............ 26-1
Starting and Stopping Daemons ............ 26-1
    Starting a Daemon ............ 26-1
        Enabling Automatic Startup ............ 26-1
        Manually Starting a Daemon on HP NonStop, OpenVMS, OS/400, UNIX, and Windows Platforms ............ 26-2
        Manually Starting a Daemon on z/OS Systems ............ 26-4
        Starting Multiple Daemons ............ 26-5
    Stopping a Daemon ............ 26-5
        Shutting Down a Daemon Using Attunity Studio ............ 26-5
        Shutting Down a Daemon Using the Command Line ............ 26-6
        Disabling a Workspace ............ 26-7
    Checking the Daemon ............ 26-7
Managing Daemon Configurations ............ 26-8
    Daemon Configuration Groups ............ 26-8
    Adding and Editing Daemon Configurations ............ 26-9
    Adding and Editing Workspaces ............ 26-9
    Configuring Logging ............ 26-9

27 Runtime Management with Attunity Studio


Overview ............ 27-1
Runtime Explorer View ............ 27-1
    Runtime Explorer Tasks ............ 27-2
        Adding a Daemon ............ 27-2
        Daemon Tasks ............ 27-4
        Workspace Tasks ............ 27-5
        Server Tasks ............ 27-5
    Viewing Logs ............ 27-5
        Working with the Event Monitor ............ 27-7
    Viewing Events ............ 27-8
    Daemon Properties ............ 27-10
Error Log View ............ 27-12
    Displaying the Error Log View ............ 27-12
    Error Log View Tasks ............ 27-12
        Clearing the Error Log ............ 27-13
        Deleting the Error Log ............ 27-13
        Opening a Log File ............ 27-13
        Restoring the Error Log ............ 27-14
        Exporting Errors to a Log File ............ 27-14
        Importing a Log File ............ 27-14
        Viewing the Event Details ............ 27-14


28 Managing Security
Overview of Attunity Security ............ 28-1
Managing Design Time Security ............ 28-1
    Local Access to AIS Design-Time Resources ............ 28-1
    Remote Access to AIS Design-Time Resources ............ 28-2
        Design Roles ............ 28-2
        Assigning Design Roles ............ 28-2
    Password Handling in Attunity Studio ............ 28-3
        Setting Up the Password Caching Policy in Attunity Studio ............ 28-3
        Assigning Authorization Rights to a Workspace ............ 28-4
        Setting Up a Master Password for a User ............ 28-4
Managing Runtime Security ............ 28-5
    User Profiles ............ 28-6
        Setting a Master Password for a User Profile ............ 28-6
        Using a Client User Password ............ 28-7
        Using a Server User Profile ............ 28-7
    Managing a User Profile in Attunity Studio ............ 28-9
        Adding a User ............ 28-9
        Add Authenticators ............ 28-11
        Add Encryption Keys ............ 28-12
        Editing a User Profile ............ 28-13
        Remove an Authenticator or Encryption Key ............ 28-14
    Client Authentication ............ 28-14
        Client Authentication for Thin Clients ............ 28-15
        Client Authentication for Fat Clients ............ 28-15
    Client Authorization and Access Restriction ............ 28-15
        Restricting Access to a User by Login User Name ............ 28-15
        Restricting Access to Data with a Virtual Database ............ 28-17
    Transport Encryption ............ 28-17
    Encrypting Network Communications ............ 28-18
        Setting a Client Encryption Protocol ............ 28-18
        Configuring Encrypted Communication ............ 28-19
        Configuring the Encryption Key on the Server Machine ............ 28-20
    Firewall Support ............ 28-22
    Accessing a Server through a Firewall ............ 28-23
        Selecting a Port Range for Workspace Servers ............ 28-24
        Accessing a Server Using Fixed NAT ............ 28-24
    Dynamic Credentials ............ 28-25
        Providing Credentials in the Connection String ............ 28-25
        Interactively Prompting for Credentials ............ 28-26
    Setting Up Impersonation ............ 28-26
        Setting Up Impersonation for DB2 ............ 28-28
    Granting Daemon Administration Rights to Users ............ 28-28
    Granting Workspace Administration Rights to Users ............ 28-30


29 Backing Up AIS
Overview of the AIS Backup Process ............ 29-1
Backing Up and Restoring AIS Server Installation ............ 29-1
Backing Up and Restoring AIS Server Metadata ............ 29-2
Backing Up and Restoring AIS Server Scripts ............ 29-3
Backing Up and Restoring AIS Server Data ............ 29-3
Backing Up and Restoring AIS Studio Metadata ............ 29-3

30 Transaction Support
Overview ............ 30-1
Using Attunity Connect as a Stand-alone Transaction Coordinator ............ 30-2
Attunity Connect Data Source Driver Capabilities ............ 30-3
    Data Sources That Do Not Support Transactions ............ 30-3
    Data Sources with One-Phase Commit Capability ............ 30-3
    Data Sources with Two-Phase Commit Capability ............ 30-4
    Relational Database Procedures ............ 30-4
Distributed Transactions ............ 30-4
    Transaction Log File ............ 30-5
    CommitConfirm Table ............ 30-6
Recovery ............ 30-7
    Recovery Utility Toolbar ............ 30-9
Platform Specific Information ............ 30-9

31 Troubleshooting in AIS
Troubleshooting Overview ............ 31-1
Product Flow Maps ............ 31-1
    Local Data Access Scenario ............ 31-2
    Remote Data Access Scenario ............ 31-3
Using the Product Flow Maps for Troubleshooting ............ 31-5
    SQL Application/SQL API Issues (A1, B1) ............ 31-5
        ADO/OLEDB ............ 31-5
        JDBC ............ 31-6
    SQL API Issues/Query Processor Issues (A2, B2F) ............ 31-6
Troubleshooting Methods ............ 31-6
    Using the NAV_UTIL CHECK SERVER Utility ............ 31-6
    Using the NAV_UTIL CHECK DATASOURCE Utility ............ 31-7
    Using Trace Log Files ............ 31-8
        Log Traces ............ 31-8
    Using Extended Logging Options ............ 31-9
    New Log Entries ............ 31-9
    Identifying Nodes ............ 31-10
    Configuring the Optimizer Trace File ............ 31-10
    Reading the Log ............ 31-11
    Configuring the Extended Logging Option ............ 31-11
    Configuring Advanced Environment Parameters ............ 31-12
Common Errors and Solutions ............ 31-13


Part VII Utilities

32 Using Attunity SQL Utility


Overview ............ 32-1
Using the SQL Utility ............ 32-1
Connecting to an Attunity Server via the SQL Utility ............ 32-2
Specifying and Executing Queries ............ 32-2
    Working with Parameterized Queries ............ 32-3
Modifying Data in a Recordset ............ 32-4
Working with Chapters ............ 32-4
    Modifying Chapters ............ 32-5
    Specifying Recordset Properties ............ 32-5
    Working with Schemas ............ 32-7

33 Using the Attunity XML Utility


Overview ............ 33-1
Connecting with XML ............ 33-2
    Connect Properties ............ 33-2
    Connect Commands ............ 33-3
        Metadata Browser ............ 33-4
        Events Listener ............ 33-5
        Set Encryption ............ 33-6
Executing with XML ............ 33-6
    Creating an XML Request ............ 33-7
        XML Input ............ 33-7
        XML Output ............ 33-8

34 Attunity Query Tool


Query Tool Overview ............ 34-1
Getting Started with the Query Tool ............ 34-1
    Where You Can Execute Queries ............ 34-2
    Opening the Query Tool ............ 34-2
    The Query Builder Interface ............ 34-3
Using the Query Builder ............ 34-4
    The Tables Tab ............ 34-5
    The Columns Tab ............ 34-6
    The Where Tab ............ 34-7
    The Group Tab ............ 34-9
    The Having Tab ............ 34-10
    The Sort Tab ............ 34-12
Creating a Query Manually ............ 34-13
Managing Queries ............ 34-14
Using the Query Tool with Transactions ............ 34-16


35 SQL Explain Utility


SQL Explain Utility Overview ............ 35-1
Activating the SQL Explain Utility ............ 35-1
    Using the Explain Command in NAV_UTIL ............ 35-1
    Using the prepareSQL Interaction in the Query Adapter ............ 35-2
    Automatically Generate the File Using the analyzerQueryPlan Property ............ 35-3
XML Output ............ 35-4
    Tree Node Verbs ............ 35-4
    Other Verb Types ............ 35-5

36 Using Attunity Query Analyzer Utility


Overview ............ 36-1
Using the Query Analyzer ............ 36-1
    Viewing the Execution Plan for an SQL Query ............ 36-2
    Generating a Plan for Every SQL Statement ............ 36-2
The SQL Statement Plan ............ 36-2
The Query Analyzer Toolbar ............ 36-3
The Query Analyzer Icons ............ 36-4
Working with an Optimization Plan ............ 36-5

37 Using NAV_UTIL Utility


Overview ............ 37-1
Using the NAV_UTIL Command Line Utility ............ 37-1
    Running NAV_UTIL ............ 37-2
        Basic NAV_UTIL Syntax ............ 37-2
        Activating NAV_UTIL ............ 37-2
        Running NAV_UTIL from a Shell Environment ............ 37-3
        Running NAV_UTIL on a Java Machine ............ 37-4
    ADDON ............ 37-4
    ADD_ADMIN ............ 37-5
    AUTOGEN ............ 37-5
    CHECK ............ 37-5
        check irpcd ............ 37-6
        check network [port] ............ 37-6
        check irpcdstat ............ 37-6
        check tcpip ............ 37-7
        check server ............ 37-7
        check license ............ 37-8
        check datasource ............ 37-8
    CODEPAGE ............ 37-8
    DELETE ............ 37-9
        Deleting Data Source Objects ............ 37-9
    EDIT ............ 37-10
    EXECUTE ............ 37-13
        EXECUTE Overview ............ 37-13
        NavSQL Environment ............ 37-13


        Executing SQL Statements ............ 37-14
        NavSQL Commands ............ 37-15
    EXPORT ............ 37-16
    GEN_ARRAY_TABLES ............ 37-19
    IMPORT ............ 37-20
    IRPCDCMD ............ 37-21
    LOCAL_COPY ............ 37-21
    PASSWORD ............ 37-22
    PROTOGEN ............ 37-22
    REGISTER ............ 37-22
    SERVICE ............ 37-22
    SVC ............ 37-23
    TEST ............ 37-23
    UPDATE ............ 37-23
        Removing Metadata Statistics ............ 37-23
    UPD_DS ............ 37-25
    UPD_SEC ............ 37-25
    VERSION ............ 37-26
    VERSION_HISTORY ............ 37-26
    VIEW ............ 37-26
    XML ............ 37-28
        XML Samples ............ 37-29

38 Using Attunity BASIC Import Utility


Overview ............ 38-1
Using the Basic Import Utility ............ 38-1

Part VIII Data Source Reference

39 Adabas C Data Source


Overview ............ 39-1
    Supported Versions and Platforms ............ 39-2
    Supported Features ............ 39-2
    Limitations ............ 39-2
Functionality ............ 39-2
    Optimizing Adabas Queries ............ 39-3
        Statistical Information ............ 39-3
    Subdescriptors and Superdescriptors with Subfields ............ 39-3
        SQL That Will Not Use the S1 Descriptor ............ 39-4
        SQL That Will Use the S1 Descriptor ............ 39-4
    Phonetic-descriptors and Hyper-descriptors ............ 39-4
    Descriptors on MU and PE fields ............ 39-5
    Null Suppression Handling ............ 39-5
    Array Handling ............ 39-6
    Logical Tables ............ 39-7
    Locking Support ............ 39-8


Transaction Support ............ 39-8
Security ............ 39-8
Data Types ............ 39-8
Configuration Properties ............ 39-10
Platform-specific Information ............ 39-12
    UNIX Platforms ............ 39-12
        Verifying Environment Variables ............ 39-12
        Relinking to the Adabas Driver on UNIX Platforms ............ 39-13
        Accessing 64 Bit Adabas ............ 39-13
    z/OS Platforms ............ 39-13
        Specifying the Adabas SVC ............ 39-13
        Configuring AIS to Run in multiClient Mode ............ 39-13
Defining an Adabas Data Source ............ 39-17
    Defining the Adabas Data Source Connection ............ 39-17
    Configuring the Adabas Data Source Properties ............ 39-18
Setting Up Adabas Data Source Metadata (Using the Import Manager) ............ 39-19
    Selecting the DDM Declaration files ............ 39-19
    Applying Filters ............ 39-22
    Selecting Tables ............ 39-23
    Import Manipulation ............ 39-23
        Import Manipulation Screen ............ 39-24
        Field Manipulation Screen ............ 39-25
    Metadata Model Selection ............ 39-31
    Import the Metadata ............ 39-33
Setting Up Adabas Data Source Metadata (Traditional Method) ............ 39-34
    Importing Attunity Metadata from DDM Files ............ 39-34
    Exporting Predict Metadata into Adabas ADD ............ 39-35
Testing the Adabas Data Source ............ 39-35

40  DB2 Data Source


Overview ............ 40-1
    Supported Versions and Platforms ............ 40-1
Functionality ............ 40-1
    Stored Procedures ............ 40-1
    Isolation Levels and Locking ............ 40-2
        Update Semantics ............ 40-2
Configuration Properties ............ 40-2
Metadata ............ 40-4
Transaction Support ............ 40-4
    z/OS ............ 40-5
    OS/400 ............ 40-6
    UNIX and Windows ............ 40-6
        Configuring Transaction Support ............ 40-6
        Configuring the Shared Library Environment Variable ............ 40-7
Security ............ 40-7
DB2 Data Types ............ 40-7
Defining a DB2 Data Source ............ 40-8


    z/OS ............ 40-8
        Defining an ODBCINI file ............ 40-8
        Defining the Data Source Connection ............ 40-9
        Configuring the Data Source ............ 40-10
    OS/400 ............ 40-11
        Defining the Data Source Connection ............ 40-11
        Configuring the Data Source ............ 40-11
    UNIX and Windows ............ 40-13
        Defining the Data Source Connection ............ 40-13
        Configuring the Data Source ............ 40-13

41  CISAM/DISAM Data Source


Overview ............ 41-1
    Supported Features ............ 41-1
    Limitations ............ 41-1
Configuration Properties ............ 41-1
Transaction Support ............ 41-2
Data Types ............ 41-2
Defining a CISAM/DISAM Data Source ............ 41-3
    Defining the CISAM/DISAM Data Source Connection ............ 41-3
    Configuring the CISAM/DISAM Data Source ............ 41-4
Setting Up the CISAM/DISAM Data Source Metadata ............ 41-5

42  DBMS Data Source (OpenVMS Only)


Overview ............ 42-1
    Prerequisites ............ 42-2
Functionality ............ 42-2
    Locking ............ 42-2
        Update Semantics ............ 42-2
Configuration Properties ............ 42-3
Data Types ............ 42-3
Transaction Support ............ 42-4
Platform-specific Information ............ 42-4
    Database Model Mapping Requirements ............ 42-5
    Virtual Columns ............ 42-5
        Using Virtual Columns ............ 42-5
        Virtual Columns and Indexes ............ 42-12
        Virtual Column Categories ............ 42-14
    Accessing DBMS Data ............ 42-14
    DBMS Error Codes ............ 42-15
Defining the DBMS Data Source ............ 42-21
    Defining the DBMS Data Source Connection ............ 42-21
    Configuring the DBMS Data Source Properties ............ 42-22
Setting Up the DBMS Data Source Metadata ............ 42-23


43  Enscribe Data Source (HP NonStop Only)


Overview ............ 43-1
Functionality ............ 43-1
    Supported Versions and Platforms ............ 43-2
Configuration Properties ............ 43-2
Metadata ............ 43-3
Transaction Support ............ 43-3
Security ............ 43-4
Enscribe Data Types ............ 43-4
Defining the Enscribe Data Source ............ 43-4
    Defining the Enscribe Data Source Connection ............ 43-4
    Configuring the Enscribe Data Source ............ 43-5
Setting up the Enscribe Data Source Metadata ............ 43-6
    Importing Metadata from COBOL ............ 43-6
        Starting the Import Process ............ 43-7
    Importing Metadata Using the ADDIMP Utility ............ 43-8
    Importing Metadata Using the TALIMP Utility ............ 43-10
    Maintaining Metadata ............ 43-13
Testing the Enscribe Data Source ............ 43-13
    Sample Log File Explained ............ 43-15

44  Flat File Data Source


Configuration Properties ............ 44-1
Defining a Flat File Data Source ............ 44-1
    Defining the Flat File Data Source Connection ............ 44-1
    Configuring the Flat File Data Source ............ 44-2
Setting Up the Flat File Data Source Metadata ............ 44-3
    Importing Attunity Metadata from COBOL ............ 44-3
    Maintaining Attunity Metadata ............ 44-11

45  IMS/DB Data Sources


Overview ............ 45-1
    Supported Versions and Platforms ............ 45-1
    Supported Features ............ 45-2
    Environmental Prerequisites ............ 45-2
        IMS-DLI Prerequisites ............ 45-2
        IMS-DBCTL Prerequisites ............ 45-2
        IMS-DBDC Prerequisites ............ 45-2
    Limitations ............ 45-3
        General IMS Limitations ............ 45-3
        Limitations Specific to IMS/DLI ............ 45-4
        Limitations Specific to IMS-DBCTL ............ 45-4
        Limitations Specific to IMS/DBDC ............ 45-4
Functionality ............ 45-4
    Hierarchical Modelling ............ 45-4
    Constructing DLI Commands from SQL Requests ............ 45-5


        Selecting a PCB ............ 45-6
        DLI Samples ............ 45-6
Configuration Properties ............ 45-7
    IMS/DB DLI Configuration Properties ............ 45-7
    IMS/DB DBCTL Configuration Properties ............ 45-7
    IMS/DB DBDC Configuration Properties ............ 45-8
    Configuring Advanced Data Source Properties ............ 45-8
Transaction Support ............ 45-8
    Using Attunity Connect with One-phase Commit ............ 45-9
Hospital Database Example ............ 45-9
Defining the IMS/DB DLI Data Source ............ 45-11
    Defining the IMS/DB DLI Data Source Connection ............ 45-11
    Configuring the IMS/DB DLI Data Source ............ 45-12
    Setting Up the Daemon Workspace ............ 45-12
Defining the IMS/DB DBCTL Data Source ............ 45-13
    Defining the IMS/DB DBCTL Data Source Connection ............ 45-13
    Configuring the IMS/DB DBCTL Data Source ............ 45-14
    Accessing IMS/DB Data under CICS ............ 45-15
Defining the IMS/DB DBDC Data Source ............ 45-16
    Defining the IMS/DB DBDC Data Source Connection ............ 45-16
    Configuring the IMS/DB DBDC Data Source ............ 45-16
    Accessing IMS/DB Data under IMS/TM ............ 45-17
Setting Up IMS/DB Metadata ............ 45-18
    Selecting the Input Files ............ 45-19
    Applying Filters ............ 45-20
    Selecting Tables ............ 45-22
    Matching DBD to COBOL ............ 45-22
    Import Manipulation ............ 45-23
        Import Manipulation Screen ............ 45-23
        Field Manipulation Screen ............ 45-25
    Metadata Model Selection ............ 45-31
    Import the Metadata ............ 45-33

46  Informix Data Source


Overview ............ 46-1
Functionality ............ 46-1
    Supported Versions and Platforms ............ 46-2
SQL Capability ............ 46-2
    Stored Procedures ............ 46-4
        Limitations ............ 46-4
    Informix CLOB/BLOBs ............ 46-5
    Using Passthru Queries ............ 46-5
Configuration Properties ............ 46-5
Metadata ............ 46-5
    Statistics ............ 46-6
    Owner Support ............ 46-6
Transaction Support ............ 46-6


    Locking Levels ............ 46-6
    Isolation Levels ............ 46-7
Security ............ 46-8
Data Types ............ 46-8
Defining an Informix Data Source ............ 46-9
    Defining the Informix Data Source Connection ............ 46-9
    Configuring the Informix Data Source Properties ............ 46-10
Testing the Informix Data Source ............ 46-11
    Sample Log File Explained ............ 46-13

47  Ingres II (Open Ingres) Data Source


Supported Versions and Platforms ............ 47-1
Functionality ............ 47-1
    Stored Procedures ............ 47-1
    Isolation Levels and Locking ............ 47-1
    BLOBs ............ 47-2
    Passthru Queries ............ 47-2
Configuration Properties ............ 47-3
Transaction Support ............ 47-4
Data Types ............ 47-5
Platform-Specific Information ............ 47-6
Defining the Ingres II Data Source ............ 47-6
    Defining the Ingres II Data Source Connection ............ 47-6
    Configuring the Ingres II Data Source Properties ............ 47-7

48  ODBC Data Source


Overview ............ 48-1
    Supported Versions and Platforms ............ 48-1
    Supported Features ............ 48-1
Functionality ............ 48-2
    Stored Procedures ............ 48-2
    Isolation Levels ............ 48-2
SQL Capabilities ............ 48-2
Configuration Properties ............ 48-3
Metadata ............ 48-4
Transaction Support ............ 48-4
Security ............ 48-4
Data Types ............ 48-4
Platform-Specific Information ............ 48-6
Defining the ODBC Data Source ............ 48-6
    Defining the ODBC Data Source Connection on a Windows Platform ............ 48-6
    Defining the ODBC Data Source Connection on a non-Windows Platform ............ 48-7
    Configuring the ODBC Data Source ............ 48-8
Testing the ODBC Data Source ............ 48-9
    Logging ............ 48-11


49  OLEDB-FS (Flat File System) Data Source


Overview ............ 49-1
    Supported Versions and Platforms ............ 49-1
Data Provider Requirements ............ 49-1
Functionality ............ 49-2
    Isolation Levels ............ 49-2
Transaction Support ............ 49-3
Data Types ............ 49-3
Configuration Properties ............ 49-3
Defining the Data Source ............ 49-4
    Defining the OLEDB-FS Data Source Connection ............ 49-4
    Configuring the OLEDB-FS Data Source Properties ............ 49-4

50  OLEDB-SQL (Relational) Data Source


Overview ............ 50-1
    Supported Versions and Platforms ............ 50-1
Data Provider Requirements ............ 50-1
Functionality ............ 50-2
    Isolation Levels ............ 50-3
    Stored Procedures ............ 50-3
Transaction Support ............ 50-3
Data Types ............ 50-3
Configuration Properties ............ 50-4
Defining the Data Source ............ 50-4
    Defining the OLEDB-SQL Data Source Connection ............ 50-4
    Configuring the OLEDB-SQL Data Source Properties ............ 50-5

51  Oracle Data Source


Overview ............ 51-1
    Supported Versions and Platforms ............ 51-1
    Supported Features ............ 51-1
Functionality ............ 51-2
    Stored Procedures ............ 51-2
    Isolation Levels and Locking ............ 51-2
        Consistency ............ 51-3
        Attunity Connect Treatment of Locking ............ 51-3
        Attunity Connect Treatment of Isolation Levels ............ 51-4
    BLOBs ............ 51-4
    Passthru Queries ............ 51-4
SQL Capabilities ............ 51-6
    Using Oracle Hints in the SQL ............ 51-7
        Attunity Hints ............ 51-7
        Oracle Hints ............ 51-8
Configuration Properties ............ 51-9
Metadata ............ 51-10
Transaction Support ............ 51-11


Security ............ 51-11
Data Types ............ 51-11
Platform-Specific Information ............ 51-13
    UNIX Platforms ............ 51-13
        Verifying Environment Variables on UNIX Platforms ............ 51-13
        Linking to Oracle Libraries on UNIX Platforms ............ 51-13
    OpenVMS Platform ............ 51-13
        Verifying Environment Variables on OpenVMS Platforms ............ 51-13
        Linking to Oracle Libraries on OpenVMS Platforms ............ 51-14
Defining the Oracle Data Source ............ 51-14
    Defining the Oracle Data Source Connection ............ 51-14
    Configuring the Oracle Data Source Properties ............ 51-15
    Configuring Table and Column Names to be Case Sensitive ............ 51-16
    Checking Oracle Environment Variables ............ 51-17
Testing the Oracle Data Source ............ 51-17
    Sample Log File ............ 51-18

52  Oracle RDB Data Source (OpenVMS Only)


Overview ............ 52-1
    Supported Versions and Platforms ............ 52-1
    Supported Features ............ 52-1
Functionality ............ 52-2
    Stored Procedures ............ 52-2
    Isolation Levels and Locking ............ 52-2
    BLOBs ............ 52-4
    Passthru Queries ............ 52-4
SQL Capabilities ............ 52-4
Configuration Properties ............ 52-6
Metadata ............ 52-8
    Statistics ............ 52-8
Transaction Support ............ 52-8
    Installing XA-related Shareable Libraries ............ 52-9
Security ............ 52-9
Oracle RDB Data Types ............ 52-9
Defining the Oracle RDB Data Source ............ 52-10
    Defining the Oracle RDB Data Source Connection ............ 52-11
    Configuring the Oracle RDB Data Source Properties ............ 52-11
Testing the Oracle RDB Data Source ............ 52-12
    Sample Log File ............ 52-14

53  RMS Data Source (OpenVMS Only)


Overview ............ 53-1
    Supported Versions and Platforms ............ 53-1
    Supported Features ............ 53-1
Functionality ............ 53-1
Configuration Properties ............ 53-2
Transaction Support ............ 53-2


Data Types ............ 53-3
Defining the RMS Data Source ............ 53-3
    Defining the RMS Data Source Connection ............ 53-3
    Configuring the RMS Data Source ............ 53-4
Setting Up the RMS Data Source Metadata with the Import Manager ............ 53-5
    Selecting the Input Files ............ 53-6
    Applying Filters ............ 53-8
    Selecting Tables ............ 53-9
    Import Manipulation ............ 53-10
        Import Manipulation Screen ............ 53-10
        Field Manipulation Screen ............ 53-12
    Metadata Model Selection ............ 53-17
    Import the Metadata ............ 53-19
Importing Attunity Metadata Using the RMS_CDD Import Utility ............ 53-20

54  SQL Server Data Source (Windows Only)


Overview ............ 54-1
Supported Versions and Platforms ............ 54-1
Configuration Properties ............ 54-2
Transaction Support ............ 54-3
Data Types ............ 54-3
Defining an SQL Server Data Source ............ 54-4
    Defining the SQL Server Data Source Connection ............ 54-5
    Configuring the SQL Server Data Source Properties ............ 54-5

55  SQL/MP Data Source (HP NonStop Only)


Overview ............ 55-1
    Limitations ............ 55-1
Functionality ............ 55-2
    Mapping SQL/MP Table Names ............ 55-2
    SQL/MP Primary Keys ............ 55-2
    Partitioned Tables ............ 55-2
    Isolation Levels and Locking ............ 55-3
Configuration Properties ............ 55-4
Transaction Support ............ 55-4
Data Types ............ 55-5
Defining the SQL/MP Data Source ............ 55-6
    Defining the SQL/MP Data Source Connection ............ 55-6
    Configuring the SQL/MP Data Source Properties ............ 55-7

56  Sybase Data Source


Overview ............ 56-1
Supported Versions and Platforms ............ 56-1
Functionality ............ 56-1
    Stored Procedures ............ 56-1
    Isolation Levels and Locking ............ 56-2


Configuration Properties ............ 56-2
Transaction Support ............ 56-3
Data Types ............ 56-3
Platform-Specific Information ............ 56-5
    Verifying Environment Variables on UNIX Platforms ............ 56-5
Defining the Sybase Data Source ............ 56-5
    Defining the Sybase Data Source Connection ............ 56-5
    Configuring the Sybase Data Source Properties ............ 56-6
    Checking Sybase Environment Variables ............ 56-7

57  Text Delimited File Data Source


Overview ............ 57-1
    Features ............ 57-1
    Limitations ............ 57-1
Configuration Properties ............ 57-1
Defining the Text Delimited File Data Source ............ 57-2
    Defining the Text Delimited File Data Source Connection ............ 57-2
    Configuring the Text Delimited File Data Source ............ 57-2
Setting Up the Text Delimited Data Source Metadata ............ 57-3
    Importing Attunity Metadata from COBOL ............ 57-4
    Maintaining Attunity Metadata ............ 57-9

58  Virtual Data Source


Overview ............ 58-1
Configuration Properties ............ 58-1
    Platform-specific Configuration Properties ............ 58-2
        HP NonStop Platforms ............ 58-2
        z/OS Platforms ............ 58-3
        OpenVMS Platforms ............ 58-3
Defining the Virtual Data Source ............ 58-3
    Defining the Virtual Data Source Connection ............ 58-3
    Configuring the Virtual Data Source ............ 58-4

59  VSAM Data Source (z/OS)


Overview ............ 59-1
    Supported Versions and Platforms ............ 59-2
    Environmental Prerequisites ............ 59-2
Configuration Properties ............ 59-2
    VSAM Data Source Parameters ............ 59-2
    VSAM (CICS) Data Source Parameters ............ 59-3
Metadata ............ 59-4
    VSAM Metadata Requirements ............ 59-4
    VSAM (CICS) Metadata Requirements ............ 59-4
Transaction Support ............ 59-5
    Using Attunity Connect with One-phase Commit ............ 59-6
Data Types ............ 59-6


Defining a VSAM Data Source ............ 59-6
    Defining the VSAM Data Source Connection ............ 59-6
    Defining the VSAM (CICS) Data Source Connection ............ 59-7
    Configuring the VSAM Data Source Properties ............ 59-9
    Configuring the VSAM (CICS) Data Source Properties ............ 59-10
Setting Up the VSAM Data Source Metadata ............ 59-11
    Selecting the COBOL files ............ 59-12
    Applying Filters ............ 59-14
    Selecting Tables ............ 59-16
    Import Manipulation ............ 59-16
        Import Manipulation Screen ............ 59-17
        Field Manipulation Screen ............ 59-18
    Create VSAM Indexes ............ 59-24
    Assigning File Names ............ 59-25
    Assigning Index File Names ............ 59-26
    Metadata Model Selection ............ 59-27
    Importing the Metadata ............ 59-29

Part IX  Procedure Data Source Reference

60  Natural/CICS Procedure Data Source (z/OS)


Overview ............ 60-1
    Environmental Prerequisites ............ 60-2
Supported Platforms and Versions ............ 60-2
Configuration Properties ............ 60-2
Metadata ............ 60-2
    Specifying the Program to Execute ............ 60-3
        Syntax ............ 60-3
    Specifying Input and Output Parameters ............ 60-3
        Syntax ............ 60-3
Security ............ 60-5
Defining the Natural/CICS Procedure Data Source ............ 60-5
    Defining the Natural/CICS Procedure Data Source Connection ............ 60-5
    Configuring the Natural/CICS Data Source ............ 60-6
Writing a Natural Remote Procedure Call ............ 60-7
    The Subprogram General Structure ............ 60-11
Maintaining the CICS Environment for the Natural Agent ............ 60-12

61  Procedure Data Source (Application Connector)


Overview ............ 61-1
    Introduction ............ 61-1
    Supported Versions and Platforms ............ 61-2
    Supported Features ............ 61-2
    Limitations ............ 61-2
Configuration Properties ............ 61-2
    Overview ............ 61-3


    Parameter Descriptions ............ 61-3
Transaction Support ............ 61-7
Security ............ 61-7
Data Types ............ 61-7
Platform-specific Information ............ 61-7
    Windows Platforms and AIS Procedures (ADO Considerations) ............ 61-7
    HP NonStop Platforms and Attunity Connect Procedures ............ 61-8
    Load Modules and DLLs on MVS ............ 61-9
    Descriptors on OpenVMS ............ 61-10
    OS/400 Issues ............ 61-10
Defining the Procedure Data Source ............ 61-10
    Defining the Procedure Data Source Connection ............ 61-10
    Configuring the Procedure Data Source ............ 61-10
Setting Up Procedure Data Source Metadata ............ 61-11
    Defining Return Values ............ 61-12
    Defining Input and Output Arguments ............ 61-13
Testing the Procedure Data Source ............ 61-16
Executing a Procedure ............ 61-16

62  CICS Procedure Data Source


Overview
    Supported Versions and Platforms
    Environmental Prerequisites
    Limitations
Design Considerations
Configuration Properties
Metadata
Transaction Support
    Using Attunity Connect with One-phase Commit
    Using Attunity Connect with Two-phase Commit
Security
Data Types
Defining the CICS Procedure Data Source
    Defining the CICS Procedure Data Source Connection
    Configuring the CICS Procedure Data Source
Setting Up the CICS Procedure Data Source Metadata
    Importing Metadata from COBOL
    Editing the XML in the Source Code
        Editing the <procedure> Statement
        Editing the <field> Statement
        Editing the <parameters> Statement
    Sample ADD Metadata

Part X  Adapters Reference


63  CICS Application Adapter (z/OS Only)


Overview
    Supported Versions and Platforms
    Supported Features
    Environmental Prerequisites
    Limitations
Transaction Support
Configuration Properties
Defining the CICS Application Adapter
    Defining the CICS Application Adapter Connection
    Configuring the CICS Application Adapter
Setting Up CICS Application Metadata
    Importing Attunity Metadata from COBOL
    Refining CICS Application Adapter Metadata
Testing CICS Application Adapter Interactions

64  COM Adapter (Windows Only)


Overview
    Supported Versions and Platforms
Data Types
Registering the COM Application
Defining the COM Application Adapter
Setting Up COM Application Interactions
    COM Adapter Attributes
        Record Level Attributes
        Field Level Attributes
Defining COM Data Types

65  IMS/TM Adapter (z/OS Only)


Overview
    Supported Versions and Platforms
Transaction Support
Configuration Parameters
Defining the IMS/TM Application Adapter
    Defining the IMS/TM Application Adapter Connection
    Configuring the IMS/TM Application Adapter
Setting Up the IMS/TM Application Metadata
    Importing Attunity Metadata from COBOL

66  Legacy Plug Application Adapter


Overview
Configuration Parameters
Defining the Legacy Plug Application Adapter
    Defining the Legacy Plug Application Adapter Connection
    Configuring the Legacy Plug Application Adapter


Setting Up Legacy Application Metadata
    Importing Attunity Metadata from PCML Files
    Defining Interactions and Records
        Defining Interaction Properties
        Defining Schema Records
    Configuring a Trigger for the Legacy Plug Adapter

67  Pathway Application Adapter (HP NonStop Only)


Overview
    Supported Versions and Platforms
Transaction Support
Pathway Adapter Configuration Parameters
Defining the Pathway Application Adapter
    Defining the Pathway Application Adapter Connection
    Configuring the Pathway Application Adapter
Setting Up Pathway Application Metadata
    Importing Attunity Metadata from COBOL

68  Tuxedo Application Adapter (UNIX and Windows Only)


Overview of the Tuxedo Application Adapter
    Supported Versions and Platforms
    Feature Highlights
Configuration Properties
Metadata
Transaction Support
Data Types
Security
Checking the Tuxedo Environment Variables
Defining the Tuxedo Adapters
    Defining the Tuxedo Application Adapter
    Defining the Tuxedo Queue Adapter
Setting Up Tuxedo Application Adapter Interactions
    Importing Metadata Using a BEA Jolt Bulk Loader File
    Importing Metadata Using FML/VIEW Files
Setting Up Tuxedo Queue Adapter Interactions
    Defining the Tuxedo Queue Unstructured Records
Testing Tuxedo Application Interactions

Part XI  Non-Application Adapters Reference

69  Database Adapter

Overview
    Supported Versions and Platforms
    Supported Features
Configuration Properties
Metadata


Security
SQL Interaction Types
    Database Query Interaction
    Database Modification Interaction
    Stored Procedure Call Interaction
Transaction Support
Interaction Parameters
Defining the Database Adapter
    Defining the Database Adapter Connection
    Configuring the Database Adapter
Configuring Database Adapter Interactions
    Automatically Creating Interactions
    Manually Creating Interactions
        Specifying Parameters
Testing Database Adapter Interactions
Creating SQL Queries

70  Query Adapter

Overview
    Supported Versions and Platforms
    Feature Highlights
Metadata
Security
Transaction Support
Predefined Interactions
    callProcedure Interaction
        Input Record
        Output Record
    ddl Interaction
        Input Record
        Output Record
    getSchema Interaction
        Input Record
    query Interaction
        Input Record
        Output Record
        Output Data Formats
    setErrorAction Interaction
        Input Record
    update Interaction
        Input Record
        Output Record
    Interactions for Internal Use
Using the Query Adapter


71  Managing the Execution of Queries over Large Tables


Overview of Query Governing
Configuring Query Governing

Part XII  CDC Agents Reference

72  Adabas CDC on z/OS Platforms


Overview
Functionality
    The Tracking File
Configuration Properties
    Data Source Properties
    CDC Logger Properties
Change Metadata
Transaction Support
Security
Platform Specific Information
Data Types
Configuring the Adabas CDC
    Setting Up the ATTSRVR Started Task
    Setting Up the Tracking File
    Adding the Tracking File Usage Step to the UE2 Procedure
Setting Up the Adabas Agent in Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service

73  Adabas CDC on UNIX Platforms


Overview
Functionality
Configuration Properties
Change Metadata
Transaction Support
Data Types
Security
Platform Specific Information
Configuring the Adabas CDC
    Identifying the Adabas CDC in the Adabas System
Defining Adabas CDC in Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service

74  Adabas CDC for OpenVMS


Overview
Functionality
Configuration Properties
Change Metadata


Transaction Support
Data Types
Security
Platform Specific Information
Configuring the Adabas CDC
    Identifying the Adabas CDC in the Adabas System
Defining Adabas CDC in Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service

75  DB2 CDC (z/OS)


Overview
Functionality
    Limitations
    Supported Versions and Platforms
Configuration Properties
Change Metadata
Transaction Support
Security
Platform Specific Information
Data Types
Configuring the DB2 Tables for CDC
Configuring the ATTSRVR Started Task
Setting Up the DB2 Agent in Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service

76  DB2 CDC (OS/400 Platforms)


DB2 CDC Agent Overview
Functionality
    Limitations
Configuration Properties
Change Metadata
Transaction Support
Data Types
Security
Platform Specific Information
Setting Up the DB2 Journal on OS/400
Setting Up the DB2 for OS/400 Agent in Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service

77  Enscribe CDC (HP NonStop Platforms)


Overview
Functionality
Configuration Properties


Change Metadata
Transaction Support
Data Types
Security
Platform-specific Information
Setting Up Enscribe to Use the Attunity Enscribe CDC Agent
Adding the Enscribe Agent to Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service

78  IMS/DB CDC on z/OS Platforms


Overview
Functionality
    Supported Platforms and Versions
Configuration Properties
    CDC Logger Properties
    CDC$PARM Properties
    Agent Properties
Change Metadata
Transaction Support
Security
Data Types
Configuring the DFSFLGX0 Exit
    MVS Logstream Creation
        Managing the MVS Logstream
    Creating and Configuring the CDC$PARM Data Set
    Update the IMS Environment
    Adjust the DBD for the Relevant Databases
Setting Up the IMS/DB CDC Agent in Attunity Studio
    Configuring the CDC Service
    Setting the envImsBatch Property
    Working with Metadata
Troubleshooting

79  Microsoft SQL Server CDC


Overview
    Microsoft SQL Server CDC Solution
        MS SQL Server
        TLOG Miner
        Transient Storage
Functionality
    Limitations
Supported Versions and Platforms
Configuration Properties
Change Metadata
Transaction Support
Data Types


    User Defined Data Types (UDT)
Security
Platform Specific Information
Setting Up the SQL Server CDC in Attunity Studio
Enabling MS SQL Replication
    MS SQL Server 2000 Replication
    MS SQL Server 2005 Replication
Configuring Security Properties
Setting Up Log On Information
Setting Up the Database
    MS SQL Server 2000 Settings
    MS SQL Server 2005 Settings
Setting Up the TLOG Miner (LGR)
    Call the LGR Service Interface
    Configuring the Template Input File
    Registering the TLOG Miner (LGR) Service
    Setting the Recovery Policy
Testing Attunity's Microsoft SQL Server CDC Solution
Handling Metadata Changes
Environment Verification
    Verify the MS SQL Server Version
    Ensure that the Service is Registered
    Verify that the LGR Service is Running
    Viewing the Service Greetings
    Check the Output Files

80  Oracle CDC (on UNIX and Windows Platforms)


Overview
Functionality
Supported Versions and Platforms
Configuration Properties
Change Metadata
Transaction Support
Data Types
Security
Setting Up the Oracle REDO Log
Testing the Database Logging Settings
Changing the Operation Mode for Metadata Changes
Setting Up the Oracle CDC in Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service
Troubleshooting

81  Query-Based CDC Agent


Overview
Setting Up a Query-Based CDC Agent


Changing a Query-Based CDC Agent Definition

82  SQL/MP CDC on HP NonStop


Overview
Functionality
Supported Versions and Platforms
Configuration Properties
Change Metadata
Transaction Support
Data Types
Security
Defining the SQL/MP Agent
Setting Up the SQL/MP Agent in Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service

83  VSAM Under CICS CDC (on z/OS)


Overview
Functionality
Configuration Properties
    Data Source Properties
    CDC Service Properties
Change Metadata
Transaction Support
Security
Data Types
Managing the CICS User Journal
    Setting Up the CICS User Journal for VSAM
    Print Out the CICS User Journal Content
Setting Up the VSAM CICS Agent in Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service

84  VSAM Batch CDC (z/OS Platforms)


Overview
Functionality
Configuration Properties
    CDC Service Properties
    CDC$PARM Properties
Change Metadata
Transaction Support
    Single Program Transaction Manager
    Logical Transaction Manager
Data Types
Security
Platform Specific Information


Configuring the Logger
    Creating the Logstream
        Managing the MVS Logstream
    Creating the CDC$PARM Data Set
    Updating Jobs and Scripts
        Updating Jobs for Activating CDC JRNAD
        Updating Jobs for Using the Logical Transaction Manager
        Update the REXX Scripts
Setting Up the VSAM Batch Agent in Attunity Studio
    Configuring the Data Source
    Configuring the CDC Service

Part XIII  Interface Reference

85  C and COBOL 3GL Client Interfaces


Overview of the C and COBOL 3GL APIs to Applications
    Using the 3GL API to Invoke Application Adapters
        Using the API with C Programs
        Using the API with COBOL Programs
    Supported Interfaces
APIs and Functions
    Connection APIs
        The Connect Function
        Identifying the Adapter Schema
        The Clean Connection Function
        The Disconnect Function
        The Retry Connection Function
    Transaction APIs
        Set Autocommit Function
        Transaction Commit Function
        Transaction Rollback Function
    Execution APIs
        Execute Function
        Execute Batch Function
        Setting Environment Parameters
    Get Adapter Schema Function
    Get Event Function
    Ping Function
    Get Error Function
Using APIs to Invoke Application Adapters - Examples
    C Program Example
    COBOL Program Example
CICS as a Client Invoking an Application Adapter (z/OS Only)
    Configuring the IBM z/OS Machine
    Using a CICS Transaction to Invoke an Application Adapter
        COBOL Data Buffer


        Calling the Transaction
        Transaction Output
CICS Connection Pooling under CICS
    Using Connection Pooling under CICS
    CICS Connection Pool Flow
        Control Operations Flow
        3GL Operations Flow
    ATTCALL Program Interface
        COMMAREA
        Control Protocol
        3GL Protocol
    ATTCNTRL Program
        ATTCNTRL Parameters
    Setting Up 3GL under CICS
        Create a Log File
        CICS Definitions
IMS/TM as a Client Invoking an Application Adapter (z/OS Only)
    Setting Up the IBM z/OS Machine
    Setting Up a Call to the Transaction
    Calling the Transaction
        C Call
        COBOL Call
    The Transaction Output

86 JCA Client Interface
    Overview
    Outbound Connections
        Creating a Connection
        Managed Connection Factory settings
    JCA Client Interface
    Attunity JCA Enhancements
        Attunity Metadata
        Attunity Record
        Samples
    JCA Logging Mechanism
    JCA Sample Program

87 JDBC Client Interface
    Overview
    Connection
        Creating a Connection
        Connection String
        Accessing Data Sources Directly
    Data Types
        JDBC and Java Type Mapping
        Conversions Between Java Object Types and Target SQL Types
        Using the getXXX Methods to Retrieve Data Types
    JDBC API Conformance
        Supported Interfaces
        Supported Classes
        DataSource Properties
        ConnectionPool Data Source and XADatasource Interface Properties
        Connection Pooling Properties
    JDBC Client Interface
    JDBC Sample Program

88 ODBC Client Interface
    Connection
        Creating an ODBC Connection
        Defining a DSN
            The Opening Page
            Local Authentication Page
            Local Binding Information Page
            Remote Server Authentication Page
            Remote Server Binding Page
            Advanced Settings Page
            Final Page
        Defining a File DSN
        Connection String Parameters
    ODBC Client Interface
        Supported Interfaces
        ODBC Schema Rowsets
        ODBC Data Types
        Supported Options
    ODBC API Conformance
        Minimum Requirements of an ODBC Provider
        Asynchronous Execution
        General Information
        Conformance Information
        SQL Syntax Information
    Platform Specific Information
        Support for Non-C Applications on Platforms Other than Windows
        ODBC Client Interface Under CICS (z/OS Only)
        Sample Programs
    Environment Variables

89 OLE DB (ADO) Client Interface
    Overview
    Methods and Properties
    ADO Connect String
        Connect String Parameters
    Optimizing ADO
    ADO Schema Recordsets
    OLE DB Data Types
        Mapping SQL Data Types to OLE DB Data Types
    ADO Conformance Level
        OLE DB Interfaces and Methods
        OLE DB Properties
            Initialization Properties
            Data Source Properties
            Data Source Information Properties
            Session Properties
            Rowset Properties
            Specific Properties

90 XML Client Interface
    Overview of the XML Client Interface
    ACX Verbs
        ACX Request and Response Documents
            Request Document
            Response Document
        Connection Verbs
            The Connect Verb
            The setConnection Verb
            The disconnect Verb
            The reauthenticate Verb
            The cleanConnection Verb
        Transaction Verbs
            The setAutoCommit Verb
            The transactionStart Verb
            The transactionPrepare Verb
            The transactionCommit Verb
            The transactionRollback Verb
            The transactionRecover Verb
            The transactionForget Verb
            The transactionEnd Verb
        The Execute Verb
        Metadata Verbs
            The getMetadataItem Verb
            The getMetadataList Verb
        The Ping Verb
        The Exception Verb
            The Exception Element
    Setting XML Transports for AIS
        Passing XML Documents via TCP/IP
        Passing XML Documents via HTTP (Using the NSAPI Extension)

Part XIV Appendixes

A NAVDEMO - Attunity Demo Data
    NAVDEMO Overview
    NAVDEMO Database
    NAVDEMO Tables
        TPART Table
        SUPPLIER Table
        PARTSUPP Table
        CUSTOMER Table
        TORDER Table
        LINEITEM Table
        NATION Table
        REGION Table

B Attunity SQL Syntax
    Syntax Diagrams Describing SQL
    SELECT Statement
        Keywords and Options
        FROM Clause
            Keywords and Options
        WHERE Clause
            Keywords and Options
        GROUP BY and HAVING Clause
            Keywords and Options
        ORDER BY Clause
            Keywords and Options
            Additional Information
        Set Operators on SELECT Statements
            Keywords and Options
    SELECT XML Statement
        Keywords and Options
    Batch Update Statements
        INSERT Statement
            Keywords and Options
        UPDATE Statement
            Keywords and Options
            Additional Information
            Updateability Rules
        DELETE Statement
            Keywords and Options
            Additional Information
            Updateability Rules
    TABLE, INDEX CREATE and DROP Statements
        CREATE TABLE Statement
            Keywords and Options
        DROP TABLE Statement
            Keywords and Options
        CREATE INDEX Statement
            Keywords and Options
    VIEW Statements
        CREATE VIEW Statement
            Keywords and Options
        DROP VIEW Statement
            Keywords and Options
    Stored Procedure Statements
        CREATE PROCEDURE Statement
            Keywords and Options
        DROP PROCEDURE Statement
            Keywords and Options
        CALL Statement
            Keywords and Options
    Synonym Statements
        CREATE SYNONYM Statement
            Keywords and Options
        DROP SYNONYM Statement
            Keywords and Options
    GRANT Statement
        Keywords and Options
    Transaction Statements
        BEGIN Statement
        COMMIT Statement
        ROLLBACK Statement
    Constant Formats
    Expressions
        Operator Precedence
        Single Quotation Marks in String Expressions
    Functions
        Aggregate Functions
            Additional Information
        Conditional Functions
        Data Type Conversion Functions
        Date and Time Functions
            Date Format
            Time Format
            Timestamp Format
            Date Comparison Semantics
        Numeric Functions and Arithmetic Operators
        String Functions
    Parameters
    Search Conditions and Comparison Operators
        Keywords and Options
    Passthru Query Statements (bypassing Query Processing)
        Keywords and Options
    Reserved Keywords


C National Language Support (NLS)
    Codepage Terminology
    Basic NLS Settings
    Globally Setting Language at the System Level
    Working with Multiple Languages (UTF Codepage)
    NLS and XML and Java Encoding
    NLS Support at the Field Level
    Special Daemon Language Considerations
    Support for 7-Bit Codepages
    SQL Functions For Use With Graphic Strings

D COBOL Data Types to Attunity Data Types

E Editing XML Files in Attunity Studio
    Preparing to Edit XML Files in Attunity Studio
    Making Changes to the XML File
        Remove Objects
        Add DTD Information
        Edit Namespaces
        Add Elements and Attributes
        Replace an Element

Index


Send Us Your Comments


AIS User Guide and Reference, Version 5.1
AIS5100

Attunity welcomes your comments and suggestions on the quality and usefulness of this publication. Your input is an important part of the information used for revision.

Did you find any errors?
Is the information clearly presented?
Do you need more information? If so, where?
Are the examples correct? Do you need more examples?
What features did you like most about this manual?

If you find any errors or have any other suggestions for improvement, please indicate the title and part number of the documentation and the chapter, section, and page number (if available). You can send comments to us in the following ways:

Electronic mail: support@attunity.com
FAX: (781) 213-5240, Attn: Documentation and Training Manager
Postal service: Attunity Incorporated, Documentation and Training Manager, 70 Blanchard Road, Burlington, MA 01803, USA

If you would like a reply, please give your name, address, telephone number, and electronic mail address (optional). If you have problems with the software, please contact your local Attunity Support Services.


Preface
This guide is the primary source of user and reference information on AIS (Attunity Integration Suite), which enables integration of data across platforms and formats. This document applies to the IBM z/OS Series, OS/400, OpenVMS, UNIX, and Windows platforms. This preface covers the following topics:

Audience
Organization
Related Documentation
Conventions

Audience
This manual is intended for Attunity integration administrators who perform the following tasks:

Installing and configuring the Attunity Integration Suite
Diagnosing errors
Using AIS to access data

Note: You should understand the fundamentals of database use and of the Microsoft Windows operating system before using this guide to install or administer the Attunity Integration Suite (AIS).


Organization
This document contains:

Part I, "Getting Started with AIS"
Part II, "Attunity Connect"
Part III, "Attunity Stream"
Part IV, "Attunity Federate"
Part V, "Attunity Studio"
Part VI, "Operation and Maintenance"
Part VII, "Utilities"
Part VIII, "Data Source Reference"
Part IX, "Procedure Data Source Reference"
Part X, "Adapters Reference"
Part XI, "Non-Application Adapters Reference"
Part XII, "CDC Agents Reference"
Part XIII, "Interface Reference"
Part XIV, "Appendixes"

Related Documentation
Documentation is available for download at the Attunity Web site:
http://www.attunity.com/

You can download release notes, installation documentation, white papers, or other types of documentation. You must register online before downloading any documents.

Conventions
This section describes the conventions used in the text and code examples of this documentation set. It describes:

Conventions in Text
Conventions in Code Examples
Conventions for Windows Operating Systems


Conventions in Text
We use various conventions in text to help you more quickly identify special terms. The following list describes those conventions and provides examples of their use.

Bold: Bold typeface indicates terms that are defined in the text or terms that appear in a glossary, or both. Example: "When you specify this clause, you create an index-organized table."

Italics: Italic typeface indicates book titles or emphasis. Examples: "Oracle Database Concepts"; "Ensure that the recovery catalog and target database do not reside on the same disk."

UPPERCASE monospace (fixed-width) font: Uppercase monospace typeface indicates elements supplied by the system. Such elements include parameters, privileges, datatypes, RMAN keywords, SQL keywords, SQL*Plus or utility commands, packages and methods, as well as system-supplied column names, database objects and structures, usernames, and roles. Examples: "You can specify this clause only for a NUMBER column." "You can back up the database by using the BACKUP command." "Query the TABLE_NAME column in the USER_TABLES data dictionary view." "Use the DBMS_STATS.GENERATE_STATS procedure."

lowercase monospace (fixed-width) font: Lowercase monospace typeface indicates executables, filenames, directory names, and sample user-supplied elements. Such elements include computer and database names, net service names, and connect identifiers, as well as user-supplied database objects and structures, column names, packages and classes, usernames and roles, program units, and parameter values. Note: Some programmatic elements use a mixture of UPPERCASE and lowercase; enter these elements as shown. Examples: "Enter sqlplus to open SQL*Plus." "The password is specified in the orapwd file." "The department_id, department_name, and location_id columns are in the hr.departments table." "Set the QUERY_REWRITE_ENABLED initialization parameter to true." "Connect as oe user." "The JRepUtil class implements these methods."

lowercase italic monospace (fixed-width) font: Lowercase italic monospace font represents placeholders or variables. Examples: "You can specify the parallel_clause." "Run Uold_release.SQL where old_release refers to the release you installed prior to upgrading."

Conventions in Code Examples


Code examples illustrate SQL, PL/SQL, SQL*Plus, or other command-line statements. They are displayed in a monospace (fixed-width) font and separated from normal text as shown in this example:

SELECT username FROM dba_users WHERE username = 'MIGRATE';

The following list describes typographic conventions used in code examples and provides examples of their use.

[ ]: Brackets enclose one or more optional items. Do not enter the brackets. Example: DECIMAL (digits [ , precision ])

{ }: Braces enclose two or more items, one of which is required. Do not enter the braces. Example: {ENABLE | DISABLE}

|: A vertical bar represents a choice of two or more options within brackets or braces. Enter one of the options. Do not enter the vertical bar. Examples: {ENABLE | DISABLE}; [COMPRESS | NOCOMPRESS]

...: Horizontal ellipsis points indicate either that we have omitted parts of the code that are not directly related to the example, or that you can repeat a portion of the code. Examples: CREATE TABLE ... AS subquery; SELECT col1, col2, ... , coln FROM employees;

Vertical ellipsis points: Vertical ellipsis points indicate that we have omitted several lines of code not directly related to the example. Example:

SQL> SELECT NAME FROM V$DATAFILE;
NAME
------------------------------------
/fsl/dbs/tbs_01.dbf
/fs1/dbs/tbs_02.dbf
.
.
.
/fsl/dbs/tbs_09.dbf
9 rows selected.

Other notation: You must enter symbols other than brackets, braces, vertical bars, and ellipsis points as shown. Examples: acctbal NUMBER(11,2); acct CONSTANT NUMBER(4) := 3;

Italics: Italicized text indicates placeholders or variables for which you must supply particular values. Examples: CONNECT SYSTEM/system_password; DB_NAME = database_name

UPPERCASE: Uppercase typeface indicates elements supplied by the system. We show these terms in uppercase in order to distinguish them from terms you define. Unless terms appear in brackets, enter them in the order and with the spelling shown. However, because these terms are not case sensitive, you can enter them in lowercase. Examples: SELECT last_name, employee_id FROM employees; SELECT * FROM USER_TABLES; DROP TABLE hr.employees;

lowercase: Lowercase typeface indicates programmatic elements that you supply. For example, lowercase indicates names of tables, columns, or files. Note: Some programmatic elements use a mixture of UPPERCASE and lowercase; enter these elements as shown. Examples: SELECT last_name, employee_id FROM employees; sqlplus hr/hr; CREATE USER mjones IDENTIFIED BY ty3MU9;

Conventions for Windows Operating Systems

The following list describes conventions for Windows operating systems and provides examples of their use.

File and directory names: File and directory names are not case sensitive. The following special characters are not allowed: left angle bracket (<), right angle bracket (>), colon (:), double quotation marks ("), slash (/), pipe (|), and dash (-). The special character backslash (\) is treated as an element separator, even when it appears in quotes. If the file name begins with \\, then Windows assumes it uses the Universal Naming Convention. Example: c:\winnt"\"system32 is the same as C:\WINNT\SYSTEM32

C:\>: Represents the Windows command prompt of the current hard disk drive. The escape character in a command prompt is the caret (^). Your prompt reflects the subdirectory in which you are working; it is referred to as the command prompt in this manual. Example: C:\attunity\NAV_UTIL>

Special characters: The backslash (\) special character is sometimes required as an escape character for the double quotation mark (") special character at the Windows command prompt. Parentheses and the single quotation mark (') do not require an escape character. Refer to your Windows operating system documentation for more information on escape and special characters. Examples: C:\>exp scott/tiger TABLES=emp QUERY=\"WHERE job='SALESMAN' and sal<1600\"; C:\>imp SYSTEM/password FROMUSER=scott TABLES=(emp, dept)


What's New
AIS, the Attunity Integration Suite, Version 5.1 introduces significant enhancements, support commitments and bug fixes, and continued improvements for ease of use.

Continuous CDC
Beginning with version 5.1, Attunity Stream enables ETL tools and programs to use standard SQL to query for data changes, continuously feeding change records for processing and effectively working in real time. This is an alternative to traditional ETL processing, which executes jobs periodically (for example, every 15 minutes) and therefore adds latency between passes. Continuous CDC addresses data integration scenarios that require very low latency (near real time), and it is invoked with a simple SQL statement, as sketched below. See Reading Change Tables Continuously.
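For illustration only, such a query might look like the following sketch. The change table name, columns, and starting position here are hypothetical; the exact syntax for continuous reading is described in Reading Change Tables Continuously.

-- Illustrative sketch: ORDERS_CT stands for a hypothetical change table
-- exposed by an Attunity Stream CDC agent. Change records carry the changed
-- columns plus CDC header columns, such as the operation type and a stream
-- position marker; the actual header column names may differ.
SELECT context, operation, ORDER_ID, STATUS
FROM ORDERS_CT
WHERE context > '0000AB12';  -- resume after the last processed position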

Bulk Data Performance Improvements


Beginning with version 5.1, AIS has improved bulk data access from Oracle databases, supporting ETL and reporting applications that require access to large volumes of data. Similar enhancements are also available when using third-party ODBC drivers (for example, Teradata). Additional improvements were made for OLE DB users, such as those using SQL Server Integration Services (SSIS).

64-Bit Support for AIS Server on Windows


Beginning with version 5.1, AIS provides native 64-bit support on Windows. AIS runs natively on Windows XP x64 operating systems and on Windows Server 2003 x64 and IA64 (Itanium) systems. The new kits support the following Attunity Connect features:

Generic Attunity Connect features, including the Procedure Driver, LegacyPlug, Database Adapter, and Query Adapter
SQL Server 2005 Driver
Oracle Driver
Oracle CDC Agent

Note: The installation of the new 64-bit Attunity Server kits for Windows includes the 32-bit Server components. This addresses the various options on the Microsoft platform, where some applications still require 32-bit functionality although running on 64-bit systems. See the installation guide for the operating system you are working with for more information.

64-Bit Thin ODBC Clients


Beginning with version 5.1, AIS has native 64-bit support for Thin ODBC Clients on the following platforms:

Windows x64
Windows IA64
HP UX (Itanium)
AIX
Solaris
Linux

Note: Windows 64-bit clients have an installation utility that also installs a new ODBC setup wizard. All other thin clients are distributed as Zip files.

New ODBC DSN Setup Wizard


Beginning with version 5.1, a new ODBC DSN setup wizard is available. This wizard lets you create new ODBC user, system, and file data sources. It supports 32-bit and 64-bit Windows platforms and lets you create both local and remote connections.

Enhancements for the ADO.NET Client


Beginning with version 5.1, AIS introduces significant enhancements to the ADO.NET client. These enhancements include:

Support for ADO.NET 2.0 APIs, including the extended API set, providing richer capabilities to developers and supporting applications that use these APIs.
The ADO.NET client is now merged with NETACX capabilities. This allows the ADO.NET client to interact with Application Adapters and exchange hierarchical XML documents with back-end systems.
The ADO.NET client now supports design-time integration with Visual Studio 2005. Users can work with the Visual Studio Server Explorer component and use the standard Data Connections option to add a connection to an AIS data source using an integrated Add Connection dialog box. In this way, users can define datasets in the integrated Visual Studio environment and define the Attunity Connect metadata definitions.

Update Index Statistics Improvements


Query performance, especially in distributed queries, can be optimized using accurate information about the statistics of table indexes. This information needs to be updated periodically as table content changes (for example, when rows are added).

Version 5.1 improves the accuracy and performance of the Attunity Connect Update Statistics utility. Using a new algorithm, the Update Statistics utility can now estimate statistics on very large tables more accurately and much faster.

Better Support for Oracle Stored Procedures


Beginning with version 5.1, Attunity Connect can return one or more result sets when calling Oracle stored procedures, as sketched below. See Oracle Stored Procedures.
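The following sketch is illustrative only: the data source name (ORA1) and procedure name are hypothetical, and the general syntax is documented under the CALL Statement in the Attunity SQL Syntax appendix.

-- Illustrative sketch: calls a hypothetical stored procedure through an
-- AIS-defined Oracle data source named ORA1; any result sets the procedure
-- opens are returned to the calling application.
CALL ORA1:GET_OPEN_ORDERS(2008);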

Support for Exact Arithmetic


The AIS query processor uses double-precision floating point numbers for SUM() aggregations, additions, and subtractions. Because of the imprecision of floating point numbers, arithmetic operations over large sets of numbers may result in visible errors in the least significant digits. Beginning with version 5.1, new parameters in the binding let users configure a fixed scale for precise arithmetic operations with double-precision floating point numbers, providing a high-precision solution.
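For illustration, consider a SUM() over a large monetary column. The table and column names below are modeled on the NAVDEMO sample tables but should be treated as hypothetical, and the quoted result is invented for the example.

-- Illustrative sketch: with plain double-precision arithmetic this total
-- might come back as, say, 1048576.0000000002; with a fixed scale
-- configured in the binding, the result is rounded to a stable number of
-- decimal places.
SELECT SUM(L_EXTENDEDPRICE) AS total_price
FROM LINEITEM;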

Improvements to Attunity Studio


Beginning with version 5.1, Attunity Studio has a new and easier interface for the key editors and wizards, such as those used for bindings and daemons. The editors are more intuitive and easier to use, with a new look and feel that makes working with Attunity Studio more intuitive.

Beginning with version 5.1, Attunity Studio has a new, robust XML editor. This editor makes advanced configuration of XML metadata and configuration files easier by replacing the text editor available in previous versions with an XML-sensitive editor. The editor can be launched directly from the Design perspective Configuration view.

Beginning with version 5.1, Attunity Studio has an easier-to-use import/export dialog box. The new Import wizard lets users review and edit the XML definitions before finalizing the import process, and the import process has improved validation of the configuration structure.


Part I
Getting Started with AIS
This part contains the following topics:

Introducing the Attunity Integration Suite
Setting up Attunity Connect, Stream, and Federate
Binding Configuration
Setting up Daemons
Managing Metadata
Working with Metadata in Attunity Studio
Handling Arrays
Using SQL
Working with Web Services

1 Introducing the Attunity Integration Suite
This section describes the Attunity Integration Suite (AIS) and its benefits. It contains the following topics:

AIS Overview
AIS Use

AIS Overview
The Attunity Integration Suite (AIS) is a comprehensive integration platform for on-demand access and integration of enterprise data sources and legacy applications. AIS runs on many platforms, such as Windows, UNIX, OS/400, and mainframes, and provides many integration possibilities. The suite's integration services are provided in the following products:

Attunity Connect: Universal, standard data access to enterprise data sources.
Attunity Federate: Virtual data federation (EII), integrating data on the fly from heterogeneous sources.
Attunity Stream: Change data capture, allowing efficient and real-time data movement and processing.

The following diagram provides an overview of the AIS products and how they can be used to solve many integration needs on each enterprise platform. The suite also includes Attunity Studio, a GUI-based tool that lets you configure the Attunity servers in your system.

Figure 1-1 AIS Overview


The Attunity Integration Suite (AIS) provides a modular solution that allows organizations to address different tactical requirements quickly, while relying on a comprehensive platform that allows reusability and addresses many needs. The following sections provide some examples of uses for AIS and its benefits.

AIS Use
The Attunity Integration Suite provides many integration possibilities. This section describes some common uses, the related applications or projects, and how AIS simplifies access and integration. The following are some of the common uses for AIS:

SQL Connectivity
Application Connectivity
Adapters
Change Data Capture

SQL Connectivity
SQL is a well-known skill set for application developers and a common interface that applications use to retrieve data. While relational databases provide SQL connectivity out of the box, legacy data sources and file systems do not. Attunity Connect helps in this area by making older non-relational data sources appear relational and providing access to them using standard SQL. Typical applications that require SQL connectivity include:

Reporting tools: for designing and providing reports to business users
J2EE or .NET applications
ETL tools: for bulk loading of source data

Some typical scenarios that use Attunity Connect for SQL Connectivity include connecting to Adabas, VSAM, IMS/DB, RMS, Enscribe, and ISAM data sources.
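As an illustrative sketch, once a legacy file system is defined as an AIS data source with Attunity metadata (ADD), an application can query it with ordinary SQL. The data source and table names below are hypothetical.

-- Illustrative sketch: VSAM1 stands for a hypothetical AIS data source
-- defined over VSAM files; the query reads it like a relational table.
SELECT CUST_ID, CUST_NAME, BALANCE
FROM VSAM1:CUSTOMERS
WHERE BALANCE > 1000;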

Application Connectivity
XML is now a standard interface for applications, through standard APIs or Web services. Many newer applications offer open interfaces; however, legacy applications do not, and interfacing with their embedded business logic is difficult. Attunity Connect defines virtual services on top of these legacy applications that provide seamless interoperability. Applications that require service/XML-based application connectivity include:

EAI tools: for invoking business logic as part of an automated process
J2EE or .NET applications: that need to reuse existing business logic
Legacy applications: that need to be extended and call off-platform services

Some typical scenarios that use Attunity Connect for application connectivity include CICS, IMS/TM, Tuxedo, COBOL, and RPG.
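As a purely schematic sketch, a client can send an XML request that invokes an interaction on an application adapter and receive an XML response. The adapter, interaction, and element names below are hypothetical; the authoritative request and response formats are described in the XML Client Interface chapter (see ACX Verbs).

<?xml version="1.0"?>
<!-- Schematic sketch only: invokes a hypothetical getOrder interaction on a
     hypothetical CICS-backed application adapter; the real document layout
     is defined by the ACX request and response formats. -->
<execute adapter="cicsOrders" interaction="getOrder">
  <order id="1234"/>
</execute>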


Adapters
Many enterprise application integration (EAI), enterprise service bus (ESB), and business process management (BPM) tools need to integrate with existing applications and data sources. Attunity Connect removes the barrier to integrating with legacy applications and data sources by providing standard adapter interfaces and plug-ins to leading adapter frameworks. Adapters include inbound and outbound capabilities that allow you to send messages to the adapter or receive messages from it. Typical applications that require adapters include:

Integration brokers: such as BizTalk Server, Oracle BPEL, BEA WLI, etc.
ESB and BPM platforms

Typical usage scenarios employing Attunity Connect include application connectivity to CICS, IMS/TM, Tuxedo, COBOL, and RPG, as well as to enterprise data sources.

Change Data Capture


Data integration projects, especially data warehousing, data synchronization, and data propagation projects, must take both latency and efficiency into consideration. Users need fresher data with lower latency (how old the data is), and at the same time must deal with the inefficiencies associated with moving and processing large amounts of data. Latency, data volumes, and shrinking batch windows are all barriers to data integration. Attunity Stream removes these barriers by providing an efficient way of processing only the changes to an enterprise data source. Typical applications that require change data capture include:

ETL: for data warehousing and complex data movement
Data replication: for rehosting data (for example, to enable reporting)
Data synchronization: for maintaining integrity between systems

Typical tools that are used include IBM WebSphere DataStage, Microsoft SQL Server Integration Services (SSIS), Oracle Warehouse Builder, Business Objects Data Integrator, and Sunopsis.

Attunity Integration Suite Supported Systems and Resources


This section describes AIS support for the following:

Operating Systems
Data Sources and Adapters
Interfaces

Operating Systems
OS and supported versions:

Windows x86 (32-bit): Windows 2000, XP, Vista, and Windows Server 2003
Windows x64 (64-bit): Windows XP and Windows Server 2003
Windows IA64 (Itanium): Windows XP and Windows Server 2003
Linux RedHat: AS 3.0 through 5.0
AIX: Versions 5.2, 5.3
Solaris: Versions 2.8-2.10
HP UX: 11.11 (11i v1) and above only. Itanium systems are now supported by most Attunity data sources; for HP UX on Itanium, the supported versions are 11.23 and 11.31.
OpenVMS (Alpha): Versions 6.2-8.3
OpenVMS (Itanium): Versions 8.2-1 to 8.3
NonStop (Himalaya): G06.08 to G06.26
NonStop (Itanium): H-Series
OS/400: Versions 5.1 to 5.3
z/OS: Versions 1.1 through 1.8. Beginning with version 5.1.

Data Sources and Adapters


Data Sources:

Adabas/MVS: Versions 6.22 through 7.4, and version 8.1 (in single-user mode only)
Adabas/UNIX: Versions 3.3, 4.1, 5.1, and 6.1
Adabas/Windows: Versions 3.3, 4.1, 5.1, and 6.1
DB2/MVS: Versions 7.x through 9.x
DB2/UDB: Versions 8.x through 9.x
DBMS: Version 4.2, 7
IMS/DB: Versions 6.1, 7.1, 8.1, 9.1, 10
ODBC: ODBC 2.5
Informix: Versions 10 and 11
Ingres II: Versions 2-2.56
Oracle: Versions 9iR2, 10g, and 11gR1
Oracle RDB: Versions 7.1-7.4
SQL Server: SQL Server 2000 (32-bit only), SQL Server 2005 (32-bit and 64-bit)
Sybase: Versions 12.5 and 15

Adapters:

IMS/TM: Versions 6.1, 7.1, 8.1, 9.1
Tuxedo: Versions 8.0, 8.1, 9.1

Interfaces
Interface and supported versions:

ADO.NET: .NET 2.0
JDBC: Version 2.0
ODBC: Version 2.5
OLE/DB and ADO: Version 2.5
JCA: Versions 1.0, 1.5


2 Setting up Attunity Connect, Stream, and Federate
This section contains the following topics:

Overview
Setting up Machines
Administration Authorization
License Management
Importing and Exporting XML in Attunity Studio

Overview
Attunity Studio is used to configure and manage access to applications, data, and events on all machines running AIS. You make these configurations in the Design perspective, which has tabs for configuration and metadata. These tabs enable the following configuration tasks:

Setting up access to machines running AIS.
Configuring the daemon, which manages communication between AIS machines.
Setting up access to the applications, data, or events on these machines.
Configuring metadata:
  - To manage Attunity Metadata (ADD) for data sources that do not have metadata (such as DISAM), or that have metadata that cannot be used by AIS.
  - To view relational metadata.
  - To extend metadata in ADD for relational data sources that require additional information not supplied by the native metadata (such as statistics for various relational data sources).
  - To manage a snapshot of relational metadata converted to ADD (also called local copy metadata).
  - To manage Application Adapter definitions (adapter metadata).
  - To manage event definitions (event metadata).

You manage Attunity products in the Attunity Studio Runtime Manager perspective. Management includes the following:

Changing configuration settings when required.
Managing daemons and workspaces on any machine during runtime.

A perspective consists of views and an editor area. Views are used to navigate and manage resources. The editor area is used to carry out the main tasks.
Figure 2-1 Studio Main Screen

Note: You can use the properties that are displayed in the editor to find a machine where a data source or adapter is located, especially when several data sources or adapters on different machines have similar names. Identifying the location is useful when working on the Metadata tab of the Design perspective.

To switch between the Design and Runtime Manager perspectives, click Perspective (at the top right of the workbench) and select the required perspective from the list. For a list of buttons, see Workbench Icons.

Opening Attunity Studio


To start working with the Attunity Integration Suite (AIS), open Attunity Studio. As described in the Overview, you can make the configurations that you need with this tool.

To open Attunity Studio, do one of the following:

From the Start menu, select Programs, then Attunity Integration Suite, and then Attunity Studio.
Double-click the Attunity Studio shortcut on the desktop.


Special note: To work with solutions in Attunity Studio when using Turkish, add the following switch to the Target path in the Attunity Studio shortcut properties: -nl en

For example: "C:\Program Files\Attunity\Studio1\studio.exe" -nl en

When you open Attunity Studio for the first time, the Welcome Screen is displayed.

Setting up Machines
You use the Design perspective Configuration view to configure AIS machines, and the applications, data, and events on those machines.

To add a machine:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, right-click the Machines folder and select Add Machines. The Add Machine screen opens.

Figure 2-2 The Add Machine Screen

Enter the following information in each field:

Host name/IP address: Enter the name of the machine on the network, or click Browse to browse all the machines running a daemon listener on the specified port that are currently accessible over the network.
Port: Enter the number of the port where the daemon is running. The default port is 2551.
Display name: Enter an alias used to identify the machine if it is different from the host name (optional).


User name: Enter the user name of the machine's administrator. (Note: You indicate the machine's administrator when the machine is installed, or by using the ADD_ADMIN operation in the NAV_UTIL utility; see the sketch after this list.)
Password: Enter the password of the machine's administrator. This is the password for the user entered in the User name field. If no password is necessary to access this machine, leave this field empty.
Connect via NAT with fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, regardless of the port specified. For more information, see Firewall Support.
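As a sketch only, the administrator can be designated from the command line on the server machine roughly as follows. The user name is hypothetical, and the exact NAV_UTIL invocation and options are described in the Utilities part of this guide.

C:\attunity> nav_util add_admin jsmith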

To edit a machine:
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machines folder.
2. Right-click the machine you want to edit and select Open. The machine editor opens in the editor area with the information for the existing machine definition. You can make the following changes:

Connect via NAT with fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, regardless of the port specified.
Anonymous login: Select this if users can access the machine without password authentication. When this is selected, the User name and Password fields are not available.
User name: Enter the machine administrator's user name.
Password: Enter the machine administrator's password. This is the password for the user entered in the User name field.

After you add a machine, it is displayed in the Configuration view under the Machines folder. You can edit the machine's login information, or configure bindings, daemons, and users for each machine. For more information, see:

Setting up Data Sources and Events with Attunity Studio
Setting up Daemons
User Profiles and Managing a User Profile in Attunity Studio

Using an Offline Design Machine to Create Attunity Definitions


The offline design mode enables you to define the resources to AIS in Attunity Studio without having to connect to the actual machine where the definitions are implemented. For example, you can set up a machine even when the actual server machine is down, or set up a number of definitions for different machines on the same design machine. After the resources are defined on the design machine, you can drag and drop each definition to a server machine to implement them.


To define an offline machine
1. In the Configuration view of the Design perspective, right-click the Machines folder and select Add Offline Design Machine. The Add offline design machine screen opens.
2. Enter a name for the design machine.
3. Click Finish.

You can define all available resources on this machine. You can also set up metadata using a metadata import utility. Every resource is available on the design machine, no matter on which platform the resource needs to exist. For example, both HP NonStop data sources and z/OS data sources are available, even though on completion, you can only drag and drop the NonStop definitions (such as an Enscribe data source) to an HP NonStop machine.

Administration Authorization
You can provide access to Attunity Studio for:

Administrators: People granted access as administrators can add and edit resources in all areas of AIS.
Designers: People granted access as designers can add adapters, data sources, and CDC agents, and create and edit definitions for them.
Users: People granted access as users have read-only access to all resources in AIS.

To grant administrative authorization in Attunity Studio
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machines folder.
2. Right-click the machine you want to grant privileges to and select Administrative Authorization. The Administrative Authorization editor opens in the editor area. The name of the machine that you are granting access privileges to is shown on the tab at the top of the editor.


Figure 2-3 Administration Authorization Editor

3. Click the Everyone check box at the top of the Administrator, Designer, or User section to allow all people who use AIS access as the selected type of user.
4. Clear the check box at the top of one or more of the sections to grant specific people access to that area. For more information, see Granting Access to Specific Users and Groups.

Granting Access to Specific Users and Groups
To grant a user or group access rights, add them to the list for the type of rights you want to grant. For example, you can grant user1 administrator rights by adding user1 to the list in the Administrators section. To grant rights to all users and groups, select the Everyone check box for any of the three sections.

To add users or groups
1. From the Administrators, Designers, or Users sections in the Administration Authorization editor, click Add user and enter the name of a valid user in the Add user screen. Make sure that the name entered matches a valid user account. To add groups to the list, click Add group and enter the name of a valid group in the Add group screen. Make sure that the name entered matches a valid group account.
2. Click OK to close the screen. The name of the user or group is added to the field.


To rename a user or group
1. From the Administrators, Designers, or Users sections in the Administration Authorization editor, select the user or group you want to rename and click Rename.
2. Change the name entered in the Rename user or Rename group screen to the name you want to use.
3. Click OK to close the screen. The changes are entered in the field.

To remove a user or group
1. From the Administrators, Designers, or Users sections in the Administration Authorization editor, select the user or group that you want to remove.
2. Click Remove. The user or group is removed from the field.

License Management
Before you can work with any product in AIS, you must register the product. You can register the product with a valid license file. The following sections describe how you can use license management.

Registering a Product
Viewing License Information

Registering a Product
You need to register the software before you can access data sources on a machine. Your Attunity vendor should provide you with a text file called license.pak. The PAK file contains details such as the product expiration date (if any), the maximum number of concurrent sessions allowed, which drivers you are authorized to use, and other information. After you make sure that the PAK file is installed on your machine, you must register it before you can use the product.
Notes:

Make sure you are connected to the Internet before carrying out the following procedure.
When you register a product, the new license will overwrite the old license. If you want to register a new product and continue using any previously registered products, then request a single license for all of the products you are using.

To register a product
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machines folder.


2. Right-click the machine with the license you want to register, point to License management, and select Register product. The Register Product screen opens.

Figure 2-4 Register Product

3. Click Browse and browse to find the license (PAK) file. It is usually located in the directory where AIS is installed. The XML content of the license file is displayed in the screen.

4. Click Register. Attunity Studio contacts the Attunity registration server. A message is displayed stating whether the registration was successful. Contact your Attunity vendor if there is a problem with the registration or if you do not have a license file.

Viewing License Information


You can use Attunity Studio to view the contents of a PAK file. A PAK file contains details such as the product expiration date (if any), the maximum number of concurrent sessions allowed, and which drivers you are authorized to use.

To view license information
1. In Attunity Studio, in the Configuration view of the Design perspective, expand the Machines folder.

2. Right-click the machine with the license you want to view, point to License management, and select View license information. The View License Registration screen opens.

Figure 2-5 View License Information

3. Click Save as to create a new license file. You can also read the license information in this screen.

Importing and Exporting XML in Attunity Studio


All AIS definitions, such as Bindings, Daemons, adapters, and Data Sources, are saved as XML in DISAM files. You can back up the definitions by exporting the XML data to a separate file. You can also reload the definitions by importing the XML data back into AIS.
Note:

In some cases exporting XML data using Attunity Studio does not export all of the data. If this happens, you can use the IMPORT and EXPORT operations in the NAV_UTIL utility.

To import XML
1. In Attunity Studio, in the Configuration view of the Design perspective, right-click one of the following:


Machines (folder)
Any specific machine
Data Sources (folder)
Bindings (folder)
Any specific binding
Adapters (folder)
Any specific adapter
Daemons (folder)

2. Select Import XML definitions. The Import XML Definitions window opens.
3. Click Browse to open the Import XML Definitions dialog box and browse to the file with the XML data you want to import.
4. Click OK. The data opens at the bottom of the Import XML Definitions screen.
5. Click Finish to close the screen and save the data.

To export XML definitions
1. In Attunity Studio, in the Configuration view of the Design perspective, right-click one of the following:

Machines (folder)
Any specific machine
Data Sources (folder)
Any specific data source
Bindings (folder)
Any specific binding
Adapters (folder)
Any specific adapter
Daemons (folder)
Any specific daemon

2. Select Export XML definitions. The Export XML Definitions window opens.
3. Click Browse to browse to the location where you want to save the XML data.
4. Click OK to save the data.

When you import or export definitions, the XML data for the level you are using is transferred, including the data for all the sublevels. For example, if you export data for a daemon, the data for the binding and all of its data sources, adapters, and events is transferred. However, if you export the data for an adapter, only the data for that adapter is transferred.


3
Binding Configuration
This section contains the following topics:

Binding Configuration Overview
Setting up Bindings in Attunity Studio
Binding Syntax
Sample Binding
Environment Properties

Binding Configuration Overview


The information that Attunity needs to access Applications, data sources, and events is defined in a Binding configuration. A binding configuration always exists on a Server Machine, where data sources and applications to be accessed using Attunity reside. Additionally, a binding configuration can be defined on a Client Machine to point to data sources on a server machine. When you access data using ADO, a binding configuration must exist on the client machine. When you access data using JDBC, ODBC or .NET, or applications using JCA, XML, COM or .NET, a binding configuration is not required on the client machine. In these cases, you must install the Attunity thin client kit.

Server Binding
A Binding configuration on a server includes the following:

Definitions for data sources that are accessed using Attunity Integration Suite (AIS).
Shortcuts to data sources on other server machines that can be accessed from the current machine.
Application Adapter definitions for applications that can be accessed using AIS, including application-specific properties.
Event queue definitions for event queues that are managed using AIS, including event-specific properties.
Environment properties that apply to all the data sources, adapters, and machines listed in the binding configuration. For more information, see Environment Properties.


Client Binding
A Binding configuration on a client includes the following:

Shortcuts to data sources on other Server Machines that can be accessed from the current machine.
Environment properties that apply to all the data sources, adapters, and machines listed in the binding configuration. For details, see Environment Properties.

You use Attunity Studio for Adding Bindings and binding configurations or for Editing Bindings that are already in use. NAV is the default binding configuration. You can use this configuration to define all the data sources and adapters you want to access via AIS.

Setting up Bindings in Attunity Studio


The information that Attunity needs to access applications, data sources, and events is defined in a binding configuration. Bindings are configured on a server machine with the data sources and applications that you are working with. A binding configuration can also be defined on a client machine to point to data sources on a server machine.
Note:

The configuration supplied with the product installation includes the NAV binding. This configuration is used if a specific binding is not defined to access an application, data source, or event queue.

Use Attunity Studio to configure one or more bindings, each with a set of application adapters, data sources, and events. Each binding configuration has its own environment that defines the binding (such as cache sizes for storing information in memory during a session). The following sections describe the required tasks to define a binding:

Adding Bindings
Editing Bindings

Adding Bindings
You can set up a number of different bindings. Each binding may be for a different set of applications, data sources, or events. You can also create different binding configurations for the same application adapters, data sources, or events, with each having a different set of requirements. For example, you can set up separate configurations that allow different users access to specific resources.

To add a new binding
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the binding.


Note:

You can add a new binding configuration in a design machine in the offline design mode and later drag and drop the binding to this machine. For more information, see Using an Offline Design Machine to Create Attunity Definitions.

4. Right-click the Bindings folder and select New Binding.
5. Enter a name for the binding in the New Binding window.
Note: Attunity Studio does not support renaming a binding. Changing the binding name can cause problems with the data sources, adapters, and events for that binding.

In the event that you want to change the binding name, you must create a new binding and copy the data sources, adapters, and events to the new binding.
6. Click Finish. The new binding editor opens. See Editing Bindings for information on entering information in the binding editor.

Editing Bindings
The binding editor is used to set the binding environment and to define remote machines for a binding. The following sections describe how to edit the binding:

Setting the Binding Environment in Attunity Studio
Defining Remote Machines in a Binding

To open the binding editor
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Bindings folder.
3. Right-click the binding you want to edit and select Open. The binding editor opens in the editor area with the information for the existing binding. Make any changes you want to the binding properties.

Setting the Binding Environment in Attunity Studio


You set the binding environment from the binding editor's Environment tab. The environment is a set of properties that govern how the parts of the binding work. The properties are shared by all adapters, data sources, and events in the binding. Follow these steps to set the environment properties in a binding.

To set an environment for the binding configuration
1. Right-click the binding you want to edit and select Open. The binding editor opens in the editor area.


Figure 3-1 Environment Properties

2. On the Environment tab, you can do any of the following to work with the binding environment:

Select the Use NAV environment check box. When this is selected, the NAV environment is used for all of the Environment Properties, and the property values are set to the NAV values at runtime. The editing controls for each property are disabled. If you want to change any of the Environment Properties, clear the check box.
Click Copy NAV environment to use the default settings, which are the original settings for the NAV binding. This automatically sets all of the environment properties to the NAV values and allows you to edit individual properties as needed. The property values are created as NAV values during design time.
Click Restore default values to restore the environment settings that were defined in the development environment. The values revert to what they were when the AIS solution was first deployed; all values that were changed in the current session or any previous session (including values changed with NAV_UTIL or Attunity Studio) return to their original values. In addition, each section in the Environment Properties editor has its own Restore default values button. Click this button to restore the default values for the properties in that section only.

3. Edit any of the environment properties displayed on the Environment tab. For an explanation of the properties, see Environment Properties.

Defining Remote Machines in a Binding


Every remote machine accessed by AIS must be defined in the binding configuration of the machine that accesses it. The machines to be accessed are listed on the Machines tab of the binding editor.


To set up access to a remote machine
1. Right-click the binding you want to edit and select Open. The binding editor opens in the editor area.
2. Click the Machines tab.

Figure 3-2 Machine Tab

3. Click Add to open the Add Remote Machines dialog box.

Figure 3-3 Add Remote Machines Screen


4. Enter the following information:

Host name/IP address: Enter the name of the machine on the network, or click Browse to browse all the machines that are running a daemon listener on the specified port and are currently accessible over the network.
Port: Enter the port number where the daemon is running. The default port is 2551.
Display name: Enter an alias used to identify the machine if it is different from the host name (optional).
User name: Enter the machine administrator's user name.
Note:

Define the machine administrator when the machine is installed using Attunity Studio, or by using the ADD_ADMIN operation in NAV_UTIL.

Password: Enter the machine administrator's password. This is the password for the user entered in the User name field. If no password is necessary to access this machine, leave this field blank.
Connect via NAT with fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, regardless of the port specified. For more information, see Firewall Support.

5. Click OK. If the users with access to the remote machine are known, you can set the user profile for the machine by clicking Security on the Machines tab. If the users are not known, you can define them later from the Users folder in the Configuration view. For more information, see Managing a User Profile in Attunity Studio.

Binding Syntax
The Binding settings in XML format include the following statements:

A <remoteMachines> statement, specifying the remote machines that can be accessed from the current machine, by using <remoteMachine> statements.
A <datasources> statement, specifying the data sources that can be accessed, by using <datasource> statements. This statement specifies the following:
  A name to identify the data source
  The data source type
  General information
  <config> statements, specifying specific properties for the data source driver
An <environment> statement, specifying the properties for the specific binding. For details, see Environment Properties.
An <adapters> statement, specifying the Application Adapters that can be accessed, by using <adapter> statements. This statement specifies the following:
  An <adapter> statement, specifying a name to identify the application, and the application type.
  <config> statements, specifying specific properties for an application adapter.

See Sample Binding for an XML example of a binding.
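In outline, a binding configuration therefore has the following skeleton (element content is elided here; see Sample Binding for a complete working example):

<binding name="NAV">
  <remoteMachines> ... </remoteMachines>
  <environment name="NAV"> ... </environment>
  <datasources name="NAV"> ... </datasources>
  <adapters name="NAV"> ... </adapters>
</binding>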

<remoteMachines> Statement
The <remoteMachines> statement lists the names of the accessible servers, using <remoteMachine> statements. These statements are only necessary when you connect to data sources through a shortcut on the client machine. In other cases (such as accessing an application, or a data source using JDBC), the location is specified as part of the Connect String.

<remoteMachine> Statement
The <remoteMachine> statement lists the names and IP addresses of the remote machines with data sources that are accessed using data source shortcuts on the current machine. The names are used as aliases for the IP addresses in the <datasource> statements. This enables you to redefine the location of a group of data sources (on a given machine) by changing the IP address associated with this alias. The format is:
<remoteMachine name="alias" address="address" port="port_number" workspace="workspace" encryptionProtocol="RC4|DES3" firewallProtocol="none|nat|fixednat"/>

Where:

name: The name of the remote machine that is recognized by AIS. The name's maximum length is 32 characters, and it must start with a letter. This name cannot be the name of a data source specified in a <datasources> statement.
Note:

The name does not need to relate to the name of the machine on the network.

address: The IP address of the remote machine.
port: The port on the remote machine where the AIS Daemon is running. If you do not specify a port number, the system allocates the default server port, 2551.
workspace: The specific working configuration specified for this binding by the daemon. A Workspace must be defined in the daemon configuration on the remote machine.
encryptionProtocol: The protocol used to encrypt network communications. AIS currently supports the RC4 and DES3 protocols.
firewallProtocol: The firewall protocol used. Valid values are none, nat, or fixednat. The default is none. NAT (Network Address Translation) is a firewall protocol where internal IP addresses are hidden. It enables a network to use one set of IP addresses for internal traffic and a second set of addresses for external traffic, and translates all necessary IP addresses. However, using NAT requires every access by every client to go through the daemon port, even after a specific server process has been assigned to handle the client. Specifying fixednat for this parameter sets AIS to access this remote machine through a firewall using NAT with a fixed IP address. When the server address is returned to the client and the client sees that the IP is not the IP of the daemon, it ignores the IP and uses the daemon's IP instead. It is recommended to use fixednat to access data via a firewall. For more information, see Firewall Support.
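For example, the following is a minimal sketch of a <remoteMachine> statement for a server accessed through a NAT firewall with a fixed IP address; the machine name and address shown are illustrative:

<remoteMachines>
  <!-- Illustrative name and address; fixednat accesses this machine through the firewall using NAT with a fixed IP -->
  <remoteMachine name="PROD_SERVER" address="10.1.1.25" port="2551" firewallProtocol="fixednat"/>
</remoteMachines>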
Example 3-1 <remoteMachines> statement

<remoteMachines>
  <remoteMachine name="ALPHA_ACME_COM" address="alpha.acme.com"/>
  <remoteMachine name="SUN_ACME_COM" address="sun.acme.com" port="8888" workspace="PROD"/>
</remoteMachines>

<adapters> Statement
This statement lists the accessible application adapters using the <adapter> statement.

<adapter> Statement
An <adapter> statement specifies the name and properties of an AIS application adapter. The basic format is as follows:

<adapter name="name" type="type" definition="definition_name">
  <config .../>
</adapter>

Where:

name: The name of the adapter. The maximum length is 32 characters.
type: The type of the application adapter to be accessed. This value is different for each application adapter. Refer to a specific application adapter for the value of this parameter.
definition: The name of the adapter metadata used to describe the adapter. If the value here is the same as the adapter name, it can be omitted.
Note:

Some adapters have an internal definition, and a value here must be omitted.

<config> Statement
A <config> statement specifies the configuration properties of an application adapter. The configuration information is specific to each adapter type. The basic format is as follows:

<adapter name="name" type="type">
  <config attribute="value" attribute="value" .../>
</adapter>

Where:

attribute: The name of the configuration property. Attributes are adapter-dependent. For example, the preloaded attribute is set with an event router adapter or event to initiate the adapter as soon as the Daemon is started.

Note:

The preloaded attribute can also be set when the adapter definition is large, so that the time needed to allocate a server when the user opens a connection is shortened, since the definition has already been loaded.

value: The value of the configuration property.
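For example, the following <adapters> statement, taken from the Sample Binding later in this chapter, defines a legacy plug adapter with a single configuration property:

<adapters name="NAV">
  <!-- dllName points the LegacyPlug adapter at the library that implements the legacy procedures -->
  <adapter name="MathLegacy" type="LegacyPlug">
    <config dllName="c:\legacy\prc_samples.dll"/>
  </adapter>
</adapters>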

<datasources> Statement
This statement lists the accessible data sources using the <datasource> statement.

<datasource> Statement
A <datasource> statement specifies the name and type of the data source and the information required to connect to the data source. The basic format is as follows:

<datasource name="name" type="type" attribute="value">
  <config .../>
</datasource>

Where:

name: The name of the data source that is recognized by AIS. The maximum length is 32 characters. The name cannot include hyphens (-). It can include underscores (_). This name cannot be the name of a machine specified in a <remoteMachines> statement.
type: The type of the data source to be accessed. This value is different for each data source driver. Refer to a specific data source driver for the value of this parameter. The value of this field when you define a data source shortcut (where the data source resides on another machine) is REMOTE.
attribute: General data source attributes, such as read only. These attributes are set in Attunity Studio on the Advanced tab for the data source.
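For example, the Sample Binding later in this chapter uses type="remote" to define a shortcut named ORA that points to a data source on the remote machine SUN_ACME_COM:

<!-- The connect value names a machine from the <remoteMachines> statement -->
<datasource name="ORA" type="remote" connect="sun_acme_com"/>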
Note:

The localCopy and noExtendedMetadata attributes are set automatically on the Metadata tab of the Design perspective when changes are made to native metadata. For more details, see Native Metadata Caching and Extended Native Data Source Metadata.

The following additional attributes are supported:


Table 3-1 Data Source Supported Attributes

Transaction type
Attribute: transactionType="trnLevelSupport|datasourceDefault"
Description: The transaction level (0PC, 1PC, or 2PC) that is applied to this data source, no matter what level the data source supports. The default is the data source's default level.

Syntax name
Attribute: syntaxName="value"
Description: A section name in the NAV.SYN file that describes SQL syntax variations. For further details about this field, see Using the Attunity Connect Syntax File (NAV.SYN). The default syntax file contains the following predefined sections:
  OLESQL driver and the SQL Server 7 OLE DB provider (SQLOLEDB): syntaxName="OLESQL_SQLOLEDB"
  OLESQL driver and JOLT: syntaxName="OLESQL_JOLT"
  Rdb driver and Rdb version: syntaxName="RDBS_SYNTAX"
  ODBC driver and EXCEL data: syntaxName="excel_data"
  ODBC driver and SQL/MX data: syntaxName="SQLMX_SYNTAX"
  ODBC driver and SYBASE SQL AnyWhere data: syntaxName="SQLANYS_SYNTAX"
  Oracle driver and Oracle case-sensitive data: syntaxName="ORACLE8_SYNTAX" or syntaxName="ORACLE_SYNTAX". For case-sensitive table and column names in Oracle, use quotes (") to delimit the names and specify the case sensitivity precisely.

Default table owner
Attribute: owner="value"
Description: The name of the table owner that is used if an owner is not indicated in the SQL.

Read/Write information
Attribute: readOnly="true|false" (the default value is false)
Description: When true, the data source is in read-only mode. All update and data definition language (DDL) operations are blocked.

Repository directory
Attribute: objectStoreDir="value"
Description: The location of a repository for a data source.

Repository name
Attribute: objectStoreName="value"
Description: The name of a repository for a specific data source. The name is defined as a data source in the binding configuration with a type of Virtual, and is used to store AIS views and stored procedures specific to the data source, when this is wanted in preference to the default SYS data.
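As a hedged sketch, combining two of these attributes on a <datasource> statement might look as follows; the data source name, type, and owner value are illustrative:

<!-- readOnly blocks update and DDL operations; owner supplies the default table owner -->
<datasource name="PAYROLL" type="ORACLE8" readOnly="true" owner="SCOTT">
  <config .../>
</datasource>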

Example 3-2 <datasources> statement

<datasources name="NAV">
  <datasource name="ADABAS" type="ADABAS">
    <config dbNumber="3" predictFileNumber="7"/>
  </datasource>
  <datasource name="DB2" type="DB2">
    <config dbname="person2"/>
  </datasource>
  <datasource name="DEMO" type="ADD-DISAM">
    <config newFileLocation="/users/nav/dis"/>
  </datasource>
  <datasource name="DISAM" type="ADD-DISAM">
    <config newFileLocation="/users/nav/dis"/>
  </datasource>
  <datasource name="SYBASE" type="SYBASE">
    <config server="SYB11_HP" dbName="personnel"/>
  </datasource>
</datasources>

Note:

On the HP NonStop platform, the Repository Information objectStoreDir and objectStoreName attributes do not affect Alternate Key Files for the following data sources:
  Enscribe
  SQL/MP, if a local copy or extended metadata is used
These files are always created in the NAVROOT subvolume with uniquely generated filenames.

<config> Statement
This statement specifies the configuration properties of a data source. The configuration information is specific to each data source type. The basic format is as follows:

<datasource name="name" type="type">
  <config attribute="value" attribute="value" .../>
</datasource>

Where:

attribute: The name of the configuration property.
value: The value of the configuration property.

Example 3-3 <config> statement

<datasources>
  <datasource name="DEMO" type="ADD-DISAM">
    <config newFileLocation="/users/nav/dis"/>
  </datasource>
</datasources>

Sample Binding
This section shows a sample binding in XML format. You can also view the binding.bnd XML file in the XML editor. To open this file in the editor:

Right-click on the binding you want to view and select Open as XML.

This displays a graphical interface where you can define the various aspects of a solution. This interface lets you make changes easily without having to manually edit the XML file. For more information, see Editing XML Files in Attunity Studio. If you want to view the XML in its original format, click the Source tab after you open the binding in XML.


The following binding configuration provides information for the NAVDEMO sample data source, a local data source (an Oracle database named ORA_EXT), one remote data source, and one adapter (MathLegacy, which uses the AIS legacy plug adapter):

<?xml version="1.0" encoding="ISO-8859-1"?>
<navobj version="...">
  <bindings>
    <binding name="NAV">
      <remoteMachines>
        <remoteMachine name="SUN_ACME_COM" address="sun.acme.com" workspace="PROD"/>
      </remoteMachines>
      <environment name="NAV">
        <debug generalTrace="true"/>
        <misc/>
        <queryProcessor/>
        <optimizer goal="none" preferredSite="server"/>
        <transactions/>
        <odbc/>
        <oledb/>
        <tuning/>
      </environment>
      <datasources name="NAV">
        <datasource name="NAVDEMO" type="ADD-DISAM">
          <config newFileLocation="$NAVDEMO"/>
        </datasource>
        <datasource name="ORA_EXT" type="ORACLE8" connect="@ora8_ntdb"/>
        <datasource name="ORA" type="remote" connect="sun_acme_com"/>
      </datasources>
      <adapters name="NAV">
        <adapter name="MathLegacy" type="LegacyPlug">
          <config dllName="c:\legacy\prc_samples.dll"/>
        </adapter>
      </adapters>
    </binding>
  </bindings>
</navobj>

Environment Properties
Each Binding configuration includes its own environment, specified in the environment properties.
Note:

When using an ADO front-end application, the environment used is the environment of the first binding configuration used in the program, even if the binding configuration used is changed during the program.

To display environment properties for the binding configuration in Attunity Studio, right-click the binding configuration and select Open. The environment properties are listed in the Environment tab. The following sections describe each category in the Environment Properties editor.


Debug
General
Language
Modeling
ODBC
OLE DB
Optimizer
Parallel Processing
Query Processor
Temp Features
Transaction
Tuning
XML

Debug
The following list shows the debug properties. The debug properties control what information is reported for debugging purposes.

ACX trace: Select this to write the input XML sent to the adapter and the output XML returned by the adapter to the log file.
GDB trace: Select this to write the driver transactions created using the AIS SDK to the log. For details, refer to Attunity Developer SDK.
General trace: Select this to write general trace information to the log. The default writes only error messages to the log. Note: Changing the default setting can degrade AIS performance.

Query warnings: Select this to generate a log file of Query Processor warnings.
Add timestamp to traced events: Select this to add a timestamp on each event row in the log.
Trigger trace: Select this to log trigger information each time that a database executes a trigger.
Adapter trace:
Control trace:
Query processor trace:
Performance trace:

Binary XML log level: Select the binary XML log level from the list. This parameter is used for troubleshooting. The following logging levels are available:
  None
  API
  Info
  Debug

Binding Configuration 3-13

Log file: Enter the full path and filename of the log file for messages. The default log file (NAV.LOG) is located in the TMP directory under the directory where AIS Server is installed. To send log messages to the console instead of a file, set logFile to a minus ("-") character. The following message types are written to the log:

Error messages.
Trace information about the query optimization strategy (when General trace is selected).

For HP NonStop, the default AIS log file is called NAVLOG and is located in the subvolume where the AIS Server is installed. If the log file location is described by a UNIX-type path (such as /G/d0117/ac3300/navlog), then the log file can be viewed from other processes while it is open. Otherwise, the log is not readable while it is open. To view the file, use the following:

FUP COPY filename,, SHARE

For z/OS, the default AIS log file is NAVROOT.DEF.NAVLOG, where NAVROOT is the high-level qualifier specified when AIS Server is installed.

Trace directory: Enter the directory where AIS writes the log generated by the optimizer files (with a PLN extension). The optimizer files include details of the optimization strategy used by AIS. By default, these files are written to the same directory as the log file (see Log file).
Transaction log file: Enter the full path and filename of the log file that logs transaction activity. This log file is used during recovery operations. On Windows platforms, the default log file (TRLOG.TLF) is written to the same directory as the NAV.LOG file (which is defined by the debug logFile parameter). It is recommended to use the default log file and perform recovery from a PC.
Transaction trace: Select this to write 2PC and XA transaction-related events to the log.
Optimizer trace: Select this to write trace information and information about the Query Optimizer strategy to the log file. If this property is selected, the following properties are also enabled:
  Full trace: Select this to enable all optimizer traces.
  Trace groups: If using Full trace, you can select this to enable generated-groups optimizer traces. Trace groups is unavailable if Full trace is not selected.

Transaction extended logging: Select this for the transaction manager to write additional information about transactions to the log.
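For reference, these properties are stored as attributes of the <debug> statement in the <environment> section of the binding. The following is a minimal sketch using the two attribute names that appear elsewhere in this guide, generalTrace and logFile; the log file path shown is illustrative:

<environment name="NAV">
  <!-- generalTrace writes general trace information; logFile="-" would send messages to the console -->
  <debug generalTrace="true" logFile="/tmp/nav.log"/>
</environment>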

General
The following list shows the general properties. The general properties control general configuration properties for the binding.

Compress object store: Select this to allow compressing objects in the repository if they use more than 2K of storage. This property is automatically selected for bindings created for a CDC Agent with a Staging Area.


Read V3 definition: Select this if you are upgrading AIS from version 3.xx. This property is selected by default. If you do not need this behavior, clear the check box.
Temporary directory: Enter the path to the directory where temporary files are written, including the temporary files created for use by hash joins and for sorting files. The default is the current directory. Use the following guidelines to determine where your temporary directory should reside:
1. Select a directory that contains temporary files only. You can then easily remove these files if necessary (for example, if the process stopped in the middle).
2. Select a directory on a disk that has a significant amount of free disk space.

Year 2000 policy: This property defines how two-digit years are converted into four-digit years. Enter a numeric value in this field. Two policies can be used:

Fixed Base Year: If this property is set to a value greater than or equal to 1900, the Fixed Base Year policy is used. In this case, the property value is the first four-digit year after 1900 that can be represented by two digits. For example, if the value is set to 1905, the years 2000 through 2004 are represented by 00 through 04; all other two-digit years map to 19xx.
Sliding Base Year: If this property is set to a positive value less than 100, the Sliding Base Year policy is used. In this case, the property value is the number of years ahead of the current year that can be represented by a two-digit number. With each passing year, the earliest year that can be represented by a two-digit number moves one year later. For example, with a value of 5 in the year 2008, the two-digit years 00 through 13 are interpreted as 2000 through 2013, and 14 through 99 as 1914 through 1999.

When the parameter is not set, or when it is set to a value outside the range of values defined for the above policies, the default value of 5 and the Sliding Base Year policy are used.

NAV_UTIL editor: Enter the text editor to use with NAV_UTIL EDIT. The default is the native text editor for the operating system.
Journal file name: Enter the full path to the journal file (including the file name) for use with CDC on DISAM. The default journal file is located in the DEF directory of the AIS installation.
Cache buffer size: Enter the number of bytes to be used for a memory buffer on a client machine, which is used by the AIS client/server to store read-ahead data. The default is 200000.

Language
The Language section lets you set the default language for applications in the binding.

To set the default language for the binding
From the Language list, select the National Language Support (NLS) supported language to use in this binding. Valid values are:

ARA (Arabic)
ENG (English)
FR (French)
GER (German)
GREEK (Greek)
HEB (Hebrew)
JPN (Japanese)
KOR (Korean)
SCHI (Simple Chinese)
SPA (Spanish)
TCHI (Traditional Chinese)
TUR (Turkish)

From the Codepage list, select the codepage that you want to use with this language. The code pages available are determined by the language that is selected. If you have additional code pages available, you can manually enter them in this field. If no codepage is selected, the default codepage for the selected language is used.
Note: If you change the language, the code page also changes. Check to be sure that you want to use the selected code page with the language you selected.

From the NLS string list, select the NLS string for this language and code page. The NLS strings available are determined by the code page that is selected. If you have additional NLS strings available, you can manually enter them in this field. The codepage is used by a field with a data type defined as nlsString. This parameter is used for a field with a codepage that is different from the machine's codepage. This property includes the following values:

The name of the codepage.
Whether the character set reads from right to left (as in Middle Eastern character sets).

For example, the following specifies a Japanese EUC 16-bit codepage:

<misc nlsString="JA16EUC,false"/>

For more information, see NLS Support at the Field Level.

Modeling
The modeling section lets you define how to handle nonrelational data and arrays. For more information, see Handling Arrays.

Array metadata model: Select the virtual array flattening model for the binding. You can select one of the following models:
  virtualarrayTables: In this model, a virtual table is generated for every array in the parent record, with specially generated virtual fields that connect the parent and the virtual table. This is the default for all cases except CDC.
  virtualarrayViews: In this model, the parent field is replaced with a unique key. Virtual views use the same metadata as virtual tables. This is the default model for CDC.

Reduce sequential flattening: Select this so that sequentially flattened tables do not return a row that lists only the parent record, without the values of the child array columns. This option is available only if you select virtualarrayViews as your Array metadata model.


Reduce virtual views: Select this so that virtual views do not return a row that lists only the parent record, without the values of the child array columns. By default, this property is selected. Clear the check box to return the row. This option is available only if you select virtualarrayViews as your Array metadata model.
Generate unique index names: Select this to generate a unique name for every index on a table that is defined in a non-relational system. When this is selected, the names of all indexes on non-relational tables are exposed in the following format: table_name_KEYkey_number. For example, if a table is called X, the name of the first index would be X_KEY0.

ODBC
The following list shows the properties for ODBC. The ODBC properties set the parameters used when using ODBC to work with AIS.

Maximum active connections: Enter the maximum number of connections that an ODBC or OLE DB application can make through AIS. The default is 0. The greater the number of connections possible, the faster the application can run. However, other applications will run slower, and each connection is counted as a license, restricting the total number of users who can access data through AIS concurrently. For example, this is true when using MS Access as a front end, because MS Access allocates more than one connection whenever possible.
Maximum active statements: Enter the value returned for the InfoType of the ODBC SQLGetInfo API. The default (0) means that there is no limit on the number of active statements.
Force qualify tables: Select this to report the catalog and table name together as a single string (in this format: DS:table_name).

OLE DB
The following list shows the properties for OLE DB. The OLE DB properties set the parameters used when using OLE DB to work with AIS.

Trace: Select this to write the trace information used when working with OLE DB providers to the log. If not selected, only error messages are written to the log. Note: Changing the default setting can degrade AIS performance.

Suppress chapters: Select this to suppress chapters in OLE DB. In this case, chapters are exposed as regular columns. Use this property to prevent an SQL Server error for a bad datatype.
Maximum row handles: Enter the maximum number of hrows (row handles) that can reside in memory at one time under OLE DB. This property is related to the ADO CacheSize property. Set both of these properties to the same value. If the two properties are different, the smaller value is used and the other is ignored.
OLE threads: Enter the number of open threads allowed when working with OLE transactions. These threads are used for operations received from the MSDTC. The minimum value is 5, the optimum value is 15, and the maximum value is 25.


Optimizer
The following list shows the optimizer properties. The optimizer properties control how the query optimizer works in the binding.

Avoid scan: Select this to force the optimizer not to choose the scan strategy, if a different strategy can be used.
Disable multi-index: Select this to disable the multi-index storage. For Adabas only.
Disable cache without index: Select this to disable non-index caches.
Disable flattener: Select this to instruct the query optimizer not to flatten queries, including nested queries.
Disable hash join: Select this to disable hash join optimization. When hash joins are enabled, a significant amount of disk space is required (see Hash maximum disk space). If the system does not have available disk space, use this option to disable hash join optimization.
Disable index cache: Select this to disable index caching.
Disable lookup cache: Select this to disable the lookup cache.
Encourage lookup cache: Select this to have the optimizer ignore the cache buffer size restriction (hashBufferSize) on a group consisting of a single table.

Disable pass thru: Select this to force the optimizer to execute a full optimization for the query, even if it can be delegated to the relational backend database as is.
Disable Tdp union: Select this to have the optimizer handle separate data sources as if they are on different backends, even if they are part of the same database or remote machine.
Disable subquery cache: Select this to disable the cache for subqueries.
Analyzer query plan: Select this to write the Query Optimizer plan to a plan file for analysis using the AIS Query Analyzer.
Optimization goal: Select the optimization policy to use from the list. The following policies are available:
  none: All-row optimization is used. This is the default value.
  first: First-row optimization is performed, based on the assumption that the results produced by the query are used as the rows are retrieved. The query optimizer uses a strategy that retrieves the first rows as fast as possible, which might result in a slower overall time to retrieve all the rows.
  all: Optimization is performed based on the assumption that the results produced by the query are used after all the rows have been retrieved. The query optimizer uses a strategy that retrieves all the rows as fast as possible, which might result in a slower time to retrieve the first few rows.

Note: Aggregate queries automatically use all row optimization, regardless of the value of this parameter.

Hash maximum disk space: Enter the maximum amount of disk space (in MB) that a query can use for hash joins. The default is -1 (which indicates unlimited, that is, all the free space on the allocated disk). If a query requires more space than allocated through this parameter, the query execution stops. The minimum value for this parameter is 20 MB.

Note: Temporary files are written per query. Therefore, if several users can execute queries at the same time, adjust the amount of space available so that the total that can be allocated at any one time does not exceed the available space.

HP NonStop: If AIS files reduced to disk are larger than 500 MB, use this parameter to enlarge the default size of the file that AIS opens. The default for this parameter on HP NonStop machines is 478.3 MB, which consists of a primary extent size of 20K, a secondary extent size of 1000, and a maximum of 500 extents.

Preferred site: Select the machine where you want to process the query. Normally the query is processed as close to the data source as possible, either using the query processing of the data source or, if this is not available, the Query Processor on the same machine as the data source. If a situation arises in which it is more efficient to process the query on the client machine (for example, when the remote machine is heavily overloaded), you can tune AIS to process all or part of the query locally. The extent to which performance is improved by processing all or some of the query locally can be determined only on a trial-and-error basis. Consider the following points when processing the query locally:

Increased communication costs.
Decreased server workload.

Before adjusting this parameter, check the log to see if other tuning is more appropriate. The options are (see the XML sketch at the end of this section):

server (the default): The query is processed on the server.
nearServer: The query is processed mostly on the server, with parts of the query processed on the client (determined by the specific query).
nearClient: The query is processed mostly on the client, with parts of the query processed on the server (determined by the specific query).
client: The query is processed on the client.

Maximum tables in group:
Default row cardinality: Enter the default row cardinality. If set to a value other than 0, this value is used by the optimizer as the default number of rows for pass-through queries and for tables that have no statistical information.
Maximum groups for reorder:
No LOJ delegation: Select this if you do not want to delegate LEFT OUTER JOIN queries to the relational backend database. AIS will attempt to suppress LOJ queries. If not selected, every query can be delegated as is.
Use recursive LOJ allocation: Select this to allow recursive optimization of queries that include left outer joins. By default, this is selected. If you want to disable recursive LOJ delegation, clear the check box.
LOJ recursive optimization limit: Enter a value only if Use recursive LOJ allocation is selected.
Disable semi-join: Select this to disable semi-join optimization.
Semi-join in values factor: Enter the number of parameters that a semi-join strategy sends to an RDBMS.

Disable order-by-index strategy: Select this to disable the order-by-index strategy. In this strategy, the order of the results is achieved by accessing the table through the index that contains its segments in the ORDER BY clause, so that the query processor does not sort the results. If this property is selected, the strategy is not used and sorting is done by the query processor. Note that this strategy is rarely selected, and using this property affects only queries with an ORDER BY and with no WHERE clause on indexes. All other cases use the same strategies as they do when this property is not selected.
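For reference, the Sample Binding in this chapter expresses two of these properties, Optimization goal and Preferred site, as attributes of the <optimizer> statement in the binding environment:

<!-- goal="none" selects all-row optimization; preferredSite="server" processes the query on the server -->
<optimizer goal="none" preferredSite="server"/>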

Parallel Processing
The following list shows the parallel processing properties. The parallel processing properties control how parallel processes are handled in the binding.

Disable threads: Select this to disable multi-threading. If this is selected, the following properties are disabled.
Disable threaded read ahead (QP): Select this to disable read-ahead functionality.
Disable query read ahead (QP): Select this to disable read-ahead functionality for components using Query Processor services.
ODBC async execution: Select this to enable ODBC asynchronous execution.
Disable QP parallel execution: Select this to disable parallel processing for query execution. This option is available only if both Disable threaded read ahead (QP) and Disable query read ahead (QP) are not selected.
Hash parallelism: Select this to read both sides of hash joins at the same time. By default, this property is selected. If you do not want this behavior, clear the check box.

Query Processor
The following list shows the query processor properties. The query processor properties control how the query processor processes requests in the binding.

Disable command reuse: Select this to disable Query Processor caching of the executed state of a query for reuse.
Disable DS property cache: Select this to disable caching of data source properties.
Disable insert parameterization: Select this to disable the parameterization of constants in INSERT statements.
Disable metadata caching: Select this to disable caching of object metadata. If this is selected, the object metadata is taken from the original data source instead of from the cache.
Disable query parameterization: Select this to not convert constants into parameters when accessing data sources.
Disable row mark field fetch: Select this for OLE DB getRows errors to be marked and reshown on every getRows, if the rowset is active.
Compile after load: Select this to always compile an AIS procedure or view after it is read.
Ignore segments bind failure: This property determines how AIS responds when the execution of one of the segments of a segmented data source fails. Select this to log a message and continue execution; this is the default setting. Clear the check box to log a message and stop execution.


Prompt database-user password: Select this to configure AIS to prompt the user for security information when accessing a data source.
Use alternate qualifier: Select this to use the @ symbol instead of a colon (:) when connecting to multiple data sources. Note: Use this value when building an application with PowerBuilder from Sybase Inc. or Genio from Hummingbird Ltd.
Use table filter expression: Select this to enable the use of tables that have filter expressions specified in their metadata. For details of filters in ADD, see The <table> Statement for information on the filter property, or see the Metadata General Tab for information on using the Filter expression in Attunity Studio.
Write empty string as null: Select this to replace empty strings in a SET clause of an UPDATE statement, or in a VALUES list of an INSERT statement, with null values.
Optimistic for update: Select this to use optimistic locking as the default locking behavior on queries with a FOR UPDATE clause.
Disable compilation cache: Select this to disable saving successfully compiled statements in the cache.
Maximum SQL cache: Enter the maximum number of SQL queries that can be stored in cache memory. This property's value is ignored if Disable compilation cache is selected. The default is 3.
First tree extensions: Enter the maximum size allowed for an SQL query after compilation. The default is 150.
Maximum columns in parsing: Enter the maximum number of columns that a query can reference. The default is 500.
Maximum segmented database threads: Enter the maximum number of open threads allowed when working with segmented databases.
Minimum number of parameters allocated: Enter the minimum number of parameters that can be used in a query.
Continuous query retry interval: Enter the number of seconds that the query processor waits before executing a query again, when no records are returned. The default is 2.
Continuous query timeout: Enter the number of seconds that the query processor continues to issue queries when no records are returned. The default is 3600 (one hour), which indicates that after an hour without new messages the continuous query ends. Enter 0 to indicate that there is no timeout and the continuous query does not end automatically.
Continuous query prefix: Enter a prefix to replace the $$ prefix that is used to identify the continuous query special columns. For example, if you enter ##, then the continuous query alias is ##StreamPosition and the control command alias is ##ControlCommand.
Arithmetic fixed precision: Enter an integer to determine the precision scale factor for floating decimal positions. The default is 0, which indicates that the exact arithmetic function is not used. When the value is set to a small positive integer, special precise floating-point arithmetic is used in the query processor. The value determines the precision scale factor (for example, a value of 2 indicates two digits of decimal precision). Setting this parameter can be done at the workspace level, and it affects all queries running at that workspace with no change to the query or to the underlying data source. The query processor ADD(), SUBTRACT(), and SUM() functions, which currently use double arithmetic for both floating and decimal types, use this logic. When the value is set to the default, 0, the exact arithmetic function is not used. This property sets the Exact Arithmetic function. The qpArithmeticFixedPrecision property is an integer value that determines the fixed precision the AIS query processor uses for precise floating-point arithmetic. It is used to create an accurate result when using the SUM function. Because floating-point datatypes are not accurate, their results over time do not correspond to the expected arithmetic sum. In other words, in the floating-point representation, values such as 0.7 cannot be represented precisely: if there are eight precision digits, there is usually imprecision in the least significant digit, so the number is actually approximately 0.699999995. The qpArithmeticFixedPrecision property corrects this imprecision by using an exact floating point. (See the sketch at the end of this property list.)

Parser depth: Enter the maximum depth of the expression tree. The default is 500.
Token size: Enter the maximum length of a string in an SQL query. The minimum value is 64. The default is 350.
Insert from select commit rate: Enter the commit rate to use when executing an INSERT-FROM-SELECT operation. If a value greater than 0 is entered, a commit is performed automatically after inserting the indicated number of rows. For example, if the value is 5, a commit is performed every time 5 rows are inserted.
Disable SQS cache: Select this to always read compiled AIS procedures and views from disk. In this case, they are not saved in the cache.
Procedures cache size: Enter the number of AIS stored queries created with a CREATE PROCEDURE statement that can be kept in cache memory. This property's value is ignored if Disable SQS cache is selected.
Expose XML fields: Select this to display data returned for a query as XML, representing the true structure of the result. This is useful when querying a data source table that contains arrays or variants. For additional information, see the SELECT XML Statement.
XML field name: Enter the name used in a query to indicate that the data is returned as XML, instead of the keyword XML. This is available only if Expose XML fields is selected.
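This section does not show the XML spelling of the arithmetic precision property. As a hedged sketch only: if the qpArithmeticFixedPrecision property maps onto the <queryProcessor> statement of the binding environment like the other query processor properties, a two-digit precision setting might look like the following (the attribute name is an assumption inferred from the property name, not confirmed by this guide):

<environment name="NAV">
  <!-- Assumption: attribute name inferred from the qpArithmeticFixedPrecision property; verify against your binding.bnd -->
  <queryProcessor arithmeticFixedPrecision="2"/>
</environment>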

Temp Features
The temp features section lets you add temporary properties to the binding. These properties may be defined in the AIS documentation or release notes, or you can define any additional property. The temporary property is added to the binding.bnd file, which defines the binding environment.

To set a temporary feature
1. Enter an ID for the feature. This is the name given to the environment property that configures the feature.
2. Enter the Value to use for the feature.
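Because the feature is written into binding.bnd as an ordinary environment property, the result is an attribute in the binding XML. In the sketch below, the feature ID someTempFeature, its value, and the element it is placed under are all illustrative assumptions; the real ID comes from the AIS documentation or release notes:

<environment name="NAV">
  <misc someTempFeature="1" />
</environment>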


Transaction
The following list shows the transaction properties. The transaction properties control how transactions are handled in the binding.

Commit on destroy: Select this to commit all single-phase commit transactions opened for a data source if a connection closes while the transaction is still open.

Disable 2PC: Select this to disable two-phase commit capabilities, even in drivers that support two-phase commit.

User commit confirm table: Select this to use the commit-confirm table for data sources that support single-phase commit.

Recovery delay: Enter the number of minutes from the start of a transaction before any recovery operation on that transaction can be attempted. The default is 15.

Time limit: Enter the time to wait for a transaction to complete before an error is returned. This parameter is also used when executing a RECOVERY, in which case it indicates the number of minutes to wait after the last transaction activity before a forced activity can be executed.

Conversions: You can select one of the following:
  No conversion: Select this if you want all transactions to remain as sent. This is selected by default.
  Convert all to distributed: Select this to convert all simple transactions into distributed transactions.
  Convert all to simple: Select this to convert all distributed transactions into simple transactions.
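In the binding XML, these settings belong to the transactions element, which appears (empty, meaning all defaults) in the sample at the end of this chapter. The attribute names in this sketch are assumptions for illustration only:

<transactions commitOnDestroy="true"
              recoveryDelay="15" />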

Tuning
The following list shows the tuning properties. The tuning properties are set to increase system efficiency in the binding.

Dsm maximum buffer size: Enter the maximum size of the cache memory. This cache is used when memory is required on a temporary basis (as when AIS sorts data for a query output, for a subquery, or for aggregate queries). This cache size is not used for hash joins and lookup joins (see also Hash buffer size). The default is 1000000.

Dsm maximum hash file size:

Dsm maximum sort buffer size: Enter the maximum size of the sort buffers. Use this parameter instead of Dsm maximum buffer size for sorts only. The default is 1000000.

Dsm middle buffer size: Enter the maximum number of bytes for the index cache. This cache is not used for hash joins and lookup joins. The default is 1000000.

File pool size: Enter the maximum number of files that can be opened in the file pool. The default is 10.

File pool size per file: Enter the size of the file in the pool. The default is 3.

File close on transaction: Select this if you want the file pool to close when a transaction is committed.

Use global file pool: Select this to use a global file pool. When the workspace server mode parameter is set to multiClient or reusable, this parameter also indicates whether the file pool closes upon client disconnection. See Server Mode.

Hash buffer size: Enter the number of bytes of cache memory that is available for each hash join or lookup join. The default is 1000000.

Hash max open files: Enter the maximum number of files that a query can open at one time for use when performing hash joins. The number assigned to this parameter must not exceed the system maximum. The default is 90.
Note: The hash join optimization strategy results in a number of files being opened to perform the join. The larger the table size, the more files are opened. By adjusting this parameter you can disable hash joins on very large tables, while allowing hash joins for small tables. (See Disable hash join for information on disabling hash optimization for all table joins.)

Hash primary extent size: Enter the primary extent size (MVS and Tandem only).

Hash secondary extent size: Enter the secondary extent size (MVS and Tandem only).

Hash max extents: Enter the maximum extent size (Tandem only).

Hash enable RO: Select this for the query processor to store the first hash bucket in memory instead of a sequential file.

Hash refresh EoF: Select this to refresh EoF requests (Tandem only).
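The tuning properties map onto attributes of the tuning element in the binding XML. The attribute names below are exactly those used in the sample at the end of this chapter:

<tuning dsmMaxBufferSize="1000000"
        dsmMidBufferSize="100000"
        hashBufferSize="1000000"
        hashMaxDiskSpace="-1"
        hashMaxOpenFiles="90" />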

XML
The following list shows the XML properties. The XML properties control how XML files are handled in the binding.

COM maximum XML in memory: The maximum size, in bytes, of an XML document held in memory. The default is 524288.

COM maximum XML size: The maximum size of an XML document passed to another machine. The default is 524288.
Note: When you increase the value of this property, you may need to increase the value of the maxXmlSize property in the daemon. For more information on daemons, see Setting up Daemons.

COM XML transport buffer size: Enter the maximum size of the internal communications buffer. The default value (-1) indicates there is no size limit.

XML date format: Enter the date format to use for XML. The options are:
  ISO (the default): The date format is YYYY-MM-DDThh:mm:ss[.ss..]
  ODBC: The date format is YYYY-MM-DD HH:MM:SS[.NNN...]

Replace invalid XML characters: Select this to replace invalid XML characters with a "?". It is used for diagnostic and troubleshooting purposes.

XML trim character columns: Select this to enable padded spaces to be trimmed from XML string columns when the record format is fixed. By default this is selected, and padded spaces are trimmed for fixed-size character columns. If you do not want this behavior, clear this check box.
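A sketch of how the XML category might look in the binding environment. Every attribute spelling here is an assumption for illustration; only the element categories themselves are shown in the sample at the end of this chapter:

<environment name="NAV">
  <xml comMaxXmlInMemory="524288"
       comMaxXmlSize="524288"
       xmlDateFormat="ISO" />
</environment>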


Table 3-2 Category debug (Environment Properties)

adminTrace

basedDate: Customizes the based_date data type, which has the form:
  basedDate="yyyymmdd[/dttype[/dtlen[/multiplier]]]"
where:
  yyyymmdd: Start date
  dttype: The name of the data type (int4 is the default)
  dtlen: The number of digits in the data type (if not atomic)
  multiplier: The number of increments per day
Example: 19700101/int4//24 indicates the number of hours since January 1, 1970.

basedDateNullability (default: true): If true, when the based_date value is 0 it is considered null. If false, the type is not nullable.
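Following the sample at the end of this chapter, where debug-category properties appear as attributes of the debug element, the based_date example above would be written along these lines (treat this as a sketch rather than a verified configuration):

<environment name="NAV">
  <debug basedDate="19700101/int4//24"
         basedDateNullability="true" />
</environment>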

Languages
This section provides additional, detailed information regarding each language supported. The languages supported are:

ARA (Arabic)
ENG (English)
FR (French)
GER (German)
GREEK (Greek)
HEB (Hebrew)
JPN (Japanese)
KOR (Korean)
SCHI (Simple Chinese)
SPA (Spanish)
TCHI (Traditional Chinese)
TUR (Turkish)

ARA (Arabic)
If the codepage parameter is blank, the default codepage on all supported platforms is AR8ISO8859P6, with the exception of HP NonStop platforms, where the default codepage is ARCII. The Windows codepage is 1256.


ENG (English)
ENG is the default language. If the codepage parameter is blank, the default codepage on all supported platforms is ASCII, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is EBCDIC. The Windows codepage is 1252.

FR (French)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P1, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is F8EBCDIC297. The Windows codepage is 1252.

GER (German)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P1, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is F8EBCDIC297. The Windows codepage is 1252.

GREEK (Greek)
If the codepage parameter is blank, the default codepage on all supported platforms is WEISO8859P7, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is F8EBCDIC875. The Windows codepage is 1253.

HEB (Hebrew)
If the codepage parameter is blank, the default codepage on all supported platforms is IW8ISO8859P8, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is IW8EBCDIC424. The Windows codepage is 1255.

JPN (Japanese)
If the codepage parameter is blank, the following are the default codepages on the supported platforms:
Platform                       Default Japanese Codepage
HP NonStop                     JA16SJIS
IBM z/OS                       JA16DBCS
IBM OS/400                     JA16DBCS
OpenVMS                        JA16VMS
UNIX, excluding Sun Solaris    JA16SJIS
UNIX Sun Solaris               JA16EUC
Windows                        JA16SJIS

KOR (Korean)
If the codepage parameter is blank, the default codepage on all supported platforms is KO16OSC5601, with the exception of Windows platforms, where the default codepage is KO16MS949 (949), and of IBM OS/400 and z/OS platforms, where the default codepage is KO16DBCS.

SCHI (Simple Chinese)


If the codepage parameter is blank, the default codepage on all supported platforms is ZHS16CGBK231280, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is ZHS16DBCS. For EBCDIC, the codepage is IBM 1388.

SPA (Spanish)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P1 (or the alias ASCII), with the exception of IBM OS/400 and z/OS platforms, where the default codepage is WE8EBCDICLATIN. The Windows codepage is 1252.

TCHI (Traditional Chinese)


If the codepage parameter is blank, the default codepage on all supported platforms is ZHT16BIG5, with the exception of IBM OS/400 and z/OS platforms, where the default codepage is ZHT16DBCS.

TUR (Turkish)
If the codepage parameter is blank, the default codepage on all supported platforms is WE8ISO8859P9 (or the alias ASCII), with the exception of IBM OS/400 and z/OS platforms, where the default codepage is WE8EBCDIC1026. The Windows codepage is 1254.

Sample Environment Properties


The following sample shows how different environment properties are represented in XML for the NAV binding configuration:
<environment name="NAV">
  <comm comCacheBufferSize="200000" />
  <debug logFile="" traceDir="" />
  <misc tempDir="" language="" codepage="" nlsString="" />
  <odbc maxActiveConnections="0" />
  <oledb maxHRows="100" />
  <optimizer preferredSite="server" />
  <queryProcessor proceduresCacheSize="3"
                  firstTreeExtensions="150"
                  maxColumnsInParsing="500" />
  <transactions />
  <tuning dsmMaxBufferSize="1000000"
          dsmMidBufferSize="100000"
          hashBufferSize="1000000"
          hashMaxDiskSpace="-1"
          hashMaxOpenFiles="90" />
</environment>

Note: The XML representation of the environment properties is displayed in Attunity Studio in the XML editor. To view the XML, right-click the binding you are working with and select Open as XML.



4
Setting up Daemons
This section contains the following topics:

Daemons
Defining Daemons at Design Time
Reloading Daemon Configurations at Runtime
Checking the Daemon Status
Starting and Stopping Daemons
Adding and Editing Workspaces

Daemons
Daemons manage communication between machines running AIS. The daemon is responsible for allocating Attunity server processes to clients. A daemon runs on every machine running AIS. The daemon authenticates clients, authorizes requests for a server process within a certain server workspace, and provides the clients with the required servers. When a client requests a connection, the daemon allocates a server process to handle this connection, and refers the client to the allocated process.
Note: The configuration supplied with the product installation includes the default IRPCD daemon. This configuration is used when no other daemon is configured to access the machine that is requested by a client machine.

For more information, see AIS Runtime Tasks from the Command Line.

Defining Daemons at Design Time


This section describes how to configure and edit a daemon in the Attunity Studio Design perspective. You can also configure daemons and workspaces in the Runtime Manager perspective. For more information on working with daemons and workspaces in the Runtime Manager perspective, see Runtime Management with Attunity Studio.


Adding a Daemon
The following describes how to add a new daemon to your system using Attunity Studio. When you add a new daemon in the Design perspective, you use standard configuration information or copy the configuration information from another daemon. When you edit the daemon, you can make custom changes to its configuration.

To add a new daemon
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to add the daemon.
3. Right-click the Daemons folder and select New Daemon. The New Daemon dialog box opens.

Figure 4-1 New Daemon

4. Enter a name for the new daemon.
5. Select one of the following:
   Create empty daemon with default values
   Copy properties from another daemon
   If you choose to copy the properties of an existing daemon, click Browse and select the daemon whose properties you want to copy.
6. Click Finish. The Daemon editor opens on the right of the workbench. This editor contains three tabs, which are described in the Editing a Daemon section below.


Notes:

A machine can have more than one daemon running at the same time, each on its own port.

You can add a new daemon configuration in offline design mode, in a design machine, and later drag-and-drop the daemon configuration to this machine. For more information, see Using an Offline Design Machine to Create Attunity Definitions.

The daemon editor may contain four additional tabs for workspace information. To display both the daemon and workspace configuration, right-click a workspace under the daemon and select Edit Workspace. For a description of these tabs, see Adding and Editing Workspaces.

Editing a Daemon
You can edit the information in the following Daemon editor tabs:

Control: In this tab you enter general details about the server, timeout parameters, and monitoring rules.

Logging: In this tab you enter the logging details, such as the log file format and location, and the parameters to log and trace.

Security: In this tab you enter the daemon's administrative privileges and access privileges.

To open the daemon editor
1. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with the daemon you want to edit.
2. Expand the Daemons folder.
3. Right-click the daemon you want to edit and select Open. The Daemon editor opens on the right of the workbench. Click each tab to edit the information. The tab fields are described below.

Note: Changes made to the daemon configuration are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager perspective. See Runtime Explorer Tasks.

Control
The Control tab for the daemon lets you define general daemon control properties. The following figure shows the Daemon control tab:


Figure 4-2 Daemon Control Tab

The following table describes the fields in this tab.

Table 4-1 Daemon Control Tab

General:

Daemon IP address: Enter the IP address of the machine(s) where the daemon is listening. If no IP address is entered, the daemon listens on all available IP addresses.

Daemon port: Enter the port where the daemon is listening. If no port is entered, the daemon listens on all available ports.

Port range for servers: Determines the range of ports available for this daemon when starting server processes. Enter the port range in the following fields:
  From: Enter the highest numbered port in the range
  To: Enter the lowest numbered port in the range

Automatically recover from failure: The daemon restarts automatically if it fails for any reason (any error that causes the daemon process to terminate, such as a lost network process, or the CPU running the daemon crashing when the backup daemon is defined on another CPU). All available and unconnected servers are terminated, and any connected servers are marked and terminated on release. The backup daemon also starts a backup for itself. The backup appends a new log file to the log of the original daemon, adding a line indicating that a backup daemon was started.

Default language: The language that the daemon supports. This setting is used when working with a client with a codepage different from the server codepage. See Basic NLS Settings.


Table 4-1 (Cont.) Daemon Control Tab

Maximum XML request size: The maximum number of bytes that the daemon handles for an XML document.

Maximum XML in memory: The maximum amount of space reserved for the XML in memory.

Timeout Parameters:

Call timeout: The timeout period for short calls for all daemons. A short call is a call that should be completed in a few seconds. For example, most calls to a database, such as DESCRIBE, should be completed in a few seconds, as opposed to a call like GETROWS, which can take a long time. In heavily loaded or otherwise slow systems, even short calls, such as calls to open a file, may take a significant amount of time. If a short call takes more than the specified time to complete, the connection is stopped. The default value for this parameter is 60 seconds. Values of less than 60 seconds are considered to be 60 seconds.
Note: Specifying the timeout in a workspace overrides the value set in this field for that workspace.

Connect timeout: The time the client waits for a daemon server to start. If the daemon server does not start within this period, the client is notified that the server did not respond. The value specified for this parameter serves as the default timeout for all the workspaces listed in the daemon configuration. The default value for this parameter is 60 seconds.
Notes:
  Entering the timeout in a workspace overrides the value set in this field for that workspace.
  Even if the XML source does not list this parameter in the workspace section, the workspace gets it using the default value. If you want to prevent a workspace from using the default value, you must enter a value of zero for this parameter in the workspace section.

Client idle timeout: The maximum amount of time any daemon client may be idle before the connection with the server is closed.
Note: Entering the timeout in a workspace overrides this setting for that workspace.
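For orientation, these control settings live in the daemon's XML definition (see the sample daemon configuration later in this chapter, which includes a control element). The attribute spellings in this sketch are assumptions for illustration only:

<daemon name="IRPCD">
  <control callTimeout="60"
           connectTimeout="60" />
</daemon>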

Logging
You can set up daemon logging for the following:

Daemon log files
Workspace server process log files

The daemon log records daemon operations, such as RPC calls and error messages. You can select the type of information you want included in the log file.
Note: This section does not apply to z/OS platforms, where logging information is written directly to the process.

The following figure shows the Logging tab:


Figure 4-3 The Daemon Logging Tab

In this tab, you define the daemon log file settings: the log file structure and the location where the log is saved. You can also define the data that is logged and traced in the file.

Note: Changes made to the daemon configuration are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager perspective. See Runtime Explorer Tasks.

The following table describes the fields in the Daemon Logging tab.

Table 4-2 Daemon Logging Tab

Logging options:

Daemon log file location: Enter how the daemon produces its log data. The full path must be specified. You can use wildcards as part of this file name to indicate specific information.


Table 4-2 (Cont.) Daemon Logging Tab

Server log filename format: Defines the name and location of the server log file. The field must specify the full path name. If no directory information is provided for the log file, it is located in the login directory of the account running an AIS workstation. You can enter the following wildcards in this field to generate the following information:
  %A: workspace name
  %D: date (yymmdd)
  %I: instance number of the given workspace server
  %L: server account's login directory
  %P: server's process ID
  %T: time (hhmmss)
  %U: server's account name (username)
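For example, the sample daemon configuration later in this chapter uses ServerLogfile=/users/nav/%A%U%P.log, which produces one log file per server process, named by workspace, account name, and process ID. A pattern such as the following (the path itself is illustrative) also adds the date and places the logs under the server account's login directory:

ServerLogfile=%L/nav_%A_%D_%P.log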

Daemon operations: Select this if you want to log the daemon operations.

Trace and debug options:

Daemon RPC function calls: Select this if you want to log all daemon RPC function calls.

Log ACX: Select this if you want to log requests and processes.

Extended RPC trace: Generates a verbose message in the server log file for each low-level RPC function called. This is useful for troubleshooting the server.

System trace: Generates system-specific tracing of various operations.

Timing: Generates a timestamp for every entry to the server log file.

Sockets: Generates a message in the server log file for each socket operation.

Trace information: Select this if you want to log low-level RPC operations.

No timeout: Disables the standard RPC timeouts, setting them to a long duration (approximately an hour) to facilitate debugging.

Call trace: Generates a message in the server log file for each RPC function called. This is useful for troubleshooting the server.

RPC trace: Enables debugging messages on the server.

Binary XML log level: Sets the binary XML log level. The options are: debug, none (the default), api, info.


Notes:

AIS supports a subset of UNIX commands that enables file manipulation using standard UNIX commands for the AS/400. Log files are stored under the UNIX file system section of the AS/400 machine. The following types of files are stored under the native OS/400 file system:
  Executables
  Service programs

When manipulating configuration files or log files, wrap the path/filename in single quotes to ensure that the slash (/) used in the UNIX file system syntax is handled correctly. (The slash is an OS/400 special character.)

Security
The Security tab for daemons is used to:

Grant administration rights for the daemon
Determine access to the computer

The following figure shows the Daemon Security tab:

Figure 4-4 The Daemon Security Tab


Note: Changes made to the daemon configuration are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager perspective. See Runtime Explorer Tasks.

The following table describes the fields in the Daemon Security tab:

Table 4-3 Daemon Security Tab

Administrators privileges: Identifies the users (accounts) allowed to perform administrative tasks (tasks that require administrative login).

All users: Enables all users to access the daemon and change the settings.

Selected users only: Identifies the names of users (accounts) and groups that can be administrators.1 When this is selected, add the names of users (accounts) and groups that can be workspace administrators. See Administering Selected User Only Lists for information on adding users and groups to the field. If no user is in the list, any user who has logged on to the daemon can administer the workspace. If a user is not specified, the account from which the daemon was started is the administrator account. The daemon does not require the user to log in to the account on the system, but to log in to the daemon using the account name and password.

Machine access: Manages access to the computer.

Allow anonymous login: Indicates whether workspaces allow anonymous logins (without user name/password entries). For the optimal level of security, do not select this option and define a username for the Daemon Administrators parameter. If unchecked, no workspace can have an anonymous client. If checked, a particular workspace can allow anonymous clients.

Cached passwords for performance: Enables login passwords to be cached. This enhances performance by reducing login times for future connections from the same client in a session.

Encryption methods: Indicates the encryption method used to send information across the network. The default is an asterisk (*), meaning that all methods are acceptable. If an encryption method is specified, it must be used. Currently, AIS supports the RC4 and DES3 protocols.

Domain name for authentication: Enter the authentication domain name.

1. The name is prefixed with @, to utilize the operating system GROUP feature.
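In the daemon's XML definition these settings correspond to the security element, as shown in the sample daemon configuration later in this chapter:

<security anonymousClientAllowed="false" administrator="sysadmin" />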

Administering Selected User Only Lists


You can give specific users or groups rights to access or administer workspaces. Add the users in the Daemon Security tab or the WS Security tab. In the WS Security tab, you can add users in both the Workspace access and Administration sections.

To add users or groups
1. In the Daemon Security tab, or the WS Security tab Administration or Workspace access section, select Selected users only.
2. Click Add user and enter the name of a valid user in the Add user dialog box. Make sure that the name entered matches a valid user account. To add groups to the list, click Add group and enter the name of a valid group in the Add group dialog box. Make sure that the name entered matches a valid group account.
3. Click OK. The name of the user or group is added to the field.

To rename a user or group
1. Select the user or group you want to rename and click Rename.
2. Change the name entered in the Rename user or Rename group dialog box to the name you want to use.
3. Click OK. The changes are entered in the field.

To remove a user or group
1. Select the user or group that you want to remove.
2. Click Remove. The user or group is removed from the field.

Reloading Daemon Configurations at Runtime


To make changes to daemon and workspace definitions, you must first reload the configuration and stop any existing servers that use the old configuration. You can reload daemon configurations from the Runtime Explorer view. For more information, see Runtime Explorer Tasks.

Editing Daemon Configurations


You can also edit the daemon configurations at runtime. This is done in the Attunity Studio Runtime Manager perspective. For information on how to select a different perspective, see Selecting a Perspective.

To edit the daemon configuration
1. In the Runtime Manager perspective Configuration view, right-click a daemon and select Edit Daemon Configuration. The Daemon editor opens on the right of the workbench.
2. Make changes in each of the tabs as you would if you were editing the daemon from the Design perspective. See Editing a Daemon above.

Checking the Daemon Status


You can check the status of a daemon at any time. You can receive information on the number of logins, active daemon clients, and other information. The information is presented in a dialog box in Attunity Studio or in an on-screen display produced by the NAV_UTIL utility.

Checking the Daemon Status with Attunity Studio


You can check the daemon status in Attunity Studio's Runtime Manager perspective.

To check a daemon's status
1. From the perspective toolbar at the top right of the workbench, select Runtime Manager.


2. Right-click the daemon and select Status. A dialog box with the daemon status is displayed. The daemon status contains the following information:
   Daemon platform
   IRPCD process ID
   IRPCD log file
   IRPCD configuration
   Logging detail
   Number of logins
   Number of active daemon clients
   Number of active client sessions
   Max. number of concurrent client sessions

Starting and Stopping Daemons


Daemons start automatically when the system starts. Make sure that the system is configured correctly so that the daemon starts up as expected. You can also start and stop a daemon manually. For more information, see Starting and Stopping Daemons.

Starting a Daemon in Attunity Studio


You cannot start a daemon manually in Attunity Studio. To start a daemon manually, use the NAV_UTIL utility. For more information, see Starting and Stopping Daemons.

Shutting Down a Daemon in Attunity Studio


You can shut down the daemon on any machine defined in Attunity Studio, from the Studio Runtime Manager perspective.

To shut down the daemon using Attunity Studio
1. From the perspective toolbar at the top right of the workbench, select Runtime Manager.
2. In the Runtime Explorer view, expand the Daemon folder.
3. Right-click the daemon you want to shut down and select Shutdown Daemon.

Sample Daemon Configuration


The following example shows a daemon configuration for a workspace managing orders (ACMEOrdersServer) and a workspace used for reporting (ACMEReportingServer):

<daemons>
  <daemon name="IRPCD">
    <workspaces>
      <workspace name="Navigator"
                 description="A Navigator Server"
                 workspaceAccount="orders"
                 startupScript="machine_dependent"
                 serverMode="reusable"
                 serverLogFile="%l.xxx%i"
                 reuseLimit="20"
                 nAvailableServers="10"
                 minNAvailableServers="4"
                 anonymousClientAllowed="false"
                 administrator="*" />
      <workspace name="ACMEReportingServer"
                 workspaceAccount="report"
                 startupScript="machine_dependent"
                 serverMode="singleClient"
                 serverLogFile="%l.xxx%i"
                 nAvailableServers="3"
                 minNAvailableServers="1"
                 anonymousClientAllowed="false"
                 administrator="*" />
    </workspaces>
    <control "ServerLogfile=/users/nav/%A%U%P.log" />
    <security anonymousClientAllowed="false" administrator="sysadmin" />
    <logging logFile="irpcd.log" logClientDomain="0" detail="errors" />
  </daemon>
</daemons>

The startupScript values in this example are machine dependent. For example, for z/OS the startup script might be startupScript=ATTSRVR.AB, for OpenVMS startupScript="dka0:[user.orders]NAV_SERVER.COM", and for Windows startupScript="nav_util svc".

Note: The daemon configuration is displayed in Attunity Studio by editing the daemon, in the Source tab.

Adding and Editing Workspaces


A Daemon must have one or more workspaces. Workspaces define the server processes and environment used for the communication between the client and server throughout a client request. A workspace definition is set in the Attunity Studio Design perspective Configuration view.

Adding a Workspace
When you define a new workspace, you can copy the values of an existing workspace on the same daemon or have AIS set its default values.

To add a new workspace
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with the daemon where you want to add the workspace.
3. Expand the Daemons folder.
4. Right-click the daemon where you want to add the workspace and select New Workspace.


Figure 4-5 New Daemon Workspace

Note: You can add a new workspace configuration in offline design mode, in a design machine, and later drag-and-drop the workspace configuration to this machine. For more information, see Using an Offline Design Machine to Create Attunity Definitions.

5. In the New Daemon Workspace window, enter the following:

   Name: The name used to identify the workspace. The workspace name is made up of letters, digits, underscores (_) or hyphens (-).

   Note: On machines running HP NonStop or z/OS, limit the name of a workspace to five characters so that the system environment file, workspaceDEF, does not exceed eight characters. Workspace names greater than five characters are truncated to five characters, and the default workspace, Navigator, will look for a system environment called NavigDEF.

   Description: A short description of the workspace.

6. From the Workspace data section, select one of the following:

   Create empty workspace with default values
   Copy properties from another workspace

   If you copy the properties from another workspace, the fields below the selection become active. You must indicate the workspace from which you want to copy the properties, in the form: <name of the workspace> in <name of the daemon where the workspace is located> on <name of the machine where the daemon is located>. Alternatively, you can click the browse button and browse to select the workspace you want to use; the above information is then added automatically.
7. Click Next to open the Select Scenario window. Select the type of applications the daemon works with from the following options:

   Application server using connection pooling
   Stand-alone applications that connect and disconnect frequently
   Applications that require long connections, such as reporting programs and bulk extractors
   Custom (configure manually). If you select this option, the Workspace editor opens.

8. Click Next to open the next window. Select one of the following. The options available depend on the scenario selected:

   The minimum number of server instances available at any time: This is the minimum number of connections that are available at any time. If the number of available connections drops below this number, the system creates new connections. (Available if you select Stand-alone applications that connect and disconnect frequently.)

   The maximum number of server instances available at any time: This is the maximum number of connections that are available at any time. If the number of connections used reaches this number, no additional server connections can be made. (Available if you select Stand-alone applications that connect and disconnect frequently.)

   The average number of expected concurrent connections: This lets the system know what the average load will be and helps to distribute the resources correctly. (Available if you select Application server using connection pooling, or Stand-alone applications that connect and disconnect frequently.)

   The maximum number of connections: This is the most connections that will be available. If the number of requests exceeds this number, an error message is displayed that informs the user to try again when a connection becomes available. (Available if you select Application server using connection pooling, or Stand-alone applications that connect and disconnect frequently.)

   How many connections you want to run concurrently: This sets the number of connections that run at the same time. (Available if you select Applications that require long connections, such as reporting programs and bulk extractors.)

9. Click Next and enter the wait times for the following parameters. If your system is not too overloaded, you can leave the default times.

   How long to wait for a new connection: Enter the amount of time (in seconds) to wait for a connection to be established before the system times out. For example, if you want a wait time of one minute, enter 60 (the default). If you enter 0, the time is unlimited.

   How long to wait for a response that is usually fast: Enter the time (in seconds) to wait for a response from the system before the system times out. For example, if you want to wait for one minute, enter 60. The default is 0, which indicates unlimited wait time.

10. Click Next and enter the workspace security information in this window. You can determine which users or groups can access the workspace you are defining. For more information, see Security.

11. Click Next to open the summary window. Review the summary to be sure that all the information entered is correct. If you need to make any changes, click Back to go back to previous steps and change the information.

12. Click Finish to close the wizard and add the new workspace to the Configuration view.

Editing a Workspace
After you add a workspace, you can make changes to the workspace's configuration. You can edit the information in the following Workspace editor tabs:

General: Specifies general information, including the server type, the command procedure used to start the workspace, the binding configuration associated with this workspace (which dictates the data sources and applications that can be accessed), the timeout parameters, and logging information.

Server Mode: Contains the workspace server information, including features that control the operation of the servers started up by the workspace and allocated to clients.

Security: Contains administration privileges, user access, ports available for access to the workspace, and workspace account specifications.

To edit a workspace
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to edit the workspace.
3. Expand the Daemons folder.
4. Expand the daemon with the workspace you want to edit.
5. Right-click the workspace you want to edit and select one of the following:

   Workspace Setup Wizard: Opens the wizard that was used to add a new workspace (see Adding a Workspace). Make any required changes to the wizard settings to change the workspace definition.

   Open: Opens the editor. The editor includes the information that was entered in the New Workspace wizard. Click the following tabs to edit the information: General, Server Mode, Security.

Note: The default daemon configuration supplied with AIS includes the default Navigator workspace. This workspace is automatically used if no workspace is selected.


General
You enter general information about the workspace operations in the General tab. This information includes the server type, the command procedure used to start the workspace, and the binding configuration associated with this workspace. The following figure shows the General tab:

Figure 4-6 The General Tab

Notes:

You can also change daemon settings using the Configuration view, by selecting a computer and scrolling the list to the required daemon. Right-click the daemon and select Edit Daemon.

Changes made to the daemon configuration are not implemented immediately. They are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager.

For z/OS logging, the default is to write the log entries to the job only.

The table below shows the General tab fields:


Table 4-4 General Tab

Info:

Workspace name: The name used to identify the workspace.
Note: The default configuration includes the default Navigator workspace. This workspace is automatically used if a workspace is not specified as part of the connection settings.

Description: A description of the workspace.

Startup script: The full path name of the script that starts the workspace server processes. The script specified here must always activate the nav_login procedure and then run the server program (svc). If you do not specify the directory, the startup procedure is taken from the directory where the daemon resides. AIS includes a default startup script, which is recommended. Enter the script name only, because the server is activated as a started task.

Server type: The workspace server type:
  IMS
  Events
  Java
  Native
  External

Workspace binding name: The name of a specific binding configuration on the server machine that you want to use with this workspace.
Notes:
  For HP NonStop, the name of the binding must be five characters or less.
  For z/OS, the name of the binding must be five characters or less, and the name must be surrounded by single quotes. If the high-level qualifier is not specified here, NAVROOT.DEF is assumed, where NAVROOT is the high-level qualifier specified when Attunity Server is installed.

Workspace database name: Enter the name of a virtual database that this workspace accesses, if applicable. A virtual database presents a limited view of the available data, because only selected tables from either one or more data sources are available, as if from a single data source. For more information, see Using a Virtual Database. If a value is entered in this field, only the virtual database can be accessed using this workspace.
Note: Entering a value in this field restricts access from a JDBC Client Interface, ODBC Client Interface or OLE DB (ADO) Client Interface during runtime.

Timeout parameters: The following properties define the time the client waits for the workspace server to start. If the workspace server does not start within this period, the client is notified that the server did not respond. Entering a timeout value for these properties overrides the default setting entered in the Control tab.

Client idle timeout: The maximum amount of time a workspace client can be idle before the connection with the server is closed.

Connect timeout: The time the client waits for a workspace server to start. If the workspace server does not start within this period, the client is notified that the server did not respond.


Table 4-4 (Cont.) General Tab

Call timeout: The timeout period for short calls for all workspaces. A short call is a call that should be completed in a few seconds. For example, most calls to a database, such as DESCRIBE, should be completed in a few seconds, as opposed to a call like GETROWS, which can take a long time. In heavily loaded or otherwise slow systems, even short calls, such as calls to open a file, may take a significant amount of time. If a short call takes more than the specified time to complete, the connection is stopped. The default value for this parameter is 60 seconds. Values of less than 60 seconds are considered to be 60 seconds.
Note: Specifying the timeout in a workspace overrides the value set in the Call timeout field for the daemon configuration.

Logging and Trace Options:

Specific log file format: Defines the name and location of the server log file if you want the data written to a file instead of SYSOUT for the server process. The parameter must specify the name and the high-level qualifier. You can enter the following wildcards in this field to generate the following information:
  %A: workspace name
  %D: date (yymmdd)
  %I: instance number of the given workspace server
  %L: server account's login directory
  %P: server's process ID
  %T: time (hhmmss)
  %U: server's account name (username)

Logging: Specifies the type of tracing. The following tracing options are available:
  No timeout: Select this to disable the standard RPC timeouts, setting them to a long duration (approximately an hour) to facilitate debugging.
  Call trace: Select this to generate a message in the server log file for each RPC function called. This is useful for troubleshooting the server.
  RPC trace: Select this to enable debugging messages on the server.
  Sockets: Select this to generate a message in the server log file for each socket operation. This is useful for troubleshooting client/server communication by providing a detailed trace of every client/server communication.
  Extended RPC trace: Select this to generate a more detailed message in the server log file for each low-level RPC function called. This is useful for troubleshooting the server.
  System trace: Select this to generate operating system-specific tracing.
  Timing: Select this to generate a timestamp for every entry to the server log file.

Query governing restrictions:

Max Number of Rows in a Table That Can Be Read: Select the maximum number of table rows that are read in a query. When the number of rows read from a table exceeds the number stated, the query returns an error.


Table 4-4 (Cont.) General Tab

Max Number of Rows Allowed in a Table Before Scan is Rejected: Select the maximum number of table rows that can be scanned. This parameter has different behavior for query optimization and execution:
  For query optimization, the value set is compared to the table cardinality. If the cardinality is greater than the value, the scan strategy is ignored as a possible strategy (unless it is the only available strategy).
  For query execution, a scan is limited to the value set. When the number of rows scanned exceeds the number entered, the query returns an error.

Server Mode
You enter the features that control the operation of the servers started up by the workspace and allocated to clients in the Server Mode tab. For example, you can configure the workspace to use connection pooling and to start up a number of servers for future use, prior to any client request, instead of starting each server when a request is received from a client.

Figure 4-7 The Server Mode Tab


Notes:

You can also change daemon settings using the Configuration view, by selecting a computer and scrolling the list to the required daemon. Right-click the daemon and select Edit Daemon.

Changes made to the daemon configuration are not implemented immediately. They are only implemented after the configuration is reloaded using the Reload Configuration option in the Runtime Manager.

The table below describes the fields in the Server Mode tab:

Table 4-5 Server Mode Tab

Workspace server mode: Specifies the type of new server processes that the daemon starts up. The daemon supports the following server modes:

singleClient: Each client receives a dedicated server process. The account in which a server process runs is determined either by the client login information or by the specific server workspace. This mode enables servers to run under a particular user account and isolates clients from each other, as each receives its own process. However, this server mode incurs a high overhead due to process startup times and can use a lot of server resources, as it requires as many server processes as concurrent clients.

multiClient: Clients share a server process and are processed serially. This mode has low overhead because the server processes are already initialized. However, because clients share the same process, they can impact one another, especially if they issue lengthy queries. The number of clients that share a process is determined by the Clients per server limit field.
Notes: This mode is not available on HP NonStop machines. Do not use this property when accessing a database that supports two-phase commit through XA.

multiThreaded (Windows only): Clients are allocated a dedicated thread in a shared server process. This mode has low overhead since the servers are already initialized. However, because clients share the same process, they may impact one another, especially if the underlying database is not multi-threaded. The number of multi-threaded clients that share a process is set in the Clients per server limit field (the maximum number of concurrent clients a server process for the current workspace accepts) in the Attunity Studio Design perspective configuration tab. This value is set in the daemon configuration settings maxNClientsPerServer parameter.
Notes: Multiple multi-client and multi-threaded servers can be started at the same time for optimal performance. Do not use this property when accessing a database that supports two-phase commit through XA.


Table 4-5 (Cont.) Server Mode Tab

reusable: An extension of single-client mode. Once the client processing finishes, the server process does not die and can be used by another client, reducing startup times and application startup overhead. This mode does not have the high overhead of single-client mode, because the servers are already initialized. However, this server mode can use a lot of server resources, as it requires as many server processes as concurrent clients. Do not use this mode with a database that supports two-phase commit through XA. In this case, define a new workspace for that data source, so that all the other data sources you are accessing use reusable servers. The other modes can be set so that the server processes are reusable. The number of times a process can be reused is controlled by the Reuse limit field value in Attunity Studio (the maximum number of times a server process can be reused, or how many clients it can serve before it finishes). Reuse of servers enhances performance because it eliminates the need to repeat initializations. However, reuse runs a risk of using more memory over time. The default for the Reuse limit field value is 0, which means that there is no limit.

Port range: Select the range for specific firewall ports through which you access the workspace. Determines the range of ports available for this workspace when starting server processes. Use this option when you want to control the port number, so that Attunity Connect can be accessed through a firewall. Enter the port range in the following fields:
  From: Enter the highest numbered port in the range
  To: Enter the lowest numbered port in the range

Use default port range: Select this to use the port range that is defined in the daemon. This is defined in the Port range for servers field in the daemon Control tab.

Maximum number of server processes: Enter the maximum number of server processes that can run at the same time.

Limit server reuse: Select this if you want to limit the number of times servers can be reused. If this is selected, the Reuse limit parameter is available: in the field next to the check box, enter the maximum number of times a server can be reused (the maximum number of clients a server process accepts before it finishes). A one-client server can be reused after its (single) client has disconnected. Reuse of servers enhances startup performance because it avoids the need to repeat initialization. The Reuse limit parameter is not available if Limit server reuse is not selected, or if the server mode value is singleClient.

Limit concurrent clients per server: Select this to limit the number of clients that a server can accept for the current workspace process. If this is not selected, the number of clients is unlimited.


Table 4-5 (Cont.) Server Mode Tab

If Limit concurrent clients per server is selected, in the field next to the check box, enter the maximum number of clients that a server process for the current workspace accepts. The default for this field is None, indicating that the number of clients for each server is unlimited. This field is available if the server mode value is multiClient or multiThreaded.

Specify Server Priority: Set the priority for servers. For example, a workspace for applications with online transaction processing can be assigned a higher priority than a workspace that requires only query processing. The lower the number, the higher the priority. For example, workspaces with a priority of 1 are given a higher priority than workspaces with a priority of 2.
Note: This is unavailable if Use default server priority is selected.

Use default server priority: Sets the priority to 0. There is no specific priority for this workspace. Clear this check box to set a priority in the Specify Server Priority parameter.

Keep when daemon ends: Select this if you want the servers for the workspace to remain active even after the daemon has been shut down. If this is not selected, all servers started by that daemon are killed when the daemon is shut down, even if they are active. If selected, it is the responsibility of the system operator or manager to ensure that the servers are eventually killed. This must be done at the system level.

Server Provisioning:

Number of prestarted servers in pool (Initial number of servers): The number of server processes that are prestarted for this workspace when the daemon starts up. When the number of available server processes drops lower than the value specified in the Minimum number field, the daemon again starts server processes until this number of available server processes is reached. The default for this field is 0.

Number of spare servers: The minimum number of server processes in the prestarted pool before the daemon resumes creating new server processes (up to the value specified in the Initial number of servers field). If this field is set to a value higher than the Initial number of servers field, the daemon uses the value specified in the Initial number of servers field. The default for this field is 0.

Prestarted server pool limit: The maximum number of available server processes. Once this number is reached, no new nonactive server processes are created for the particular workspace. For example, if a number of server processes are released at the same time, so that there are more available server processes than specified by this field, the additional server processes above this value are terminated. The default for this field is zero, meaning that there is no maximum.
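In the daemon XML, server mode and provisioning settings appear as workspace attributes. The fragment below is taken from the sample daemon configuration earlier in this chapter; reading the prestarted-pool fields as the nAvailableServers and minNAvailableServers attributes is an inference from that sample, not a confirmed attribute reference:

<workspace name="Navigator"
           serverMode="reusable"
           reuseLimit="20"
           nAvailableServers="10"
           minNAvailableServers="4" />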

Security
Configure the security level for a workspace in the Workspace editor Security tab. This lets you set the security options for the workspace only. To set security at the daemon level, see Security. The Security tab is used:

To grant administration rights for the workspace
To determine access to the workspace by a client

The following figure shows the Security tab:


Figure 4-8 The Security Tab

The following table describes the fields in this tab:

Table 4-6 Security Tab

Server Account: This section defines the users (accounts) allowed to access the workspace, firewall access ports, workspace account, and anonymous login permissions.

Use specific workspace account: Select this if you want to define the operating system account used for the workspace. If selected, enter the name of the workspace account in the workspace account field. If not selected, the account name that was provided by the client is used.

Allow anonymous clients to use this workspace: Select this if you want to allow this workspace to be invoked without authentication. If selected, enter the name of the workspace account in the Server account to use with anonymous clients field.


Table 4-6 (Cont.) Security Tab

Authorized Workspace users: Indicates which users have permission to use the workspace. Select one of the following:
  All users: Any user who has logged on to the daemon may use the workspace.
  Selected users only: Select this to allow only users (or accounts) with specific permission to use the workspace. When this is selected, add the names of users (or accounts) and groups that can use the workspace in the field below. See Administering Selected User Only Lists for information on adding users and groups to the field.
  Note: If no user is specified, any user who has logged on to the daemon may use the workspace.

Authorized Administrators: Identifies the users (accounts) with administrator privileges. Select one of the following:
  All users: Indicates that anyone can access the workspace and change the settings.
  Selected users only: Select this to allow only users (or accounts) with specific permission to be administrators. When this is selected, add the names of users (or accounts) and groups that can be workspace administrators. See Administering Selected User Only Lists for information on adding users and groups to the field. If no user is specified, any user who has logged on to the daemon may administer this workspace.

Note: You can also use the Allow Listing parameter. Select this if you want this workspace to appear in the list of workspaces. To set this parameter you must use the XML view: right-click the daemon with the workspace you are working with and select Open as XML, then find this parameter in the XML editor to change it. For more information, see Editing XML Files in Attunity Studio.

Selecting a Binding Configuration


After you add a workspace, you can change the binding that is associated with the workspace if you don't want to use the default NAV binding. If you want to use a binding configuration other than the default (NAV) configuration, select the required binding configuration from the Workspace binding name list in the Workspace editor General tab.

Notes:

You can have more than one workspace, each with a different binding configuration.

If you want to use a binding other than the default binding, on UNIX and OpenVMS platforms you can select the binding as part of the startup script for the workspace.


Disabling a Workspace
Workspaces can be disabled. When you disable a workspace, server processes are not started, and a client requesting the disabled workspace receives an error.

To disable a workspace in Attunity Studio
1. In the Design perspective Configuration view, right-click the workspace you want to disable.
2. Select Disable.

Setting Workspace Authorization


Once a machine is defined in Attunity Studio, you can authorize which users can view and edit rights for specific workspaces.

To assign authorization rights to a workspace
1. Right-click the workspace in the Attunity Studio Design perspective Configuration view and select Set Authorization.
2. Enter the User name and Password for the user with authorization rights for this workspace. For more information about setting rights and security privileges, see Security.


5
Managing Metadata
This chapter includes the following sections:

- Data Source Metadata Overview
- Importing Metadata
- Managing Metadata
- Using Attunity Metadata with AIS Supported Data Sources
- Procedure Metadata Overview
- Importing Procedure Metadata Using the Import Wizard
- Procedure Metadata Statements
- ADD Supported Data Types
- ADD Syntax

Data Source Metadata Overview


Metadata defines the structure of the data and where it is located. Attunity Connect relies on the native metadata of the data source when connecting to relational data sources (such as Informix, Oracle, and Sybase) and some file-system data sources (such as Adabas, using Predict). For other data sources, whose metadata is not readable by Attunity Connect or which do not have metadata, Attunity Connect requires its own metadata. Attunity metadata is stored in a proprietary data dictionary called the Attunity Data Dictionary (ADD). You need Attunity metadata for each record (referred to as a table) in the data source. The Attunity metadata definition for a record is viewable and updateable via the Design perspective Metadata tab of Attunity Studio or via an XML file. The Attunity metadata for each data source is stored in a repository for the data source, on the machine where the data resides. The following require Attunity metadata:

- CISAM/DISAM Data Source
- DBMS Data Source (OpenVMS Only)
- Enscribe Data Source (HP NonStop Only)
- Flat File Data Source
- IMS/DB Data Sources (z/OS Only)
- RMS Data Source (OpenVMS Only)

Managing Metadata 5-1

- Text Delimited File Data Source
- VSAM Data Source (z/OS) (VSAM Under CICS and VSAM Drivers (z/OS Only))
- OLEDB-FS (Flat File System) Data Source

Attunity metadata is also used for an Adabas C Data Source if the Adabas Predict metadata is not available (or not efficient). Metadata can be imported, saved, and managed in the Attunity Studio Design perspective Metadata tab.

Importing Metadata
You can use the Attunity Studio Import wizards or standalone import utilities to generate metadata. If an import wizard or standalone utility is not available, you can create the metadata manually.

Importing Metadata Using an Attunity Studio Import Wizard


The following data source drivers have wizards in Attunity Studio for importing Metadata.

- Enscribe Data Source (HP NonStop Only)
- IMS/DB Data Sources
- RMS Data Source (OpenVMS Only)
- VSAM Batch CDC (z/OS Platforms) (VSAM Under CICS and VSAM Drivers (z/OS Only))

For other data source drivers, if a COBOL copybook is available, metadata can be generated from the COBOL in Attunity Studio. Otherwise, the metadata has to be manually defined in the Attunity Studio Design perspective Metadata tab. If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks, with different filter settings (such as whether the first 6 columns are ignored or not), you import the metadata from copybooks with the same settings and later import the metadata from the other copybooks. You can save an import procedure and use it again.
Note:

Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not made to the Attunity metadata.

Each data source has different import requirements. Some have a different number of steps than others and require different information. Each wizard guides you through the import. The input files for each import (such as COBOL copybooks) are needed during the import to define the input and output structures used by the application adapter. These files are sent to the machine running Attunity Studio using the FTP protocol, as part of the import procedure. For additional information on importing metadata, refer to the specific data source.


Importing Metadata Using a Standalone Utility


The following third-party standalone utilities are available:

- Adabas DDM import (DDM_ADL): Produces Attunity metadata from non-Predict metadata (nsd files).
- BASIC mapfiles import (BAS_ADL): Produces Attunity metadata from BASIC mapfiles.
- DBMS Import (DBMS_ADL): Produces Attunity metadata from a DBMS database.
- HP NonStop Enscribe Import (ADDIMP): Produces Attunity metadata for HP NonStop Enscribe data from a DDL subvolume and/or COBOL copybooks. You can also generate the metadata from COBOL copybooks in Attunity Studio.
- HP NonStop Enscribe Import (TALIMP): Produces Attunity metadata for HP NonStop Enscribe data sources from TAL datafiles and a DDL subvolume.
- RMS CDD Import (CDD_ADL): Extracts the information stored in an RMS CDD directory into ADD metadata. You can also generate the metadata from COBOL copybooks in Attunity Studio.

Managing Metadata
You manage metadata in Attunity Studio. Click the Metadata tab in the Design perspective to view and modify Attunity metadata for data sources.

To view and modify metadata for data sources
1. In the Design perspective Configuration view, right-click the data source for which you want to manage the metadata.
2. Select Edit metadata from the shortcut menu. The Metadata tab opens with the selected data source displayed in the tree.

   Note: You can also open the Metadata tab and right-click the Data sources folder in the Metadata view to add the data source that you want to import metadata for to the tree.

3. Right-click the resource (such as the data source table) in the Metadata view and select Edit.

Data source tables are edited using the following tabs, which are at the bottom of the screen:

- General Tab: Defines general information about the table, such as the table name, the way the table is organized, and the location of the table.
- Columns Tab: Specifies the table columns and their properties, such as the column data type, size, and scale.
- Indexes Tab: Enables you to specify the indexes of a table. The indexes are described by the order of the rows they retrieve, the data source commands used, and the index type.
- Statistics Tab: Enables you to specify statistics for the table, including the number of rows and blocks of the table.
- Source Tab: Displays the metadata in its XML representation.


Note: Attunity Connect provides a relational model for all data sources defined to it. Thus, relational terminology is used, even when referring to non-relational data sources. For example, the metadata for an RMS record is referred to as the metadata for an RMS table.

Using Attunity Metadata with AIS Supported Data Sources


The native metadata of data sources whose metadata Attunity Connect can read directly (such as relational data sources like Oracle) can also be viewed in the Design perspective Metadata tab of Attunity Studio. However, this metadata cannot be edited. Some native metadata does not include information that Attunity Connect requires to fully optimize query execution (for example, the number of rows and blocks in an Rdb table). In this sort of situation you can extend the native metadata by adding these extensions in Attunity metadata.

Extended Native Data Source Metadata


When native metadata lacks some features provided by Attunity metadata, performance can be improved by extending the native metadata with ADD metadata. For example, you can use the Attunity metadata to specify the number of rows and blocks in an Rdb table. When accessing the data, Attunity Connect uses both the native metadata and this extended metadata. Extended metadata is managed in Attunity Studio.

To assign extended metadata for a data source
1. Display the metadata for the data source in the Design perspective Metadata tab of Attunity Studio.
2. Change the relevant values for the table to be extended in the Statistics tab. The table symbol in the tree is marked with an asterisk to show that the metadata has been extended.

Note:
- The information in the other tabs is for reference only and cannot be edited.
- The noExtendedMetadata property in the data source definition in the binding configuration is set to false.

You can sometimes improve performance by using Attunity metadata instead of the native metadata. In this case, you can export a snapshot of the native metadata to Attunity Connect and use this local copy of the native metadata when accessing the data source.

Native Metadata Caching


When access to a data source via its native metadata is slow but the metadata is static, performance can be improved by creating a local copy (snapshot) of the data source metadata and then running queries using this metadata instead of the data source metadata.


Examples of when this is beneficial include when the native metadata is not up to date or when information, such as statistics, is not available. You can see a snapshot of the metadata in Attunity Studio.

To make a copy of data source metadata
1. Display the metadata for the data source in the Design perspective Metadata tab of Attunity Studio.
2. Right-click the data source and select Manage Cached Metadata from the popup menu.
3. Select the tables that you want to use a local copy of and move them to the right pane.
4. Click Finish. The tables are displayed under the data source.

Note:
- The table symbol changes from the relational data source symbol to a data source symbol that requires Attunity metadata.
- The localCopy property in the data source definition in the binding configuration is set to true.

Only the tables that have cached metadata are displayed in the tree. To revert to using non-cached metadata, you can either right-click individual tables and choose Delete Cached Table from the popup menu or, for all the tables, right-click the data source and choose Set Metadata followed by Native Metadata from the popup menu.
Note: If the native metadata changes, using a snapshot of the native metadata is not recommended.

Procedure Metadata Overview


Procedure Metadata defines the input and output structures that are passed to and returned from the procedure. The metadata can be generated using an import wizard in Attunity Studio for the CICS Procedure Data Source and the Procedure Data Source (Application Connector) based on COBOL copybooks. For the Natural/CICS Procedure Data Source (z/OS), the metadata has to be defined in Attunity Studio Design perspective Metadata view. Attunity metadata is stored in a proprietary data dictionary called the Attunity Data Dictionary (ADD). You need Attunity metadata for each function in the procedure. The Attunity metadata definition for a procedure is viewable and updateable via the Design perspective Metadata tab of Attunity Studio or via an XML file. The Attunity metadata for each procedure is stored in a repository for the procedure, on the machine where the procedure resides.

Importing Procedure Metadata Using the Import Wizard


Import wizards are available for the following:

- CICS Procedure Data Source
- Procedure Data Source (Application Connector)


For other procedures, the metadata is created manually, as described in Manually Creating Procedure Metadata. If COBOL copybooks describing the procedure input and output structures are available, you can import the Metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks, with different filter settings (such as whether the first 6 columns are ignored or not), you import the metadata from copybooks with the same settings and later import the metadata from the other copybooks. You can also save an import procedure for reuse.
Note:

Imported metadata is independent of its origin. Changes made to the source metadata after the import are not made to the Attunity metadata.

The following information is required during the import:

- The input files for the import (such as COBOL copybooks) that define the input and output structures used by the application adapter. These files are copied to the machine running Attunity Studio as part of the import procedure.
- The names of the applications (such as the IMS/TM transaction or CICS program) to be executed via the procedure.

Procedure Metadata Statements


Metadata can be viewed and modified in the Attunity Studio Design perspective Metadata view. For information on how to manage metadata in Attunity Studio, see Working with Procedure Metadata. You build the metadata with various statements. This section describes the statements used for creating and managing procedure metadata. The procedure syntax is composed of the following statements:

- The <procedure> Statement
- The <parameters> Statement
- The <dbCommand> Statement
- The <fields> Statement

The <procedure> Statement


A procedure definition begins with a <procedure> statement. This statement consists of the following components:

- An attribute list
- Input parameters for the procedure
- A <fields> statement, which includes the field list

Syntax
<procedure name="proc_name" attribute="value" ...> <dbCommand>...</dbCommand>

5-6 AIS User Guide and Reference

<fields> <field name="field_name" attribute="value" .../> ... </fields> <parameters> <field name="param" attribute="value" ... /> ... </parameters> </procedure>

Where proc_name is the procedure name, up to a maximum of 40 characters.


Notes:

- The proc_name entry must conform to standard ANSI 92 SQL naming conventions.
- You must include a <fields> statement.
- Use a <parameters> statement to specify input parameters.

Example 5-1 <procedure> statement

<procedure name="math_simple" filename="prc_samples">
  <dbCommand>LANGUAGE=C</dbCommand>
  <fields>
    <field name="sum1" datatype="int4">
      <dbCommand>order=1</dbCommand>
    </field>
    <field name="subtract" datatype="int4">
      <dbCommand>order=2</dbCommand>
    </field>
    <field name="multiply" datatype="int4">
      <dbCommand>order=3</dbCommand>
    </field>
    <field name="divide" datatype="int4">
      <dbCommand>order=4</dbCommand>
    </field>
  </fields>
  <parameters>
    <field name="oper1" datatype="int4">
      <dbCommand>mechanism=value; order=5</dbCommand>
    </field>
    <field name="oper2" datatype="int4">
      <dbCommand>mechanism=value; order=5</dbCommand>
    </field>
  </parameters>
</procedure>

<procedure> Attributes
The attributes are listed in the following table:
Table 5-1 <procedure> Attributes

alias
  Syntax: alias="name"
  Replaces the procedure name with a logical procedure name. Names greater than 39 characters are truncated from the left.

description
  Syntax: description="optional_user_supplied_description"
  Specifies an optional textual description.

filename
  Syntax: filename="full_filename", where full_filename includes the full path to the file.
  Specifies the full name and location of the file.

name
  Syntax: name="name"
  Specifies the name of the procedure. This attribute must be specified.
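For example, a minimal sketch combining several of these attributes on a single <procedure> statement; the alias and description values here are illustrative only, not taken from the product samples:

<procedure name="math_simple"
           alias="MATH"
           description="Simple arithmetic sample procedure"
           filename="prc_samples">
  ...
</procedure>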

The <parameters> Statement


The <parameters> statement specifies a list of input parameters for a procedure, and each parameter is defined by a <field> statement.

Syntax
<parameters>
  <field name="param" attribute="value" ...>
    <dbCommand>...</dbCommand>
  </field>
  <field name="param" attribute="value" .../>
  ...
</parameters>

Example 5-2 <parameters> statement

<parameters>
  <field name="oper1" datatype="int4">
    <dbCommand>mechanism=value; order=5</dbCommand>
  </field>
  <field name="oper2" datatype="int4">
    <dbCommand>mechanism=value; order=5</dbCommand>
  </field>
</parameters>

The <dbCommand> Statement


The <dbCommand> statement is used to specify procedure-specific commands for the metadata.

Syntax
<dbCommand>text</dbCommand>

Example 5-3 <dbCommand> statement

<dbCommand>mechanism=value; order=5</dbCommand>


The <fields> Statement


The <fields> statement is used to list the field descriptions of fields in a table or procedure. A field description can be one of the following:

- The <field> Statement
- The <group> Statement
- The <variant> Statement

Syntax
<fields>
  <field name="field_name" attribute="value" ...>
    <dbCommand>...</dbCommand>
  </field>
  <group name="field_name" attribute="value" ...>
    <fields>
      <field name="field_name" attribute="value" ... />
    </fields>
  </group>
  <variant name="field_name" attribute="value" ...>
    <case name="field_name" attribute="value" ...>
      <fields>
        <field name="field_name" attribute="value" ... />
      </fields>
    </case>
    ...
  </variant>
  ...
</fields>

For details of the specific syntax requirements for a procedure, see the specific procedure.

The <field> Statement


The <field> statement defines the characteristics of a field that is not made up of other fields.

Syntax
<field name="field_name" attribute="value" ...> <dbCommand>...</dbCommand> </field>

Example 5-4 <field> statement

The following code defines one field (N_NAME) and its two attributes (data type and size):
<field name="n_name" datatype="string" size="25" />


<field> Attributes
The attributes are listed in the following table:
Table 5-2 <field> Attributes

datatype
  Syntax: datatype="datatype"
  Specifies the data type of a field. For the supported data types, see ADD Supported Data Types.

scale
  Syntax: scale="n"
  Specifies the number of characters or digits. For example:
  <field name="SALARY" datatype="numstr_s" size="10" scale="2" />

size
  Syntax: size="n", where n is the number of characters or digits. The number must be greater than 0.
  Specifies the size of the field.

name
  Syntax: name="name"
  Specifies the name of the field. This attribute must be specified. For example:
  <field name="EMP_ID" datatype="int4" />

The <group> Statement


The <group> statement defines the characteristics of a field that is made up of other fields, such as an array in a record.

Syntax
group name="field_name" attribute="value" ...> <dbCommand>...</dbCommand> <fields> <field name="field_name" attribute="value" ... /> </fields> </group>

A <group> statement is handled as an array. Each of the array elements contains all of the subordinate fields defined in the <group> statement. The size of the array is the size of a single array element multiplied by the dimension.
Example 5-5 <group> statement

<procedure name='math_all_structs' filename='prc_samples'>
  <dbCommand>LANGUAGE=C</dbCommand>
  <fields>
    <group name='MATH_STRUCT'>
      <dbCommand>ORDER=1</dbCommand>
      <fields>
        <field name='SUM1' datatype='int4'/>
        <field name='SUBTRACT' datatype='int4'/>
        <field name='MULTIPLY' datatype='int4'/>
        <field name='DIVIDE' datatype='int4'/>
      </fields>
    </group>
  </fields>
  <parameters>
    <group name='MATH_IN_STRUCT'>
      <dbCommand>ORDER=2</dbCommand>
      <fields>
        <field name='OPER1' datatype='int4'/>
        <field name='OPER2' datatype='int4'/>
      </fields>
    </group>
  </parameters>
</procedure>

<group> Attributes
The attribute is listed in the following table:
Table 5-3 <group> Attributes

name
  Syntax: name="name"
  Specifies the name of the field. This attribute must be specified. For example:
  <group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4"
         counterName="CHILD_COUNTER">
    <fields>...</fields>
  </group>

The <variant> Statement


Variants are similar to redefine constructs in COBOL and to union in C. The basic concept is that the same physical area in the buffer is mapped several times. The mappings can be of the following:

- Different nuances of the same data.
- Different usage of the same physical area in the buffer.

This section describes the common use cases of variants and how they are represented in the variant syntax. There are two types of variants:

- Variant without selector
- Variant with selector

Variant without selector


Variants without selectors are used to define different cases of the variants and represent different ways of looking at the same data. The use of this type of variant is discouraged. It is recommended to pick the case that is most convenient to work with when consuming the data and remove the other cases.
Example 5-6 COBOL

20 PARTNUM PIC X(10).
20 PARTCD REDEFINES PARTNUM.
   30 DEPTCODE PIC X(2).
   30 SUPPLYCODE PIC X(3).
   30 PARTCODE PIC X(5).


In this example one case includes a PARTNUM field of 10 characters while the other case, PARTCD, maps the same part number to a 2 character DEPTCODE, a 3 character SUPPLYCODE, and a 5 character PARTCODE. The two variant cases are just different ways of viewing the same item of data. In Attunity Studio, the Import Manipulation screen enables you to replace any variant with the fields of a single case. The metadata generated following a metadata import appears as follows:
<variant name="VAR_0"> <case name="UNNAMED_CASE_1"> <fields> <field name="PARTNUM" datatype="string" size="10"/> </fields> </case> <case name="PARTCD"> <fields> <field name="DEPTCODE" datatype="string" size="2"/> <field name="SUPPLYCODE" datatype="string" size="3"/> <field name="PARTCODE" datatype="string" size="5"/> </fields> </case> </variant>

Variant with selector


Different cases of the variant represent different ways in which to use the physical area in the buffer. For every record instance there is only one case that is valid; the others are irrelevant. Additional fields in the buffer help determine which variant case is valid for the current record.
Example 5-7 COBOL

10 ORDER.
   20 RECTYPE PIC X.
      88 ORD-HEADER VALUE 'H'.
      88 ORD-DETAILS VALUE 'D'.
   20 ORDER-HEADER.
      30 ORDER-DATE PIC 9(8).
      30 CUST-ID PIC 9(9).
   20 ORDER-DETAILS REDEFINES ORDER-HEADER.
      30 PART-NO PIC 9(9).
      30 QUANTITY PIC 9(9) COMP.

In this example each of the records is either an order header record or an order item record, depending on the value of the RECTYPE field. This construct can be mapped as a variant with a selector, where the RECTYPE field is the selector. During a metadata import from COBOL, all variants are assumed to be variants without selectors. The COBOL syntax doesn't distinguish between different types of variants or REDEFINEs. In COBOL, only the program logic includes this distinction.


This is true unless a selector is specified in the Import Manipulation screen. Refer to the Attunity Studio Guide and Reference for additional information.

ADD Syntax
The following is the ADD syntax to use for setting variants:
<variant name="variant_name"> <case name="case_name" value="val" ...> <fields> <field name="field_name" ... /> </fields> </case> <case ... </case> </variant>

The metadata generated by Attunity Studio following a metadata import appears as follows:
<filed name="RECTYPE" datatype="string" size="1"/> <variant name="VAR_1" selector="RECTYPE"> <case name="ORDER_HEADER" value="H"> <fields> <field name="ORDER_DATE" datatype="numstr_u" size="8"/> <field name="CUST_ID" datatype="numstr_u" size="9" </fields> </case <case name="ORDER_DETAILS" value="D" <fields <field name="PART_NO" datatype="numstr_u" size "9"/> <field name="QUANTITY" datatype="uint4" size="4"/> </fields </case> </variant>

Usage Notes

- From an SQL consumer, none of the <variant> or <case> fields are visible. Only the simple fields are accessible.
- For a variant with a selector, all fields are reported as nullable regardless of their backend definition. For every record instance, only the relevant case will show values; the rest of the cases will contain NULLs.
- When updating or inserting either type of variant, it is up to the user to ensure that only a single case is given values. Attempting to set fields from two or more cases will result in unpredictable behavior.

Resolving Variants in Attunity Studio


In Attunity Studio, variants are resolved in the data source metadata Import Manipulation screen.

To resolve variants using the Import Manipulation screen
1. In the Validation tab, double-click the variant to resolve. A screen opens, listing the variants in the COBOL copybook.

   Note: You can expand the variants to expose the variant cases.

2. Right-click the required variant and select Structures, and then select Mark selector. The Select Selector screen opens.
3. Select the selector for the variant from the list of available selectors in the COBOL copybook, and then click OK.
4. Repeat for all the required variants, and then click OK.

The <case> Statement


The <case> statement specifies an alternative definition that maps to the same storage area. This statement can include the following:

- Syntax
- <case> Attributes

Syntax
<case name="field_name" attribute="value" ...> <fields> <field name="field_name" attribute="value" ... /> </fields> </case>

Example 5-8 <case> statement

<variant name='VAR_DIVIDE_DATATYPE' selector='DIVIDE_DATATYPE'>
  <dbCommand>ORDER=5</dbCommand>
  <case name='CASE_1_1' value='L'>
    <fields>
      <field name='DIVIDE_LONG' datatype='int4'/>
    </fields>
  </case>
  <case name='CASE_1_2' value='F'>
    <fields>
      <field name='DIVIDE_FLOAT' datatype='single'/>
    </fields>
  </case>
  <case name='CASE_1_3' value='D'>
    <fields>
      <field name='DIVIDE_DOUBLE' datatype='double'/>
    </fields>
  </case>
</variant>

<case> Attributes
The attributes are listed in the following table:

Table 5-4 <case> Attributes

name
  Syntax: name="name"
  Specifies the name of the case.
  Note: When a selector attribute is not specified in the <variant> statement, a name attribute must be specified here.

value
  Syntax: value="value"
  Specifies the value for a variant definition that is used in the current record (row) for the field specified in the <variant> statement via the selector attribute.
  Note: When a selector attribute is specified in the <variant> statement, a value attribute must be specified here.

ADD Supported Data Types


The following table lists data types supported by Attunity Connect ADD:
Note:

Platform and data source-dependent data types run only on their respective platforms.

Table 5-5 ADD Supported Data Types

Each entry lists the ADD type followed by its OLE DB and ODBC type mappings in parentheses.

ada_d_time (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP): ADABAS date format.
ada_decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): z/OS ADABAS packed decimal.
ada_numstr_s (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): z/OS ADABAS numeric string.
ada_time (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP): ADABAS timestamp format.
apt_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date packed into a 4 character string. Format: DMYY. Example: 23-July-1998 is represented by four bytes, 19, 98, 7, and 23.
apt_time (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP): ADD date-time format.
based_date (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Customize this data type by defining an environment variable (UNIX) or logical (OpenVMS) having the form NVDT_BASEDDATE=yyyymmdd[/dttype[/dtlen[/multiplier]]], where yyyymmdd is the start date, dttype is the name of the data type (int4 is the default), dtlen is the number of digits in the data type (if not atomic), and multiplier is the number of increments per day. Example: 19700101/int4//24 specifies the number of hours since Jan 1 1970.
binary (OLE DB: DBTYPE_BYTES; ODBC: SQL_BINARY): Unknown data type, string type; length must be specified.
bit (OLE DB: DBTYPE_I1; ODBC: SQL_TINYINT): A single bit within a byte. Size: 1 byte. Format: datatype="bit" onBit="n", where n specifies which bit (within a byte) the field uses. If more than one bit is defined, the additional bits may be defined sequentially within the same byte (or bytes, if the number of bits requires this much space).
bits (OLE DB: DBTYPE_I4; ODBC: SQL_TINYINT): A signed number of bits within a byte. Size: 1 bit to 1 byte. Format: <field name="name" datatype="bits" onBit="n" size="m"/>, where n specifies which bit (within a byte) to start from and m is the number of bits. If n is not specified, then n defaults to 1 for the first occurrence of the field and is contiguous thereafter. The maximum number of bits you can map is 32.
cstring (OLE DB: DBTYPE_STR; ODBC: SQL_VARCHAR): A null-terminated string of alphanumeric characters; maximum length must be specified. An extra byte is required for the null flag.
cv_datetime (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): A CorVision date-time format.
date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): ODBC date format.
date6 (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date in a string having the form YYMMDD.
date8 (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date in a string having the form YYYYMMDD.
db400_date (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): DB2 UDB date format (OS/400 machine).
db400_datetime (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): DB2 UDB date-time format (OS/400 machine).
db400_time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): DB2 UDB time format (OS/400 machine).
decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Packed decimal. Maximum number of digits: 31. Maximum fractions: 11. Length: int(number of digits/2) + 1 bytes.
dfloat (OLE DB: DBTYPE_R8; ODBC: SQL_DOUBLE): Double floating point number (D_FLOAT). Size: 8 bytes. Range: 0.29E-38 to 1.7E38. Precision: 16 digits.
double (OLE DB: DBTYPE_R8; ODBC: SQL_DOUBLE): Double floating point number (G_FLOAT). Size: 8 bytes. Range: 0.56E-308 to 0.90E308. Precision: 15 digits.
filler (OLE DB: DBTYPE_BYTES; ODBC: SQL_BINARY): Allocation for future use, string type; length must be specified.
fixed_cstring (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): A fixed null-terminated string of numeric characters; length must be specified. An extra byte is required for the null flag.
ieee_double (OLE DB: DBTYPE_R8; ODBC: SQL_DOUBLE): IEEE double floating point number.
ieee_float (OLE DB: DBTYPE_R4; ODBC: SQL_REAL): IEEE single floating point number.
image (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Binary image (BLOB).
int_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date in a four byte integer. Format: YYMMDD or YYYYMMDD. Example: 23-Jul-1998 has the form 980723 or 19980723.
int1 (OLE DB: DBTYPE_I4; ODBC: SQL_TINYINT): Signed byte integer. Size: 1 byte. Range: -128 to +127.
int2 (OLE DB: DBTYPE_I2; ODBC: SQL_SMALLINT): Signed word integer. Size: 2 bytes. Range: -32768 to +32767.
int3 (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Signed integer. Size: 3 bytes.
int4 (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Signed long integer. Size: 4 bytes. Range: -2147483648 to +2147483647.
int6 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_INTEGER): Signed integer. Size: 6 bytes.
int8 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed quadword. Size: 8 bytes. Range: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807.
isam_decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): CISAM and DISAM packed decimal.
jdate (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Julian date. Size: 2 bytes. Bits 0-6: (non-century) year; bits 7-15: day of the year.
logical (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Signed long integer. Values: 1 or true (not case sensitive) are inserted as 1. Any other value is false, and inserted as 0.
magic_pc_date (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Magic PC date format.
magic_pc_time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Magic PC time format.
mvs_date (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): z/OS date format.
mvs_datetime (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): z/OS date-time format.
mvs_time (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): z/OS time format.
nls_string (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): String based on language and driven by table.
numeric_cstring (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): A null-terminated string of numeric characters; maximum length must be specified. An extra byte is required for the null flag.
numstr_bdn (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Sign is the first character of the string. Maximum number of digits: 31. Maximum fractions: 11. Note: the number of fractions includes the decimal point.
numstr_lse (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): HP NonStop signed numeric string. A left overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11.
numstr_nl (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Sign is the first character of the string. Maximum number of digits: 31. Maximum fractions: 11.
numstr_nlo (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. A left overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11.
numstr_nr (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Sign is the last character of the string. Maximum number of digits: 31. Maximum fractions: 11.
numstr_s (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. A right overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11. The number must be right justified (for example, " 1234N" is -12345). The number can be left padded by either spaces or zeros. If a scale is provided, it is a fixed positional scale; no decimal point is provided in the data (for example, a value of "1234E" with scale 2 is interpreted as "123.45").
numstr_tse (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): HP NonStop signed numeric string. A right overpunched sign is implemented. Maximum number of digits: 31. Maximum fractions: 11.
numstr_u (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Unsigned numeric string. Maximum number of digits: 31. Maximum fractions: 11.
numstr_zoned (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed numeric string. Maximum number of digits: 31. Maximum fractions: 11.
ole_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): OLE DB date format.
ole_decimal (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): OLE DB packed decimal.
ole_numeric (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): OLE DB numeric string.
ora_time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Oracle time format.
oracle_time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): Oracle time format.
padded_str_date (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Padded date format. Not null terminated.
padded_str_datetime (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Padded date-time format. Not null terminated.
padded_str_time (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Padded time format. Not null terminated.
phdate (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Size: 2 bytes. Bits 0-6: (non-century) year; bits 7-10: number of month; bits 11-15: day of month.
scaled_int1 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed byte integer. Size: 1 byte. Range: -128 to +127. Maximum: 3.
scaled_int2 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed word integer. Size: 2 bytes. Range: -32768 to +32767. Maximum: 5.
scaled_int3 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed integer. Size: 3 bytes.
scaled_int4 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed long integer. Size: 4 bytes. Range: -2147483648 to +2147483647. Maximum: 10.
scaled_int6 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed integer. Size: 6 bytes.
scaled_int8 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Signed quadword. Size: 8 bytes. Range: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807. Maximum: 19.
scaled_uint1 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_DOUBLE): Unsigned byte integer. Size: 1 byte. Range: 0 to 254. Maximum: 3.
scaled_uint2 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC(5)): Unsigned word integer. Size: 2 bytes. Range: 0 to 65534. Maximum: 5.
scaled_uint4 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC(10)): Unsigned long integer. Size: 4 bytes. Range: 0 to 4,294,967,294. Maximum: 10.
single (OLE DB: DBTYPE_R4; ODBC: SQL_REAL): Single floating point number (F_FLOAT). Size: 4 bytes. Range: 0.29E-38 to 1.7E38. Precision: 6 digits.
str_date (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Atomic date string. Size: 10 characters. Format: YYYY-MM-DD.
str_datetime (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Atomic date-time string. Size: 23 characters. Format: YYYY-MM-DD HH:MM:SS.FFF.
str_time (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Atomic time string. Size: 8 characters. Format: HH:MM:SS.
string (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): String of alphanumeric characters; length must be specified.
tandem_date (OLE DB: DBTYPE_DBDATE; ODBC: SQL_DATE): Date in a string. Format: YYYY-MM-DD.
tandem_datetime (OLE DB: DBTYPE_TIMESTAMP; ODBC: SQL_TIMESTAMP): Date and time in a string. Format: YYYY-MM-DD:HH:MM:SS.FFFFFF.
tandem_time (OLE DB: DBTYPE_DBTIME; ODBC: SQL_TIME): Time in a string. Format: HH:MM:SS.
text (OLE DB: DBTYPE_STR; ODBC: SQL_CHAR): Text data (BLOB).
time (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): ODBC time format.
timestamp (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_TIMESTAMP): ODBC date-time format.
ubits (OLE DB: DBTYPE_I4; ODBC: SQL_TINYINT): An unsigned number of bits within a byte. Size: 1 bit to 1 byte. Format: <field name="name" datatype="ubits" onBit="n" size="m"/>, where n specifies which bit (within a byte) to start from and m is the number of bits. If n is not specified, then n defaults to 1 for the first occurrence of the field and is contiguous thereafter. The maximum number of bits you can map is 31.
uint1 (OLE DB: DBTYPE_UI1; ODBC: SQL_TINYINT): Unsigned byte integer. Size: 1 byte. Range: 0 to +254.
uint2 (OLE DB: DBTYPE_I4; ODBC: SQL_INTEGER): Unsigned word integer. Size: 2 bytes. Range: 0 to +65534.
uint4 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC(11)): Unsigned long integer. Size: 4 bytes. Range: 0 to +4,294,967,294.
uint6 (OLE DB: DBTYPE_NUMERIC; ODBC: SQL_NUMERIC): Unsigned integer. Size: 6 bytes.
unicode (OLE DB: DBTYPE_WSTR; ODBC: SQL_VARCHAR): A null-terminated alphanumeric Unicode string; maximum length must be specified.
varstring (OLE DB: DBTYPE_STR; ODBC: SQL_VARCHAR): 16-bit count, followed by a string.
varstring4 (OLE DB: DBTYPE_STR; ODBC: SQL_VARCHAR): 32-bit count, followed by a string.
vms_date (OLE DB: DBTYPE_DBTIMESTAMP; ODBC: SQL_DATE): OpenVMS date-time format.

ADD Syntax
This section describes the Attunity Connect Data Dictionary. The following statements are described:

- The <table> Statement
- The <dbCommand> Statement
- The <fields> Statement
- The <field> Statement
- The <group> Statement
- The <variant> Statement
- The <case> Statement
- The <keys> Statement
- The <key> Statement
- The <segments> Statement
- The <segment> Statement
- The <foreignKeys> Statement
- The <foreignKey> Statement
- The <primaryKey> Statement
- The <pKeySegments> Statement

The <table> Statement


The <table> statement describes the general record or table attributes.

Syntax
<table name="table_name" attribute="value" ...> <fields> <field name="field_name" attribute="value" ... > <dbCommand>...</dbCommand> </field> ... </fields> <keys> <key name="param"> attribute="value" ...> <segments> <segment name="param" attribute="value" ... /> ... </segments> </key> ... </keys> </table>

where table_name is the record/table name. It can contain a maximum of 40 characters.

The <table> statement consists of the following components:

- Table Attributes
- The <fields> Statement, which includes the field list
- Optionally, The <keys> Statement, which contains a list of keys

5-24 AIS User Guide and Reference

Notes:
- The table_name entry must conform to standard ANSI 92 SQL naming conventions.
- When you define the structure of a table for a non-relational data source, you must include a <fields> statement.
- When both <keys> and <fields> statements are present, the <keys> statement must come after the <fields> statement.

Example 5-9 <table> statement

<table name="nation" organization="index" filename="d:\demo\nation"
       datasource="DEMO">
  <fields>
    <field name="n_nationkey" datatype="int4" />
    <field name="n_name" datatype="string" size="25" />
    <field name="n_regionkey" datatype="int4" />
    <field name="n_comment" datatype="cstring" size="152" />
  </fields>
</table>
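As a further illustration of the optional <keys> section shown in the syntax above, the following sketch adds a unique key to the nation table; the key name and the use of n_nationkey as the key segment are assumptions for illustration (the unique attribute itself appears in the <dbCommand> example later in this section):

<table name="nation" organization="index" filename="d:\demo\nation"
       datasource="DEMO">
  <fields>
    <field name="n_nationkey" datatype="int4" />
    <field name="n_name" datatype="string" size="25" />
  </fields>
  <keys>
    <key name="nation_pk" unique="true">
      <segments>
        <segment name="n_nationkey" />
      </segments>
    </key>
  </keys>
</table>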

Table Attributes
A table can have the following attributes:

alias: Replaces the table name with a logical name. Names greater than 39 characters are truncated from the left. Syntax:
alias="name"

basedOn: Specifies the table (or virtual table) on which the current table is based. This attribute is generated automatically when an array in a record is generated as a virtual table. Syntax:
basedOn="table_name::array_name"

where:
- table_name: The name of the table that contains the array. If the array is nested in another array, this value is the name of the parent array.
- array_name: The name of the array in the table.

Example:
<table name="EMP_CHLDRN" organization="index" basedOn="EMPLOYEE::CHILDREN" datasource="DEMO" />

Also refer to counterName and dimension1 in The <group> Statement.

datasource: Specifies the data source name as specified in the binding configuration. The repository for this data source is used to store the ADD information. This attribute must be specified. Syntax:


datasource="datasource_name"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" datasource="DEMO" />

delimited: Specifies the character that delimits fields. In order to get the delimiter character into the data you must have the entire field quoted. See quoteChar further in this section. If you do not specify this attribute, ADD assumes that a comma (,) functions as the delimiting character. Syntax:
delimited="character"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" delimited="/" datasource="DEMO" />

description: Specifies an optional textual description. Syntax:


description="optioanl_user_supplies_description"

filename: The filename attribute specifies the full name and location of the file. Syntax:
filename="full_filename"

where full_filename includes the full path to the file.


Note:

Data source drivers require the file suffix, except for the CISAM and DISAM drivers, where the suffix must not be specified.

Flat files and text delimited files on z/OS platforms: When defining metadata for a flat file or text delimited file, use the following syntax:

filename="high_level_qualifier.filename"

For example:

filename="SYS1.AAA.AC.DATATEMP"

Example (RMS):
filename="DISK$2:[DB]NATION.DAT"

Example (DISAM):
filename="d:\demo\nation"


filter: Adds a WHERE clause to every query accessed using this table or procedure. This attribute is useful when more than one logical table is stored in the same physical file. If a query relates to data with a filter attribute defined, then the Attunity Connect Query Processor handles the query including the WHERE clause and will return an error if the query is invalid because of the added WHERE clause. To use the filter attribute, you must set the useTableFilterExpressions environment property to true. You specify this parameter in the queryProcessor node of the environment properties for the relevant binding configuration, in Attunity Studio Design perspective Configuration view. Syntax:
filter="sql_expression"

where sql_expression is a valid SQL expression combining one or more constants, literals and column names connected by operators. Column names must be prefixed with $$$ and the column must exist in the current table. Example:
<table name="nation" filename="d:\demo\nation" organization="index" filter="$$$.RECORD_TYPE = 80" datasource="DEMO" />

nBlocks: Specifies the approximate number of blocks in the table. It is used by Attunity Connect to optimize query execution.

Note: The nRows attribute must be specified if the nBlocks attribute is specified. If neither nRows nor nBlocks is specified for a table, queries over the table might be executed in a non-optimal manner.

Syntax:
nBlocks="numeral"
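For example, a sketch of a table definition carrying these statistics; the values 25000 and 400 are illustrative assumptions:

<table name="nation" organization="index" filename="d:\demo\nation"
       datasource="DEMO" nRows="25000" nBlocks="400">
  ...
</table>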

organization: Specifies the organization of a file system data provider. The organization can be one of the following: indexed, sequential, relative, or unstructured. The default is sequential. Syntax:
organization="index" | "sequential" | "relative" | "unstructured"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" datasource="DEMO" />

Note:

Attunity's text-delimited and flat file drivers both support sequential organization.


Use unstructured for unstructured Enscribe files that are not indexed. Note that you must include a filler field, of size one, when you have an odd record size in an even unstructured Enscribe file. Access to a specific record number of a relative file is performed by using a pseudo column to specify the record position. The hash symbol (#) is used to specify a pseudo column. For example:

SELECT * FROM colleges WHERE # = 6
INSERT INTO colleges (coll_id, coll_name, coll_status, #) VALUES (111, 'New college', 2, 5)
DELETE FROM colleges WHERE # = 15

quoteChar: Specifies the character that quotes a string field. In order to quote a field the entire field must be quoted (leading and trailing white space before and after the start and end quote characters, respectively, are allowed). In particular you cannot start or end quoting data in the middle of a field since the quote characters will be interpreted as part of the data. In order to have a quote character in a quoted field you must escape it by preceding it with a backslash. Syntax:
quoteChar="character"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" quoteChar="" datasource="DEMO" />

recordFormat: Used only with RMS data, to identify the underlying RMS record format. This attribute is for information purposes only. Within Attunity Connect, all records are treated as fixed length. Syntax:
recordFormat="undefined" | "fixed" | "variable"

Example:
<table name="nation" datasource="RMS" filename="DKA100:[USER.NAV.RMSDEMO]nation.INX" recordFormat="fixed" organization="index" />

size: Specifies the maximum size, in bytes, of a record. This attribute is useful when you only want to use part of the record. This attribute is generated automatically for RMS, Enscribe, DISAM, and CISAM data. For these data sources, do not specify a size attribute. This attribute is not supported by the flat files driver. Syntax:
size="n"

Example:
<table name="nation" filename="d:\demo\nation" organization="index" size="500" datasource="DEMO" />


tableId: The record number within a DBMS user work area. The BASIC_ADL utility (see DBMS Data Source (OpenVMS Only)) generates ADD metadata from DBMS and automatically creates a tableId attribute.
Note:

Do not change this attribute.

Syntax:
tableId="record_number"

The <dbCommand> Statement


The <dbCommand> statement is used to specify data source-specific commands for the metadata.

Syntax
<dbCommand>text</dbCommand>

Examples
<dbCommand>CMD=^PAK(K1,1)</dbCommand>

The BASIC_ADL utility for generating ADD metadata from DBMS metadata creates a dbCommand statement. For example, for a field, the dbCommand has the following form:
<dbCommand>
  field-type/set-name/record-name-of-paired-table/
  realm-of-paired-table/insertion-mode/retention-mode
</dbCommand>

For a part of the <key> statement:


<key unique="true"> <dbCommand>{ AC }</dbcommand> <segments> <segment name="EMPID-2-4" /> <segment name="EMPNAME-3-8" /> </segments> </key>

The <fields> Statement


The <fields> statement is used to list the field descriptions of fields in a table or procedure. A field description can be one of the following:

- The <field> Statement
- The <group> Statement
- The <variant> Statement


Syntax
<fields>
  <field name="field_name" attribute="value" ...>
    <dbCommand>...</dbCommand>
  </field>
  <group name="field_name" attribute="value" ...>
    <fields>
      <field name="field_name" attribute="value" ... />
    </fields>
  </group>
  <variant name="field_name" attribute="value" ...>
    <case name="field_name" attribute="value" ...>
      <fields>
        <field name="field_name" attribute="value" ... />
      </fields>
    </case>
    ...
  </variant>
  ...
</fields>

The <field> Statement


The <field> statement defines the characteristics of a field that is not made up of other fields.

Syntax
<field name="field_name" attribute="value" ...> <dbCommand>...</dbCommand> </field>

Example
The following code defines one field (n_name) and its two attributes (datatype and size):
<field name="n_name" datatype="string" size="25" />

Field Attributes
A field can have the following attributes:

autoIncrement: When set to true, specifies that this field is updated automatically by the data source during an INSERT statement and should not be explicitly specified in the INSERT statement. The INSERT statement should include an explicit list of values. This attribute is used for fields such as an order number field whose value is incremented each time a new order is entered to the data source. Syntax:


autoIncrement="true|false"

Example:
<field name="ORDER_NUM" datatype="string" size="6" autoIncrement="true" />

chapterOf: Used for DBMS metadata and specifies that the set member field is a chapter of an owner table. This attribute must be included when accessing a set member as a chapter in an ADO application. The BASIC_ADL utility (see DBMS Data Source (OpenVMS Only) generates ADD metadata from DBMS and automatically creates this attribute. Syntax:
chapterOf="owner_table"

Example:
<field name="_M_CLASS_PART" datatype="string" size="29" chapterOf="CLASS" nullable="true"/

compressedArray: When set to true, any array that has a counter can be marked with the attribute compressedArray. Compressed arrays can be groups or single fields and they store only the members that have a value. Thus, a compressed array with a maximum of 100 elements that has only 5 elements in a particular record will include only the 5 elements in the physical file. Records are compressed and decompressed for READ and WRITE operations.
Note:

Only arrays with counters can be compressed.

Syntax:
compressedArray="true|false"
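For example, a minimal sketch of a counted array marked as compressed; the field, group, and counter names are illustrative:

<field name="LINE_COUNTER" datatype="int4" />
<group name="ORDER_LINES" dimension1="100"
       counterName="LINE_COUNTER" compressedArray="true">
  <fields>
    <field name="PART_NO" datatype="int4" />
    <field name="QUANTITY" datatype="int4" />
  </fields>
</group>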

datatype: Specifies the data type of a field. Syntax:


datatype="datatype"

See ADD Supported Data Types.

emptyValue: Specifies the value for the field during an insert operation, when a value is not specified. Syntax:
emptyValue="value"

Example:
<field name="RECORD_TYPE" emptyValue="80" datatype="int4" />

explicitSelect: When set to true, specifies that this field is not returned when you execute a "SELECT * FROM..." statement. To return this field, you must explicitly ask for it (in a query such as "SELECT NATION_ID, SYSKEY FROM NATION" where SYSKEY is a field defined with explicitSelect). Syntax:

explicitSelect="true|false"

Example:
<field name="_M_CLASS_PART" datatype="string" size="29" explicitSelect="true" nullable="true" />

Note:

You can disable this attribute by specifying the disableExplicitSelect attribute for the data source in the binding.

hidden: When set to true, specifies that the field is hidden from users. The field is not displayed when a DESCRIBE statement is executed on the table. Syntax:
hidden="true|false"

Example:
<field name="CURRENT_SALARY" hidden="true" datatype="decimal" size="9" />

name: Specifies the name of the field. This attribute must be specified. Syntax:
name="name"

Example:
<field name="EMP_ID" datatype="int4" />

nonSelectable: When set to true, specifies that the field is never returned when you execute an SQL statement. The field is displayed when a DESCRIBE statement is executed on the table. Syntax:
nonSelectable="true|false"

Example:
<field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" nonSelectable="true" />

nonUpdateable: When set to true, specifies that the field cannot be updated (the default is false). Syntax:
nonUpdateable="true|false"

Example:
<field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" nonupdateable="true" />

nRows: Specifies the approximate count of distinct column values in the table. It is used by Attunity Connect to optimize the query execution.


Syntax:
nRows="numeral"

nullable: When set to true, specifies that the field can contain NULL values. Syntax:
nullable="true|false"

Example:
<field name="_M_CLASS_PART" datatype="string" size="29" explicitSelect="true" nullable="true" />

nullSuppressed: When set to true, causes the query optimizer to ignore strategies that use a key or segment that includes a field defined as null-suppressed (that is, when rows whose value for this field is NULL do not appear in the key). For example, normally the query optimizer would use a key for a query including an ORDER BY attribute on this field. If nullSuppressed is not specified, then the query may return incomplete results when the key is used in the optimization plan. If nullSuppressed is set to true, the key is not used. To retrieve rows in a table for a field with the nullSuppressed attribute specified and that have a NULL value, specify NULL in the WHERE clause and not a value. That is, specify:
WHERE field = NULL

Specifying "WHERE field=0" will return an incorrect result.


Note: This attribute is supported for Adabas databases and OLE DB providers. For Adabas, the attribute is set at run-time according to the value returned by the ADABAS LF command.

Syntax:
nullSuppressed="true|false"

Example:
<field name="AGE" nullSuppressed="true" datatype="int4" />

nullValue: Specifies the null value for the field where the data source does not support null values, thereby providing a means of assigning a "null" value. A select statement returns the value as NULL. Syntax:
nullValue="value"

where value is a string value. Example:


<field name="RETIRED" nullable="false" nullValue="-1" datatype="int4" />


offset: Specifies an absolute offset for the field in a record. When used with a field whose data type is BIT, the offset can be stated for the first BIT data type. All following BIT data types refer to the same offset if possible. When the last mapped bit is the 8th bit, the next bit is mapped to the first bit in the next byte. Syntax:
offset="n"

Example:
<field name="EMP_ID" offset="3" datatype="int1"/> <field name="CHECK_DIGIT1" offset="3" datatype="bit" onBit="1" /> <field name="CHECK_DIGIT2" datatype="bit"/>

onBit: Specifies the position of the bit in a field with data type BIT or the starting position in a field with data type BITS. Syntax:
onBit="n"

where:
- For the BIT data type: Specifies which bit the field uses.
- For the BITS data type: Specifies which bit (within a byte) to start from. If n is not specified, then n defaults to 1 for the first occurrence of the field and is contiguous thereafter.
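For example, a sketch packing two 4-bit fields into a single byte using the BITS data type; the field names and widths are illustrative:

<field name="STATUS_FLAGS" datatype="bits" onBit="1" size="4" />
<field name="PRIORITY" datatype="bits" onBit="5" size="4" />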

scale: Specifies the number of characters or digits. Syntax:


scale="n"

where:
- For decimal and numeric data types: The number of digits that are fractions. The number of fractions must not be greater than the number of digits. The default value is 0.
- For scaled data types: The number of digits. The number must be negative.

Example:
<field name="SALARY" datatype="numstr_s" size="10" scale="2" />

size: Specifies the size of the field. Syntax:


size="n"

where n is the number of characters or digits. The number must be greater than 0. For the BITS data type, n specifies the number of used bits, starting from the value specified in the onBit attribute.

subfieldOf: The value for this attribute is generated automatically when you generate ADD metadata from ADABAS data that includes a superdescriptor based on a subfield. A field is created to base this index on, and set to the offset specified as the value of the subfieldStart attribute.


If a subfieldStart attribute is not specified, then the subfield is set by default to an offset of 1. Syntax:
subfieldOf="parent_field"

subfieldStart: The offset within the parent field where a subfield starts. If a subfieldStart attribute is not specified, then the subfield is set by default to an offset of 1. Syntax:
subfieldStart="offset_number"
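For example, a sketch of a field defined over part of a parent field; the names and offset are illustrative assumptions rather than generated output:

<field name="PHONE" datatype="string" size="10" />
<field name="AREA_CODE" datatype="string" size="3"
       subfieldOf="PHONE" subfieldStart="1" />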

The <group> Statement


The <group> statement defines the characteristics of a field that is made up of other fields, such as an array in a record.

Syntax
<group name="field_name" attribute="value" ...> <dbCommand>...</dbCommand> <fields> <field name="field_name" attribute="value" ... /> </fields> </group>

A <group> statement is handled as an array. Each of the array elements contains all of the subordinate fields defined in the <group> statement. The size of the array is the size of a single array element multiplied by the dimension.

Example
An array containing information about an employee's children can be defined as follows:
<group name="CHILDREN" dimension1="4" > <fields> <field name="DATE_OF_BIRTH" datatype="vms_date" /> <field name="NAME" datatype="string" size="16" /> </fields> </group>

The CHILDREN structure has 4 occurrences numbered from 0 to 3. Each occurrence consists of two fields, DATE_OF_BIRTH and NAME. The size of a single structure occurrence is 20 (4 bytes for DATE_OF_BIRTH and 16 bytes for NAME), and the total size of the CHILDREN array is therefore 80.

Group Attributes
A group can have the following attributes:

alias: Used to replace the default virtual table name automatically generated for an array. Virtual table names are generated by appending the array name to the parent name (either the record name or a parent array name). Thus, when an array includes another array, the virtual table name of the nested array is built from the record name, the parent array name, and the nested array name.


When the default generated virtual table name is too long to be usable or over 39 characters, specify an alias to replace the long name. Names greater than 39 characters are truncated from the left. Syntax:
alias="name"

Example:
<group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER"> ... </group>

counterName: Specifies the name of a field that counts the number of the elements stored in an array. Syntax:
counterName="field_name"

where field_name is the name of a field that counts the number of elements stored in the array. The counterName attribute cannot be used with an Attunity Connect procedure.
Note: For an ADABAS database, you don't need to define a counter field since one is created automatically with the name C-arrayname, where arrayname is the multiple value field name or periodic group field name in ADABAS.

Example: An array containing information about an employee and the employee's children can be defined as follows:
<table name="EMPLOYEE" organization="index" nRows="4" filename="d:\ddescription: isam\EMPLOYEE" datasource="DISAMDEMO"> <fields> <field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" /> <field name="CHILD_COUNTER" datatype="int4" /> <group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER"> <fields> <field name="AGE" description="AGE" datatype="int4" /> ... </fields> </group> ... </fields> </table>

description: Specifies an optional textual description. Syntax:


description="optional_user_supplied_description"

dimension1: Specifies that the field is an array.


Syntax:
dimension1="n"

where n indicates the number of elements in the array. The dimension1 attribute cannot be used with an Attunity Connect procedure.
Note: This syntax is for a one-dimensional array. For a two-dimensional array, you specify dimension2="n".
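For example, a two-dimensional array of amounts (an illustrative sketch; the names and dimensions are hypothetical) could be defined as:

<group name="QUARTER_AMOUNTS" dimension1="4" dimension2="5">
  <fields>
    <field name="AMOUNT" datatype="int4"/>
  </fields>
</group>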

Example: An array containing information about an employee and the employee's children can be defined as follows:
<table name="EMPLOYEE" organization="index" nRows="4" filename="d:\disam\EMPLOYEE" datasource="DISAMDEMO"> <fields> <field name="EMP_ID" description="EMPLOYEE ID" datatype="int4" /> <field name="CHILD_COUNTER" datatype="int4" /> <group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER"> <fields> <field name="AGE" description="AGE" datatype="int4" /> ... </fields> </group> ... </fields> </table>

To create the EMP_CHLDRN child table, import the table metadata to the Attunity Connect repository (by right-clicking the data source in Attunity Studio Design perspective Metadata view and selecting Import XML definitions).

name: Specifies the name of the field. This attribute must be specified. Syntax:
name="name"

Example:
<group name="CHILDREN" alias="EMP_CHLDRN" dimension1="4" counterName="CHILD_COUNTER"> <fields> ... </fields> </group>

offset: Specifies an absolute offset for the group. Syntax:


offset="offset"


The <variant> Statement


Variants are similar to REDEFINES constructs in COBOL and to unions in C. The basic concept is that the same physical area in the buffer is mapped several times. The mappings can be of the following:

* Different nuances of the same data.
* Different usage of the same physical area in the buffer.

This section describes the common use cases of variants and how they are represented in the variant syntax. The following variant types are available:

* Variant without selector
* Variant with selector

Variant without selector


Variants without selectors are used to define different cases of the variants and represent different ways of looking at the same data.

COBOL Example:
20 PARTNUM PIC X(10).
20 PARTCD REDEFINES PARTNUM.
   30 DEPTCODE PIC X(2).
   30 SUPPLYCODE PIC X(3).
   30 PARTCODE PIC X(5).

In this example, one case includes a PARTNUM field of 10 characters, while the other case, PARTCD, maps the same part number to a 2-character DEPTCODE, a 3-character SUPPLYCODE, and a 5-character PARTCODE. The two variant cases are just different ways of viewing the same item of data. In Attunity Studio, the Import Manipulation panel enables replacing any variant with the fields of a single case. The metadata generated by Studio following a metadata import appears as follows:
<variant name="VAR_0"> <case name="UNNAMED_CASE_1"> <fields> <field name="PARTNUM" datatype="string" size="10"/> </fields> </case> <case name="PARTCD"> <fields> <field name="DEPTCODE" datatype="string" size="2"/> <field name="SUPPLYCODE" datatype="string" size="3"/> <field name="PARTCODE" datatype="string" size="5"/> </fields> </case> </variant> 5-38 AIS User Guide and Reference

Variant with selector


Different cases of the variant represent different ways in which to use the physical area in the buffer. For every record instance only one case is valid; the others are irrelevant. Additional fields in the buffer help determine which variant case is valid for the current record.

COBOL Example:
10 ORDER.
   20 RECTYPE PIC X.
      88 ORD-HEADER VALUE 'H'.
      88 ORD-DETAILS VALUE 'D'.
   20 ORDER-HEADER.
      30 ORDER-DATE PIC 9(8).
      30 CUST-ID PIC 9(9).
   20 ORDER-DETAILS REDEFINES ORDER-HEADER.
      30 PART-NO PIC 9(9).
      30 QUANTITY PIC 9(9) COMP.

In this example, each of the records is either an order header record or an order details record, depending on the value of the RECTYPE field. This construct can be mapped as a variant with a selector, where the RECTYPE field is the selector. During a metadata import from COBOL, all variants are assumed to be variants without selectors, unless a selector is specified in the Import Manipulation panel. The COBOL syntax doesn't distinguish between different types of variants or REDEFINES; in COBOL, only the program logic includes this distinction. See Working with Metadata in Attunity Studio for additional information.

ADD Syntax: The following is the ADD syntax used for setting variants:
<variant name="variant_name"> <case name="case_name" value="val" ...> <fields> <field name="field_name" ... /> </fields> </case> <case ... </case> </variant>

The metadata generated by Attunity Studio following a metadata import appears as follows:
<field name="RECTYPE" datatype="string" size="1"/> <variant name="VAR_1" selector="RECTYPE"> <case name="ORDER_HEADER" value="H"> <fields> <field name="ORDER_DATE" datatype="numstr_u" size="8"/> <field name="CUST_ID" datatype="numstr_u" size="9"/> </fields>

Managing Metadata

5-39

</case> <case name="ORDER_DETAILS" value="D"> <fields> <field name="PART_NO" datatype="numstr_u" size="9"/> <field name="QUANTITY" datatype="uint4" size="4"/> </fields> </case> </variant>

Usage Notes

* From an SQL consumer, none of the <variant> or <case> fields are visible. Only the simple fields are accessible.
* For a variant with a selector, all fields are reported as nullable regardless of their backend definition. For every record instance, only the relevant case shows values; the rest of the cases contain NULLs.
* When updating or inserting with either type of variant, it is up to the user to ensure that only a single case is given values. Attempting to set fields from two or more cases will result in unpredictable behavior.

Resolving Variants in Attunity Studio


In Attunity Studio, variants are resolved in the data source metadata Import Manipulation panel.

To resolve variants in the Import Manipulation panel
1. In the Validation tab, double-click the variant to resolve. The variants in the COBOL copybook are displayed (you can expand the variants to expose the variant cases).
2. Right-click the variant and select Structures, and then Mark selector. The Select Selector Field screen opens.
3. Select the selector for the variant from the list of selectors in the COBOL copybook.
4. Click OK.
5. Repeat as needed to set variants with selectors.
6. Click OK.

Variant Attributes
A variant can have the following attributes:

name: Specifies the name of the variant. This attribute must be specified. Syntax:
name="name"

Example:
<variant name="VAR_SEX" selector="SEX"> <case name="CASE_1_1" value="M"> <fields> <field name="M_SCHOOL" datatype="string" size="20" /> </fields>

5-40 AIS User Guide and Reference

</case> <case name="CASE_1_2" value="F"> <fields> <field name="F_SCHOOL" datatype="string" size="20" /> </fields> </case> </variant>

selector: Specifies the name of a field whose value determines which of the alternate variant definitions is used in the current record (row). When a selector attribute is specified, a value attribute must be specified in the <case> statement. Syntax:
selector="field_name"

Example:
<field name="SEX" datatype="string" size="1" /> <variant name="VAR_SEX" selector="SEX"> <case name="CASE_1_1" value="M"> <fields> <field name="M_SCHOOL" datatype="string" size="20" /> </fields> </case> <case name="CASE_1_2" value="F"> <fields> <field name="F_SCHOOL" datatype="string" size="20" /> </fields> </case> </variant>

The <case> Statement


The <case> statement specifies an alternative definition that maps to the same storage area. The <case> statement can include:

* The <field> Statement
* The <group> Statement
* The <variant> Statement

Syntax
<case name="field_name" attribute="value" ...> <fields> <field name="field_name" attribute="value" ... /> </fields> </case>

Case Attributes
A case can have the following attributes:


name: Specifies a name for the case. When a selector attribute is not specified in the <variant> statement, a name attribute must be specified here. Syntax:
name="name"

Example:
<variant name="VAR_SEX" selector="SEX"> <case name="CASE_1_1" value="M"> <fields> <field name="M_SCHOOL" datatype="string" size="20" /> </fields> </case> <case name="CASE_1_2" value="F"> <fields> <field name="F_SCHOOL" datatype="string" size="20" /> </fields> </case> </variant>

value: Specifies the value for a variant definition that is used in the current record (row) for the field specified in the <variant> statement via the selector attribute. When a selector attribute is specified in the <variant> statement, a value attribute must be specified here. Syntax:
value="value"

Example:
<case name="CASE_1_1" value="M"> <fields> <field name="M_SCHL" datatype="string" size="20" /> </fields> </case> <case name="CASE_1_2" value="F"> <fields> <field name="F_SCHL" datatype="string" size="20" /> </fields> </case>

The <keys> Statement


The <keys> section of a table definition describes the keys of the table. A list of key description statements is included in the <keys> statement.

Syntax
<keys>
  <key name="key_name" attribute="value" ...>
    <segments>
      <segment name="segment_name" attribute="value" ... />
      ...
    </segments>
  </key>
  <key name="key_name" ...>
    <segments>
      <segment name="segment_name" ... />
      ...
    </segments>
  </key>
</keys>

Example
<keys> <key name="nindex" unique="true"> <segments> <segment name="n_nationkey" /> </segments> </key> </keys>

The <key> Statement


The <key> statement describes a key of the table. An optional list of key attributes and a list of segment statements are included in the <key> statement.

Syntax
<key name="key_name" attribute="value" ...> <dbCommand>...</dbCommand> <segments> <segment name="segment_name" attribute="value" ... /> ... </segments> </key>

Key Attributes
A key can have the following attributes:

bestUnique: When set to true, specifies that the query optimizer chooses an optimization strategy that uses this key in preference to any other strategy. Use this attribute on keys containing a field that represents a bookmark of the record (so that retrieval is assumed to be faster than with other keys). Fields that represent a bookmark include ROWID in Oracle, DBKEY in DBMS, and ISN in ADABAS. Syntax:
bestUnique="true|false"

Example:
<key unique="true" bestUnique="true" hashed="true"> <segments>

Managing Metadata

5-43

<segment name="ISN" /> </segments> </key>

clustered: When set to true, indicates that this key reflects the physical organization of the table and is used to determine the query optimization strategy used by Attunity Connect. Syntax:
clustered="true|false"

Example:
<key clustered="true"> <segments> <segment name="DBKEY" /> </segments> </key>

descending: When set to true, specifies that the order of the current key is descending. If this attribute is not specified for the key, it can be specified per segment of the key. The default is ascending. Syntax:
descending="true|false"

Example:
<key unique="true" descending="true" nRows="30"> <segments> <segment name="EMPLOYEE_ID" /> </segments> </key>

description: Specifies an optional textual description. Syntax:


description="optional_user_supplied_description"

hashed: When set to true, indicates that this is a hash key and is used to determine the query optimization strategy used by Attunity Connect. Syntax:
hashed="true|false"

Example:
<key unique="true" bestUnique="true" hashed="true"> <segments> <segment name="ISN" /> </segments> </key>

hierarchical: When set to true, specifies that the query optimizer chooses an optimization strategy that uses this key in preference to any other strategy. Use this attribute for DBMS databases, on keys containing a DBKEY field that represents a bookmark of the record (so that retrieval is assumed to be faster than with other keys). Syntax:


hierarchical="true|false"

Example:
<key unique="true" hierarchical="true" > <segments> <segment name="DBKEY" /> </segments> </key>

indexId: Identifies the physical key for the record. You can use this attribute only with the key and not with a segment. The use of the field is data source dependent. Syntax:
indexId="previously_defined_key"

For an Enscribe alternate key, the indexId attribute is the ASCII value corresponding to the 2 bytes of the key specifier surrounded by quotes. For details, see Enscribe Data Source (HP NonStop Only).
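Example (an illustrative sketch; the key name, segment, and indexId value are hypothetical, and the meaning of indexId is data source dependent):

<key name="ALT_KEY" indexId="AB">
  <segments>
    <segment name="CUST_NAME"/>
  </segments>
</key>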

nRows: Specifies the approximate count of distinct key values in the key. It is used by Attunity Connect to optimize query execution. For a unique key, the nRows value must be equal to the nRows value for the record (that is, the number of distinct key values is the same as the number of rows). Syntax:
nRows="numeral"

nullSuppressed: When set to true, causes the query optimizer to ignore strategies that use a key that includes a field defined as null-suppressed (that is, rows whose value for this field is NULL do not appear in the key). For example, normally the query optimizer would use a key for a query that includes an ORDER BY clause on this field. If nullSuppressed is not specified, the query may return incomplete results when the key is used in the optimization plan. If nullSuppressed is specified, the key is not used. To retrieve rows in a table for a field with the nullSuppressed attribute specified that have a NULL value, specify NULL in the WHERE clause and not a value. That is, specify:
WHERE field=NULL

Specifying "WHERE field=0" will return an incorrect result.


Note: This attribute is supported for Adabas databases and OLE DB providers. For Adabas, the attribute is set at run-time according to the value returned by the Adabas LF command.
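Example (an illustrative sketch; the key and field names are hypothetical):

<key name="DEPT_KEY" nullSuppressed="true">
  <segments>
    <segment name="DEPT_ID"/>
  </segments>
</key>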

unique: When set to true, indicates that each key entry uniquely identifies one row and is used to determine the query optimization strategy used by Attunity Connect. Syntax:
unique="true|false"

Example:

<key unique="true" bestUnique="true"> <segments> <segment name="ISN" /> </segments> </key>

The <segments> Statement


The <segments> section of a table definition describes the segments of the key. A list of <segment> statements is included in the <segments> statement.

Syntax
<segments>
  <segment name="segment_name" attribute="value" .../>
  ...
</segments>

The <segment> Statement


The <segment> statement describes a segment of the key. An optional list of segment attributes is included in the <segment> statement.

Syntax
<segment name="segment_name" attribute="value" ... <dbCommand>...</dbCommand> </segment>

Segment Attributes
Each segment can have the following attributes:

descending: When set to true, specifies that the order of the current segment is descending. The default is ascending. Syntax:
descending="true|false"

Example:
<key unique="true" nRows="30"> <segments> <segment name="EMPLOYEE_ID" descending="true" /> </segments> </key>

name: Specifies the name of the segment. This attribute must be specified. Syntax:
name="name"

Example:
<key unique="true" nRows="30"> <segments>

5-46 AIS User Guide and Reference

<segment name="EMPLOYEE_ID" /> </segments> </key>

nRows: Specifies the approximate count of distinct segment values in the key. It is used by Attunity Connect to optimize query execution. Syntax:
nRows="numeral"

nullsLast: When set to true, causes value comparison operations to treat nulls as the greater of the values being compared. Syntax:
nullsLast="true|false"
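Example (an illustrative sketch; the key and field names are hypothetical):

<key name="BONUS_KEY">
  <segments>
    <segment name="BONUS" nullsLast="true"/>
  </segments>
</key>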

nullSuppressed: When set to true, causes the query optimizer to ignore strategies that use a segment that includes a field defined as null-suppressed (that is, when rows whose value for this field is NULL do not appear in the key). Syntax:
nullSuppressed="true|false"

The <foreignKeys> Statement


The <foreignKeys> statement defines the foreign keys of the table. A list of foreign key description statements is included in the <foreignKeys> statement.
Note: The <foreignKeys> statement is not available in the Attunity Studio Design perspective Metadata view.

Syntax
<foreignKeys>
  <foreignKey name="key_name" attribute="value" ...>
    <fkeySegments>
      <fkeySegment attribute="value" ... />
    </fkeySegments>
  </foreignKey>
</foreignKeys>

The <foreignKey> Statement


The <foreignKey> statement describes a foreign key of a table. The <foreignKey> statement defines the following:

* The name of the foreign key.
* The external table referenced by the foreign key.
* The primary key of the referencing table.
* The segments of the foreign key.
* The referential integrity rule to be implemented when the primary key field in the referencing table is either updated or deleted.

Note: The <foreignKey> statement is not available in the Attunity Studio Design perspective Metadata view.

Syntax
<foreignKey name="key_name" referencingTable="external_table" referencingPKey="external_table_ primary_key" updateRule="cascade|restrict|setNull"> deleteRule="cascade|restrict|setNull"> <fkeySegments> <fkeySegment fname="local_table_field" pName="referenced_table_field"/> ... </fkeySegments> </foreignKey>

Example
<foreignKeys> <foreignKey name="fkey1" referencingTable="table2" referencingPKey="table2_pkey" updateRule="cascade" deleteRule="setNull"> <fKeySegments> <fKeySegment fName="col2" pName="table2_col3" /> <fKeySegment fName="col5" pName="table2_col2" /> </fKeySegments> </foreign_key> <foreignKey name="fkey2" referencingTable="table3" referencingPKey="table3_pkey" updateRule="restrict" deleteRule="restrict"> <fKeySegments> <fKeySegment fName="col1" pName="table3_col1" /> </fKeySegments> </foreignKey> </foreignKeys>

foreignKey Attributes
Each foreignKey can have the following attributes:

* name: The name of the foreign key.
* referencingTable: The name of the external table referenced by the foreign key.
* referencingPKey: The primary key of the external referencing table.
* updateRule: The referential integrity rule for the foreign key when the primary key of the referenced table is updated. The specific option is determined by the application accessing the data:
  * cascade
  * restrict
  * setNull

For a general description, see Referential Integrity.



* deleteRule: The referential integrity rule for the foreign key when the primary row of the referenced table is deleted. The specific option is determined by the application accessing the data:
  * cascade
  * restrict
  * setNull

* fName: The name of the local table field in this foreign key segment.
* pName: The name of the referenced field in the external table for this foreign key segment.

The <primaryKey> Statement


The <primaryKey> statement describes the primary key of a table. The <primaryKey> statement uses <pKeySegments> and <pKeySegment> statements to define the segments constituting the primary key.
Note: The <primaryKey> statement is not available in the Attunity Studio Design perspective Metadata view.

Syntax
<primaryKey name="name"> <pKeySegments> <pKeySegment segment="name"/> ... </pKeySegments> </primaryKey>

The <pKeySegments> Statement


The <pKeySegment> statements define the segments that make up the primary key of a table.
Note: The <pKeySegment> statement is not available in the Attunity Studio Design perspective Metadata view.

Syntax
<pKeySegments>
  <pKeySegment segment="name"/>
  ...
</pKeySegments>

Example
<primaryKey name="pk"> <pKeySegments> <pKeySegment segment="col1"> <pKeySegment segment="col2"> </pKeySegments> </primaryKey>


pKeySegment Attributes
The pKeySegment statement can have the segment attribute, which specifies the name of a segment constituting the primary key of a table.


6
Working with Metadata in Attunity Studio
This chapter includes the following sections:

* Overview
* Managing Data Source Metadata
* Importing Data Source Metadata with the Attunity Import Wizard
* Working with Application Adapter Metadata

Overview
AIS uses metadata to access and read a data source. AIS can use the native metadata for many data sources, such as Relational Data Sources or Adabas. However, some data sources require Attunity's own metadata (ADD). Metadata can be imported, saved, and managed in the Attunity Studio Design perspective, on the Metadata tab. The following data sources require Attunity metadata:

* CISAM/DISAM Data Source
* DBMS Data Source (OpenVMS Only)
* Enscribe Data Source (HP NonStop Only)
* Flat File Data Source
* IMS/DB Data Sources
* RMS Data Source (OpenVMS Only)
* Text Delimited File Data Source
* VSAM Data Source (z/OS)
* OLEDB-FS (Flat File System) Data Source

An Adabas Driver is available if the Adabas Predict metadata is not available.

Managing Data Source Metadata


You manage data source metadata in the Attunity Studio Design perspective, in the Metadata editor. The Metadata editor is used to display the metadata for any data source. Metadata for data sources where AIS uses the native metadata is displayed as read-only in the Metadata editor. The editor lets you view and modify Attunity metadata for data sources that require Attunity metadata.


To view and edit data source metadata
1. In the Design perspective Configuration view, expand the Machines folder and then expand the machine where you want to add the data source.
2. Expand the Bindings folder and then expand the binding with the data source metadata you are working with.
3. Expand the Data Source folder.
4. Right-click the data source for which you want to manage the metadata and select Show Metadata View. The Metadata tab opens with the selected data source displayed in the Configuration view.
5. Right-click the resource (such as the data source table) in the Metadata view and select Edit.

Data source tables are edited using the following tabs, which are at the bottom of the editor screen:

* General Tab: Enter general information about the table, such as the table name, the way the table is organized, and the location of the table.
* Columns Tab: Edit information about the table columns and their properties, for example, the column data type, size, and scale.
* Indexes Tab: Edit information about the indexes of a table. The indexes are described by the order of the rows they retrieve, the data source commands used, and the index type.
* Statistics Tab: Enter the statistics for the table, including the number of rows and blocks in the table.
* Source Tab: View the metadata in its XML representation.
Note: Attunity Connect provides a relational model for all data sources defined to it. Thus, relational terminology is used even when referring to non-relational data sources (File-system Data Source). For example, the metadata for an RMS record is referred to as the metadata for an RMS table.

General Tab
The General tab lets you maintain information about the table, such as the table name and the way the table is organized.


Figure 6-1 The Data Source Metadata General Tab

Note: The fields displayed depend on the type of data source used.

The table below describes the parts on this tab.


Table 6-1 Metadata General Tab

* Description: Enter a description of the table.
* Table Properties: These fields provide general configuration information for the metadata. The fields displayed depend on the data source and table attributes of the selected table.
* Data file location: The name of the file that contains the table. You must enter the full path and include the file extension for the file. For example, D:\COBOL\orders.cob. You can click Browse to find and enter the location of the table file. Note: Do not enter the file extension for DISAM or CISAM files.
* Organization: Select how the record is organized:
  * Index: Select this if the source data has an index column. All searches are according to the index column.
  * Sequential: Select this if the source does not have a key and all requests search the table columns sequentially.
  * Relative: Access to a specific record number of a relative file is performed by using a pseudo column to specify the record position. The (#) symbol indicates a pseudo column.


Table 6-1 (Cont.) Metadata General Tab

* Maximum record length: The maximum size of the record in bytes. The value for this field is generated automatically for RMS, Enscribe, DISAM, and CISAM data. For these data sources, leave this field empty.
* Delimited: Enter a character to act as a delimiter in the metadata. If nothing is entered, a comma (,) is used as the delimiter.
* Quote Character: The character that quotes a string field.
* Record Format: Select how the record is formatted according to its size:
  * Undefined: There is no set definition for the record size.
  * Fixed: The record length is fixed.
  * Variable: The record length varies.
* DB Command: Enter special commands for the data source you are working with.
* Filter expression: You can create a filter by entering a WHERE clause. You should use a filter when more than one logical table is stored in the same physical file. To enter a filter, click Set filter expression and enter an expression in the field below. The following is an example of the WHERE clause syntax: "$$$.expression". Note: To use a filter, you must select Use Table Filter Expressions in the Query Processor section of the Environment Properties in Attunity Studio.

Columns Tab
The Columns tab lets you specify ADD metadata that describes the table columns. This tab is divided into the following:

* Column Definition Section
* Column Properties


Figure 6-2 Data Source Metadata Columns Tab

Column Definition Section


The top section of this tab lets you define the columns in the source data. You can click in any row (which represents a column in the database table) to edit the information. The following table describes this section.
Table 6-2 Metadata Column Tab Definitions

* Name: The name of the column.
* Data type: The data type of the column. Selecting this field displays a drop-down box listing the possible data types. For details about these data types, see ADD Supported Data Types.
* Size: The size of the column.
* Scale: The information entered in this field depends on the data type. For decimal data types, this is the number of digits to the right of the decimal place; this number must not be greater than the total number of digits, and the default value is 0. For scaled data types, this is the total number of digits; the number must be negative.


Table 6-2 (Cont.) Metadata Column Tab Definitions

* Dimension: The maximum number of occurrences of a group of columns that make up an array. The (+) to the left of a column indicates a group field. This type of field has a Dimension value. Click (+) to display the group members.
* Offset: An absolute offset for the field in a record.
* Fixed offset: This column lets you determine whether to calculate the offset. There are two options:
  * Calc offset: If you clear this check box, the absolute offset for each of the columns is calculated.
  * Fixed offset: When you select this check box, you have a fixed offset. The offset of a field is usually calculated dynamically by the server at runtime according to the offset and size of the preceding column. Select the check box in this column to override this calculation and specify a fixed offset at design time. This can happen if there is a part of the buffer that you want to skip. By selecting the check box, or by editing the offset value, you pin the offset for that column. The indicated value is used at runtime for the column instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.
* Primary Key: Select this to indicate that this column is a primary key.

The buttons on the right side of the tab are used to manipulate the data in this section of the tab. The following table describes how you can move around in this section.
Table 6-3 Definition Section Buttons

* Insert: Inserts a column into the table. You can insert a new column. If the table has arrays, you can add a new child column.
* Up: Moves your selection to the column directly above the currently selected column.
* Down: Moves your selection to the column directly below the currently selected column.
* Rename: Lets you rename the selected column.
* Delete: Deletes the selected column.
* Find: Click this button to open a list of all columns in the database. Select a column and click OK to select it in the table.

Column Properties
You can change a property value by clicking in the Value column. To display the column properties, select a column from the Column Definition (top) section. The properties for the column are displayed at the bottom of the tab.


The following table shows some of the properties available for selected columns.

Table 6-4 Metadata Properties

* Alias: A name used to replace the default virtual table name for an array. Virtual table names are created by adding the array name to the record name. When an array includes another array, the name of the nested array is composed of the record name, the parent array name, and the nested array name. When the default generated virtual table name is too long, use an Alias to replace the long name.
* Autoincrement: The current field is updated automatically by the data source during an INSERT statement and is not explicitly defined in the INSERT statement. The INSERT statement should include an explicit list of values. This attribute is used for fields such as an order number field whose value is incremented each time a new order is entered to the data source.
* Comment: A short note or description about the column.
* DB command: Click the button to enter data source-specific commands for the column.
* Empty value: The value for the field in an insert operation, when a value is not specified.
* Explicit Select: When true, the current field is not returned when you execute a SELECT * FROM... statement. To return this field, you must explicitly ask for it in a query, for example, SELECT NATION_ID, SYSKEY FROM NATION, where SYSKEY is a field defined with explicitSelect. You cannot use an asterisk (*) in a query where you want to retrieve a field defined with the explicitSelect value. You can disable this value by entering the disableExplicitSelect value in the data source bindings Environment Properties.
* Hidden: The current field is hidden from users. The field is not displayed when a DESCRIBE statement is executed on the table.
* Non Selectable: When true, the current field is never returned when you execute an SQL statement. The field is displayed when a DESCRIBE statement is executed on the table.
* Non Updateable: If true, the current field cannot be updated.
* Nullable: This value allows the current field to contain NULL values.
* Null value: The null value for the field during an insert operation, when a value is not specified.
* Chapter of: This property shows that the set member field is a chapter of an owner field. A value for this property must be used when accessing a set member as a chapter in an ADO application. This property is used for DBMS metadata.
* OnBit: The position of the bit in a BIT field and the starting bit in a BITS field.
* Subfield of: The value is generated automatically when you generate ADD metadata from Adabas data that includes a superdescriptor based on a subfield. A field is created to base this index on, starting at the offset specified as the value of the Subfield start field. If no value is entered in the Subfield start field, the subfield is set by default to an offset of 1.


Table 6-4 (Cont.) Metadata Properties

* Subfield start: The offset within the parent field where a subfield starts.

Indexes Tab
The Indexes tab lets you indicate ADD metadata describing the table indexes.
Note: This tab contains information only if the Organization field in the General tab is set to Index.
Figure 6-3 Data Source Metadata Indexes Tab

This tab has two sections. The first section lets you define the index keys for the columns in the table. The bottom of the tab lists the properties for each of the columns at the top.

Table Information
The following table describes the fields for the top part of the tab, which defines the indexes used for the table.
Table 6-5 Index Definitions Fields

* Name: The name of the column used as an index for the current table.
* Order: The row order that the index retrieves.


Table 6-5 (Cont.) Index Definitions Fields

* DB Command: Data source-specific commands for the index.

The buttons on the right side of the tab are used to manipulate the data in this section of the tab. The following table describes how you can move around in this section.
Table 6-6 Index Definition Buttons

* Insert: Inserts an index into the table.
* Rename Index: Lets you rename the selected index.
* Delete Index: Deletes the selected index.

Properties
You can set properties for each index column. To display the index properties, select a column from the Index Definitions (top) section. The properties for the column are displayed at the bottom of the tab. These properties describe the index or segment. The properties available depend on the data source.

Statistics Tab
The Statistics tab lets you specify metadata statistics for a table. Statistics can be updated with the Update Statistics utility.


Figure 6-4 Data Source Metadata Statistics Tab

This tab is divided into three sections:


* Table
* Columns
* Indexes

Table
Enter the statistical information for the table in this section.

* Rows: Enter or use the arrows to select the approximate number of rows in the table. If the value is -1, the number of rows in the table is unknown (no value was supplied and the update statistics utility was not run to update the value). A value of 0 indicates that this table is empty.
* Blocks: Enter or use the arrows to select the approximate number of blocks in the table.
Note: If no value is entered for the number of rows or the number of blocks, queries over the table may not be executed effectively.

Columns
Enter the cardinality for each of the columns in the table in this section.

* Column Name: The columns in the table.
* Cardinality: The number of distinct values for the column. If the value is -1, the number of distinct values for the column is unknown (a value was not supplied and the update statistics utility was not run to update the value). A value of 0 indicates that there are no distinct values for the column.

Indexes
Enter the cardinality for the columns in each of the tables indexes in this section.

* Indexes and Segments: The indexes and segments in the table.
* Cardinality: The number of distinct key values in the index. If the value is -1, the number of distinct key values in the index is unknown (no value was supplied and the update statistics utility was not run to update the value). A value of 0 indicates that there are no distinct key values in the index.

Update Button
Update: Opens the Update Statistics window. AIS collects information about tables, indexes, and column cardinalities. These statistics can be used to optimize a query using the Query Optimizer, which finds the most efficient way to perform a query across multiple machines. This is done using the metadata statistics.
Figure 6-5 Metadata Statistic Update Window

where:

* Type: The type of statistical information being added:
  * Estimated: An estimation of the amount of statistical information returned.
  * Estimated with Rows: An estimation of the amount of statistical information returned, including an estimation of the number of rows in the table. Specify the number in the text box. This number is used to shorten the time needed to produce the statistics, assuming that the value specified here is the correct value, or close to the correct value.

Working with Metadata in Attunity Studio 6-11

Note: When the number of rows in the table is not provided, the number of rows used is determined as the maximum value between the value specified in the tuning DsmMaxBufferSize environment property and the value set in the nRows attribute (specified as part of the metadata for the data source).

  * Exact: The exact statistical information returned. Note that this can be a lengthy task and can lead to disk space problems with large tables.

* Resolution: The level of the statistical information returned:
  * Default: Only information about the table and indexes is collected. Information for partial indexes and columns is not collected.
  * All Columns and Indexes: Information about the table, indexes, partial indexes, and columns is collected.
  * Select Columns and Indexes: Lets you select the columns and indexes for which you want to collect statistics. In the enabled list of columns and/or indexes, left-click the columns you want included (you can use Shift+click and Ctrl+click to select a number of columns and/or indexes).

The statistics are updated on the server, enabling work to continue in Attunity Studio. A message is displayed in Attunity Studio when the statistics on the server have been updated.

Modelling Tab
The Modelling tab lets you enter information about the virtual view policy for arrays. These parameters are valid only if you are using virtual array views. You configure virtual array views in the Modeling section of the binding Environment Properties.


The configurations made in this editor are for the selected table only. The same parameters are configured on the data source level in the data source editor.
Figure 6-6 Data Source Metadata Advanced Tab

Enter the following information in this tab:


* Generate sequential view: Select this to map non-relational files to a single table.
* Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
* Include row number column: Select one of the following:
  * true: Select true to include a column that specifies the row number in the virtual or sequential view. This applies to this table only, even if the data source is not configured to include the row number column.
  * false: Select false to not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
  * default: Select default to use the default data source behavior for this parameter.

For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.

* Inherit all parent columns: Select one of the following:
  * true: Select true for virtual views to include all the columns in the parent record. This applies to this table only, even if the data source is not configured to include all of the parent record columns.
  * false: Select false so that virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
  * default: Select default to use the default data source behavior for this parameter.

For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.

Importing Data Source Metadata with the Attunity Import Wizard


This section describes how to use the Import wizard for a basic metadata import. You can use this wizard with a non-relational data source that does not have its own metadata. The metadata is generated from metadata files, such as COBOL copybooks. If no COBOL copybook is available, the metadata has to be manually defined in the Attunity Studio Design perspective Metadata editor. For more information, see Managing Data Source Metadata. A basic COBOL Import wizard can be used with most of the Attunity data sources. The following data source drivers have wizards specific to that data source for importing metadata. These wizards are similar to the basic COBOL wizard, with special steps necessary for the specific data source.

* Adabas C Data Source
* Enscribe Data Source (HP NonStop Only)
* Flat File Data Source
* IMS/DB Data Sources
* RMS Data Source (OpenVMS Only)
* VSAM Data Source (z/OS)

For Importing Procedure Metadata, an import wizard is available for these procedure data sources:

* CICS Procedure Data Source


Note: Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not made to the Attunity metadata.

Starting the Import Process


This section describes the steps required to begin importing metadata for data sources that must use Attunity metadata. The metadata is generated from source files, such as COBOL copybooks.

To begin the import process
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with the data source where you are importing the metadata.
3. Expand the Bindings folder and expand the binding with the data source metadata you are working with.


4. Expand the Data Source folder.
5. Right-click the data source that you are working with and select Show Metadata View. The Metadata view opens with the selected data source displayed.

6. Right-click Imports under the data source and select New Import. The New Import dialog box is displayed, as shown in the following figure:

Figure 6-7 The New Import screen

7. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
8. Select the Import type from the list. The import types in the list depend on the data source you are working with.
9. Click Finish. The Metadata import wizard opens with the Get Input Files screen, as described in the Selecting the Input Files step.

Selecting the Input Files


This section describes the steps required to select the input files that are used to generate the metadata. It continues the process after the Starting the Import Process step.

To select the input files
1. Click Add in the Import wizard to add COBOL copybooks. The Add Resource screen is displayed, providing the option of selecting files from the local machine or copying the files from another machine using FTP. The following figure shows the Add Resource screen.


Figure 6-8 Add Resource Screen

2. If the files are on another machine, right-click My FTP Sites and select Add.
3. Set the FTP data connection by entering the server name where the COBOL copybooks reside and, if not using anonymous access, enter a valid username and password to access the machine.
4. To browse and transfer files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
5. Select the files to import and click Finish to start the transfer. The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In this type of case, repeat the import process. You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. The selected files are displayed in the Get Input Files screen of the wizard, as shown in the following figure.


Figure 6-9 Import Wizard

6. To manipulate table information or the fields in the table, right-click the table and select the option you want. The following options are available:
   * Fields manipulation: Access the Fields Manipulation screen to customize the field definitions.
   * Rename: Rename a table name. This option is used especially when more than one table is generated from the COBOL with the same name.
   * Set data location: Set the physical location of the data file for the table.
   * Set table attributes: Set table attributes. The table attributes are described in Table Attributes.
   * XSL manipulation: Specify an XSL transformation or JDOM document that is used to transform the table definition.
7. Click Next to go to the Applying Filters step.

Applying Filters
This section describes the steps required to apply filters to the COBOL copybook files used to generate the metadata. It continues the Selecting the Input Files step.

To apply filters
1. Click Next. The Apply Filters step is displayed in the editor.


Figure 6-10 Apply Filters Screen

2. Apply filters to the copybooks, as needed. The following COBOL filters are available:
   * COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
   * Compiler source: The compiler vendor.
   * Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
   * Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
   * Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
   * Prefix nested column: Prefix all nested columns with the previous level heading.
   * Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
   * Case sensitive: Specifies whether to consider case sensitivity or not.
   * Find: Searches for the specified value.
   * Replace with: Replaces the value specified in the Find field with the value specified here.

3. Click Next to go to the Selecting Tables step.


Selecting Tables
This section describes the steps required to select the tables from the COBOL Copybooks. The following procedure continues the Applying Filters step.
1. From the Select Tables screen, select the tables that you want to access. To select all tables, click Select All. To clear all the selected tables, click Unselect All. The following figure shows the Select Tables screen.

Figure 6-11 Select Tables Screen

The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables.
2. Select the tables that you want to access (that require Attunity metadata) and then click Next to go to the Import Manipulation step.

Import Manipulation
This section describes the operations available for manipulating the imported records (tables). It continues the Selecting Tables procedure. The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables. You can manipulate the general table data in the Import Manipulation screen.

To manipulate the table metadata
1. From the Import Manipulation screen (see the Import Manipulation Screen figure), right-click the table record marked with a validation error and select the relevant operation. See the Table Manipulation Options table for the available operations.


2. Repeat step 1 for all table records marked with a validation error. You resolve the issues in the Import Manipulation screen. Once all the validation error issues have been resolved, the Import Manipulation screen is displayed with no error indicators.
3. Click Next to continue to the Metadata Model Selection.

Import Manipulation Screen


The Import Manipulation screen is shown in the following figure:
Figure 6-12 Import Manipulation Screen

The upper area of the screen lists the COBOL copybook files and their validation status. The metadata source and location are also listed. The Validation tab at the lower area of the screen displays information about what needs to be resolved in order to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location). The following operations are available in the Import Manipulation screen:

* Resolving table names, where tables with the same name are generated from different files during the import.
* Selecting the physical location for the data.
* Selecting table attributes.
* Manipulating the fields generated from the COBOL, as follows:
  * Merging sequential fields into one (for simple fields).
  * Resolving variants by either marking a selector field or specifying that only one case of the variant is relevant.
  * Adding, deleting, hiding, or renaming fields.
  * Changing a data type.
  * Setting the field size and scale.
  * Changing the order of the fields.
  * Setting a field as nullable.
  * Selecting a counter field for fields with dimensions (arrays). You can select the array counter field from a list of potential fields.
  * Setting column-wise normalization for fields with dimensions (arrays). You can create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
  * Creating arrays and setting the array dimension.

The following table lists and describes the available operations when you right-click a table entry:
Table 6-7 Table Manipulation Options

* Fields Manipulation: Customizes the field definitions, using the Field Manipulation screen. You can also access this screen by double-clicking the required table record.
* Rename: Renames a table. This option is used especially when more than one table with the same name is generated from the COBOL.
* Set data location: Sets the physical location of the data file for the table.
* Set table attributes: Sets the table attributes.
* XSL manipulation: Specifies an XSL transformation or JDOM document that is used to transform the table definitions.
* Remove: Removes the table record.

You can manipulate the data in the table fields in the Field Manipulation Screen. Double-click a line in the Import Manipulation Screen to open the Field Manipulation Screen.

Field Manipulation Screen


The Field Manipulation screen lets you make changes to fields in a selected table. You get to the Field Manipulation screen through the Import Manipulation Screen. The Field Manipulation screen is shown in the following figure.


Figure 6-13 Field Manipulation Screen

You can carry out all of the available tasks in this screen through the menu or toolbar. You can also right-click anywhere in the screen and select any of the options available in the main menus from a shortcut menu. The following table describes the tasks that are done in this screen.
Table 6-8 Field Manipulation Screen Commands

General menu:

* Undo: Click to undo the last change made in the Field Manipulation screen.
* Select fixed offset: The offset of a field is usually calculated dynamically by the server at runtime according to the offset and size of the preceding column. Select this option to override this calculation and specify a fixed offset at design time. This can happen if there is a part of the buffer that you want to skip. When you select a fixed offset, you pin the offset for that column. The indicated value is used at runtime for the column instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.


Table 6-8 (Cont.) Field Manipulation Screen Commands

* Test import tables: Select this to create an SQL statement to test the import table. You can base the statement on the Full table or Selected columns. When you select this option, a screen opens with an SQL statement based on the table or column entered at the bottom of the screen. Enter the following in this screen:
  * Data file name: Enter the name of the file that contains the data you want to query.
  * Limit query results: Select this if you want to limit the results to a specified number of rows. Enter the number of rows you want returned in the following field. 100 is the default value.
  * Define Where Clause: Click Add to select a column to use in a WHERE clause. In the table below, you can add the operator, value, and other information. Click the columns to make the selections. To remove a WHERE clause, select the row with the WHERE clause you want to remove and then click Remove.
  The resulting SQL statement with any WHERE clauses that you added is displayed at the bottom of the screen. Click OK to send the query and test the table.

Attribute menu:

* Change data type: Select Change data type from the Attribute menu to activate the Type column, or click the Type column and select a new data type from the drop-down list.


Table 6-8 (Cont.) Field Manipulation Screen Commands

* Create array: This command allows you to add an array dimension to the field. Select this command to open the Create Array screen. Enter a number in the Array Dimension field and click OK to create the array for the column.
* Hide/Reveal field: Select a row from the Field Manipulation screen and select Hide field to hide the selected field from that row. If the field is hidden, you can select Reveal field.
* Set dimension: Select this to change or set a dimension for a field that has an array. Select Set dimension to open the Set Dimension screen. Edit the entry in the Array Dimension field and click OK to set the dimension for the selected array.
* Set field attribute: Select a row to set or edit the attributes for the field in the row. Select Set field attribute to open the Field Attribute screen. Click in the Value column for any of the properties listed and enter a new value or select a value from a drop-down list.
* Nullable/Not nullable: Select Nullable to activate the Nullable column in the Field Manipulation screen. You can also click in the column. Select the check box to make the field nullable. Clear the check box to make the field not nullable.
* Set scale: Select this to activate the Scale column, or click in the column and enter the number of places to display after the decimal point for a data type.
* Set size: Select this to activate the Size column, or click in the column and enter the total number of characters for a data type.


Table 6-8 (Cont.) Field Manipulation Screen Commands

Field menu:

* Add: Select this command or use the button to add a field to the table. If you select a row with a field (not a child of a field), you can add a child to that field. Select Add Field or Add Child to open a screen where you enter the name of the field or child; click OK to add the field or child to the table.
* Delete field: Select a row and then select Delete Field, or click the Delete Field button, to delete the field in the selected row.
* Move up or down: Select a row and use the arrows to move it up or down in the list.
* Rename field: Select Rename field to make the Name field active. Change the name and then click outside of the field.

Structures menu:

* Columnwise Normalization: Select Columnwise Normalization to create new fields instead of the array field, where the number of generated fields is determined by the array dimension.


Table 6-8 (Cont.) Field Manipulation Screen Commands

Combining sequential fields: Select Combining sequential fields to combine two or more sequential fields into one simple field. A dialog box opens. Enter the following information in the Combining sequential fields screen:

- First field name: Select the first field in the table to include in the combined field.
- End field name: Select the last field to be included in the combined field. Make sure that the fields are sequential.
- Enter field name: Enter a name for the new combined field.

Flatten group: Select Flatten Group to flatten a field that is an array. This field must be defined as Group for its data type. When you flatten an array field, the entries in the array are spread into a new table, with each entry in its own field. A screen provides the following flattening options:

- Select Recursive operation to repeat the flattening process on all levels. For example, if there are multiple child fields in this group, you can place the values for each field into the new table when you select this option.
- Select Use parent name as prefix to use the name of the parent field as a prefix when creating the new fields. For example, if the parent field is called Car Details and you have a child in the array called Color, the new field created in the flattening operation is called Car Details_Color.


Table 6-8 (Cont.) Field Manipulation Screen Commands

Mark selector: Select Mark selector to select the selector field for a variant. This is available only for variant data types. Select the Selector field from the screen that opens.

Replace variant: Select Replace variant to replace a variant's selector field.

Select counter field: Select Counter Field to open a screen where you select a field that is the counter for an array dimension.
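As an illustration of the statement that the Test import tables command builds (the STUDENT table and ID column here are hypothetical; the actual statement depends on the table and columns you choose), selecting the Full table option and adding a single Where clause yields something like:

SELECT * FROM STUDENT WHERE ID = 1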

Metadata Model Selection


This section lets you generate virtual and sequential views for imported tables containing arrays. In addition, you can configure the properties of the generated views. It continues the Import Manipulation procedure and allows you to flatten tables that contain arrays. In the Metadata Model Selection step, you can configure values that apply to all tables in the import, or set specific settings for each table.

To configure the metadata model

Select one of the following:

- Default values for all tables: Select this if you want to configure the same values for all the tables in the import. Make the following selections when using this option:
  - Generate sequential view: Select this to map non-relational files to a single table.
  - Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
  - Include row number column: Select one of the following:
    - true: Select true to include a column that specifies the row number in the virtual or sequential view. This is true for this table only, even if the data source is not configured to include the row number column.
    - false: Select false to not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
    - default: Select default to use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
  - Inherit all parent columns: Select one of the following:
    - true: Select true for virtual views to include all the columns in the parent record. This is true for this table only, even if the data source is not configured to include all of the parent record columns.
    - false: Select false so that virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
    - default: Select default to use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
- Specific virtual array view settings per table: Select this to set different values for each table in the import. This overrides the data source default for that table. Make the selections in the table under this selection. See the item above for an explanation.

The Metadata Model Selection screen is shown in the following figure:


Figure 6-14 The Metadata Model Selection Screen

Import the Metadata


This section describes the steps required to import the metadata to the target computer. It continues the Metadata Model Selection step. You can now import the metadata to the computer where the data source is located, or import it later (in case the target computer is not available).

To transfer the metadata
1. Select Yes to transfer the metadata to the target computer immediately, or No to transfer the metadata later.
2. Click Finish.

The Import Metadata screen is shown in the following figure:


Figure 6-15 The Import Metadata Screen

Working with Application Adapter Metadata


Adapter metadata defines the interactions for the application adapter and the structures of any input and output records used by the interactions. The metadata is stored, in XML format, as an application adapter definition in the SYS repository, on the machine where the adapter is defined. For information on the adapter metadata schema and using adapters in AIS, see Implementing an Application Access Solution. The following explains how to access the adapter metadata in Attunity Studio.

To access adapter metadata
1. Open Attunity Studio.
2. In the Design Perspective Configuration View, expand the Machines folder and then expand the machine where you want to add the data source.
3. Expand the Bindings folder and then expand the binding with the adapter metadata you are working with.
4. Expand the Adapters folder and then right-click the adapter for which you want to manage the metadata.
5. Select Show Metadata View from the shortcut menu.

You can create and edit the following adapter metadata properties:

- Adapter Metadata General Properties: Enter and edit information about the adapter, such as the adapter name and the way in which you connect to the adapter. You make these changes in the Design perspective, Metadata view.
- Adapter Metadata Schema Records: The input and output record structure for a record in the adapter definition.
- Adapter Metadata Interactions: Enter details of an interaction. The interaction Advanced tab is displayed for some adapters only, such as the Database adapter, and includes more details about the interaction.

Adapter Metadata General Properties


You can enter and edit information about the adapter, such as the adapter name and the way in which you connect to the adapter. You make these changes in the Design perspective, Metadata view. The following describes how to begin editing general adapter metadata information.

To edit general adapter metadata information
1. In the Attunity Studio Design perspective, Metadata view, expand the Adapters folder.
2. Right-click the adapter that you want to edit, and select Open. The General Properties editor is displayed.

The following figure shows the General Properties editor.


Figure 6-16 Adapter Metadata General Properties

The following table describes the fields in this tab:


Table 6-9 Adapter Metadata General Tab

Name: The name of the adapter definition. The definition name is usually the same as the name representing the adapter in the binding.
Description: A description of the adapter definition.


Version: The schema version.
Header: A C header file with the data structures for the adapter. This header file is used with the C API for applications.
Authentication mechanism: The authentication method. Select from:
- kerbv5
- none
- basic
- password
Max Request Size: The maximum size, in bytes, for an XML ACX request or response. Larger messages are rejected with an error.
Max Active connections: The maximum number of machines that can connect to the adapter at a time. By default, this number is unlimited.
Max Idle timeout: The maximum amount of time without activity before the connection is timed out. By default, this is 600 seconds.
Adapter Specifications: The maximum number of simultaneous connections for an adapter (per process).

Adapter Metadata Schema Records


You edit the input and output record structure for a record in the adapter Record Definition Properties editor. The following describes how to create the adapter schema records. For information on how to open the editor to edit existing schema records, see Editing an Existing Schema Definition.

To create adapter schema records
1. Open Attunity Studio.
2. In the Design Perspective Metadata View, expand the Adapters folder.
3. Right-click Schema and select New record.
4. Enter a name for the new record definition.
5. Click OK. The Schema Record editor is displayed. You define field grouping in the Schema Record editor.
6. Click New Field to add new fields for the record, describing the data structure for the adapter.
7. Continue adding fields until the record structure is complete.

The following figure shows the Schema Record editor.


Figure 6-17 Schema Record Editor

The following table describes the fields in this tab.


Table 6-10 Schema Record Tab

Fields list: Defines the single data items within a record. This section has a table with the following three columns:
- Name: The name of the field.
- Type: The data type of the field. See the Valid Data Types table for a list of the valid data types.
- Length: The size of the field, including a null terminator when the data type supports null termination (such as the string data type).

Specifications: Defines specific field properties. To display the properties, select the specific field in the Fields list.

The following table describes the valid data types that can be used when defining these specifications in the Schema Record editor.
Table 6-11 Valid Data Types

Binary, Boolean, Byte, Date, Double, Enum, Float, Int, Long, Numeric[(p[,s])], Short, String, Time, Timestamp


Editing an Existing Schema Definition


This section describes how to open the editor for an existing schema record.

To edit an existing schema record
1. Open Attunity Studio.
2. In the Design Perspective Metadata View, expand the Adapters folder.
3. Expand the Schema.
4. Right-click the record you want to edit and select Open. The Schema Record editor for that record is displayed. You can add fields and make any necessary changes. For information on how to work in this editor, see Adapter Metadata Schema Records.

Adapter Metadata Interactions


You define the details of an interaction, in addition to its input and output definitions and additional information specific to the interaction, in the Interaction editor. The following describes how to create interactions. For information on how to open the editor to edit existing interactions, see Editing an Existing Interaction.

To create interactions
1. Open Attunity Studio.
2. In the Design Perspective Metadata View, expand the Adapters folder.
3. Right-click the Interactions folder and select New.
4. Enter a name for the new interaction.
5. Click OK. The Interaction editor is displayed. You define the interaction properties and its input and output properties in this editor.
6. Make any changes in the editor that you need. The table below describes the Interaction editor.

The following figure shows the Interaction editor.


Figure 6-18 Interaction Editor

The following table describes the fields in this tab:


Table 6-12 Adapter Metadata Interaction Tab

Description: A description of the interaction (optional).
Mode: Select the interaction mode from the following:
- sync-send-receive: The interaction sends a request and expects to receive a response.
- sync-send: The interaction sends a request and does not expect to receive a response.
- async-send: The interaction sends a request that is divorced from the current interaction. This mode is used with events, to identify an event request.
Input/Output record: Identifies the input and output records.
Interaction Specific Parameters: Specific properties for the interaction. When an Interaction Advanced tab is used, this section is not displayed. The parameters shown are for a Legacy Plug Adapter.

Editing an Existing Interaction


This section describes how to open the editor for an existing interaction.

To edit an existing interaction
1. Open Attunity Studio.
2. In the Design Perspective Metadata View, expand the Adapters folder.
3. Expand the Interactions folder.
4. Right-click the interaction you want to edit and select Open. The Interaction editor for that record is displayed. You can add fields and make any necessary changes. For information on how to work in this editor, see Adapter Metadata Interactions.

Interaction Advanced Tab


In the Interaction editor, click Advanced to open the Interaction Advanced tab. This tab is only available for adapters that support complex interactions, such as the data source adapter. Use this tab to enter advanced details for the interaction. The following figure shows the Interaction Advanced tab.

Figure 6-19 Interaction Advanced Tab

The fields in this tab depend on the specific adapter. In the above screen, the tab is displayed for the Database adapter and enables building or modifying an SQL statement. For information on how this tab relates to the Database adapter, see Adapter Metadata Schema Records.

Working with Procedure Metadata


You use procedure metadata when using Procedure data sources (drivers). See Procedure Data Source Reference for a list of available data sources. You can create and import procedure metadata in Attunity Studio. Procedure metadata defines the input and output structures that are sent to and returned from a procedure. The following sections describe how to work with procedure metadata in Attunity Studio.


- Manually Creating Procedure Metadata
- Managing Procedure Metadata

Manually Creating Procedure Metadata


The following procedure describes how to create procedure metadata when a COBOL copybook is not available.

To create procedure driver metadata
1. In the Design perspective Configuration view, right-click the procedure for which you want to manage the metadata, and select Show Metadata View. The Metadata tab is displayed with the selected procedure driver displayed.
2. Right-click Procedures, and select New Procedure.
3. Enter a name for the new procedure definition, and click OK. The editor opens with the General properties tabs displayed.
4. Define the metadata as described for the specific procedure driver.

Managing Procedure Metadata


Metadata can be viewed and modified in the Attunity Studio Design perspective Metadata tab.

To edit the metadata
1. Right-click the procedure for which you want to manage the metadata, and select Show Metadata View.
2. Click Procedures to view the defined procedures.
3. Right-click the required procedure (or double-click it) to view or edit its properties. The editor that opens differs depending on which procedure driver you are working with. See the section that describes the driver you are working with for more information:

- CICS Procedure Data Source
- Procedure Data Source (Application Connector)
- Natural/CICS Procedure Data Source (z/OS)

Importing Procedure Metadata


Import wizards are available for the following procedure data sources:

- CICS Procedure Data Source
- Procedure Data Source (Application Connector)

For other procedures, the metadata is created manually, as described in Adapter Metadata Schema Records. You start importing procedure metadata in the same way as Importing Data Source Metadata with the Attunity Import Wizard.



7
Handling Arrays
This section describes the methods that Attunity Connect provides for handling arrays. It includes the following topics:

- Overview of Handling Arrays
- Representing Metadata
- Methods of Handling Arrays

Overview of Handling Arrays


Attunity Connect exposes a purely relational front-end through APIs like ODBC, JDBC, and ADO. However, it connects to non-relational data sources, which are based on non-relational data models. As such, Attunity Connect provides a logical mapping that exposes the non-relational constructs in a relational manner. The most prevalent problem in this domain is the issue of arrays, or OCCURS constructs, which is described in this section.

Representing Metadata
Before looking at the different methods of handling arrays, you should understand how metadata is represented in Attunity Studio. The following figure shows an example in COBOL that illustrates arrays and nested arrays. When you import this metadata into Attunity Studio, the import process creates an ADD definition that is equivalent to the original structure, usually mapping the fields one-to-one.


Figure 7-1 Metadata Example in COBOL

The following figure shows how Attunity Studio represents the same data in XML.


Figure 7-2 XML Representation of the COBOL Metadata in Attunity Studio

On the Columns tab, Attunity Studio represents the same metadata as shown in the following figure.


Figure 7-3 Representation of Metadata on the Columns Tab in Attunity Studio

Finally, if you run the NavSQL > native STUDENT command from the NavSQL utility, the same metadata is represented as shown in the following figure.


Figure 7-4 Representation of Metadata in the NavSQL Utility

Methods of Handling Arrays


Attunity Studio lets you handle arrays by using one of the following methods:

- Columnwise Normalization
- Virtual Tables
- Virtual Views
- Sequential Flattening (Bulk Load of Array Data)
- ADO/OLE DB Chapters
- XML

Columnwise Normalization
Some small arrays are merely a shorthand notation for a sequence of simple columns. For example, an address field with three lines on the screen may have been represented in a COBOL copybook by an OCCURS clause that is used three times, as a shorthand notation for writing three columns, Address_1, Address_2, and Address_3.
Figure 7-5 Simple Array in COBOL

This is not a real case of hierarchical dependency of data that needs to be modeled in the relational world. The simplest approach for dealing with this simple class of arrays is to replace them with simple columns during the import process. This process of replacing a small array by a sequence of columns is referred to as columnwise normalization. Columnwise normalization is useful for simple arrays that share any of the following characteristics:

- They show a small number of occurrences.
- They do not possess a counter.
- They contain only one field.

You can choose to normalize an array columnwise in the Import Manipulation panel in Attunity Studio. The following figures show an example.
Figure 7-6 Selecting Columnwise Normalization in the Import Manipulation Panel


Figure 7-7 Flat View Produced by Columnwise Normalization
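The flat view in the figure is an image; as a minimal sketch (the table name is assumed for illustration), the three-line address array from the example above is replaced by three plain columns that can be queried directly:

SELECT Address_1, Address_2, Address_3
FROM   customer

No join or array navigation is needed, because the occurrences were expanded into ordinary columns at import time.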

Virtual Tables
Exposing arrays as virtual tables is a commonly used technique to handle arrays. It generates a virtual table for every array in the parent record, with specially generated virtual fields that connect the parent and the virtual table. For example, an array called course in a table called student is represented by the virtual table STUDENT_COURSE. Attunity Studio generates, updates, and removes virtual tables automatically, according to the status of their parent table, and names them by appending the array name to the parent name. When an array includes another array, the name of the resulting virtual table consists of the parent name, the array name, and the name of the nested array, as follows:

parentName_arrayName_nestedArrayName

The number of nested-array levels is not limited. Virtual tables include the following columns:

- The array member fields from the original structure.
- A column called _PARENT, which identifies the parent record. This column is a string representation of the parent record's bookmark.
- A column called _ROWNUM, which is the ordinal that identifies the array record within the parent table.

The _PARENT and _ROWNUM columns are generated automatically and cannot be updated. Together, they uniquely identify each row in the virtual table. To identify the array field in the parent record when joining parent and child tables, you can use the _PARENT field of the virtual table. You cannot edit the definition of virtual tables; you can only manipulate the parent table. Attunity Studio indicates virtual tables by using a differently colored icon in the Metadata view, as shown in the following figure.


Figure 7-8 Display of Virtual Tables in Attunity Studio

Right-clicking a table in the Metadata view and selecting SQL View opens the SQL view. The following figure shows a sample table that includes the _PARENT and _ROWNUM fields and replaces the entire array ASSIGNMENTS by a varchar(64) field that includes a string representation of the bookmark.

Figure 7-9 SQL View of the Virtual Table STUDENT_COURSE

When you look at an actual record, the same fields (_PARENT, _ROWNUM, and NUMOF_ASSIGNMENTS) are displayed, as shown in the following figure. You can use the ASSIGNMENTS field to join with the STUDENT_COURSE_ASSIGNMENTS virtual table.


Figure 7-10 Actual Record Showing Same Fields as SQL View

The metadata that the ADD stores for the virtual table is only a pointer to the parent table, as indicated by the attribute name, basedOn. It is filled in from the parent table upon first access. You can join the parent and the child by using the courses and _PARENT fields as the join criteria, as shown in the following figure.
Figure 7-11 Joining Parent and Child
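The join in the figure is an image; it can be sketched in SQL as follows (STUDENT and STUDENT_COURSE follow the example above, and the other column names are assumed for illustration):

SELECT S.FIRST_NAME, S.LAST_NAME, C.COURSE_TITLE
FROM   STUDENT S, STUDENT_COURSE C
WHERE  C._PARENT = S.COURSES

Because _PARENT holds a string representation of the parent record's bookmark, equating it with the parent's array column is all that is needed to pair each course with its student.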

Equally, you can join parent, child, and grandchild, as shown in the following figure.
Figure 7-12 Joining Parent, Child, and Grandchild
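Extending the same pattern one level down (again with assumed column names), the grandchild virtual table joins on its own _PARENT field:

SELECT S.LAST_NAME, C.COURSE_TITLE, A.ASSIGNMENT_TITLE
FROM   STUDENT S, STUDENT_COURSE C, STUDENT_COURSE_ASSIGNMENTS A
WHERE  C._PARENT = S.COURSES
AND    A._PARENT = C.ASSIGNMENTS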


When joining a table and an array table, AIS reads every physical record only once, thus maximizing the join's efficiency in terms of I/O. The virtual table method supports all DML operations, with the following limitations:

- Arrays without counters (DEPENDING ON clauses) only support UPDATE commands.
- INSERT commands ignore the _ROWNUM column and add data at the end.
- DELETE commands that remove data from the middle of an array move the following elements up and update the counter automatically. Therefore, if you want to delete all the elements from an array, you cannot perform the following loop:

  for (i=1; i<=counter; i++)
      delete from array where _PARENT=x and _ROWNUM=i

  This loop does the following:
  1. It deletes the first member of the array.
  2. It moves up the second member of the array so that it becomes the first member.
  3. It deletes the second member of the array, which was originally the third member. It does not delete the original second member because that is now the first member of the array.

  Alternatively, you can use one of the following loops:

  while (counter>0)
      delete from array where _PARENT=x and _ROWNUM=1

  This loop keeps deleting the first member of the array until no member is left to move up. The counter is updated by AIS.

  for (i=counter; i>0; i--)
      delete from array where _PARENT=x and _ROWNUM=i

  This loop deletes the members of the array from last to first. It is slightly more efficient because it avoids the moving of members.

Note: Every write operation causes physical I/O on the parent record. If you need to use a lot of DML operations on arrays, consider using the XML approach instead.

Virtual Views
Virtual views are a more recent method of handling arrays in AIS. They eliminate the need for a _PARENT field, replacing it by primary key fields from the parent. This makes the processing of array records easier for most applications. Per binding, you can handle arrays either as virtual tables or as virtual views; you cannot mix the two models in the same binding. However, when you switch between virtual tables and virtual views, you do not need to make any changes to the metadata because they both use the same table definitions in the ADD. During the import procedure, Attunity Studio generates virtual views and names them by appending the array name to the parent name. When an array includes another array, the name of the resulting virtual view consists of the parent name, the array name, and the name of the nested array, as follows:

parentName_arrayName_nestedArrayName

The number of nested-array levels is not limited. Virtual views include the following:

- The array member columns from the original structure.
- The fields from the parent's first unique key, or all parent fields, depending on the selection during the import process. If all parent fields are included in the virtual view, the parent's indexes are available in the view definition and can be used for efficient optimization strategies.

  Note: Inherited keys lose their uniqueness in the virtual view.

- If the view does not include all parent fields, the primary key fields (if the primary key is not the parent's first unique key).
- A column called <array>_ROWNUM, which is the ordinal that identifies the array record within the parent table.

The unique key and <array>_ROWNUM columns are generated automatically. Together, they uniquely identify each row in the virtual view and form a unique key. When working with virtual views, consider the following limitations:

- Virtual views are read-only.
- Virtual views currently do not support arrays within variants that have a selector field.

Including all parent fields in the virtual view greatly reduces the need for performing join operations because this in itself is an implicit join. However, if you do perform a join, it is not as efficient as a join of virtual tables because AIS reads each record twice, not once. This means that you are better advised to use Virtual Tables if you need to perform complex, explicit joins. In general, though, the query processor can devise efficient access strategies because AIS copies all relevant indexes from the parent to the virtual view. Attunity Studio indicates virtual views by using a differently colored icon in the Metadata view, as shown in Display of Virtual Tables in Attunity Studio.
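As a minimal sketch (assuming the STUDENT example above, with ID as the parent's unique key; the column names are otherwise illustrative), a virtual view can be read directly, without an explicit join, because the parent key columns are inherited:

SELECT ID, COURSE_TITLE, COURSE_ROWNUM
FROM   STUDENT_COURSE
WHERE  ID = 1

Here ID together with COURSE_ROWNUM uniquely identifies each row of the view.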

Sequential Flattening (Bulk Load of Array Data)


Performing a bulk load of complex data from a non-relational system to a relational database requires a carefully thought-out algorithm that keeps I/O operations at a minimum. In a bulk load scenario, methods such as Columnwise Normalization, Virtual Tables, and Virtual Views require a full scan of the physical file for every single array. An efficient method of performing this task is a kind of row-wise normalization, called sequential flattening. This method reads all data in the physical file in a single scan. Sequential flattening replaces arrays in a non-relational system by a sequence of rows. It maps all the record fields of the non-relational file to a single table that contains both parent and child records. In this way, sequential flattening enables the reception of a stream of data by using a single SELECT statement.

The sequentially flattened view of a complex table is referred to as a single table. You can choose to create a single table in Attunity Studio by selecting Sequential View during the Metadata Model Selection step.

Figure 7-13 Selecting a Single Table View

The flattened table is called <table>_ST, where <table> is the name of the parent table and ST indicates a single table. For example, if a parent table is called STUDENT, the single table is called STUDENT_ST. The structure of the single table is identical to the original table's structure, except that AIS removes all array dimensions and adds some control fields. When reading a record, AIS performs a tree traversal of the parent and its array hierarchy. Each record in the resulting recordset deals with a specific array member; other arrays are nulled out, and the __LEVEL control field indicates the current array in the tree traversal's hierarchy. The sequentially flattened single table includes the following columns:

- The parent fields, that is, the non-repeating fields.
- The array fields for all arrays within the parent.
- A column called __LEVEL, which indicates the current child level. The value comprises a concatenation of parent and child names.
- A column called __SEQUENCE, which indicates the sequence of the single table row in the physical, non-relational table. A value of 1 indicates a parent record.
- For each array, an optional column called <array>_ROWNUM, which identifies the row in the array. This column is generated automatically for the array.

The sequentially flattened single table includes the following rows:

- A row for each parent record only, without reference to the arrays. All rows with an empty __LEVEL column and a value of 1 in the __SEQUENCE column indicate a parent record only.
- A record for each array record.

The following figure represents a non-relational data source with arrays.


Figure 7-14 Non-relational Data Source with Arrays

The following figure shows the metadata that sequential flattening produces for this data source.
Figure 7-15 SQL View of the Single Table's Metadata


The next figure shows the actual single table. It contains a column for each row in the SQL view above.

Figure 7-16 STUDENT_ST with All Parent and Child Records

In the figure, a set of rows belongs to a single parent record. The rows detail the first course: the first row shows the course itself, and the next three rows show the course assignments. Once the first course has been retrieved, the next course is retrieved with its assignments. Following all the courses, the next array (BOOK) is retrieved.

During a bulk load, the rows are read one by one and written to the appropriate target file for each row, based on the __LEVEL field.
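A bulk loader can therefore be sketched as a single sequential SELECT over the single table (the control field names are as documented above; the data columns follow the STUDENT example and are otherwise assumed), dispatching each row on its __LEVEL value:

SELECT __LEVEL, __SEQUENCE,
       ID, LAST_NAME,
       COURSE_ROWNUM, COURSE_TITLE,
       ASSIGNMENT_TITLE
FROM   STUDENT_ST

Rows with an empty __LEVEL and a __SEQUENCE of 1 are written to the parent target; every other row is written to the target that corresponds to its __LEVEL value.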

ADO/OLE DB Chapters
ADO and OLE DB introduced the concept of an embedded rowset, called a chapter, as part of the API. A chapter is a result set embedded in a single column within the primary resultset that you can drill down to. This model works well with complex legacy data that includes arrays. AIS supports chapters and lets you update them. From a metadata perspective, the array is represented by a chapter column and contains a chapter identifier. As such, the chapter functions as a handle to the data that it links to. Any ADO-based program, particularly a Visual Basic program, can use chapters to handle arrays; one example is the Attunity-provided SQL sample utility, ChapView. The following figure illustrates how ChapView handles chapters.


Figure 7-17 Chapter Handling in ChapView


Double-clicking the COURSE [CHAPTER] field opens the Chapter - COURSE window. Double-clicking the ASSIGNMENTS [CHAPTER] field opens the Chapter - COURSE/ASSIGNMENTS window.

A look at a Microsoft Visual Basic code snippet using ADO helps to understand how chapters are embedded.

Figure 7-18 Visual Basic Code Snippet Using ADO

In the snippet, oRST is the main recordset, and oRSTChapter is used for the embedded recordset; both are simple recordset objects. The embedded recordset is the value of the COURSE column in the main recordset. From there on, everything is simple Visual Basic recordset syntax.


Note: Chapters are not supported with virtual views and sequential flattening. However, you can use chapters and virtual views in the same Visual Basic program, as needed.

Chapter Handling in Query and Database Adapters


Query and database adapters support chapters. In effect, they open every chapter that they encounter and include the embedded rowset within the resulting XML document. For example, the query select * from student->course produces the output shown in the following figure.
Figure 7-19 Chapters Represented in XML

For more information on chapters, see the OLE DB and ADO documentation.


XML
Forcing non-relational structures that include arrays to fit the relational mould is an endless source of complexity, with many potential efficiency issues. The method of using XML to handle arrays is a way to get around the restrictions of the relational model while maintaining the use of SQL. It makes conventional use of SQL, but instead of specifying a list of columns in the select clause, it transfers the record's data to the client application as a BLOB column that contains the XML representation of the data. This method works for both read and write operations. By using XML, you can have a client application write an entire complex structure in a single I/O operation. Approaching the same task with virtual tables would result in multiple I/O operations (typically one I/O operation per array member). To be able to use XML, you need to make sure that the environment property exposeXmlField in the misc section of the binding definition is set to true. By default, this property is set to false. You can use the XML method from any supported client interface, including JDBC, ODBC, and ADO. It is also commonly used in the context of the query or database adapter. If you use a query adapter, the XML documents that result from a select * clause and a select XML clause are almost identical. However, the select XML clause produces a natural XML representation, while the select * clause produces a hierarchical rowset representation of the record, as shown in the following figures.


Figure 7-20 Output of a Conventional select * Clause


Figure 7-21 Output of a select xml Clause
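As a sketch of the two forms compared in the figures (the table name follows the NAVDEMO example used in this chapter, and the exposeXmlField property must be set to true for the second form):

-- Hierarchical rowset representation
SELECT * FROM navdemo:student

-- Natural XML representation, returned as a single XML BLOB column
SELECT xml FROM navdemo:student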


Similarly, you can combine DML with XML to insert a complex record with arrays during a single I/O operation. For example, you could run the following query to add another record to the STUDENT table:
<update sql="insert into navdemo:student(xml) values(?)" executionMode="insertUpdateXml"> <inputParameter type="xml"> <STUDENT ID="1" FIRST_NAME="JOHN" LAST_NAME="SMITH" DATE_OF_ BIRTH="1984-12-01"> <COURSES COURSE_ID="1" COURSE_TITLE="MATH 1" INSTRUCTOR_ID="1"> <ASSIGNMENTS ASSIGNMENT_TYPE="QUIZ" ASSIGNMENT_TITLE="DERIVATIVES-1" DUE DATE="2004-03-14" GRADE="4.0"/> <ASSIGNMENTS ASSIGNMENT_TYPE="QUIZ" ASSIGNMENT_TITLE="DERIVATIVES" DUE_ DATE="2004-03-14" GRADE="4.5"/> </COURSES> <BOOKS ISBN="1234" RETURN_DATE="2004-05-02"/> </STUDENT> </inputParameter> </update>

Making use of XML is recommended when your application environment is Web-based. This method has the lowest overhead and performs better than chapters in Web environments.


8
Using SQL
This section contains the following topics:

- Overview of Using SQL
- Batching SQL Statements
- Hierarchical Queries
- Copying Data From One Table to Another
- Passthru SQL
- Writing Queries Using SQL
- Locking Considerations
- Managing the Execution of Queries over Large Tables

Overview of Using SQL


Within an application, you can use the following versions of SQL to access data:

- SQL specific to the data source you want to access.
- Attunity SQL, which is based on standard ANSI '92 SQL. The full syntax for Attunity SQL is described in Attunity SQL Syntax. Attunity SQL incorporates enhancements, including SQL access to data sources that do not natively support SQL.

Whatever version of SQL you use, you can always use the Attunity SQL extensions to incorporate additional features. For information about customizing the way the SQL is processed, based on the data source being accessed, see Using the Attunity Connect Syntax File (NAV.SYN). You can test SQL interactively to ensure the correct results, using NAV_UTIL EXECUTE.

Batching SQL Statements


You can process multiple queries within an ADO or ODBC application by batching the SQL statements, one after the other, separated by semi-colons (;), as in the following example:
sql1;sql2;sql3

Parameters are passed as a group for all the queries in the same order as they appear in the individual queries.
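For example (an illustration that reuses table names from this guide's sample queries), three batched statements return three successive result sets, and their parameters are supplied together, in query order:

SELECT n_name FROM nation;
SELECT c_name FROM customer WHERE c_custkey = ?;
SELECT o_orderkey FROM torder WHERE o_custkey = ?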


You cannot use this syntax to batch SQL statements from a Java application, via the NAV_UTIL EXECUTE utility, or from the ADO Demo application supplied with Attunity Connect.

ADO Notes

Set the Multiple Results connection property to enable multiple results (before executing the compound query):
oConn.Properties("Multiple Results") = 1

The results of the first query are displayed. To see the results of the next query, request NextRecordSet.

Hierarchical Queries
This section contains the following topics:

- Generating Hierarchical Results Using SQL
- Accessing Hierarchical Data Using SQL
- Flattening Hierarchical Data Using SQL
- Using Virtual Tables to Represent Hierarchical Data
- Hierarchical Queries From an Application

A hierarchical query is a query whose result is a hierarchy of rowsets linked by chapters (see Chapter), reflecting parent-child relationships. For example, Customers and Orders may constitute a hierarchical rowset, with each chapter of the child Orders rowset corresponding to all of the orders of one customer in the parent Customers rowset. Rowsets with arrays of structures as columns (which are supported by certain providers) are modeled such that the rows of an array constitute the children of a column in the containing parent row. Hierarchical queries enable you to do the following:

- Arrange rowsets resulting from a query in a hierarchy, reflecting a parent-child relationship. Use nested SELECT statements to do this.
- Manipulate data that is stored hierarchically in a data source (such as information stored in arrays in RMS). Currently, arrays are supported in the Adabas, CISAM, DISAM, DBMS, Enscribe, RMS, and VSAM drivers. You can handle this type of data in the following ways:
  - By including the hierarchical data as chapters, reflecting a parent-child relationship.
  - By flattening the hierarchical data (see Flattening Hierarchical Data Using SQL).
  - By using virtual tables to represent the data (see Using Virtual Tables to Represent Hierarchical Data). You can use virtual driver columns in a DBMS database to produce a chaptered result.

See Hierarchical Queries From an Application to see how the SQL is incorporated in an application.

Chapter
A chapter is a group of rows within a hierarchy; the chapter constitutes a collection of children of some row and column in a parent rowset. The column in the parent rowset is called a chapter column and contains a chapter identifier. The name of the column is also the name identifying the child rowset (which is meaningful only in the context of the parent rowset).

Generating Hierarchical Results Using SQL


A hierarchical query nests a SELECT statement as one of the columns of the rowset retrieved by another SELECT statement. You use braces ({}) to delimit the nesting. This type of query generates a chapter, which enables you to incorporate drill-down operations in the application.

Example
The following hierarchical query produces a child rowset:
SELECT C_name,
  {SELECT O_orderkey,
    {SELECT L_partkey, L_linenumber
     FROM lineitem
     WHERE L_orderkey = O_orderkey} AS items
   FROM torder
   WHERE O_custkey = C_custkey} AS orders
FROM customer

The main (root) rowset has two columns. The second column (orders) is a chapter. The result has a three-tier hierarchical structure as displayed in this figure.
Figure 8-1 Hierarchical SQL Query Producing Child Rowset

In the SQL Utility, click a field in the second column ([CHAPTER]) to display the contents of the chapter. The second column can be opened to display another (child) rowset. This child rowset includes items, a chaptered column that can be opened to display another child rowset (showing the L_partkey and L_linenumber columns for the opened chapter) as displayed in this figure:
Figure 8-2 Hierarchical SQL Query Producing Multiple Child Rowsets


You can display chapters for only one parent row at a time. For example, you can display the set of orders for only one customer at a time. You can perform drill-down operations from ADO-, ODBC-, and JDBC-based applications. For details, see Hierarchical Queries From an Application.

Accessing Hierarchical Data Using SQL


Data stored hierarchically in a data source (such as information stored in arrays in RMS) can be referenced by using -> to denote the parent-child relationship in the source:
FROM parent_name->chapter1->chapter2 [alias]

Or, using an alias for the parent table:


FROM parent_alias->chapter1->chapter2 [alias]

For example, the following hierarchical query uses an alias and produces a list containing the children stored in an array called hchild belonging to each employee:
SELECT emp_id,(SELECT name,age FROM e->hchild) FROM disam:emp_struct e

For more details about nesting a SELECT statement in the FROM clause of another SELECT statement, refer to Flattening Hierarchical Data Using SQL. Without an alias the query lists for each employee all of the children of all of the employees:
SELECT emp_id,(SELECT name,age FROM disam:emp_struct->hchild) FROM disam:emp_struct

The chaptered data is specified as part of the source syntax of the FROM clause. You can use chaptered data:

- In an outer SELECT statement, to flatten the chaptered data.
- Anywhere a FROM clause is used.

Examples
The following examples assume hierarchical data with a parent employees rowset called emp_struct. This rowset has ordinary columns plus a chapter column called hchild (one row for each of the children of this employee). The child rowset hchild itself has (in addition to ordinary columns) a chapter column called hschool (one row for each school attended by this child).
Example 8-1 All Employees' Children

The following query retrieves information about children of all employees:


SELECT * FROM emp_struct->hchild

Access to the hchild rowsets is via the parent rowset emp_struct. Apart from this access, the employee data is not required by the query.

When child rowsets can be accessed as normal tables, it is generally more efficient to access them directly. The presumption in the previous example is that the child information is accessible only through a parent rowset (such as emp_struct). If both the father and the mother of a particular child are employees, this child will appear in two different chapters (once through the father rowset and again through the mother rowset). This query may therefore return the information for this child twice. If this behavior is undesirable, use the SELECT DISTINCT * syntax.
Example 8-2 Children's Schools

The following query counts the number of distinct schools where the children of all employees studied:
SELECT COUNT(DISTINCT school_name) FROM emp_struct->hchild->hschool

This example illustrates a two-level hierarchy, with the child rowset hchild itself serving as a parent of the child rowset hschool.
Example 8-3 Employees with Children

The following query retrieves information about employees who have at least one child:
SELECT * FROM emp_struct E WHERE EXISTS (SELECT * FROM E->hchild)

Note that E->hchild refers to the children of the given employee, not of all employees, because the nested SELECT clause refers to the same rowset as the outer SELECT clause. The emp_struct rowset appears in the same role in both the outer and the inner query, with the name E (whose use is mandatory).
Example 8-4 School Age Children

The following query retrieves information about all children who have attended at least one school:
SELECT * FROM emp_struct->hchild C WHERE EXISTS (SELECT * FROM C->hschool)

The same scope rules as those of example 3 apply: the nested subquery SELECT * FROM C->hschool is executed for the given row of hchild and serves to filter out the children who have not attended any school.
Example 8-5 Multiple Hierarchies

To better understand the scope issues when multiple hierarchies are involved, consider the following queries:
SELECT * FROM emp_struct E WHERE EXISTS (SELECT * FROM E->hchild->hschool)

and:
SELECT * FROM emp_struct->hchild C WHERE EXISTS (SELECT * FROM C->hschool)

The nested subquery looks similar in both of these queries, but in the first query the outer rowset that defines the context is the emp_struct row, while in the second query it is the row of the child of an employee. Thus, the first query specifies employees such that at least one of the employee's children went to at least one school, and the scope of the subquery is all schools of all children of this employee. The second query specifies all children that went to school, and the scope of the subquery is all of the schools of this child.

Flattening Hierarchical Data Using SQL


You can produce a flattened view of hierarchical data by embedding a SELECT statement inside the list of columns to be retrieved by another SELECT statement. You use parentheses to delimit the nesting. This is equivalent to specifying a left outer join (LOJ) between the parent rowset and a child rowset, and the resulting rowset reproduces each row of the parent, combining it with one row for each of its children. The nested SELECT statement can reference a child rowset (using the parent->child syntax) only in its FROM clause.

Using an Alias
In order to list the hierarchical data with the parent data only, you must use an alias for the child data. Without an alias the query lists for each parent row all of the children of all of the parent rows. Compare the following queries:
SELECT emp_id,(select name from employee->hchild) from employee

and
SELECT emp_id,(select name from e->hchild) from employees e

The first query, without an alias, lists for each employee all of the children of all of the employees. The second query uses an alias and produces a list containing the children stored in an array called hchild belonging to each employee.

Examples
The following examples assume hierarchical data with a parent employees rowset called emp_struct. This rowset has ordinary columns plus a chapter column called hchild (one row for each of the children of this employee). The child rowset hchild itself has (in addition to ordinary columns) a chapter column called hschool (one row for each school attended by this child).
Example 8-6 Number of Children in School Per City

The following query retrieves the number of children that study in each city for every city where the company has employees.
SELECT city,
  (SELECT COUNT(*)
   FROM emp_struct->hchild->hschool A
   WHERE A.city = B.city)
FROM cities B

This query demonstrates the use of a nested query within a SELECT statement, using tables and not aliases.
Example 8-7 Employee and Child Information

The following query retrieves the employee ID, address, the number and names of children of each employee:
SELECT emp_id, child_counter,
  (SELECT name FROM E->hchild),
  Address
FROM emp_struct E


The result of this query is shown in the following figure.

Figure 8-3 Example 2 of a Hierarchical SQL Query

Employees who have no children (such as employee 1122 in the example) appear, and the corresponding child row is Null (the output is like that of a left outer join). When more than one child rowset is specified in the query (by including more than one nested SELECT statement), the parent row is reproduced a sufficient number of times to accommodate the largest number of the parents' children, and the data from all children appears in parallel. Thus, child rows are paired randomly, resulting in side-by-side columns for the child rowsets. When one of the child rowsets is out of rows, its columns are padded with NULLs. See Example 3, below.
Example 8-8 Employee Salary and Child Information

For each employee, the following query retrieves the employee ID, the number of the employee's children, the first two salaries of the year, and the names of the children:
SELECT emp_id, child_counter,
  (SELECT sal FROM E->Sal WHERE month IN ('JAN', 'FEB')),
  (SELECT name FROM E->hchild)
FROM emp_struct E

This figure displays an SQL query:


Figure 8-4 Example 3 of a Hierarchical SQL Query

Note: Observe the padded NULLs in the example.

When multiple levels of hierarchies are involved, the flattening operation is repeated by nesting a SELECT statement inside another nested SELECT statement. Conceptually a left outer join is performed between the parent and child rowsets as above, and then another left outer join is applied to the result and to the grandchild rowset. See Example 4, below.


Example 8-9 Multiple Hierarchical SQL Query

The following query retrieves the employee ID, address, the number and names of the children of each employee and the number of different schools at which each child studied:
SELECT emp_id, child_counter,
  (SELECT name,
    (SELECT COUNT(DISTINCT school_name) AS dist_schools
     FROM C->hschool)
   FROM E->hchild C)
FROM emp_struct E

This figure displays an example of a hierarchical SQL query.


Figure 8-5 Example 4 of a Hierarchical SQL Query

In this example the query at the deepest level of nesting is an aggregate (COUNT DISTINCT), which returns a single value. A request for the names and addresses of the schools would result in the child information being repeated for each school where the child studied.
Example 8-10 Multiple Hierarchical SQL Query

To order the retrieved information according to the employee ID and the child name, use a query like the following:
SELECT emp_id, child_counter,
  (SELECT name,
    (SELECT COUNT(DISTINCT school_name) AS dist_schools
     FROM C->hschool)
   FROM E->hchild C)
FROM emp_struct E
ORDER BY emp_id, name

This figure displays an example of a hierarchical SQL query.


Figure 8-6 Example 5 of a Hierarchical SQL Query

Example 8-11 Multiple Hierarchical SQL Query

To order the rowset according to the child name and the number of distinct schools where the child studied, use a query like the following:


SELECT emp_id, child_counter,
  (SELECT name,
    (SELECT COUNT(DISTINCT school_name) AS dist_schools
     FROM C->hschool)
   FROM E->hchild C)
FROM emp_struct E
ORDER BY 3, 4

This figure displays an example of a hierarchical SQL query.


Figure 8-7 Example 6 of a Hierarchical SQL Query

You cannot specify ORDER BY inside a nested query. Ordering of the result rowset is done in the main query only. Columns can be referenced by name or by ordinal number. Ordinal numbers are determined by the order the columns appear in the result set.
Example 8-12 Multiple Hierarchical SQL Query

The following query retrieves the number of children that study in each city for every city where the company has employees.
SELECT city,
  (SELECT COUNT(*)
   FROM employees->hchild->hschool A
   WHERE A.city = B.city)
FROM cities B

Using Virtual Tables to Represent Hierarchical Data


You can handle arrays and array-like structures (such as periodic group fields in Adabas) in non-relational data sources by converting the array data into a virtual table that resides in memory. The name assigned to the virtual table is a composite, consisting of the parent field name and the array name. For example, if a parent field is called Employee and the array is called Empchild, the virtual table is called Employee_Empchild. When arrays are nested within arrays, the virtual table name includes the names of all the parent fields and array names. For example, if a parent field is called Employee and an array called Empchild includes a nested array called EmpChildSchools, the virtual table name is Employee_Empchild_EmpChildSchools. A virtual table includes the following columns:

- The array fields.
- A column called _parent, which contains the bookmark of the parent field. This column is generated automatically for the array and is read-only.
- A column called _rownum, which identifies the row in the virtual table. This column is generated automatically for the array and is read-only.


The fields _parent and _rownum together uniquely identify each row in the virtual table. Use the _parent field of the virtual table to identify the array field in the parent record when joining parent and child tables.

Example

A record NATION stores data about countries for a mail order application and includes an array structure REGIONS. The array structure is converted into a virtual table NATION_REGIONS. The following SQL accesses data from the virtual table NATION_REGIONS:
SELECT * FROM NATION N, NATION_REGIONS R WHERE R._PARENT = N.REGIONS

Once a virtual table has been created, it can be used in applications in the same way as any other table. Note that this SQL statement accessing a virtual table is equivalent to the SQL statement using the -> syntax to access the array data in the table:
SELECT * FROM NATION->REGIONS

Creating Virtual Tables


Virtual tables are automatically created for arrays. These tables are displayed in lists of tables when building queries, such as in the Attunity Studio Query Tool. Reference the virtual table in the query as you would reference any other table, using the name of the generated repository entry as the table name. During the import of metadata, you can flatten an array so that each entry in the array is represented by a field in the table. The recommendation is to use the virtual tables generated by Attunity Connect and not to flatten the arrays, unless there are specific reasons, such as a small static array. You can override the default behavior as follows:

1. Right-click the data source in Attunity Studio and select Edit Data Source.
2. In the Advanced tab, set the Use runtime columnwise flattening property. This property flattens the array within the table by expanding the array into separate fields, with each row in the array displayed with the row number appended to the field name.

An array with the Use runtime columnwise flattening property set is read-only.

Hierarchical Queries From an Application


This section contains the following topics:

- Drill-down Operations in an ADO Application
- Drill-down Operations in an ODBC Application
- ODBC Drill-down Operations Using RDO
- ODBC Drill-down Operations Using C
- Drill-down Operations in a Java Application

You can use hierarchical queries specified with Attunity SQL to execute drill-down operations on data within the application. How the hierarchical query for drill-down operations is implemented varies according to the application's API:

- For ADO applications: Use the standard ADO methods and properties (see Drill-down Operations in an ADO Application, below).
- For ODBC applications: Use functionality provided with Attunity Connect (see Drill-down Operations in an ODBC Application).
- For Java: Use standard Java (see Drill-down Operations in a Java Application).

Drill-down Operations in an ADO Application


Support is provided for hierarchical queries using standard ADO methods and properties. The following code is an example of how to manipulate chapters:
Dim oConn As New ADODB.Connection
Dim oRST As New ADODB.Recordset
Dim oChild As ADODB.Recordset
Dim sSQL As String

' Set active connection
oConn.ConnectionString = "Provider=AttunityConnect;"
oConn.Open

sSQL = "select n_name, {select c_name from customer where n_nationkey = c_nationkey} as Customers from nation"

' Execute SQL with forward only cursor and read only
oRST.Open sSQL, oConn, adOpenForwardOnly, adLockReadOnly

While Not oRST.EOF
    ' Get the Customers chapter
    Set oChild = oRST("Customers").Value
    While Not oChild.EOF
        ' Code that manipulates the chapter resultset
        oChild.MoveNext
    Wend
    ' Release chapter
    Set oChild = Nothing
    oRST.MoveNext
Wend

Set oRST = Nothing
Set oConn = Nothing

Drill-down Operations in an ODBC Application


Support is provided for hierarchical queries from both C and RDO applications, as follows:

- ODBC Drill-down Operations Using RDO: In RDO you use standard methods and properties for rdoConnection, rdoQuery, rdoResultset, and rdoColumn objects and their collections.
- ODBC Drill-down Operations Using C: In C you use standard ODBC APIs with standard arguments.

Additionally, a number of functions are provided that you can incorporate in an application in order to utilize hierarchical queries. These functions use new descriptor types that have been added to the ODBC API SQLColAttributes: SQL_COLUMN_IS_CHAPTER_ and SQL_COLUMN_ORDINAL_.


- To determine which column is a chapter, use the descriptor type SQL_COLUMN_IS_CHAPTER_.
- To determine the ordinal position for any column, use the descriptor type SQL_COLUMN_ORDINAL_.

ODBC Drill-down Operations Using RDO


Support is provided for hierarchical queries in RDO applications. In addition to the standard methods and properties for rdoConnection, rdoQuery, rdoResultset and rdoColumn objects and their collections, the following functions are provided:

- GetChapterInfo: Returns chapter information. For details, see GetChapterInfo (RDO).
- OpenEmbeddedRowset: Opens an embedded rowset for a parent chapter column. For details, see OpenEmbeddedRowset (RDO).

The following code is an example of how to manipulate chapters in RDO:


Dim oConn As New rdoConnection     ' RDO Connection
Dim oRST As rdoResultset
Dim rsChild As rdoResultset        ' Resultset for chapter
Dim quChild As New rdoQuery        ' Query for chapter (required by OpenEmbeddedRowset)
Dim sSQL As String
' ///// RDO Chapter support
Dim IsChapterCol() As Long         ' An array which indicates that a specific column is a chapter
Dim cChaptCols As Long             ' Number of chapter columns
Dim sCursorName As String          ' Cursor Name

' Set active connect, Attunity-Demo is the DSN
Set oConn = rdoEnvironments(0).OpenConnection("Attunity-Demo", RDO.rdDriverNoPrompt, False, "UID=;PWD=;")
' SQL statement
sSQL = "select n_name, {select c_name from customer where n_nationkey = c_nationkey} as Customers from nation"
' Execute SQL with forward only cursor and read only
Set oRST = oConn.OpenResultset(sSQL)
' Redim according to the number of columns
ReDim IsChapterCol(oRST.rdoColumns.Count - 1)
' Calling the Attunity Connect helper routine to set the necessary info
GetChapterInfo oRST, sCursorName, IsChapterCol(), cChaptCols
While Not oRST.EOF
    ' Open Customers chapter - lock type of child is derived from parent
    OpenEmbeddedRowset sCursorName, oRST("Customers"), oConn, quChild, rsChild, oRST.LockType
    If rsChild Is Nothing Then
        ' Failed to open chapter
        MsgBox "Failed to open the chapter!"
        Exit Sub
    End If
    While Not rsChild.EOF
        ' Code that manipulates the chapter resultset
        rsChild.MoveNext
    Wend
    ' Release chapter
    Set rsChild = Nothing
    oRST.MoveNext
Wend
Set oRST = Nothing
Set oConn = Nothing

Notes:

- Before you can open any chapter in a parent resultset, you need to know which columns in the resultset are chapters. You save the ordinal positions of these columns and the cursor name of the parent statement in order to bind the parent column to child rowsets.
- To identify a chapter column, use the Attunity Connect RDO function GetChapterInfo. If GetChapterInfo returns cChaptCols > 0, the parent resultset includes chapter columns. The information about child rdoResultset objects and parent chapter columns is set in a special array.
- Go to the needed row in the parent rdoResultset by using the oRST.MoveFirst, oRST.MoveNext, or oRST.Move methods.
- Open the parent chapter column using the OpenEmbeddedRowset function.
- If you want to use the Requery method on the child rdoResultset objects, save information about the embedded rowsets in an array, using a user structure similar to the following:

Type TypeChildResultset
    clParentOrdinal As Integer   ' Ordinal position of parent chapter column
    rsChild As rdoResultset
    quChild As New rdoQuery
End Type

You need to save only the ordinal position of a parent chapter column. You can refer to any rdoColumn object using the following syntax: rsParent.rdoColumns(clParentOrdinal - 1).
- Close all child rdoResultset objects before closing the parent rdoResultset object.


GetChapterInfo (RDO)

GetChapterInfo returns the following chapter information:


- The number of chapter columns in a parent resultset.
- The ordinal positions of these columns.
- The cursor name of the parent statement (in order to bind the parent column with child rowsets).

GetChapterInfo returns TRUE if successful.

Syntax
GetChapterInfo rsParent, sCursorName, isChapter(), cChaptCols

where

- rsParent (input): A parent rdoResultset object.
- sCursorName (output): A string containing the parent cursor name.
- isChapter() (output): An array of long isChapter flags, one for each column of the parent resultset. The flag is set to 1 for a chapter column and unset for any other column. You can use the isChapter array with each column of the parent rdoResultset to determine whether the current column is a chapter. The isChapter flag is set by calling the ODBC API SQLColAttributes, for each column of the parent rdoResultset, with the additional descriptor type SQL_COLUMN_IS_CHAPTER_. If a chapter column exists, the cursor name is retrieved by calling the ODBC API SQLGetCursorName.

- cChaptCols (output): A long value that returns the number of chapter columns in the parent resultset.

In its pfDesc argument, SQLColAttributes returns 1 for chapter columns and 0 for non-chapter columns.

OpenEmbeddedRowset (RDO)

OpenEmbeddedRowset opens an embedded rowset for a parent chapter column. This method does not return data from the embedded rowset.

Syntax
OpenEmbeddedRowset sCursorName, clParent, cn, quChild, rsChild

where

- sCursorName (input): A string with the parent cursor name.
- clParent (input): An rdoColumn object representing the parent chapter column.
- cn (input): An rdoConnection object.
- quChild (input/output): A child rdoQuery object. The first time the chapter is opened for this parent column, quChild is NULL.
- rsChild (input/output): A child rdoResultset object. The first time the chapter is opened for this parent column, rsChild is NULL.

ODBC Drill-down Operations Using C


An example C program that manipulates chapters is included in the Attunity Server installation, in the Sample\OdbcDemo directory under the directory where Attunity Server is installed.

For HP NonStop Platforms: The sample program is in the subvolume where Attunity Server is installed.


For z/OS Platforms: The sample program is in NAVROOT.SAMPLES.ODBCDEMO, where NAVROOT is the high-level qualifier of the Attunity Server installation.

When handling chapter data, note the following:

- Before you can open any chapter in a parent resultset, you need to know which columns in the resultset are chapters. You save the ordinal positions of these columns and the cursor name of the parent statement in order to bind the parent column to child rowsets, using the Attunity-provided function GetChapterInfo. GetChapterInfo returns 1 when the parent resultset includes chapter columns.
- Open each chapter using the Attunity OpenEmbeddedRowset function. When you call this function with the prepared child statement, the function changes only the active chapter bookmark and calls SQLExecute.
Note: Free all child statements before freeing the parent statement.

GetChapterInfo (C)
GetChapterInfo returns the following chapter information:

- The number of chapter columns in a parent resultset.
- The ordinal positions of these columns.
- The cursor name of the parent statement (in order to bind it with child rowsets).

GetChapterInfo returns 1 if successful.

Syntax
int GetChapterInfo(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmt,
                   UCHAR* szCursorName, COLDESC *pColDesc,
                   SWORD cCols, long *cChapters)

where:

- henv (input): The ODBC Environment handle.
- hdbc (input): The ODBC Connection handle.
- hstmt (input): The ODBC SQL handle.
- szCursorName (input/output): The parent cursor name. Allocate szCursorName before calling GetChapterInfo.
- pColDesc (input/output): An array of descriptor information for all columns.
- cCols (input): The number of columns in the resultset.
- cChapters (output): The number of chapter columns in the parent resultset.

To save standard and additional descriptor information for each column, use the Attunity user structure stColumnDescriptor (see stColumnDescriptor User Structure (C)).


stColumnDescriptor User Structure (C)


stColumnDescriptor is a user structure to save standard and additional descriptor information for each column:


typedef struct stColumnDescriptor {
    UDWORD cbPrec;                    /* Precision of the column */
    SWORD iCol;                       /* Column number */
    UCHAR szColName[MAX_COLNAME+1];   /* Column name */
    SWORD fSQLType;                   /* SQL data type */
    Char szTypeName[MAX_COLNAME+1];   /* Name of the SQL type */
    SWORD fSQLCType;                  /* C data type */
    SWORD cbScale;                    /* Scale of the column */
    SWORD fNullable;                  /* Indicates if column allows NULL values */
    SDWORD iLength;                   /* Length in bytes of the column */
    PTR rgbValue;                     /* Buffer used in SQLGetData */
    SDWORD dataOffset;                /* Data offset in rgbValue buffer */
    SDWORD cbValueMax;                /* Maximum length of rgbValue */
    SDWORD *cbValue;                  /* SQL_NULL_DATA or total number of bytes
                                         available to return in rgbValue */
    SDWORD iChapterOrdinal;           /* Chapter ordinal position for chapter
                                         columns, 0 for other columns */
    SQLHSTMT hstmtSon;                /* hstmt handle for child rowset binding
                                         with this column */
} COLDESC;

OpenEmbeddedRowset (C)

OpenEmbeddedRowset opens a chapter (embedded rowset). This function does not return data from the embedded rowset.

Syntax
int OpenEmbeddedRowset(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmtParent,
                       UCHAR* szParentCursorName, COLDESC *pOneColDesc)

where:

- henv (input): The ODBC Environment handle.
- hdbc (input): The ODBC Connection handle.
- hstmtParent (input): The ODBC SQL handle of the parent statement.
- szParentCursorName (input): The parent cursor name.
- pOneColDesc (input/output): A pointer to the descriptor information of the parent chapter column.

Drill-down Operations in a Java Application


Support is provided for hierarchical queries using the OTHER type and getObject of the JDBC driver. The following code is an example of a method to determine whether a column is a chapter column:
public static boolean IsChapter(ResultSetMetaData parentRsmd, int column)
        throws SQLException {
    if (parentRsmd.getColumnType(column) == java.sql.Types.OTHER) {
        String sTypeName = parentRsmd.getColumnTypeName(column);
        return sTypeName.equalsIgnoreCase("CHAPTER");
    } else {
        return false;
    }
}

This code checks both the column type and the column type name; in your own code you need to check only one of them. To retrieve a chapter column, call getObject and cast the returned object to a ResultSet, as follows:
ResultSet rsChild = (ResultSet)parentRs.getObject(column);

Because of the need to cast the returned object, first make sure that the column is a chapter column.
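Putting the pieces together, the following is a hedged sketch that iterates a parent result set, uses the IsChapter method shown above to find chapter columns, and drills into each one. The query is the same one used in the ADO and RDO examples earlier; an open java.sql.Connection (con) is assumed.

// Assumes the IsChapter method shown above is defined in the same class
public static void drillDown(Connection con) throws SQLException {
    String sql = "select n_name, {select c_name from customer where "
               + "n_nationkey = c_nationkey} as Customers from nation";
    try (Statement stmt = con.createStatement();
         ResultSet parentRs = stmt.executeQuery(sql)) {
        ResultSetMetaData rsmd = parentRs.getMetaData();
        while (parentRs.next()) {
            for (int col = 1; col <= rsmd.getColumnCount(); col++) {
                // First make sure the column is a chapter, then cast
                if (IsChapter(rsmd, col)) {
                    try (ResultSet rsChild = (ResultSet) parentRs.getObject(col)) {
                        while (rsChild.next()) {
                            // Code that manipulates the chapter result set
                            System.out.println(rsChild.getString("c_name"));
                        }
                    }
                }
            }
        }
    }
}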

Copying Data From One Table to Another


Using the SQL SELECT statement in an INSERT statement enables data from one table to be copied into another table. The tables can be in different data sources, as long as the data types of data retrieved by the SELECT statement match the data types of the columns inserted in the table. For example:
insert into oracle:employees select * from disam:emp
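From a JDBC client, such a statement executes like any other non-query statement. A minimal sketch, assuming an open Connection (con) to the Attunity JDBC driver:

// INSERT ... SELECT copies the rows in a single statement
try (Statement stmt = con.createStatement()) {
    int copied = stmt.executeUpdate(
        "insert into oracle:employees select * from disam:emp");
    System.out.println(copied + " rows copied");
}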

Passthru SQL
SQL UPDATE, INSERT, DELETE, and DDL statements, as well as SELECT statements, can be passed directly to a relational data source instead of being processed by the Query Processor. A data retrieval query can include joins where the data from one or more of the data sources is processed directly by the data source instead of by the Query Processor. This is particularly useful when dealing with a data source whose commands are not in standard SQL and whose data you want to join with data from other data sources.

HP NonStop Platforms: When specifying a passthru query to an HP NonStop SQL/MP database, if the query is not within a transaction, you must append the words BROWSE ACCESS at the end of the query.

For statements that do not return rowsets, you can pass SQL directly to the data source in one of the following ways:

- For a Specific SQL Statement: For a specific SQL statement from within the application.
- For all SQL During a Session: For all SQL during a session from within the application.

All SQL statements (both statements that do not return rowsets and statements that do return rowsets) can be passed as part of the SQL itself. This is particularly useful when dealing with a non-SQL data source, whose data you want to join with other SQL data. Passthru queries are not supported for complex objects, such as BLOBs.


For a Specific SQL Statement


You can bypass the Query Processor for SQL statements that do not return rowsets for specific queries using the following:

- Via ADO
- Via RDO and DAO

Via ADO
To enable passthru SQL for a specific statement, use the Command object and pass it the Operating_Mode parameter. All SQL during this connection will bypass the Query Processor. To reset the connection to channel the SQL through the Query Processor, reset the Operating_Mode parameter to NULL.

Example

The following example code shows an ADO connection to an Oracle database. Using the Command object, individual lines of SQL can be passed directly to the database. You specify the connection with the Operating_Mode parameter set to Passthru. All SQL subsequently bypasses the Query Processor until the Operating_Mode parameter is reset to NULL.
Public cmd As Object
Public cmd1 As Object
Public conn As Object
Public conn1 As Object

Private Sub Bypass_Qpex2()
Dim rs As New ADODB.Recordset
Dim conn As New ADODB.Connection
Dim conn1 As New ADODB.Connection
Dim cmd As New ADODB.Command
Dim cmd1 As New ADODB.Command

' ---------------------------------------
' An example of using a Passthru Command
' ---------------------------------------
conn1.ConnectionString = "Provider=AttunityConnect"
conn1.Open
conn1.DefaultDatabase = "Oracle"
cmd1.ActiveConnection = conn1
cmd1.Properties("Operating_Mode") = "Passthru"
cmd1.CommandText = "ALTER TABLE mytbal ADD new_column INTEGER"
cmd1.Execute
' -------------------------------------------
' Resetting the Passthru mode for the command
' -------------------------------------------
cmd1.Properties("Operating_Mode") = ""
Set cmd1 = Nothing
conn1.Close
Exit Sub
End Sub


Via RDO and DAO


You can pass SQL directly to the data source using either RDO or DAO. With DAO, use the ODBCDirect workspace to pass a query directly to the data source. Use the SQLGetConnectOption and SQLSetConnectOption APIs and a long constant set to 1000. Pass a value of 1 as the last parameter of these APIs if you want to pass the SQL directly to the data source. When you open the connection to the data source, use the hDbc property, which returns a Long value containing the ODBC connection handle created by the ODBC driver manager corresponding to the specified Connection object.

Example

The following example code highlights the statements that enable passthru queries and that execute two SQL statements in passthru mode (creation and deletion of a table). The code does not show other statements and functions (such as the connection function) that would also be required.
Declare Function SQLGetConnectOption Lib "odbc32.dll" (ByVal hstmt&, ByVal fOption%, pvParam As Any) As Integer
Declare Function SQLSetConnectOption Lib "odbc32.dll" (ByVal hstmt&, ByVal fOption%, pvParam As Any) As Integer

Const SQL_STMT_MODE_ As Long = 1000

Public sCreateSQL As String
Public sDropSQL As String
Public rdoCn As New rdoConnection

Public Sub PassThrough(hDbc As Long)
Dim rc As Integer
Dim UsePassThrough As Long
' --------------------------------------
' Set Passthru mode
' --------------------------------------
UsePassThrough = 1
rc = SQLSetConnectOption(hDbc, SQL_STMT_MODE_, ByVal UsePassThrough)
rc = SQLGetConnectOption(hDbc, SQL_STMT_MODE_, UsePassThrough)
' --------------------------------------
' Execute Passthru queries
' --------------------------------------
sCreateSQL = "create table tat(c number(8))"
sDropSQL = "drop table tat"
rdoCn.Execute sCreateSQL
rdoCn.Execute sDropSQL
' --------------------------------------
' Reset the Passthru mode
' --------------------------------------
UsePassThrough = 0
rc = SQLSetConnectOption(hDbc, SQL_STMT_MODE_, ByVal UsePassThrough)
rc = SQLGetConnectOption(hDbc, SQL_STMT_MODE_, UsePassThrough)
End Sub


For all SQL During a Session


You can bypass the Query Processor for SQL statements that do not return rowsets when using the ADO, RDO and DAO, ODBC, or JDBC APIs. The method for RDO and DAO is the same as the one used to pass a specific SQL statement directly to the data source, as described in For a Specific SQL Statement. For JDBC, you specify the Passthru parameter in the connection string, as described in the JDBC Connection String section.
Note: Attunity does not recommend using this option, since it affects every SQL statement issued during the session, even if only some statements were intended to bypass the Query Processor.
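For JDBC, a session-wide passthru connection might look like the following sketch. Only the Passthru parameter name is taken from the text above; the URL format, host, port, and DefTdpName parameter are assumptions (the latter borrowed from the ODBC example later in this section), so check the JDBC Connection String section for the exact syntax.

import java.sql.*;

public class PassthruSession {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL syntax; older drivers may also need explicit registration
        String url = "jdbc:attconnect://myhost:2551;DefTdpName=ORACLE;Passthru=1";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement()) {
            // Every statement issued on this connection bypasses the Query Processor
            stmt.executeUpdate("ALTER TABLE mytbal ADD new_column INTEGER");
        }
    }
}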

Via ADO/OLE DB
Use the Connection object and pass it the Operating_Mode parameter set to Passthru in order to enable passthru SQL. All SQL during this connection will bypass the Query Processor.

Example

The following example code shows an ADO connection to an Oracle database. The connection is specified with the Operating_Mode parameter set to Passthru.
Public cmd As Object
Public cmd1 As Object
Public conn As Object
Public conn1 As Object

Private Sub Bypass_Qpex1()
Dim rs As New ADODB.Recordset
Dim conn As New ADODB.Connection
Dim conn1 As New ADODB.Connection
Dim cmd As New ADODB.Command
Dim cmd1 As New ADODB.Command

' -----------------------------------------
' An example of using a Passthru Connection
' -----------------------------------------
conn.ConnectionString = "Provider=AttunityConnect; Operating_Mode=Passthru"
conn.Open
conn.DefaultDatabase = "Oracle"
cmd.ActiveConnection = conn
cmd.CommandText = "ALTER TABLE mytbal ADD new_column INTEGER"
cmd.Execute
Set cmd = Nothing
conn.Close

Via ODBC
Pass connection information, including the passthru parameter, to the SQLDriverConnect method.

For Windows Platforms: Use the Microsoft ODBC Data Source Administrator with the Attunity ODBC connection wizard. Create or edit a DSN, and then select the Batch update passthru check box in the Remote Server Binding page.


See also: Creating an ODBC Connection.


Figure 8-8 ODBC Data Source Administrator

Example
ConnectString = "DRIVER=Attunity Connect Driver; DefTdpName=ORACLE;Binding=nav;Passthru=1;"

Passthru Queries as Part of an SQL Statement


Both retrieval statements and statements that do not return rowsets (such as DDL) can be passed directly to the data source as part of the SQL syntax. To bypass the Query Processor, include the query that you want to send directly to the data source in double braces ({{query}}), prefixed with the keyword TEXT=. Prefix all table names with the data source name specified in the binding configuration. For retrieval queries, the passthru syntax is part of the FROM clause of a SELECT statement.

Examples

A non-returning result:
oracle:TEXT={{CREATE TABLE employee (emp_num number(5) NOT NULL, emp_name varchar2(32))}}

As part of a SELECT statement:


SELECT * FROM disam:nation, disam1:TEXT={{SELECT * FROM customer WHERE c_nationkey = ? AND c_custkey = ?}} (7,100)

where disam and disam1 are data sources defined in the binding configuration. The query to the disam1 database is passed directly to the database, bypassing the Query Processor. Note the use of parameters in the example. You can also specify parameters in a non-returning rowset query (see Passthru Query Statements (bypassing Query Processing) for the syntax).

Standard ANSI 92 SQL has been extended so that expressions involving columns of tables that appeared previously in the FROM list can be used (such as from the nation table in the above example).

HP NonStop Platforms: When specifying a passthru query to an HP NonStop SQL/MP database, if the query is not within a transaction, you must append the words BROWSE ACCESS at the end of the query.

Writing Queries Using SQL


Use standard SQL to access both relational and non-relational data. For example, a user on a PC can issue an SQL join across IMS/DB and VSAM, as well as across relational databases such as Oracle or Sybase. SQL extensions have the following rules and limitations:

- You cannot use reserved keywords in SQL for table and column names (see Reserved Keywords).
- The following table displays SQL query size limitations:
Table 8-1 SQL Query Size Limitations

Limitation                                        Maximum Length
Length of an identifier (table or column name)    64
Length of a string (see note below)               350
Number of parameters in a query                   50
Level of nesting of function calls                20
Level of nesting of nested queries                10
Length of a LIKE mask operand                     255
Length of a comment line                          350

Note: The maximum string length can be modified in one of the following ways:
- Specify a value for the tokenSize parameter within the <queryProcessor> group in the Attunity Server environment.
- Specify a question mark (?) instead of the string. When prompted for the data type of the value, specify C (for cstring). (See the JDBC sketch after this list.)

- Comments can be included as part of the SQL. Comments are bounded by /* and */. If a comment is longer than the 350-character limit, break it up over a number of lines.
- Quotation marks ("") or square brackets ([]) can be used to quote identifiers such as table names and column names.
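To illustrate the parameter workaround from the note above, the following hedged JDBC sketch passes a long string as a bound parameter rather than as a literal; the table and column names are hypothetical, and with JDBC the parameter's data type is implied by the setter, so no prompt is involved.

// Assumes an open Connection con; mytable and description are hypothetical names
String longText = "...";  // imagine a string longer than the 350-character literal limit
try (PreparedStatement ps = con.prepareStatement(
        "INSERT INTO mytable (description) VALUES (?)")) {
    ps.setString(1, longText); // bound as a parameter, so the literal length limit does not apply
    ps.executeUpdate();
}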

Writing Efficient SQL


The following tips can improve the efficiency of queries:

- Use views and stored query procedures (see CREATE VIEW Statement and CREATE PROCEDURE Statement).
- Use forward-only and read-only cursors.
- Batch SQL statements together in a single statement (see the JDBC sketch after this list).


- Use the HINT clause, specifying the optimization strategy you want used for the query (see Attunity SQL Syntax).
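As one way to batch statements from a JDBC client, the following is a hedged sketch; the table name is hypothetical, and whether the driver folds the batch into a single round trip depends on the driver.

// Assumes an open Connection con; mytable is a hypothetical table
try (Statement stmt = con.createStatement()) {
    stmt.addBatch("INSERT INTO mytable (c1) VALUES (1)");
    stmt.addBatch("INSERT INTO mytable (c1) VALUES (2)");
    int[] counts = stmt.executeBatch(); // submits both statements in one call
}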

You can test connections and SQL interactively using NAV_UTIL EXECUTE.

Locking Considerations
This section includes the following topics:

- Locking Modes
- ODBC Locking Considerations
- ADO Locking Considerations

Locking Modes
SQL UPDATE and DELETE statements are automatically executed in pessimistic locking mode. With SQL SELECT statements, the following locking modes are supported:

- Optimistic Locking
- Pessimistic Locking
- No Locking

With chapters, child rowsets cannot be updated if the parent rowset is locked. Refer to the specific driver for additional locking information.

Optimistic Locking
With optimistic locking, records are locked just before an update operation. Before locking the row, Attunity Connect checks that another user has not changed the specified data. Optimistic locking has the following advantages over pessimistic locking:

- Performance is improved when you use optimistic locking.
- Concurrency is ensured.

Pessimistic Locking
With pessimistic locking, records are locked as they are read. Pessimistic locking is slower than optimistic locking. When connecting to data via Microsoft Jet or SQL Server, you must open a transaction before issuing the query.

No Locking
With no locking, records are not locked and are read-only.

ODBC Locking Considerations


Locking is established using SQLSetStmtOption with fOption = SQL_CONCURRENCY specified. The following locking values are available for vParam:


- SQL_CONCUR_VALUES: Optimistic locking (the locking itself is done during SQLSetPos if fLock = SQL_LOCK_EXCLUSIVE is specified).
- SQL_CONCUR_ROWVER: Treated as if you specified SQL_CONCUR_VALUES.
- SQL_CONCUR_LOCK: Pessimistic locking.
- SQL_CONCUR_READ_ONLY: Read-only mode. This is the default value.

Example

To enable pessimistic locking:


SQLSetStmtOption(hstmt, SQL_CONCURRENCY, SQL_CONCUR_LOCK)

ADO Locking Considerations


Locking is established using the LockType property of the Recordset object. The following locking values are available:

- adLockReadOnly (default): Read-only mode.
- adLockPessimistic: Pessimistic locking.
- adLockOptimistic: Optimistic locking.
- adLockBatchOptimistic: Optimistic batch updates. Required for batch update mode as opposed to immediate update mode.

These locking values should be used with ADO version 2.1 and higher.

Managing the Execution of Queries over Large Tables


Query governing enables you to manage the way queries are executed. Query governing parameters are defined at the workspace level. All defined restrictions apply to all queries for all data sources that require Attunity metadata and that are defined in the binding associated with the workspace. Query governing is defined in the General tab of the Workspace editor in Attunity Studio. For more information, see the information on the General tab in Editing a Workspace. The workspace governing parameters apply only to data sources that require Attunity metadata.


Figure 8-9 Query Governing Restrictions

The following workspace governing parameters can be defined:

- Max Number of Rows in a Table That Can Be Read: This parameter restricts the number of table rows that are read in a query. When the number of rows read from a table exceeds the number stated, the query returns an error.
- Max Number of Rows Allowed in a Table Before Scan is Rejected: This parameter restricts the number of table rows that can be scanned. This parameter impacts the query both during query optimization and execution, as follows:
  - During query optimization: The value set is compared to the table cardinality. If the cardinality is greater than the value, the scan strategy is ignored as a possible strategy (unless it is the only available strategy).
  - During query execution: A scan is limited to the value set. When the number of rows scanned exceeds the number stated, the query returns an error.

You must refresh the daemon as well as reload the configuration after changing values in the WS Governing tab.

Optimizing Outer Joins


Attunity's query optimizer delegates queries, or parts of them, directly to the backend database when possible, to improve execution performance. Outer joins (both LOJ and ROJ) are sent to the database directly by the query optimizer whenever possible, so that the query processor (QP) does not have to carry out the join. This increases performance. This section includes the following topics:

- Limitations
- Query Optimization
- Property

Limitations
To optimize a query and select the largest part of a query that can be sent to the backend, the optimizer reorders the joined tables. The optimizer follows these outer join rules when reordering the tables, to maintain outer join semantics:

- Tables on the right side of a left outer join cannot be moved up the LOJ or to the LOJ's left side.
- Tables from the left side of a left outer join that are not included in the outer join condition can be moved and joined to the outer join result.
- Tables from the left side of a left outer join that are included in the outer join condition cannot be joined to the outer join result.

Query Optimization
The query optimizer attempts to send the greatest possible part of a query to the backend database instead of to the query processor. This can be achieved easily for joins that are executed on tables from the same database. Attunity Connect can execute outer joins across two or more databases; however, such queries must be executed by the Attunity query processor.

Property
The noLojDelegation property is set to false by default, which indicates that the optimizer tries to delegate outer joins to the backend database. When this property is set to true, outer joins are always performed by the Attunity query processor, even if part of the LOJ could be delegated to the backend. This is helpful for troubleshooting problems with outer join optimization or for preserving legacy optimization behavior.

Changing the Property


You set the noLojDelegation property in Attunity Studio, under the binding configurations. This property is in the Optimization category. For more information, see the AIS reference section for bindings.

To change the noLojDelegation property
1. Open the Design Perspective in Attunity Studio.
2. From the Configuration view on the left side, expand the machine folder.
3. Expand the machine with the binding that has the optimizer settings you want to change.
4. Right-click the binding that has the optimizer settings you want to change and select Edit Binding. The Binding editor opens on the right of the screen with the Properties tab open.
5. From the Environment Properties list, expand Optimizer.
6. Click the right side of the Value column for the noLojDelegation property and select true from the drop-down list.
7. Save the changes.


9
Working with Web Services
This section contains the following topics:

- Web Services Overview
- Preparing to use Web Services
- Deploying an Adapter as a Web Service
- Undeploying Web Services
- Viewing Web Services
- Logging Web Service Activities

Web Services Overview


A Web service is a standard way of accessing Web-based applications. It uses the XML, SOAP, and WSDL standards: XML is used to tag the data, SOAP to transfer it, and WSDL to describe the available services. Web services let applications from different sources communicate with each other. Web services are not platform dependent, which means that any client computer can access the same Web-based applications no matter what platform the client is running. Any AIS application adapter, using one or more of its interactions, can be deployed as a Web service.

Preparing to use Web Services


This section explains how to set up your system so that Attunity Studio can work with Web services. It contains the following topics:

- Web Services Prerequisites
- Setting up Attunity Studio to Work with Web Services

Web Services Prerequisites


Make sure that your system has the following:

- Apache Tomcat Web server, version 4.1
- The axis.war file
- JDK 1.4. You must make sure that the full Java SDK is installed; Apache Tomcat will not work if only the Java Runtime Environment (JRE) is installed.

Setting up Attunity Studio to Work with Web Services


To enable Attunity Studio to work with Web services, carry out the following tasks:

- Install JDK version 1.4x on your computer. Apache Tomcat will not work with the Java Runtime Environment (JRE) only.
- Install Apache Tomcat version 4.1. Attunity Studio will only work with this version of Tomcat. Download Apache Tomcat from http://tomcat.apache.org/download-41.cgi. It is best to use the zip file. Extract the contents of the zip file to a folder on your main root drive, such as Program Files.

- Copy the axis.war file to the webapps folder of the Tomcat root folder, for example, C:\Program Files\Apache\tomcat\webapps. This file should be supplied with your Attunity Studio installation. If not, you can download it from the Apache site.
- Run startup.bat from the tomcat\bin folder. Depending on where you installed Apache Tomcat, the actual path will look something like C:\Program Files\Apache Software Foundation\Tomcat 4.1\apache-tomcat-4.1.34-LE-jdk14\bin.
- Test that Tomcat is running: open a browser and enter the URL http://localhost:8080. If you changed your Tomcat default port to another number, for example, 80, enter that number instead of 8080.

Deploying an Adapter as a Web Service


Setting an adapter as a Web service involves the following:

- Provide the connection information for the Apache AXIS servlet. Apache AXIS is an implementation of SOAP (Simple Object Access Protocol).
- Define the Web service and select the interactions to include in the service.

To deploy an adapter as a Web service
1. Open Attunity Studio: from the Windows Start menu, select Programs, then select Attunity, and then click Attunity Studio.
2. Select the Design perspective.
3. In the Configuration manager, expand the machine with the adapter that you want to set as a Web service.
4. Expand the Bindings folder and then expand the binding with the adapter.
5. Right-click the adapter you want to deploy as a Web service, point to Web Services, and then select Deploy. The Web Service Deployment wizard opens. This wizard has the following steps:
- Connection Information for the Axis Servlet
- Define a new Web Service for an Adapter
- Select the Interactions
- Summary Window


Connection Information for the Axis Servlet


In this step, you enter information that explains where to connect to the Axis servlet. The following figure shows where to enter this information.
Figure 9-1 Axis Servlet Connection Information

The following describes the information entered in this step:


- Host: Enter the name of the server hosting the Axis servlet.
- Enter the port information:
  - Port: The port used by the Axis servlet. If your Apache Tomcat Web server is defined as port 8080, select the Use default port check box.
  - Axis path: The path to the Axis servlet. If you copied the servlet into the Apache Tomcat webapps folder, you can use the default setting.
- Enter the username and password to access the Axis servlet. If you are using an anonymous connection, select the Anonymous connection check box.

Click Next. The Define a new Web Service for an Adapter step is displayed.

Define a new Web Service for an Adapter


In this step, you enter information that defines the Web service for the adapter. The following figure shows where to enter this information.


Figure 9-2 Web Service Definition

The following describes the information entered in this step:

- Name: Enter a name for the Web service. Each Web service must have a unique name that contains only letters, digits, and underscores. To reuse a name, you must undeploy the Web service with this name before assigning the name to the new Web service. See Undeploying Web Services.

- Description: Enter a general description of this Web service. This is optional.
- Namespace: Enter a name for the namespace, if you want it to have a unique name. Select the Namespace enabled check box if you want to enable and use this namespace. If you do not enter a namespace, the namespace is created from the host name and Web service name.

- Enter the username and password for users who can access the adapter at runtime. If you are using an anonymous connection, select the Anonymous connection check box.
- Click Advanced Options if you want to use any of the advanced options. The Advanced Options settings have the following tabs:

- The General Tab
- The Pooling Tab
- The Map Tab

Click Next. The Select the Interactions step is displayed.


The General Tab


This tab lets you configure some advanced configuration options. The following figure shows the information that is entered in this tab.
Figure 9-3 The General Tab

Enter the following information in this tab:


- Timeout (sec): The Web server timeout in seconds.
- Firewall protocol: Select one of the following:
  - None: To use no protocol.
  - FixedNat: Select this if the machine you are working with has a fixed Network Address Translation (meaning that it is always connected to the same external IP address when accessing the Internet).

- Trace enabled: Select this to enable tracing the communication between AIS and the Web service. If enabled, specify a path and name for the trace file.
- Encryption: Select this if you are using encryption. Enter the information for the encryption used.

When you are finished entering information in this tab, click OK to close the window and return to the Define a new Web Service for an Adapter step. You can also click The Pooling Tab or The Map Tab to enter information there.

The Pooling Tab


This tab lets you configure some advanced pooling options. The following figure shows the information that is entered in this tab.


Figure 9-4 The Pooling Tab

Enter the following information in this tab:

- Maximum active connections: The maximum number of connections that can be active at the same time.
- Maximum idle connections: The maximum number of connections that can be available in the pool.
- Behavior when a connection is not available: Select one of the following to indicate the behavior you want if a connection is not available:
  - Fail: The interaction fails.
  - Grow: Create a new connection.
  - Block: Block the interaction.

- Time between idle connection examination runs (msec): The time (in milliseconds) to wait before checking whether any connections are idle.
- Minimum idle time before removed from pool (msec): The minimum amount of time (in milliseconds) a connection can be idle before being closed.

When you are finished entering information in this tab, click OK to close the window and return to the Define a new Web Service for an Adapter step. You can also click The General Tab or The Map Tab to enter information there.

The Map Tab


This tab is used to map additional parameters for the connection. The following figure shows this tab:


Figure 9-5 The Map Tab

New connection parameters are entered as a key. For example, to open a new server for each operation, create a new key called newserver and enter 1 in the Value column.

To enter a new key
1. Right-click in the Key column and select Add.
2. Type in a name for the key.
3. Click in the Value column in the same row as the key and enter a value.

To delete a key, right-click the key you want to delete and select Delete. When you are finished entering information in this tab, click OK to close the window and return to the Define a new Web Service for an Adapter step. You can also click The General Tab or The Pooling Tab to enter information there.

Select the Interactions


In this step you select the interactions that you want to include in the Web service. This step has two columns. All interactions that are defined for the adapter are shown in the left column. For more information on adding interactions to your adapter, see Working with Application Adapter Metadata. The following figure shows this step.


Figure 9-6 Select the Interactions

You can add interactions to include in the Web service, and also remove them from the Web service. To add or remove interactions, select the interaction and click one of the buttons described below (the button icons appear in the wizard):

- Select an interaction from the left column and click the single move-right button to move it into the right column and include it in the Web service.
- Click the move-all-right button to move all interactions from the left column to the right column and include all of the interactions in the Web service.
- Select an interaction from the right column and click the single move-left button to move it into the left column and remove it from the Web service.
- Click the move-all-left button to move all interactions from the right column to the left column and remove all of the interactions from the Web service.

When you are finished selecting interactions to include in the Web service, click Next. A Summary Window showing the Web service settings is displayed.

Summary Window
The summary window shows you the configurations that you entered in each step of the wizard. Look over the information in this window to be sure it is correct. Click Back to return to any previous step and edit the information, if necessary.


When you are sure all the information is correct, click Finish to deploy the Web service.

Undeploying Web Services


Web services can be removed either for all the adapters set as Web services in a binding configuration, or for a specific adapter.

To undeploy an adapter as a Web service
1. In the Design perspective Configuration manager, expand the machine with the adapter with the deployed Web service you want to undeploy.
2. Expand the Bindings folder and then expand the binding with the adapter.
3. Right-click the adapter you want to undeploy as a Web service, point to Web Services, and then select Undeploy. The Web Service Undeployment wizard opens.
4. Enter the Connection Information for the Axis Servlet.
5. Click Next. A list of the Web services is displayed. You can click a Web service to see the list of interactions that are associated with it.

Figure 9-7 Undeploy Web Services

6. Select the Web services you want to remove.
7. Click Finish to undeploy the selected Web services.

Viewing Web Services


Web services and the interactions associated with them can be viewed in Attunity Studio.

To view a list of Web services and the interactions associated with them
1. In the Design perspective Configuration manager, expand the machine with the adapter where you deployed the Web service.
2. Expand the Bindings folder and then expand the binding with the adapter.
3. Right-click the adapter with the Web services and interactions you want to view, point to Web Services, and then select List. The Web Service Undeployment wizard opens.
4. Click Next. A list of the Web services is displayed. You can click a Web service to see the list of interactions that are associated with it. This is the same wizard that is displayed when Undeploying Web Services; however, you can only view the Web services.

Logging Web Service Activities


Web service activities are logged to the Web server logs. If you are using Tomcat, the logs are located in the Tomcat log folder in the Tomcat root folder. An example of a possible location for this folder is: Program Files\Apache Group\Tomcat 4.1\logs. The following logging information can be modified:

- The log file location.
- The level of error messages written to the log.
- The format of the error messages in the log.

Changes to the log file are set in log4j.properties, which is located in:
WebserverRoot\WebApps\Axis\Web-INF\classes\log4j.properties
where WebserverRoot is the folder where the Web server is installed, for example:
C:\Program Files\Apache Group\Tomcat 4.1\Webapps\axis\Web-INF\classes\log4j.properties

Changing the Log File Location


To change the log file location, enter the location of the log file in the logfile attribute:
log4j.appender.LOGFILE.File=log_file_location
For example:
log4j.appender.LOGFILE.File=c:\\Webservices\\log\\connect-Webservices.log

Changing the Error Message Level


The following error types are available:

- DEBUG
- INFO
- WARN
- ERROR
- FATAL

Enter the error type as shown in the example below:
log4j.logger.com.attunity.connect=error_type, LOGFILE


Changing the Error Message Format


To change the error message format, enter the formatting pattern as shown in the example below:
log4j.appender.LOGFILE.layout.ConversionPattern=formatting_pattern
The following is an example of the output:
2000-09-07 14:07:41,508 [main] INFO MyApp - Entering application. 2000-09-07 14:07:41,529 [main] INFO MyApp - Exiting application.



Part II
Attunity Connect
This part contains the following topics:

- Introduction to Attunity Connect
- Attunity Integration Suite Architecture Flows
- Implementing a Data Access Solution
- Setting up Data Sources and Events with Attunity Studio
- Implementing an Application Access Solution
- Setting Up Adapters
- Application Adapter Definition

10
Introduction to Attunity Connect
This section contains the following topics:

- Overview of Attunity Connect
- Logical Architecture
- System Components and Concepts

Overview of Attunity Connect


Attunity Connect is a suite of pre-built adapters to mainframe and enterprise data sources and applications. Attunity Connect resides natively on the data server to provide standard, service-oriented integration (SQL, XML, Web Services) to a broad list of data sources and applications on platforms ranging from Windows and UNIX to HP NonStop and z/OS. With robust support for metadata, bi-directional read/write access and transaction management, Attunity Connect simplifies and reduces the cost of legacy integration.

Logical Architecture
The Attunity Connect logical architecture comprises the following components:

- Backend Data Server
- Client Application
- Client Workstation (Studio)

System Components and Concepts


At the heart of the Attunity Connect integration solution are the Attunity engines. These shared, multi-platform engines manage access and updates across the IT infrastructure. They can also integrate information from disparate systems with a single request. The Attunity engines can receive requests from a wide variety of supported APIs. The relevant engine processes the requests, interacting with applications and data sources via Attunity Connect application adapters and data source drivers, and returns the results, again via the interfaces. Attunity Connect buffers the user from the applications and data sources requested, so that they appear as one consistent API and data model, regardless of the source. The movement of information from any of the supported applications and data sources to the consumer is completely transparent to the consumer application.

The engines provide comprehensive transaction support to the extent allowed by each source's support for two-phase commit functions. AIS installations include the following engines:

Data Engine
The data engine accesses, updates, and joins enterprise information from data sources as if they were all relational databases. At the same time, it takes advantage of its query optimizer to determine the fastest way to carry out these tasks, minimizing the load on IT resources, networks, and systems. Because the data engine uses a relational model, it normalizes the data, converting hierarchical structures into tables without redundant data. By combining the relational model with the SQL language, the data engine allows applications to issue the same complex query to multiple data sources without tuning it to each target source. The relational approach also simplifies access via commercial tools and applications that interoperate with relational sources.

Clients can use industry-standard JDBC, ODBC, ADO/OLE DB, and .NET interfaces to submit SQL requests to the data engine. By using either the Database or Query application adapter, you can also use JCA, XML, or COM as the client interface.

When the data engine receives and parses an SQL request, it first determines which data source is involved, where the data resides, and how the source handles data. The data engine determines how to carry out the process based on metadata that it retrieves from a local cache, from the repository, or dynamically from the backend data sources. Then, the data engine generates a query execution plan in the form of a tree. Whenever possible, the data engine passes the entire request to the underlying data source. In this case, the engine translates between standard ANSI SQL 92 and the underlying database's SQL dialect. The data engine can also accept pass-through queries to nonstandard SQL functions supported by the target source. If a data source offers limited SQL capabilities, the data engine implements missing functions as needed. If the data source offers no SQL capabilities at all, the data engine breaks the request into simple retrieval operations that an indexed or sequential table can read.

Query Optimizer
The data engine includes the query optimizer, which minimizes execution time and resource consumption. The query optimizer enhances the data engine's initial query execution plan based on the query structure, network structures, the target data sources' capabilities and locations, and the statistical information available for each table. To maximize the efficiency of query execution, the query optimizer uses various caching and access techniques, including read-ahead, parallelism, and lookup-, hash-, and semi-joins. It flattens views, breaks out and propagates simple predicates down the tree, reorders joins, directs join strategies, selects indexes, and performs other related tasks. If the target data source is distributed across multiple machines, the data engine and query optimizer together generate a distributed execution plan that minimizes network traffic and round-trips.

Performance Tuning Tools

Database administrators can review and control the optimization strategies that the optimizer uses. Using the query analyzer, IT staff can monitor accumulated statistics and heuristic information to evaluate the success of the optimization strategies. These tools enable users to evaluate and understand the way specific queries work by specifying hints, flags, optimization goals (first-row or all-rows optimization), and other query properties, such as requests for scrollable or updateable cursors.

Data Sources
Native data source drivers utilize the native API of each data source. Attunity Connect tailors these drivers to the individual data model and performance characteristics of the particular data source. These source-specific drivers share common logic. The drivers deal with metadata, describing the information offered by the data sources and mapping the underlying data model and functionality into relational or ISAM (Indexed Sequential Access Method) models. Drivers for nonrelational data sources support sequential and indexed access, array structures and hierarchical structures, and other functions without normalization or other time-consuming operations. Drivers fall into one of the following classes:

- RDBMS drivers, to access data providers that support some dialect of SQL
- Non-relational drivers, to access data providers that do not support SQL
- File system drivers, to access files
- Procedure drivers, to access program functionality

The functionality within each class can be factored further. For RDBMS drivers, Attunity Connect needs to be aware of the syntactic and semantic nuances of the SQL dialect for each specific data source, of the library functions supported by the data source, and other details. Similarly, a file system may or may not support functionality such as indexes, BLOBs and embedded rowsets, or be capable of efficiently executing single-table predicates (filters). For relational data sources not supported by Attunity Connect through a custom driver, Attunity Connect provides generic ODBC and OLE DB gateways. A syntax file (NAV.SYN) encapsulates backend peculiarities and facilitates the connection of the new data source to Attunity Connect using one of these generic drivers. The following data sources are supported:
Table 10-1 Supported Data Sources

Relational: DB2, Informix, Ingres II (Open Ingres), Oracle 8, Oracle Rdb (OpenVMS only), SQL/MP (HP NonStop only), SQL Server (Windows only), Sybase

Non-relational: Adabas, CISAM, DBMS (OpenVMS only), DISAM, Enscribe (HP NonStop only), IMS/DB (z/OS only), RMS (OpenVMS only), VSAM and VSAM under CICS (z/OS only)

File System: Flat Files, ODBC, OLEDB-FS (Flat File System), OLEDB-SQL (Relational), Text-Delimited

Procedures: Procedure, IMS/TM, Natural, CICS Procedure (Application Connector)

In addition, procedure drivers are provided to enable access to program functionality. These procedures include a generic procedure driver and specific drivers for CICS, IMS/TM, and Natural programs on a z/OS platform.

Interfaces and APIs


The following standard APIs are supported for accessing data:

- JDBC Client Interface: A pure-Java Type 3 driver that supports J2EE JDBC features (such as data sources, distributed transactions, hierarchical record sets, and connection pooling). The JDBC interface is available on all platforms that support Java.
- ODBC Client Interface: The ODBC interface enables organizations to use the API of choice for most popular client-server business intelligence tools. The ODBC interface implements the ODBC 2.5 and ISO CLI standards, so that COBOL and other 3GL programs on any platform can call it. The ODBC interface is available on all platforms running AIS.
- ADO Client Interface: An OLE DB/ADO interface that supports advanced features, including chapters, scrollability, and multi-threading. The OLE DB/ADO interface is compatible with all Microsoft tools and applications. This provider also functions as a database gateway for Microsoft SQL Server, allowing SQL Server users to access all available data sources. The OLE DB/ADO interface is available on Microsoft Windows platforms.
- .NET Client Interface: ADO.NET is the data-access component of the Microsoft .NET Framework. AIS supports all ADO.NET objects, methods, and properties, as well as additional .NET Data Provider classes.

Transaction Support
To ensure the integrity of simultaneous updates to multiple data sources, Attunity Connect supports distributed transactions in the following ways:

- As a distributed transaction manager, for safe, reliable multi-server transactions.
- As an OLE Transactions resource manager, enabling all AIS-enabled data sources to participate in distributed Microsoft MTS transactions using the OLE Transactions protocol.
- As an XA resource manager.
- Using the JDBC 2.x interface, so that all AIS-enabled data sources can participate in distributed J2EE transactions under a J2EE application server.

A transaction log file backs up two-phase commit (2PC) operations so that transactions can be recovered in the case of a failure. Generally, the ability to support distributed transactions depends on the capabilities of the data sources that participate in the transaction. Attunity Connect cannot guarantee data integrity for AIS-enabled data sources that do not support the 2PC protocol. However, Attunity Connect does employ various optimizations to extend the coverage of 2PC support. For example, Attunity Connect can execute a 2PC process as long as it involves a maximum of one 1PC data source while all the other data sources support 2PC.

Application Engine
The application engine provides an enterprise application integration (EAI) model that enables any kind of application to interact with applications and data sources via their own native interfaces. On the client end, applications use industry-standard interfaces to communicate with the application engine. In turn, the application engine communicates with specific adapters that access enterprise and application data on the server end. An XML-based schema supports precise application mapping. As a result, the application engine opens legacy applications of all types to integration with each other and with cutting-edge technologies such as Java and XML.

Because the application engine uses an EAI model, interactions with the source application or data are precise and predictable. Requesting applications can specify exactly how interactions occur. Moreover, the application engine maps data structures faithfully, facilitating access to familiar applications within new environments. The EAI model is particularly suitable for deployment over the Internet using protocols such as TCP/IP and HTTP, because it allows for both stateful and stateless interactions and can batch requests to minimize network traffic.

A distinctive feature of the application engine is its ability to translate application structures, such as those typical of legacy COBOL applications, to and from XML. Web and other solutions can use the application engine to interact with both applications and data sources through a growing number of XML-based tools, as well as other application-oriented frameworks such as Sun's J2EE JCA (J2EE Connector Architecture) and Microsoft .NET's SOAP-based interfaces.

Application Adapters
With the AIS generic application adapter, developers can incorporate applications into Attunity Connect solutions. They can wrap and leverage existing application-specific business logic, protect data integrity by writing to a data source through its original application, and trigger operational tasks, all with a single adapter that runs consistently across diverse platforms. The following application adapters are provided:

- CICS (on z/OS platforms)
- COM applications (on Windows platforms)
- Database and Query adapters: access to any supported data source via XML, JCA, COM, or .NET
- IMS/TM (on z/OS platforms)
- Legacy applications
- Pathways (on HP NonStop Himalaya platforms)
- Tuxedo (on Windows and UNIX platforms)

Interfaces and APIs


The following APIs are supported for accessing applications:

- XML Client Interface: The XML application interface enables any application with an XML API to access applications and data. The XML application interface supports an XML-based protocol modeled after the JCA architecture. This protocol is both human-readable and programmatically easy to use, and it is exceptionally well tailored for web-based, Internet-wide use, particularly in conjunction with XML transformation engines. The XML application interface is directly available (callable) on all the platforms where AIS runs. On other platforms, it is accessible via network protocols such as TCP/IP (sockets) and HTTP.

- JCA Client Interface: The JCA (J2EE Connectors Architecture) interface supports application servers based on J2EE (Java 2 Enterprise Edition) standards. It provides a robust, efficient, and reliable way to integrate Java applications with native applications.
- COM Client Interface: A COM component that enables access to application adapters and services using the XML protocol. The COM component provides seamless integration with Microsoft Windows products and technologies.
- .NET Client Interface: A .NET component, called NetACX, that enables access to application adapters from any .NET-based application.

Events
AIS handles events via an event queue. The event queue is defined as an adapter in Attunity Studio where interactions are expected as events. The event queue itself is managed by a dedicated server process, which is set up by defining an events workspace in the daemon definition.

Attunity Server
Attunity Server is the server component that includes all the software required to enable AIS to run. AIS has a native installation wizard on each of the supported platforms, simplifying deployment. It takes no special skills to install AIS, so IT staff can set up the AIS infrastructure by applying only their platform- and application-specific expertise.

Attunity Studio
Attunity Studio is the configuration tool for AIS. Configuration using Attunity Studio is performed on a Windows platform. The configuration information is stored in the AIS repository on the backend system.

Design Time
Attunity Studio is used during design time to perform the following configuration tasks:

- Set up access to machines running AIS.
- Configure the daemon that is responsible for managing communication between AIS machines.
- Configure metadata for both data sources and adapters. For all relational data sources and some non-relational data sources, Attunity Connect uses the native metadata. Otherwise, the metadata is specified to Attunity. See Working with Metadata in Attunity Studio for more information.

Runtime
Attunity Studio is used during runtime to perform the following management tasks:

- Modify the configuration settings.
- Manage daemons and workspaces during runtime (see Runtime Management with Attunity Studio).


Metadata Repository
AIS supports an XML-based schema. The schema and the AIS configuration are stored on the server file system and represent the repository. There is a single main repository for each installation of AIS. The repository maintains server-wide definitions (such as the daemon configuration) and application adapter definitions. There is also a repository for each data source which uses Attunity metadata. These repositories are optimized for fast run-time access. These repositories are not restricted by native operating system file naming conventions.

System Repository
The system repository is used for information that is general to AIS, such as:

- Binding information, including the names of configured backend adapters and drivers, and environment settings.
- Daemon definitions, to control client-server communication.
- User profiles, enabling single sign-on to multiple backend applications and data sources.
- Information used directly by the Query Processor.
- An adapter definition for each adapter defined. For each adapter, this includes a list of its interactions and the input and output structures that are used by these interactions.

The system repository is called SYS.

Data Source Repositories


Each data source can have a repository. For data sources that do not have native built-in metadata information (such as nonrelational data sources), Attunity has developed a proprietary metadata mechanism that enables users to describe the data in a standard way. The information in the repository includes:

- Attunity metadata for non-relational data sources, files, and Attunity Connect procedures; extensions of the native metadata of the data source (the extended metadata feature); and a snapshot of native metadata for better performance (the local copy feature).
- Synonym definitions for some of the data sources.

Attunity Configuration Model


Attunity Connect takes advantage of daemons, client software, and server software to support seamless operations across both local and remote distributed environments.

Daemons
A daemon, called IRPCD, runs on every machine running AIS and manages communication between machines running AIS. The daemon is responsible for allocating server processes to clients. The daemon authenticates clients, authorizes requests for a server process within a certain server workspace, and provides clients with the required servers. When a client requests a connection, the daemon allocates a server process (or, where applicable, a thread) to handle this connection, and refers the client to the allocated process. This may be a new process (dedicated to this client) or an already-started process. Further communication between the client session and the server process is direct and does not involve the daemon. The daemon is notified when the connection ends and the process is either killed or remains available for use by another client.

The daemon supports multiple server configurations called workspaces. Each workspace defines accessible data sources, applications, environment settings, security requirements, and server allocation rules. The allocation of servers by the daemon is based on the workspace that the client uses to access the data source. Thus, a client can access a data source via one workspace, where a server process is allocated from an existing pool of servers, or the client can access a data source via a different workspace, where a new server process is allocated for each client request. A fail-safe mechanism allows the specification of alternate daemons, which function as standbys for high availability.

Client Communication Software


Within Attunity Connect's symmetrical operation, clients serve as agents that request remotely located data. Client software, which includes one or more application-specific interfaces, resides on every system that needs to interact with data. To the calling application, clients look like local data providers. They receive requests for data and metadata and either execute those requests locally or dispatch them to an appropriate server.

The communication protocol minimizes processing and traffic by negotiating data formats (for example, when similar systems talk, there is no need to switch data formats) and by avoiding repeated transmission of the same data over the network. The communication subsystem handles machine-dependent transformations such as big/little endian translations, floating point format translations, and single- and multi-byte character encoding translations. Attunity Connect dynamically determines these translations upon initial connection between a client and a server, based on the nature of the parties involved in the connection.

Clients maintain caches of data and metadata, which enable them to satisfy many requests locally without needing to go to the server. They also batch some commands in order to avoid unnecessary network traffic. Upon the first remote request by a particular user session, a client starts a corresponding session for this user on one or more servers. Each server session remains open until the client session terminates. In the case of systems using connection pooling (such as MTS or IIS), the client and server sessions may stay open indefinitely. In the event that server operations terminate (for example, someone restarts the server machine or communication is lost), the client automatically reestablishes the connection upon the next remote operation.

Server Communication Software


Isolating different server configurations increases readability and flexibility. These configurations enable solutions to serve different clients, including classic two-tier client-server applications, three-tier applications with connection pooling, and ad-hoc usage. Each configuration specifies the following:

- Processing mode, specifying multi-threaded/single-threaded operations, the number of processes, and server pools.
- Security settings for impersonation, authorized users, administrators, and encryption.
- Accessible adapters.
- Various operational parameters.

AIS also supports multi-version interoperability. IT teams can simultaneously install multiple versions of AIS on all supported operating systems. As a result, organizations can begin to deploy new software versions in a staged update process, without interrupting the operations of previous versions.

At the provider end of the system, servers act as agents that access, read, manipulate, and write to data sources and applications. Servers accept commands from clients, call the corresponding local functions, and package and return the results to the clients.
Note: In the event that a target data source actually resides on another machine, the source is represented by an agent on the server using a third-party communications component, such as Oracle Net Servers or Sybase CT-Lib, which is transparent to Attunity Connect.

Server software, which includes the application and data engines, drivers and adapters, resides on every AIS machine. When a client requests a connection, the daemon allocates a server process to handle this connection, and refers the client to the allocated process. This may be a new process (dedicated to this client) or an already-started process. Further communication between the client session and the server process is direct and does not involve the daemon. The daemon is notified when the connection ends and the process is either killed or remains for use by another client. This kind of server model is very flexible. It accommodates different operating systems and data source requirements. AIS supports several server models:

- The multi-threaded model is effective when the data sources support multi-threading (on Windows platforms only).
- Serialized multi-client server processes are useful for short requests and for data sources that allow more than one simultaneous client per process.
- The single-client-per-process model supports data sources that can only handle one client per process, and maximizes client isolation.

Server processes can be reused. Various server process pooling options allow organizations to tune the solution for different application and load requirements.



11
Attunity Integration Suite Architecture Flows
This section contains the following topics:

- Overview
- Data Source Architecture
- Application Adapter
- Database and Query Adapter
- Change Data Capture (CDC) Flow

Overview
This section provides architectural drawings that show the basic flows used when you deploy AIS. AIS provides solutions for integrating data between data sources and through applications. In addition, you can also work with change data capture (CDC). The basic AIS flows are:

- Integration using a relational data source
- Integration using a non-relational data source (file system)
- Integration using generic database or query adapters
- Application-based integration
- Change data capture

Data Source Architecture


There are two types of data source flows: those using a data source and those using the query engine.

- Data Source
- Query Engine Flow


Data Source
The following figure shows the flow architecture for integrating data with relational data sources.
Figure 11-1 Relational Data Source Flow

The following table describes the parts of the relational data flow, from the client machine to the server machine. The client side shows the APIs used for Java and other applications. For more information on working with data access, see Implementing a Data Access Solution.
Table 11-1 Relational Data Flow Parts

Client Platform

Java-based applications:
- Consumer Application in Java: An application written in Java. This is a Java application being run by the user on the client machine.
- JDBC API: The Java programming interface that allows external access to SQL queries.
- NAV API: The standard programming interface for AIS.

Non-Java applications:
- Consumer Application: Any application not written in Java. This is an application being run by the user on the client machine.
- Data API: An interface that supports data and allows SQL queries.

- NAV API: The standard programming interface for AIS.
- Query Engine and Optimizer: The query engine parses the SQL and creates a statement that is read by multiple data sources, if necessary. The query optimizer refines the SQL statement so that it is most efficient and takes the least amount of time to send and receive the results. The way that a statement is handled by AIS depends on the data source types that are queried; AIS queries relational and non-relational data sources differently. For a more detailed explanation of this part of the data flow, see Query Engine Flow.

Server Platform
- IRPCD (Daemon): The process that manages communication between all the machines in the flow.
- Service Loop: The service loop is: get request, handle request, revise request, and continually repeat the process.
- Driver: The connecting driver for your data source. Attunity supplies many data source drivers. For more information, see the Data Source Reference.
- GDB API: The Attunity data driver API. For more information, see The Attunity Connect Developer SDK.
- Custom Driver: A driver created for a specific data source that is not one of the drivers supplied with AIS.
- Database API and Database: The API for the database on the backend and the actual database used in this flow. The code on this level belongs to the database.

Query Engine Flow


AIS handles requests differently depending on whether you are using relational data sources, non-relational data sources, or both types. Attunity allows you to make SQL queries to any type of database. The following sections show how the request is handled by the Query Processor for each type of request.

- Making a Request between Two Relational Databases
- Making a Request between Two Non-Relational Databases
- Making a Request between a Relational Data Source and a Non-Relational Data Source


Making a Request between Two Relational Databases


The following figure shows a tree that traces an SQL query on two Oracle (relational) databases. In this case, the SQL statement requests a left outer join (LOJ) on the two Oracle tables. The query engine and optimizer carry out the request and return a result set based on the table columns in the statement.

Figure 11-2 Two Relational Data Sources


Making a Request between Two Non-Relational Databases


The following figure shows a tree that traces an SQL query on two DISAM (non-relational) data sources. In this case, the SQL statement requests an LOJ on the two DISAM files. The query engine in this case splits into two branches. The first branch looks up the information in the first file. The second branch uses the information in the SQL statement to carry out a filtering action and then looks up the information in the second file according to the filter's rules.

Figure 11-3 Two Non-Relational Data Sources

Making a Request between a Relational Data Source and a Non-Relational Data Source
The following figure shows a tree that traces an SQL query on a DISAM (non-relational) data source and an Oracle (relational) data source. In this case, the SQL statement requests an LOJ on the DISAM file and the Oracle table. The query engine splits into two branches. The first branch looks up the information in the DISAM file. The second branch uses the information in the SQL statement to carry out a filtering action and then looks up the information in the table using the filtering rules to find the correct columns.
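For illustration, the kind of statement traced in these trees can be issued through any of the SQL interfaces described in this guide. The following sketch assumes an already-open Connection and uses the datasource:table qualification shown elsewhere in this guide (for example, adabas:table1); the binding names (oradb, disamdb) and the table and column names are invented:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CrossSourceJoinSketch {
    // Left outer join spanning an Oracle data source and a DISAM data
    // source defined in the same binding; the query processor splits
    // the statement between the two sources as described above.
    public static void printJoin(Connection conn) throws SQLException {
        String sql = "SELECT c.cust_name, o.order_id"
                   + " FROM oradb:customers c"
                   + " LEFT OUTER JOIN disamdb:orders o"
                   + " ON c.cust_id = o.cust_id";
        Statement stmt = conn.createStatement();
        try {
            ResultSet rs = stmt.executeQuery(sql);
            while (rs.next()) {
                System.out.println(rs.getString(1) + " | " + rs.getInt(2));
            }
        } finally {
            stmt.close();
        }
    }
}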


Application Adapter
The following figure shows the flow architecture for integrating data between applications.
Figure 11-4 Application Flow

The following table describes the parts of the application flow. The client side is always a thin client and contains the application. The server side contains the main AIS installation and the necessary adapters. For more information on using application access, see Implementing an Application Access Solution.
Table 11-2 Application Flow Parts

Client Platform
- Consumer Application: An application being run by the user on the client machine. The application can be 3GL, JCA, COM, or XML.
- ACX Gateway: A gateway based on the ACX protocol. For more information on the ACX protocol, see the Attunity Connect Developer SDK.
- ACX Client: A client based on the ACX protocol. This is always a thin client.

Server Platform
- IRPCD (Daemon): The process that manages communication between all the machines in the flow.


- Service Loop: The service loop is: get request, handle request, revise request, and continually repeat the process.
- ACX Dispatcher: The ACX-based program that sends XML queries to the application.
- Adapter: The adapter that is used to provide access to your application.
- GAP API: The Attunity Application API. For more information, see The Attunity Connect Developer SDK.
- Custom Adapter: An adapter created for a specific application that is not one of the adapters supplied with AIS.
- Application and Application API: The actual application used in this flow and its API. The code on this level belongs to the application.

Change Data Capture (CDC) Flow


The following figure shows the flow architecture for change data capture. The color coding in the diagram indicates each part's origin (Attunity, third party, or both, depending on the CDC agent used). For more information on the parts of the flow for each CDC agent, see the CDC Agents Reference.

Figure 11-5 Change Data Capture Flow

The following table describes the parts of the change data capture flow. Change data capture captures changes to databases and writes them to a log file. For more information, see Implementing a Change Data Capture Solution.


Table 11-3 Change Data Capture Flow Parts

- Data Source: The data source where the changes are made. The user application makes the changes to the data source.
- Change Logger: The tool that identifies and captures the changes from the data source.
- Change Source: A data source that saves raw change records.
- Change Provider: An Attunity Stream component that produces Attunity Stream change records (change stream).
- Change Agent: A change provider that retrieves raw change records (raw change stream) from the change source and produces Attunity Stream change records (change stream).
- Change Warehouse: A change provider that provides value-added services on a change stream. This element is optional. The Change Warehouse produces a change stream.
- Change Client: An Attunity component that provides change stream records to a change consumer.
- Change Consumer: A third-party tool or application that consumes change stream records. Examples include ETL and EAI tools and messaging systems.

Database and Query Adapter


The following figure shows the flow architecture when integrating data with the Query or Database Adapter.
Figure 11-6 Query and Database Adapter Flow

The figure above shows how the query and database adapters handle a request. The flow has two parts. The top (above the dotted line) is the client side. The client side has the application that is making a request to the adapter. The application is based on the ACX protocol. For information on the ACX protocol, see the Attunity Connect Developer SDK.

The request is sent to the adapter on the server. If you are using a database adapter, the request is phrased in a standard XML format, which is converted to an SQL query. If you are using a query adapter, the SQL is entered directly.
Note: The database adapter is not part of the flow if you use the Query adapter.

The SQL query is sent to the query processor, which sends it to the data source. The data source returns the results of the query, which are sent back to the application. In the figure above, the left side shows the input requests and the right side shows the output results for each phase of the flow.



12
Implementing a Data Access Solution
This section includes the following topics:

- Overview
- Setting Up AIS for Data Access
- Installing AIS
- Configuring the System for Data Access (Using Studio)
- Supported Interfaces
- Data Access Flow
- Data Source Metadata

Overview
A data access solution uses data source drivers to let you connect to a supported data source from JDBC or ODBC applications on all platforms. Attunity data sources support ADO and .NET applications on Windows platforms. The following data source drivers are provided:
Table 12-1 Data Source Drivers

Relational:
- DB2 Data Source
- Informix Data Source
- Ingres II (Open Ingres) Data Source
- ODBC Data Source
- OLEDB-SQL (Relational) Data Source
- Oracle Data Source
- Oracle RDB Data Source (OpenVMS Only)
- SQL/MP Data Source (HP NonStop Only)
- SQL Server Data Source (Windows Only)
- Sybase Data Source

Non-relational:
- Adabas C Data Source
- CISAM/DISAM Data Source
- DBMS Data Source (OpenVMS Only)
- Enscribe Data Source (HP NonStop Only)
- Generic Flat File Data Source
- IMS/DB Data Sources
- OLEDB-FS (Flat File System) Data Source
- RMS Data Source (OpenVMS Only)
- Text Delimited File Data Source
- VSAM Data Source (z/OS) under CICS and VSAM (z/OS only)


If you are working with a data source for which Attunity does not supply a driver, Attunity supplies an SDK to allow you to develop a driver for your data source. For details, see the Attunity Developer SDK reference.

Setting Up AIS for Data Access


This section provides the general workflow when using a data access solution in Attunity Connect.
1. Install AIS on the Backend. The backend consists of the machines where the databases are located. You also install the data source drivers here.
2. Install the Attunity Server Software.
3. Install Attunity Studio.
4. Configure the Machines where you installed AIS and the data sources in Attunity Studio.
5. Configure User Profiles.
6. Set up and Configure the Binding for the data source.
7. Configure the Data Sources in the Binding.
8. Set Up the Data Source Metadata.

Installing AIS
Before you begin the process, you must install the necessary components: the AIS client and server, the data source drivers, the Attunity server software, and Attunity Studio. This section shows where to install the AIS components necessary for a data access solution.

Install AIS on the Backend


The backend is where your data is stored. In a data access solution, you access a data source, such as an Oracle or DB2 database, to get the necessary data. You install the full (thick) version of AIS on the backend. You must also install the necessary data source drivers on the backend. For information on how to install AIS, see the installation guide for your platform.

Install the Attunity Server Software


Install the main AIS software on your main integration server machine. For information on installing the Attunity server software, see the installation guide for your platform.

Install Attunity Studio


Install Attunity Studio on any Windows computer in your system. Attunity Studio provides a graphic interface for configuring the components in your system. For information on installing Attunity Studio, see the installation guide.


Configuring the System for Data Access (Using Studio)


You perform all the necessary setup and configuration in Attunity Studio. This section describes the steps necessary to use Attunity Studio to configure your system to work with data access.

Configure the Machines


You must configure the machines used in the system. Make sure to configure the machine where your backend data source and data source driver reside. For information on how to add and configure machines, see Setting up Machines. You can also test the machine connection in Attunity Studio.

Configure User Profiles


You must set up the users in your system. Setting up users is for security purposes. You can specify which users have access to various machines. For information on setting up user profiles, see User Profiles and Managing a User Profile in Attunity Studio.

Configure the Binding


You add bindings to Attunity Studio before you can add the data source. A Binding Configuration always exists on a server machine where the data sources to be accessed reside. A binding configuration can also be defined on a client machine to point to data sources on a server machine.

Configure the Data Sources in the Binding


You add Data Sources to a binding in Attunity Studio. After you add the data source, you define the connection and configure the data source's environment properties. For a description of the environment properties for each data source, see the section for the data source you are working with. You may need to create a data source shortcut. A data source shortcut is a definition of a data source on a machine that refers to a data source in the binding of a remote machine. You need a data source shortcut when you access the data source from a client machine using either ADO or the "fat" version of ODBC.

Set Up the Data Source Metadata


Non-relational data sources (excluding Adabas, when Predict metadata is used) require Attunity metadata. This Attunity metadata is stored as a data source definition in an Attunity data source repository, on the machine where the driver is defined. AIS metadata lets you access the data from a non-relational database with SQL commands. For data sources that require AIS metadata, the metadata is imported and maintained using the Attunity Studio Design perspective Metadata tab. If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks, with different filter settings (such as whether the first 6 columns are ignored or not), you import the metadata from copybooks with the same settings and later import the metadata from the other copybooks. For more information, see Managing Metadata.


Supported Interfaces
AIS Data Sources provide universal connectivity through many standard interfaces. In addition, Attunity also provides generic drivers for various interfaces, such as ODBC and JDBC. Attunity's universal connectivity uses standard ANSI 92 SQL to query any supported data source, whether relational or non-relational. Specific versions of SQL for relational data sources can be used by making the SQL query directly to the data source. The following interfaces are supported by AIS:

- JDBC Client Interface: A pure-Java Type-3 driver that supports J2EE JDBC features (such as data sources, distributed transactions, hierarchical record sets, and connection pooling). The JDBC interface is available on all platforms that support Java.
- ODBC Client Interface: The ODBC interface enables organizations to use the API of choice for most popular client-server business intelligence tools. The ODBC interface implements the ODBC 2.5 and ISO CLI standards, so that COBOL and other 3GL programs on any platform can call it. The ODBC interface is available on all platforms running AIS.
- ODBC Client Interface Under CICS (z/OS Only): The ODBC interface on a mainframe is accessed through either a COBOL or C program running under CICS. On the z/OS machine you do not need to run the daemon if the daemon is running on the target machine.
- OLE DB (ADO) Client Interface: An OLE DB/ADO interface that supports advanced features, including chapters, scrollability, and multi-threading. The OLE DB/ADO interface is compatible with all Microsoft tools and applications. This provider also functions as a database gateway for Microsoft SQL Server, allowing SQL Server users to access all supported data sources. The OLE DB/ADO interface is available on Microsoft Windows platforms.

  Note: The Windows Attunity Server kit is required to use the ADO client interface.

- .NET Client Interface: ADO.NET is the data-access component of the Microsoft .NET Framework. AIS supports all ADO.NET objects, methods, and properties, as well as additional .NET Data Provider classes for specific use with AIS.
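As a minimal client-side illustration, the following sketch opens a JDBC connection and runs an ANSI 92 query against the demo customer table shown later in this chapter (see Example 12-1). The driver class name, connection URL format, host, port, workspace name, and credentials are assumptions for illustration only; consult the JDBC client installation documentation for the actual values.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical driver class and URL format.
        Class.forName("com.attunity.jdbc.NvDriver");
        Connection conn = DriverManager.getConnection(
            "jdbc:attconnect://myhost:2551/Navigator", "user", "password");
        try {
            Statement stmt = conn.createStatement();
            // Standard ANSI 92 SQL, regardless of the backend data source.
            ResultSet rs = stmt.executeQuery(
                "SELECT c_custkey, c_name FROM customer");
            while (rs.next()) {
                System.out.println(rs.getInt("c_custkey")
                    + " " + rs.getString("c_name"));
            }
            rs.close();
            stmt.close();
        } finally {
            conn.close();
        }
    }
}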


Data Access Flow


When you use the data access solution, you connect directly to a data source with a driver. The connection is made through one of the Supported Interfaces. Information is accessed through standard Attunity metadata from the data source driver using standard SQL syntax (for more information, see Attunity SQL Syntax). The following figure shows the data access solution flow.

Figure 12-1 Data Access Flow

Data Source Metadata


Metadata defines the structure of the data and where it is located. AIS converts the metadata to an XML structure that you can view in Attunity Studio. When you configure the data source driver you must import the metadata from the original data source. AIS uses the metadata from relational databases, such as Oracle or DB2. For most non-relational data sources or data sources which do not have metadata, AIS requires its own metadata. This allows AIS to structure the data for these data sources into a table structure similar to a relational database. This lets you use SQL requests to access data from these data sources. For more information on metadata, see Managing Metadata. The following is an example of the metadata in a standard AIS data source adapter.
Example 12-1 Data Source Metadata

<?xml version="1.0" encoding="UTF-8"?> <table name="customer" datasource="___NAVDEMO" description="" fileName="C:\Program Files\Attunity\Server\demo\customer" nBlocks="0" nRows="0" bookmarkSize="4" organization="index"> <fields> <field name="c_custkey" datatype="int4"/> <field name="c_name" datatype="cstring" size="25"/>

Implementing a Data Access Solution 12-5

<field name="c_address" datatype="cstring" size="40"/> <field name="c_nationkey" datatype="int4"/> <field name="c_phone" datatype="string" size="15"/> <field name="c_acctbal" datatype="double"/> <field name="c_mktsegment" datatype="string" size="10"/> <field name="c_comment" datatype="cstring" size="117" nullable="true"/> </fields> <keys> <key name="cindex" size="4"> <segments> <segment name="c_custkey"/> </segments> </key> </keys> </table>

Configuring the System for Data Access Using XML


You can use the Update XML, Insert XML, and Select XML commands when using a data access solution for a non-relational data source. These commands are supported on the following data sources:

- Adabas C Data Source
- Enscribe Data Source (HP NonStop Only)
- VSAM Data Source (z/OS)
- CISAM/DISAM Data Source

Set up the Environment in Attunity Studio


Set up the machines, bindings, and user profiles in Attunity Studio, as described in Configuring the System for Data Access (Using Studio).

Expose the XML Field


In the binding that you added, you must configure the binding properties to expose the XML fields.

To expose the XML field:
1. Right-click the binding you are working with and select Edit Binding.
2. In the Binding editor, click the Properties tab (at the bottom of the editor).
3. In the Properties tab, expand Misc.
4. Find the exposeXMLField property, click in the Value column, and select true.
5. Save and refresh the binding.

Note: The binding must be on the server where AIS is installed.

See also: Environment Properties


Set up the Data Source and Import the Metadata


Add the data source you are working with to Attunity Studio and import the metadata as described in Configure the Data Sources in the Binding and Set Up the Data Source Metadata.

Prepare an Input XML Structure


You must prepare an XML string that corresponds to the metadata record. For an SQL interface, such as ADO.NET, the application must refer to a field called xml. This field is available after you Expose the XML Field. The xml field is used as a parameter field that receives a value that is set to your XML string. The following is an example of the required structure:
command.CommandText = "insert into adabas:TABLE1(xml) values (?)"; param.Value = "<TABLE1 SMOD='ABC123' USER='USER4' >" + "<MU1>34</MU1>" + "<MU1>35</MU1>" + "<MU1>36</MU1>" + "<MU2>A</MU2>" + "<MU2>B</MU2>" + "<MU2>Z</MU2>" + "</TABLE1>";

Setting up the Query and Database Adapter for XML Operations


You can use the Query or Database adapter to carry out XML operations. The XML operations that are supported are:

- Update XML
- Insert XML
- Select XML

Using the Query Adapter for XML Operations


Use the Query adapter to access data using the XML interface. The following example shows how to use the Update XML operation. In this case, use the Query adapter's Update interface with the following syntax:

Example 12-2 Update XML

<update>
  <sql>update adabas:table1 set xml=? where ISN='13'</sql>
  <inputParameter>
    <TABLE1 INT="400">
      <PE1 FLOAT4="23.67" FIXED2="1500" PACKED_DEC="898900" character="XYZ"/>
      <PE1 FLOAT4="24.67" FIXED2="600" PACKED_DEC="898900"/>
      <PACKED_DEC_ARRAY>409</PACKED_DEC_ARRAY>
      <PACKED_DEC_ARRAY>299</PACKED_DEC_ARRAY>
    </TABLE1>
  </inputParameter>
</update>


The Input XML Record


You can use the Query or Database adapter to receive a record in XML format that represents the data you requested using one of the valid XML operations. The following is the input that is required to receive the record:
<query metadata="true" outputFormat="xml"> select xml from adabas:table1 limit to 1 rows </query>

Notes:
- Using the Attunity XML Utility lets you enter the input and receive the record.
- The recommended method for executing an XML operation is to perform the select operation (for example, a select Update operation), then update the output XML, and then set the operation.
- If subfields are defined in the metadata with one of the XML operations, the subfields must match their original field, or the subfield will have an invalid offset.


13
Setting up Data Sources and Events with Attunity Studio
This section contains the following topics:

- Data Sources
- Events

Data Sources
Data Sources are added to Bindings in Attunity Studio. A data source driver is used to connect directly to various data sources from JDBC and ODBC applications on all platforms, and from ADO and .NET applications on Windows. You use data sources when Implementing a Data Access Solution in Attunity Connect. The following sections describe how to set up data sources in your system.

Adding Data Sources


To use a data source, you need to add it to the system and configure the data source properties. The data source is defined in the Design perspective Configuration view in Attunity Studio.

To define the connection:
1. Open Attunity Studio.
2. In the Design Perspective Configuration View, expand the Machine folder and then expand the machine where you want to add the data source.
3. Open the Bindings folder for the machine where you want to add the data source.
4. Expand the binding with the data source you are working with.
5. Right-click the Data source folder and select New Data Source. The New Data Source wizard opens.

6. Enter the following information in this window:
   - Name: Enter a name to identify the data source.
   - Type: The data source type (see the next step).
7. Select the data source type that you want to use from the Type list. The available data sources are described in the adapter reference section.


8. Click Next. The Data Source Connect String page opens.
9. Enter the information requested in this step. The information required depends on the type of data source selected. See the Data Source Reference for more information on how to configure each data source.
10. Click Finish.

Note: The above procedure is also used to add a procedure data source. To add a procedure, select a procedure driver as your data source driver. For more information, see Configuring the Procedure Data Source.

To configure a data source:
1. Right-click the data source you are working with and select Open. The data source configuration opens in the editor.
2. Click the Configuration tab.
3. Make changes to the data source configuration as required. This tab can have three sections:
   - Connection: This section lets you make changes to the information that you entered in the Connect String page of the New Data Source wizard.
   - Authentication: This section lets you edit the authentication information for data sources that require it. The following data sources have the Authentication section: Adabas (ADD), DB2 (all types), Ingres, ODBC, OLEDB (all types), Oracle, Oracle RDB, SQL Server, Sybase, and VSAM (CICS).
   - Properties: This section lets you change the default values for the configuration properties for your data source.

For information on how to enter the configuration information, see the information for your data source in the Data Source Reference.

Configuring Data Source Advanced Properties


You configure the advanced properties for a Data Source in the Advanced tab of the data source editor. The advanced settings are the same for every data source. Advanced settings let you do the following:


- Define the transaction type
- Edit the syntax name
- Provide a table owner
- Determine whether a data source is updateable or read-only
- Provide repository information
- Set the virtual view policy

To configure data source advanced properties:
1. Open Attunity Studio.
2. In the Design Perspective Configuration View, expand the Machine folder and then expand the machine where you want to configure the data source.
3. Expand the Data sources folder, right-click the data source you are configuring, then select Open.
4. Click the Advanced tab and make the changes that you want. The table below describes the available fields.
Table 13-1 Data Source Advanced Configuration

Properties

- Transaction type: The transaction level (0PC, 1PC, or 2PC) that is applied to this data source, no matter what level the data source supports. The default is the data source's default level.
- Syntax name: A section name in the NAV.SYN file that describes SQL syntax variations. The default syntax file contains the following predefined sections:
  - OLESQL driver and the SQL Server 7 OLE DB provider (SQLOLEDB): syntaxName="OLESQL_SQLOLEDB"
  - OLESQL driver and JOLT: syntaxName="OLESQL_JOLT"
  - Rdb driver and Rdb version: syntaxName="RDBS_SYNTAX"
  - ODBC driver and EXCEL data: syntaxName="excel_data"
  - ODBC driver and SQL/MX data: syntaxName="SQLMX_SYNTAX"
  - ODBC driver and SYBASE SQL AnyWhere data: syntaxName="SQLANYS_SYNTAX"
  - Oracle driver and Oracle case-sensitive data: syntaxName="ORACLE8_SYNTAX" or syntaxName="ORACLE_SYNTAX". For case-sensitive table and column names in Oracle, use quotes (") to delimit the names. Specify the case sensitivity precisely.
- Default table owner: The name of the table owner that is used if an owner is not indicated in the SQL.
- Read/Write information: Select one of the following:
  - Updateable data: Select this if you want to be able to update the data on the data source.
  - Read only data: Select this to allow users to only view the data on the data source.

Repository Directory

- Repository directory: Enter the location for the data source repository.
- Repository name: Enter the name of a repository for a data source. The name is defined as a data source in the binding configuration. It is defined as the type Virtual and is used to store AIS views and stored procedures for the data source, if required, instead of using the default SYS data.

Virtual View Policy

- Generate sequential view: Select this to map a non-relational file to a single table. This parameter is valid only if you are using virtual array views. You configure virtual array views in the Modeling section of the binding Environment Properties.
- Generate virtual views: Select this if you want to have an individual table created for every array in the non-relational file. This parameter is valid only if you are using virtual array views.
- Include row number column: Select this to include a column that specifies the row number in the virtual or sequential view. This parameter is valid only if you are using virtual array views.
- All parent columns: Select this for virtual views to include all the columns in the parent record. This parameter is valid only if you are using virtual array views.

Testing a Data Source


You can run a test procedure from Attunity Studio to determine whether the data source connection is active. This procedure pings the server and returns the result in a Test screen.

To test a data source:
1. Right-click the data source and select Test.
2. Select the active workspace from the drop-down list.
3. Click Next. The next page in the wizard opens with the results. A successful result indicates that the connection is active. If there is a problem, an error message that describes the problem is displayed.

Creating a Data Source Shortcut (Optional)


A data source shortcut points to the location of a data source that is defined in a binding on another machine. You need a data source shortcut when you access the data source from a client machine using either the OLE DB (ADO) Client Interface or the "fat" version of the ODBC Client Interface (when the server installation is used as a client). A thin ODBC client is available that does not require a data source shortcut defined on the client machine.

To create a data source shortcut:
Drag-and-drop a data source from the machine where the data source is defined to the machine where you want the shortcut, or do the following:
1. In the Design Perspective Configuration View, expand the machine where the data source shortcut should be defined.
2. Expand the Bindings folder and then expand the binding where you want to add the data source.
3. Right-click Data Sources and select New Data Source Shortcut. The New Data Source Shortcut wizard opens.

Figure 13-1 New Data Source Shortcut Wizard

4. Select the machine where the target data source is defined:
   - If the machine is defined in Attunity Studio, select Machine from Configuration view and select the machine with the target data from the drop-down list.
   - If the machine is defined in the binding as a remote machine (for more information, see Setting up Machines), select Machine defined in current binding remote machines list and select the machine from the list.
   - If you also want to add the machine where the data source resides to the list of machines in the Configuration view, select New machine. After you click Next, the Add Machine screen opens, where you can add the machine (see Setting up Machines).

5. Click Next. The machine access information screen opens.



Figure 13-2 Machine Access Screen

6. Enter the following information in this screen:
   - Physical Address: The physical address of the machine with the data source. You cannot edit this field.
   - Alias in binding: The alias given to the machine for the binding.
   - Port: The machine's port number.
   - Workspace: Select the workspace that is used to access the data source.

7. Click Next. The runtime security information screen opens.


Figure 13-3 Runtime Security Information Screen

8. Add the following information about the machine security in this screen:
   - User name: Enter the user name of a user who has rights to access this data source on this machine.
   - Password: Enter the user's password.
   - Confirm password: Re-enter the user's password to confirm it.
   - Encryption protocol: Select the encryption protocol (if any) used for this connection. For information on encryption in AIS, see Encrypting Network Communications.
   - Firewall protocol: Enter the firewall protocol (if any) used for the machine with the data source. For information on using a firewall in AIS, see Accessing a Server through a Firewall.

   This user name and password information is written in the user profile associated with the binding (the user profile with the same name as the binding). For details about user profiles, see Managing a User Profile in Attunity Studio.
9. Click Next. The list of available data sources opens. Select the data source for which you are creating a shortcut and click Finish.
10. Select the Alias in binding check box and provide an alias for the name if the data source name is used as a data source in the binding on the client machine. The data source shortcut is displayed in the Configuration View.

Testing Data Source Shortcuts (Optional)


You can run a test procedure from Attunity Studio to determine whether the data source shortcut is active. This procedure pings the server and returns the result in a Test screen.

To test a data source shortcut:
1. Right-click the data source shortcut and select Test. The active workspace is displayed in a read-only field. The test procedure pings the server with the selected workspace when the test is executed.
2. Click Next. The next screen in the wizard opens with the results. A successful result indicates that the connection is active. If there is a problem, an error message that describes the problem is displayed.

Events
Attunity Studio uses event queues to handle Events. The Event Queue is defined as an adapter in Attunity Studio where interactions are expected as events.

Adding Event Queues


The Event Queue adapter is defined in a binding using Attunity Studio, in the Design perspective Configuration View.

To define an event queue:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machine folder and then expand the machine where you want to add the event.
3. Expand the Bindings folder and then the binding where you want to add the event.
4. Right-click the Events folder and select New event. The New Event editor opens.
5. Enter a name for the event queue.

   Notes:
   - The name must be less than 32 characters.
   - You cannot use the word "event" in the event name.

6. Select the type of queue that you are using to handle events:
   - Event Queue: Handles events using a regular event queue adapter.
   - Tuxedo Queue: Handles events with a Tuxedo Queue adapter.
   - CICS Queue: Handles events with a CICS Queue adapter (for z/OS systems only).

7. Click Finish. A message appears asking if you want to change the daemon configuration. Click OK to accept the changes made in this procedure, or Cancel to close without confirming the changes. A new editor opens in the Editor section of the Studio workbench. The name of the editor is the name of the binding where you added the event queue.

After you finish adding the event queue to the Studio, follow these steps to make changes to the event queue's adapter properties.

To edit the event queue's adapter properties:
1. In the Design perspective Configuration view, expand the Machine folder and then expand the machine with the event, expand the Events folder, then right-click the event and select Open. The adapter configuration properties open in the editor.

   Note: If the event editor is open, go to the next step.

2. The following properties are available for the adapter queue:
   - addExecuteTimestamp: Do not change the setting for this property.
   - remoteExecuteCompress: Do not change the setting for this property.
   - remoteExecuteUrl: Do not change the setting for this property.
   - routers: A list of users to whom an event can be routed. If the owner of the target is not one of these users, the event is not routed to the user. To add routers, expand this property and right-click users. A new entry called Item(#) is added to the Property column. In the Value column, enter the user name for this router.
   - senders: A list of users who can send an event. If the owner of the event is not one of these users, the event is ignored. To add senders, expand this property and right-click users. A new entry called Item(#) is added to the Property column. In the Value column, enter the user name for this sender.

Defining Metadata for Event Queues


You use Metadata to define the Interactions for the Event and the structures of records used by the interactions. The metadata is stored as an application adapter definition in the SYS Repository, on the machine where the adapter is defined. To define event queue adapter metadata 1. In the Design perspective Configuration view, right-click the event queue adapter where you want to edit the metadata and select Show Metadata View. The Metadata view becomes active with the event queue adapter displayed in the Metadata view.
2.

Expand the event queue adapter, then right-click Imports and select New Import. The Metadata import screen opens in the editor.

3. 4.

Enter a name for the import. The name can contain letters, numbers and the underscore character. Select one of the following from the drop-down list:

Event Queue Import Manager Using COBOL Copybook Files Event Queue Import Manager Using Tuxedo VIEW/FML Files as the import type

5. 6.

Click Finish. The Metadata Import wizard opens. Click Add in the import wizard to add COBOL copybooks or BEA Tuxedo VIEW/FML files. The Add Resource window opens, which lets you select files from the local machine or get them from an FTP site. If the files are on another machine, you can add the FTP site.


Figure 13-4 Add Resource

To add an FTP site:
- Right-click My FTP Sites and select Add.
- In the Add FTP site screen, enter the server name where the COBOL copybooks reside. If you are not using anonymous access, enter a valid username and password. You can then browse and transfer the files required to generate the metadata. The machine is accessed using the username as the root directory (high-level qualifier on z/OS systems).

Note: After you access a machine, you can right-click the machine and select Change Root Directory to change the high-level qualifier.

7. From the Add Resource screen, select the files to transfer and click Finish. The selected files are displayed in the wizard.

   Note: You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. Each COBOL copybook format must be the same. Therefore, if you try to import a COBOL copybook that uses the first six columns together with one that ignores the first six columns, you must repeat the import.


Figure 13-5 Input Files

8. Click Next. The Apply Filters editor opens. Use this editor to apply filters, if necessary.

   Note: You can only use filters with COBOL. If you are using Tuxedo VIEW/FML files, go to the next step.

Figure 13-6 Apply Filters

The following filters are available:
- COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor.
- Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
- Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
- Prefix nested columns: Prefix all nested columns with the previous level heading.
- Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
- Case sensitive: Indicates whether to consider case sensitivity.
- Find: Searches for the specified value.
- Replace with: Replaces the value entered in the Find field with the value specified here.
9. Click Next to open the Add Events editor.

Figure 13-7 Add Events

10. Click Add to add events. You can change the default name that is specified for the event. The event mode is async-send, which means that the event queue waits for an input. You specify an input record used by the program associated with the event from the drop-down list in the Input column. This list is generated from the input files specified at the beginning of the procedure. Select a relevant record for the event.
11. Add as many events as necessary and click Next.


12. Click Next twice to open the Import Metadata screen. In this screen you generate the metadata. You can import the metadata to the mainframe machine or leave the generated metadata on the Attunity Studio machine, to be imported later.

Figure 13-8 Import Metadata

13. In the Deploy Metadata section, select Yes if you want to transfer the metadata to the server where the application adapter is defined.
14. Click Finish.

Note: After you import the metadata, you can view the metadata in the Metadata tab. You can also make any adjustments to the metadata and maintain it, as necessary. For more information, see Working with Metadata in Attunity Studio.



14
Procedure Data Sources
This section includes the following topics:

- Procedure Data Sources Overview
- Configuring the Procedure Data Source

Procedure Data Sources Overview


A procedure is a user-written DLL that AIS views as a data source returning a single rowset; a procedure thus enables the use of SQL to query the application. A procedure driver enables connecting to a supported AIS procedure, using Attunity SQL, from JDBC and ODBC applications on all platforms, and from ADO and .NET applications on Windows platforms. Drivers are currently provided for the procedures listed in the following table:
Table 14-1 Available Procedures

- CICS Procedure Data Source: A program via a CICS EXCI transaction.
- Natural/CICS Procedure Data Source (z/OS): A NATURAL program via a CICS EXCI transaction.
- Procedure Data Source (Application Connector): Any legacy application. Compare with the Legacy Plug Application Adapter, which accesses the application using JCA, XML, COM, or .NET front-end applications.
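As a usage sketch, once a procedure data source is defined in the binding, its program can be invoked through SQL using the datasource:procedure form that appears elsewhere in this guide (for example, nav_proc:sp_config_datasource). The data source name, program name, and parameter below are invented for illustration:

call cicsprc:ORDERPGM('12345')

The single rowset returned by the procedure can then be consumed like the result of any other query.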

Configuring the Procedure Data Source


You configure a procedure for use with Attunity as follows:

- Adding Procedure Data Sources: The procedure needs to be listed in a binding configuration. The definition in the binding includes properties, set in the procedure editor Properties tab. Each procedure has its own set of specific properties.
- Defining a Shortcut to a Procedure on Another Machine: A procedure driver requires a metadata definition that describes the inputs and outputs for the procedure.


Adding Procedure Data Sources


To use a procedure data source, you need to add it to the system and configure the data source properties. The data source is defined in the Design perspective, Configuration view of Attunity Studio. You can also use the following methods:

Using the NAV_UTIL EDIT command.
Using the NAV_UTIL UPD_DS command.

To add a Procedure data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the machine where you want to add the procedure data source.
3. Expand the Bindings folder, and then expand the required binding configuration.
4. Right-click Data sources and select New Data source.
5. Enter a name for the procedure in the Name field.
6. Select the procedure type from the Type list.
7. Click Next.
8. Enter the connect string to access the procedure. For information on how to enter the connect string, see the information for the procedure data source that you are using in the Procedure Data Source Reference.
9. Click Finish.
The procedure data source is displayed in the Configuration view and the procedure editor is displayed. You can set or change the properties for procedures in the following ways:

Using the Attunity Studio Design perspective Configuration view.
Using the sp_config_datasource stored procedure as follows:
nav_proc:sp_config_datasource(ds_name, <config attribute="value"/>)

You change an environment property by executing a statement with the following format, where proc_name is the name of the procedure in the binding configuration:

call nav_proc:sp_config_environment('proc_name','<config att="value"/>')
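For example, the following hypothetical calls each set a single configuration attribute for a procedure named myproc. The attribute name someProperty is illustrative only; substitute a property documented for your specific data source or environment:

nav_proc:sp_config_datasource('myproc', <config someProperty="someValue"/>)
call nav_proc:sp_config_environment('myproc','<config someProperty="someValue"/>')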

To edit a procedure data source definition
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine with the procedure data source you are using.
3. Expand the Binding folder and then expand the binding with your procedure data source.
4. Expand the Data sources folder.
5. Right-click your procedure data source and select Open, and then click the Configuration tab. This tab should be open by default.


6. Make changes to any of the sections in this tab, if necessary. This tab can have any of the following sections:

Connection: This section lets you make changes to the information that you entered in the Connect String page of the New Data Source wizard. The Natural/CICS Procedure Data Source (z/OS) and the CICS Procedure Data Source require configuration in this section.
Authentication: This section lets you edit the authentication information for data sources that require it. The Procedure Data Source (Application Connector) requires configuration in the Authentication section.
Properties: This section lets you change the default values for the configuration properties for your data source. All of the procedure data sources contain properties to be configured. For information about the properties for each data source, find the data source in the Procedure Data Source Reference.

7. Click Save.

Configuring Data Source Advanced Properties


You configure the advanced properties for a Data Source in the Advanced tab of the data source editor. The advanced settings are the same for every data source. Advanced settings let you do the following:

Define the transaction type
Edit the syntax name
Provide a table owner
Determine if a data source is updateable or read-only
Provide repository information
Set the virtual view policy

To configure data source advanced properties
1. Open Attunity Studio.
2. In the Design Perspective Configuration View, expand the Machine folder and then expand the machine where you want to configure the data source.
3. Expand the Data sources folder, right-click the data source you are configuring, and then select Open.
4. Click the Advanced tab and make the changes that you want. The table below describes the available fields:
Table 14-2 Data Source Advanced Configuration

Field | Description
Transaction type | The transaction level (0PC, 1PC, or 2PC) that is applied to this data source, no matter what level the data source supports. The default is the data source's default level.


Syntax name | A section name in the NAV.SYN file that describes SQL syntax variations. The default syntax file contains the following predefined sections:
- OLESQL driver and the SQL Server 7 OLE DB provider (SQLOLEDB): syntaxName="OLESQL_SQLOLEDB"
- OLESQL driver and JOLT: syntaxName="OLESQL_JOLT"
- Rdb driver and Rdb version: syntaxName="RDBS_SYNTAX"
- ODBC driver and EXCEL data: syntaxName="excel_data"
- ODBC driver and SQL/MX data: syntaxName="SQLMX_SYNTAX"
- ODBC driver and SYBASE SQL AnyWhere data: syntaxName="SQLANYS_SYNTAX"
- Oracle driver and Oracle case sensitive data: syntaxName="ORACLE8_SYNTAX" or syntaxName="ORACLE_SYNTAX". For case sensitive table and column names in Oracle, use quotes (") to delimit the names. Specify the case sensitivity precisely.
Default table owner | The name of the table owner that is used if an owner is not indicated in the SQL.
Read/Write information | Select one of the following:
- Updateable data: Select this if you want to be able to update the data on the data source.
- Read only data: Select this to allow users to only view the data on the data source.
Repository directory | Enter the location for the data source repository.
Repository name | Enter the name of a repository for a data source. The name is defined as a data source in the binding configuration. It is defined as the type Virtual and is used to store AIS views and stored procedures for the data source, if required, instead of using the default SYS data.
Virtual View Policy | This section includes the following fields: Generate sequential view; Generate virtual views (select this to generate virtual views for arrays when importing metadata); Include row number column; All parent columns.


Defining a Shortcut to a Procedure on Another Machine


A shortcut is a definition of a procedure on a machine, where the definition points to the location of the procedure that is defined in a binding on another machine. You need a data source shortcut when you access the procedure from a client machine using either ADO or the "fat" version of ODBC (when the server installation is used as a client).
Note: A thin ODBC client is available that does not require a data source shortcut defined on the client machine.

To create a procedure shortcut
You can drag-and-drop a data source from the machine where the data source is defined to the machine where you want the shortcut, or do the following:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where you want to create the shortcut.
3. Expand the Bindings folder and the binding where you want to add the procedure shortcut.
4. Right-click the Data sources folder and select New Data Source Shortcut. The New Data source shortcut wizard opens, as shown in the following figure:

Figure 14-1 The New data source shortcut screen

5. Select the machine where the target procedure is defined. Do one of the following:

If that machine is defined in Attunity Studio, select Machine from Configuration, and then select the machine from the list. Click Next to open the Add runtime machine access information page.
If the machine was defined in the binding as a remote machine, select Machine defined in current binding remote machines list, and then select the machine from the drop-down list. Click Next to open the Add runtime machine access information page.
If you want to also add the machine where the procedure resides to the list of machines in the tree, select New machine - add the new machine. Click Next; the Add Machine dialog opens to add the machine as described in Setting up Machines. After you add the new machine information, click Next to open the Add runtime machine access information page.

6. In the Add runtime machine access information page, you can change the following information, if necessary:

Alias in binding: Change the name of the shortcut.
Port: Change the port where the original data source is located.
Workspace: Change the workspace for the original data source.

7. Click Next. The Add runtime security information page is displayed. Add security information, if necessary (User name, Password, Encryption protocol, and Firewall protocol).
Note: This information is written to the user profile associated with the binding. For more information, see Managing a User Profile in Attunity Studio.

8. Click Next. The list of available data sources, including procedures, is displayed. Select the required procedure, and click Finish.
Note: If the procedure name is used as a procedure or data source in the binding on the client machine, check the Alias in binding box and provide an alias for the name.

The shortcut is now displayed in the Configuration view.

Defining the Procedure Metadata


Metadata defines the inputs and outputs for the procedure. The metadata is stored as a data source definition in an Attunity procedure repository, on the machine where the driver is defined. All metadata is either imported from relevant input files (such as COBOL copybooks) or created and maintained using the Attunity Studio Design perspective Metadata tab.
Note: Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not reflected in the Attunity metadata.


15
Implementing an Application Access Solution
This section contains the following:

Overview
Setting up AIS for Application Access
Supported APIs
Application Access Flow
Defining the Application Adapter
ACX Protocol
Transaction Support
Generic and Custom Adapters

Overview
An application access solution lets you connect 3GL applications directly. An application, in this context, is any enterprise software component where interactions occur within a transaction context. Applications include:

Internally developed applications
Enterprise packaged products, such as ERP or CRM products
Database systems, such as relational databases or file systems

AIS uses the Attunity Applications Adapter Framework (AAF) to accomplish this. In this framework, an application adapter that directly connects to an application is used instead of a data source. The following application adapters are currently available.
Table 15-1 Application Adapters

Adapter Name | Application
CICS Application Adapter (z/OS Only) | A program via a CICS EXCI transaction
COM Adapter (Windows Only) | Simple COM-based applications
IMS/TM Adapter (z/OS Only) | A program via an IMS/TM transaction
Legacy Plug Application Adapter | Any legacy application
Pathway Application Adapter (HP NonStop Only) | A program via a Pathway transaction
Tuxedo Application Adapter (UNIX and Windows Only) | A program via a Tuxedo service

Note: If the application is not directly supported by one of these adapters, Attunity Connect includes an SDK that enables you to write an adapter for the specific application. For details, see the Attunity Connect Developer SDK.

Setting up AIS for Application Access


This section describes how to prepare your system for an application access solution. To set up your system, follow these steps.

To set up an Application Access solution
1. Install the system components in the proper locations. See Installing System Components for an explanation of what to install and the locations for installing the components.
2. Configure the system for application access. Make the configurations in Attunity Studio. For application access, select an adapter to use for your integration. See Configuring the System for Application Access (Using Studio) for a detailed explanation of what to configure.

Installing System Components


To use AIS for an application access solution, install the following:

AIS on the backend: The backend of the system is where your data is stored. In an application access solution, you access an application, such as CICS, directly to get the necessary data. This is where your application adapter resides. You must install the full (thick) version of AIS on the backend. For information on how to install AIS, see the Installation Guide for the platform you are working with.
AIS for the Application (Thin Client): The application (XML, JCA, COM, or .NET) that you are working with is installed with an AIS thin client. For information on how to install the AIS thin client, see the installation guide for the platform you are working on.
Attunity Studio: Attunity Studio can be installed on any Windows computer in your system. Attunity Studio provides a graphic interface for configuring the components in your system. For information on installing Attunity Studio, see the installation guide.


Configuring the System for Application Access (Using Studio)


You must configure the following system components to set up your system for application access:

Machines: You must configure the machines used in the system. Make sure to configure the machine where your backend application and application adapter reside. For information on how to add and configure machines, see Setting up Machines. You can also test the machine connection in Attunity Studio.
User Profiles: You must set up the users in your system. Setting up users is for security purposes. You can specify which users have access to various machines. For information on setting up user profiles, see User Profiles and Managing a User Profile in Attunity Studio.
Application adapters: This is what makes this solution the application access solution. To set up your adapter, you must:
- Prepare to use the adapter by Setting Up Adapters to Attunity Studio.
- Create the application adapter interactions. You do this by Configuring Application Adapters that you are using for your solution.
- Make sure that your adapter is responding correctly by carrying out the procedures in Testing Application Adapters.
- Define the adapter metadata. Adapter metadata defines the interactions for the application adapter and the schema of any input and output records used by the interactions. The metadata is stored as an application adapter definition in the SYS repository, on the machine where the adapter is defined. See Working with Application Adapter Metadata. All metadata is imported from relevant input files (such as COBOL copybooks) and maintained using the Attunity Studio Design perspective Metadata tab.
For more information on setting up application adapters, see Setting Up Adapters and the Adapters Reference section for the adapter you are using.

Note: Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not reflected in the Attunity metadata.

Supported APIs
AIS application adapters connect to supported applications using JCA, XML, COM, or .NET APIs (the COM and .NET APIs are available on Windows platforms).

The XML Client Interface: The XML application interface enables any application with an XML API to access applications and data. The XML application interface supports an XML-based protocol modeled after the JCA architecture. This protocol is both readable by people and programmatically easy to use. It works well for web-based, internet-wide use, particularly in conjunction with XML transformation engines. The XML application interface is directly available (callable) on all supported platforms. On other platforms, it is accessible via network protocols such as TCP/IP (sockets) and HTTP.
The JCA Client Interface: The JCA (J2EE Connectors Architecture) interface supports application servers based on J2EE (Java 2 Enterprise Edition) standards. It provides a robust, efficient, and reliable way to integrate Java applications with native applications.
The COM Client Interface: A COM component that enables access to application adapters and services using the XML protocol. The COM component provides seamless integration with Microsoft Windows products and technologies.
The .NET Client Interface: A .NET component, called NetACX, that enables access from any .NET-based application.

Application Access Flow


When you use adapters with AAF, you are connecting between applications. An application written with one of the Supported APIs connects to another application. Information is sent from the first application using an XML protocol called ACX (for more information, see ACX Protocol). An AIS application adapter that uses Attunity metadata is defined to talk to the second application. All required connections are sent directly in the ACX protocol; there is no need to use the query processor. The following diagram shows the Application Access solution flow.

Figure 15-1 Application Access Flow

Defining the Application Adapter


The central processing of application access is done by the application adapter. An adapter usually refers to the adapter binding, which is the same as a data source for SQL or data access solutions. The adapter type does the job of the data source driver in the application access world. An adapter type is compatible with a specific application, such as the CICS adapter. You can also create an adapter to work with an application developed internally. All adapters must have an adapter or metadata definition. The definition is made up of a list of interactions and a schema. The user must enter the location for an XML schema that includes the metadata for the application. The schema contains all of the record structures without the interactions. The following figure shows an example of the adapter definition. In this case, this is the adapter's metadata in XML format.

Figure 15-2 Example of an Adapter Definition
You define the adapter in Attunity Studio. Studio provides a graphical interface that allows you to define and import metadata definitions for adapters. For more information on how to define adapters in Attunity Studio, see Setting Up Adapters.
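Because Figure 15-2 is not reproduced here, the following is a minimal sketch of what such an adapter definition can look like, built from the elements described in Application Adapter Definition. The adapter, interaction, record, and field names are illustrative assumptions only:

<adapter name="myAdapter" transactionLevelSupport="0PC">
  <interaction name="getCustomer" mode="sync-send-receive"
               input="custInput" output="custOutput" />
  <schema name="myAdapter" version="1.0">
    <record name="custInput">
      <field name="custId" type="int" />
    </record>
    <record name="custOutput">
      <field name="custName" type="string" nativeType="string" length="64" />
    </record>
  </schema>
</adapter>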

ACX Protocol
ACX (Attunity Connect XML Protocol) is Attunity's internal XML protocol. ACX is modeled after JCA (Sun Java's connection protocol). ACX is the wire that connects the applications and APIs with the target application metadata definitions. ACX executes its tasks with connections between applications. There are two types of connections:

Transient connection: Transient connections are used with a single ACX request. A transient connection closes when an ACX request ends, or when the connection context changes (when a new verb is used to send a new request).
Persistent connection: Persistent connections are used for an ongoing dialog with the back-end adapter or for pooling scenarios. Persistent connections are closed manually or when timed out. Persistent ACX connections are logical and do not rely on an active network connection (such as a connected socket) to stay active. This lets you use disconnected protocols such as HTTP as transports for ACX.

The daemon keeps track of persistent connections with a unique connection ID. If you need to send a new ACX request at a later time, you can use a previous persistent connection by using its connection ID.
Note: Support for two-phase commits may require a physical connection.
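To give a feel for the protocol, the following is a rough sketch of an ACX exchange that opens a persistent connection, executes an interaction, and then disconnects. The element and attribute names, and the connection ID value, are assumptions for illustration only; see the XML Client Interface chapter for the exact request syntax:

<acx>
  <connect adapter="myAdapter" />
</acx>

<acx>
  <execute connectionId="1234" interaction="getCustomer">
    <custInput custId="42" />
  </execute>
</acx>

<acx>
  <disconnect connectionId="1234" />
</acx>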

The AIS XML Utility lets you make connections between applications using the ACX protocol. For more information see Using the Attunity XML Utility. For information on using ACX, see the Adapters Reference and the XML Client Interface chapter.

Transaction Support
Application adapters may support a two-phase commit capability to the extent that transactions are implemented in the application. A two-phase commit lets a database return to its pre-transaction state if an error occurs. This type of commit is used when more than one database or resource is updated in a single transaction. In a two-phase commit, either all or none of the databases are committed. In a transaction using a two-phase commit, all changes are stored temporarily by each database. A transaction monitor issues a pre-commit command that requires an answer from each database. When all databases answer with a positive response, the final commit is issued. Applications with two-phase commit support: Attunity Connect supports the PrepareCommit and Recover API calls. Applications that support two-phase commit can participate fully in a distributed transaction. The following adapters support two-phase commit:

CICS Application Adapter (z/OS Only)
IMS/TM Adapter (z/OS Only)


Note: To work with two-phase commit with either of these application adapters, RRS (Transaction Management and Recoverable Resource Manager Services) must be installed. See Transaction Support.

See the reference for the specific adapter for any two-phase commit considerations.

Generic and Custom Adapters


You can use generic or custom adapters in your application access solution. Custom adapters are usually written for a set of legacy functions. The schema for a custom adapter is fixed and usually hard-coded into the adapter. In most cases, backend interaction attributes are hard-coded into the adapter code with a special handler for each interaction. Generic adapters are written according to an API. The schema is provided at deployment and may vary according to the individual configurations. Backend interaction attributes are provided in the schema with a single handler for executing interactions. Examples of generic adapters include CICS Application Adapter (z/OS Only), IMS/TM Adapter (z/OS Only), Pathway Application Adapter (HP NonStop Only), and Legacy Plug Application Adapter.

Developing an Application Adapter in AIS


In addition to the generic adapters provided with AIS, you can develop your own adapter using the Adapter SDK or GAP (Generic APplication). This API defines the interfaces for adding new application adapters to AIS. The adapters framework is designed to enable application integration using XML and the J2EE Java Connector Architecture (JCA) resource manager provided with AIS.

MyApp: An Application Adapter Example


The following is an example of an adapter with one function, called Action1. The following is true about Action1:

It receives a string and an integer parameter.
It returns a string value of up to 512 characters and an integer value.

Given these inputs and outputs, the following figure shows the schema that is generated for this application.

Figure 15-3 Sample Schema for MyApp
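Because the figure is not reproduced here, the following is a minimal sketch of what the generated schema could contain, using the elements described in Application Adapter Definition. The record and field names are illustrative assumptions; the types follow the description of Action1 above:

<interaction name="Action1" mode="sync-send-receive"
             input="action1_input" output="action1_output" />
<record name="action1_input">
  <field name="inString" type="string" nativeType="string" length="64" />
  <field name="inInt" type="int" />
</record>
<record name="action1_output">
  <field name="outString" type="string" nativeType="string" length="512" />
  <field name="outInt" type="int" />
</record>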


The following is an example of the generated code for the MyApp application:
Figure 15-4 Code Example for MyApp

For more information on writing application adapters with the GAP SDK, see the Attunity Connect Developer SDK.


16
Setting Up Adapters
This section contains the following topics:

Setting up Adapters Overview
Working with Adapters

Setting up Adapters Overview


Adapters are used when you implement application access. AIS provides a number of adapters to connect to applications. For more information, see Implementing an Application Access Solution. For a reference on the available adapters, see Adapters Reference.

Working with Adapters


Adapters are added to bindings in Attunity Studio. Adapters are used to access applications that you are integrating into your system. You use adapters when Implementing an Application Access Solution in Attunity Connect. The following sections describe how to set up adapters in your system:

Adding Application Adapters
Configuring Application Adapters
Testing Application Adapters

Adding Application Adapters


To use an application adapter, you need to first add it to the system and configure the adapter properties. The adapter is defined in the Design perspective, Configuration view.

To define the connection
1. Open Attunity Studio.
2. In the Design Perspective Configuration View, expand the Machines folder and then expand the machine where you want to add the adapter. If you need to add a machine to your Attunity Studio setup, see Setting up Machines.
3. Expand the Bindings folder.
4. Expand the binding where you want to add the adapter.
5. Right-click the Adapters folder and select New Adapter. The New Adapter wizard opens.

Figure 16-1 New Adapter Screen

6. Enter the following information in this screen:

Name: Enter a name to identify the adapter.
Type: Select the adapter type that you want to use from the list. The available adapters are described in the adapter reference section.
Create event queue for the adapter: Select this check box if you want to associate an Event Queue with this adapter. For information on event queues, see Events.

Note: You cannot use the word Event as part of the adapter name.

7. Click Finish.
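When you click Finish, the adapter is added to the selected binding. As a rough sketch only (the exact binding syntax is not shown here, and the name and type values are hypothetical), the resulting binding entry is of the following general form:

<adapter name="myCicsAdapter" type="cics" />

As noted in The adapter Element, if the name of the adapter definition differs from the adapter name in the binding, the binding entry must also include a definition element set to the definition name.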

Configuring Application Adapters


This section describes how to use Attunity Studio to set up an adapter configuration.

To configure the adapter
1. Open Attunity Studio.
2. Expand the Machines folder.
3. Expand the Bindings folder, and then expand the binding with the adapter you are working with.
4. Expand the Adapter folder.
5. Right-click the adapter that you want to work with and select Open. The adapter configuration editor opens in the editor, which displays the properties for the adapter.


6. Configure the adapter parameters as required. The configuration properties displayed depend on the type of adapter you are working with. For an explanation of these properties, see the Adapters Reference or the Non-Application Adapters Reference to find the documentation for your adapter.

Testing Application Adapters


You can run a test procedure from Attunity Studio to determine whether the adapter is connected. This procedure pings the server and returns the result in a Test screen.

To test an adapter
1. Right-click the adapter and select Test.
2. Select the active workspace from the drop-down list. The test procedure pings the server with the selected workspace when the test is executed.
3. Click Next. The next page in the wizard opens with the results. A successful result indicates that the connection is active. If there is a problem, an error message that describes the problem is displayed.



17
Application Adapter Definition
This section includes the following topics:

Overview
The adapter Element
The interaction Element
The schema Element
The enumeration Element
The record Element
The variant record Element
The field Element

Overview
An application adapter definition includes the following information:

General Adapter Properties


This part defines various simple adapter properties such as name, type, description, time-out values, etc.

Interactions List
This part lists the Interactions offered by the adapter. Information items include the interaction name, its description and input and output record names.

Input and Output Record Schema


This part details the structure of all input and output records used by the adapter. The following diagram shows the schema of the adapter XML definition document:


Figure 17-1 Adapter Schema

The adapter Element


The adapter element is the root element of the adapter definition XML document.

The attributes of the adapter element define simple adapter properties (see below).
The interaction elements under the adapter describe particular interactions.
The schema element under the adapter provides the schema of all the records used within the adapter.

The following table summarizes the attributes of the adapter element:


Table 17-1 adapter Element Attributes

Attribute | Type | Default | Description
authenticationMechanism | enum | basic password | The type of authentication implemented by the adapter, as follows:
- none: The adapter does not handle authentication.
- basic password: The adapter implements basic username-password authentication.
Note: Kerberos authentication will be supported in future releases.
connect | string | | Adapter-specific connect string.
connectionPoolingSize | int | | The number of connections that can be held in the connections pool simultaneously.
description | string | | A description of the adapter.
maxActiveConnections | int | | The maximum number of simultaneous connections an adapter may take (per process).
maxIdleTimeout | int | 600 | The maximum time, in seconds, that an active connection can stay idle. After that time, the connection is soft-closed and placed in the connections pool or simply destroyed (depending on the pooling settings).
maxRequestSize | int | | The maximum size in bytes that an XML ACX request or reply may span. Larger messages are rejected with an error.
name | string | | The name of the adapter definition. (This name is normally the name of the adapter specified in the binding configuration. If this name differs from the name in the binding configuration, the binding entry must include a definition element set to the name specified here.)
operatingSystem | string | | The operating system the application adapter runs under.
poolingTimeout | int | 120 | The maximum amount of time (in seconds) that a connection is kept in the connections pool before it is destroyed.
schemaName | string | | The name of the schema used to define the adapter.
transactionLevelSupport | enum | 1PC | The level of transaction support. Your options are:
- 0PC: No transaction support
- 1PC: Simple (single phase) transactions
- 2PC: Distributed (two phase) transactions
type | string | | The name of the adapter executable.

Example 17-1 The adapter Element

<adapter name="calc"
         description="Attunity Connect Calc Schema"
         transactionLevelSupport="0PC"
         authenticationMechanism="basic-password"
         maxActiveConnections="0"
         maxIdleTimeout="600"
         maxRequestSize="32000">

The interaction Element


The interaction element describes a single adapter interaction. The interaction element is a child-element of the adapter element. The following table summarizes the attributes of the interaction element:
Table 17-2 interaction Element Attributes

Attribute | Type | Default | Description
description | string | | A description of the interaction.
input | string | | The name of the input record structure.
mode | enum | | The interaction mode. Available modes are:
- sync-send-receive: The interaction sends a request and expects to receive a response.
- sync-send: The interaction sends a request and does not expect to receive a response.
- sync-receive: The interaction expects to receive a response.
- async-send-receive: The interaction sends a request and expects to receive a response that is divorced from the current interaction.
- async-send: The interaction sends a request that is divorced from the current interaction. This mode is used with events, to identify an event request.
name | string | | The name of the interaction.
output | string | | The name of the output record structure.

Example 17-2 The interaction Element

<interaction name="add" description="Add 2 numbers"
             mode="sync-send-receive" input="binput" output="output" />
<interaction name="display" description="Display msg in output stream"
             mode="sync-send-receive" input="inpmsg" output="outmsg" />

The schema Element


The schema element describes the structures used in the interactions. The schema element is a child-element of the adapter element. Only one schema is allowed per adapter. The following table summarizes the attributes of the schema element:
Table 17-3 schema Element Attributes

Attribute | Type | Description
header | string | The include file generated by NAV_UTIL PROTOGEN (described in Attunity Developer SDK).
initialization header | string |
name | string | The name of the adapter.
noAlignment | boolean | Determines whether buffers are aligned or not.
version | string | The schema version.

Example 17-3 The schema Element

<schema name="calc" version="1.0" header="calcdefs.h">


The enumeration Element


The enumeration element defines an enumeration type for use in interaction definitions.

The record Element


The record element defines a grouping of fields. The following table summarizes the attributes of the record element:
Table 17-4 record Element Attributes

Attribute | Type | Description
EntryRef | string | (Used with the COM adapter) The name of the method to be invoked within that object.
IID | string | (Used with the COM adapter) The UUID of a user-defined type. Used for user-defined data types only.
libIID | string | (Used with the COM adapter) The UUID of the library in which the user-defined data type is defined. Used for user-defined data types only.
noAlignment | boolean | Determines whether buffers are aligned or not.
name | string | The name of the record.
ObjectRef | string | (Used with the COM adapter) Either a ProgID or a UUID of the COM object that this input record refers to.
ParamCount | int | (Used with the COM adapter) The number of parameters passed to that method.
program (z/OS only) | string | (Used with the CICS adapter) The name of the program to be executed in a CICS transaction.
transaction (z/OS only) | string | (Used with the IMS/TM adapter) The IMS/TM transaction name.
transid (z/OS only) | string | (Used with the CICS adapter) The CICS transaction ID where the program will run. The TRANSID must be EXCI or a copy of this transaction.

Example 17-4 The record Element

<record name="binput">
  <field name="p1" type="int" />
  <field name="p2" type="int" />
</record>
<record name="output">
  <field name="result" type="int" />
</record>
<record name="inpmsg">
  <field name="m" type="string" nativeType="string" length="512" />
</record>
<record name="outmsg">
  <field name="m" type="string" nativeType="string" length="512" />
</record>


Defining Hierarchies
The variant record element can be used to define a hierarchical structure. The hierarchical definition includes a record definition for the child. The parent record includes a field record with the type name that is used to define the child.
Example 17-5 Variant Record

<record name="parent">
  <field name="f1" type="child" />
  <field name="f2" type="int" />
</record>
<record name="child">
  <field name="c1" type="int" />
  <field name="c2" type="string" nativeType="string" length="20" />
  <field name="c3" type="string" nativeType="string" length="20" />
</record>

The XML used to access the adapter must use the same structure as specified in the interaction definition, as illustrated in the sketch below.
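For illustration, an input record for an interaction that uses the parent record from Example 17-5 might be serialized along the following lines. This is a sketch only; the exact serialization of records and fields follows the interaction definition and the ACX protocol described in the XML Client Interface chapter:

<parent>
  <f1>
    <c1>1</c1>
    <c2>abc</c2>
    <c3>def</c3>
  </f1>
  <f2>42</f2>
</parent>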

The variant record Element


Variants are similar to redefine constructs in COBOL, and to union in C. The basic concept is that the same physical area in the buffer is mapped several times. The mappings can be of:

Different nuances of the same data.
Different usage of the same physical area in the buffer.

This section describes the common use cases of variants and how they are represented in the variant syntax. There are two types of variant:

Variant without Selector
Variant with Selector

Variant without Selector


Variants without selectors are used to define different cases of the variants and represent different ways of looking at the same data. At this time, only the first case of a variant without a selector appears in the XML. It is therefore highly recommended to remove all unnecessary cases.
Example 17-6 Variant without a Selector

<record name="VAR1">
  <field name="VAR_0" type="VAR1__VAR_0" />
</record>
<variant name="VAR1__VAR_0">
  <field name="UNNAMED_CASE_1" type="VAR1__VAR_0__UNNAMED_CASE_1" />
  <field name="PARTCD" type="VAR1__VAR_0__PARTCD" />
</variant>
<record name="VAR1__VAR_0__UNNAMED_CASE_1">
  <field name="PARTNUM" type="string" nativeType="string" size="10" />
</record>
<record name="VAR1__VAR_0__PARTCD">
  <field name="DEPTCODE" type="string" nativeType="string" size="2" />
  <field name="SUPPLYCODE" type="string" nativeType="string" size="3" />
  <field name="PARTCODE" type="string" nativeType="string" size="5" />
</record>

Variant with Selector


Different cases of the variant represent different ways in which to use the physical area in the buffer. For every record instance there is only one case that is valid; the others are irrelevant. Additional fields in the buffer help determine which variant case is valid for the current record.

Example 17-7 Variant with a Selector

<record name="ORDER">
  <field name="RECTYPE" type="string" nativeType="string" size="1" />
  <field name="VAR_1" type="ORDER__VAR_1" />
</record>
<variant name="ORDER__VAR_1" selector="RECTYPE">
  <field name="ORDER_HEADER" type="ORDER__VAR_1__ORDER_HEADER" case="H" />
  <field name="ORDER_DETAILS" type="ORDER__VAR_1__ORDER_DETAILS" case="D" />
</variant>
<record name="ORDER__VAR_1__ORDER_HEADER">
  <field name="ORDER_DATE" type="string" nativeType="numstr_u" size="8" />
  <field name="CUST_ID" type="string" nativeType="numstr_u" size="9" />
</record>
<record name="ORDER__VAR_1__ORDER_DETAILS">
  <field name="PART_NO" type="string" nativeType="numstr_u" size="9" />
  <field name="QUANTITY" type="int" nativeType="uint4" size="4" />
</record>

The field Element


The field element defines a single data item within a record or a variant. The following table summarizes the attributes of the field element:

Table 17-5 field Element Attributes

Attribute | Type | Description
array | array | The array field that is made up of other fields.


COMtype | enum | (Used with the COM adapter) Specifies the field's data type as recognized by COM, using explicit COM enumeration values (for details, see COM Adapter (Windows Only)).
counter | int | Runtime value holding the actual number of occurrences in the array. A field can be specified as a counter field.
default | string | The default value for the field. The default value for an integer is zero (0) and for a string NULL. Specifying a default value means that the field can be omitted from the input XML. Note: If a field isn't nullable, when using the database adapter and a default value is not supplied, an error occurs.
filter | | Filtering of extraneous, unwanted metadata. This attribute is for internal use only.
length | int | The size of the field including a null terminator, when the data type supports null termination (such as the cstring data type).
mechanism | string | (Used with the LegacyPlug adapter) The method by which the field is passed or received by the procedure (either byValue or byReference). When a parameter is used for both input and output, the mechanism must be the same for both the input and the output. For outer-level (non-nested) parameters, structure parameters (for the structure itself, and not structure members), and variant parameters, the default value is byReference.
name | string | The name of the field.
nativeType | string | The Attunity Connect data type for the field. Refer to Managing Metadata for a list of all supported data types. Note: When the type value is string, the nativeType value must also be specified as string.
offset | int | An absolute offset for the field in a record.
paramnum | int | (Used with the LegacyPlug adapter) The procedure argument number. 0 indicates the value is a return value. 1 indicates the value is the first argument, and so on. If paramNum is specified at the record level, it cannot be specified for any of the record members (at the field level).
precision | int | The float data type precision. Used in conjunction with scale (see below).
private | boolean | The value is hidden in the response.
reference | boolean | Used with array (see above), to identify a pointer.
required | boolean | A value is mandatory for this field.
scale | int | The float data type scale. Used in conjunction with precision (see above).
size | int | The size of the field.
type | string | The data type of the field. The following are valid: Binary, Boolean, Byte, Date, Double, Enum, Float, Int, Long, Numeric[(p[,s])], Short, String (when the type value is string, the nativeType value must also be specified as string), Time, Timestamp.
usage | string | (Used with the COM adapter) Explains what the COM adapter is about to do with this field:
- InstanceTag: Names an object instance.
- Property: Handled as a property.
- Parameter: The field value should be passed as a parameter to/from a method.
- RetVal: The field will hold a method's return value.
value | string | Internal code representing the field, used in the C program.

Example 17-8 field Element

<field name="m" type="string" nativeType="string" length="512" private="true" />


Part III
Attunity Stream
This part contains the following topics:

What is the Attunity Stream CDC Solution
Implementing a Change Data Capture Solution
SQL-Based CDC Methodologies
Creating a CDC with the Solution Perspective

18
What is the Attunity Stream CDC Solution
This section includes the following topics:

CDC Solution Overview
The Attunity Stream CDC Architecture
What Can Be Captured?

CDC Solution Overview


Attunity Stream captures and delivers the changes made to enterprise data sources in real-time. This enables you to move mainframe and enterprise operational data in real-time to data warehouses and data marts, improving the efficiency of ETL processes, synchronizing data sources, and enabling event-driven business activity monitoring and processing. Attunity Stream provides agents that non-intrusively monitor and capture changes to mainframe and enterprise data sources. Changes are delivered in real-time or consumed as required using standard interfaces. The Attunity Stream CDC solution provides the following capabilities:

Capture changes to data in real-time or near-real-time for applications that demand zero latency and require the most up-to-date data. Real-time data capture guarantees that a change event is immediately available at the consumer. Near-real-time data capture involves a configurable delay before a change is available at the consumer. Using a near-real-time configuration, when there is significant capture activity, events are reported immediately. However, after the system has been idle, it may take a few seconds for the events to start flowing again.
Enable consumers of changed data to receive changes quickly, either by asking for the changes in high frequencies (such as every few seconds), or by sending them the changes as soon as they are identified. The consumer application periodically requests changes, receiving each time a batch of records that represent all the changes that were captured since the last request cycle. Change delivery requests can be done in low or high frequencies, for example, every 15 seconds or a set number of times a day. The extracted changes are exposed to enable the consumer application to seamlessly access the change records using standard interfaces like ODBC and JDBC or XML and JCA.

Using the Attunity Stream CDC solution enables ETL (extract, transform, and load) processes to run without bringing the site computer systems down. CDC enables the movement of only changes to data while the operational systems are running, without the need for a downtime window. The CDC architecture consists of the following components:

CDC agents, which are located on the same computer as the changes to be captured. Each agent is customized for the specific data source on the specific platform. A CDC agent provides access to the journal (or logstream), to read the journal for specific table changes. The agent maintains the last position read in the journal (the stream position or context) and starts at this point the next time it polls the journal. The context is stored in the repository where the agent adapter definition is stored. The adapter definition includes a single primary interaction, which is used to access the appropriate journal and includes the list of tables to monitor for changes.

Note: In addition to this interaction, secondary interactions are defined during runtime, based on the table metadata of each table specified as part of the change data capture solution.

Depending on the agent, transactions are supported. If transactions are not used, then auto-commit is used.

A CDC staging area: Changes are stored in a staging area, enabling a single scan of a journal to extract the details of changes to more than one table and to also enable filtering of the changes. A committed change filter is currently available and a redundant change filter (hot-spot optimization) is planned. Both of these filters are described in Tracking Changes - Auditing. The staging area is described in more detail in The Staging Area.
CDC data sources, to enable access to the changes using industry standards such as ODBC and JDBC. A CDC data source uses Attunity metadata and holds metadata for tables marked for change data capture. The CDC data source points to either the CDC agent or the staging area to enable getting the changes, as events, using ODBC or JDBC. Additionally, the CDC data source converts the data types in the table with changes to be captured to standard ODBC data types.
An audit of the changes that are captured. The audit is stored on disk; changes captured by the CDC agent and changes written to the staging area are both recorded.

Once a CDC solution has been defined, the changes can be pushed to the consumer, using a standard Attunity Connect event router. Setting up and using an event router is described in Attunity Connect Reference.

The Attunity Stream CDC Architecture


The Attunity Stream CDC solution operates with or without a staging area.


The CDC architecture is shown in the following figures.

Figure 18-1 Change Data Capture when a Staging Area is used

When a staging area is used, the change data capture agent reads the journal and channels requested changes from the journal to the staging area. The change data capture agent resides on the same machine as the data source with changed data to be captured. The staging area can reside on any machine. If the consumer application is SQL-based, a change data capture data source is available to access the relevant changes written to the staging area. The change data capture data source can reside on any machine. A context is stored both for the staging area and the agent, which marks the last point where changes were captured. For details about the context, refer to The Staging Area.
Figure 18-2 Change Data Capture when a Staging Area is not used

When a staging area is not used, the change data capture agent reads the journal and channels requested changes from the journal either to a JCA or XML-based consumer application or, if the consumer application is SQL-based, to a change data capture data source that is made available to access the relevant changes. The change data capture agent resides on the same machine as the data source with changed data to be captured. The change data capture data source can reside on any machine. A context is stored for the agent. For details about the context, refer to The Staging Area.

The Staging Area


The staging area is an area used by Attunity Stream to store captured data from a journal. When capturing changes from more than one table without a staging area, the journal is scanned by the agent once for each table. When a staging area is used, the journal is scanned once, and changes for every required table read during that scan are passed to the staging area, where they are stored. Thus, the journal is scanned once each time it is polled. Furthermore, once the changes have been written to the staging area, processing of these changes is performed independently of the journal. Another benefit of using the staging area is when transactions are used. The changed data is not written to the change queue until the transaction is committed. Thus, if the transaction fails, there is no overhead of having to back out any processing done with the steps in the failed transaction. The staging area can be on any Windows platform running Attunity server and not necessarily on the same server as the CDC agent. Once the information has been extracted from the journal and written to the staging area, processing of changes is performed on the staging area only. Thus, the staging area should be set up to consider the network configuration and where the consumer application runs. The staging area maintains the last position read by the consumer application (the staging area context) and starts at this point the next time a request from the consumer application is received. The context is stored in the repository where the staging area is maintained. The staging area is indexed. Thus, access to the staging area for a specific stream is quick. Use of the staging area is recommended in the following situations:

When changes to data in more than one table need to be captured.
When transactions are used. The changed data is not written to the change queue until the transaction is committed. Thus, if the transaction fails, there is no overhead of having to back out any processing done with the steps in the failed transaction.
When repositioning the stream position (resetting the context) is planned to be performed often.

The staging area is cleared by default every 48 hours. All events that have been in the staging area for more than 48 hours are deleted.

Handling Before and After Images


Where applicable, when setting up a CDC solution, you can specify if you want to save the before image information of changed data that is recorded in the journal as well as the after image (undo and redo records for DB2 on the z/OS platform) information. The default is that only the after images are captured. The following table shows what is captured if before images are requested:


Table 18-1 What is Captured

Operation | Before Image | After Image
INSERT | No | Yes
UPDATE | Yes | Yes
DELETE | Yes | No

To capture before images, the journal must be set up to include before images, as described for each type of journal.

Tracking Changes - Auditing


When using a CDC solution you can produce an audit trail of the captured data. The following audit levels are available:

None: No auditing is performed.
Summary (Statistics): The total number of records retrieved from the change queue, and system and error messages, are reported. In addition, header information about each record captured, such as the type of operation and the table name, is reported.

Note: Details of the header information are provided in the CDC agent-specific chapters.

Detailed: The total number of changes retrieved is reported, along with system and error messages. In addition, header information and record information about each record captured is reported.

The audit entries can be viewed in the Attunity Studio Runtime Manager perspective, in the Event monitor. Entries include a direction, as follows:

Entries from the CDC agent: The audit entries show what data changes were extracted from the agent by the consumer application.
Entries from the staging area: The audit entries show what data changes were extracted from the staging area by the consumer application and what entries were written to the staging area by the agent.

Security Considerations
In general, Attunity Stream relies on the security mechanisms implemented at a site. For example, on a z/OS system using RACF to manage security, the security rules implemented in RACF are also applied to Attunity Stream CDC. Additionally, you can specify as part of the CDC setup who can access the CDC agent and staging area to extract changed data and who can write changed data to the staging area.

What Can Be Captured?


Attunity Stream includes the following data source CDC agents:

DB2 Journal CDC on z/OS systems: The change data capture agent monitors both the archived and active DB2 journals and captures changes made to specific tables, which are written to these journals. Since transaction information is also stored, the committed change filter can be used to ensure that only committed changes are captured. For details of the committed change filter, refer to Tracking Changes - Auditing.
DB2 Journal CDC on OS/400 platforms: The change data capture agent monitors a DB2 database journal and captures changes made to specific tables, which are written to this journal. Since transaction information is also stored, the committed change filter can be used to ensure that only committed changes are captured. For details of the committed change filter, refer to Tracking Changes - Auditing.
DISAM on Windows platforms: The change capture agent monitors a journal for changes in DISAM tables and captures changes made to specific tables, which are written to this journal. This solution can only be used when the DISAM data is updated using Attunity Connect and not when updated directly by another program.
IMS on z/OS systems: The change capture agent monitors a system log for changes in IMS/DB tables and captures changes made to specific tables, which are written to the logstream.
Oracle on Windows and UNIX platforms: The change data capture agent monitors Oracle REDO log files for changes in Oracle tables from Oracle version 9iR2. The CDC solution polls the Oracle LogMiner with its archive mode to capture changes. The staging area must be used when capturing Oracle data changes.
VSAM-Batch on z/OS systems: The change data capture agent monitors a system log for changes in VSAM tables. Transactions are handled at the program level. If a program fails, a decision to roll back must be made independently of the VSAM-batch change data capture agent.
VSAM-CICS on z/OS systems: The change data capture agent monitors a CICS logstream for changes in VSAM tables. When transaction information is not available, rolled-back transactions appear as a set of backed-out changes, applied to the data.
Query-based CDC on all Attunity server platforms: A generic change capture agent that enables the capture of changes in any of the data sources supported by Attunity Connect. The query-based agent only captures changes based on changes to a specific field in the table. An initial value is specified for the field as the starting change data capture context. The query-based agent does not include the ability to specify a staging area, nor to reset the context.


19
Implementing a Change Data Capture Solution
This section contains the following topics:

Overview
CDC System Architecture
Setting up AIS to Create a Change Data Capture
CDC Adapter Definition
CDC Streams
Transaction Support
Troubleshooting

Overview
The CDC solution lets you identify changes to data. This is referred to as consuming changes. The consumed changes are reported in a special log file. With CDC, the data is consumed in real time, that is, at exactly the time the INSERT, UPDATE, or DELETE operations occur in the source tables. Changes are stored in change tables. Attunity Stream uses CDC (Change Data Capture) agents to consume changes to data, which they display when needed. The following agents are supported:

VSAM Under CICS CDC (on z/OS)
VSAM Batch CDC (z/OS Platforms)
IMS/DB CDC on z/OS Platforms
Adabas CDC on z/OS Platforms
Adabas CDC on UNIX Platforms
Adabas CDC for OpenVMS
DB2 CDC (z/OS)
DB2 CDC (OS/400 Platforms)
Enscribe CDC (HP NonStop Platforms)
Microsoft SQL Server CDC
Oracle CDC (on UNIX and Windows Platforms)
Query-Based CDC Agent (for work tables)
SQL/MP CDC on HP NonStop

Setting up AIS to Create a Change Data Capture


You set up a change data capture in Attunity Studio. This provides a single, integrated GUI where you can easily configure a Change Data Capture. The workflow required for setting up a CDC solution is:
1. Install the System Components in the proper locations. See Installing System Components for an explanation of what to install and the locations for installing the components.
2. Apply the license. Use Attunity Studio to apply the license to the machines defined in your system.
3. Configure the Change Data Capture solution. Use the Solution perspective in Attunity Studio to add a new CDC solution and define its configuration properties. This automatically configures all the necessary components for the Change Data Capture, including the data source driver and CDC agent. See Creating a CDC with the Solution Perspective for more information.

See Configuring the System for Change Data Captures (Using Studio) for additional information.

Installing System Components


To use AIS for a Change Data Capture solution, install the following:

AIS on the backend: The backend of the system is where your data is stored. In a CDC solution, you consume changes from data sources, such as Oracle, to create a log with the change data information. This is where your CDC agent resides. You must install the full (thick) version of AIS on the backend. For information on how to install AIS, see the Installation Guide for the platform you are working with.

Data Source: The data source that you are working with is installed with AIS. For more information, see the installation guide for the platform you are working on.

Attunity Studio: Attunity Studio can be installed on any Windows computer in your system. Attunity Studio provides a graphic interface for configuring the components in your system. For information on installing Attunity Studio, see the installation guide for the platform you are working on.

See also CDC System Architecture.

Configuring the System for Change Data Captures (Using Studio)


You must configure the following system components to set up your system for change data capture:

Machines: You must configure the machines used in the system. Make sure to configure the machine where your backend application and application adapter reside. For information on how to add and configure machines, see Setting up Machines. You can also test the machine connection in Attunity Studio.

User Profiles: You must set up the users in your system. Setting up users is for security purposes. You can specify which users have access to various machines. For information on setting up user profiles, see User Profiles and Managing a User Profile in Attunity Studio.


CDC Agents: You need to set up the CDC agent that you want to use. Attunity Studio lets you set up a CDC agent by Creating a CDC with the Solution Perspective. This lets you use Attunity Studio to manually set up your own CDC.

Data Sources: If you create the CDC agent manually, you need to include the data source that your CDC agent supports. To set up your data source, you must:
- Prepare to use the data source by Adding Data Sources to Attunity Studio.
- Make sure that your data source is responding correctly by carrying out the procedures in Testing a Data Source.
- For non-relational data sources, define the metadata. Data source metadata defines the table structure for your data source. All metadata is imported from relevant input files (such as COBOL copybooks) and maintained using the Attunity Studio Design perspective Metadata tab.

For more information on setting up data sources, see Data Sources and the data source Reference section for the adapter you are using.

Note:

Attunity metadata is independent of its origin. Therefore, any changes made to the source metadata (for example, the COBOL copybook) are not reflected in the Attunity metadata.

Generated CDC Components


The following are generated when you set up a CDC using the Solution perspective.

A new binding for the change data capture components on the machine where the data source with changes to be captured resides. CDC components are defined under this new binding, which is named <name of project>_ag if the agent and the staging area are on different machines. If the agent and staging area are on the same machine, the binding is called <name of project>_router.

A CDC agent, which is defined as an adapter under the CDC binding and named <name of project>_ag.

A CDC data source, which is on the same machine as the CDC agent. This data source is called <name of project>.

A staging area. This includes a binding and a data source. Both are called <name of project>_sa. The staging area is defined on the machine where the CDC changes are stored.

A CDC router. This includes a binding, a data source, and an adapter. The binding and the adapter are called <name of project>_router and the data source is called <name of project>_sa. The router data source is a copy of the staging area's data source.

For each of the bindings defined in the CDC solution, a workspace with the same name is defined.

Handling Arrays Defined in the Source Data


The CDC solution stores changes in XML and does not support capturing virtual (array) tables directly. To work with these tables, they need to be flattened. By default, the data source used in Attunity Stream is defined with the arrays flattened.


When using the CDC solution, the built-in flattening mechanism enables the following:

A view for the parent table records in the non-relational structure. This view does not include reference to the related child-arrays.
A view for each child array that includes a unique key from the parent table and an array row number. This enables adding further columns from the parent table to the virtual array view.

CDC System Architecture


The CDC solution system components include software that is installed in your system. These components are described in Installing System Components. In addition, various files are created, such as the change data tables that keep track of the consumed changes. The CDC solution is usually deployed on three platforms. A platform can be one or more servers. The following figure shows the CDC system architecture.
Figure 19-1 CDC Solution System Components

This figure shows the three platforms used to deploy a CDC solution. These platforms are:

Database Platform: The database platform is where the database and the agent run. This platform can be any platform supported by AIS. The database platform is, in many cases, also a legacy application platform. This means that processing overhead on this platform should be minimized.

Staging Area Platform: The staging area platform is where the change tables are hosted. This platform also hosts the SQL-based change router, which enters data into the change tables. This platform can be any Windows or UNIX platform.


ETL Platform: The ETL platform is where the ETL tool runs. This platform can be any platform supported by AIS, as well as any platform that can run any of the AIS thin clients (such as any standard Java platform). The ETL platform may be the same as the staging area platform, depending on the ETL tool resource requirements. If the platform is strong enough, placing both the staging area and the ETL tool on the same platform can reduce network utilization and improve throughput.

CDC Adapter Definition


The CDC adapter definition is returned in a standard XML format. This format is based on the Attunity ACX Protocol. The CDC adapter definition is composed of two parts: the interactions, which are static and stored in the Attunity Studio repository, and the schema, which is dynamic and created at runtime. The following figure shows the parts of the CDC definition metadata.
Figure 19-2 CDC Definition

The figure above shows the adapter metadata as viewed in Attunity Studio. This shows the interactions only. You can view the schema by Using the Attunity XML Utility. The following figure shows how to configure the XML utility to return the full definition including the schema.
Figure 19-3 XML Utility Metadata Button


CDC Agent Metadata Definition Description


This section describes the elements in the CDC agent metadata. This metadata has two elements:

Interactions
Schema

Interactions
The following are the outbound interactions in the CDC agent metadata:

getEvents: Gets the CDC events from the agent.
setStreamPosition: Moves the stream position back to a specified position to re-read events from that point.

The following are the inbound interactions in the CDC agent metadata. These interactions are not callable by a client application:

eventStream
table-name
Note:

The schema records of these interactions are important for client applications because they describe the metadata of the change events.

Schema
The following are the schema elements in the CDC agent metadata:

header: This accompanies all events. It is the same for a specific CDC agent. A list of header fields is included in the CDC agent reference sections. For more information, see the Overview for a list of the CDC agents.
eventStream: An implicit union of all CDC event types. This includes all tables plus transaction events.
table-name: A record for each table, including all its fields.

CDC Streams
A Change Data Capture tracks or consumes changes in a data source using streams. A stream is a point in a database's change log that is used as a point of reference by the CDC agent. Each time you start a Change Data Capture, a new stream with a unique name is automatically created. You can have as many concurrent streams as necessary. The place where the changes are currently being consumed is called the stream position. The stream position is automatically saved in the database. You can filter a stream to consume changes for selected tables in the database or for all of the tables. The figure below shows the CDC stream.


Figure 19-4 CDC Stream

The following are the parts of the CDC stream in the figure above.

Data Source: The native data source where you are consuming changes, for example, Adabas.
Change Logger: The native tool that logs changes for a data source.
Change Source: The data source log. It contains various information depending on the data source and the log configuration. This part has a different name depending on the data source you are working with. For example, for Adabas it is called the PLOG, for DB2 it is called a journal, and for Oracle it is called the REDO logs. For more information, see the Overview for a list of the CDC agents.
Change Capture Agent: The Attunity CDC agent. The agent consumes the changes from the Change Source and outputs them in a standard XML format.

Transaction Support
CDC agents that support transactions can eliminate uncommitted changes from the stream. Most CDC agents support transactions. The reference section for each CDC agent indicates whether the agent supports transactions. The following CDC agents support transactions:

DB2 and DB2/400
VSAM Batch
Adabas (all Adabas agents)
Oracle
Enscribe
SQL/MP

When a CDC agent handles a transaction, it reads the change record and then:

- If the record is from a new transaction, it creates an in-memory transaction record cache for the Txn-ID.
- If the record is a DML record, it is added to the appropriate transaction record cache by the Txn-ID.
- If there was a rollback, it deletes the Txn-ID transaction cache.
- If there was a commit, it distributes the Txn-ID transaction records to the various change files.
- If the transaction is idle for too long, it is timed out. This is written to a special Txn-ID.txe file.

For information on troubleshooting, see Troubleshooting in AIS.


Troubleshooting
If changes are not written to the journal or other native Change Source, make sure that the journal and system (such as CICS) file definitions match (for example, the same logstream number in CICS is used in the journal).

To check that changes are written to the journal:
1. From the Windows Start menu, point to Programs, Attunity and then Server Utilities, then click XML Utility. The XML Utility is displayed.
2. Select the relevant server, workspace and CDC agent adapter.
3. Click Events to open the Events listener.
4. Click Start Events in the Events listener. You can specify the event name and qualify it with an application name to provide a unique starting point for the change events.
5. Update a table with data to be captured (for example, using NAV_UTIL execute). The captured event is displayed in the XML Utility Events listener window.

If changes are written to the journal, set the Attunity server environment properties driverTrace and acxTrace in the debug section for the CDC binding. Make changes to the data source and analyze the resulting standard Attunity server logs.


20
SQL-Based CDC Methodologies
This section contains the following topics:

Overview
Configuration Parameters
SQL Access to Change Events
Reading the Change Tables
Referential Integrity Considerations
Monitoring the Change Data Capture
Error Handling
subscribeAgentLog Configuration Properties
Performance Considerations
Capacity Planning
Applying Metadata Changes
Migration from XML-based CDC

Overview
When using ETL (Extract, Transform and Load) tools such as Informatica Power Center, Ascential Data Stage, BO Data Integrator, Microsoft SSIS and others, SQL-based CDC can be used to retrieve incremental source table changes and apply them to the target tables. An SQL-based Change Event Router is used to read changes off a CDC agent and write these events into multiple change files (implemented as DISAM files).


The SQL-based Change Router components are displayed in the following figure:
Figure 20-1 SQL-based Change Router Components

The system has three platforms, as follows:

Database platform: This is where the database and agent run. It can be any platform supported by AIS. The database platform is, in many cases, also a legacy application platform, which means that processing overhead on this platform should be minimized.

Staging area platform: This is where the Change Tables (Change Files) are hosted. This platform also hosts the SQL-based Change Router, which enters data into the change tables. It can be any Windows or UNIX platform.

ETL platform: This is where the ETL tool runs. It can be any platform supported by AIS, as well as any platform that can run any of the AIS thin clients (e.g., any standard Java platform). The ETL platform may be the same as the staging area platform, depending on the ETL tool resource requirements. In general, if the platform is strong enough, placing both the staging area and the ETL tool on the same platform can reduce network utilization and improve throughput.

Components
This section describes the various system components.

CDC Agent
The task of the CDC agent is to read the database logs (or other change source) and to transform them into a format that is easy to process. The agent does not initiate any process on its own; it serves requests for change events from various clients. In the case of SQL-based CDC, the agent's client is the SQL-based Change Router. The agent represents the changes as a stream of change events, with a stream position (also known as context) pointing to the next change event to be retrieved.


On the first call in a specific application context, the agent uses the initial stream position that is specified in the agent configuration. The recommended setting for the initial stream position is Now, which indicates that the first retrieval by an application determines the time after which changes are captured.

SQL-based Change Router


The SQL-based change router is a server that reads change events off a CDC agent on the database platform and writes these events into multiple change files (DISAM files) on the staging platform. The SQL-based change router's main characteristics are:

- Uses memory aggressively to cache change events before they are committed. When dealing with very large transactions, the change router writes uncommitted transaction data to a temporary sequential file that is read back and processed if a Commit is given, or discarded on Rollback.
- Maintains a stream position against the agent to keep track of processed change events.

Staging Area Server


This is a standard AIS server which provides SQL access to the DISAM Change Table (Change File) set that makes up the Staging Area. The staging area server configuration depends on how the change tables are read by the ETL tool.

Configuration Parameters
The following table describes the SQL-based change event router configuration parameters:

Table 20-1  Event Router Configuration Parameters

cdcDatasource (string): The Change Data Source. A local DISAM data source.

eliminateUncommittedChanges (Boolean; default: false): When set to true, only committed change records are moved to the Change Table (Change File). If false, all change records are moved to the change tables (in which case memory usage is minimal), hence the change table may contain rolled-back data. For most agents, following the RI considerations (see Referential Integrity Considerations) results in rolled-back changes being eliminated naturally by means of compensating change records generated by the agent in case of a rollback. Consult the respective CDC agent documentation for details.

eventExpirationHours (int; default: 48): Indicates how long change records are kept in change tables within the Staging Area. After the indicated time, change records are deleted. You can set a value between 0 and 50000. A value of 0 means that the records are never deleted. A value of 1 indicates that the records are kept for one hour.

logLevel (enum; default: none): One of none, api, internalCalls, info, or debug.

maxDeletedEventsInBatch (int; default: 500): Controls how many expired change records are deleted in a single pass. This number may need to be lowered in some rare cases in order to reduce latency when a large number of change events is continuously being received.

maxOpenfiles (int; default: 200): Controls the number of physical files opened by the event router.

maxTransactionMemory (int, in Kb; default: 1000): Specifies how much memory can be stored in memory per transaction before it is off-loaded to disk. This number should be higher than the average transaction size so that the slower-than-memory disk is not used too often.

maxStagingMemory (int, in Kb; default: 100000): Specifies how much memory in total can be used for storing active transactions (ones that have not yet committed or rolled back).

sourceEventQueue (structure): Connection information to the CDC agent. The structure contains server (string), workspace (string), adapter (string), eventWait (int; default: 30), maxEventsAsBlocks (int; default: 250), reconnectWait (int; default: 15) and fixedNat (boolean; default: false).

stagingDirectory (string): Specifies the directory where the staging area change files will be stored. This directory also stores off-loaded transactions as well as timed-out transactions and error files.

transactionTimeout (int, in seconds; default: 3600): Specifies how long a transaction can be active without getting new events. This parameter should be set according to the corresponding setting of the captured database. In particular, this setting must not be lower than the database's transaction time-out setting, as this may lead to the loss of transactions.

useType (enum; default: sqlBasedCdc): This parameter must be set to this value.

subscribeAgentLog (Boolean; default: False): When set to true, the change router writes the contents of the CDC agent's log into its own log. Do not set this property to true if the logLevel property is set to debug, because the large amount of information that is sent in this mode will cause performance delays.

SQL Access to Change Events


To implement incremental change data capture (as opposed to full reloads), ETL tools access changes using SQL. This section describes how the ETL tool consumes these changes.


Change Tables
For every table in the Change Data Source, a change table by the same name is maintained by Attunity Stream in a change data source (DIFDB, a DISAM data source). A change table contains the original table columns and CDC header columns, as listed in the following table:

Table 20-2  Change Event Common Header Columns

timestamp (string): The original change timestamp.

table_name (string): The name of the table.

operation (string): beforeImage | update | insert | delete.

transactionID (string): The source-specific transaction ID of the change. For auto-commit operations, it may be empty (or 0).

context (string (32)): The change record stream position in the Staging Area. The column is defined as the primary unique index. It is a 32-byte string with the following structure:
<yyyymmdd>T<hhmmss>.<nnnn><cccccc>
Where:
- <yyyymmdd>T<hhmmss> is the commit processing timestamp as generated in the staging area when starting to process the Commit event.
- <nnnn> is a unique number to differentiate between transactions committed during the same second (up to 99,999 are assumed).
- <cccccc> is a counter for the change events in the transaction, making every stream position unique (up to 9,999,999 are assumed).

agent_context (string (64)): The original change record stream position from the agent (non-numeric). This column is defined as an alternate, descending unique index. It is used for the following:
1. It ensures that a change event does not appear more than once in the change table.
2. It allows scanning of a change table backwards, peeking easily at the last N change events.
3. When working with complex records, multiple records may result from a single back-end change record. This column enables the user to associate these records with the single change record.
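Because agent_context is defined as a descending unique index, the most recent change events can be retrieved efficiently. The following query is an illustrative sketch only; the change table name INVENTORY_CHANGES and the row count are placeholders:

-- Peek at the 10 most recent change events using the descending agent_context index
select timestamp, table_name, operation, context, agent_context
from INVENTORY_CHANGES
order by agent_context desc
limit to 10 rows;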

In addition to the common CDC header columns, each change table may maintain auxiliary CDC header fields specific to the source data source CDC agent. Including these fields in the Change Table (Change File) is optional. The following table lists auxiliary CDC header fields for different data source types:


Table 20-3  Agent-Specific Auxiliary Header Columns

Data Source   Column Name             Data Type    Description
DB2           RBA                     string (16)  Original change record block address in the database log.
Oracle        rowID                   string (24)  Row ID of affected record.
VSAM CICS     terminalID              string (4)   Terminal ID of the transaction that made the change.
VSAM CICS     taskID                  string (8)   Task ID of the transaction that made the change.
DB400         journalCode             string (1)
DB400         entryType               string (2)
DB400         jobName                 string (10)
DB400         userName                string (10)
DB400         jobNumber               string (6)
DB400         programName             string (10)
DB400         fileName                string (10)
DB400         libraryName             string (10)
DB400         memberName              string (10)
DB400         RRN                     string (10)
DB400         userProfile             string (10)
DB400         systemName              string (8)
DB400         referentialConstraint   int4
DB400         trigger                 int4
DB400         objectNameIndicator     string (1)
VSAM Batch    jobName                 string (8)
VSAM Batch    programName             string (8)
VSAM Batch    userName                string (8)
VSAM Batch    stepName                string (8)
VSAM Batch    procedureStepName       string (8)
VSAM Batch    programStartTimeStamp   string
ADABAS MF     sequence                int4
ADABAS MF     BEFZ                    string (8)
ADABAS MF     indicator               string (2)
ADABAS MF     recordType              string (2)
ADABAS MF     userID                  string (16)
ADABAS MF     generalUniqueID         string (56)
ADABAS MF     fileNumber              uint2
ADABAS MF     RABN                    uint4
ADABAS MF     imageType               int4
ADABAS MF     workRabChain            uint4


The STREAM_POSITION Table


The SQL-based CDC approach relies on an auxiliary table to maintain the current CDC stream position for every change table. The recommended structure of the STREAM_POSITIONS table is as follows:
Table 20-4  STREAM_POSITIONS Table Structure

application_name (string (64)): The application for which the stream position is kept.
table_name (string (64)): The table for which the stream position is kept.
context (string (32)): The last recorded stream position for the table and application.

This table's primary unique key is the concatenation of application_name + table_name. The use of this table is not mandatory, but it is part of the recommended use pattern of SQL-based CDC. The STREAM_POSITIONS table is created with the following definition (where fileName is changed into an actual path):
<?xml version='1.0' encoding='UTF-8'?>
<navobj>
  <table name='STREAM_POSITIONS'
         fileName='<staging-directory-path>STREAM_POSITIONS'
         organization='index'>
    <fields>
      <field name='application_name' datatype='string' size='64'/>
      <field name='table_name' datatype='string' size='64'/>
      <field name='context' datatype='string' size='32'/>
    </fields>
    <keys>
      <key name='Key0' size='128' unique='true'>
        <segments>
          <segment name='application_name'/>
          <segment name='table_name'/>
        </segments>
      </key>
    </keys>
  </table>
</navobj>

The general preference would be for this table to be stored inside the target database, which allows committing changes along with an update of the stream position under the same transaction. However, with local access, using a STREAM_POSITIONS table in the Change Data Source introduces very limited risk.

Reading the Change Tables


ETL tools query for changes in a selected Change Table (Change File). For each table and, potentially, for each specific application, the tools keep a stream position (also called context) that is maintained between runs.


Note:

Some ETL tools stop consuming changes when an EOF (End of File) is returned. To force an ETL tool to continuously consume changes, see Reading Change Tables Continuously.

The SQL queries that the tools use have the following structure:
select * from change-table where context > :last-stream-position

where the last-stream-position parameter represents the last processed event as maintained by the ETL tool for the specific change table and application. While some ETL tools support this stream position concept internally, with others this requires manual implementation. This chapter specifies the manual steps required for reading change logs, leaving tool-specific information for tool-specific documents. The general procedure for consuming change events for change table T in application A using SQL is outlined below. Note that this procedure does not address Referential Integrity constraints (see Referential Integrity Considerations).

To create a stream position
1. This is a one-time setup step aimed to create a stream position record for T + A in the STREAM_POSITIONS table. The following SQL statement creates that record:

insert into STREAM_POSITIONS values ('A', 'T', '');

2. This step is where change data is actually read. It occurs on each ETL round.

select t.* from T t, STREAM_POSITIONS sp
where sp.application_name = 'A' and sp.table_name = 'T'
and t.context > sp.context
order by t.context;

This query retrieves change records starting from just after the last handled change record. Obviously, t.* can be replaced with an explicit list of columns. What is important is that the context column must be selected, as this is the change record stream position which is required for the next step.

3. This step occurs at the end of each ETL round, once all change records were retrieved and processed. Let's assume that the value of the context column of the last change record was C. This value needs to be stored back into the STREAM_POSITIONS table for the next ETL round. This is done with:

update STREAM_POSITIONS set context='C' where application_name = 'A' and table_name = 'T';

This value can be stored more frequently during the ETL process as needed. The general guideline is that once change record data has been committed to the target database, the stream position should be updated as well.
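As an illustration, the three steps can be strung together for one concrete round. The application name ETL1 is a placeholder, INVENTORY_CHANGES stands for the change table, and the context literal is an invented value in the documented <yyyymmdd>T<hhmmss>.<nnnn><cccccc> format:

-- One-time setup: register the stream position for this application/table pair
insert into STREAM_POSITIONS values ('ETL1', 'INVENTORY_CHANGES', '');

-- Each ETL round: read all change records past the saved position
select t.* from INVENTORY_CHANGES t, STREAM_POSITIONS sp
where sp.application_name = 'ETL1' and sp.table_name = 'INVENTORY_CHANGES'
and t.context > sp.context
order by t.context;

-- End of round: persist the context of the last record processed
update STREAM_POSITIONS set context = '20080501T101530.0001000007'
where application_name = 'ETL1' and table_name = 'INVENTORY_CHANGES';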

Reading Change Tables Continuously


There are several methods for consuming changes stored in the AIS staging area. Many ETL tools work by polling the change tables in the staging area for changes made since the last ETL run. This is executed with an SQL statement, for example:
select * from INVENTORY_CHANGES where CONTEXT > ?


In this case, the parameter value is the last value returned in the previous ETL run and is stored in an auxiliary table of a target database. This approach works well for polling cycles that are not too close together (up to several times an hour). However, in cases where near real-time SQL-based change capture is required, this approach is not adequate. The cost of starting an ETL run is often high. The cost is even greater if several change tables need to be monitored concurrently in near real time.

To compensate for the higher costs, a continuous CDC approach is used. In this case, a continuous query uses a special feature of the Attunity query processor that defines a query that (almost) never ends. Internally, Attunity Stream polls the data by re-executing the query as needed, but with the least possible overhead. For an ETL tool that uses a data access API such as ODBC, it just appears as if the query never ends. If no data is available, the ETL tool is simply blocked in a call to fetch the next row of data. When new data becomes available, the ETL tool continues receiving rows back from the fetch call. The continuous CDC approach requires one ETL process to be continuously active for each consumed change table.

Executing a Continuous Query


The following query is used to execute continuous queries:
select CONTEXT as $$StreamPosition, * from INVENTORY_CHANGES where CONTEXT > ?

The parameter value in this statement is the last value returned in the previous ETL run and stored in an auxiliary table of the target database. This is the same as the statement used for executing regular change queries, as explained in Reading Change Tables Continuously. The only difference is that the CONTEXT column in this statement uses the alias '$$StreamPosition'. Once the query starts with an initial value for the parameter, the alias name instructs the query processor to read until the end of the data and then run the query again with the last CONTEXT value as the new parameter value.
Notes:

To ensure that the continuous query works correctly:
- The stream position column must be the first column in the SELECT list, with the alias name '$$StreamPosition'.
- The matching stream position parameter must be the first parameter in the WHERE clause.

Continuous queries can be invoked from the ODBC, JDBC and ADO.NET interfaces.
Note:

Using continuous queries under the ADO OLE DB interface may cause problems because of the OLE DB read-ahead behavior. In this case, when a query is made, more records than the ones requested are fetched. This causes the system to continue to wait for information that is not there, because it was already fetched.

Stopping and Pausing a Continuous Query


A continuous query does not end on its own. You can pause and stop the query using different values in a Control Command column. The following are the values that are used to control a continuous query:


(empty): If the column value is empty, the query processor continues to return rows as they become available.
Pause: The query processor pauses the continuous query until this value is changed. After a retry interval, the query is re-executed and the value of the Control Command column is checked again.
Stop: The query processor stops running the continuous query and returns an EOF.

The following query demonstrates the use of a control command column:

select t.context AS $$streamPosition, sp.command AS $$controlCommand, t.*, sp.*
from T t, STREAM_POSITIONS sp
where sp.application_name = 'A' and sp.table_name = 'T'
and t.context >= ?
order by t.context
limit to 1000 rows;

In this example, table T is the change table and STREAM_POSITIONS is the table used to record the last position processed in the previous ETL run. The STREAM_POSITION Table contains a column named command that gets the alias name $$ControlCommand. The STREAM_POSITIONS table must contain a row for the application and table before running this query. To stop this query while it is running continuously, run the following query from a different session:

update STREAM_POSITIONS set command='Stop'
where application_name = 'A' and table_name = 'T';

When you use a control command column, make sure that:

- The control command column is the second column in the SELECT list of the query, immediately following the stream position column.
- The condition sign used in the stream position column is >= (greater than or equal to). Do not use the > (greater than) sign. This is because when no rows are returned, the control command column is also not returned. Note: If the change table is empty, there may be a problem stopping the continuous query.
- The clause limit to 1000 rows is included. If the query returns a very large number of rows without re-executing, it may take a very long time to re-evaluate the control command column. By setting a limit on the number of rows returned for each query, the control command column value is re-evaluated more often. The value, 1000 rows, is used as an example. You can use a higher or lower number.

The control command value can come from another table, or it can be calculated. For example, you can set its value based on the time of day (for example, it may return a value of Pause between 2:00 PM and 4:00 PM). When a continuous query is used with a thin client (JDBC or ADO.NET), the polling takes place on the server and it may take a long time before a response is returned to the client. The server keeps monitoring the client connection, so if the client goes away, the server automatically stops the continuous query at the next poll event.


Note:

Continuous queries are subject to normal query processing rules. Make sure that the SQL that is used returns the result in the order of the stream position column. A single-segment unique index on the column and an explicit sort condition is one way to ensure this without performance loss.

Continuous Query Properties


The following environment settings are used to control the behavior of continuous queries:

continuousQueryRetryInterval
continuousQueryTimeout
continuousQueryPrefix

For more information on these properties, see Environment Properties, Query Processor.

Referential Integrity Considerations


Some related tables have Referential Integrity (RI) constraints enforced on them. For example, with OrderHeader and OrderLines, one cannot have OrderLines without an associated OrderHeader. When processing change events by table (which is how SQL-based CDC works) as opposed to by transaction, referential integrity cannot be maintained properly. For example, when first handling all OrderHeader records and then all OrderLines records, a deleted OrderHeader may be applied long before the required delete of the associated OrderLines records.

In order to confine potential referential integrity violations to a known time frame, after which referential integrity is restored, a somewhat different process is needed (compared with Reading the Change Tables). A special SYNC_POINTS table should be added to maintain a common sync point for use with multiple related tables. The table is defined as follows:
Table 20-5  SYNC_POINTS Table Structure

application_name (string (64)): The application for which the processing is done.
sync_name (string (64)): The name of the synchronization point.
context (string (32)): A stream position that can be safely used as an upper bound for event retrieval of all related tables.

This table's primary unique key is the concatenation of application_name + sync_name. The use of this table is not mandatory, but it is part of the recommended use pattern of SQL-based CDC. The SYNC_POINTS table is created with the following definition (where fileName is changed into an actual path):
<?xml version='1.0' encoding='UTF-8'?>
<navobj>
  <table name='SYNC_POINTS'
         fileName='<staging-directory-path>SYNC_POINTS'
         organization='index'>
    <fields>
      <field name='application_name' datatype='string' size='64'/>
      <field name='sync_name' datatype='string' size='64'/>
      <field name='context' datatype='string' size='32'/>
    </fields>
    <keys>
      <key name='Key0' size='128' unique='true'>
        <segments>
          <segment name='application_name'/>
          <segment name='sync_name'/>
        </segments>
      </key>
    </keys>
  </table>
</navobj>

The following procedure describes how to ensure RI is regained at the end of a group of ETL rounds. It is an extension of the procedure described earlier for consuming change records. Here we assume that tables T1, T2 and T3 are related with RI constraints and that A is the application we are working under.

To create a stream position
1. This is a one-time setup step aimed to create a stream position record for each of T1, T2 and T3 with application A in the STREAM_POSITIONS table. The following SQL statements create those records:

insert into STREAM_POSITIONS values ('A', 'T1', '');
insert into STREAM_POSITIONS values ('A', 'T2', '');
insert into STREAM_POSITIONS values ('A', 'T3', '');

2. This step is performed at the beginning of a group of ETL rounds (that is, before starting to process change events for T1, T2 and T3). The goal here is to get a shared sync point for retrieval of T1, T2 and T3. This is done by sampling the context column of the SERVICE_CONTEXT table. This value is the stream position of the last change record in the most recently committed transaction. This is done as follows:

insert into SYNC_POINTS
select 'A' application_name, 'T123' sync_name, context from SERVICE_CONTEXT;

Here, T123 is the name chosen for the synchronization point of tables T1, T2, and T3.

3. This step is where change data is actually read. It occurs on each ETL round.

select t.* from T t, STREAM_POSITIONS sp, SYNC_POINTS sy
where sp.application_name = 'A' and sp.table_name = 'T'
and sy.application_name = sp.application_name and sy.sync_name = 'T123'
and t.context > sp.context and t.context <= sy.context
order by t.context;

Note that t.context <= sy.context is used because the context represents a change record to be processed, and processing should include the change record associated with sy.context, too. This query retrieves change records starting from just after the last handled change record, but stopping at a common sync point. t.* can be replaced with an explicit list of columns; however, it is important that the context column is selected, as this is the change record stream position which is required for the next step.

4. This step occurs at the end of each ETL round, once all change records were retrieved and processed for a table Ti. Let's assume that the value of the context column of the last change record was C. This value needs to be stored back into the STREAM_POSITIONS table for the next ETL round. This is done with:

update STREAM_POSITIONS set context='C' where application_name = 'A' and table_name = 'Ti';

This value can be stored more frequently during the ETL process as needed. The general guideline is that once change record data has been committed to the target database, the stream position should be updated as well.

Monitoring the Change Data Capture


You can monitor the Change Data Capture when it is deployed. This provides you with information about the CDC agent's status, troubleshooting and tuning. This section contains the following topics that explain monitoring in a CDC:

Service Context Table
Monitoring the Status

Service Context Table


A control table maintained by the event router reports its current state and other important statistics. It can be accessed with any tool that supports SQL access. The control table is called SERVICE_CONTEXT. This table has a single row with the following columns:

Table 20-6  SERVICE_CONTEXT Table Structure

context (string (32)): The context value of the last change record in the most recently committed transaction. This value can be used to synchronize retrieval of transactions among different tables.

agent_context (string (64)): This is the agent context that the staging area would return to if it were to restart for whatever reason. The agent context value is calculated as follows:
- If there are pending uncommitted transactions, the agent_context value is the agent context of the first event of the oldest uncommitted transaction.
- If there are no pending uncommitted transactions, it is the agent context of the last event of the most recently committed transaction, prefixed with "next", indicating that on recovery the next event after that is to be processed.
The staging area maintains an internal agent_context that is more advanced than the one stored in the SERVICE_CONTEXT table. The staging area uses memory to speed up change processing, and when stopped it may revert back to an earlier agent context. The amount of extra work depends on the existence of long-running transactions.


start_time (timestamp): The time when the staging area started.

status (string (16)): Staging area status. For more information, see Monitoring the Status.

sub_status (string (64)): A second-level status. For more information, see Monitoring the Status.

status_message (string (80)): Message that is returned that describes the staging area status.

status_time (timestamp): The time that the status is updated.

completed_transactions (uint4): Number of transactions processed.

active_transactions (uint4): Number of transactions in progress (in memory, not yet committed or rolled back).

timedout_transactions (uint4): Number of transactions that have timed out (were in memory for too long, declared to have timed out, and written to a file).

rolledback_transactions (uint4): Number of rolled back transactions.

processed_change_events (uint4): Number of change events written out.

deleted_change_events (uint4): Number of change events deleted from the Change Table (Change File).

bytes_written (uint4): Accumulated size in bytes of change records written.

opened_files (uint4): Current number of physically opened files by the staging area.

opened_files_virtual (uint4): Current number of logically opened files by the staging area.

memory_usage (uint4): Amount of memory currently allocated for staging.

node_id (uint4): A two-digit identifier with the same value as the nodeID config property when in multi-router mode. In regular mode, this column has no value.

last_transaction_timestamp (string (26)): The time of the last transaction.

Version (uint4): The version number for the router. As of AIS Version 5.0, the value for this column is 1. This value is increased when major updates are added to the router.

errors (uint4): Total number of errors reported.

Reduced_transactions (uint4): The number of transactions reduced to disk.

compensation_records (uint4): The number of compensation records captured.

This control table is also used by the event router to persist its state for purposes of recovery. It must not be modified by users.
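Any SQL-capable tool can read this table for monitoring purposes. The following query is a sketch of a simple health check, using only the columns documented above:

-- Inspect the router's current state and resource usage
select status, sub_status, status_message, active_transactions, memory_usage
from SERVICE_CONTEXT;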

Monitoring the Status


The status is defined as a state in the SERVICE_CONTEXT table. The following table describes the different statuses available for a CDC agent when it is running.


Table 20-7  SERVICE_CONTEXT Status States

State: Active; Sub State: Processing
State Details: Reads the change events. Writes the change events. Reduces timed-out transactions to disk. Deletes any expired change events.
Description: The router is connected to the CDC agent and is processing or waiting for the change events.

State: Active; Sub State: Idle
State Details: Waits for new change events.
Description: The router's agent has reached the end of its journal and does not have any new change events.

State: error; Sub State: router.discWriteError
State Details: Detailed error text.
Description: This indicates that a change router operation involving writing to disk failed. The most common reason is not enough disk space. Other reasons such as permissions, a wrong path, or locking can also cause this.

State: error; Sub State: component.error
State Details: Detailed error text.
Description: This error type occurs in agents and routers. The prefix component (Agent/Router) indicates where the error happened, and the error in the sub_state column identifies the error. The following are the errors that are returned for this error type: xmlError, requestError, noActiveConnection, resourceLimit, noSuchResource, authenticationError, noSuchInteraction, noSuchConnection, notImplemented, autogenRejected, resourceNotAvailable, authorizationError, configurationError, noSuchStream, temporarilyUnavailable, dataError, interventionRequired.

State: Disconnected
State Details: Detailed error text.
Description: This indicates that the change router operation with the CDC agent failed and cannot be restored.

State: Paused; Sub State: N/A
Description: Operator manually paused the change router using the sqlrtr_pause control reset.

State: Down; Sub State: N/A
State Details: Down message (orderly shutdown or abort message).
Description: This indicates that the change router is not running.

Error Handling
Errors can occur in a Change Data Capture on either the source machine or the staging area. Most of these errors are passive and handled automatically. The change router is an active component that reads the change events and applies them to a target. AIS lets the user determine how to handle errors that occur in the change router. The following topics describe error handling for a CDC:

Determining the Change Router Error Behavior
CONTROL_TABLE

Determining the Change Router Error Behavior


Change routers are active components that read change events over a network connection and apply them to a target. When an error occurs, the cause of the error and relevant details are logged to a log file. In addition, AIS provides descriptions of the error. These error descriptions are handled by the status columns in the SERVICE_CONTEXT table. See Monitoring the Status for more information about the status columns. AIS lets the user determine how to handle the error based on its description. When an error occurs, the change router holds the event until an error behavior is selected. The following are the error behaviors:

Skip: The change router tries to skip the event that caused the router error.
Retry: The change router tries to process the event again. This option lets you execute the event again after manually fixing the error.
Pause: This pauses the change router so that it does not pull changes from the agent.
Resume: This resumes the change router if it is paused.

CONTROL_TABLE
The CONTROL_TABLE is part of the staging area. This table lets you control the router by inserting control operations that are processed by the router. The router checks the table for new records. When a new record is entered, the router reads it and then deletes the record. It then tries to execute the requested action if it is valid. For a description of the possible actions, see Determining the Change Router Error Behavior.


The CONTROL_TABLE has the following columns:


Table 20-8  CONTROL_TABLE Structure

action (string (32)): One of the following: skip, retry, pause, resume. For details on these actions, see Determining the Change Router Error Behavior.

node_id (string (2)): A two-digit identifier with the same value as the nodeID config property when in multi-router mode. In regular mode, this column has no value.
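For example, to ask a paused router to resume, a control record can be inserted from any SQL session. This is an illustrative statement for regular (single-router) mode, where node_id is left empty:

-- Request that the router resume pulling changes from the agent
insert into CONTROL_TABLE values ('resume', '');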

subscribeAgentLog Configuration Properties


The subscribeAgentLog configuration property is defined as follows:

Parameter: subscribeAgentLog
Type: Boolean
Default: False
Description: When set to true, the change router writes the contents of the CDC agent's log into its own log. Do not set this property to true if the logLevel property is set to debug, because the large amount of information that is sent in this mode will cause performance delays.

Performance Considerations
The SQL-based approach can scale reasonably in the following cases:

- There are many captured tables, but changes are processed in rounds with sufficient time apart.
- There is a relatively low number of captured tables, but changes are processed in a near real-time manner.

This approach does not fit very well with many captured tables together with frequent change processing, because that means constantly running SQL queries, consuming significant resources whether or not there are any change records. The following sections provide further information about the various tuning parameters which may affect performance.

Memory Parameters
The maxTransactionMemory and maxStagingMemory parameters control the amount of memory used by the Change Router. As the change router processes transactions, it tries to keep them in memory in order to obtain the best performance. If a transaction is too large to be kept in memory (as determined in the maxTransactionMemory parameter), the transaction is reduced to a special disk file by the name <txn-id>.txn and from that point on, additional change events for this transaction are appended directly to this file.

Since disk I/O is rather slow, it is recommended to set the maxTransactionMemory parameter to as high a value as can be supported on the machine; the program must have sufficient system quota to allocate the memory and use it efficiently (without page-fault thrashing). To estimate the value for this parameter, you need to consider the biggest transaction likely to occur and calculate approximately the total size of all updated records (including some header overhead; for more information, see Capacity Planning). Very big transactions may occur as a result of a simple batch-update SQL query, and although typical production applications do not invoke such transactions, they may still occur (for example, in month-end processing).

The maxTransactionMemory parameter deals with a single transaction. Since there could be multiple transactions active concurrently, there is another parameter, maxStagingMemory. This parameter determines how much memory all active transactions may consume together. This parameter value must be greater than the value of maxTransactionMemory, so that typical small transactions can still fit in memory while big transactions are reduced to disk files.

Latency
The Change Router is responsible for deleting change records whose retention period has expired. When events are being deleted, the change router does not move data from the CDC agent to the change files. This may increase the latency of change event handling. The impact of deleting old events on latency can be reduced by setting the maxDeletedEventsInBatch parameter to a lower value. Another way to reduce the impact of deleting old events is by setting the retention period wisely. For example, if the change activity peaks between 08:00 and 16:00, then setting eventExpirationHours to 3 days plus 12 hours (that is, 84 hours) would result in deletion activity matching the peak occurring between 20:00 and 04:00.

Capacity Planning
This section describes various planning considerations, and includes the following topics:

Storage
Memory
Network
Processing

Storage
The main consideration for storage requirements is the amount of changes that needs to be cached in the Staging Area. The parameters are as follows:

D: How many days back the staging area keeps records before deleting them (retention period).
N: The number of captured tables.
Ri: The size (in bytes) of a single record of the captured table (i=1, ..., N).
Fi: The average number of changes per day for the captured table (i=1, ..., N).


H: The header size (in bytes). This is 225 bytes for the common header fields. It should be adjusted based on the developed solution. For example, some header fields may be omitted while some agent-specific header fields may be added.
IH: The index overhead. Approximately 125 bytes.

The persistent storage requirement Sp (in bytes) can be calculated as follows:

Sp = D * SUM((Ri + H + IH) * Fi)

where the SUM is taken over i=1, ..., N.

The Change Router also uses temporary storage for processing transactions. Storage is used when an entire transaction does not fit in the memory allocated for the change router. The parameters are as follows:

CT: The number of concurrently running large transactions.
MS: The maximum record size.
MR: The maximum number of records in a large transaction.

The transient storage requirement St (in bytes) can be calculated as follows:

St = CT * (MS + H) * MR

Change records are kept in the Staging Area for a specified length of time, counted from the time the changes were added to the staging area. This behavior does not consider the original time the change occurred. For this reason, if CDC processing was halted for, say, 4 days, then when processing resumes, 4 days' worth of changes need to be stored in the Change Table (Change File). This needs to be considered when allocating storage, since if the retention period is shorter, say 2 days, then recovering from a 4-day pause requires twice the normal storage. The minimal storage estimate S can be calculated as follows:

S = St + Sp
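As a worked example (all figures are hypothetical): with a retention period of D=2 days, a single captured table (N=1) with record size R=500 bytes and F=100,000 changes per day, and the header and index overheads given above (H=225, IH=125), the persistent requirement is Sp = 2 * (500 + 225 + 125) * 100,000 = 170,000,000 bytes, or roughly 170 MB, to which the transient requirement St must still be added.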

Memory
Memory requirement is made up of the following parts:

Memory usage of the event router: This value is mostly affected by the maxStagingMemory event router setting. The default value for this is 100,000Kb (=100MB) and it can be increased or reduced as needed. Using a higher number will increase the throughput if the total size of active transactions reaches that number (active transactions are the concurrently running transactions that have not yet committed or rolled back).

Memory usage of the SQL servers: Here the evaluation should be made experimentally by monitoring a running system and checking how much memory is added per additional server.

For capacity planning, the sum of these two parts should be used.

Network
Network capacity and latency are reflected in the throughput of the Change Router, as well as in the throughput of the SQL consumers (if they are located on a remote machine). The parameters are as follows:

AR: The average record size, in bytes.
RF: The desired number of change events per second.

A very rough estimate of network capacity NC can be calculated with the following formula:
NC=AR*RF*8*(0.5+1) (Bits per second)
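For instance, with hypothetical values of AR=500 bytes and RF=100 change events per second, NC = 500 * 100 * 8 * 1.5 = 600,000 bits per second, that is, about 600 Kbps.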

Another related setting is the maximum number of events to request in a single round-trip to the CDC agent. The parameter is the config/sourceEventQueue/maxEventsAsBlock change router configuration parameter. It affects the request sizes and requires updating the environment/comm/comMaxXmlSize and the environment/comm/comMaxXmlInMemory parameters. The setting of these XML parameters can be calculated as follows:

MEAB: The maximum events as block (as defined in the configuration; the default is to request 250 events).
MAXR: The maximum size of a single XML change record (including the header part). To get a rough estimate of the maximum size, take the original change record data plus the header size and double it (taking into account the binary XML overhead in the extreme case).

comMaxXmlInMemory = comMaxXmlSize = MEAB * MAXR * 1.2

Where 1.2 is a 20% cushion factor, for safety.
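Continuing the same hypothetical numbers: with the default MEAB=250 and MAXR = 2 * (500 + 225) = 1,450 bytes, both XML parameters would be set to 250 * 1450 * 1.2 = 435,000 bytes.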

Processing
Processing power is not usually the limiting factor in the work of the Change Router and the Staging Area. Actual CPU consumption depends on the rate of the change events processed as well as on concurrent ETL clients reading the Change Table (Change File). It is recommended to monitor CPU consumption under typical load to verify that the system functions within its optimal load.

Applying Metadata Changes


CDC projects, and more generally ETL projects, are rather sensitive to metadata changes. There are two major cases where metadata changes need to be applied:

At design time, while building a solution, metadata may change either because of mapping changes or because of changes in the applications that work with the data. This use case has the following characteristics:
- Design-time changes may occur frequently (less so towards the end of the development cycle).
- Changes may be small (such as adding a new column to a table).
- Change data can typically be deleted without any implications.

- At production, replacing one production version with another (typically a new production version of the entire customer project).


This use case has the following characteristics:
  - Production changes are relatively rare.
  - Production changes typically contain changes to many components, because production is typically held to apply changes. The preference is to wait as long as possible and consolidate changes into a single upgrade process.
  - Production change data needs to be handled; data loss is not an option.

Since applying metadata changes in production requires one to first apply these changes in the design environment, we introduce the design changes first and then add the production requirements.

Design-time Metadata Change Procedure


The procedure for changing metadata is as follows:
1. Open the relevant CDC solution in Attunity Studio.
2. Disable the solution.
3. Access Solution, Implementation, and then Metadata to add or remove a captured table. To apply a metadata change in a captured table, remove and immediately add back the updated table name from the list.
4. Delete the physical DISAM change files associated with the updated or deleted captured tables.
5. Delete the SERVICE_CONTEXT file so that changes start flowing immediately.
6. Re-enable the solution.

Production Metadata Change Procedure


Here we assume that the solution has been updated for metadata changes in the development environment. The remaining procedure for changing metadata in the production environment is as follows:
1. Perform a backup.
2. Make the application go quiet so that no new events are added.
3. Run the ETL processes until all changes have been processed from all Change Tables (Change Files).
4. Disable the solution workspaces.
5. Delete the physical DISAM change files associated with the updated or deleted captured tables.
6. Deploy the updated solution to the production environment.
7. Re-enable the solution.

Migration from XML-based CDC


Attunity Stream implements SQL-based CDC with the goal of providing improved support for classical SQL-based ETL tools. This section summarizes the features that are used in a solution to adapt to the new SQL-based CDC implementation.


The XML mode of operation is also supported, for easy use with XML-based integration. The following table summarizes some features that enhance the SQL-based CDC abilities:
Table 20-9 Attunity Stream SQL-Based CDC Features

Topic: Change Table (Change File)
Regular Behavior: Each captured table had a change table associated with it. The table columns included the original captured table columns plus CDC header fields.
Enhanced SQL Behavior: The same change tables are available, with the following changes: two new indexes for direct access, and a few more CDC header fields. The change tables are now real tables based on DISAM files on disk, one file per captured table. The XML-to-binary conversion is done once, so the DISAM files store data in an efficient binary format.

Topic: Stream position management and guaranteed delivery
Regular Behavior: Stream position was maintained at the agent or the Staging Area. Stream position was automatically advanced on commit, or on read when auto-commit mode was used. This approach did not offer guaranteed change delivery, because one might have read a change event and failed to process it (or might have failed to get it due to network failure).
Enhanced SQL Behavior: Stream position is no longer maintained by the agent or the staging area. Instead, an easy-to-use stream position is exposed as an index column of the change table (the context column).

Topic: Friendly stream position
Regular Behavior: The stream position was an opaque entity whose only quality was being strictly incremental (that is, each new change record got a higher stream position, alphabetically).
Enhanced SQL Behavior: The stream position looks like a timestamp, and it actually is the timestamp of when the event was written out to the change file (in universal coordinated time, not local time, to avoid daylight saving time conflicts).

Topic: Guaranteed change delivery
Regular Behavior: To guarantee change delivery, the ETL tool needed to maintain a stream position and to reestablish it when retrieving, using the SetStreamPosition stored procedure.
Enhanced SQL Behavior: Since the ETL tool had to maintain a stream position anyway for guaranteed change delivery, there is no extra work needed in the new approach.

Topic: Referential Integrity
Regular Behavior: To attain referential integrity at the end of a mapping session, one had to use the SetSyncPoint procedure.
Enhanced SQL Behavior: An alternative, SQL-based approach was developed (see Referential Integrity Considerations).

Topic: Performance
Regular Behavior: Change events, in their original XML form, were stored in one big index file (about 5 indices), whether they were committed or not. They were later extracted by a non-primary index scan (to achieve transaction order). Change data was stored, in most cases, as a CLOB attached to a change entry. Very little use was made of available system memory.
Enhanced SQL Behavior: Memory is used intensively to improve performance. Change records are transformed into their binary representation (just like the one a COBOL copybook gives) and are kept in memory until commit time, when they are written to the different change files, one per captured table. Rolled-back events simply vanish without incurring any I/O overhead.

Topic: Scalability and robustness
Regular Behavior: A single server process (with a single thread) did all the reading from the CDC agent and the servicing of SQL to consumers. This resulted in poor scalability of the event consumption and the apply process. With more than a few consumers, time-outs would be quite likely. Another disadvantage of the single server process was that a communication hang (mostly due to network issues) could bring the entire solution to a standstill.
Enhanced SQL Behavior: A single dedicated server process reads change records off the CDC agent and distributes them (after transactions are committed) to separate change files. A separate workspace with any number of servers is defined for consuming change records using a normal DISAM data source. This results in very good scalability and use of system resources. Having separated the Change Router and the change file access servers also means that problems communicating with the CDC agent no longer impact the reading process. One can even take down the agent and the change router server for updates, leaving the Change Table (Change File) accessible.

Topic: Monitoring
Regular Behavior: The monitoring system had a significant impact on the system in terms of resource utilization (and the more change events, the bigger the impact). Very little could be told about what the system was doing at any given moment or over time.
Enhanced SQL Behavior: While the existing monitoring system is still there, a special SERVICE_CONTEXT table is now provided in the same data source as the change file. Querying this table provides insight into what the change router is doing, as well as important counters. Having this information exposed simply via SQL makes it easy to integrate third-party management solutions. The service context does not introduce significant overhead and can be sampled as needed to produce running statistics.

Topic: Transaction time-outs
Regular Behavior: A timed-out transaction was not detectable.
Enhanced SQL Behavior: Timed-out transactions are detected and written out to special .txt files. The SERVICE_CONTEXT table also counts timed-out transactions, making them easy to detect.
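As a hedged illustration of the monitoring row above, the sketch below samples the SERVICE_CONTEXT table over ODBC. The use of pyodbc and the DSN name are assumptions; only the table name and the context/agent_context columns are taken from this document:

```python
# Minimal monitoring sketch: sample the SERVICE_CONTEXT table via ODBC.
# "AIS_STAGING" is a hypothetical ODBC data source name for the staging
# area; adjust to your environment.
import pyodbc

conn = pyodbc.connect("DSN=AIS_STAGING")  # hypothetical DSN
cursor = conn.cursor()

# context and agent_context are the stream-position fields described
# in this chapter; other counter columns vary by version.
cursor.execute("SELECT context, agent_context FROM SERVICE_CONTEXT")
for row in cursor.fetchall():
    print(row.context, row.agent_context)
conn.close()
```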

Migrating an existing solution to V4.8 involves the following steps:
1. Recreate the solution in Attunity Studio V4.8.
2. Establish new capacity planning, as explained above.
3. Eliminate calls to the GetStreamPosition procedure. Instead, retrieve the last known handled position from the STREAM_POSITIONS table.
4. Eliminate calls to the SetStreamPosition procedure. Instead, use a SQL WHERE clause with the context field greater than the last known handled position from the STREAM_POSITIONS table.
5. Establish and use the STREAM_POSITIONS and SYNC_POINTS tables as explained in this document.

These steps are general guidelines. Actual changes depend on the kind of solution implemented with Attunity Stream V4.6.
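To illustrate steps 3 and 4, here is a minimal consumption sketch under stated assumptions: pyodbc, the DSN and change table names (AIS_STAGING, EMPLOYEES_CT), and the single-column STREAM_POSITIONS layout are all hypothetical; the context column and the greater-than pattern come from this document:

```python
# Sketch of guaranteed delivery using the context column instead of
# SetStreamPosition/GetStreamPosition. Names are examples only.
import pyodbc

def process(change):
    """Placeholder for the ETL tool's handling of one change record."""
    print(change)

conn = pyodbc.connect("DSN=AIS_STAGING")  # hypothetical staging-area DSN
cursor = conn.cursor()

# 1. Retrieve the last handled position (maintained by the ETL tool).
cursor.execute("SELECT position FROM STREAM_POSITIONS")  # assumed column
row = cursor.fetchone()
last_position = row.position if row else ""

# 2. Read only newer change records, in stream-position order.
cursor.execute(
    "SELECT * FROM EMPLOYEES_CT WHERE context > ? ORDER BY context",
    last_position,
)
for change in cursor.fetchall():
    process(change)
    last_position = change.context

# 3. Persist the new position so a restart resumes without loss.
cursor.execute("UPDATE STREAM_POSITIONS SET position = ?", last_position)
conn.commit()
conn.close()
```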


21
Creating a CDC with the Solution Perspective
The Solution Perspective lets you create a Change Data Capture (CDC) operation using a series of interactive guides. This section contains the following topics:

- Using the Solution Perspective
- Getting Started Guide
- Project Guide
- Troubleshooting

Using the Solution Perspective


The Solution perspective is a series of wizard steps that guide you through the procedures needed to configure all the components in a CDC solution. You use the Solution perspective to set up projects. A project contains all of the components necessary for a change data capture, together with the specific configuration parameters that you set. For example, you can set up a project called Adabas. This project might consume data from a specific Adabas data source and send it to your data warehouse. You may also need to consume changes from the same data source, but send the data to a different place. Using the Solution perspective, you can create two separate projects that save these parameters, and then easily use each solution where needed without configuring the system each time.

To open the Solution Perspective
Do one of the following:

- From the menu bar, click Window|Open Perspective|Solution.
- Click the Perspective button on the perspective toolbar and select Solution from the list.

The Solution perspective opens with the Getting Started view available on the left of the workbench.

For more information on perspectives, see Working with Perspectives.

Using Views in the Solution Perspective


The views on the left side of the Solution perspective contain a series of links that guide you through the creation of a project. In the Solution perspective, these views are called guides. Double-click these links to open various editors that let you


configure the parts of the project or solution you are creating. In most cases you can click each link in the order it appears. The Solution perspective guides display the following symbols in front of a link to show you what tasks should be done and what tasks are completed.

- Triangle: Indicates that there are subtasks associated with this link. When you click the link, the list expands to display the subtasks.
- Asterisk (*): Indicates that you should click that link and carry out the tasks and any subtasks presented. If more than one link has an asterisk, you can carry out the marked tasks in any order.
- Check mark (✓): Indicates that the tasks for this link and any sublink are complete. You can double-click the link to edit the configuration at any time.
- Exclamation mark (!): Indicates a potential validation error.

Getting Started Guide


The Getting Started view looks like the figure below.

Figure 21-1 Getting Started

This view has two sections. The upper section has links to create a new AIS project or to open an existing project to edit or to finish configuring. The options covered here are:
- Creating a New Project
- Opening an Existing Project
- Opening Recent Projects

The section below the line contains links to recently opened projects. These let you access existing projects quickly.

Creating a New Project


The first step is to create a new project. The project holds all the information necessary for implementing the specific CDC solution you want to execute.


To create a new project
1. In the Getting Started guide, click the Create new project link. The Create new project screen opens.
2. In the Project name field, enter a name for your project. The types of projects available are listed in the left pane just below.
3. Select Change Data Capture. The CDC options available are presented in the right pane.

Figure 21-2 Create New Project

4. Select a CDC type from the right pane. Your options are:
- ADD-ADABAS (Mainframe): Capture data changes from an ADD-ADABAS Mainframe database, with or without the use of a staging area, from any type of client.
- ADD-ADABAS (Unix): Capture data changes from an ADD-ADABAS Unix database, with or without the use of a staging area, from any type of client.
- ADD-ADABAS (OpenVMS): Capture data changes from an ADD-ADABAS OpenVMS database, with or without the use of a staging area, from any type of client.
- ADABAS Mainframe: Capture data changes from an ADABAS Mainframe database, with or without the use of a staging area, from any type of client.
- ADABAS Unix: Capture data changes from an ADABAS Unix database, with or without the use of a staging area, from any type of client.


- ADABAS OpenVMS: Capture data changes from an ADABAS OpenVMS database, with or without the use of a staging area, from any type of client.
- DB2 (Mainframe): Capture data changes from a DB2 Mainframe database, with or without the use of a staging area, from any type of client.
- DB400: Capture data changes from a DB400 database, with or without the use of a staging area, from any type of client.
- DISAM: Capture data changes from a DISAM database, with or without the use of a staging area, from any type of client.
- Enscribe: Capture data changes from an Enscribe database, with or without the use of a staging area, from any type of client.
- IMS-DBCTL: Capture data changes from an IMS-DBCTL database, with or without the use of a staging area, from any type of client.
- IMS-DBDC: Capture data changes from an IMS-DBDC database, with or without the use of a staging area, from any type of client.
- IMS-DLI: Capture data changes from an IMS-DLI database, with or without the use of a staging area, from any type of client.
- MS SQL Server: Capture data changes from an MS SQL Server database, with the use of a staging area, from any type of client.
- Oracle: Capture data changes from an Oracle database, with or without the use of a staging area, from any type of client.
- SQL/MP: Capture data changes from an SQL/MP database, with or without the use of a staging area, from any type of client.
- VSAM-Batch: Capture data changes from a VSAM-Batch database, with or without the use of a staging area, from any type of client.
- VSAM-CICS: Capture data changes from a VSAM-CICS database, with or without the use of a staging area, from any type of client.
- VSAM for OpenVMS: Capture data changes from a VSAM database running on the OpenVMS platform, with or without the use of a staging area, from any type of client.

5. Click Finish. The Project Guide is displayed.

Opening an Existing Project


To open an existing project
1. In the Getting Started guide, click the Open existing project link. The Open existing project screen is displayed.
2. From the Project drop-down list, select the required project.
3. Click Finish. The Project Guide with the information for that project is displayed.


Opening Recent Projects


To open a recent project
In the Getting Started guide, click the link to the recent project of your choice. The project opens and the Project Guide with the information for that project is displayed.

Project Guide
The Project Guide has three parts:
- Design Wizard
- Implementation Guide
- Deployment Guide

Figure 21-3 Project Guide

Design Wizard
The Design Wizard lets you review and modify all the basic settings of your project. With CDC solutions, you have the flexibility of customizing your project map according to different configurations, based on parameters that are entered in the wizard's screens.


To configure basic Solution settings
1. Click the Design link. The Design Wizard opens. Use this wizard to enter the basic settings for your project.

Note: The wizard screens are divided into sections. Some sections provide information only, and other sections let you enter information about the project. If you do not see any information or fields for entering information, click the triangle next to the section name to expand the section.

Figure 21-4 Design Wizard (Design Options)

2. In the Data Source Name field, enter a name for your data source.
3. In the Client Type combo box, select a client type. Your options are:
- BizTalk
- Generic SQL: SQL clients
- ETL: All ETL tools that work using ODBC
- Generic XML: Client applications that consume changes through XML
- SQL Over Change Queue: SQL clients using the ADD-Queue mechanism
- Web Logic

Note: This selection affects the number of machines actually employed, and whether or not the stream service is used as a result.


4. To use the staging area, select Use staging area. For most CDC agents, this is the default and cannot be changed.
5. Click Next. The Design Wizard's second screen is displayed. In this step you configure the machines used in your solution. Enter the information for the following machines:
- Server Machine: The machine where AIS is installed.
- Client Machine: The local machine.
- Staging Area Machine: The machine where the staging area is located. This is only available if you selected the staging area check box in the first screen.

Figure 21-5 Design Wizard (Configure Solution Machines)

6. Enter or select a name from the drop-down list for each of the above machines.

Note: You can enter the same name for any of the machines. This allows you to use more or fewer machines for your solution.

7. Select the platform that is running on each machine from the drop-down list. The platforms available depend on the original data source type.
8. Click Finish. The wizard closes.

Note: Studio adds a check mark (✓) next to the Design link to indicate that you completed this part of the configuration. However, you can click this link at any time to make changes.


Implementation Guide
You work through the Implementation Guide after you complete the Design Wizard. Note that the Implementation link has an asterisk (*) next to it to indicate that you need to enter configuration information in this section.

To begin the Implementation Guide
Click Implement. The Implementation Guide lets you customize the parameters of your CDC solution machines, as they were defined in the Design Wizard. The tasks in the Implementation Guide are grouped under the configuration categories below, which can be expanded or collapsed with a mouse click. The order and grouping in which they are presented may change according to the Design Wizard settings and/or data source type. For example, when selecting an Oracle CDC project, after setting the configurations in the Design Wizard, the Implementation Guide is displayed as follows:

Figure 21-6 Implementation Guide

The actual tasks performed in the Implementation Guide vary according to the data source and number of machines employed. You can set up the implementation in any order. Click any available link with an asterisk (*). Below is a complete list of tasks:
- Machine
- Data Source
- Metadata
- CDC Service
- Access Service Manager
- Stream Service

Machine
The Machine link lets you set the IP address/host name and the port of the CDC or server machine.

To configure the server IP address/host name and port
1. Click the Machine link. The machine definition screen is displayed:

Figure 21-7 Machine Definition

2. In the IP address/host name field, do one of the following:
- Enter the server machine's numeric IP address.
- Click the Browse button and select the host machine from the ones presented, then click Finish.


Figure 21-8 Select Machine

Note: The machine you enter must be compatible with the platform designated in the Design Wizard (Configure Solution Machines) screen.

3. Enter the port number. The default port number is 2551.
4. If you want to connect with user authentication, enter a user name and password, with confirmation, in the Authentication Information area.
5. Select the Connect via NAT with a fixed IP address check box if you are using Network Address Translation and want to always use a fixed IP address for this machine. For more information, see Firewall Support.
6. Click OK.

Data Source
The Data Source link lets you define your data source. The information you enter depends on the type of CDC agent you are setting up. It can be a file name, a path with a file name, or a full connect string, depending on the data source type.

To enter the data source
Enter the data source information requested, and click Next. For details on the format to be entered, see the chapter for the CDC agent you are using. For a list of agents, see the CDC Agents Reference.

Metadata
The Metadata link lets you define the metadata used for data sources that are file systems or non-relational databases. You must first select a metadata source. After you select the source, you must carry out one or more of the operations described below.

Note: The Select Metadata Source link has an asterisk (*) next to it to indicate that you must carry out this operation first.


To select the metadata source
1. Click the Metadata link. The Create metadata definitions view is displayed.
2. Click the Select Metadata Source link.
3. Select one of the following:
- Create new metadata definitions
- Copy from existing metadata
- Import metadata from external source
4. Click Finish. The screen closes. Click an available link (with an asterisk (*) next to it):
- To create new metadata definitions
- To copy definitions from existing metadata
- To import metadata from an external source

To create new metadata definitions
1. Click the Customize Metadata link. The customize metadata screen is displayed.

Figure 21-9 Customize Metadata

2. Right-click in the fields under Customize Metadata, and select Add.
3. Enter the table name in the field presented, and click OK.

Note: You may have validation errors in the tables created, which you can correct by the end of the procedure.


4. Right-click the table created and select Field Manipulation. The Field Manipulation screen is displayed.

Figure 21-10 Field Manipulation

5. Right-click in the upper pane and select Field|Add|Field.
6. Enter the name of the field in the screen provided, and click OK. Default values are entered for the table.
7. To manipulate table information or the fields in the table, right-click the table and choose the option you want. The following options are available:
- Add table: Add a table.
- Field manipulation: Access the field manipulation window to customize the field definitions.
- Rename: Rename a table. This option is used especially when more than one table with the same name is generated from the COBOL.
- Set data location: Set the physical location of the data file for the table.
- Set table attributes: Set table attributes.
- XSL manipulation: Specify an XSL transformation or JDOM document that is used to transform the table definition.

The Validation tab in the bottom half of the window displays information about what you must do to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location).
8. Correct any remaining validation errors.
9. Click Finish to generate the metadata.


To copy definitions from existing metadata
1. Select Copy from existing metadata, and click OK.
2. Click the Copy Existing Metadata Source link. The Copy Existing Metadata Source screen is displayed, showing your local machine and the metadata compatible with the selected data source.

Figure 21-11 Copy Existing Metadata Source

3. From the sources in the left pane, expand the list until you see the tables from which you want to copy metadata.
4. Using the arrow keys, bring the required tables into the right pane.
5. Once you have selected all the desired tables, click Finish.

To import metadata from an external source

Note: When you import metadata from an external source, the data source type has an impact on which metadata files you can import. IMS imports COBOL, DBD, and PSB files. All other ISAM CDC projects import COBOL files for metadata.

1. Select Import metadata from external source, and click OK.
2. Click the Import Metadata Source link. The New Import screen is displayed.


Figure 21-12 New Import

3. In the Import name field, enter a name for the import operation.
4. Select an import type from the options available in the drop-down list. The items in the drop-down list differ according to the data source type selected. See the Data Source Reference for a list of available data sources.
5. Click Finish.

To import metadata from COBOL files
1. Click the Add button to select COBOL copybook files.
2. The Add Resource window opens, which lets you select files from the local machine or FTP the files from another machine.

Figure 21-13 Add Resource Screen

3. If the files are on another machine, right-click My FTP Sites and choose Add.
4. Set the FTP data connection by entering the server name where the DBD files reside, and enter a valid username and password to access the machine, unless using anonymous access.

After you access the machine, you can browse and transfer the files required to generate the metadata. You access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
5. Select the files to import and click Finish to start the transfer.
6. Repeat the procedure for the COBOL copybooks.
Note: You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In this case, repeat the import.

Or, for IMS data sources, click Add in the import wizard to add a PSB file.
7. The selected files are displayed in the wizard.

Figure 21-14 Import Tables

8. Click Next.
9. Apply filters to the copybooks, if necessary.


Figure 21-15 Apply Filters to Copybooks

The following COBOL filters are available:
- COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type, or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor.
- Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
- Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
- Prefix nested columns: Prefix all nested columns with the previous level heading.
- Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
- Case sensitive: Indicates whether to consider case sensitivity.
- Find: Searches for the specified value.
- Replace with: Replaces the value specified for Find with the value specified here.

The following DBD filters are available (relevant for IMS):
- Ignore after column 72: Ignore columns 73 to 80 in the DBD files.
- Ignore first 6 columns: Ignore the first six columns in the DBD files.
- Ignore labels: Ignore labels in the DBD files.

The following PSB filters are available (relevant for IMS):
- Ignore after column 72: Ignore columns 73 to 80 in the PSB files.
- Ignore first 6 columns: Ignore the first six columns in the PSB files.

10. Click Next. The Select Tables screen opens.

Figure 21-16 Select Tables Screen

The import manager identifies the names of the segments in the DBD files that will be imported as tables.
11. Select the tables that you want to access (and receive the Attunity metadata for) and then click Next. The Import Manipulation window is displayed. In this window, do the following:
- Resolve table names, where tables with the same name are generated from different COBOL copybooks specified during the import.
- Specify the physical location for the data.
- Specify table attributes.
- Manipulate the fields generated from the COBOL, as follows:
  * Merging sequential fields into one, for simple fields.
  * Resolving variants by either marking a selector field or specifying that only one case of the variant is relevant.
  * Adding, deleting, hiding, or renaming fields.
  * Changing a data type.
  * Setting a field size and scale.
  * Changing the order of the fields.
  * Setting a field as nullable.
  * Selecting a counter field for fields with dimensions (arrays). You can select the counter for the array from a list of potential fields.
  * Setting columnwise normalization for fields with dimensions (arrays). You can create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
  * Creating arrays and setting the array dimension.

Figure 21-17 Import Manipulation Screen

Note: The Validation tab in the bottom half of the window displays potential validation problems in the current metadata definition.

12. To manipulate table information or the fields in the table, right-click the table and select one of the following:
- Add table: Add a table.
- Field manipulation: Access the Fields manipulation window to customize the field definitions.
- Rename: Rename a table. This option is used especially when more than one table with the same name is generated from the COBOL.
- Set data location: Set the physical location of the data file for the table.
- Set table attributes: Set the table attributes.
- XSL manipulation: Specify an XSL transformation or JDOM document that is used to transform the table definition.

13. Click Next to generate the metadata. The final window lets you import the metadata to the machine where the data source is located, or leave the generated metadata on the Attunity Studio machine, to be imported later.
14. Specify that you want to transfer the metadata to the machine where the data source is located, and click Finish. The metadata is imported to the machine where the data source is located.


CDC Service
The CDC Service wizard depends on the CDC agent you are working with. This section describes the two standard screens in the wizard. For any additional information required, see the chapter for the specific agent you are working with. For a list of agents, see the CDC Agents Reference.

To configure the CDC Service
1. Click the CDC Service link. The Define change capture starting point dialog box opens.
2. In this dialog box, select one of the following to determine the change capture starting point:
- All changes recorded in the journal.
- On first access to the CDC agent (immediately when a staging area is used, otherwise when a client first requests changes).
- Changes recorded in the journal after a specific time.

If you choose the third option, you need to select the specific time by clicking the Set time button and selecting the time from the calendar that appears.

Figure 21-18 Define Time Stamp and Set Specific Time

3. If you want, select the option to include a capture of before-image records.
4. Click Next to open the CDC agent's configuration screen. In this screen you configure the CDC Service properties for the agent you are using. See the CDC Agents Reference.


5. Click Next to open the CDC Service Logging screen.

Figure 21-19 CDC Service Logging

6. From the combo box, select a logging level:
- None
- internalCalls
- Info
- Debug
- API
7. Click Finish.

Access Service Manager


This wizard helps you set up a daemon workspace to optimally handle client needs.

To configure the Access Service Manager
1. Click the Access Service Manager link. The Setup Workspace wizard opens.


Figure 21-20 Select Scenario

2. Select the scenario that best meets your site requirements:
- Application server using connection pooling
- Stand-alone applications that connect and disconnect frequently
- Applications that require long connections, such as reporting programs and bulk extractors
3. Click Next. The next screen that appears depends on the type of connection you just selected.
4. Enter the relevant connection information and click Next.


Figure 21-21 Connection Waiting Time

5. In the first field, enter the amount of time you want to wait for a new connection to be established (in seconds).
6. In the second field, enter the amount of time to wait for a response, assuming a fast connection (in seconds).
7. Click Next.

Figure 21-22 Site Security


8. Enter the operating system account (user name) used to start server instances.
9. Select Allow anonymous users to connect via this workspace, if you want to allow this option.
10. Enter the permissions for the workspace. You can allow all users to access the workspace, or select the users/groups that you want to have exclusive access.
11. Select Access server instances via specific ports, if you want to allow this option. If this option is cleared, the defaults are used. If you select this option, indicate the from port and to port, and make sure that you reserve these ports in the TCP/IP system settings.
12. Click Next. The summary screen opens.

Figure 21-23 Workspace Setup Summary

13. Click the Save icon and then click Finish.

Stream Service
The Stream Service configures the following:
- Staging area
- Filtering of changed columns
- Auditing information

Note: Null filtering is currently unsupported. Filtering empty values is supported. Space values are truncated and are handled as empty values.


To configure the Stream Service
1. Click the Stream Service link. The Stream Service wizard opens.

Figure 21-24 Staging Area

Note: This screen only appears if you selected the inclusion of a staging area in your solution.

2. Select Eliminate uncommitted changes to eliminate uncommitted changes from your CDC project.
3. Select the Use secured connection check box to configure the staging area to have a secured connection to the server. This is available only if you logged into the server using user name and password authentication.
4. Set the event expiration time, in hours.
5. Under File Locations, click the Browse buttons to select the location of the changed files and temporary staging files, if necessary.
6. Click Next. You are prompted to select the tables to participate in the filtering process.

Figure 21-25 Select Tables

7. Click the required tables in the left pane and move them to the right pane using the arrow keys.
8. Click Next. You are prompted to select the relevant columns, within the tables selected above, from which to receive changes.


Figure 21-26 Table and Column Selection

9. Select the columns from which to receive changes. Any data changes in the columns selected will be recorded.

Note: Table headers will appear grouped together in a separate table at the beginning of the list. You can also request the receipt of changes in the header columns.

10. Click Next. You are prompted to indicate the types of changes you want to receive in the tables, and which columns to display.


Figure 21-27 Filter Selection

11. Select the actions from which you want to receive change information. Your options are:
- Update
- Insert
- Delete

Note: These items are all selected by default.

12. Under the Changed Columns Filter column, select the columns for which you want to receive notification of changes.

Notes:
- If you do not select any columns, you will receive notification of all changes.
- If you select only one column, you will receive change information only if the selected field undergoes a change.
- If you select more than one column, but not all, then you will receive change information only if any or all of the selected fields undergo a change.

Creating a CDC with the Solution Perspective

21-27

13. To filter content from within a given column, under the Content Filter column, double-click the relevant column, and then click the ellipsis button that appears. The Content Filter screen is displayed.

Figure 21-28 Content Filter

14. Select a filter type (the sketch following this step illustrates the NULL handling):
- Select In for events to be returned where the relevant column value equals the values you specify (if a column is NULL, it is not captured).
- Select Not In for events to be returned where the column value is not in the values you specify (if the column is NULL, it is captured).
- Select Between for events to be returned where the column value is between the two values you specify (if a column is NULL, it is not captured).
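Because the NULL handling differs between filter types, the following is a minimal sketch of the semantics as a plain Python predicate. This is for illustration only; it is not an AIS API:

```python
# Illustrative semantics of the In / Not In / Between content filters.
# None stands for a NULL column value; behavior mirrors the rules above.

def passes_filter(value, filter_type, bounds):
    if filter_type == "In":
        return value is not None and value in bounds
    if filter_type == "Not In":
        return value is None or value not in bounds  # NULL is captured
    if filter_type == "Between":
        low, high = bounds
        return value is not None and low <= value <= high
    raise ValueError(filter_type)

print(passes_filter(None, "Not In", ["A", "B"]))  # True: NULL is captured
print(passes_filter(None, "In", ["A", "B"]))      # False: NULL not captured
```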

15. Click Add in the lower-left corner of the Content Filter screen.

Note: If you select more than one condition, you will receive the change information as long as one of the conditions is true.

16. Depending on your selection, do one of the following:
- If you selected In or Not In, continue with step 17.
- If you selected Between, continue with step 20.

17. Click Add in the Add items to the list screen.

Figure 21-29 Add Items (In or Not In)

You are prompted to enter a value.

Figure 21-30 Enter a Value

18. Enter a value for events to be returned where the relevant column value appears (or does not appear) in that value. To filter empty values for the Not In filter type, leave this field blank.
19. Repeat steps 17 and 18 as many times as necessary, and then proceed to step 22.
20. Click Add in the Add items to list screen. The Add between values screen is displayed.

Figure 21-31 Add Items (Between)

21. Enter values for events to be returned where the column value is between the two values you specify.
22. In the Content Filter screen, click Next. You are prompted to select the level of auditing when receiving changes.

Figure 21-32 Audit Level

23. Select the required level of auditing when receiving changes. Your options are:
- None: No auditing.
- Summary: An audit that includes the total number of records delivered, as well as system and error messages.
- Headers: An audit that includes the total number of records delivered, system and error messages, and the record headers for each captured record.
- Detailed: An audit that includes the total number of records delivered, system and error messages, the record headers for each captured record, and the content of the records.
24. Click Finish.

When you complete all the Implementation operations, a check mark (✓) is displayed next to every link. Click Done to return to the Project Guide so you can begin the Deployment activities.

Deployment Guide
After you complete the design and implementation guides, you are ready to deploy your project. The Deployment Guide has two sections:
- Deployment Procedure: This section is used to deploy the project.
- Control: This section is used to activate or deactivate workspaces after the project is deployed and you are ready to consume changes. In this section, you can deactivate the workspace anytime you want to suspend consumption of changes from the staging area.

Figure 21-33 Ready to Deploy

To deploy the project
1. Click the Deploy link. The Deployment Procedure and Control open.

Figure 21-34 Deployment Procedure and Control

2. Click the Deploy link. Studio processes the naming information. This may take a few minutes. If there are naming collisions, you are prompted to allow Studio to resolve them.

Figure 21-35 Resolve Naming Collision

3. Click Yes to resolve any naming collisions. The Deployment Guide is displayed.

Figure 21-36 Deployment Guide

4. If you are ready to deploy, click Finish. Otherwise, click Cancel; you can then return to the Design Wizard and the Implementation Guide to modify the solution. If this project was deployed previously, you will be notified that re-deployment will override the previous instance.

Figure 21-37 Previously Deployed Project

Notes:
- When you redeploy a project where the metadata has changed, the Staging Area (SA) tables should be deleted so that no incorrect information is reported.
- When you redeploy a solution, a new binding is created for the solution. The new binding is created with the default parameters only. Any temporary features that were added are lost.

5. Where applicable, click OK to redeploy.
6. Click the Deployment Summary link. The Deployment Summary is shown. It includes the ODBC Connection String Parameters and the JDBC Connection String, as well as specific logger scripts to enable CDC capturing.

Figure 21-38 Deployment Summary

7. Cut and paste any information required from the Deployment Summary screen to your environment as necessary.
8. If there is nothing wrong with your deployment results, click Finish. If you found problems, click Cancel to return to the Design Wizard and the Implementation Guide to modify the solution.

Note: If you are redeploying a solution, you must follow these directions to make sure that the context and agent_context fields of the SERVICE_CONTEXT table are saved:
1. In the staging area data source, run: select context, agent_context from SERVICE_CONTEXT; and save the returned values.
2. Delete the SERVICE_CONTEXT table physical files.
3. Redeploy the solution.
4. Activate the router to create the SERVICE_CONTEXT table.
5. Disable the router.
6. In the staging area data source, run: insert into SERVICE_CONTEXT (context, agent_context) values ('XXX', 'YYY'), where 'XXX' and 'YYY' are the saved values. This inserts the saved values into the SERVICE_CONTEXT table.
7. Activate the solution.
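A minimal sketch of automating the save-and-restore parts of this sequence is shown below. pyodbc and the DSN name are assumptions; only the two SQL statements come from the note above, and steps 2-5 remain Studio operations:

```python
# Sketch: preserve SERVICE_CONTEXT stream positions across a redeploy.
# "AIS_STAGING" is a hypothetical DSN for the staging area data source.
import pyodbc

conn = pyodbc.connect("DSN=AIS_STAGING")
cursor = conn.cursor()

# Step 1: save the current positions before redeploying.
cursor.execute("SELECT context, agent_context FROM SERVICE_CONTEXT")
saved = cursor.fetchone()

# ... steps 2-5 (delete files, redeploy, activate/disable the router)
# are performed in Attunity Studio, not through SQL ...

# Step 6: restore the saved positions into the recreated table.
cursor.execute(
    "INSERT INTO SERVICE_CONTEXT (context, agent_context) VALUES (?, ?)",
    saved.context, saved.agent_context,
)
conn.commit()
conn.close()
```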

To activate and deactivate workspaces
To activate workspaces, click the Activate Workspaces link. To deactivate workspaces, click the Deactivate Workspaces link.

Over the course of the activation or deactivation, you may receive messages indicating that the daemon settings on one or more of the machines involved in your solution have changed. Click Yes to proceed.

Figure 21-39 Daemon Configuration Change

Troubleshooting
When you deploy your project, any problems and/or messages appear in their respective views.

Figure 21-40 Deployment Message View

To resolve deployment problems
1. Right-click the problem or message and select Go To. The point in the previous guides where the problem occurred is displayed.
2. Check the settings and make any changes required.
3. Follow the procedure To deploy the project.


Part IV
Attunity Federate
This part contains the following topics:
- What is Attunity Federate
- Using a Virtual Database
- Segmented Data Sources

22
What is Attunity Federate
This chapter has the following sections:

- Overview
- The Data Engine
- Base Services

Overview
Attunity Federate provides Enterprise Information Integration (EII) across heterogeneous data sources. Using Attunity Federate, you can create single views of business information, making it easier for business users to access information in multiple data silos with virtual data models. You can use Attunity Federate to complement data warehouses with real-time access to operational data stores, and guarantee data integrity with distributed transaction management. Attunity Federate joins heterogeneous data sources to make them available as a virtual data layer. Attunity Federate uses distributed query optimization and processing engines that reside natively on enterprise data servers to provide superior performance, security, and transaction management. Attunity Federate leverages AIS adapters to access any data source in the enterprise. At the heart of the Attunity Federate solution is a data engine, supplied as part of the AIS. This shared, multi-platform engine manages access and updates across the IT infrastructure. It can also integrate information from disparate systems with a single request.

The Data Engine


The data engine accesses, updates, and joins enterprise information from data sources as if they were all relational databases. At the same time, it takes advantage of its query optimizer to determine the fastest way to carry out these tasks, minimizing the load on IT resources, networks, and systems. Because the data engine uses a relational model, it normalizes the data, converting hierarchical structures into tables without redundant data. By combining the relational model with the SQL language, the data engine allows applications to issue the same complex query to multiple data sources without tuning it to each target source. The relational approach also simplifies access via commercial tools and applications that interoperate with relational sources. Clients can use industry-standard JDBC, ODBC, ADO/OLE DB and .NET interfaces to submit SQL requests to the data engine.


When the data engine receives and parses an SQL request, it first determines which data source is involved, where the data resides, and how the source handles data. The data engine determines how to carry out the process based on metadata that it retrieves from a local cache, from the Attunity repository, or dynamically from the backend data sources. Then, the data engine generates a query execution plan in the form of a tree.

Whenever possible, the data engine passes the entire request to the underlying data source. In this case, the engine translates between standard ANSI SQL 92 and the underlying database's SQL dialect. The data engine can also accept pass-through queries to nonstandard SQL functions supported by the target source. If a data source offers limited SQL capabilities, the data engine implements missing functions as needed. If the data source offers no SQL capabilities at all, the data engine breaks the request into simple retrieval operations that an indexed or sequential table can read. The data engine can also work with data via generic drivers (such as ODBC and OLE DB). In this case, it uses an external syntax definition to determine which queries or parts of queries the database can handle and which it will execute itself.

Query Optimizer
The data engine includes the query optimizer, which minimizes execution time and resource consumption. The query optimizer enhances the data engine's initial query execution plan based on the query structure, network structures, the target data sources' capabilities and locations, and the statistical information available for each table. To maximize the efficiency of query execution, the query optimizer uses various caching and access techniques, including read-ahead, parallelism, and lookup-, hash-, and semi-joins. It flattens views, breaks out and propagates simple predicates down the tree, reorders joins, directs join strategies, selects indexes, and performs other related tasks. If the target data source is distributed across multiple machines, the data engine and query optimizer together generate a distributed execution plan that minimizes network traffic and round-trips.

Performance Tuning Tools


Database administrators can review and control the optimization strategies that the optimizer uses. Using a query analyzer, IT staff can monitor accumulated statistics and heuristic information to evaluate the success of the optimization strategies. These tools enable users to evaluate and understand the way specific queries work by specifying hints, flags, optimization goals (first-row or all-rows optimization), and other query properties, such as requests for scrollable or updateable cursors.

Front End APIs


Attunity Federate uses the following APIs, supplied as part of AIS, to access data:
- JDBC Client Interface: Attunity provides a pure-Java Type-3 driver that supports J2EE JDBC features (such as data sources, distributed transactions, hierarchical record sets, and connection pooling). The Attunity Connect JDBC interface is available on all platforms that support Java.
- ODBC Client Interface: The Attunity Connect ODBC interface enables organizations to use the API of choice for most popular client-server business intelligence tools. The ODBC interface implements the ODBC 2.5 and ISO CLI standards, so that COBOL and other 3GL programs on any platform can call it. The Attunity Connect ODBC interface is available on all platforms running Attunity Connect.
- OLE DB (ADO) Client Interface: Attunity Connect provides an OLE DB/ADO interface that supports advanced features, including chapters, scrollability, and multi-threading. The OLE DB/ADO interface is compatible with all Microsoft tools and applications. This provider also functions as a database gateway for Microsoft SQL Server, allowing SQL Server users to access all data sources available via Attunity Connect. The Attunity Connect OLE DB/ADO interface is available on the Microsoft Windows platforms.
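As a hedged illustration of connecting through one of these interfaces, the sketch below opens an ODBC connection from Python. pyodbc, the DSN name, and the table name are assumptions, not part of the AIS client software:

```python
# Minimal client sketch: query a federated view through the ODBC
# interface. "AIS_FEDERATE" is a hypothetical DSN configured for the
# Attunity Connect ODBC driver; the table name is an example.
import pyodbc

conn = pyodbc.connect("DSN=AIS_FEDERATE")
cursor = conn.cursor()
cursor.execute("SELECT * FROM customers")  # example table
for row in cursor.fetchmany(10):
    print(row)
conn.close()
```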

Data Source Drivers


All native Attunity Connect data sources are supported by Attunity Federate. Attunity Federate also supports the following data source drivers to create federated databases:
- Segmented data sources: A number of data sources on different machines, all of which have the same metadata and functionality.
- Virtual data sources: A view containing selected tables from one or more data sources.

The segmented data source is used to create a federated database where all the participating data sources have identical metadata. The virtual data source is used to create a federated database when the participating data sources have different metadata. Access to the following Attunity Connect data sources is available:

Table 22-1 AIS Data Sources

Relational:
- DB2 Data Source
- Informix Data Source
- Ingres II (Open Ingres) Data Source
- Oracle Data Source
- Oracle RDB Data Source (OpenVMS Only)
- SQL/MP Data Source (HP NonStop Only)
- SQL Server Data Source (Windows Only)
- Sybase Data Source

Non-Relational:
- Adabas Driver
- CIASAM Driver
- DBMS Driver (OpenVMS Only)
- DISAM Driver
- Enscribe Driver (HP NonStop Only)
- Generic Flat Files Driver
- IMS/DB Drivers (z/OS Only)
- ODBC Driver
- OLEDB-FS (Flat File System) Driver
- OLEDB-SQL (Relational) Driver
- RMS Driver (OpenVMS Only)
- Text-Delimited File Driver
- VSAM Under CICS and VSAM Drivers (z/OS Only)

Viewing, creating, modifying, and managing metadata for data sources, such as flat files, that require Attunity metadata, is performed via Attunity Connect, from within Attunity Studio.


Base Services
Attunity includes a suite of administration and development tools for system and database administration. IT staff can run most of these utilities remotely using the GUI interface or locally on the native machine using the command line interface.

Attunity Integration Suite


The Attunity Integration Suite (AIS) is the server component that includes all the Attunity products (Connect, Stream, and Federate).

AIS Installation Wizards


Attunity offers a native installation wizard on each of the supported platforms, simplifying deployment. It takes no special skills to install AIS, so IT staff can set up the Attunity Server infrastructure by applying only their platform- and application-specific expertise.

Attunity Internal Storage (The Repository)


Attunity supports an XML-based schema. The schema and the Attunity configuration are stored on the server file system and represent the repository. There is a single main repository for each installation of Attunity Server. The repository maintains server-wide definitions (such as the daemon configuration) and application adapter definitions. There is also a repository for each data source which uses Attunity Connect metadata. These repositories are optimized for fast run-time access and are not restricted by native operating system file naming conventions.

The System Repository


The following Attunity general information is stored in the system repository (SYS):
- Binding information, including the names of configured backend adapters and drivers, and environment settings.
- Daemon definitions, to control client-server communication.
- User profiles, enabling single sign-on to multiple backend applications and data sources.
- Information used directly by the Attunity Connect query processor (Attunity Connect procedures and views).
- An adapter definition for each adapter defined, which includes adapter interactions and the input and output structures used by these interactions.

Data Source Repository


There is a repository for each data source associated with Attunity Connect. The information in the repository includes:
- Metadata for the data source.
- Attunity metadata for non-relational data sources and for Attunity Connect procedures.
- Attunity metadata that is additional to the data source's native metadata (the extended metadata feature), and a snapshot of native metadata (the local copy feature).
- Attunity Connect synonym definitions for the data.


Attunity Studio: Configuration and Management GUI


Attunity Studio is the configuration tool for Attunity products. Configuration using Attunity Studio is performed on a Windows platform. The configuration information is stored in the repository on the backend system. Administrators can use Attunity Studio during design time to configure:
- Binding configurations.
- Client-server communication via the daemon listener.
- User profiles, enabling single sign-on to multiple backend applications and data sources.
- Metadata for both data sources and adapters. For all relational data sources and some non-relational data sources, Attunity Connect uses the native metadata. Otherwise, the metadata is specified to Attunity Connect.

Administrators can use Attunity Studio during runtime to manage Attunity daemons and logging.

Client/Server Communication and the Attunity Daemon


Federate takes advantage of daemons, client software, and server software to support seamless operations across both local and remote distributed environments.

Daemons
A daemon runs on every machine running AIS. The daemons handle user authentication and authorization, connection allocation, and server process management. A fail-safe mechanism allows the specification of alternate daemons, which function as a standby for high availability.

Client Communication Software


Within Attunity's symmetrical operation, clients serve as agents that request remotely located data. Client software, which includes one or more application-specific interfaces, resides on every system that needs to interact with data via Attunity. To the calling application, Attunity clients look like local data providers. They receive requests for data and metadata and either execute those requests locally or dispatch them to an appropriate server. The communication protocol among Attunity Connect components minimizes processing and traffic by negotiating data formats (for example, when similar systems talk, there is no need to switch data formats) and by avoiding repeated transmission of the same data over the network. The communication subsystem handles machine-dependent transformations such as big/little endian translations, floating point format translations, single- and multi-byte character encoding translations, etc. Attunity dynamically determines these translations upon initial connection between a client and a server, based on the nature of the parties involved in the connection.


Clients maintain caches of data and metadata, which enable them to satisfy many requests locally without needing to go to the server. They also batch some commands in order to avoid unnecessary network traffic. On the first remote request by a user session, the client starts a corresponding session for that user on one or more servers. Each server session remains open until the client session terminates. In the case of systems using connection pooling (such as MTS or IIS), the client and server sessions may stay open indefinitely. In the event that server operations terminate (for example, someone restarts the server machine or communication is lost), the client automatically reestablishes the connection upon the next remote operation.

Server Communication Software


Attunity daemons support multiple configurations, called workspaces. Each workspace defines accessible data sources, applications, environment settings, security requirements, and server allocation rules. The daemon authenticates clients, authorizes requests for a server process within a certain server workspace, and provides clients with the required servers. This capability gives organizations more control over the use of system resources and access rights for specific sources and applications. It also makes it easier to tune the system at the deployment site. Isolating different server configurations increases reliability and flexibility. Workspaces enable solutions to serve different clients, including classic two-tier client-server applications, three-tier applications with connection pooling, and ad-hoc usage. A workspace definition includes:

Processing mode, specifying multi-threaded/single-threaded operations, the number of processes, and server pools.
Security settings for impersonation, authorized users, administrators, and encryption.
Accessible adapters.
Various operational parameters.

Attunity also supports multi-version interoperability. IT teams can simultaneously install multiple versions of Attunity Server on all supported operating systems. As a result, organizations can begin to deploy new software versions in a staged update process, without interrupting the operations of previous versions. At the provider end of the system, Attunity servers act as agents that access, read, manipulate, and write to data sources and applications. Attunity servers accept commands from clients, call the corresponding local functions, and package and return the results to the clients. (A target data source can reside on another machine. In this case, the source is represented by an agent on the server using a third-party communications component such as Oracle Net Services or Sybase CT-Lib, which is transparent to Attunity Connect.) Server software, which includes the Attunity engines, drivers, and adapters, resides on every Attunity machine. When a client requests a connection, the daemon allocates a server process to handle this connection and refers the client to the allocated process. This may be a new process (dedicated to this client) or an already-started process. Further communication between the client session and the server process is direct and does not involve the daemon. The daemon is notified when the connection ends, and the process is either killed or remains for use by another client. This kind of server model is very flexible. It accommodates different operating systems and data source requirements. Attunity supports several server models:

The multi-threaded model is effective when the data sources support multi-threading.
Serialized multi-client server processes are useful for short requests and for data sources that allow more than one simultaneous client per process.
The single-client-per-process model supports data sources that only handle one client per process and maximizes client isolation.

Server processes can be reused. Various server process pooling options allow organizations to tune the solution for different application and load requirements.


23
Using a Virtual Database
This chapter contains the following sections:

Virtual Database Overview
Defining a Virtual Database
Metadata Considerations
Defining Tables
Creating Synonyms
Defining Stored Procedures
Creating Views
Using a Virtual Database

Virtual Database Overview


A virtual database presents a view containing selected tables from one or more data sources, as if from a single data source. You populate a virtual database by defining synonyms for the tables, views, and stored procedures you want the virtual database to make available. The following figure shows the structure of a virtual database.
Figure 23-1 Virtual Database

An AIS virtual database:


Presents a view to the user such that only selected tables from one or more data sources are available, as if from a single data source.
Limits the data available to a user to the tables and views that are needed by that user.
Presents table names in a meaningful way, by replacing the name used by a data source with a name that has meaning in the application.
Enables an application to view the tables in several data sources in a uniform manner, by presenting a consistent view of each table in the virtual database.

AIS provides a mechanism that enables the user to create and access a single virtual database. A virtual database consists of selected tables and views from other Attunity Connect data sources. To the calling application, a virtual database looks like a single database with a predefined set of tables and views. The calling application cannot access the underlying sources, which prevents access to other corporate data. A virtual database can also be used to provide more meaningful names for tables and views. Because the virtual database uses a repository to maintain its definitions, native system constraints on file and table names do not apply. This feature is particularly useful for legacy backend databases whose table and record names are restricted by the naming conventions of the backend platforms (for example, the eight-character limit on HP NonStop computers).

What Can a Virtual Database Include


A virtual database can reference the following from other data sources:

Tables (with Attunity Studio or the CREATE SYNONYM statement)

Note: For the syntax used to create a synonym, see the CREATE SYNONYM Statement.

Views

A virtual database can also include AIS stored procedures (an SQL statement, or part of an SQL statement, that can be included in other SQL statements).
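For example, a synonym might be created with SQL along the following lines. This is only a sketch: the data source and table names (legacy_ds, CUST0001) are invented placeholders, and the authoritative syntax is in the CREATE SYNONYM Statement reference:

CREATE SYNONYM customers FOR legacy_ds:CUST0001

The application can then use the meaningful name customers instead of the platform-restricted native name.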

Defining a Virtual Database


The virtual database is set using Attunity Studio, in the Design perspective Configuration view.
Note: A data source of type Virtual is used when defining a virtual database.

To set up a virtual database
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machine folder and then expand the machine where you want to define the virtual database.
3. Expand the Bindings folder and then expand the binding where you want to define the virtual database.
4. Right-click the Data sources folder and select New data source.
5. In the Name field, enter a name for the data source.
6. In the Type field, select Virtual. If you do not need to enter the data location where the new tables for this virtual database reside, click Finish. To enter the data location, click Next.
7. Enter the Data Location where the new tables for this virtual database reside. This is optional. Click Finish.

Once a virtual database is defined in the Binding, you can define the tables, synonyms, views, and stored procedures in the virtual database.

Metadata Considerations
Metadata must be defined in Attunity Studio for each table, synonym, view, and stored procedure.
To define metadata for a virtual database
In the Design perspective Configuration view, right-click the virtual database and select Show Metadata View. The Metadata view opens with the virtual database displayed, with Tables, Synonyms, Stored Procedures, and Views under it.

Defining Tables
Tables can be defined and used as part of the virtual database. Normally, it is recommended to create tables for storing administrative details about the virtual database.
To define tables included in a virtual database
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the virtual database you are working with.
3. Right-click the Tables folder and select New Table. The New Table wizard is displayed.
4. Enter a name for the table.
5. Click Finish.
6. Create metadata for the table, including the columns and indexes for the table. For more information, see Working with Metadata in Attunity Studio.

Note: Virtual tables are automatically created for arrays. These tables are displayed in lists of tables when building queries, such as in the Attunity Query Tool. For more information on how arrays are handled, see Handling Arrays. Synonyms and views are not flattened.

Creating Synonyms
A synonym is an alias for a table or view. You can use the synonym name in place of any table or view name.


Note:

You cannot create a synonym in a single session after dropping a synonym with the same name.

A synonym can be used to implement a virtual data source by defining synonyms for the tables and views you want the virtual data source to make available.
To create a synonym for a table or view
1. Open Attunity Studio.
2. In the Design perspective Metadata view, expand the virtual database you are working with.
3. Right-click Synonyms and select New Synonym. The New Synonym wizard is displayed.
4. Enter a name for the synonym.
5. Enter a target data source. You can select Browse to open the Select Target dialog box. The Select Target dialog box displays all the data sources defined in the same binding as the virtual database.
6. Expand the data source you are working with and select the table or view you want to assign the synonym to.
7. Click OK to close the dialog box.
8. Click Finish. The new synonym is displayed in the Configuration view.
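Once a synonym is defined, queries address it like any table. A minimal illustration, continuing the hypothetical customers synonym from earlier (the column names here are also invented):

SELECT cust_id, cust_name FROM customers

The query references only the synonym; the underlying data source and its native table name stay hidden from the application.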

Defining Stored Procedures

An AIS stored procedure is an SQL statement that accesses data sources or other stored procedures. After you define a stored procedure, it can be used in an SQL statement or wherever a subquery can be specified, for example in a FROM clause.
To define a stored procedure
1. Open Attunity Studio.
2. In the Design perspective Metadata view, expand the virtual database you are working with.
3. Right-click the Stored Procedures folder and select New Stored Procedure. The New Stored Procedure wizard is displayed.


Figure 23-2 New Stored Procedure

4. Enter a name that identifies the stored procedure.
5. Select the query type.
6. Select the table.
- In the left side of the New Stored Procedure wizard, expand the data source where the table resides.
- Click the Tables tab on the right side of the screen.
- Select the table and click the right arrow to move the table to the Tables tab on the right side.

Note: You cannot include tables from any data source shortcuts listed in the binding. To include a table from a data source defined in the binding that resides on a different server, manually edit the SQL.

7. Select the column.
- On the left side of the screen, expand the data source and the table containing the column.
- Click the Columns tab on the right side of the screen.
- Select the column and click the right arrow to move the column to the Columns tab on the right side of the screen.
8. Join columns, if necessary. The Create Joint Tables pane opens if a column with the same name as a column from a different table is selected.
- Expand the table and select the column you want to join.
- Click the right arrow to move the column to the right side of the screen, or click Next to edit the join statement.
9. Add conditions to the WHERE clause.
- Click the Where tab on the right side of the screen.
- Select the column and click the right arrow to move the column to the Where tab on the right side of the screen.
- Set the operator and value conditions as needed.
10. Group columns, if necessary.
- Click the Group tab on the right side of the screen.
- Select the columns to group and click the right arrow to move the columns to the Group tab on the right side of the screen.
11. Set filtering using a HAVING clause.
- Click the Having tab on the right side of the screen.
- Select the column you want to filter and click the right arrow to move the column to the Having tab on the right side of the screen.
- Set the operator and value conditions as needed.
12. Sort the query results.
- Click the Sort tab on the right side of the screen.
- Select the column whose query result you want to sort and click the right arrow to move the column to the Sort tab on the right side of the screen.
- Select the sorting order as either ascending or descending.
13. Optionally, select Enable manual query editing to fine-tune the query.
14. Click Finish.

The New Stored Procedure wizard is based on the Attunity Query Tool. For more detailed information on how to create the queries you want to save as stored procedures, see Attunity Query Tool.
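The SQL that such a wizard session saves is an ordinary query. A hedged sketch of what a stored procedure definition might contain; all data source, table, and column names here are invented for illustration:

SELECT o.region, SUM(o.amount) AS total_amount
FROM sales_ds:ORDERS o, sales_ds:CUSTOMERS c
WHERE o.cust_id = c.cust_id
GROUP BY o.region
HAVING SUM(o.amount) > 10000
ORDER BY total_amount DESC

Each wizard tab (Tables, Columns, Where, Group, Having, Sort) contributes the corresponding clause of the statement.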

Creating Views
Attunity Connect lets you define views for one or more data sources. Views created are read-only and can be used in a FROM clause of an SQL query, or wherever a subquery can be specified. A view is stored by default in the SYS data source. A view cannot accept parameters.
To create a view
1. In the Design perspective Metadata view, expand the virtual database you are working with.
2. Right-click the Views folder and select New View. The New View wizard is displayed.

Figure 23-3 New View

3. Enter a name that identifies the view.
4. Select the table.
- In the left side of the New View screen, expand the data source where the table resides.
- Click the Tables tab on the right side of the screen.
- Select the table and click the right arrow to move the table to the Tables tab on the right side of the screen.

Note: You cannot include tables from any data source shortcuts listed in the binding. To include a table from a data source defined in the binding that resides on a different server, manually edit the SQL (using the Attunity Connect ds:table syntax).

5. Select the column.
- On the left side of the screen, expand the data source and the table containing the column.
- Click the Columns tab on the right side of the screen.
- Select the column and click the right arrow to move the column to the Columns tab on the right side of the screen.
6. Join columns, if necessary. The Create Joint Tables pane opens if a column with the same name as a column from a different table is selected.
- Expand the table and select the column you want to join.
- Click the right arrow to move the column to the right side of the screen, or click Next to edit the join statement.
7. Group columns, if necessary.
- Click the Group tab on the right side of the screen.
- Select the columns to group and click the right arrow to move the columns to the Group tab on the right side of the screen.
8. Set filtering using a HAVING clause.
- Click the Having tab on the right side of the screen.
- Select the column you want to filter and click the right arrow to move the column to the Having tab on the right side of the screen.
- Set the operator and value conditions as needed.
9. Sort the query results.
- Click the Sort tab on the right side of the screen.
- Select the column whose query result you want to sort and click the right arrow to move the column to the Sort tab on the right side of the screen.
- Select the sorting order as either ascending or descending.
10. Optionally, select Enable manual query editing to fine-tune the query.
11. Click Finish.

The New View wizard is based on the Attunity Query Tool. For more detailed information on how to create the queries you want to save as views, see Attunity Query Tool.

Using a Virtual Database


The user is restricted to access data defined to the virtual database:
- At the daemon level, by setting the Workspace database name field to the virtual database name in the WS Info. tab for the workspace used by the client.
- At the connect string level, by setting the Database parameter in the connect string. For example:
connection.Open "provider=AttunityConnect; binding=production;Database=vdb_name"

If the mode of operation is set using either of these methods, the client application has access only to the data included in the virtual database. That is, when an application connects to the virtual database, all the tables, views, and stored procedures defined in the virtual database from other data sources are available as if they are part of the virtual database. Any data source related task that can be performed on a normal data source can also be performed on the virtual database. For example, the user can create a new table in the virtual database or create a new view using existing tables in the virtual database. The new views are stored in the repository of the virtual database; they are not stored in any actual data source.


Note:

The user cannot create a view that includes data that is not part of the virtual database.

Stored procedures in a virtual database can be either procedures defined in Attunity Connect or procedures that originate in an actual data source. The stored procedures are stored in the repository for the virtual database and are used in the same way, without regard to the source of the stored procedure (Attunity Connect or a Data Source stored procedure).
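As an illustration of working inside the virtual database, a read-only view over the hypothetical customers synonym from earlier might be created with SQL such as the following sketch (the names and the status column are invented; the view is stored in the virtual database's repository, not in any actual data source):

CREATE VIEW active_customers AS SELECT cust_id, cust_name FROM customers WHERE status = 'A'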


24
Segmented Data Sources
This chapter has the following sections:

Overview
Creating Segmented Data Sources
Environmental Properties for Segmented Data Sources
Using a Segmented Data Source

Overview
A segmented data source consists of a number of different data sources, all of which have the same type (such as Oracle or SQL Server) and identical metadata and functionality. Different versions of the same data source are supported as long as the metadata and functionality used in the segmented data source are identical. Only one segmented data source can appear in a query, and it can be accessed only for retrieval, not for update. To update a segmented data source, update the physical data sources used in the segmented data source.

Note: Segmented data sources do not work with tables containing BLOB fields or array fields of any type (including chapters). Stored procedures in the data sources that comprise the segmented data source are not supported in the segmented data source.

Creating Segmented Data Sources


To create a segmented data source, you must carry out the following procedures:

Add a data source to a binding
Create a data source shortcut
Add a segmented data source to a binding

Adding a Data Source to a Binding


Before you define a segmented data source, use Attunity Studio to define the data sources that comprise the segmented data source to Attunity Connect.
To add a data source to a binding
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machine where you want to add the data source.
3. Expand the Bindings folder and then expand the binding where you want to add the data source.
4. Right-click the Data sources folder and select New Data source.
5. Enter the following information in this window:
- Name: Enter a name to identify the data source.
- Type: Select the data source type that you want to use from the list. The available data sources are described in the adapter reference section.
6. Click Next.
7. Enter the connect string required to access the data source. The connect string is data source dependent. For more information, see the specific data source in the Data Source Reference.
8. Click Finish.

For more information on how to create a data source, see Adding Data Sources. You must define these data sources as data source shortcuts on the machine where you want to define the segmented data source in Attunity Studio. For more information, see Creating a Data Source Shortcut.

Note: None of the data sources can be defined directly on the machine where the segmented data source will be defined. That is, all the data sources must be defined on other machines and then defined on the machine with the segmented data source as shortcuts.

Creating a Data Source Shortcut


Each data source has properties specific to that data source. See the specific data source in the Data Source Reference for the properties applicable to that data source.
To create a data source shortcut
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where the data source shortcut should be defined.
3. Expand the Bindings folder and then expand the binding where you want to add the data source.
4. Right-click Data sources and select New data source shortcut. The New Data Source Shortcut wizard is displayed.


Figure 24-1 New Data Source Shortcut Wizard

5. Select the machine where the target data source is defined.
- If the machine is defined in Attunity Studio, select Machine from Configuration view and select the machine from the drop-down list.
- If the machine was defined in the binding as a remote machine (for more information, see Setting up Machines), select Machine defined in current binding remote machines list and select the machine from the drop-down list.
- To add the machine where the data source resides to the list of machines, select New machine and add the new machine to the Configuration view.
6. Click Next to display the machine access information. Change the workspace to the relevant workspace, if necessary. Enter the configuration information. For configuration details, see Setting up Machines.
7. Click Next to display the runtime security information. Add the information to be used to access the machine at runtime. This information is written in the user profile associated with the binding (the user profile with the same name as the binding name). For details about the user profile, see Managing a User Profile in Attunity Studio.
8. Click Next to display the list of available data sources. Select the data source and click Finish.

Note: If the data source name is used as a data source in the binding on the machine where the shortcut is defined, check the Alias in binding box and provide an alias for the data source name.

Adding a Segmented Data Source to a Binding


You must define the data sources that make up the segmented data source as shortcuts on the machine where the segmented data source is defined. A local data source cannot be used in the segmented data source.


To add a segmented data source to a binding
1. Open Attunity Studio.
2. Expand the Machines folder.
3. Expand the machine with the binding that has the data source shortcuts that you defined. See Creating a Data Source Shortcut.
4. Expand the Bindings folder.
5. In the binding where the data source shortcuts are defined, right-click Data sources and select New Data source.
6. Enter a unique name to identify the segmented data source.
7. Select Segmented from the Type field and click Next.
8. Enter a connection string in the following format:
DS1;DS2
where DS1 and DS2 are data sources with identical metadata that were previously defined to Attunity Connect as independent data sources.
9. Click Finish.
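For instance, if the binding contains two shortcuts named ORCL_EAST and ORCL_WEST (hypothetical names), the segmented data source's connection string would be:

ORCL_EAST;ORCL_WEST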

Environmental Properties for Segmented Data Sources


In Attunity Studio, right-click the binding where the segmented data source is defined and select Open. You set the environment settings in the tab that opens on the right of the screen. You can optimize performance when working with segmented data sources by setting the <queryProcessor maxSegmentedDbThreads> parameter in the Attunity Connect binding environment (see Environment Properties). This parameter controls the number of segments that Attunity Connect processes in parallel.

Note: On Windows platforms, set noThreadedReadAhead, under the queryProcessor group, to true.

Query execution when one of the segments fails is controlled by the <queryProcessor IgnoreSegmentBindFailure> parameter in the Attunity Connect binding environment. Setting this parameter to true (the default) causes Attunity Connect to log a message and continue the execution of the entire query when the execution of one of the segments fails. When this parameter is set to false, execution stops when any of the segments fails.
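Expressed in the binding environment XML, these settings might look like the following sketch. The attribute values are illustrative only; take the exact element layout from your actual binding configuration:

<queryProcessor maxSegmentedDbThreads="4"
                noThreadedReadAhead="true"
                IgnoreSegmentBindFailure="true"/>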

Using a Segmented Data Source


You can perform joins between a segmented data source and a standard (non-segmented) data source. The join is performed between the result of the union and the other data sources. Aggregate functions are performed on the result of the union. The following example queries show how the SEGDS segmented data source is accessed:

SELECT * FROM SEGDS:tb
This is a union of DS1:tb and DS2:tb.

SELECT * FROM SEGDS:tb1, SEGDS:tb2
This performs a join between tb1 and tb2 on each segment and then performs a union on the results of all the segments.

SELECT * FROM SEGDS:tb1, SEGDS:tb2, DEMO:tb3
This is the same as a union of:
SELECT * FROM DS1:tb1, DS1:tb2, DEMO:tb3
and
SELECT * FROM DS2:tb1, DS2:tb2, DEMO:tb3
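Because aggregates operate on the unioned result, a count issued against the segmented data source covers all segments together. For example (tb as above):

SELECT COUNT(*) FROM SEGDS:tb

This returns the total number of rows in DS1:tb and DS2:tb combined, not a per-segment count.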


Part V
Attunity Studio
This part contains the following topics:

Working with the Attunity Studio Workbench


25
Working with the Attunity Studio Workbench
This section contains the following topics:

Workbench Overview
Using Workbench Parts
Workbench Icons
Setting Attunity Studio Preferences

Workbench Overview
The workbench is the AIS development environment, where you configure the data sources, adapters, and agents used in your Attunity Connect, Attunity Federate, and Attunity Stream projects. The workbench window contains one or more perspectives; a perspective is a collection of views and editors. Each perspective also contains a menu bar, a toolbar, and a shortcut bar. Views are associated with a perspective and are not shared; editors, however, are shared across perspectives.

Note: Attunity Studio contains some menu items and buttons that are not used to carry out normal tasks in AIS. These include:
- In the Open Perspective dialog box, the Debug and Resource perspectives and all views included in these perspectives.
- All items in the Project and Run menus in the main menu bar.
- In the File menu, the menu item New and all of its submenus, and the New button in the toolbar and all of its submenus.
- In the File menu, the menu items Import and Export and all of the operations included when these are selected.
- In the Help menu, the menu item Software updates.

The following perspectives are available in Attunity Studio:


Design Perspective Runtime Manager Perspective


Using Workbench Parts


This section contains the following topics:

Welcome Screen
Main Screen
Main Menu Bar
Working with Perspectives
Working with Views

Welcome Screen
When you open Attunity Studio, the Welcome screen is displayed.
Figure 25-1 Welcome Screen

The Welcome screen contains links to help you get started working with AIS. You can open the following perspectives from this screen:

Solution Perspective
Design Perspective
Runtime Manager Perspective

You can also directly open the following wizards:


Add Machine. For more information, see Setting up Machines.
Add Daemon. For more information, see Adding a Daemon.


Click the Restore button to view the Welcome screen on the right side of the perspective, while you work with the editors and views.

Main Screen
The Main Screen shows the main parts in Attunity Studio:
Figure 25-2 Main Screen
The figure shows the main menu bar, toolbar, perspective toolbar, editors, and views.


Main Menu Bar


The Workbench main menu bar contains the File, Edit, Navigate, Search, Solution, Window, and Help top-level menus.
Figure 25-3 Main Menu Bar

Working with Perspectives


A perspective defines the initial set and layout of views and editors in the workbench window. In the workbench, each perspective shares the same set of editors, and each has its own functionality. This lets you carry out a specific type of task or work with individual resources. Perspectives control what appears in certain menus and toolbars. The following perspectives are available:

Solution Perspective
Design Perspective
Runtime Manager Perspective

Solution Perspective
The Solution perspective implements the design, implementation, and deployment of Change Data Capture (CDC) solutions. CDC solutions allow IT professionals and others to track changes in various data sources and keep track of recently changed data. This is a major advantage for ETL solutions, especially those that use data warehouses with vast amounts of data. Traditionally, all the data is extracted from the various data sources and transferred to the data warehouse. With a CDC solution, only recently changed data needs to be extracted and transferred. This saves a large amount of valuable time and resources for an organization. You can create projects and solutions by performing the tasks that are presented to you. For more information, see Creating a CDC with the Solution Perspective.

Design Perspective
Use the Design perspective to define your environment and create the necessary connections between the relevant data sources that contain the data you want to access. The Design perspective is made up of the Configuration View and the Metadata View.


Runtime Manager Perspective


The Runtime Manager perspective lets you monitor the status of daemons and servers. You can view, print, and export reports for each daemon and server, and reload and refresh daemons, servers, and configurations.
Figure 25-4 Runtime Perspective

Selecting a Perspective
You can move between the perspectives in Attunity Studio using the perspective toolbar on the upper right part of the screen or from the Window menu.
To open a perspective
Do one of the following:
- From the Window menu, click Open Perspective and select the perspective you want to open.
- Click the Perspective button on the perspective toolbar and select the perspective you want.

Working with Views


Views support editors and provide alternative presentations as well as ways to navigate the information in the workbench. You can change the layout of a perspective by opening and closing views and by docking them in different positions in the workbench screen. Views can be opened and closed according to your needs. See Customizing Views below. The workbench contains the following views:

Error Log View
Metadata View
Configuration View


Customizing Views
Views can be customized to let you set up a workbench perspective in a way that is comfortable for you. This section describes how:
- To move views into different positions
- To resize a view
- To open and close views

To move views into different positions
Click the tab at the top of a view and drag and drop it to the top, bottom, or sides of the workbench where you want to view it. When you drag and drop a view, you will see an outline in the position you dragged it to. The outline provides you with a guide to see where you are moving the view. You can move a view to the same position as another view. If two views are in the same place, click the tab at the top of the view you want to see. For example, by default the Metadata and Configuration views are in the same place at the left of the workspace (in the Design perspective). To change from the Metadata view to the Configuration view, click the Configuration view tab. After you move a view to a new position, you can then resize it.

To resize a view
1. Place your cursor on the edge of the view you want to resize. The cursor turns into a double-sided arrow.
2. Drag the view until it is the size you want.

Note: You cannot resize a view from all sides (top, bottom, left, right). The sides that are active for resizing depend on the view's position in the workbench.

To open and close views
1. From the Window menu, click Show View and then select the view you want to open from the submenu.
2. Select the view from the list, or select Other, and select the view from the Show View window.

Getting Started View


In the Solution perspective, the Getting Started view contains the Getting Started Guide. When you create a new CDC in the Solution perspective, the workbench opens to this guide. This is a wizard that guides you through the creation of a CDC solution and automatically configures the CDC agent. For more information, see Getting Started Guide.


Figure 25-5 Getting Started View

Error Log View


The Error Log view shows errors of which you should be aware. The errors are listed by title.
Figure 25-6 Error Log View


The Error Log view lets you execute various tasks. For more information, see Error Log View.

Configuration View
The Configuration view lets you configure all levels of access to data. Together with the Metadata View, it makes up the Design perspective.
Figure 25-7 Configuration View

Metadata View
The Metadata View lets you define the metadata for objects. Together with the Configuration View, it makes up the Design Perspective.


Figure 25-8 Metadata View

Workbench Icons
Studio icons are divided into the following four categories:

General
Actions
Objects
Manipulation

General
The general icons are listed in the following table (icon images are not reproduced here):
Table 25-1 General Icons
Document, Error, File object, Folder, Help, Import wizard, Information, Large image, Sample, Star, Horizontal, Vertical, Warning

Actions
The action icons are listed in the following table (icon images are not reproduced here):
Table 25-2 Action Icons
Add, Back, Cancel, Change directory, Checkmark, Clear log viewer, Collapse all, Connect, Copy, Cut, Deploy, Disconnect, Delete, Execute, Expand all, Export log file, Export to XML, File transfer, Import log file, Import XML, Load, Move, Next, Paste, Props, Move down, Move up, Open error log file, Rename, Reload error log, Resume, Save as, Search, Select, Set up workspace, Stop, Undo edit

Objects
The object icons are listed in the following table (icon images are not reproduced here):
Table 25-3 Object Icons
Adapter, Add FTP, Binding, Build metadata, Changed data capture, Client information, Column, Completed task, Computer up, Computer down, Configuration, Configure CDC Agent, Configure data source, Configure service point, Configure staging area, Daemon, Daemon down, Daemon offline, Data source, Database view, Design machine, Data source link, Element, Enable backend database, Encryption, Delete, Error, Error in task, Event, Execute, File, Group, Hard disk, Import, Index, Input, Interaction, Join, Login, Machine, Machine down, Machine offline, Metadata, My Computer, My FTP, Native data source, Native metadata, New project, Open project, Output, Procedure, Project, Query, Record, Schema, Search, Segment, Server, Solution Perspective, Stored procedure, Synonym, Table, Table Ext., Local table, Native table, Transparent table, Task, User, Virtual data source, Workspace, XML

Manipulation
The manipulation icons are listed in the following table (icon images are not reproduced here):
Table 25-4 Manipulation Icons
Change data type, Column normalization, Combine fields, Create array, Data file, Fixed, Flatten group, Free XSL, Hide, Mark selector, Nullable, Replace variant, Select counter, Set dimension, Set scale, Set size, Show, Test, Unfixed

Setting Attunity Studio Preferences


You set preferences for Attunity Studio from the Window menu. When you open the Preferences screen, a list of categories is shown on the left side. You can expand the entries in the tree to find additional sub-entries where you can set the preferences. The preferences that are important for AIS are:

Studio
Configuration
Metadata
Runtime Manager
Keys

Studio
The following tables describe the Studio preferences. The Studio preferences section has two tabs.


Table 25-5 Studio Security Tab
Use Compression: Select this to compress data used in Attunity Studio.
Use encrypted communication: Select this to encrypt communication between Attunity Studio and servers.
Remember Password: Select this if you want Attunity Studio to automatically enter the user's password each time you sign on.
Change Master Password: Click this to open the Change Master Password screen and change or create a password for using any module in AIS.

Figure 25-9 Studio Security Tab

Table 25-6 Studio Advanced Tab
Quick Startup (editors closed): Select this to start Attunity Studio with all editors closed and the Design perspective Configuration views collapsed. In this case, none of the editors left open at the end of the previous session will open when starting a new session. This is the default setting. If you clear this check box, all windows open at the end of the previous session will open when you start a new session, and Attunity Studio will take longer to load.
Show advanced environment parameters: Select this to display advanced binding environment properties. These properties should only be displayed in coordination with Attunity Support.
Activate JCA tracing: Select this to implement tracing and logging on the network and communication transport layer. When you select this check box, the following fields are available:
- JCA log level: Select Error, Info, or Debug from the drop-down list.
- JCA log file: Enter the location of the JCA log file or click Browse to browse for a location.
Network XML Protocol: Select Text or Binary.
Connection timeout: Enter the amount of time (in seconds) that Attunity Studio waits for a connection to another machine (such as a server) before returning an error message. The default value is 60. In this case, Attunity Studio waits one minute before returning an error message.
Interaction timeout: Enter the amount of time (in seconds) that Attunity Studio waits for a connection to a specific interaction (such as a data source) before returning an error message. The default value is 120. In this case, Attunity Studio waits two minutes before returning an error message.

Figure 25-10 Studio Advanced Tab

Configuration
The following table describes the Configuration preferences.
Table 25-7 Configuration Preferences
Enable using an adapter definition by multiple adapters: Select this to reuse adapter definitions. When adding a new adapter, an additional window lists the current adapters in the same binding and lets you use an adapter definition unique to the adapter or an adapter definition from any of the listed adapters.
Enable specifying administration authorization directly in the source XML: Select this if you want to enter administration authorization information on the machine level. It is added to the machine's source XML.
Show SYS data source: Select this to display the SYS data source, including stored procedures and views defined for it, in the Design perspective Metadata tab. This option is selected by default.
Enable Query Manager: For future use only.

Figure 25-11 Configuration Preferences

Metadata
The following table describes the Metadata preferences.
Table 25-8 Metadata Preferences
Enable editing source XML: Select this to allow editing of the source tab's XML content.

Figure 25-12 Metadata Preferences

Runtime Manager
The following table describes the Runtime Manager preferences.
Table 25-9 Runtime Manager Preferences
Enable periodic machine check: Select this to set up a machine check on a scheduled basis. If you select this option, enter a time interval (in seconds) in the field below. For example, if you want a machine check every minute, enter 60.

Figure 25-13 Runtime Manager Preferences

Keys
Attunity Studio has many built-in keyboard shortcuts. You can also customize keyboard shortcuts in Attunity Studio. The Keys preferences have two tabs. The View tab shows the list of default shortcuts.

Figure 25-14 Keys View Tab

The Modify tab lets you make changes to the current keyboard shortcuts or add new shortcuts. If you want to return to the default settings, click Restore Defaults at the bottom of the screen.

Figure 25-15 Keys Modify Tab

Default Keyboard Shortcuts


The following table shows the default shortcuts available in Attunity Studio, with the context in which each is available.

Table 25-10 Default Keyboard Shortcuts
Content Assist: Ctrl+Space (In Dialogs and Windows)
Context Information: Ctrl+Shift+Space (In Windows)
Copy: Ctrl+C or Ctrl+Insert (In Dialogs and Windows)
Cut: Ctrl+X or Shift+Delete (In Dialogs and Windows)
Delete: Delete (In Windows)
Find Next: Ctrl+K (Editing Text)
Find Previous: Ctrl+Shift+K (Editing Text)
Find and Replace: Ctrl+F (In Windows)
Incremental Find: Ctrl+J (Editing Text)
Incremental Find Reverse: Ctrl+Shift+J (Editing Text)
Paste: Ctrl+V or Shift+Insert (In Dialogs and Windows)
Quick Diff Toggle: Ctrl+Shift+Q (Editing Text)
Quick Fix: Ctrl+1 (In Windows)
Redo: Ctrl+Y (In Windows)
Select All: Ctrl+A (In Dialogs and Windows)
Toggle Insert Mode: Ctrl+Shift+Insert (Editing Text)
Undo: Ctrl+Z (In Windows)
Word Completion: Alt+/ (Editing Text)
Close: Ctrl+F4 or Ctrl+W (In Windows)
Close All: Ctrl+Shift+F4 or Ctrl+Shift+W (In Windows)
New: Ctrl+N (In Windows)
New menu: Alt+Shift+N (In Windows)
Print: Ctrl+P (In Windows)
Properties: Alt+Enter (In Windows)
Refresh: F5 (In Windows)
Rename: F2 (In Windows)
Save: Ctrl+S (In Windows)
Save All: Ctrl+Shift+S (In Windows)
Backward History: Alt+Left (In Windows)
Forward History: Alt+Right (In Windows)
Go to Line: Ctrl+L (Editing Text)
Last Edit Location: Ctrl+Q (In Windows)
Next: Ctrl+. (In Windows)
Open Resource: Ctrl+Shift+R (In Windows)
Previous: Ctrl+, (In Windows)
Show In menu: Alt+Shift+W (In Windows)
Build All: Ctrl+B (In Windows)
Open Search Dialog: Ctrl+H (In Windows)
Collapse: Ctrl+Numpad_Subtract (Editing Text)
Copy Lines: Ctrl+Alt+Down (Editing Text)
Delete Line: Ctrl+D (Editing Text)
Delete Next Word: Ctrl+Delete (Editing Text)
Delete Previous Word: Ctrl+Backspace (Editing Text)
Delete to End of Line: Ctrl+Shift+Delete (Editing Text)
Duplicate Lines: Ctrl+Alt+Up (Editing Text)
Expand: Ctrl+Numpad_Add (Editing Text)
Expand All: Ctrl+Numpad_Multiply (Editing Text)
Insert Line Above Current Line: Ctrl+Shift+Enter (Editing Text)
Insert Line Below Current Line: Shift+Enter (Editing Text)
Move Lines Down: Alt+Down (Editing Text)
Move Lines Up: Alt+Up (Editing Text)
Next Word: Ctrl+Right (Editing Text)
Previous Word: Ctrl+Left (Editing Text)
Scroll Line Down: Ctrl+Down (Editing Text)
Scroll Line Up: Ctrl+Up (Editing Text)
Select Next Word: Ctrl+Shift+Right (Editing Text)
Select Previous Word: Ctrl+Shift+Left (Editing Text)
To Lower Case: Ctrl+Shift+Y (Editing Text)
To Upper Case: Ctrl+Shift+X (Editing Text)
Toggle Folding: Ctrl+Numpad_Divide (Editing Text)
Toggle Overwrite: Insert (Editing Text)
Cheat Sheets: Alt+Shift+Q, H (In Windows)
Console: Alt+Shift+Q, C (In Windows)
Search: Alt+Shift+Q, S (In Windows)
Show View (View: Outline): Alt+Shift+Q, O (In Windows)
Show View (View: ): Alt+Shift+Q, X (In Windows)
Activate Editor: F12 (In Windows)
Maximize Active View or Editor: Ctrl+M (In Windows)
Next Editor: Ctrl+F6 (In Windows)
Next Perspective: Ctrl+F8 (In Windows)
Next View: Ctrl+F7 (In Windows)
Open Editor Drop Down: Ctrl+E (In Windows)
Previous Editor: Ctrl+Shift+F6 (In Windows)
Previous Perspective: Ctrl+Shift+F8 (In Windows)
Previous View: Ctrl+Shift+F7 (In Windows)
Show Key Assist: Ctrl+Shift+L (In Dialogs and Windows)
Show Ruler Context Menu: Ctrl+F10 (Editing Text)
Show View Menu: Ctrl+F10 (In Windows)
Show System Menu: Alt+- (In Windows)
Switch to Editor: Ctrl+Shift+E (In Windows)

Part VI
Operation and Maintenance
This part contains the following topics:

AIS Runtime Tasks from the Command Line
Runtime Management with Attunity Studio
Managing Security
Backing Up AIS
Transaction Support
Troubleshooting in AIS

26
AIS Runtime Tasks from the Command Line
This section includes the following topics:

Overview
Starting and Stopping Daemons
Managing Daemon Configurations

Overview
This section describes how to use AIS in a production environment. It describes the procedures necessary for smooth production-time data integration. Many of the operations necessary for the production environment are carried out in Attunity Studio.

Starting and Stopping Daemons


This section includes the following topics:

Starting a Daemon
Stopping a Daemon
Checking the Daemon

Starting a Daemon
You start a daemon from a privileged account (such as the superuser account on a UNIX platform) on the machine where the daemon will run. If not run from a privileged account, the daemon can start servers only with the same user ID as the account that started it. In this case, the daemon may also have problems validating user name/password pairs within the system. Use Attunity Studio to manage all daemon operations except starting the daemon; a daemon can be started only from the command line, not from within Attunity Studio. The daemon startup processes vary according to the type of platform.

Enabling Automatic Startup


The daemon is usually started automatically when the system boots up.

OpenVMS Platforms
The daemon should start automatically when the system boots up, through SYS$STARTUP:NAV_START.COM (see "Automatic Startup" in the AIS Installation Guide for more details). When IRPCD initializes itself as a daemon, it creates a detached process under the same account from which 'IRPCD start' was issued. In the detached process, the account's login procedure is not executed. If the daemon fails to start in the detached process, define the symbol NV_DEBUG_MODE to something before starting the daemon. This creates a process log file in SYS$LOGIN:IRPCD_START.LOG that can help you to locate the problem.

UNIX Platforms
To enable automatic client/server access to an Attunity Server, start the daemon at system boot time by adding the command invoking IRPCD to the /etc/inittab file. The following table lists the lines to add for the supported UNIX systems. The symbol navroot should be replaced with the directory where AIS is installed.

Table 26-1 UNIX Command Lines for Starting the Daemon Automatically
Sun Solaris: nv:3:once:navroot/bin/irpcd start >/dev/console 2>&1
HP-UX: nav:3:once:navroot/bin/irpcd start >/dev/console 2>&1
AIX: nav:2:once:navroot/bin/irpcd start >/dev/console 2>&1
HP Tru64 UNIX: nv:2:once:navroot/bin/irpcd start >/dev/console 2>&1
Linux: nv:3:once:navroot/bin/irpcd start >/dev/console 2>&1

Windows Platforms
To start the daemon automatically, set the Startup type property for the Daemon (IRPCD) service to Automatic. The Daemon (IRPCD) service is accessed via the Services option in the Windows Control Panel (for example, for Windows 2000 this is accessed via Start|Settings|Control Panel|Administrative Tools|Services). If you change the account under which the daemon runs, make sure that the following user rights are assigned:
- Act as part of the operating system
- Create a token object
- Log on as a batch job
- Log on as a service
- Log on locally
Set these rights in Control Panel|Administrative Tools|Local Security Policy. In the Local Security Settings screen, select Local Policies|User Rights Assignment and verify that each of the above user rights is set. After verifying the settings, reboot the Windows machine. The daemon starts up automatically each time the machine is restarted.

Manually Starting a Daemon on HP NonStop, OpenVMS, OS/400, UNIX, and Windows Platforms
The IRPCD command is used to start the daemon.


For IRPCD commands to manage the daemon on the machine itself see.
Note:

On HP NonStop you must start the daemon from a super user account.

To start the daemon
Enter the appropriate command line as follows:

For HP NonStop, OpenVMS, UNIX, and Windows platforms:


irpcd [-l[host][:port | :a][-r]] [-u username [-p password]][-n][-v] start [daemon_name]

For OS/400 platforms: Start the daemon from the root directory and from the account specified in the workspaceAccount property.
- To go to the root directory, enter: chgcurdir '/'
- The workspaceAccount property is specified as described in Granting Workspace Administration Rights to Users.
- Run the following to start the daemon:

Sbmjob cmd(call pgm(navroot/irpcd) parm([-l[host[:port | :a]][-r]] [-u username [-p password]] [-n][-v] start [daemon_name]))

Note that strings in parameters containing special characters (such as the hyphen in -u username) must be surrounded by single quotes, as in '-u' username. If the NAVROOT library has been defined, you can use the default location for the irpcd program and start the daemon without specifying the path:

Sbmjob cmd(call pgm(irpcd) parm([-l[host[:port|:a]][-r]] [-u username [-p password]] [-n][-v] start [daemon_name]))

If you do not succeed in starting the daemon, check the log file by running the following command:
Edtf '/navroot/tmp/irpcd.log'

where:

-l [host][:port | :a][-r]: The daemon uses a particular port, rather than the default attunity-uda-server port number 2551. If you specify "a" as the port, the system assigns a free port, which is registered with the portmapper. The option -r allows you to set a port range; the daemon will only start on a port in the specified range. For example, -r 2551-2553 indicates that the daemon will start only on ports 2551, 2552, and 2553. This is used when a system defines a specific port range for AIS to run in, as it ensures that the daemon will start only in the defined port range. This is important because if the daemon starts on a port that is not defined in the system, users will be unable to use components of AIS, such as Attunity Studio. If you are starting the daemon on platforms that support a multi-home machine, such as HP NonStop, and are not using the default IP address, specify the IP address in the host parameter. Start the daemon as follows:

run irpcd -l 194.90.22.23 start

On HP NonStop platforms, add the following DEFINE before starting the daemon:

ADD DEFINE =TCPIP^PROCESS^NAME,FILE tcpip_proc_name

For example, if the tcpip_proc_name is $ztc0 and it corresponds to the IP address 194.90.22.23, the daemon is started as above (run irpcd -l 194.90.22.23 start).

-u username -p password: The login information used to issue the daemon command. Specify '-u ""' for an anonymous login (if the daemon is configured to accept anonymous logins). The username and password may be needed to start a daemon, since the daemon uses them to stop the active daemon (if one exists).
-b (Blocking): OpenVMS and UNIX platforms only. The daemon remains connected to the terminal and does not daemonize itself. By default, when the daemon is started it disconnects itself from the activating terminal and becomes a detached process. All of the daemon logging messages are sent to the standard output device, regardless of the logging settings. Use this option for troubleshooting if there are problems starting the daemon.
-n (No mapping): The daemon is not used as a portmapper. By default, if a portmapper is not found, the daemon itself can be used as a portmapper.
-v (Verbose): Provides detailed information whenever applicable (for example, messages entered to the log).
daemon_name: The name of a daemon. If a daemon name is not specified, the default, IRPCD, is started.

Manually Starting a Daemon on z/OS Systems


The IBM /s command is used to start the daemon on z/OS systems.
To start the daemon
1. Ensure the following:
- The NAVROOT.LOADAUT library is APF authorized. NAVROOT is the high-level qualifier specified during installation.

Note: To define a DSN as APF authorized, in the SDSF screen enter the following command:
/setprog apf,add,dsn=navroot.loadaut,volume=nav002
where nav002 is the volume where you installed AIS.

- The NAVROOT.LOAD library is APF authorized. Use the information in the previous step to do this, and issue this command:
/setprog apf,add,dsn=navroot.load,volume=nav002
- NAVROOT.USERLIB(ATTSRVR) and NAVROOT.USERLIB(ATTDAEMN) have been copied to a library within the started tasks path. If they have not been copied, add the NAVROOT.USERLIB library to this path.
2. Activate NAVROOT.USERLIB(ATTDAEMN) as a started task to invoke the daemon. For example, in the SDSF screen enter the following:
/s ATTDAEMN
To submit the daemon as a job, uncomment the first two lines of the ATTDAEMN JCL and run the job using the sub command. The ATTDAEMN JCL is similar to the following:

//*ATTDAEMN JOB 'RR','TTT',MSGLEVEL=(1,1),CLASS=A,
//*MSGCLASS=A,NOTIFY=&SYSUID,REGION=8M
//STEP1 EXEC PGM=IRPCD,
// PARM='-B START ''NAVROOT.DEF.IRPCDINI'''
//STEPLIB DD DSN=NAVROOT.LOADAUT,DISP=SHR
//SYSPRINT DD SYSOUT=A
//GBLPARMS DD DSN=NAVROOT.DEF.GBLPARMS,DISP=SHR
// EXEC PGM=IRPCD,COND=((1,EQ,STEP1),(2,EQ,STEP1)),
// PARM='-KATTDAEMN START ''NAVROOT.DEF.IRPCDINI'''
//STEPLIB DD DSN=NAVROOT.LOADAUT,DISP=SHR
//SYSPRINT DD SYSOUT=A
//GBLPARMS DD DSN=NAVROOT.DEF.GBLPARMS,DISP=SHR
//SYSDUMP DD DUMMY

Note: You can also run ATTDAEMN by submitting the job, without making any changes to the JCL.

Starting Multiple Daemons


You can start more than one daemon on the same machine by specifying a different port number for each daemon. This option is useful, for example, when you want different users to access data on the same machine using different daemon configurations. Each daemon started must have its own configuration, which is specified when starting the daemon. In addition, the workspaces in all the configurations must be unique, so that there is no conflict between configurations and workspaces. If you use different startup scripts in the daemon configuration settings, specify a profile of started tasks for each startup script in the security manager.
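For example, two daemons might be started on the same machine as follows. This is a sketch: the port 2552 and the daemon name PAYROLLD are invented, and each daemon is assumed to have its own configuration as described above:

irpcd -l :2551 start
irpcd -l :2552 start PAYROLLD

Because each command names a different port (and the second names a separate daemon configuration), the two daemons and their workspaces do not conflict.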

Stopping a Daemon
You can shut down the daemon on any machine with Attunity Studio or from the command line.

Shutting Down a Daemon Using Attunity Studio


You can shut down the daemon on any machine defined in Attunity Studio from within the Runtime Manager perspective.


To shut down the daemon using Attunity Studio
In the Runtime Explorer, right-click the daemon you want to shut down and select Shutdown Daemon.

Shutting Down a Daemon Using the Command Line


You can shut down the daemon locally from the command line.
To shut down the daemon using the command line
Enter the appropriate command line as follows:

For HP NonStop, OpenVMS, UNIX, and Windows platforms:


irpcd shutdown [[abort[why]] | oper]

Note: On Windows platforms, run this command through the Command Line Console menu item in the Attunity menu (Start|Programs|Attunity|Command Line Console).

You can also issue irpcd -s stop.

For z/OS platforms:


NAVROOT.USERLIB(IRPCDCMD)

Enter shutdown [abort[why]] at the prompt or enter a control command:


/P ATTDAEMN or /F ATTDAEMN,STOP

Shutting down the daemon does not immediately kill active servers. To kill active servers, add the NVSHKILL parameter, with a value of 1, to the NAVROOT.DEF.GBLPARMS dataset (where NAVROOT is the high-level qualifier where Attunity Server is installed).
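As a sketch only (the entry format of the GBLPARMS dataset is an assumption here; check an existing member at your site for the exact form), the added parameter might look like this:

NVSHKILL=1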

For OS/400 platforms:


call pgm (navroot/irpcd) parm(shutdown [abort[why]|oper])

Or
call pgm(irpcd) parm(shutdown [abort[why] | oper])

where
- abort: If non-zero, the daemon shuts down regardless of any outstanding activity or active clients.
- why: The reason for the shutdown, which is written to the log file.
- oper: On HP NonStop, OpenVMS, OS/400, UNIX, and Windows platforms, shuts down the daemon by sending a signal (SIGQUIT) to the daemon process. You do not need to specify the username or password for this option, but you must have system privileges (you need to be a superuser). To send a signal to the daemon, the IRPCD program requires the process ID of the target daemon; the program retrieves this information from the file irpcd[_port].pid in the directory where the daemon resides (a daemon that was started on a particular port has the port number in the PID filename). The PID file is automatically created when a daemon starts and is deleted when a daemon ends.
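For example, the following sketch forces a shutdown and records a reason in the log (admin and secret are placeholder credentials, and the quoting of the why text is an assumption):

irpcd -u admin -p secret shutdown abort "planned maintenance"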


Disabling a Workspace
You can disable a workspace so that, although it is defined for a daemon, it is not operable. Server processes are not started via this workspace, and a client requesting this workspace receives an error.
To disable a workspace using Attunity Studio
In the Design perspective Configuration view, right-click the workspace to be disabled and select Disable.

Checking the Daemon


Check the daemon on any machine defined in Attunity Studio from within the Runtime Manager perspective.
To check the status of a daemon using Attunity Studio
In the Runtime Explorer view, right-click the daemon to be checked and select Status. The Runtime Explorer displays the daemon activity, as shown in this figure.
Figure 26-1 Daemon Activity in the Runtime Explorer

To check the status of a daemon using the command line
Enter the appropriate command line as follows:

For HP NonStop, OpenVMS, and Windows platforms:


nav_util check irpcd(daemon_location [,username, password])

Note: On Windows platforms, run this command through the Command Line Console menu item in the Attunity menu (Start|Programs|Attunity|Command Line Console).

For z/OS platforms:


NAVROOT.USERLIB(NAVCMD)

Enter CHECK IRPCD (daemon_location [, username, password]) at the prompt.

For OS/400 platforms:


call pgm (navroot/irpcd) parm(test)

If the NAVROOT library has been defined, you can run the following command without specifying the path:
call pgm(irpcd) parm(test)

For UNIX platforms:


nav_util check irpcd(daemon_location [,username, password])

where
- daemon_location: The host name, with an optional port number (specified after a colon).
- username, password: Used for logging on to the daemon.
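Concretely, assuming a daemon listening on the default port 2551 and placeholder credentials admin/secret, the check might be issued as:

nav_util check irpcd(prod.acme.com:2551, admin, secret)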

For example, if you check a machine named prod.acme.com, the following is returned if the daemon is active:
Checking IRPCD on host prod.acme.com
Trying anonymous login - OK
This test took 0.500 seconds.

The following is displayed if the daemon is not active:
Checking IRPCD on host prod.acme.com
Trying anonymous login - FAILED, [C043] Failed to connect to host prod.acme.com: PC: Connect failed - Connection refused.
This test took 1.042 seconds.

Managing Daemon Configurations


Use Attunity Studio to manage daemon configurations. The daemon can be initially configured from the Design perspective Configuration view. After initial setup it is recommended that you make changes to the daemon configuration after monitoring it in the Runtime Manager perspective. The Runtime Manager perspective enables managing and monitoring daemon activity. Open the Runtime Manager perspective by right-clicking a machine in the Design perspective Configuration view and selecting Open Runtime Perspective, or by clicking the Open a Perspective button and selecting Runtime Manager.

Daemon Configuration Groups


You can have a number of daemon configurations on any machine. The daemon configuration is divided into the following groups:

- Daemon Control: Specifies the server details, including daemon failure recovery, maximum request file size, default language, and timeout parameters.
- Daemon Logging: Specifies the logging details, such as the log file format and location, and the parameters to log and trace (as opposed to server logging, which is performed in the Workspace section).
- Daemon Security: Specifies the administrative privileges and access for the daemon.
- Daemon Workspaces: The workspaces defined for the daemon. A daemon can include a number of workspaces. A workspace defines the server processes and environment that are used for the communication between the client and the server machine for the duration of the client request. Each workspace has its own definition and includes the data sources and applications that can be accessed, as well as various environment variables.

The workspace definition is divided into the following groups:

- WS Info: Specifies general information, including the server type, the command procedure used to start the workspace, the binding configuration associated with this workspace (which dictates the data sources and applications that can be accessed), and the timeout parameters.
- WS Server: Specifies workspace server information, including features that control the operation of the servers started up by the workspace and allocated to clients.
- WS Logging: Specifies workspace tracing options.
- WS Security: Specifies administration privileges, user access, ports available for access to the workspace, and workspace account specifications.
- WS Governing: Specifies the way queries are executed. This is used particularly when running queries against large tables.
Note: The default daemon configuration supplied with AIS includes the default Navigator workspace. This workspace is automatically used if a workspace is not specified.

Adding and Editing Daemon Configurations


The daemon is configured in the Design perspective Configuration view in Attunity Studio. A machine can have a number of daemons running at the same time, each on its own port.

Adding and Editing Workspaces


Daemons include workspaces that define the server processes and environment that are used for the communication between the client and the server machine for the duration of the client request. A workspace definition is set in the Attunity Studio Design perspective Configuration view, under the daemon that manages it.

Configuring Logging
You can set up logging for the following:

- Daemon log files
- Workspace server process log files



27
Runtime Management with Attunity Studio
This section contains the following topics:

- Overview
- Runtime Explorer View
- Error Log View

Overview
Runtime Management tasks are done after you set up and define all of your machines and daemons. Runtime management lets you:

- Monitor the status of daemons and servers
- Reload and refresh daemons, servers, and configurations
- View error logs and print and export reports for each daemon and server

You carry out the daemon and server tasks in the Runtime Explorer view and you carry out the logging tasks in the Log view.

Runtime Explorer View


The Runtime Explorer lets you monitor the progress and status of daemons, workspaces, and servers at runtime. The Runtime Explorer is the main view in the Runtime perspective. For more information, see Working with Perspectives. The Runtime Explorer view's tree contains a Daemons folder. When you expand this folder, you can view the daemons defined for a machine. Expand any daemon to view its workspaces, and expand any workspace to view its servers.


Figure 27-1 Runtime Explorer View

Runtime Explorer Tasks


The tasks carried out in the Runtime Explorer view are accessed through the shortcut menu. The following sections describe:

- Adding a Daemon
- Daemon Tasks
- Workspace Tasks
- Server Tasks

Adding a Daemon
To add a daemon at runtime
1. Open Attunity Studio.
2. From the perspective toolbar at the top right of the workbench, open the Runtime Manager perspective.
3. From the Runtime Explorer view, right-click the Daemons folder and select Add Daemon.


The Add daemon dialog box opens.


Figure 27-2 Add Daemon Window

4. Enter the following information about the machine where the daemon is located:
- Host name/IP address: Enter the name of the machine on the network. The name can be entered manually, or click Browse to browse all the machines running a daemon listener on the port currently accessible over the network. When you expand the daemon, you can view the daemon's workspaces to edit the server processes for each workspace.
- Port: Enter the port where the daemon is running. The default port for Attunity Server is 2551.
- Display name: Enter an alias used to identify the daemon when different from the host name. This field is optional.

5. Enter the following information about the connection:
- User name: The username of a user defined as an administrator for the machine. This is optional.
- Password: Enter the password for the user entered in the User name field.
- Connect via NAT with a fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, no matter which port is used. For more information, see Firewall Support.
Note: You create an administrator when the machine is installed or by adding the administrator using Attunity Studio. See Administration Authorization.


Daemon Tasks
Right-click a daemon to execute the following tasks at runtime.
Table 27-1 Daemon Tasks
- Edit Daemon Configuration: Opens the daemon editor, which allows you to make changes to the daemon configuration. For more information, see Defining Daemons at Design Time. See Also: AIS Runtime Tasks from the Command Line for details about the configuration settings.
- Status: Checks the status of the daemon. The information about the daemon includes the daemon name, configuration used, the active client sessions, and logging information. For more information, see Checking the Daemon Status with Attunity Studio. If a client session ends because of power loss, network inaccessibility, or a system reset, the associated server process may remain active for a long time. In this case, the system administrator should kill those server processes. You can monitor the status of all the currently active server processes to identify server processes that need to be killed.
- Reload Configuration: Reloads the configuration after any changes. Any servers currently started are not affected by the changed configuration. A warning message opens; click OK to reload the current daemon configuration. See Also: AIS Runtime Tasks from the Command Line for details about the configuration settings.
- View Log: Opens the daemon log in the editor window. This log provides a real-time display of the details written to the daemon log from the time the view is open. For information, see Viewing Logs.
- View Events: Opens the daemon events log, which displays the daemon's activities. See Viewing Events.
- Daemon Properties: Opens the Daemon Properties screen. This screen displays the information that was entered when adding the daemon. You can make changes to this information. For information about the fields in this screen, see Adding a Daemon.
- Shutdown Daemon: Shuts down the daemon on the computer. Note: You must start a daemon from a machine command line.
- Recycle servers: Closes all unused servers and prepares all active servers to close when the client disconnects. New connection requests are allocated with new servers.
- Kill servers: Immediately closes all active and unused servers. Note: This option can lead to data loss.
- Rename: Opens a screen where you can change the name of the daemon displayed in the Runtime Explorer view.
- Remove: Removes the daemon from the Runtime Explorer view.
- Refresh: Refreshes the state of the daemon to the last state. After starting or restarting a daemon, you should refresh the daemon in Attunity Studio.


Workspace Tasks
Right-click a workspace to execute the following tasks at runtime.
Table 27-2 Workspace Tasks
- Edit Workspace Configuration: Opens the daemon editor, which allows you to make changes to the daemon configuration. See Editing a Daemon. Click the workspace tabs at the bottom of the editor to edit the workspace configuration. For more information, see Adding and Editing Workspaces. See Also: AIS Runtime Tasks from the Command Line for details about the configuration settings.
- Status: Checks the status of the workspace, whether it is available or not.
- View Log: Opens the log for workspaces on all servers in the editor window. This log provides a real-time display of the details written to the workspace log from the time the view is open. For information, see Viewing Logs.
- View Events: Opens the workspace events log. See Viewing Events.
- Recycle Servers: Closes all unused servers and prepares all active servers to close when the client disconnects. New connection requests are allocated with new servers.
- Kill Servers: Immediately closes all active and unused servers. Note: This option can lead to data loss.
- Refresh: Refreshes the state of the workspace to the last state. After starting or restarting a daemon, you should refresh the workspaces in Attunity Studio.

Server Tasks
Right-click a server to execute the following tasks at runtime:
Table 27-3 Server Tasks
- Status: Checks the status of the server. The information about the server includes the server mode and the number of active client sessions for the server.
- View Log: Opens the server log in the editor window. This log provides a real-time display of the details written to the server log from the time the view is open. For information, see Viewing Logs.
- View Events: Opens the server events log. See Viewing Events.
- Kill server: Ends the server process, regardless of its activity status. Note: It is recommended to use this option with caution, as it may lead to data loss.
- Refresh: Refreshes the state of the server to the last state. After starting or restarting a daemon, you should refresh the server in Attunity Studio.

Viewing Logs
AIS produces a number of logs that you can use to troubleshoot problems. The daemon manages the following logs:

- Daemon log
- Workspace log
- Server process log

To view the logs
1. Open Attunity Studio.
2. From the Runtime Explorer view, expand the Daemons folder, daemon, or workspace.
3. Right-click the level you want to view (daemon, workspace, or server) and select View Log.
Each log is displayed in a separate tab. You can switch logs by clicking the required tab. The Attunity Studio Runtime Manager perspective opens the log monitor in the editor.

Figure 27-3 Runtime Perspective Log


Working with the Event Monitor


The logs displayed in the Runtime perspective Events monitor show daemon, workspace, or server activities as they happen. You can carry out the following activities for each log:

- Set the logging level
- Start and stop the logging display
- Clear the activities displayed in the log

To set the logging preferences
1. Open Attunity Studio.
2. Open the log view (see Viewing Logs).
3. In the log view editor, click Properties. The Preferences screen opens.

Figure 27-4 Logging Preferences

4. Select one of the following:
- none: The log displays who has connected and disconnected from the server process.
- error: The log displays who has connected and disconnected from the server process and all errors.
- debug: The log displays who has connected and disconnected from the server process, errors, and any tracing specified in the daemon configuration.

5. Click OK to set the preferences and close the screen.


To close the screen without changing the logging level, click Cancel. To reset the logging level to the default settings, click Restore Defaults.

To start and stop the logging display
Do one of the following:
- Click Suspend to stop collecting logging information.
- Click Resume to start collecting logging information.

To clear the information in the log
Click Clear to remove all entries in the log displayed in the Events monitor. If logging is enabled, new information will continue to be displayed. The cleared information cannot be viewed again.
Note: You can view a copy of the full log located in your system. The log location is defined when you define the daemon or workspace. For more information, see Editing a Daemon.

Viewing Events
Attunity Studio provides a view of the events for each daemon, workspace, and server. The view is displayed in the workbench editor.
To view events
1. Open Attunity Studio.
2. From the Runtime Explorer view, expand the Daemons folder, daemon, or workspace.
3. Right-click the level you want to view (daemon, workspace, or server) and select View Events.
Each Event monitor is displayed in a separate tab. You can switch logs by clicking the required tab. The Attunity Studio Runtime Manager perspective opens the Event monitor in the editor.


Figure 27-5 Runtime Event Logs

Working with the Event Editor
The events displayed in the Event Editor show daemon, workspace, and server activities as they happen. You can carry out the following activities for each log:
- Set the Event logging level
- Start and stop the Event logging display
- Clear the events displayed in the log

To set the Event preferences
1. In the Event view editor, click Properties. The Preferences screen opens. You can set preferences for the following:
- Logging
- Server
- Client

2. To set the logging preferences, click Logging and then select one of the following:
- none: The log displays who has connected and disconnected from the server process.
- error: The log displays who has connected and disconnected from the server process and all errors.
- debug: The log displays who has connected and disconnected from the server process, errors, and any tracing specified in the daemon configuration.

3. Select Server from the left side of the screen to determine whether to display connection and disconnection events for a server machine in the log.
- Select serverConnect to display all users that connect to the server.
- Select serverDisconnect to display all users that disconnect from the server.

4. Select Client from the left side of the screen to determine whether to display connection and disconnection events for the client machine in the log.
- Select clientConnect to display all users that connect to the client.
- Select clientDisconnect to display all users that disconnect from the client.

5. Click OK to set the preferences and close the screen.
To close the screen without changing the logging level, click Cancel. To reset the logging level to the default settings, click Restore Defaults.

To start and stop the logging display
Do one of the following:
- Click Suspend to stop collecting logging information.
- Click Resume to start collecting logging information.

To clear the information in the log
Click Clear to remove all entries in the log displayed in the Event Editor. If logging is enabled, new information will continue to be displayed. The cleared information cannot be viewed again.

Daemon Properties
The Daemon Properties screen lets you make changes to the daemon definition that was created. See also: Adding a Daemon.
To view and edit daemon properties
1. Open Attunity Studio.
2. In the Runtime Explorer view, right-click the daemon whose properties you want to view and select Daemon Properties. The Daemon Properties screen opens.


Figure 27-6 Daemon Properties Screen

3. Enter the following information about the machine where the daemon is located:
- Host name/IP address: Enter the name of the machine on the network. The name can be entered manually, or click Browse to browse all the machines running a daemon listener on the port currently accessible over the network. When you expand the daemon, you can view the daemon's workspaces to edit the server processes for each workspace.
- Port: Enter the port where the daemon is running. The default port for Attunity Server is 2551.
- Display name: Enter an alias used to identify the daemon when different from the host name. This field is optional.

4. Enter the following information about the connection:
- User name: The username of a user defined as an administrator for the machine. This is optional.
- Password: Enter the password for the user entered in the User name field.
- Connect via NAT with a fixed IP address: Select this if the machine uses the NAT (Network Address Translation) firewall protocol, with a fixed configuration, mapping each external IP to one internal IP, no matter which port is used.
Note: You create an administrator when the machine is installed or by adding the administrator using Attunity Studio. See Administration Authorization.

5. Enter the following properties information:
- Platform: The name of the platform the daemon is running on.
- Configuration daemon: The daemon's physical location is entered in this field.

Error Log View


The Error Log view lets you easily view information about errors that are generated at runtime. You can access the Error Log view from the Design, Runtime Manager, and Solution perspectives. You can perform the following actions on error logs:

- Error Log View Tasks
- Viewing the Event Details
- Deleting the Error Log
- Clearing the Error Log
- Restoring the Error Log
- Exporting Errors to a Log File
- Importing a Log File
- Opening a Log File

Displaying the Error Log View


You can access the Error Log view from any perspective. Viewing errors is helpful during runtime to troubleshoot problems that occur.
To display the Error Log view
In Attunity Studio, open any perspective, and from the Window menu, point to Show View, then select Error Log.

Error Log View Tasks


The Error Log view lets you execute various tasks with the Attunity Studio logs. In this view, you can view and delete logs and trace a log's source to find what might have caused an error.
Note: The logs displayed in this view are the Attunity Studio logs. These logs display details about Attunity Studio users, connections to other AIS modules, and other information about Attunity Studio.

To access and execute Error Log view tasks
Do one of the following:
- Right-click in the view or on the error you want to carry out the task on and select one of the Event Log View tasks.
Or
- Select an error (if the task is carried out for a specific error) and click the button at the top of the view for the Event Log View task you want to carry out. For an explanation of the buttons, see Workbench Icons.


Table 27-4 Event Log View Tasks
- Copy: Copies the Event Details for the selected error to the clipboard. The event details trace the history of the current thread. For more information, see Event Details below.
- Clear Log Viewer: Clears the information displayed in the current Error Log view. See Clearing the Error Log.
- Delete Log: Deletes the current log in the Error Log view. See Deleting the Error Log.
- Open Log: Opens the full log in text format. See Opening a Log File.
- Restore Log: Restores information that was cleared from the last error log displayed to the Error Log view. See Restoring the Error Log.
- Export Log: Exports the currently displayed log into text format. See Exporting Errors to a Log File.
- Import Log: Imports another Attunity Studio log into the Error Log view. See Importing a Log File.
- Event Details: Opens the Event Details screen. You can view information on the execution history of the current thread up to the point where the error was thrown. This helps to trace the source of the error. See Viewing the Event Details.

Clearing the Error Log


To clear the error log
Do one of the following:
- Right-click anywhere in the Error Log view and select the Clear log viewer icon.
Or
- Click Clear log viewer at the top of the Error Log view.

Deleting the Error Log


To delete the error log
Do one of the following:
- Right-click anywhere in the Error Log view and select Delete.
Or
- Click the Delete button at the top of the Error Log view.

Opening a Log File


To open a log file
1. Do one of the following:
- Right-click in the Error Log view and select Open error log file.
Or
- Click Open error log file at the top of the Error Log view.


2. Browse to select a .log file and click Open. The log file opens as a text file.

Restoring the Error Log


To restore the error log
Do one of the following:
- Right-click in the Error Log view and select Reload error log.
Or
- Click Restore Log at the top of the Error Log view.

Exporting Errors to a Log File


To export errors to a log file
1. Do one of the following:
- Right-click in the Error Log view and select the Export log file icon.
Or
- Click Export log file at the top of the Error Log view.

2. Enter a name for the log file, browse to a location, and then click Save.

Importing a Log File


To import errors from a log file
1. Do one of the following:
- Right-click in the Error Log view and select Import log file.
Or
- Click the Import log file button at the top of the Error Log view.

2. Browse to find a .log file and click Open. The log file opens in the Error Log view.
Note: The Error Log view displays logs for Attunity Studio only.

Viewing the Event Details


To view Event Details for an error
Do one of the following:
- In the Error Log view, double-click the error with the Event Details you want to view.
Or
- Right-click the error and select Event Details.
The Event Details screen opens. This screen displays the error message, the error severity, and the error's event details.


Figure 27-7 Event Details Screen

To view the event details for another error, you can use the arrow buttons at the top of the screen to scroll through all the errors in the view.


28
Managing Security
This section contains the following topics:

- Overview of Attunity Security
- Managing Design Time Security
- Managing Runtime Security

Overview of Attunity Security


Attunity provides the following types of security:

- Design Time: Security aspects that affect the design process of AIS solutions.
- Runtime: Security aspects that affect the use of AIS for accessing data sources and applications and for managing AIS solutions.

Security for Attunity Connect is managed through Attunity Studio.

Managing Design Time Security


The management of design time security is done in the following areas:

- Local Access to AIS Design-Time Resources
- Remote Access to AIS Design-Time Resources
- Password Handling in Attunity Studio

Local Access to AIS Design-Time Resources


AIS solutions vary from simple data access solutions to complete data unload and change capture solutions. They all share the same kinds of design resources:

- XML definitions, which are usually stored in a native object store (NOS). These definitions are saved in the NAVROOT/def directory or folder.
- Operating system scripts, which are system dependent. These are generally located in various places on the local file system.

Local file access to design resources refers to working on the system where AIS is installed using Attunity Studio or NAV_UTIL to access and change the design resources. Local file access to these design resources is controlled only by the host operating systems where AIS runs. One should treat these resources just like the source code of an application.


Local file access to the AIS design resources must be limited to people who are authorized to design and set up AIS. Authorization is usually provided through a dedicated account with full access to these design resources.

Remote Access to AIS Design-Time Resources


XML definitions are the only AIS design resources that are accessible from a remote machine. Operating system scripts are never accessed remotely by AIS; these scripts are edited manually using operating system editing tools. Remote access to AIS design resources is enabled by the AIS Administration Server. This is a special AIS server workspace called ACADMIN. The Administration Server is the backbone for Attunity Studio and other tools, such as the AIS Deployer. All of the design operations are made through this server. When a machine is added to the Attunity Studio configuration tree, Attunity Studio connects with the AIS Administration Server on that machine (using the username and password provided in the Add Machine screen) and presents the design resources on that server. To work with remote access at design time, you must create Design Roles.

Design Roles
Remote access to design-time resources has many security implications. The ACADMIN server has fine granularity control of which definitions a user can access and how the definitions can be accessed (read or write). Attunity Studio offers a simplified role model for AIS design that defines the following roles:

- The Administrator role is allowed to view and modify all AIS design resources.
- The Designer role is allowed to view and modify all AIS design resources except for the daemon settings.
- The User role is allowed only to view the binding, adapter, and user definitions.

A user that is assigned multiple roles in Attunity Studio will have the permissions for the most powerful role assigned. An account is set up when you add a machine to Attunity Studio. The account is defined in the Connection section of the Add Machine dialog box. By default, the AIS installation account is automatically assigned an administrator role. The administrator role or any other role assignment can be reset using the NAV_UTIL ADD_ADMIN command from an account with write access to the AIS design resources:
$ nav_util add_admin <username>

For information on assigning roles in Attunity Studio, see Assigning Design Roles. For information on adding machines in Attunity Studio, see Setting up Machines.

Assigning Design Roles


After you define a machine in Attunity Studio, you can grant viewing and editing rights to users and groups according to their roles in the design process. The following roles are available:

- Administrator: Allowed to edit all of the definitions in Attunity Studio.
- Designer: Allowed to edit binding definitions and view daemon definitions in Attunity Studio.
- User: Allowed to view the definitions in Attunity Studio.


Perform the following steps to assign design roles to users and groups.
1. In the Configuration view, right-click the machine and select Administration Authorization.
2. In the Administration Authorization screen, click Add User or Add Group to assign authorization to specific users or a specific group of users. The name cannot contain blanks.
Note: Administration Authorization is set from the top down. This means that if you specify users or groups at a higher level, you do not need to specify the same users or groups on a lower level.

Password Handling in Attunity Studio


When working with Attunity Studio, you may need to provide many kinds of usernames and passwords:

- Username and password to access AIS Administration Servers. Each Administration Server requires its own username and password.
- Username and password for runtime access to servers, databases, and applications (some design operations require runtime access to resources).

Attunity Studio supports the following password handling methods:

- Prompt for passwords once per session for each secured resource that is accessed. This setting is the most secure, but it requires the user to enter the password for any server being accessed. This may be easy when a small number of machines and sources are involved in the AIS solution, but as more secured resources are used, this becomes more difficult. However, the security policy in certain organizations may require this method.
- Prompt for a password once and then cache the password in a local file protected with a master password. With this method, the user is prompted once for the master password at the beginning of a session, and if the master password is correct, Attunity Studio automatically uses the passwords stored in that file. This method is secure as long as you use a strong master password. A strong password is not easy to guess. It should not be too short and should be as complex as possible, yet you should be able to easily remember it.
- The last method is similar to the previous one but without using any master password. In this case, Studio uses an internal password so the local file does not contain plain-text passwords. However, the stored passwords may be discovered with little effort. This method is not recommended, as it is not secure. It should be used only if a password leak is not considered a security risk (for example, when the system is isolated).

The following sections describe how to set up passwords for design-time security:

- Setting Up the Password Caching Policy in Attunity Studio
- Assigning Authorization Rights to a Workspace
- Setting Up a Master Password for a User

Setting Up the Password Caching Policy in Attunity Studio


Attunity Studio can cache passwords that are used for accessing servers and databases. The following security levels are supported:

- Attunity Studio does not cache passwords. When Attunity Studio needs a password to access a server or database, it prompts the user to enter the password. This is the safest level, but it is less convenient.
- Attunity Studio caches passwords persistently but protects them through a master password. At this security level, Attunity Studio prompts for the master password at startup.
- Attunity Studio caches passwords persistently but hides them through a fixed internal master password. This security level is convenient but not safe, because the cached password can be retrieved at any time. This level is not recommended.

To set the password caching policy of Attunity Studio
1. In Attunity Studio, from the Window menu, select Preferences.
2. Select the Studio node.
3. On the Security tab, do the following:
- For the highest security level, clear the Remember passwords check box.
- For a lower security level, do the following: Click Change master password. The Change master password dialog box opens. Enter a master password and confirm it. Then click OK. Select the Remember passwords check box.
Note: If you set a master password, Attunity Studio will prompt you for the password the next time you open the application. If you set the master password to an empty password, Attunity Studio will use the internal default password.

4. Click OK to save your settings.

Assigning Authorization Rights to a Workspace


Once a machine is defined in Attunity Studio, you can authorize viewing and editing rights for specific workspaces.
To assign authorization rights to a workspace
1. Right-click the machine in the Attunity Studio Design perspective Configuration view and select Set Authorization.
2. Specify the user name and password for the user with authorization rights for this workspace.

Setting Up a Master Password for a User


The definition of a user profile stores the user names and passwords that are required to access databases and remote machines. To protect this information, you can assign a master password to a user profile. If you do not assign a password, anyone can use the information stored in the definition.


Note: The password assigned to a user profile definition must be identical to the password of the user account of the server's operating system. Likewise, if the password of the user account of the server's operating system changes, you must also change the password of the user profile definition.

To set a master password for a user profile
1. Expand the machine under which the user is set.
2. Expand the Users node under the machine.
3. Right-click the user in the Attunity Studio Design perspective Configuration view and select Change Master Password.
4. Enter the password to be changed and the new password, and then click OK.

Managing Runtime Security


The following security aspects are implemented at runtime.

- Daemon and Workspace Administration: Granting the authorization needed to manage a running daemon.
- Access Authorization: Granting the authorization needed to connect to and use a server workspace.
- Network Communication Encryption: Encrypting the data sent over the network between the client and the server.
- Impersonation: Making the server process take on the security identity of the client, so that data access permissions and restrictions apply based on the client's identity (rather than on the server's security level).
- User Profiles: Enabling a single sign-on to the server machines, application adapters, and data sources.

The Managing Runtime Security section of this document is divided into the following theory and task sections:
- User Profiles
- Managing a User Profile in Attunity Studio
- Client Authentication
- Client Authorization and Access Restriction
- Transport Encryption
- Encrypting Network Communications
- Firewall Support
- Accessing a Server through a Firewall
- Dynamic Credentials
- Setting Up Impersonation
- Granting Daemon Administration Rights to Users
- Granting Workspace Administration Rights to Users


User Profiles
The User Profile definition is an AIS resource that plays an important role in the AIS runtime security. A User Profile is a collection of username and password pairs for accessing databases, applications and remote machines on behalf of the user. The User Profile definition also stores encryption keys for use with transport encryption. A User Profile is protected by a master password that is required for access to the passwords and keys. You need to define a user profile:

- At the client, with ODBC, ADO/OLEDB, or NAV_UTIL.
- At the server side, when connecting with any client.

Setting up user tasks includes the following:


- Setting a Master Password for a User Profile
- Using a Client User Password
- Using a Server User Profile

Setting a Master Password for a User Profile


Because the User Profile definition stores passwords and keys, it is important to protect it with a master password to prevent unauthorized access to the stored passwords and keys. If a master password is not set for a User Profile, its passwords are encrypted with a default, built-in password. Without a proper master password, the User Profile is not protected. A master password can be set in two ways. Using the command line, on the same machine where AIS is installed, enter the following command:
$ nav_util password -u <user-profile-name>

If the User Profile was already protected with a master password, you will be prompted to enter the old password. Then the program will prompt you to enter the new password twice (to make sure you entered the intended password) and will update the User Profile. Another way to change the master password of a User Profile is with Attunity Studio. You can create a master password when you define a new user, or you can edit an existing user to add or change the master password.
To change the master password in Attunity Studio
1. Connect to the AIS machine where the User Profile is defined.
2. From the Configuration explorer, expand the machine where the User Profile is defined.
3. Right-click the User Profile you want to edit and select Change Master Password.
4. In the Change password dialog box, enter the following information:
- Old password: Type the current master password or the default master password.
- New password: Type the new password that you want to use. Make sure that it is a strong password that is not easy to guess and that you can remember.
- Confirm new password: Type the new password again.

For information on adding and editing user profiles in Attunity Studio, see Managing a User Profile in Attunity Studio.


Using a Client User Password


When working with AIS through ODBC, ADO/OLEDB or NAV_UTIL, you can use a local User Profile definition for passwords and keys it needs for access to databases, applications and remote machines. The following table shows how the User Profile and the master password are provided with each of these interfaces.
Table 28-1 Client User Profiles and Master Passwords

ODBC:
- User Profile: Use the UID connection string option, as shown here: UID=<user-profile-name>. Also available as a parameter to the SQL[Driver]Connect() functions.
- Master Password: Use the PWD connection string option, as shown here: PWD=<master-password>. Also available as a parameter to the SQL[Driver]Connect() functions.

OLE/DB:
- User Profile: Use the UID connection string option, as shown here: UID=<user-profile-name>. Also available as a parameter to the connection.open() method.
- Master Password: Use the PASSWORD connection string option, as shown here: PASSWORD=<master-password>. Also available as a parameter to the connection.open() method.

NAV_UTIL:
- User Profile: Use the -u command line option, as shown here: $ nav_util -u scott execute navdemo
- Master Password: Use the -p command line option, or omit it and let the program prompt for it.

Here is an example of how a client User Profile is used:
- An ODBC program uses AIS to access an Oracle database.
- To access the Oracle database through AIS, the data source must be defined in a binding definition, either locally or on an AIS server. In this example, it is defined with the name MYORADB.
- The local User Profile definition named SCOTT_R is protected with the password xA2oo4.
- The SCOTT_R User Profile definition has an entry for a data source called MYORADB with the username scott and password tiger.
- When opening an ODBC connection, the connect string contains the following (in addition to other items): uid=scott_r;pwd=xA2oo4.
With this setting, when a query is made in MYORADB, AIS finds the entry for MYORADB in the SCOTT_R User Profile and finds that it should use scott/tiger to connect to Oracle.
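Putting this together, the client's ODBC connect string might look like the following minimal sketch (the DSN value is hypothetical and depends on how the AIS data source is registered on the client):

DSN=MYORADB;UID=SCOTT_R;PWD=xA2oo4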

Using a Server User Profile


A server User Profile definition is a User Profile definition on an AIS server. The same User Profile definition can be a client User Profile and a server User Profile, depending on how it is accessed. When an AIS client connects to an AIS server, it usually provides a username and a password, which are checked with the operating system (for example, on Windows they are checked with the domain and on z/OS they are checked with the installed system security package such as RACF).


To use a server User Profile, its name must be the same as the login name used for connecting to the server. The master password for a server User Profile, if set, must be the same as the login password for the username. If the server User Profile is not protected with a master password, then it can be used by a remote user only if the user has authenticated itself with a login password (though any local user can also use it, so it is not secure). When accessing AIS with a remote client interface such as JDBC, ADO.NET, JCA, or NETACX, AIS takes usernames and passwords from the server User Profile that is associated with the login user of the connection (or from the NAV User Profile if no User Profile matches the login user name). The following table shows how the User Profile and the master password are provided with each of these interfaces.
Table 28-2 Server User Profiles and Master Passwords

JDBC:
- User Profile/Login Account: Use the JDBC connection string, as in: jdbc:attconnect://[username:password@]machine:port/workspace. Also use the setUser method on the [XA]DataSource object, as a parameter to the DriverManager.getConnection method, or as a parameter to the Datasource.getConnection and XADatasource.getXAConnection methods.
- Master Password/Login Password: Use the JDBC connection string as shown in the User Profile/Login Account row. Also use the setPassword method on the [XA]DataSource object, as a parameter to the DriverManager.getConnection method, or as a parameter to the Datasource.getConnection and XADatasource.getXAConnection methods.

ADO.NET:
- User Profile/Login Account: Use the User, UID, or Username connection string options, as shown here: User=<user-profile-name>. Also use the Username property of the AisConnectionStringBuilder object.
- Master Password/Login Password: Use the Password connection string option, as shown here: Password=<master-password>. Also use the Password property of the AisConnectionStringBuilder object.

JCA:
- User Profile/Login Account: Use the setUserName method on the ManagedConnectionFactory class or, equivalently, the resource adapter deployment descriptor. Also use the setUser method on the ConnectionRequestInfo class.
- Master Password/Login Password: Use the setPassword method on the ManagedConnectionFactory class or, equivalently, the resource adapter deployment descriptor. Also use the setPassword method on the ConnectionRequestInfo class.

NETACX:
- User Profile/Login Account: Use the Username property of the AcxClient object.
- Master Password/Login Password: Use the Password property of the AcxClient object.
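For example, a JDBC client logging in as the server account robert (with password seaweed, as in the thin-client example below) to a workspace named Navigator on the default port 2551 might use a URL like the following sketch (the host name is a placeholder):

jdbc:attconnect://robert:seaweed@prod.acme.com:2551/Navigator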


Managing a User Profile in Attunity Studio


User profiles are managed in the Attunity Studio Design Perspective. User profile management lets you add a new user or edit a current user.
Note: The configuration supplied with the product installation includes the default NAV user profile. This profile is used if a user profile is not specified when accessing an application, data source, or remote server machine.

Managing a user profile involves the following tasks:
- Adding a User
- Add Authenticators
- Add Encryption Keys
- Editing a User Profile
- Remove an Authenticator or Encryption key

Adding a User
You can add users to Attunity Studio to grant them access to specific authenticators. Follow these steps to add a new user.
To add a new user
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where you want to set user permissions.
3. Right-click the User folder in the list and select New user. The New User screen opens.


Figure 28-1 New User

4. Enter a name for the user profile. If you want the user to automatically access the authenticator options, select the Use default master password check box (this is not recommended). Clear this check box to enter a specific user name and password for this user. For information on master passwords, see Setting Up a Master Password for a User.
5. Click Finish. The user information is displayed in the User editor.


Figure 28-2 User Editor Screen

In the User editor, you can:
- Add Authenticators
- Add Encryption Keys

Add Authenticators
Authenticators are Data Sources, Adapters (including CDC Agents), and remote machines that the user can access. Authenticators are added in the User editor. The editor opens when Adding a User or by right-clicking a user in the Configuration explorer and selecting Edit. For more information on how to open the User editor, see Editing a User Profile. Follow these steps to add authenticators.
To add authenticators
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where you want to add authenticators.
3. Expand the User folder.
4. Right-click the user where you want to add authenticators and click Open.
5. In the User editor, find the Authenticators section at the top of the editor.
6. Click Add to add new authenticators for the user. Authenticators are data sources, adapters, and machines that the user is authorized to access. The Add authenticator dialog box opens.


Figure 28-3 Add Authenticator

7. Enter the following information in the Add authenticator screen:
- Resource Information: Defines the resource to which the user is authorized, using the following fields:
  - Resource type: Defines the resource type as Data source, Adapter, or Remote machine.
  - Resource name: The resource to which the user is authorized.
- Authorization information: Defines the user authorization details, using the following fields:
  - User name: The name of the user authorized to enter the resource.
  - Password: The password for the resource authorization.
  - Confirm password: Enter the password a second time to confirm that it is correct.
8. Click OK. The new authenticator is displayed in the User editor Authenticator tab.

Add Encryption Keys


Encryption keys are used for encrypted communication between machines. For information on using encryption in AIS, see Encrypting Network Communications. Encryption keys are added in the User editor. The editor opens when Adding a User or by right-clicking a user in the Configuration explorer and selecting Edit. For more information on how to open the User editor, see Editing a User Profile. Follow these steps to add encryption keys to a user.
To add encryption keys
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where you want to add encryption keys.
3. Expand the User folder.
4. Right-click the user where you want to add the encryption key and click Open.
5. In the User editor, find the Encryption keys section.
6. Click Add to add new encryption keys for the user. Encryption keys are used to allow encrypted communication between machines. A user must have access to an encryption key to communicate with a remote machine. The Add Encryption key dialog box opens.

Figure 28-4 Add Encryption key

7. Enter the following information in the Add Encryption key screen:
- Key name: Enter the name associated with the encryption password and which the daemon on this machine looks up.
- Key: Enter the encryption key.
- Confirm key: Re-enter the encryption key.
8. Click OK. The new encryption key is displayed in the User editor Encryption keys tab.

Editing a User Profile


You can do the following for user profiles:
- Change what is displayed in the authenticators list.
- Edit and remove the authenticators and encryption keys that are in a user profile.
You can make changes to the information you entered for both authenticators and encryption keys. Follow these steps to edit a user profile.
To edit a user profile
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where you want to edit the user properties.
3. Expand the User folder.
Note: The default NAV user should always be available.

4. Right-click the user you want to make changes to and select Open. The User editor has two sections: one for editing authenticators and one for editing encryption keys.

5. Double-click any authenticator or encryption key in the list, or select the item you want to change and click Edit. The Edit Authenticator or Edit Encryption key screen opens. This is the same as the Add Authenticators or Add Encryption Keys screens.
Note: For Authenticators, you cannot change the Resource Type field.

6. Make any changes necessary to the information and click OK.

Remove an Authenticator or Encryption key


You can remove authenticators or encryption keys from a user profile. Follow these steps to remove authenticators or encryption keys.
To remove authenticators or encryption keys
1. In the Design perspective Configuration view, expand the machine where you want to edit the authenticators or encryption keys assigned to a user.
2. Expand the User folder to see a list of users available on that machine.
Note: The default NAV user should always be available.

3. Right-click the user whose authenticators or encryption keys you want to remove and select Open.
4. Select the authenticator in the Authenticator list or the encryption key in the Encryption keys list that you want to remove.
Note: Make sure to use the Remove button for the correct list. For example, if you are removing an authenticator, click the Remove button next to the authenticator list.
5. Click Remove.

Client Authentication
AIS clients connect to AIS using many client interfaces (for example, ODBC, ADO/OLEDB, JDBC, ADO.Net, ACXAPI). All client interfaces get a username and password as part of the connection information and this information is used to authenticate the identity of the caller (also called Principal) which may be a person or a software program such as a web server.

There are two client authentication scenarios:


- Client Authentication for Thin Clients
- Client Authentication for Fat Clients

Client Authentication for Thin Clients


When using a thin AIS client (such as JDBC or ADO.NET), the username and password provided to the client are checked by the daemon on the server machine. The username and password provided must exactly match an operating system account name and password. For example, if you connect using JDBC with robert/seaweed as the username and password, then the server must have an account named robert with a password seaweed. A matching server-based user profile and master password are optional. For more information, see Using a Server User Profile.

Client Authentication for Fat Clients


Fat AIS clients, such as OLEDB and ODBC, support optional client authentication using a local User Profile definition. The username provided in the API call is the name of the local User Profile definition, and the password provided should be the master password that is protecting that user profile. The following is true for client authentication for fat clients:
- Client authentication for fat clients is not mandatory. You can connect with either OLEDB or ODBC without authenticating. In that case, however, credentials for accessing databases, adapters, and remote machines have to be specified explicitly on the connection string using the DSNPassword connect string option. For more information, see Providing Credentials in the Connection String.
- If the referenced user profile is protected with the default master password, then the password that is passed to the API is ignored (it is not checked for correctness).
- If the referenced user profile is protected with a non-default master password and the password provided in the API is not correct, the client connection creation fails.

Client Authorization and Access Restriction


AIS provides access to enterprise resources such as databases and applications. This presents a security challenge: ensuring that clients (once authenticated) can only access what they are allowed to access. AIS uses the standard native interfaces of the databases and applications it supports. Therefore, any client authorization and access restrictions enforced by the enterprise resource also apply to the AIS client, under the identity used for the connection. When working locally, that is the only authorization functionality that is available. When working using the AIS client/server infrastructure, the following types of authorization and access restriction features are available:
- Restricting Access to a User by Login User Name
- Restricting Access to Data with a Virtual Database

Restricting Access to a User by Login User Name


The AIS daemon workspace definition lets you define a list of operating system users (and on some platforms, groups of operating system users) who are allowed to connect to that workspace. When you define a list of users, you must authenticate with a login user name that is in the list in order to be able to connect to the workspace server and work with it. By adding a user to the list of workspace users, you grant them access to the workspace servers only. If a database on the server requires additional authentication, you need to create a user profile definition with the name of this user. When you restrict access to a specific workspace (by clearing the Allow Anonymous Client check box and specifying the users using the Workspace Access field), you must also define a user profile on the machine with the workspace for each user defined as able to access the workspace. Each user profile must have the same name as specified in the Workspace Users field. For more information, see Managing a User Profile in Attunity Studio.
To define user access to a workspace
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the machine where you want to define the user access.
3. Expand the Daemons folder.
4. Expand the daemon with the workspace where you want to define the user access.
5. Right-click the workspace where you want to define the user access and select Open.
6. Select the Security tab.
7. In the Authorized Workspace Users section, select Selected users only.

Figure 28-5 Security Tab


8. Click Add User to add Workspace Users (accounts) for users who can access data using this workspace.
Note:

On OpenVMS and z/OS Platforms, to define a group of users instead of a single user (so that the group name is validated by a security system), preface the name of the group in the configuration with @.

9. Save your settings.

Restricting Access to Data with a Virtual Database


AIS is powerful data integration software that provides easy access to a wide array of enterprise data. In many cases, in addition to allowing access to data, you must also prevent access to non-authorized data. AIS offers a special type of data source, called a Virtual Database, that can be used to specify what a user can access through the product. A virtual database is a special data source made up of local tables, as well as synonyms, views, and procedures referring to tables, views, and procedures in other data sources. A query against a virtual database can only access tables, synonyms, views, and procedures defined within it; it cannot directly access any of the tables, views, and procedures in the underlying data sources. For more information on virtual data sources, see Using a Virtual Database. To set up a workspace that only allows access through a virtual database, follow these steps (an illustration follows the steps):

1. In Attunity Studio, define a virtual database (V) in a binding (B). See Using a Virtual Database.
2. Edit the workspace definition. See Editing a Workspace.
3. Select the General tab.
4. In the Workspace binding name field, select the binding (B) where you defined the virtual database.
5. In the Workspace database name section, enter the name of the virtual database (V).
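To illustrate the effect, suppose the virtual database V defines a synonym ORDERS for a table in an underlying data source named legacy (both names are hypothetical), and the workspace is bound to V as described above. The first query below succeeds, while the second is rejected, because the underlying data sources are no longer directly reachable (the ds:table qualification shown assumes the AIS convention for naming a data source explicitly in a query):

SELECT * FROM ORDERS          -- resolved through the synonym defined in V
SELECT * FROM legacy:ORDERS   -- fails: underlying data sources are not directly accessible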

Transport Encryption
When AIS is used in client-server mode, it transfers data over the network. Except for authentication details such as passwords, all the data is transferred without encryption and may be exposed to an eavesdropper listening on the network. In cases where the privacy of the data is important (or where the privacy of the network cannot be ensured), AIS clients can be configured to encrypt everything that is transmitted over the network.

Using encryption creates unavoidable processing overhead, which is why encryption is not enabled by default. You need to weigh the risks against the processing overhead in your system to determine whether to use encryption.

The transport encryption supported by AIS is symmetric encryption, which means that it is based on a secret key that is shared between the AIS client and the AIS server. AIS does not handle the sharing of the encryption key; it assumes that a key was selected and is known to both the client and the server. The following section, Encrypting Network Communications, describes how to set up encryption.

Encrypting Network Communications


To encrypt communications passed over the network, you need to specify the following encryption parameters in AIS:

- The encryption protocol for the server machine, on the client machine (see Setting a Client Encryption Protocol).
- The servers that the client will communicate with using encrypted communication (see Configuring Encrypted Communication).
- The user profile on the client machine, indicating that this information is encrypted (see User Profiles).
- The encryption key on the server machine (see Configuring the Encryption Key on the Server Machine).

For a Java thin client, you can specify encryption of client/server communication via the JDBC connect string. For more information, see the JDBC Connection String.

Setting a Client Encryption Protocol


The encryption protocol between the client and the server is set on the client machine. All communication from the client machine is encrypted using the specified protocol.

To define the encryption protocol for the server machine
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the server machine where you are defining the encryption protocol.
3. Expand the Bindings folder.
4. Right-click the binding in the Attunity Studio Design perspective Configuration view and select Open.
5. Select the Machines tab. This figure shows the Machines tab of the Binding editor.


Figure 28-6 Binding Editor Machines Tab

6. In the Machines tab, select the client machine from the list, and click Network. The Network Settings screen is displayed.

Figure 28-7 Network Settings Screen

7. Select the protocol for encryption from the Encryption Protocol list and click OK.

Configuring Encrypted Communication


After specifying the encryption protocol, you must specify which servers the client is going to communicate with using the specified encryption protocol.

To configure the server machine for encrypted communication
1. Open Attunity Studio.
2. Expand the Machines folder.
3. In the Design perspective Configuration view, expand the machine where you want to configure the encryption key.
4. Expand the User folder.
5. Right-click the user profile where you are configuring the encrypted communication and select Open.
6. In the User editor, select the client machine and click Add.


The Add Authenticator screen is displayed.


Figure 28-8 Add Authenticator Screen

7. Configure the authenticator parameters as follows:

- Resource type: Specify the resource type as Remote machine.
- Resource name: Specify that communication to the machine is encrypted, in the following format:
enckey: machine_name

where machine_name is the machine to which you are connecting.

- User name: Specify the name associated with the encryption password, which the daemon on the remote machine looks up. Multiple clients may specify this name in their user profiles; in this case, the user profile on the remote machine needs to list only this one username/password entry for network encryption (rather than listing and looking up multiple username/password pairs). If this user name entry is not specified, the daemon on the remote machine uses the name of the currently active user profile.

- Password: Specify the password that is required to pass or access encrypted information over the network.
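For example, an authenticator that encrypts communication with a server machine named mvs1.acme.com might be filled in as follows (the machine name, user name, and password are hypothetical placeholders):

Resource type: Remote machine
Resource name: enckey:mvs1.acme.com
User name:     enc_user
Password:      A7f3kQ9z

The daemon on mvs1.acme.com looks up the name enc_user among its encryption keys, so a matching key entry must be defined on the server machine (see Configuring the Encryption Key on the Server Machine).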

8. Click OK.

Configuring the Encryption Key on the Server Machine


Any communication from the client to the server machine is encrypted. The server machine must be configured to decipher the encrypted information using an encryption key.

To configure the encryption key on the server machine
1. Open Attunity Studio.
2. Expand the Machines folder.
3. In the Design perspective Configuration view, expand the machine where you want to configure the encryption key.
4. Expand the User folder.
5. Right-click the user profile and select Edit User. The User editor is displayed.


Figure 28-9 User Editor and Encryption Key Section

6. In the Encryption Keys section, click Add. The Add Encryption Key screen is displayed.

Figure 28-10 Add Encryption Key Screen

7. Configure the encryption key parameters as follows:

- Key name: Enter the name associated with the encryption password, which the daemon on this machine looks up.
- Key: Enter the encryption key.
- Confirm key: Re-enter the encryption key.
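Continuing the hypothetical client-side example above, the matching server-side entry would use the same name and secret:

Key name:    enc_user
Key:         A7f3kQ9z
Confirm key: A7f3kQ9z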

8. Click OK.
9. In the Configuration view, expand the Daemons folder.

10. Right-click the daemon that manages the connection and select Open.
11. Select the Security tab.
12. In the Machine access area, enter RC4 in the Encryption methods field.


This figure shows the Daemon Security tab.


Figure 28-11 Daemon Security Tab

Firewall Support
AIS supports a common case of intranet firewall setup called Fixed NAT. The following describes the Fixed NAT setup:

- The AIS daemon and servers run on a machine behind a firewall, with a local IP address that is not normally accessible from outside the firewall. For example, the AIS server IP address may be 10.10.10.100.
- The AIS clients run outside the firewall, on a different network. For example, the client IP address may be something like 192.168.10.55.
- To access the server, a fixed network address translation is set up so that the client can access an IP address that looks local and is automatically translated into the server's IP address. For example, the client uses the IP address 192.168.10.88, which is translated by the network to 10.10.10.100.
- When AIS server instances start, they report their IP address and port to the daemon. For example, an AIS server may report its location as 10.10.10.100:5544. The client gets this address from the daemon but cannot connect to that address, since it is not directly accessible from the client's network.
- By setting the Fixed NAT connection option on the client side, the client ignores the IP address that the daemon returns and uses just the port number. In this example, upon getting the server address 10.10.10.100:5544, the client actually connects to 192.168.10.88:5544.


This setup also works when the fixed network address translation is restricted to a specific port range. In that case, the daemon itself and the workspaces that need to be accessed must be running in that port range.

The following diagram shows the connection process with the Fixed NAT setting.
Figure 28-12 Fixed NAT Firewall Connection

You can enable Fixed NAT using NAV_UTIL or with Attunity Studio. To enable Fixed NAT with NAV_UTIL, enter the following at the command prompt:
$ nav_util -b bindurl=<host>:<port>:fixednat/<workspace> execute <ds>
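For example, using the addresses from the scenario above (the workspace name Navigator and the data source name mydata are placeholders), the client addresses the translated external IP, while the server port is taken from the daemon's answer:

$ nav_util -b bindurl=192.168.10.88:2551:fixednat/Navigator execute mydata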

To set up Fixed NAT with Attunity Studio, select the Fixed NAT check box in any dialog box that supports Fixed NAT. The following section, Accessing a Server through a Firewall, describes how to use Attunity Studio to set up firewall access.

Accessing a Server through a Firewall


AIS lets you access a server through a firewall. The following table describes the available options.
Table 28-3 Firewall Options

- VPN: With VPN, firewall traversal is transparent and does not require special configuration of Attunity Connect.
- SOCKS: Attunity Connect provides no SOCKS client. You can only use SOCKS if the firewall vendor provides automatic libraries that add SOCKS support, such as WinSock2 filters on Windows, so that communication goes through SOCKS. When Attunity Connect connects through SOCKS, make sure to do the following:
  - Select the Fixed NAT option at the client.
  - Establish a port range for the workspace servers.


- NAT: Attunity Connect supports Network Address Translation (NAT) only when Fixed NAT, also called static NAT or one-to-one NAT, is used. Fixed NAT uses an external IP address to connect to the daemon and starts the servers on an internal IP address. When Attunity Connect connects through Fixed NAT, the Attunity Connect client ignores the server IP address that the daemon returns and uses the port number with the original external IP address instead. When Attunity Connect connects through Fixed NAT, make sure to do the following:
  - Select the Fixed NAT option at the client.
  - Establish a port range for the workspace servers.

Selecting a Port Range for Workspace Servers


Selecting a port range for a workspace causes all servers that are started for that workspace to listen on ports within the specified range. When a new server starts, it scans the port range in random order, trying to bind to a port, and uses the first port that it manages to bind to. Multiple workspaces can share the same port range. However, you must make sure to define a range that is large enough to allow all servers to start. In addition, because it takes time for used ports to become available again after the servers that used them terminate, it is recommended to specify a port range slightly larger than the number of servers required, to accommodate transition periods.

To select a range of ports
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you are defining the port range.
4. Expand the Daemons folder and then expand the daemon with the workspace where you want to define the port range.
5. Right-click the workspace where you are defining the port range and select Open.
6. In the Workspace editor, on the Server Mode tab, in the Server section, select Enable port range and enter the starting and ending port numbers in the From port and To port boxes. For more information, see the Editing a Workspace, Server Mode section.

7. Save your settings.

Accessing a Server Using Fixed NAT


When an Oracle Transparent Gateway needs to access the Attunity Connect server through a Fixed NAT firewall, make sure to set the HS_FDS_CONNECT_INFO parameter in the init.ora file as in the following example:
HS_FDS_CONNECT_INFO="address=mvs.acme.com port=2551 firewallProtocol=fixednat"


This setting does not require any changes to the daemon configuration.

To specify the binding information
1. On the client, specify connection information to the Windows proxy, as in the following:

<datasource name="mydata" type="remote" connect="ntproxy"/>
...
<remoteMachine name="ntproxy" address="address" port="port" firewallProtocol="nat"/>

2. On the Windows proxy, specify connection information for the data sources you want to access through the firewall on the server, as in the following:

<datasource name="mydata" type="remote" connect="acme"/>
...
<remoteMachine name="acme" address="address" port="port"/>

Dynamic Credentials
In some cases, security policy or regulations dictate that credentials (user names and passwords) cannot be stored in files, encrypted or not. In other cases, applications can dynamically acquire credentials (for example, from directory services). AIS provides two mechanisms for providing credentials dynamically. These mechanisms are:

Providing Credentials in the Connection String
Interactively Prompting for Credentials

Providing Credentials in the Connection String


Clients can provide credentials dynamically using the AIS connect string attribute DSNPasswords. This attribute lets the caller specify which username and password to use with the databases, adapters, and remote machines being accessed. The DSNPasswords syntax is:

DSNPasswords=<resource-name>=<user-name>/<password>[&]

where:

- resource-name is the name of the database, application adapter, or remote machine for which the authentication should be provided.
- user-name is the database/application/server user ID.
- password is the matching password for the given user-name.

You can provide several authenticators by repeating the same format, with '&' as the delimiter. An example of an OLEDB connect string that uses the DSNPasswords syntax is:

Provider=AttunityConnect;DSNPasswords=myOracle=scott/tiger&myDb2=mik/maker

When the resource refers to the name of a remote machine and you want to use encryption when communicating with that machine, you can also provide an encryption key name and value, using the slightly more complex syntax:

DSNPasswords=<resource-name>=<user-name>#<key-name>/<password>#<key-value>[&]

where:

- key-name is the name of the key the server should use.
- key-value is the value of the key to use when encrypting communication with the remote machine.
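For example, the following hypothetical fragment supplies credentials for a remote machine named acme together with an encryption key named enc_user (all values are placeholders):

DSNPasswords=acme=robert#enc_user/seaweed#A7f3kQ9z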

Interactively Prompting for Credentials


When AIS is invoked interactively, through a fat client (OLEDB or ODBC), AIS can be set to interactively prompt the user to enter a username and password for a data source as it is being accessed.

To set up interactive prompting for a data source
1. Open Attunity Studio.
2. In the Attunity Studio Design perspective Configuration view, expand the Machines folder and expand the machine you are working with.
3. Expand the Bindings folder.
4. Right-click the binding you are working with and select Open.
5. Click the Environment tab.
6. In the Query Processor section, select Prompt DB user password.
7. Add a user profile entry for the data source in the User Profile definition that you are using (this can be the default user profile NAV or any other user profile).
8. Set either the username or the password in the new entry to ?.

The same procedure can be used to set up interactive prompting for a remote machine.
Note:

This feature should only be used when AIS is invoked from an interactive session. Prompting from a non-interactive process may cause the calling process to stop responding while it waits for user input that will never be given. For this reason, it is not recommended to set this property in the default NAV binding.

Setting Up Impersonation
Impersonation is the ability of a server to execute in a security context that is different from the context of the process that owns the server.


The primary reason for impersonation is to cause access checks to be performed against the client's identity. Using the client's identity for access checks can cause access to be either restricted or expanded, depending on what the client has permission to do. For example, suppose a file server has files containing confidential information, and each of these files is protected by a security system. To prevent a client from obtaining unauthorized access to information in these files, the server can impersonate the client before accessing the files.

Impersonation through Attunity Server is available on all platforms except the OS/400 platform. For information on setting up impersonation for DB2 databases on z/OS systems, see Setting Up Impersonation for DB2.

To set up impersonation
1. On z/OS systems only, APF-authorize all the steplibs in the server script. For example:
setprog ...
ada622-volume adavol
CICS.CICS.SDFHEXCI - p390dx
navroot.load - 111111
navroot.loadaut - 111111

where navroot is the high-level qualifier where AIS is installed.


2. On all platforms, in the Workspace editor Security tab, leave the Server account field empty, as shown in the following figure.

Figure 28-13 Workspace Account Setting


3. On the client platform, define a user profile for the server machine, with the required username and password for the remote machine in the user profile.
4. Use Attunity Studio to add a new user authenticator with the Resource type defined as Remote, as described in Configuring Encrypted Communication.

Setting Up Impersonation for DB2


Impersonation can be set up for DB2 on z/OS systems only. In addition to the steps described in Setting Up Impersonation, a call to an Attunity Server load module must be implemented from a DB2 exit routine in order to implement impersonation for DB2. If the site has a DB2 exit routine, add a call in the routine to the Attunity Server load module (ATYDSN3). If the site does not have a DB2 exit routine, use the exit routine NAVROOT.SAMPLES(DSN3SATH) supplied with AIS.

To implement impersonation for DB2
1. Save NAVROOT.SAMPLES(ATYDSN3) in any PDS.
2. If the site has a DB2 exit routine, add the following line to it:

CALL ATYDSN3

3. Modify the following lines in the NAVROOT.SAMPLES(DSNTIJEX) job so that the high-level qualifier is valid for the site (instead of DEV):

//ASM.SYSIN DD DISP=SHR,DSN=DEV.DB2(&MEM)
//          DD DISP=SHR,DSN=DEV.DB2(ATYDSN3)

Note: If you are not using the supplied AIS DB2 exit routine NAVROOT.SAMPLES(DSN3SATH), replace the parameter MEM with the name of the exit routine used at the site.

4. Submit the DSNTIJEX job. The DSNTIJEX job builds both the exit routine and the load module in the DB2 libraries at the site.
5. Shut down and restart DB2, so that the changes are applied.

Granting Daemon Administration Rights to Users


You can grant a user full administration rights to the daemon. An administrator has the right to perform actions such as starting and shutting down a daemon.
Note:

Administration rights to the daemon can only be set to use a cached password.

To grant administrative rights to a daemon
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you are granting the administrative rights.
4. Expand the Daemons folder.


5. Right-click the daemon and select Open.
6. In the Daemon editor, select the Security tab. This figure shows an example of the Daemon editor Security tab.

Figure 28-14 Administrator Privileges in the Daemon Security Tab

7. In the Administrator privileges area, select the Selected users only option.
8. Use the Add Group and Add User buttons to grant administration rights to groups and users. For more information, see the Editing a Workspace, Security section.
Note: To define a group of users on OpenVMS and z/OS Platforms, preface the name of the group in the configuration with @. Under z/OS, the group name is validated by a security system such as RACF.

The results can be seen in XML, in the Source tab. For example:

To prevent anonymous access and limit administration rights to a user called sysadmin:
<security anonymousClientAllowed="false" administrator="sysadmin" />

To allow anonymous access to a server and to grant all users administration rights:
<security anonymousClientAllowed="true" administrator="*" />


Note:

On OpenVMS and z/OS Platforms, to prevent anonymous access and limit administration rights to a user named sysadmin and to a group named sys:

<security anonymousClientAllowed="false" administrator="sysadmin,@sys" />

Granting Workspace Administration Rights to Users


You can grant a user runtime workspace administrative rights for a specific workspace. These rights allow a user to enable, disable, and recycle workspace servers.

To grant administrative rights to a workspace
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to grant the workspace administrative rights.
4. Expand the Daemons folder.
5. Expand the daemon with the workspace where you want to add the administrative rights.
6. Right-click the workspace where you are adding administrative rights and select Open.
7. Click the Security tab.

Figure 28-15 Administrator Privileges


8. In the Authorized Workspace Users section, select Selected users only.
9. Click Add Group or Add User to grant administration rights to a group or user. For more information, see the Editing a Workspace, Security section.
Note: To define a group of users on OpenVMS and z/OS Platforms, preface the name of the group in the configuration with @. Under z/OS, the group name is validated by a security system such as RACF.

The results can be seen in XML, in the Source tab. For example:

Users must enter the correct user name and password to use a workspace. All authenticated users have workspace administration rights. This is defined as follows:
<workspace name="Navigator"
           description="An Attunity Server"
           workspaceAccount="orders"
           startupScript="machine_dependent"
           serverMode="reusable"
           reuseLimit="20"
           anonymousClientAllowed="false"
           administrator="*"/>

Users must enter the correct user name and password to use a workspace. Only the user SYSADMIN has administrator rights. This is defined as follows:
<workspace name="Navigator"
           description="An Attunity Server"
           workspaceAccount="orders"
           startupScript="machine_dependent"
           serverMode="reusable"
           reuseLimit="20"
           anonymousClientAllowed="false"
           administrator="sysadmin"/>

Notes:

- The value for startupScript in these examples is machine-dependent. For example, for z/OS the startup script may be: startupScript=ATTSRVR.AB
- On z/OS systems, to limit administration rights to a user called SYSADMIN and a group called DEV, the following line is required: administrator="SYSADMIN,@DEV"


29
Backing Up AIS
This section includes the following topics:

Overview of the AIS Backup Process
Backing Up and Restoring AIS Server Installation
Backing Up and Restoring AIS Server Metadata
Backing Up and Restoring AIS Server Scripts
Backing Up and Restoring AIS Server Data
Backing Up and Restoring AIS Studio Metadata

Overview of the AIS Backup Process


The backup procedures let you create a backup of your working environment so that you can completely restore the system if the AIS environment is lost. To completely back up AIS, you must back up the following components:

- AIS Server installation (any number of instances)
- AIS Server scripts (any number of instances)
- AIS Server data (Stream only; any number of instances, or none)
- Attunity Studio metadata (a single instance)

Backing Up and Restoring AIS Server Installation


In the event that your AIS installation is lost or corrupted, you must restore the installation. When you restore the installation, you need to restore the server files that you have worked on to this point. Therefore, make sure to back up the AIS server files on a regular basis. To back up the AIS Server installation, you must back up the following:

- All of the directories and files in the installation. The default location for these files is a folder or directory called Attunity/Server.
- The original installation file.

Place the backed-up files in a location where you can easily find them. You will need them if you must restore the installation later. Follow these steps for restoring the server installation.


To restore the server installation
1. Re-install the AIS server in the original installation location, using the original installation file. The default location is a folder or directory called Attunity/Server. Make sure that you know the location of your server files if you do not use the default location. For more information, see the AIS Installation Guide for the platform you are working with.
2. Restore the backed-up files.

The reason for this two-step process is that the installation procedure sets up the environment in ways that are not easy to reproduce by backing up and restoring files (for example, the installation may register components or copy modules to various locations on the system). Depending on the scope of recovery (just AIS or an entire system recovery), recovery may require additional activities, such as recreating user accounts, assigning permissions, and setting system quotas. These activities are handled as part of the standard system backup procedures.

Backing Up and Restoring AIS Server Metadata


AIS server metadata is backed up and restored as part of the AIS installation backup and restore process. However, in some cases you need to back up the metadata contents rather than the physical files (for example, when upgrading, or during development before applying a significant change). The server metadata includes the following definition types:

- Daemon definitions
- User definitions
- Binding definitions (including environment settings)
- Adapter metadata for some adapters
- Data source metadata for some data sources
- License key

The following procedure can be used to back up and restore AIS server metadata.

To back up the AIS server metadata
1. Create a list of all the data sources for which you provided metadata. This usually includes non-relational data sources (such as VSAM, Enscribe, DISAM, RMS, IMS, and DBMS).
2. Run the following NAV_UTIL commands to back up the AIS metadata (see Using NAV_UTIL Utility for information on NAV_UTIL commands). These commands assume that you provided metadata for data sources DS1, DS2, ..., DSn:

$ nav_util export all SYS sys.xml
$ nav_util export all DS1 ds1.xml
$ nav_util export all DS2 ds2.xml
...
$ nav_util export all DSn dsn.xml

The set of files sys.xml, ds1.xml, ds2.xml, ..., dsn.xml is the backup of the AIS server metadata. Follow this step to restore the AIS server metadata.


To restore the AIS server metadata
Run the following NAV_UTIL commands:

$ nav_util import SYS sys.xml
$ nav_util import DS1 ds1.xml
$ nav_util import DS2 ds2.xml
...
$ nav_util import DSn dsn.xml

A more granular metadata backup and restore is possible using other options of the NAV_UTIL EXPORT command; the concept is similar to what was shown above. For information on NAV_UTIL commands, see Using NAV_UTIL Utility.

Backing Up and Restoring AIS Server Scripts


An AIS installation has several script and parameter files that are customized to the specific data sources and applications in use. There is no common list of scripts that can be given. For a description of the standard scripts and parameter files provided with the installation, such as site_nav_login, see the relevant Attunity Server Installation Guide.

Backing Up and Restoring AIS Server Data


The Attunity Stream component of AIS maintains application data and has special backup and restore considerations. There are three kinds of stream data maintained by AIS:

- Stream position (context) of clients against the agent (or staging area) change stream
- Stream position of a staging area against the agent change stream
- Change events stored in the staging area

To back up the server data, back up the data files. To ensure consistency of the backed-up files, disable the agent or staging area workspace before backing up or restoring data, and enable the daemon when the backup is complete. The data must also be restored while AIS is inactive. Note that a backed-up stream position may no longer be available in the change stream by the time AIS is restored; in that case, a new starting point must be established. For information on NAV_UTIL commands, see Using NAV_UTIL Utility.

Backing Up and Restoring AIS Studio Metadata


To restore Attunity Studio so that you can continue to work with it, you must do the following:

- Back up the workspace folder.
- Re-install Attunity Studio.

Follow these steps for backing up and restoring Attunity Studio.


To back up and restore Attunity Studio
1. Back up your Studio workspace folder. The workspace folder is in the root directory where Attunity Studio is installed. The default path to this folder is Program Files\Attunity\Studio\workspace. Keep the backup in a place where you can find it easily.
2. Install the Attunity Studio upgrade.
3. Find the workspace folder backup and copy it to the Studio folder in the root installation directory, replacing the existing folder with your backup.


30
Transaction Support
This section contains the following topics:

Overview
Using Attunity Connect as a Stand-alone Transaction Coordinator
Attunity Connect Data Source Driver Capabilities
Distributed Transactions
Recovery
Platform Specific Information

Overview
Attunity Connect serves as a distributed transaction coordinator with Two-phase Commit capability toward its Data Sources, to the extent that Transaction support is implemented in the data source Drivers. Attunity Connect can be used either as a stand-alone transaction coordinator or as a sub-coordinator under another TP monitor (such as Microsoft DTC). As a transaction coordinator, Attunity Connect is called only with a Commit command and manages the two-phase commit functionality. As a sub-coordinator, Attunity Connect can also be called for a PrepareCommit function. In managing transactions, Attunity Connect does the following:

- Exposes distributed transaction methods to users (using ITransactionJoin on Windows machines).


Note:

If you use the ITransactionJoin API, Attunity Connect must be a sub-coordinator under Microsoft's DTC product. DTC provides a transaction object for Attunity Connect to use. The XA_ APIs are available.

- Issues transactional commands to data sources when necessary.
- Provides a recovery mechanism in case of system failures.


Using Attunity Connect as a Stand-alone Transaction Coordinator


When another transaction manager is not available, AIS can itself coordinate transactions across the whole client/server system. On the client, Attunity Connect runs as a master transaction coordinator (sending a PrepareCommit call to the Data Source on a server), and on the server, Attunity Connect functions as a sub-coordinator. This can cascade through an entire tree of servers involved in the Transaction. Recovery can take place starting either on the client (with automatic cascading) or on a Server Machine (cascading to its servers).

When a transaction is started, Attunity Connect generates a new Transaction ID (XID) and calls the Attunity Connect StartDistributedTransaction API. All subsequent statements are treated as part of the same transaction until the transaction is either committed or aborted. The Attunity Connect transaction coordinator on the client node issues transactional commands (including PrepareCommit) to all of its data sources, both local and remote. When Attunity Connect is a sub-coordinator, the Transaction ID is externally supplied.

To use Attunity Connect as a transaction coordinator
1. Define a CommitConfirm table for every data source that supports only one-phase commit. For details, see the CommitConfirm Table.
Note:

Two-phase Commit is guaranteed only when the components participating in the transaction contain no more than a single one-phase commit data source.

2. In Attunity Studio, expand the machine whose binding environment you want to set.
3. Expand Bindings to view the bindings for the machine.
4. Right-click the binding and select Open.
5. Expand the Transaction section. In the Transaction section, do the following:
   - Select Use commit confirm table. Make sure that this parameter is set for each machine in the transaction.
   - Select Convert all to distributed.

Figure 30-1 Binding Transaction Parameters

6. In the Advanced tab for each data source, check the Transaction type property. The value of this property overrides any binding environment settings. For more information, see Configuring Data Source Advanced Properties.


If Transaction type is set to datasourceDefault, the data source supports the highest possible transaction level for that data source, except for machines where the level is set to the support that is provided even if RRS is not installed.

Attunity Connect Data Source Driver Capabilities


Attunity Connect supports a Two-phase Commit capability to the extent that Transactions are implemented in the Data Source drivers, as described in the following topics:

Data Sources That Do Not Support Transactions
Data Sources with One-Phase Commit Capability
Data Sources with Two-Phase Commit Capability
Relational Database Procedures

Data Sources That Do Not Support Transactions


Attunity Connect does not reject transaction methods involving a Data Source that does not support Transactions; however, it does nothing with this data source and returns a warning code for the method call. The Attunity Connect Flat File Data Source, VSAM Data Source (z/OS), CISAM/DISAM Data Source, and Text Delimited File Data Source drivers do not support transactions.

Data Sources with One-Phase Commit Capability


For data sources with one-phase commit capability, Attunity Connect supports the BeginTransaction, Commit, and Rollback transaction commands. It does not support PrepareCommit. If the Attunity Connect commit-confirm option is enabled, then for every committed transaction, Attunity Connect writes an entry into a CommitConfirm table on the data sources involved. Thus, if the system fails while the data source is executing a Commit command, you can check whether the Commit command succeeded (the record for this transaction is there) or failed (the record is not there).

A data source with one-phase commit capability can still participate in a safe distributed Transaction if it is the only data source in the transaction without two-phase commit capability. When Attunity Connect acts as a transaction coordinator, upon receiving a PrepareCommit command it dispatches the PrepareCommit to all the data sources that support two-phase commit and inserts a record into the CommitConfirm table of the one-phase commit data source. When Attunity Connect receives the Commit command, it first issues the commit to the data source with one-phase commit capability, and then dispatches the Commit to the two-phase commit data sources. This guarantees transaction integrity regardless of what happens.

When more than one data source with one-phase commit capability participates in a distributed transaction, Attunity Connect issues a Commit to the data sources with one-phase commit capability after issuing Prepare commands to all the data sources that support Two-phase Commit. Attunity Connect then commits all the data sources that support two-phase commit. Thus, if the Commit to a data source that supports one-phase commit fails, any of the other one-phase commit data sources already updated will not be rolled back when the transaction fails.

If Attunity Connect is a sub-coordinator working under another coordinator (not under itself in client-server settings), it reports upward that it is one-phase commit (and not two-phase commit) as long as one of the data sources participating in the transaction supports only one-phase commit. This is because the outside coordinator may have other data sources that support only one-phase commit, or may not know to issue a PrepareCommit to Attunity Connect as its very last data source (so that Attunity Connect can commit its one-phase commit data source).

Data Sources with Two-Phase Commit Capability


Attunity Connect supports the PrepareCommit and Recover API calls. Data Sources that support Two-phase Commit can participate fully in a distributed Transaction. The following Attunity Connect data source drivers support two-phase commit:

- DB2 Data Source (on z/OS systems, using the DB2MFCLI driver)
- Informix Data Source (on UNIX and Windows platforms)
- Ingres II (Open Ingres) Data Source
- Oracle Data Source v8 and higher
- SQL Server Data Source (Windows only, using Microsoft DTC)
- VSAM Data Source (z/OS) under CICS (z/OS platforms with CICS TS 1.3 or higher)

On z/OS systems, RRS (Transaction Management and Recoverable Resource Manager Services) must be installed.

The default setting of the transactionSupport property is set to the support that is provided even if RRS is not installed.

Refer to the specific driver for any two-phase commit considerations.


Note:

To use distributed transactions from an ODBC-based application, ensure that AUTOCOMMIT is set to 0.

Relational Database Procedures


Relational Data Source procedures that change the Transaction state as part of the procedure can produce unexpected results. For example, an autocommit can be applied even when it is disabled in the client application.

Distributed Transactions
Attunity Connect is configured to enable distributed Transactions. To let simple transactions use the Attunity Connect distributed transaction capabilities, you can specify that Attunity Connect convert all simple transactions into distributed transactions, by carrying out the following procedure.

To use distributed transactions
1. Open Attunity Studio.



2. In the Attunity Studio Configuration view, expand the machine with the environment binding you want to set.
3. Expand the Bindings folder and right-click the binding you are using.
4. Expand the Transaction section. In the Transaction section, do the following:
   - Select Use commit confirm table. Make sure that this parameter is selected on every Server Machine in the distributed transaction.
   - Select Convert all to distributed.

Figure 30-2 Binding Transaction Parameters

Transaction Log File


An Attunity Connect Transaction log file can be defined for every Binding configuration on every Machine. This enables recovery in two modes:

- Driven from the local client, to recover the client's current transactions following a crash. This is the recommended method for recovery.
- Driven from the remote machine, to recover all of the transactions at a given server, regardless of the availability of any client machines involved. Recovery at the server does not affect the state of the transactions as recorded in the transaction log file on the client. Thus, a client may see transactions as needing to be resolved when they have already been resolved on the server. The recovery utility assumes that recovery has been performed on the server (see Recovery).
Note:

For recovery at a server to work, a transaction log file must have been defined on the server.

Under Windows, the default log file (TRLOG.TLF) is written to the same directory as the NAV.LOG file (which is specified by the <debug logFile=...> environment property). It is recommended to use the default log file and perform recovery from a PC. You can change the name and location of the log file using the <transactions logFile=...> environment property.

The transaction log file has one or more entries for every transaction whose PrepareCommit or Commit commands were received by Attunity Connect but whose Commit, Rollback, or Forget commands have not completed successfully. The transaction log file also includes entries that provide you with transaction status information and information to assist in heuristic recovery.


The entry is removed from the log file after the transaction has completed (if Attunity Connect receives a Prepared statement from the transaction coordinator, the entry exists only while a user Commit command is in progress). The following information is recorded in the transaction log file:

The transaction state, which can be one of the following:


- PrepareCommit Issued
- Prepared
- Commit Issued
- Committed
- Rollback Issued
- Rolled Back

Information enabling you to reconnect to the data source during recovery without needing any external parameters, including:

- The binding used by Attunity Connect.
- A timestamp field in every entry (indicating when the data source started or ended the transactional command).

You can determine whether to roll back the transaction or continue to commit the transaction, depending on the entries in the transaction log file. Before deciding whether to roll back or commit a failed transaction, you may need the information from another log (the CommitConfirm Table), for example when:

- The state of the transaction in the transaction log file is Started Commit.
- All of the entries in the server machines for the data sources are Started Commit.
- The data source supports only one-phase commit and the state is Commit Issued.

This information is available using the AIS Recovery utility (see Recovery).

CommitConfirm Table
To use data sources that support only one-phase commit in a distributed transaction, a CommitConfirm table must be present for every one-phase commit data source.

To use the CommitConfirm table
1. Execute the following CREATE TABLE statement on the data source to create the table:

CREATE TABLE CMTCNFRM (
    "TRANS ID" CHAR (140) NOT NULL,
    "TID2" CHAR (140) NOT NULL,
    "CMNT" CHAR (128)
)

Note:

Create the table exactly as shown (names are all uppercase and the CMNT column length is 128).


2. In the Attunity Studio binding settings, set the useCommitConfirmTable parameter to true. You can do this by opening the Attunity Studio Configuration view and expanding the machine whose environment binding you want to set. Expand Bindings to view the bindings for the machine, right-click the binding, and select Edit Binding.

During Commit, Attunity Connect writes an entry to this table, enabling it to determine, after a crash, whether the data source completed the Commit successfully prior to updating the status in the log file. Once a Commit is completed, the entry can be deleted.
Note:

Attunity Connect does not automatically delete the entry.
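Completed entries can therefore be cleaned up with an ordinary DELETE statement against the data source. The statement below is a sketch only; transaction ID values are generated internally by Attunity Connect, so in practice you would identify which entries belong to completed transactions before deleting them:

DELETE FROM CMTCNFRM WHERE "TRANS ID" = '<completed-transaction-id>'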

Recovery
AIS provides a recovery utility on Windows platforms that enables you to do the following:

- Examine the transaction status
- Initiate automatic recovery when possible
- Manually resolve the status of transactions that were in the middle of a Commit when a crash occurred
- Examine a transaction log file to determine which transactions failed

Recovery can take place starting either on the client (with automatic cascading) or on a Server Machine (cascading to its servers). To examine the status of transactions processed by Attunity Connect, use the AIS Recovery utility. The Recovery utility graphically displays the contents of the Attunity Connect transaction log file. Using the Recovery utility you can identify the status of Transactions managed by Attunity Connect. The Recovery utility enables you to:

- Commit transactions that have not yet been committed
- Roll back transactions to their previous state
- Recover a transaction, based on how Attunity Connect interprets its current status
- Recover all the transactions, based on how Attunity Connect interprets the current status
Note:

Do not use the recovery utility while running transactions.

To use the Recovery utility with a z/OS machine, define every library in the NAVROOT.USERLIB(ATTSRVR) JCL as an APF-authorized library, where NAVROOT is the high-level qualifier where AIS is installed.

To use the Recovery utility
1. From the Start menu, select All Programs, then Attunity Integration Suite, then Server Utilities, and select Recovery Utility.


The Recovery utility is displayed, listing the transactions logged in the transaction log file and their status:
Figure 30-3 Recovery Utility Screen

For more information, see Transaction Log File.


2. Right-click a transaction in the left pane, or click the magnifying glass (transaction status) icon, to show the transaction status. A message is displayed showing the current status of the transaction and the recommended recovery procedure. For example, the Transaction Recovery Utility Message shows that the transaction status is Commit and that no recovery is required, meaning that all the resources in the transaction were committed. If one of the resources was not committed, the recovery required would have been Commit.

Figure 30-4 Transaction Recovery Utility Message

3. Right-click a transaction in the left pane, or click the appropriate button in the toolbar, to select the type of recovery you want performed for the transaction. The following options are available:

- Heuristic: Recovery is done automatically, according to the information in the log file. This is the recommended option.
- Commit: Commits the selected transaction.
- Rollback: Rolls back updates made by the transaction to the state immediately before the transaction was issued.


- Forget: Maintains the current situation and deletes the selected transaction information from the log file.
- Heuristic All: Recovers all the transactions in the log file. Recovery is done automatically by AIS.

Recovery Utility Toolbar


The following table describes the toolbar buttons, their menu equivalents, and their functionality.

Table 30-1 The Recovery Utility Toolbar

- Machine/Refresh Transactions List: Redisplays the transaction log file.
- Machine/Heuristic Recover All: Recovery is done automatically by AIS for every transaction listed in the transaction log file for the selected machine.
- Transaction/Heuristic Recover: Recovery is done automatically by AIS, depending on the information in the log file.
- Transaction/Transaction Status: A message is displayed showing the status of the transaction and the recommended recovery strategy.
- Transaction/Commit Transaction: Commits the selected transaction.
- Transaction/Rollback Transaction: Updates made by the selected transaction are rolled back to the state immediately before the transaction was issued.
- Transaction/Forget Transaction: Maintains the current situation and deletes the selected transaction information from the log file.

Platform Specific Information


The transaction log file is not supported on AS/400 machines.


31
Troubleshooting in AIS
This section contains the following topics:

Troubleshooting Overview
Product Flow Maps
Using the Product Flow Maps for Troubleshooting
Troubleshooting Methods
Common Errors and Solutions

Troubleshooting Overview
AIS is enterprise integration software, and troubleshooting it may involve issues that span multiple technologies, platforms, data sources, applications, and networks. This guide presents an orderly approach to troubleshooting AIS issues. Troubleshooting functions are available in Attunity Studio, in command line utilities, and in product logs. In some cases you must use a command line utility to troubleshoot a feature, because it may not be possible to use Attunity Studio (especially if there is a problem with communications or installation).

This section presents several AIS flow maps that describe common use scenarios. By identifying the appropriate flow map for your usage scenario and by following the control and data flow, you can identify a broken flow step and what is needed to resolve the issue. Other AIS troubleshooting scenarios are described in the specific Data Source Reference, Procedure Data Source Reference, Adapters Reference, and CDC Agents Reference for the component that you are working with.

Product Flow Maps


The product flow maps are useful for understanding how AIS works. These maps present a high-level picture, which may hide important details and complex interactions at lower levels. This section describes the flow maps; later sections describe how to use them for troubleshooting. The flow maps described in this section are:

Local Data Access Scenario
Remote Data Access Scenario


Local Data Access Scenario


The following figure illustrates a typical data-access scenario:
Figure 31-1 Local Data Access Scenario

The components in this scenario are:

- SQL Application: This is an application that accesses data using SQL and native (non-VM) SQL-based APIs. Examples of SQL applications include Microsoft Access and Excel, many Visual Basic applications, reporting tools such as Cognos ReportNet and Business Objects Crystal Reports, and IIS business components.
- SQL API: The SQL API in this scenario can be ADO (or OLEDB on Windows platforms) or ODBC.
- Query Processor: This is the AIS data integration engine.
- NOS: Native Object Store, the internal storage for AIS configuration and metadata.
- Database Driver: This is an AIS component that adapts the query processor to the specific native database APIs. There is a specific database driver for each database supported under AIS. Some database drivers are custom-built drivers that are not part of AIS. Custom-built drivers are registered in the ADDON.DEF file in the product's NAVROOT/DEF directory.
- Native Database API: AIS accesses a database using its native interface. This is a component provided by the database vendor. In this usage scenario, the native database API is typically part of the database client installation that must be installed to enable database access.

The following table describes the steps in the local data access scenario map shown in the figure above.

Table 31-1 Local Data-Access Scenario Steps

- A1: The SQL application loads the SQL API and uses it to retrieve and modify data.
- A2: The SQL API translates standard application requests to the internal data API, also known as the NAV API. The NAV API accesses the query processor. The SQL API also references the desired product configuration and other operational properties using a connection string or a similar mechanism.
- A3: The query processor receives queries (SELECT, UPDATE, INSERT, DELETE, CALL) through the NAV API, devises an execution plan, optimizes the plan, and then invokes the database drivers (or possibly an SQL client component, not shown here) to execute the query (using the NAV API). The query processor gets configuration, metadata, and driver binding information from NOS.
- A4: The database driver gets instructions through a common interface and translates the requests to the native database API.
- A5: The native database API (interface) may access the underlying database locally or through the network. Examples of local access include file-system interfaces (VSAM, Enscribe, RMS, DISAM) and Oracle clients (when using a local instance).
Remote Data Access Scenario


Two typical remote data access scenarios are described in the following diagram: fat client access requires a full AIS installation on the client platform, while thin client access does not. The full AIS installation on the client enables local data to be accessed and combined with remote data (by means of the query processor); with the thin client, only remote data is accessible. For the sake of conciseness, equivalent steps in both scenarios are numbered identically.
Figure 31-2 Remote Data Access Scenario

The components in this scenario are:

Troubleshooting in AIS 31-3

- SQL Application: This is an application that accesses data using SQL and SQL-based APIs. Examples of SQL applications include business components deployed in J2EE application servers, Microsoft Access and Excel, many Visual Basic applications, reporting tools such as Cognos ReportNet and Business Objects Crystal Reports, and IIS business components.
- SQL API: The SQL API in this scenario may be a thick interface, such as ADO (or OLEDB on Windows platforms) and ODBC, or a thin interface, such as JDBC and ADO.Net. When thin interfaces are used, the query processor component is not available on the client (as it is part of the native product); in these cases, the SQL API communicates directly with the SQL client layer.

- Query Processor: This is the data integration engine of AIS. The query processor is required on the client platform only with the OLEDB interface. With ODBC it is optional, and with all other interfaces it is used only on the server platform.
- SQL Client: The SQL client is a component that sends data access requests to a remote server for execution. The AIS SQL client uses the same interface as an AIS database driver. This gives the query processor flexibility in deciding how to execute a query.
- Database Driver: This is an AIS component that adapts the query processor to the specific native database APIs. There is a specific database driver for each database supported under AIS. Some database drivers are custom-built drivers that are not part of AIS.
- Native Database API: AIS accesses a database using its native interface. This is a component provided by the database vendor. In this usage scenario, the native database API is typically part of the database client installation that must be installed for AIS to access the database.
Table 31-2 Remote Data Access Scenario Steps

B1: The SQL application loads the SQL API and uses it to retrieve and modify data.

B2F (Thick): The SQL API translates standard application requests to the internal data API, also known as the NAV API, which accesses the query processor. The SQL API also references the desired product configuration and other operational properties via a connection string or a similar mechanism.

B2T (Thin): The SQL API translates standard application requests to the internal data API, also known as the NAV API, which accesses the SQL client. The SQL API also references the desired product configuration and other operational properties via a connection string or a similar mechanism.

B3: The query processor receives queries (SELECT, UPDATE, INSERT, DELETE, CALL, and so on) via the NAV API, devises an execution plan, optimizes the plan, and then invokes the SQL client component (or possibly database drivers, not shown here) to perform the query (using the NAV API). The query processor gets configuration, metadata, and driver binding information from NOS.

B4: The SQL client relays NAV API requests to an AIS SQL server, where they are passed to a remote query processor. The SQL client's first step is to call the daemon on the server platform to ask for a server instance to call.


Table 31-2 (Cont.) Remote Data Access Scenario Steps

B5: The daemon on the server platform starts a new server instance on behalf of the client or, if a server is available in a server pool, immediately returns its location (IP address, port, and contact information).

B6: As a new instance of an AIS SQL server starts up, it opens a communication channel with the daemon, informing it of its location (IP address, port, and contact information).

Using the Product Flow Maps for Troubleshooting


This section describes how the Product Flow Maps are used to troubleshoot some common problems. This section contains the following topics:

SQL Application/SQL API Issues (A1, B1)
SQL API Issues/Query Processor Issues (A2, B2F)

SQL Application/SQL API Issues (A1, B1)


Some of the most common problems are related to the SQL application loading an API. This section shows some of the errors that can occur, depending on the API that is loaded. The following is a list of the APIs discussed in this section:

ADO/OLEDB
JDBC

ADO/OLEDB
The following are errors that may occur using the ADO/OLEDB API.

Provider cannot be found. It may not be properly installed

This error indicates one of the following:

The provider name in the ADO connection string is wrong. It should be AttunityConnect.

The AIS OLEDB provider is not installed. You can run the following command to (re)register the AIS OLEDB provider:
$ regsvr32 <NAVROOT>\bin\nav32.dll

The AIS OLEDB provider was registered, but the path that was used for registration uses a network share or a substituted drive letter. When the AIS OLEDB provider is accessed from a service, the registration path must be an explicit local directory path (a UNC path or substituted drives are not allowed; this is a Windows restriction).
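For reference, a minimal ADO connection string that names the provider explicitly might look like the following sketch; mydb is a placeholder for a data source defined in your binding, and the full set of supported keywords depends on your AIS version:

Provider=AttunityConnect;DefTdpName=mydb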

[ODBC Driver Manager] Data source name not found

This error indicates one of the following:

The data source name (DSN) provided in the ODBC connection string (or in the call to SQLConnect or SQLDriverConnect) was not defined. Check the ODBC Data Source Administrator under Control Panel > Administrative Tools.

The DSN given is defined, but it is a USER DSN belonging to a user other than the one used for running the SQL application.

The DSN is a FILE DSN and the path is incorrect or inaccessible (for example, it contains a UNC path or substituted drive name and the SQL application is running as a service).

The AIS ODBC driver is not properly installed. You can run the following command to (re)register the AIS ODBC driver:
$ regsvr32 <NAVROOT>\bin\odnav32.dll

[ODBC Driver Manager] Specified driver could not be loaded due to system error 126

This error indicates that the AIS ODBC driver is defined but there is a problem loading the driver DLLs. This can happen if the driver DLL or one of its dependent DLLs was deleted or is inaccessible.
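Similarly, a DSN-based ODBC connection string might look like the following sketch, where myds, scott, and tiger are placeholder values for the DSN and credentials:

DSN=myds;UID=scott;PWD=tiger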

JDBC
The following are errors that may occur using the JDBC API.

java.lang.ClassNotFoundException: com.attunity.jdbc.NvDriver

This error indicates that the AIS-provided JDBC driver file nvjdbc2.jar does not appear on the Java class path for the SQL application.
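As a quick way to reproduce or rule out this class-path problem, you can try to load the driver class directly. The following minimal sketch (CheckDriver is a hypothetical helper name) only verifies that the class resolves; it does not open a connection:

// Run with the driver jar on the class path, for example:
//   java -cp nvjdbc2.jar;. CheckDriver   (Windows)
//   java -cp nvjdbc2.jar:. CheckDriver   (UNIX)
public class CheckDriver {
    public static void main(String[] args) throws Exception {
        // Throws java.lang.ClassNotFoundException if nvjdbc2.jar
        // is missing from the class path (the error described above).
        Class.forName("com.attunity.jdbc.NvDriver");
        System.out.println("AIS JDBC driver class loaded successfully");
    }
}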

SQL API Issues/Query Processor Issues (A2, B2F)


The following error can occur when the SQL application is working against the query processor.

The binding entry xxxx was not found

This error indicates that the data source called xxxx, named in the SQL API connect string option defTdpName=xxxx, does not exist in the binding definition in use. The default binding definition is called NAV; an alternative binding definition can be specified with the binding=yyyy connect string option.
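For example, a connect string fragment that names both an alternative binding and a default data source might look like the following sketch (mydb and prod are placeholder names):

defTdpName=mydb;binding=prod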

Troubleshooting Methods
This section describes procedures and utilities that are used to check specific problems. This section includes:

Using the NAV_UTIL CHECK SERVER Utility
Using the NAV_UTIL CHECK DATASOURCE Utility
Using Trace Log Files

Using the NAV_UTIL CHECK SERVER Utility


The NAV_UTIL CHECK SERVER utility performs basic network checks to verify access to AIS servers. For more information, see check server. Follow these steps to check access to AIS servers.

To check access to AIS servers using the NAV_UTIL CHECK SERVER utility
1. Connect to the AIS daemon on the specified machine.
2. Request a server instance of the specified workspace. This returns the server location.
3. Disconnect from the daemon.
4. Connect to the server instance at the determined location.
5. Disconnect from the server instance.

By running this utility you can detect networking and configuration problems and get a better idea of the point where the error occurs. The complete syntax for this utility is as follows:
$ nav_util check server(<daemon-location>,<workspace-name>[,<username>,<password>])

Where:

daemon-location: The IP address (or host name) and port where the daemon listens. For example, corpsrv.acme.com or corpsrv.acme.com:2800.

workspace-name: The name of a workspace to check (the server checked will be a server of this workspace). If omitted, the default NAVIGATOR workspace is used.

username and password: The credentials to be used if access to the server requires authentication.
Note: When used on UNIX platforms, the parentheses must be prefixed with a backslash (\), as in the following example:

$ nav_util check server\(corpsrv.acme.com:2800\)
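As a fuller illustration, checking a specific workspace with credentials might look like the following; the workspace name and credentials here are placeholders, not values from this guide:

$ nav_util check server(corpsrv.acme.com:2800,ACME_WS,scott,tiger)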

Using the NAV_UTIL CHECK DATASOURCE Utility


The NAV_UTIL CHECK DATASOURCE utility performs basic AIS data source access checks. For more information, see check datasource. Follow these steps to check access to AIS data sources.

To check access to AIS data sources
1. Connect to the specified binding.
2. Load the definition of the specified data source.
3. Load the driver of the specified data source.
4. Connect to the specified data source.
5. Disconnect from the data source.

By running this utility you can detect data source configuration problems and get a better idea of the point where the error occurs. The complete syntax for this utility is as follows:
$ nav_util check [-b <binding-name>] datasource(<datasource-name>)

Where:

binding-name: An optional binding name to use. If omitted, the default binding NAV is used.

datasource-name: The name of the data source to check.
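For example, to check a data source named mydb defined in a binding named prod (both names are placeholders), you might run:

$ nav_util check -b prod datasource(mydb)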


Using Trace Log Files


You can use the logs that are created by AIS to find possible problems and troubleshoot them. AIS creates various trace logs that trace specific types of information. The trace logs are usually saved in a temporary folder or directory (called temp) in the Attunity root folder. To create the log files, you must activate the log trace. Follow the steps below to activate a log trace. See also: Log Traces.

To activate a trace log file
1. In the Attunity Studio Design perspective Configuration view, expand a machine with the binding you are working with.
2. Right-click the binding and select Edit Binding. The Binding editor opens in the editor section.
3. At the bottom of the editor, click the Properties tab. The editor displays a list of property categories.
4. Expand the Debug category. The editor displays a list of trace options.
5. Set the value to true for one or more of the trace options.
Note: For the binaryXmlLogLevel log, you set its level. For more information on setting the level for this log, see binaryXmlLogLevel.
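Behind the Studio UI, these trace options are stored as environment properties of the binding. Conceptually, the stored configuration resembles the following sketch; the element and attribute names shown here are illustrative assumptions and should be verified against your actual binding definition (for example, with NAV_UTIL):

<environment>
  <debug generalTrace="true" oledbTrace="true"/>
</environment>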

Log Traces
The following table describes some of the log traces available and the type of information available in each.
Table 31-3 Available Log Traces

acxTrace: Logs the input and output XML information. This lets you check whether the XML inputs and outputs are correct. You can use this trace when you use application adapters that use the Attunity XML protocol (ACX).

analyzerQueryPlan: Activates a plan file for the query analyzer. This lets you see the AIS Query Analyzer's analysis of the SQL sent.

binaryXmlLogLevel: Sets a log level for the binary XML log. This log provides specific debugging information and information about the API. The levels are: none, api, info, debug.

gdbTrace: Logs the driver transactions created with the AIS SDK.

generalTrace: Logs the general error messages that are generated. For information on errors, see Common Errors and Solutions.

oledbTrace: Logs the error messages generated when working with OLEDB providers.


Table 31-3 (Cont.) Available Log Traces

optimizerTrace: Logs information about the Query Optimizer strategy.

queryWarnings: Logs the Query Processor warnings.

timeTrace: Adds a time stamp to each event in a log. This lets you see the time frame for the various events in the log.

transactionTrace: Logs two-phase commit transactions in the 2PC and XA protocols and events related to those transactions.

triggerTrace: Logs information on triggers that are implemented. Whenever the database fires the trigger, the information is logged.

Using Extended Logging Options


The Attunity Integration Suite supports logging activities for troubleshooting (debug) purposes. This section describes how Attunity supports logging of specific SQL queries for performance-tuning purposes. Without this ability, it is difficult to trace problems in queries against large virtual databases made up of many actual databases. Currently, the ability to log the activities of specific queries is limited to fine-tuning during development. This section includes the following topics:

New Log Entries
Identifying Nodes
Configuring the Optimizer Trace File
Reading the Log
Configuring the Extended Logging Option
Configuring Advanced Environment Parameters

New Log Entries


To support the ability to log information about specific queries, new logging information is available. There are two main types of information:

Query information: The following information is logged for each RDBMS node (which represents a relational database):
Elapsed time
Number of rows returned
Number of bytes returned
Note: The query information is logged for each RDBMS node in the query, not for the whole query. For example, the elapsed time for each node is entered in the log, not for the entire query.

Query processor cache information: You can set AIS to cache table data when it is accessed by a query. The log now provides the following additional information about query processor caches:
Size
Number of lookups
Number of misses
Number of rows in the cache
Number of times the cache is flushed
Inefficiency threshold

Identifying Nodes
Query executions are logged in the query processor, not in the driver. The logging is carried out for each node in the query, not for the query as a whole. Each node is assigned an ID number. The following figure shows the optimizer tree. This tree shows each of the nodes with its assigned ID number. The information contained in each node is listed in the sub-branches for each node. This is useful for identifying which nodes are logged. In the log file, the nodes are identified by ID number only. You can identify the logged nodes by correlating the ID numbers in the log with the optimizer tree. The optimizer tree is part of the Optimizer Trace file.
Figure 31-3 XML schema of nodes and their ID numbers

Currently, logging information is reported for the following nodes:

Index-cache
Semi-join
RDBMS

Configuring the Optimizer Trace File


To turn on the Optimizer Trace file, you must set the following parameters to true:

optimizerTrace
traceFull

You configure these parameters in Attunity Studio.

To configure the Optimizer Trace file
1. Open the Design perspective in Attunity Studio.
2. From the Configuration view, expand the machine folder.
3. Expand the machine with the binding that has the optimizer settings you want to change.
4. Right-click the binding that has the optimizer settings you want to change and select Edit Binding. The Binding editor opens on the right of the screen with the Properties tab open.
5. From the Environment Properties list, expand debug.
6. Find the optimizerTrace property, click the right side of the Value column for the property, and select true from the drop-down list.
7. From the Environment Properties list, expand optimizer.
8. Find the traceFull property, click the right side of the Value column for the property, and select true from the drop-down list.
9. Save the changes.

Reading the Log


The RDBMS node ID is printed to the log, followed by the SQL statement for each RDBMS node. This makes it easy to associate the SQL with the statistics displayed in the log for each node. The following is an example of this part of the log:

RDBMS Node id: 6. Accessing Database 'dbsql_swan' with SQL:
SELECT T.O_ORDERKEY AS c000 FROM torder T WHERE ((? = T.O_ORDERKEY) OR (? = T.O_ORDERKEY))

When the query is executed and deleted from the system, the statistics and performance tracing are logged. The following examples show how the statistics for different node types are logged.
RDBMS Node
QPSTAT: RDBMS node, id = 9:
QPSTAT: #Rows: 1187, Elapsed time(sec): 3.427000
QPSTAT: Total Bytes: 11870

Semi-Join Node
QPSTAT: SEMI JOIN node, id = 4:
QPSTAT: #Rows in cache: 1187, #Refills: 1, #Lookups: 40
QPSTAT: #Executions to right side: 4, Cache size(bytes): 1000000

Caching Node (Index Cache)
QPSTAT: CACHE node (Index cache), id = 6:
QPSTAT: #Rows in cache: 1205, #Lookups: 100, #Hits: 2
QPSTAT: #Flushes: 1, Cache size(bytes): 100000
QPSTAT: Cache was not useful and canceled for this run

Configuring the Extended Logging Option


To turn on the extended logging option, you must set the following parameters to true:

qpTrace
performanceTrace

You configure these parameters in Attunity Studio.

To configure extended logging
1. Open the Design perspective in Attunity Studio.
2. From the Configuration view on the left side, expand the machine folder.
3. Expand the machine with the binding that has the settings you want to change.
4. Right-click the binding that has the settings you want to change and select Edit Binding. The Binding editor opens on the right of the screen with the Properties tab open.
5. From the Environment Properties list, expand debug.
6. Find the qpTrace property, click the right side of the Value column for the property, and select true from the drop-down list.
7. From the Environment Properties list, expand queryProcessor.
8. Find the performanceTrace property, click the right side of the Value column for the property, and select true from the drop-down list.
9. Save the changes.
Notes:

You must set the value of both the qpTrace and performanceTrace properties to true to enable extended logging.

To be able to view and edit the qpTrace and performanceTrace parameters in Attunity Studio, the Show advanced environment parameters option must be turned on in the Attunity Studio preferences. For information on how to turn on this option, see Configuring Advanced Environment Parameters.

Configuring Advanced Environment Parameters


The parameters used to turn on extended logging and the Optimizer Trace file are advanced parameters. To view and edit these parameters, you must turn on the Show advanced environment parameters option in the Attunity Studio preferences.

To show advanced environment parameters
1. From the Window menu, select Preferences. The Preferences screen opens.
2. From the pane on the left of the screen, click Studio.
3. Click the Advanced tab on the right of the screen.
4. Select the Show advanced environment parameters check box.
5. Click OK to close the screen and confirm the selections.


Common Errors and Solutions


The following table explains some common communications errors and possible solutions for them.
Table 31-4 Client/Server Communication Error Messages

C000: Cannot shutdown a non-local IRPCD with a signal.
Explanation: The oper parameter in the "irpcd shutdown" command is available only with a local daemon.
Possible Action: Check that a remote machine was not specified in the -l [host[:port]] parameter of the irpcd shutdown command. Check that the port number specified in the irpcd shutdown command is correct.

C001: Failed to open the IRPCD PID file.
Explanation: The daemon could not open the irpcd[_port].pid file to find the process ID of the daemon to shut down. The irpcd[_port].pid file is located in the BIN directory, which is located in the directory where AIS is installed.
Possible Action: Check that the daemon has permission to access the irpcd[_port].pid file.

C002: Cannot shutdown IRPCD, PID cannot be found.
Explanation: The shutdown operation failed because the irpcd[_port].pid file was not found in the BIN directory, which is located in the directory where AIS is installed.
Possible Action: Check whether the irpcd[_port].pid file exists. Check that the daemon is running (another user may have shut down the daemon). Run the following command from a computer that is connected to the network: nav_util check irpcd(hostname[:port])

C003: Invalid PID in the IRPCD PID file (%s).
Explanation: The shutdown failed because the irpcd[_port].pid file in the BIN directory, which is located in the directory where AIS is installed, contains an invalid process ID.
Possible Action: Kill the daemon with a system command.

C004: Failed to create a PID file (%s).
Explanation: The daemon was not able to create the irpcd[_port].pid file in the BIN directory, which is located in the directory where AIS is installed. AIS still runs; however, when you shut down the daemon, the irpcd shutdown oper will not work.
Possible Action: Check that the account where the daemon runs has permission to access the irpcd[_port].pid file.

C005: Could not open the IRPCD log file for write.
Explanation: The daemon was not able to create or write to its log file.
Possible Action: Check that the account where the daemon runs has permission to generate/write to the log file. Check the path specified for the log file in the daemon configuration. Check that there is no existing log file owned by another user at the specified location. Ensure that the disk device is not full.


Table 31-4 (Cont.) Client/Server Communication Error Messages

C007: Server initialization failed.
Explanation: The daemon failed to start its network service.
Possible Action: Check the processes that are run on the system to see whether another daemon or program is using the port specified in the -l [host[:port]] parameter of the irpcd start command. The netstat program on most platforms shows this information. Check the TCP/IP subsystem on the current machine by trying to ping it or run FTP or telnet to or from it. Check whether the daemon has privileges to use the TCP/IP services on the current machine with the designated port number.

C008: Setting server event handler failed.
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C009: IRPCD process has been terminated by user request.
Explanation: This message is informational only. The daemon successfully shut down.
Possible Action: No action is required.

C00A: Application %s not found.
Explanation: The requested workspace does not exist.
Possible Action: Check that the workspace defined in the client binding is also defined in the daemon configuration on the target server. Use the following command from a PC to check the workspace: nav_util check server(hostname, workspace), where hostname is the host name with an optional port number (the port number is specified after a colon) and workspace is the name of the workspace as defined in the client binding.

C00B: Invalid IRPCD client context.
Explanation: A non-AIS program is trying to connect to the daemon.
Possible Action: Check the processes and kill the relevant process with a system command.

C00C: Daemon request requires a server login.
Explanation: A non-AIS server or program was trying to use a daemon service that is reserved for AIS servers.
Possible Action: Check the processes and kill the relevant process with a system command.

C00D: Daemon request requires a client login.
Explanation: The requested daemon service requires a valid client login, which was not supplied.
Possible Action: Reissue the command and specify a username and password. Edit the User Profile in Attunity Studio to specify a valid username and password for the remote machine.


Table 31-4 (Cont.) Client/Server Communication Error Messages

C00E: Daemon request requires an administrator login.
Explanation: The requested daemon service requires an administrative login.
Possible Action: Reissue the irpcd command using the -u parameter and a valid administrator username and password. Edit the User Profile in Attunity Studio to specify a valid administrator username and password for the remote machine.

C00F: Anonymous client logins are not allowed.
Explanation: The daemon is configured to require a valid username and password, which were not supplied.
Possible Action: Reissue the irpcd command using the -u parameter and a username and password. Enable anonymous client access by setting the AnonymousClientAllowed parameter to true in the Security section of the daemon configuration. Edit the User Profile in Attunity Studio to specify a valid username and password for the remote machine.

C010: Anonymous server logins are not allowed.
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C011: Client has already timed out.
Explanation: A server process was started on behalf of a client, and the client timed out before the server completed its startup.
Possible Action: Increase the ConnectTimeout value for the server workspace in the Workspace xxx section of the daemon configuration.

C012: Invalid username/password.
Explanation: An invalid username/password was supplied when logging on to the daemon, or, on Windows platforms, the daemon is not registered correctly.
Possible Action: Reissue the irpcd command using the -u parameter and a username and password. See the daemon log file for the reason that the username/password were not accepted. Edit the User Profile in Attunity Studio to specify a valid username and password for the remote machine. Make sure the daemon is started from an account that is allowed to check for system usernames and passwords. On some platforms, only a privileged account can check for authentication.

C014: Client connection limit reached. Try later.
Explanation: The maximum number of server processes for the workspace has been reached, and none of the active servers could accept the client connection.
Possible Action: On z/OS, increase the number of sub-tasks per address space in the NsubTasks parameter in the Workspace xxx section of the daemon configuration. On UNIX, increase the value of the MaxNActiveServers and/or MaxNClientsPerServer parameters in the Workspace xxx section of the daemon configuration. Try running the command later.


Table 31-4 (Cont.) Client/Server Communication Error Messages

C015: Failed to start server process.
Explanation: The AIS daemon failed to start a server process, or the started server failed upon starting up.
Possible Action: See the daemon and server log files for the reason the server did not start. For example, you may receive a message similar to the following: [C015] Failed to start NAVIGATOR server process: No server account name defined for anonymous client; code: -1601: SQL code: 0. If you use impersonation, check the user profile on the client. Also see C069, below.

C016: Unexpected server state.
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C017: Active daemon clients exist. Shutdown canceled.
Explanation: One or more clients are still connected to the daemon.
Possible Action: Wait until all the clients log off the daemon and then retry the shutdown operation. Force a shutdown by using the irpcd shutdown abort command.

C019: Request is not granted because someone else is locking it.
Explanation: A request to lock a resource managed by the daemon was denied because another user has locked the resource.
Possible Action: Wait for the other user to release the resource.

C01A: Lock %s not found.
Explanation: A request to free a resource was denied because the caller did not lock that resource. For example, another user shut down the daemon that you are working with.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C01B: Unexpected error in %s.
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C01C: Cannot update configuration without _APPLICATIONS lock.
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C01D: Need to lock the application first.
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C01F: Cannot set configuration of a deleted application.
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C020: Failed in looking up host name (gethostname())
Explanation: Cannot connect to the remote machine.
Possible Action: Check that the machine name in the binding is correct. Check that a domain name server (DNS) is available to look up the host name. Check the TCP/IP subsystem on the machine by trying to ping it or run FTP or telnet to or from it.


Table 31-4 (Cont.) Client/Server Communication Error Messages

C021: Required variable %s not found
Explanation: An environment variable required by the AIS server was not defined when the server started up.
Possible Action: Check whether the startup script makes any changes to the environment variables used by AIS. Check whether the system-defined environment size is sufficiently large for AIS.

C022: Server failed to connect and register with the daemon.
Explanation: An AIS server started by the daemon was not able to connect or register back with the daemon.
Possible Action: Try to connect again. Increase the client's ConnectTimeout value for the target server workspace (in the Workspace xxx section of the daemon configuration). Check that the startup script for the workspace launches the correct version of AIS. On z/OS, increase the number of sub-tasks per address space in the NsubTasks parameter in the Workspace section of the daemon configuration. On UNIX, increase the value of the MaxNActiveServers and/or MaxNClientsPerServer parameters in the Workspace section of the daemon configuration.

C023: Call made to unregistered module %d.
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C024: Failed to create a socket.
Explanation: An error occurred within the TCP/IP subsystem.
Possible Action: Check whether you have sufficient system privileges. Check the TCP/IP subsystem on the machine by trying to ping it or run FTP or telnet to or from it.

C025: Failed to set socket option %s
Explanation: An error occurred within the TCP/IP subsystem.
Possible Action: Check whether you have sufficient system privileges. Check the TCP/IP subsystem on the machine by trying to ping it or run FTP or telnet to or from it.

C026: Failed to bind server to port %s
Explanation: An AIS server or daemon was not able to bind to the specified port.
Possible Action: Check whether another program is holding the port that was specified. Check whether you have sufficient system privileges.

C027: Cannot create TCP service for %s
Explanation: An error occurred within the TCP/IP subsystem.
Possible Action: Check the TCP/IP subsystem on the machine by trying to ping it or run FTP or telnet to or from it.

C028: Unable to register (%s, %d, tcp)
Explanation: This error may happen when a portmapper is used (host:a) but the portmapper is not available.
Possible Action: Enable the portmapper. Avoid using the portmapper by not using :a when starting the daemon.

C02A: Server thread failed to start
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).


Table 31-4 (Cont.) Client/Server Communication Error Messages

C02B: Stopping the %s server - no client
Explanation: A server that was started by the AIS daemon to service a client did not get a client connection request within one minute. The server terminates.
Possible Action: In most cases, the client was terminated by a user request, so no specific action is required. If no client can connect to the server, it may be that the server has multiple network cards and the AIS daemon is not aware of this. In this case, start the daemon with an IP address.

C02C: Unexpected event - a termination signal intercepted
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C02D: Modified transport, context unknown/lost
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C02E: Call made to non-existing procedure %d
Explanation: This error typically is caused by a client of a newer version that is calling an old server.
Possible Action: Verify that the client and server are using the same version of AIS.

C02F: Corrupted arguments passed to procedure
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C030: Unable to free arguments for %s() of %s
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C031: Cannot register a non-module RPC %s
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C032: An IRPCD program is required
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C033: An IRPCD super-server is required for module events
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C034: An invalid super-server module ID was specified, %d
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C035: out of memory
Explanation: Not enough memory to service a client request.
Possible Action: Increase the process memory quota and/or add memory to the system.

C036: Failed to register RPC procedure module %s
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C037: Failed to register an invalid RPC procedure number %x
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C038: Cannot re-register RPC procedure number %x
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).


Table 31-4 (Cont.) Client/Server Communication Error Messages

C042: Remote call to %s failed; %s
Explanation: A remote call to the API failed.
Possible Action: Check the daemon log file. If necessary, change the level of detail that is written to the log file to help resolve the problem: change the level of detail in the daemon configuration and run the irpcd reloadini command.

C043: Failed to connect to host %s;%s
Explanation: The remote host is not correctly defined to AIS or is not working.
Possible Action: Check the remote machine definition in the binding configuration. Check that the daemon is up on the remote machine (NAV_UTIL CHECK). Check the network connection by trying to ping the host machine or run FTP or telnet to or from it.

C045: Failed to create a service thread
Explanation: The server failed to create a thread to service a client request.
Possible Action: A system or process quota limit has been exceeded. Either increase the quota or lower the NClientsPerServer setting for the server in the Workspace xxx section of the daemon configuration.

C047: %s out of memory
Explanation: Not enough memory was available to AIS to complete a requested operation.
Possible Action: Kill unnecessary processes running on the server. Add more memory to the system. Allow the process to use more memory. Limit the number of processes that the daemon can start. If the demand for servers exceeds the number of available servers, clients get a message telling them that the maximum number of servers was reached and asking them to try again later.

C066: Communication error with the server %s
Explanation: Connection to the AIS daemon or server failed, or an established session with a server has failed.
Possible Action: Check the remote machine definition in the binding configuration. Check that the daemon is up on the remote machine (NAV_UTIL CHECK). In case of a network problem, check the network connection by trying to ping the host machine or run FTP or telnet to or from it.

C067: unexpected error occurred in server function %s
Explanation: One of the server functions has exited with an exception, such as an Access Violation (a GPE) or an Invalid Instruction.
Possible Action: If the server contains user code, such as an AIS procedure, a user-defined data type, or a user-written provider, verify that this code is not causing the exception. Otherwise, contact Attunity support (local support or support@attunity.com).

C068: fail to login daemon
Explanation: The daemon is not running on the server machine.
Possible Action: Use the following command from a PC to check whether a daemon is running on the server: irpcd -l hostname[:port] test. Have the system administrator re-install AIS on the server.


Table 31-4 (Cont.) Client/Server Communication Error Messages

C069: Fail to get server
Explanation: The AIS daemon (IRPCD) on the server machine could not start a server process to serve the client. A separate message provides more detail on why the server process could not start. There are many possible causes of this error. If the cause is not clear from the related message, see the AIS daemon log file on the server.
Possible Action: The resolution to this error is highly dependent on the particular cause. The following are some typical causes and resolutions. The process creation quota was exceeded: either try again later, or increase the quota or the other relevant system resources. The server startup script failed: this could be caused by some instructions in the process logon script, such as LOGIN.COM on OpenVMS or .cshrc on UNIX. The username given is not allowed to use the requested server: use an authorized username. A limit on concurrent clients for a server has been reached: try again later. If you use impersonation, check the user profile on the client. Also see C015.

C06A: Failed to connect to server
Explanation: The server assigned to the client did not accept the client connection. A separate message provides more detail about why the server process did not accept the connection.
Possible Action: See the daemon and server log files for the reason that the server was not available to accept its assigned client. If a multi-threaded server is used and many clients are trying to connect to it at the same time, some may get a Connection Refused error if the TCP/IP request queue fills up.

C06B: Disconnecting from server
Explanation: A network failure, a server machine failure, or a server program failure caused the connection to abort. The currently active transaction is aborted as well.
Possible Action: AIS will automatically try to re-establish a connection with the server when it receives the next SQL command for the server. Once the network or machine failure is corrected, the connection to the daemon is re-established automatically.

C06C: No conversion between server codepage %s and client codepage %s
Explanation: The client and server machines use different codepages.
Possible Action: Using the codepage environment variable in the AIS environment settings, synchronize the codepages used on the server and client.

C06D: Too many codepages in use, cannot load any additional codepages
Explanation: Multiple codepages are specified for the server.
Possible Action: Delete one or more of the codepages specified in the codepage environment variable of the server environment settings.

C06E: Versions of AIS client (%d) and server (%d) do not match
Explanation: A new version of AIS was installed on either the client or server without using the upgrade installation procedure.
Possible Action: Reinstall the new version of AIS.

C06F: There is no codepage defined for the server
Explanation: The codepage environment variable is not specified in the environment settings.
Possible Action: Specify the codepage environment variable in the AIS environment settings.

C070: Server failed to send reply to the client
Explanation: The server terminated unexpectedly.
Possible Action: Unless the client was intentionally stopped (for example, using Control-C), contact Attunity support (local support or support@attunity.com).

C071: Connection to server %s was disconnected. Cursors state was lost.
Explanation: Either a network failure, a server machine failure, or a server program failure caused the connection to abort. The currently active transaction is aborted as well.
Possible Action: Normally, AIS automatically tries to create a new session with the server upon the next attempt to access the server. If the network and server are accessible, the next operation should succeed; otherwise, the network and/or server machine should be fixed before the connection is resumed. In case of a server crash that is not related to callable user code, contact Attunity support (local support or support@attunity.com).

C072: Reconnect to server %s
Explanation: This is an informational message only. The client has re-established its connection with the server.
Possible Action: No action required.

C073: The parameters passed to the admin server are invalid: %s
Explanation: Internal error.
Possible Action: Contact Attunity support (local support or support@attunity.com).

C074: No authorization to perform the requested operation (%s)
Explanation: The user or account has insufficient privileges.
Possible Action: Grant administrative privileges to the user or account with the Administrator parameter of the Security or Workspace sections in the daemon configuration.

C075: Failed to register daemon in the TCP/IP service table
Explanation: The registration of the irpcd daemon in the TCP/IP services file has failed.
Possible Action: Check that the account running the daemon has the permissions to update the TCP/IP services file.

E000: Licensed number of concurrent users has been exceeded, try again later
Explanation: The number of active AIS sessions that access local data sources exceeds the number licensed.
Possible Action: Purchase additional concurrent user licenses.

E001: Failed in lock/release operation
Explanation: A lock or release operation of a global resource has failed. A separate message provides more details.
Possible Action: The separate message indicates the cause of this error. There are various causes for this error, including lack of sufficient privileges or a system resource shortage.


Part VII
Utilities
This part contains the following topics:

Using Attunity SQL Utility
Using the Attunity XML Utility
Attunity Query Tool
SQL Explain Utility
Using Attunity Query Analyzer Utility
Using NAV_UTIL Utility
Using Attunity BASIC Import Utility

32
Using Attunity SQL Utility
This section includes the following items:

Overview
Using the SQL Utility
Connecting to an Attunity Server via the SQL Utility
Specifying and Executing Queries
Modifying Data in a Recordset
Working with Chapters

Overview
AIS includes an SQL utility, which enables you to:

Execute queries, including queries that generate chapters
Work with and modify chapters
Execute parameterized queries
Set recordset properties
Work with schemas

Using the SQL Utility


Start the SQL Utility by selecting Start, Programs, Attunity, Server utilities, SQL Utility. This figure shows an example of the graphical interface of the SQL Utility.


Figure 32-1 SQL Utility

Connecting to an Attunity Server via the SQL Utility


You can use the SQL Utility to connect to an AIS server to modify connection properties or change the binding configuration.

To connect to an Attunity Server
1. Start the SQL Utility. The Connection screen is displayed.
Figure 32-2 Connection Screen

Note: The SQL Utility automatically sets the provider for the connection.

2. Enter any additional connection properties you want in the Connection string field or via the Advanced button. Even if you change the binding configuration via the Advanced button, the binding environment used is still the NAV binding environment.
3. By default, the cursor location is set to Server. To work with a client-side cursor engine, select Client.
4. Click Connect to connect to the provider.

Specifying and Executing Queries


The SQL Utility enables working with several query screens at the same time. To open a new query screen, select File|New or press <Ctrl>+<N>.


The query screen is divided into two parts: the upper part, where you specify your SQL statement, and the lower area, which contains a grid with the resulting rowset.
Figure 32-3 Query Screen

To execute a query
Select Query|Execute, press <F5>, or click the Execute button.

Note: You can specify a number of queries and then execute only one specific query by selecting the query to execute and pressing <F5>.

Working with Parameterized Queries


You can specify parameterized queries, such as:
select * from nation where N_NATIONKEY between ? AND ?
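For instance, if you supply the values 1 and 10 for the two parameters, the query above behaves as if you had executed:

select * from nation where N_NATIONKEY between 1 AND 10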

Every query screen in the SQL Utility can have its own set of parameters.

To specify query parameters for the current screen
1. Select Query|Parameters or press <Ctrl>+<P>. The Set Parameters screen opens.
Figure 32-4 Set Parameters Screen

2. Specify the value of the parameter in the Value field.
3. Select the parameter type from the Type list.
4. To change the parameter values, click Edit or double-click the parameter.
5. To remove a parameter, select its entry in the list and click Remove. To remove all parameters, click Remove All.
6. To set the parameters, click OK.

Modifying Data in a Recordset


The SQL Utility enables you to add, update, and delete data in a resulting recordset, if the lock type is not "Read Only". The SQL Utility also supports transactions. The following transaction-related methods are available from the Transaction menu:

Begin: Starts a transaction.
Commit: Commits a transaction.
Rollback: Rolls back a transaction.

To modify a resulting recordset
1. Click the Modify button in the toolbar. This displays the modify toolbar in place of the SQL edit box.
2. Double-click the row whose value you want to modify. When you begin modifying a certain row, you can click Update or Cancel Update to perform or cancel the modifications, respectively.

Note: If you are working with BatchOptimistic locking, you must also click UpdateBatch or CancelBatch to confirm or cancel the batch update. In Modify mode, you can also add a row or delete an existing row.

3. To exit Modify mode, click Close.

Working with Chapters


The SQL Utility enables you to view and modify chapters. To view a particular chapter, double-click the grid cell containing the chapter.


Figure 32-5 Chapter View in SQL Utility

Modifying Chapters
With the SQL Utility you can modify chapters. Note that in ADO 1.5 the chapter is always opened in BatchOptimistic mode, which means you must apply UpdateBatch or CancelBatch after you modify the recordset. To modify a chapter, click the Modify icon on the toolbar and apply modifications as you do for a normal recordset. Modify chapters carefully; you may end up with several open chapters. Sometimes you may need to close and reopen the chapter, or requery the parent, to see the changes.

Specifying Recordset Properties


The SQL Utility enables you to specify the following recordset properties:

Cursor type
Lock type
Cache size
Maximum number of records retrieved


To set the recordset properties
1. Select Query|Recordset Properties or press <F4>. The Recordset Properties screen is displayed, as shown in the following figure:
Figure 32-6 Recordset Properties Screen

2. Specify the parameters corresponding to the recordset property to be set, as follows:

Cursor type: Select the type of cursor from the list: ForwardOnly, Keyset, Dynamic, or Static.

Note: The default value is ForwardOnly, meaning you can only scroll forward in an ADO recordset (note that the SQL Utility caches the rowset to the grid).

Lock type: Select the locking type from the list: Read only, Pessimistic, Optimistic, or BatchOptimistic. The default value is Read only.

Cache size: Specify the size of the cache for the recordset. The default is -1, which sets the SQL Utility to use the ADO default for the cache size of the recordset.

MaxRecord: Specify the maximum number of records that are retrieved. Setting -1 indicates that all rows are returned.

Note: To save the changes in the registry, select the Save changes check box. The saved values will serve as the new default values.

3. Click OK.


Working with Schemas


The SQL Utility enables you to view ADO schemas. The following schema types are supported:

Catalogs
Foreign Keys
Indexes
Primary Keys
Procedures
Procedure Columns
Provider Types
Statistics
Tables

To view a specific schema
Select the schema type from the Schema menu. The following example displays the Tables schema of the mydata data source:
Figure 32-7 Sample Tables Schema

You can also copy and paste from a schema grid to the Query text box in the Query screen.



33
Using the Attunity XML Utility
This section contains the following topics:

Overview
Connecting with XML
Executing with XML

Overview
The XML Utility enables access to any AIS adapter defined on any Attunity Server. The XML Utility lets you execute the connect, execute, and disconnect commands. Accordingly, the XML Utility has two parts, Connect and Execute. The Connect section is used to enter information about the server with the adapter that you want to access and the type of connection. The Execute section lets you enter the information necessary to execute a command or get information from the application you are working with. You click Disconnect to stop the connection with the server. You open the XML Utility from the Start menu on a Windows computer. You should use the XML Utility on the same computer on which Attunity Studio is installed.

To open the XML Utility
From Programs in the Windows Start menu, select Attunity, Server utilities, XML Utility.


The following window opens.


Figure 33-1 XML Utility

Connecting with XML


The Connect section in the XML utility lets you enter information about the server you are connecting to and the type of connection.

Connect Properties
This section lets you enter the properties for the XML connections between the client and the adapter. These properties determine where the connection is made, the types of connections, and other information about the connection. The following table explains the available properties.
Table 33-1 Connect XML Description

Server: Select the server machine where the adapter you are working with is located. The drop-down list contains the servers that are connected to your system.

Workspace: Select the workspace associated with the adapter to access.

Adapter: Select the adapter to access.

Username: Enter the user name of the authorized user. Not necessary if using anonymous login.

Password: Enter the user's password. Not necessary if using anonymous login.


Table 33-1 (Cont.) Connect XML Description

Fixed NAT: Select this check box if the machine you are working with has a fixed Network Address Translation (meaning that it is always connected to the same external IP address when accessing the Internet). For more information, see Firewall Support.

Timeout (sec.): Enter the amount of time, in seconds, to wait for interactions before disconnecting. For long interactions, set a number high enough to ensure that the system does not time out too frequently.

Persistent: Select this check box if you want a persistent connection. A persistent connection lets you make multiple requests on the same connection.

Keep Alive: Select this if you want to keep the connection alive until the end of the session. When this check box is cleared, the server connection will drop and reconnect as needed.

Auto reconnect: Select this check box if you want the client machine to try to reconnect to the server automatically when a connection is dropped.

Compressed: Select this check box if you want to compress the information transmitted over the network.

Trace: Select this check box to save the errors in a tracing log.

Detailed trace: Select this check box to save all actions in a tracing log.

Step ACX: Select this check box to view the details of each step. When this is selected, each time you execute a command, a dialog box opens with details that describe the step you are carrying out.

Connect Commands
This section describes the actions (buttons) available in the Connect section of the XML Utility.

Table 33-2 Connect Commands (Buttons)

Connect: Click to connect to the selected server.

Disconnect: Click to stop the connection to the server.

Start Transaction: Click to begin a transaction. The adapter you are using must support transactions to use this command.

Commit: Click to commit a transaction after the transaction is completed.

Rollback: Click to roll back a transaction to its origin. You must either have Persistent connection checked or start a batch connection to start a transaction. The adapter you are using must support transactions to use this command.

Start Batch: Click to start a group of transactions as a batch.

Metadata: Click to open the Metadata Browser. The browser lets you view the structure of the adapter's metadata.

Events: Click to open the Events Listener.


Table 33-2 (Cont.) Connect Commands (Buttons)

Encryption: Click to set encryption keys to allow encrypted communication between the client and the adapter.

Metadata Browser
When you click Metadata in the XML Utility, the Metadata Browser opens. The Metadata Browser provides information about the metadata for the currently selected adapter. The information in the browser is set up with a tree on the left side. When you click an item in the tree, information is displayed on the right side.
Figure 33-2 Metadata Browser

The following table describes the items in the Metadata Browser.


Table 33-3 Metadata Browser Items

Schema (W3C): Click to display the XML schema according to W3C standards.

WSDL (W3C): Click to display the Web Service Definition (WSDL) structure for this adapter.

DTD: Click to display the adapter's DTD (Document Type Definition).

ACX: Click one of the following subnodes:
Schema: Displays the ACX metadata schema.
Adapter Definition: Displays the XML-formatted adapter definition. For more information on adapter definitions, see Defining the Application Adapter.


Table 33-3 (Cont.) Metadata Browser Items

Interactions: The interactions (types of request) defined for the adapter are shown as subnodes of this item. Click one of the interactions to display the following information about the interaction: Mode, Input, and Output.

Events: Click this item or any of its subnodes to display the browser readout for any events associated with this adapter. Events are displayed only if the adapter is an Events Queue adapter.

Records: Click this item or any of its subnodes to display the browser readout of the actual information contained by the application.

Adapters on this workspace: Click this item to display the names and other information about the adapters on the current workspace.

Events Listener
The Events Listener opens when you click the Events button in the XML Utility. The Events Listener connects to an event queue and monitors the events in the queue. Perform the following steps to monitor events.

To monitor events with the Events Listener
1. In the XML Utility, click Events to open the Events Listener.
Figure 33-3 The Events Listener


2. In the Event names field, enter the name of the event you want to monitor. If you want to monitor all events, enter an asterisk (*).
3. Enter the maximum block size. The default size is 10.
4. Enter the number of events to monitor. If you leave 0, all events will be monitored until you stop monitoring.
5. Click Start Events to start monitoring the events. To stop the events monitoring, click Stop Events. The captured events appear at the top of the screen.

In addition, you can save the captured events into a log file. You can do the following:

Select Save Captured events into file to save the captured events.
Click Browse to select the file to save the events.
Click Open Log File to open the file in the Save Captured events into files field.

Set Encryption
The XML Utility lets you set up encrypted communication between the client and the adapter. To do this, you must have an encryption key available and set the encryption. Follow the steps below to set encryption.

To set encryption
1. In the XML Utility, click Encryption to open the Set Encryption screen.
Figure 33-4 Set Encryption

2. Enter the encryption Key name.
3. Enter the encryption Key.
4. Click OK to save the encryption key. To erase the information, click Clear.

Executing with XML


The Execute section in the XML Utility lets you execute XML requests between the client and the adapter. The following table describes the commands used in executing an XML request.
Table 33-4 Execute Commands

Interaction: Select the type of request (interaction) to the adapter. Each adapter has its own types of requests. The requests available in this drop-down list depend on the adapter selected.

Select sample: Enter a name for the input or output displayed in the XML Utility. You can save the input or output and then select it later from the drop-down list.

Save Sample: Click this to save the currently displayed input or output as a sample. The saved sample will have the name that is currently in the Select sample field. To assign a sample a name, enter it in the Select sample field and then click Save sample. You can select a saved sample from the drop-down list to reuse the saved input or output.

Delete sample: Deletes the current sample from the Select sample drop-down list.

Load Input: Click this to browse for an XML file that contains the input for the XML request.

Output style: Click this to open a screen that allows you to attach an XML stylesheet to the received output.

Input style: Click this to open a screen that allows you to attach an XML stylesheet for the input.

Execute: Click to execute the XML request. The request you make is entered in the Input tab.

Execute Batch: Click this to execute a loaded batch. This is available only if you select Start batch in the Connect section.

Creating an XML Request


An XML request consists of two parts: the input, which makes the request, and the output, which is the XML response. The XML Utility has a window with two tabs. These tabs contain the input and output.

XML Input
The input is the request that you are sending in XML form. The XML used by AIS is always wrapped with the tags <acx> </acx>. You can enter the input in the following ways:

- Enter the XML request manually in the Input window. Be sure that the XML structure is correct and that it is wrapped with the tags <acx> </acx>.
- Load the input from an external XML file that is written in the correct structure. To load the input, click Load input and browse to the XML file you want to load. Select the file to add its contents to the Input window.
- Use a previously saved input sample. For more information, see Select sample and Save Sample.

The following is a sample of an XML input:


<acx>
  <connect adapter="orders"/>
  <execute>
    <findOrder ORDER_ID="1"/>
  </execute>
</acx>


XML Output
The output is returned in the Output tab. After you execute the XML request, click the Output tab to view the returned XML. The following example shows the output returned for a request for information about sample order number 1.
<findOrderResponse>
  <ORDER ORDER_ID="0" ORDERED_BY="" N_LINES="0">
    <ADDRESS ADDRESSEE="" STREET="" CITY="" ZIP="" STATE="" COUNTRY="" />
  </ORDER>
</findOrderResponse>


34
Attunity Query Tool
This section contains the following topics:

- Query Tool Overview
- Getting Started with the Query Tool
- Using the Query Builder
- Creating a Query Manually
- Managing Queries
- Using the Query Tool with Transactions

Query Tool Overview


The query tool lets you create SQL queries and execute them. It is an easy way to build SQL queries for any AIS configuration. Because you configure AIS through Attunity Studio, it is easy to build the queries that you need while you set up AIS. The tool works in two ways:

- Using the Query Builder lets you select the query type, tables, and other items that are manipulated in a query and add them to a table in the Query Tool interface. Selecting the items builds the query automatically.
- Creating a Query Manually lets you add additional parts to any SQL statement or build a new query from scratch. Any query statement types that you use when you build a statement must be supported by AIS. For more information on the types of SQL that Attunity supports, see Attunity SQL Syntax.

You can save any queries created with the Query Tool, so you can open and reuse them later. The Query Tool also lets you work with Transactions.

Getting Started with the Query Tool


The Query Tool is a part of Attunity Studio. It opens in the Attunity Studio workbench editor. This section describes how to get started with the Query Tool. It contains the following topics:

- Where You Can Execute Queries
- Opening the Query Tool
- The Query Builder Interface


Where You Can Execute Queries


You can open the Query Tool and execute queries on tables for various locations. These locations are represented in the trees in the Configuration view and the Metadata view. The Query Tool is available in the:

- Configuration view for:
  - Data Sources
  - Bindings
- Metadata view for:
  - Tables
  - Synonyms
  - Views
  - Data Sources

Opening the Query Tool


You open the Query Tool from a shortcut menu in the Configuration or Metadata view of Attunity Studio.
To open the Query Tool
1. From the Metadata or Configuration view in Attunity Studio, right-click an item where you can query tables and select Query Tool. For a list of items where the Query Tool is available, see Where You Can Execute Queries.
2. Select the active workspace for executing SQL queries from the Select Workspace dialog box. The available workspaces are listed in the drop-down list.

Figure 34-1 The Select Workspace Dialog Box

Note: If only one workspace is available, the Query Tool will open without displaying this dialog box.

3. Click OK to open the Query Tool. The Query Tool opens in the Editor. See Query Tool Main Screen.


The Query Builder Interface


The Query Tool is displayed in the Editor of the Attunity Studio workbench. The following figure shows the Query Tool and its parts.
Figure 34-2 Query Tool Main Screen

The following table describes each part of the query tool:


Table 34-1 Query Tool Parts
- Path bar: The path bar shows you a path with the machine and binding (active workspace) where your tables are located.
- File management buttons: These buttons provide the basic file functions for any program. For example, you can save a query that you create and then open it to use again at a later time. For more information, see Managing Queries.
- Select query type: The query type drop-down list lets you select from four types of queries: Select, Update, Insert, and Delete. You will receive an error in your results if you use a query type that is not supported by the data source for your tables (for example, if your data source is not updatable and you use an Update query type, you will receive an error). For more information, see Using the Query Builder.
- List of Tables, Columns, and their data sources: This is a hierarchical view with tree nodes that display the tables, columns, synonyms, and views for each data source in the active workspace. Select an item in the list and then use one of the Move Item Buttons to move the item into or out of the Query Builder main window. You can also drag an item into the Query Builder main window.
- SQL View button: Click the SQL View button to see a list of the columns in a selected table. The SQL view provides you with information about the data in each column. For more information, see Viewing Table Column Information.
- Move Item Buttons: Use these buttons to move tables, columns, or other items from the List of Tables, Columns, and their data sources on the left side to one of the tabs in the Query Builder main window.
- Query Builder main window: This window lists the tables, columns, or other items that you use to build your query. This window has up to six Query Tabs, depending on the query type selected.
- Query Tabs: These tabs represent different parts of a query that you build in the query builder. For example, if you want to add a WHERE clause, click the Where tab to add the columns or tables into this tab. For more information, see Using the Query Builder.
- SQL query area: The actual SQL query that is built by moving the items into the Query Builder main window is displayed here. You can select Enable manual query editing to manually edit and build the query. In this mode the query area becomes the Query Editor. When you use the editor, you can use any query type that is supported by AIS. For more information about building queries, see Using the Query Builder. For information on creating or editing a query manually, see Creating a Query Manually.
- Transaction verb buttons: Use these buttons if you want to execute a transaction using the Query Tool. For more information, see Using the Query Tool with Transactions.

Using the Query Builder


The query builder is a part of the Query Tool that helps you create a basic SQL query. You use the Query Builder to build a Select, Update, Insert, or Delete query type. To create a basic query, follow this procedure:
To build a query
1. In Attunity Studio, open the Query Tool (see Opening the Query Tool).
2. Select a Query Type.
3. Select the tables and columns that you want to query and move them into the correct tab in the Query Builder main window. See The Tables Tab and The Columns Tab.
4. If necessary, click the Where tab and add a WHERE clause. See The Where Tab.
5. If necessary, click the Group tab to add a GROUP BY clause. See The Group Tab.
6. If necessary, click the Having tab to add a HAVING clause. See The Having Tab.
7. If necessary, click the Sort tab to add a SORT BY clause. See The Sort Tab.
8. Click Execute to execute the query. In this mode the query is automatically committed. The results are automatically displayed. You can move between the Query Builder and the results by clicking the Query Builder and Query Results tabs at the bottom of the Query Tool.
Note: The procedure described above assumes you are using the automatic mode, where the query is committed as soon as you execute it. You can also use the manual mode, where you must manually commit. This allows you to execute transactions with the Query Tool. For more information, see Using the Query Tool with Transactions.
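For illustration only, a query assembled from the Tables, Columns, Where, and Sort tabs might read as follows. This is a sketch: the ORDERS table and its values are hypothetical, and it assumes the Sort tab emits a standard SQL ORDER BY clause.

SELECT ORDER_ID, ORDERED_BY
FROM ORDERS
WHERE ORDERED_BY = 'SMITH'
ORDER BY ORDER_ID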

The Tables Tab


You enter the tables that you want to work with in the Tables tab. You can use this tab to create a simple FROM clause statement, such as Select * from Customer. You must include a table in this tab or a column from The Columns Tab for any query that you build. The following figure shows the Tables tab.
Figure 34-3 Table Tab

The Tables tab has the following columns:


- Table: Enter a table in this column.
- Alias: Create an alias for the table by entering a name for the alias.

To add a table to the Tables tab, follow this procedure.
To add a table to the Tables tab
1. From the list of tables on the left side, select the table you want to add. If you do not see the table in the list, find the data source for the table and expand it by clicking the + next to it.
2. Do one of the following to enter the selected table in the Tables tab:
- Select the table you want to add to the query and click the right arrow move item button.
- Double-click the table you want to add to the Query Builder window.
- Drag the table into the Query Builder window.


The table is entered in the SQL Query at the bottom of the Query Tool as shown in the following figure.
Figure 34-4 Table Only Query
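For example, with only the Customer table from the example above in the Tables tab, the generated table-only query is the simple statement shown earlier:

Select * from Customer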

You can make changes to the queries that you build by adding more items to the Query Builder, or by removing them. For removing items, see Managing Queries.

Note: An asterisk (*) is added to the tab when any information is entered. This reminds you that information is entered in a tab when viewing other tabs.

The Columns Tab


You enter the columns that you want to work with in the Columns tab. You can use this tab to create a FROM clause statement, such as Select <column Name> from Customer. You must include a column in this tab or a table from The Tables Tab for any query that you build.
Note: The Columns tab is not available for Delete query types.

To add a column to the Columns tab, follow this procedure.
To add a column to the Columns tab
1. From the list of columns on the left side, select the column you want to add. If you do not see the column in the list, find the data source for the table with the column and expand it by clicking the + next to it; then expand the table, if necessary, to see the columns.
2. Do one of the following to enter the selected column in the Columns tab:
- Select the column you want to add to the query and click the right arrow move item button.
- Double-click the column you want to add to the Query Builder window.
- Drag the column into the Query Builder window.

The column is entered in the SQL Query at the bottom of the Query Tool as shown in the following figure.
Figure 34-5 Column Only Query
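Continuing the Select <column Name> from Customer form shown above, a column-only query with a hypothetical Customer_Name column reads:

Select Customer_Name from Customer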


You can make changes to the queries that you build by adding more items to the Query Builder, or by removing them. For removing items, see Managing Queries.

Note: An asterisk (*) is added to the tab when any information is entered. This reminds you that information is entered in a tab when viewing other tabs.

The Where Tab


You use the Where tab to create a WHERE clause to add to your query. You can add a WHERE clause to Select, Update, and Delete queries. The WHERE clause is added to the query at the bottom of the Query Tool. The following figure shows the Where tab.
Figure 34-6 The Where Tab

The Where tab has the following columns:
- Column: Enter a column from the table that is referred to in the WHERE clause. Select the column from the Column list to the left.
- Operator: Select one of the following operators for the WHERE clause:
  - like: (For strings) Use this to indicate that the results will contain columns with parts that contain the entered value. For example, if you enter Building, all columns that contain the word Building or any part of it are returned in the results.
  - not like: (For strings) Use this to indicate that the results will not contain columns with parts that contain the entered value. For example, if you enter Building, no columns that contain the word Building or any part of it are returned in the results.
  - is: Use this to return columns that are null.
  - is not: Use this to return columns that are not null.
  - =: Returns results for columns with a value that is equal to or the same as the value entered.
  - <>: (For numbers) Returns results for columns with a value that is different than the value entered.
  - >: (For numbers) Returns results for columns with a value that is greater than the value entered.
  - <: (For numbers) Returns results for columns with a value that is less than the value entered.
  - >=: (For numbers) Returns results for columns with a value that is greater than or equal to the value entered.
  - <=: (For numbers) Returns results for columns with a value that is less than or equal to the value entered.
  - between: (For numbers) Returns results for columns with a value that is between the values entered.
  - not between: (For numbers) Returns results for columns with a value that is not between the values entered.
- Value: Enter a value from the column you entered for this WHERE clause. For example, if you have a column with data types, you can enter a data type that exists in the column.
- Logical: Select AND or OR if you want to add another line to the WHERE clause. AND will include items that match both lines in the condition, and OR will include items that match one or the other.

For example, to create the WHERE clause, WHERE Customer Marketing Segment = 'HOUSEHOLD' AND Comment = 'Had 3 complaints last year', enter the following:
For the first line:
- Column: Select and move the Customer Marketing Segment column into the Where tab.
- Operator: Select = from the drop-down list in this column.
- Value: Type HOUSEHOLD in this column.
- Logical: Select AND from the drop-down list in this column.
For the next line:
- Column: Select and move the Comment column into the Where tab.
- Operator: Select = from the drop-down list in this column.
- Value: Type Had 3 complaints last year in this column.
The following shows how this query may be displayed in the Query Tool.
Figure 34-7 WHERE Clause
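Put together, the builder displays a statement of roughly this shape. This is a sketch: the quoting shown and the exact identifier form for column names containing spaces depend on the data source.

Select * from Customer
where Customer Marketing Segment = 'HOUSEHOLD'
and Comment = 'Had 3 complaints last year'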


To enter a WHERE clause, follow this procedure.
To enter a WHERE clause
1. From the list of columns on the left side, select the column you want to be part of the WHERE clause. If you do not see the column in the list, find the data source for the table with the column and expand it by clicking the + next to it; then expand the table, if necessary, to see the columns.
2. Do one of the following to enter the selected column in the Where tab Column:
- Select the column you want to add to the query and click the right arrow move item button.
- Double-click the column you want to add to the Query Builder window.
- Drag the column into the Query Builder window.
3. Select an operator from the drop-down list in the Operator column.
4. Type the value you want to refer to in the WHERE clause in the Value column.
5. If you are adding additional conditions to the WHERE clause, select either logical condition, AND or OR, from the Logical column. Note that AND is the default value for this column.

You can make changes to the queries that you build by adding more items to the Query Builder, or by removing them. For removing items, see Managing Queries.

Note: An asterisk (*) is added to the tab when any information is entered. This reminds you that information is entered in a tab when viewing other tabs.

The Group Tab


The Group tab lets you create a GROUP BY clause for the query. This determines how the result is displayed when you execute the query. To create a GROUP BY clause, you select at least one column to be used as the GROUP BY criteria.
Note: The Group tab is available only for Select queries.

To add a GROUP BY clause, follow this procedure.
To add a GROUP BY clause
1. From the list of columns on the left side, select the column you want to use as the GROUP BY criteria. If you do not see the column in the list, find the data source for the table with the column and expand it by clicking the + next to it; then expand the table, if necessary, to see the columns.
2. Do one of the following to enter the selected column in the Group tab:
- Select the column you want to add to the query and click the right arrow move item button.
- Double-click the column you want to add to the Query Builder window.
- Drag the column into the Query Builder window.

The GROUP BY clause is entered in the SQL Query at the bottom of the Query Tool as shown in the following figure.
Figure 34-8 GROUP BY Clause
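For example, grouping on a hypothetical City column would append a GROUP BY clause (a sketch):

Select City from Customer
group by City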

You can make changes to the queries that you build by adding more items to the Query Builder, or by removing them. For removing items, see Managing Queries.

Note: An asterisk (*) is added to the tab when any information is entered. This reminds you that information is entered in a tab when viewing other tabs.

The Having Tab


You use the Having tab to create a Having clause to add to your query. You can add a HAVING clause to a query where you added a GROUP BY. The HAVING clause is added to the query at the bottom of the Query Tool. The following figure shows the Having tab.
Note: The Having tab is available only for Select queries.

Figure 34-9 The Having Tab

The Having tab has the following columns:
- Column: Enter a column from the table that is referred to in the HAVING clause. Select the column from the Column list to the left.
- Operator: Select one of the following operators for the HAVING clause:
  - like: (For strings) Use this to indicate that the results will contain columns with parts that contain the entered value. For example, if you enter Building, all columns that contain the word Building or any part of it are returned in the results.
  - not like: (For strings) Use this to indicate that the results will not contain columns with parts that contain the entered value. For example, if you enter Building, no columns that contain the word Building or any part of it are returned in the results.
  - is: Use this to return columns that are null.
  - is not: Use this to return columns that are not null.
  - =: Returns results for columns with a value that is equal to or the same as the value entered.
  - <>: (For numbers) Returns results for columns with a value that is different than the value entered.
  - >: (For numbers) Returns results for columns with a value that is greater than the value entered.
  - <: (For numbers) Returns results for columns with a value that is less than the value entered.
  - >=: (For numbers) Returns results for columns with a value that is greater than or equal to the value entered.
  - <=: (For numbers) Returns results for columns with a value that is less than or equal to the value entered.
  - between: (For numbers) Returns results for columns with a value that is between the values entered.
  - not between: (For numbers) Returns results for columns with a value that is not between the values entered.
- Value: Enter a value from the column you entered for this HAVING clause. For example, if you have a column with data types, you can enter a data type that exists in the column.
- Logical: Select AND or OR if you want to add another line to the HAVING clause. AND will include items that match both lines in the condition, and OR will include items that match one or the other.

To enter a HAVING clause, follow this procedure.
To enter a HAVING clause
1. From the list of columns on the left side, select the column you want to be part of the HAVING clause. If you do not see the column in the list, find the data source for the table with the column and expand it by clicking the + next to it; then expand the table, if necessary, to see the columns.
2. Do one of the following to enter the selected column in the Having tab Column:
- Select the column you want to add to the query and click the right arrow move item button.
- Double-click the column you want to add to the Query Builder window.
- Drag the column into the Query Builder window.
3. Select an operator from the drop-down list in the Operator column.
4. Type the value you want to refer to in the HAVING clause in the Value column.
5. If you are adding additional conditions to the HAVING clause, select either logical condition, AND or OR, from the Logical column.

The HAVING clause is entered in the SQL Query at the bottom of the Query Tool as shown in the following figure.
Figure 34-10 HAVING Clause
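For example, continuing the grouped query above with a condition entered in the Having tab (the City column and the value are hypothetical):

Select City from Customer
group by City
having City <> 'BOSTON'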

You can make changes to the queries that you build by adding more items to the Query Builder, or by removing them. For removing items, see Managing Queries.

Note: An asterisk (*) is added to the tab when any information is entered. This reminds you that information is entered in a tab when viewing other tabs.

The Sort Tab


You use the SORT tab to create a SORT BY clause to add to your query. A SORT BY clause orders the query results according to the column you select as the criteria. The following figure shows the SORT tab.
Figure 34-11 SORT Tab

The Sort tab has the following columns:


- Column: Enter a column to use as the criteria for the order of the results.
- Order: Select ascending or descending from the drop-down list.

For adding a SORT BY clause, follow this procedure.


To add a SORT BY clause
1. From the list of columns on the left side, select the column you want to be the criteria for the result order. If you do not see the column in the list, find the data source for the table with the column and expand it by clicking the + next to it; then expand the table, if necessary, to see the columns.
2. Do one of the following to enter the selected column in the Sort tab:
- Select the column you want to add to the query and click the right arrow move item button.
- Double-click the column you want to add to the Query Builder window.
- Drag the column into the Query Builder window.
3. Select ascending or descending to determine the order of the sort.
4. Enter any additional columns to add to the criteria. You can change the order in which the columns appear by moving them up or down in the list. Select a column and click Up or Down to move the selected column.

When you are finished, the SORT BY clause is entered in the SQL Query at the bottom of the Query Tool. You can make changes to the queries that you build by adding more items to the Query Builder, or by removing them. For removing items, see Managing Queries.
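For example, sorting on a hypothetical City column in ascending order would produce a clause of this shape (a sketch, assuming the Sort tab emits a standard SQL ORDER BY):

Select * from Customer
order by City asc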

Note: An asterisk (*) is added to the tab when any information is entered. This reminds you that information is entered in a tab when viewing other tabs.

Creating a Query Manually


The Query Tool also lets you build a query manually. When you build a query manually, you select manual editing. Manual editing lets you do the following:

- Type the query manually so that you can enter the best SQL for your purpose, without leaving Attunity Studio.
- Add to or refine a query that you built with the Query Builder.
- Add to or refine a query that you created earlier.

Although the Query Builder supports only four query types, when you manually edit or create a query you can use any AIS-supported query type. If you want to use manual query editing, carry out the following procedure.
To manually edit a query
1. Select the Enable manual query editing check box. The query area changes to a white background and becomes the Query Editor.
2. In the SQL Query Editor, type or change any of the text to create your query.
3. Click Compile to compile the query and make it ready for execution. The Edited icon at the bottom of the Query area changes to Compile OK with a check mark to indicate that the query is compiled and ready for execution.

4. Click Execute to execute the query and automatically commit it, or click Manual to start the manual execution mode for transactions. See Using the Query Tool with Transactions.

The following figure shows the SQL Query area enabled for manual editing.
Figure 34-12 Query Editor

Note: If you create a manual query and change back to the Query Builder mode, some or all parts of the query may be lost. Therefore, you should not create a query manually and then return to the Query Builder to add to it.
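For example, manual editing lets you run statement types that the Query Builder does not generate directly, such as a set operation (a sketch; the Supplier table is hypothetical):

Select City from Customer
union
Select City from Supplier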

Managing Queries
The Query Tool supports basic file management functions. This lets you do the following:

- Load and use queries that were created at an earlier time.
- Save a query to use it at a later time.
- Save a query with a different name to create a new query that is based on the old one.
- Clear all information in the Query Builder.

The following table describes the buttons used for the file management functions.
Table 34-2 Query Tool File Management Functions
- Load Query: Click this button to load a saved query.
- Save Query: Click this button to save a query. The first time you save the query, enter a name in the Save query dialog box. You can also enter a description of the query.
- Save Query As: Click this button to save a query with a new name. Enter a name in the Save query dialog box. You can also enter a description of the query. See Save Query.
- Clean Query Builder and editor: Click this button to remove all entries in the Query Builder. This will remove all entries in all of the tabs. Check all of the tabs with entries to be sure that you want to remove them globally. Note: This only removes the entries from the Query Builder; it does not erase anything physically from the database.
You can also remove one item or multiple items on a single tab. To remove items per tab, select one item from the Tables and Columns list, or use the CTRL or SHIFT key to select multiple items, and click the remove button. To remove all of the items in the tab, click the remove-all button.
Note: If you remove a table, all columns from that table that you entered on any tab are also removed. Be sure to check all tabs with an asterisk (*) to be sure that you are not removing items you do not want removed.

Note: All saved queries that you load into the Query Tool are opened in manual mode. If you change back to the Query Builder mode, some or all parts of the query may be lost. Therefore, you should not load a query to edit it in the Query Builder.


Viewing Table Column Information


You can view information about a table. To view this information, click the SQL View button. This opens the SQL View window. The window presents read-only information about each of the columns in the table. The following table describes the information presented in this window.
Table 34-3 SQL View Window
- Name: The name of the column.
- Data Type: The data type supported by that column. For example, string or integer.
- Size: Indicates the maximum size allowable for the data in the column. The size is in standard units for the data type. For example, a string with size 40 can have no more than forty characters.
- Scale: Indicates the number of digits allowed after the decimal point for a numeric value.
- Precision: Indicates the total number of digits allowed for a numeric value in the column. If the value has a scale of one or more, then the total number of digits allowed before the decimal point is the precision value minus the scale value. For example, a value with precision 4 and scale 2 can be no larger than 99.99.
- Nullable: Indicates whether the column can have a null value. If True, the column is nullable.

Using the Query Tool with Transactions


In most cases when you execute a query with the Query Tool, the query is committed automatically. This does not let you roll back a query. If you have a set of queries that you want to execute together as a transaction, you must be able to roll back the transaction. In this case you use the manual commit mode to execute a transaction.
Note: When you execute a transaction, you are executing it against all of the available data sources in the selected binding. Before you begin a transaction, you must make sure that all of the data sources in the binding support transactions. If any one of the data sources does not support transactions, an error message is displayed when you try to commit the transaction. However, it is possible that the system will use the auto-commit mode in some cases and that some data may be lost.

To execute a transaction, use the four buttons at the top of the Query Tool. The buttons are:
- Manual: Click Manual to enter the manual mode. When this is selected, the other buttons become active. When you click Execute, the query is not automatically committed.
- Begin: Click Begin to start a new transaction sequence. This indicates that the next set of queries that you execute are part of a transaction. Execute each query and then commit when you are finished.
- Commit: Click Commit to commit the transactions that you executed after you clicked Begin. All of the transactions are executed and the transaction sequence ends. To start a new transaction, click Begin.
- Rollback: Click Rollback to roll back all the transactions that were executed after you clicked Begin. This rolls back all the transactions and ends the transaction sequence. To start a new transaction, click Begin.

For more information on creating and building queries, see Using the Query Builder and Creating a Query Manually.
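As a sketch, a manual-mode transaction might proceed as follows; the two statements are hypothetical and are entered and executed one at a time between clicking Begin and Commit:

1. Click Manual, then click Begin.
2. Execute: insert into ORDERS (ORDER_ID, ORDERED_BY) values (2, 'SMITH')
3. Execute: update ORDERS set N_LINES = 1 where ORDER_ID = 2
4. Click Commit to apply both statements, or Rollback to discard them.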



35
SQL Explain Utility
This section contains the following topics:

- SQL Explain Utility Overview
- Activating the SQL Explain Utility
- XML Output

SQL Explain Utility Overview


The AIS Query optimizer is used to optimize the results of an SQL statement. Each statement has a query plan that explains the query optimization. The SQL Explain Utility lets you view this plan in XML. When you activate the explain operation, you can:

- Examine how a query is optimized by AIS.
- View the execution plan for an SQL query in XML output.
- Analyze an existing SQL query execution plan.

The XML output can be used to:


- Create an indented tree view representation of the query plan.
- Format input for a graphical display application, to create a graphical view of the query plan.

Activating the SQL Explain Utility


The SQL Explain utility works in the background by creating an XML file when any SQL statement is sent. You activate the file creation by:

- The explain command in NAV_UTIL.
- The prepareSQL interaction in the query adapter.
- Automatically generating the file using the analyzerQueryPlan property.

Using the Explain Command in NAV_UTIL


You can use NAV_UTIL to create XML files that explain the Query Optimizer plan. Enter the command explain followed by the SQL Statement you want to explain. For example, enter the following:
explain select * from customer


The Query Optimizer plan for this SQL statement is displayed on the screen. The following is an example of the output that you receive for the statement select * from customer.
Figure 35-1 XML Output Generated When Using NAV_UTIL

A file called expn.xml is saved for each statement. By default, this file is saved to the current directory. For more information on file output, see XML Output. You can set the name of the XML file by adding the name, with the .xml file extension, in single quotes to the explain command. For example:
explain 'AIS.xml' select * from customer
The file will now be called AIS.xml. If you want to save to another directory, include the full path with the file name. Use the [-t] switch if you want the output to be a text-indented tree.
Note: For mainframe systems, you must always use the full path, including the high-level qualifier.
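For example, to get the text-indented tree form of the same plan (the [-t] switch is documented above; its exact position on the command line is an assumption here):

explain -t select * from customer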

Using the prepareSQL Interaction in the Query Adapter


You can use the Query Adapter to create XML files that explain the Query Optimizer plan. The prepareSQL interaction is used to get the output. The following explains how to use the XML Utility to activate the SQL Explain utility.
To create an explain XML Query Optimizer plan with the Query Adapter
1. From Programs in the Windows Start menu, point to Attunity, then Server utilities, and select XML Utility.
2. In the XML Utility Execute section, enter the following:
- In the Server field, select the machine where you defined the workspace for the Query Adapter. You define the workspace in Attunity Studio.
- In the Workspace field, select the workspace where you defined the Query Adapter.
- In the Adapter field, select Query.
3. In the Connect section, Interaction field, select prepareSQL.
4. In the Input tab, enter the XML request with the statement. For example, to explain the Query Optimizer plan for the statement select * from customer, enter the following:
<prepareSql sql="select * from customer" datasource="navdemo" explain="true"/>


Figure 35-2 XML Utility with the Query Adapter XML Request

5. Click Execute. You can now see the output in the Output tab.

Figure 35-3 Output XML

Automatically Generating the File Using the analyzerQueryPlan Property


You can generate files with the XML output automatically by setting the analyzerQueryPlan property to true in the binding Environment Properties. You set the Environment Properties in Attunity Studio. To display the environment properties for the binding configuration in Attunity Studio, right-click the binding configuration and select Edit Binding, then click the Properties tab. The Environment Properties are divided into categories. Find the debug category and set the analyzerQueryPlan property to true. The XML files are saved in the TMP folder, with the name expnn.xml, where nn is a number from 01 through 99. The system saves a maximum of 99 files.
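In the binding XML that Attunity Studio maintains, the resulting setting might look like the following sketch; the exact element layout is an assumption, and in Studio you simply set the property in the debug category of the Properties tab:

<environment>
  <debug analyzerQueryPlan="true"/>
</environment>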


XML Output
This section describes the XML output that is returned by the SQL Explain Utility. The XML output is organized into verbs, in two main categories:
- Tree Node verbs
- Other verb types

Tree Node Verbs


The following table describes the Tree Node verbs:
Table 35-1 Tree Node Verbs
- root. Possible child verbs:
  - cost: The final cost of the plan.
  - sql: The original SQL statement.
- orderBy. Possible child verbs:
  - orderColumns: A list of the columns by order.
  - orderColumn: Has the children name and order.
- rdbms. Possible child verbs:
  - datasource: The datasource name.
  - remotePlan: The plan built on the remote machine.
  - Parameters: The parameters that are sent to the RDBMS.
  - Metadata: The RDBMS metadata.
- select. Possible child verb:
  - columns: The columns that are in the select list. Columns are listed in the order they occur in the query.
- aggregateSelect. Possible child verb:
  - columns: The aggregate columns. Columns are listed in the order they occur in the query.
- filter. Possible child verb:
  - expression: The filter predicate.
- storeProcedure. Possible child verbs:
  - datasource: The stored procedure's target datasource.
  - Name: The name of the Qspec.
  - Parameters: The stored query spec parameters.
- rdbmsBaseTable. Possible child verb:
  - rdbmsBaseTable: Contains the RDBMS base table information.
- baseTable. Possible child verb:
  - baseTable: Contains the basic table information.
- semiJoin. Possible child verbs:
  - type: The join type.
  - duplicates: The number of predicate duplications.
  - cacheLimit: The maximum size, in bytes, for the cache.
  - index: The duplicated index columns.
  - leftColumns: The left columns for the index.
- hashJoin. Possible child verbs:
  - type: The join type. Possible values are LOJ and CROSS.
  - buckets: The number of hash buckets.
  - estRowsCaching: The estimated number of rows that are cached before writing to disk.
  - index: The hash index columns.
  - leftColumns: The left columns for the index.
  - rightColumns: All cached right columns.
- cache. Possible child verbs:
  - type: The index type. Possible values are index, subquery, or no index.
  - expression: The cache predicate.
  - index: The indexed columns in the cache.
  - cachedColumns: The cached columns.
  - Max/Initial Buffer Size: The cache sizes (maximal or initial).
- nestedJoin. Possible child verbs:
  - expression: The nested join predicate.
  - type: The join type. Possible values are LOJ and CROSS.
- set. Possible child verb:
  - type: The set operation type. Possible values are UNION, MINUS, INTERSECT, and UNION ALL.
- remotePlan. Possible child verb:
  - root: The root verb of the plan on the remote machine.
- distinct.

Other Verb Types


The following table describes the other verb types:
Table 35-2 Other Verb Types
- metadata. Possible child verbs:
  - table: The list of verbs that describe the table in the metadata.
  - procedure: The list of procedures described in the metadata.
- table. Possible child verbs:
  - name: The table name.
  - datasource: The datasource that the table belongs to.
  - rowsNumber: The estimated number of rows.
  - columns: The description of the columns.
  - indexes: The description of the index.
- procedure: The list of procedures.
- column. Possible child verbs:
  - name: The column name.
  - type: The type of column.
  - uniqueValues: The estimated number of unique values in the column.
  - complexColumns: Used if the column represents a chaptered column or an expression.
- index. Possible child verbs:
  - name: The index name.
  - uniqueValues: The estimated number of unique values in the index.
  - indexProperties: The index properties. Possible values are unique, clustered, hashed, pseudo, hierarchical, and array.
  - columns: The columns that the index is constructed from.
- expression. Possible child verbs:
  - stringValue: The expression as a string.
  - expressionNode: The expression tree element. This can include the root treeNode, an operator (such as = or -), a constant, or a bindParameter.
- columns: The list of columns.
- indexes: The list of indexes.
- chapter: The list of chapters.
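As an illustrative sketch only, an expn.xml file combines these verbs in a nested structure along these lines; the element names follow the verbs above, but the exact attributes and nesting of a real plan may differ:

<root>
  <sql>select * from customer</sql>
  <cost>...</cost>
  <select>
    <columns>...</columns>
    <baseTable>...</baseTable>
  </select>
</root>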


36
Using Attunity Query Analyzer Utility
This section contains the following topics:

- Overview
- Using the Query Analyzer
- Viewing the Execution Plan for an SQL Query
- Generating a Plan for Every SQL Statement
- The SQL Statement Plan
- The Query Analyzer Toolbar
- The Query Analyzer Icons
- Working with an Optimization Plan

Overview
AIS includes the Query Analyzer utility. This utility enables you to:

- Examine how a query is optimized by AIS.
- Interactively view the execution plan for an SQL query.
- Analyze an existing SQL query execution plan.

Using the Query Analyzer


The Query Analyzer graphically displays the query execution strategies for an SQL statement (called a plan). Using the Query Analyzer you can identify the strategies selected by the query optimizer. The query execution plan is displayed as a tree structure, which shows both how the query is broken down into constituent parts and the strategy that is used with each part. Once a plan is generated it can be saved and analyzed without reference to any of the machines or data sources that were used in the query.
Note: You can automatically generate a plan for every SQL statement that is executed by any application using AIS.

Start the Query Analyzer by selecting Start, Programs, Attunity, Server utilities, and then select Query Analyzer.


Viewing the Execution Plan for an SQL Query


You can analyze the plan that is generated, or save both the SQL and the plan for later analysis.
To interactively view the optimization plan for a query:
1. Select Start, Programs, Attunity, Server utilities, and then select Query Analyzer to start the Query Analyzer.
2. If you do not want to use the default binding, select the binding where the data source you want to access is defined.
3. Select File, and then Analyze SQL. The Analyze SQL Statement screen is displayed, as shown in the following figure:

Figure 36-1 Analyze SQL Statement screen

4. Select the default data source (or include the data source as part of the query, as follows: ds:table).
5. Enter the SQL you want to analyze.
Note: To execute SQL that was previously saved, click Load SQL.

6. Click Analyze. The optimization plan is displayed.
Note: You can save the plan by selecting File, and then Save Plan As.

Generating a Plan for Every SQL Statement


You can generate a plan for every SQL statement that is executed by any application using AIS.
To generate a plan for every SQL statement
Set the analyzerQueryPlan environment parameter to true, in the binding editor, debug node.

The SQL Statement Plan


A plan is displayed as a tree structure, with each node of the tree represented by an icon indicating the strategy utilized by the Query Processor at that point.


For example, the following plan shows that a number of different strategies are used, including hash joins and semi-joins.
Figure 36-2 A Statement Plan

The Query Analyzer Toolbar


The following table lists and describes the toolbar buttons and their functionality:
Table 36-1 Query Analyzer Toolbar Buttons
- File, Analyze SQL: Enables you to specify a query in order to analyze the optimization plan generated for it by the Query Analyzer.
- File, Open Plan: Enables you to open a previously saved plan.
- File, Save Plan As: Enables you to save the displayed plan.
- File, Save Plan's SQL As: Enables you to save the SQL that generated the displayed plan.
- Binding, Select Binding: Enables you to select the binding configuration containing the connection information for the data sources the query references.
- View, One Level Up: Enables you to navigate up within the levels of a plan, one level at a time. Levels are created for subqueries, chapters, and when part of a query is executed by a remote AIS Query Processor.

The Query Analyzer Icons


The following table lists and describes the icons used by the Query Analyzer to represent the different optimization strategies:
Table 36-2 Query Analyzer Icons. Each icon in a plan represents one of the following:
- The SQL text in the current query.
- The list of retrieved expressions. If the select list includes a chapter, a link to a subordinate optimization plan is shown in the displayed box. Click this link to display the subordinate plan.
- Aggregate Select: Indicates an aggregate function is included on at least one column.
- Returns the rows of the <select> statement, discarding any duplicate rows.
- Returns all the rows returned by the <select> statements, including duplicate rows.
- Returns rows common to both result sets returned by the <select> statements, discarding duplicate rows.
- Returns rows that appear only in the first <select> statement, discarding duplicate rows.
- Returns a rowset composed of unique rows.
- For every row from the left side, an iteration of the records from the right side is performed and matching records are output. LOJ indicates a left outer join logic is performed.
- A hash join strategy is used at this point. All the right-side rows are retrieved and broken down into buffer-sized chunks (buckets) and written to a local disk. The left-side rows are partitioned using the same key, so that each partition on the left side corresponds to the same bucket from the right side. A local join is performed between each partition of the left side and the corresponding memory-resident bucket of the right side. LOJ indicates a left outer join logic is performed.
- A semi-join strategy is used at this point. In every iteration of the join, a number of left-side rows are retrieved and cached in memory, and a query is formulated to retrieve all of the potentially relevant right-hand rows. LOJ indicates a left outer join logic is performed.
- The predicates used to filter the data. If one of the predicates is a subquery, a link to a subordinate optimization plan for this subquery is shown in the pop-up information panel. Click this link to display the subordinate plan showing the subquery optimization.
- Ordering of the output is performed according to the ORDER BY clause in the SQL statement.
- A lookup cache strategy is used at this point. The table is read once into memory and efficiently accessed in memory via an index that is built for it.
- All data is read into memory at one time and accessed in memory.
- Data from the left side of the tree is cached together with corresponding data from the right side of the tree. On subsequent calls, data is fetched from the cache if it is available.
- Rows returned by a subquery are cached.
- The data source is accessed by the specific piece of SQL.
- The part of the SQL is processed by a remote Query Processor. A link to a subordinate optimization plan for this part of the SQL is shown in the displayed box. Click this link to display the optimization performed by the remote Query Processor.
- A relational data source table is accessed. The information panel includes the known statistics for this table.
- A file system table is accessed. The information panel includes the known statistics for this table.
- A stored procedure (stored query or Attunity Connect procedure) is accessed. The name and data source where the stored procedure resides, along with result columns and parameter values, are displayed.

Working with an Optimization Plan


You can view an optimization plan by executing an SQL statement or by loading an existing plan. As you move the cursor over an icon in the plan displayed, additional relevant information is displayed by means of pop-up panes. The following table summarizes the controls for displaying the additional information:


Table 36-3 Information Pane Controls
- Moving the cursor over an icon: The information pop-up pane is displayed.
- Clicking an icon: The information pane is permanently displayed (until closed, see below). Clicking an icon when a pane is displayed causes the title bar to blink (to identify the relevant pane).
- Double-clicking an information pane, or clicking the Close icon on the information pane: The information pane closes.

A query optimization plan can include subordinate plans. You display a subordinate plan by clicking on a jump string, located at the lower area of the relevant additional information pane. The following diagram shows a subordinate plan, generated for a chapter, and displayed by clicking the Chapter0 jump string in the Select list information pane:
Figure 36-3 Subordinate Plan

The following table lists the situations where a subordinate plan is generated:
Table 36-4 Subordinate Plans Generation
- A filter in the SQL statement includes a nested SELECT statement (a subquery). Jump string: SubQueryn (n is the nesting level).
- Part of the query execution is done by the AIS Query Processor on the remote server where the data being accessed resides. Jump string: Remote Optimization.
- The SQL statement includes syntax for a chapter; either braces {} or parentheses (). Jump string: Chaptern (n is the chapter level).


The current level of a plan is shown in the title bar. For example, a subordinate plan, two levels down (resulting from remote processing of a subquery) is displayed as:
Plan->Subquery->Remote

You can navigate through the plan levels by clicking the One Level Up toolbar button, or by selecting View, and then One Level Up from the menu bar.



37
Using NAV_UTIL Utility
This section contains the following topics:

- Overview
- Using the NAV_UTIL Command Line Utility

Overview
AIS includes the NAV_UTIL utility, a command-line console that lets you execute a collection of commands, including troubleshooting and metadata utilities.

Using the NAV_UTIL Command Line Utility


Start NAV_UTIL by selecting Start, Programs, Attunity, and then select Command Line Console. This section contains information on the following tasks and commands:

- Running NAV_UTIL
- ADDON
- ADD_ADMIN
- AUTOGEN
- CHECK
- CODEPAGE
- DELETE
- EDIT
- EXECUTE
- EXPORT
- GEN_ARRAY_TABLES
- IMPORT
- LOCAL_COPY
- PASSWORD
- PROTOGEN
- REGISTER
- SERVICE
- SVC
- TEST
- UPDATE
- UPD_DS
- UPD_SEC
- VERSION
- VERSION_HISTORY
- VIEW
- XML

Running NAV_UTIL
This section contains information on the following topics:

- Basic NAV_UTIL Syntax
- Activating NAV_UTIL
- Running NAV_UTIL from a Shell Environment
- Running NAV_UTIL on a Java Machine

Basic NAV_UTIL Syntax


The basic NAV_UTIL syntax reflects the general syntax of the command line utility. Keep the meaning of the following symbols in mind:
- Plain text: An absence of symbols signifies a keyword, which must be entered as it appears.
- <>: Parameters inside angle brackets must be entered in context. For example, <data_source> must be replaced with the appropriate data source on which you wish to conduct the transaction at hand.
- []: Parameters inside square brackets are optional. You can use a combination of angle and square brackets, signifying an optional parameter that, if entered, must be in context. For example: [<data_source>].
- |: Signifies "or". For example, <bindings | datasource | remote_machine> signifies any one of the parameters inside the angle brackets.

Activating NAV_UTIL
The syntax for activating NAV_UTIL is as follows:
Example 37-1 NAV_UTIL Activation

nav_util [<options>] <command_name> [<utility_params>]

Where:
- [<options>]: General options that dictate the way the utility runs, such as the machine where the utility runs:
  - -p<password>: The master password specified for the user profile with the name specified in the -u parameter (or the default NAV user profile if the -u option is not specified). If a master password has been set, use of NAV_UTIL requires this password.
  - -u<name>: The name of a user profile to be used other than the default (NAV).
  - -b<binding_name>: A binding setting other than the default (NAV) binding configuration.
  - -nowait: Eliminates the Press any key to continue prompt. This is useful when batching multiple commands in a script.
  - -command: Runs the utility from a shell environment.
  - -db: Runs the utility on an Attunity Connect or Attunity Federate virtual database.
- <command_name>: The name of the command you want to execute.
- [<utility_params>]: Command-specific parameters. If you do not supply the command parameters, you are prompted for them.

Running NAV_UTIL is platform-dependent. The following table describes how to activate NAV_UTIL on the different platforms.
- Windows: On the taskbar, click the Start button and point to Programs. Point to Attunity and select Command Line Console. Using the Command Line Console to run NAV_UTIL ensures that the environment settings for AIS are correct.
- UNIX and OpenVMS: Execute nav_login to establish the environment. On OpenVMS, activate this command directly from DCL. On UNIX, activate this command from the shell. For details, see Running NAV_UTIL from a Shell Environment.
- HP NonStop: If AIS is not set up to work with 2PC, make sure the TMF transaction utility is active (that is, the transaction environment property convertAllToDistributed is set to true).

Running NAV_UTIL from a Shell Environment


You can run NAV_UTIL from a shell environment. To start the local shell, run NAV_UTIL with the -command parameter, as follows:
nav_util -command [-nowait]

If you specify a utility before the -command parameter, this utility is run prior to starting the shell environment. For example, to execute the network utility before you start the shell, run:
nav_util check network -command

Where:
- -nowait: Eliminates the Press any key to continue prompt. This is useful when you batch multiple commands in a script.
From within the shell environment, you can run any of the NAV_UTIL utilities as follows:
- On the fly: Enter the command (without nav_util at the beginning). Press Enter to execute the command.
- From a file: Enter the full name of a file that contains NAV_UTIL commands, prefixed by @. The file is a text file (with any extension). Multiple commands in the file must be separated by semicolons (;). Press Enter to execute the commands contained in the file. For example, to execute the commands contained in the navutil.txt file, enter the following:
Local> @C:\Program Files\Attunity\Connect\tmp\navutil.txt
Note: On z/OS systems, use single quotes ('') around the file name. For example: @'NAVROOT.TMP.NAVUTIL1'

You can access the shell environment and run a file immediately by entering the following command:
nav_util [-options] <-command @navutil_file>

Where:

- options: See Activating NAV_UTIL.
- navutil_file: The name of the file containing the NAV_UTIL commands.

Quit the shell by typing either quit or exit at the prompt and pressing Enter.
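For example, a short shell session might look like this (the machine name is hypothetical; the check commands are described under CHECK):

nav_util -command
Local> check tcpip
Local> check irpcd(prod.acme.com)
Local> quit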

Running NAV_UTIL on a Java Machine


To execute NAV_UTIL on a Java machine, use the following syntax:
java Navutil <command_name> [<command_params>]

Where java is the command to run a Java class (for example, java under UNIX, or jview under z/OS and Windows). The format is case-sensitive (Navutil, not navutil). Use this option to check that you can use AIS in your Java environment. The following options are available using the Java version of NAV_UTIL:
- execute: To check access to data. See EXECUTE.
- check: To check the client/server interaction. See CHECK.

ADDON
The ADDON command adds parameters to the $NAVROOT/def/addon.def file.
Example 37-2 ADDON Syntax

nav_util addon <file_name>

Where:

- file_name: The file that contains the new parameter.


ADD_ADMIN
The ADD_ADMIN command enables you to specify which users can manage the machine where this command is run, from within AIS Studio.
Example 37-3 ADD_ADMIN Syntax

nav_util add_admin <admin_username> | *

Where:

- admin_username: The name of a valid user who can administer the current machine from within Attunity Studio.
- *: All users can administer the current machine from within Attunity Studio.
Note: The user specified can be changed from within Attunity Studio.
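For example, to let a (hypothetical) user named sysadmin administer this machine from Attunity Studio:

nav_util add_admin sysadmin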

AUTOGEN
The AUTOGEN command enables you to generate an adapter definition for specific AIS application adapters.
Example 37-4 AUTOGEN Syntax

nav_util autogen <adapter_name> [-new] <answer_file> [-def <definition_name>] [-file <definition_file_name>]

Where:

- adapter_name: The name for the adapter in the <adapter> statement in the binding configuration.
- -new answer_file: The XML file to which the specified definition template is written. You can generate an XML template for an adapter definition for an AIS application adapter. The template contains empty fields; you can use it to create an adapter definition to import to the AIS repository.
- answer_file: The input file with the adapter definition.
- -def definition_name: The name for the definition in the repository, if this is different from the adapter name. The adapter definition is automatically imported to the AIS repository.
- -file definition_file_name: The XML file to which the adapter definition is written. The definition is generated to an XML file. The adapter definition can be edited and then imported to the repository via the IMPORT command.

CHECK
The CHECK command checks various facets of the client/server system. You can check the following parameters:

check irpcd
check network [port]
check irpcdstat
check tcpip
check server
check license
check datasource

check irpcd
This checks whether an AIS daemon is running. For example, from a UNIX machine, you can check that the daemon is active under z/OS on the production mainframe (prod.acme.com).
Example 37-5 check irpcd Syntax

nav_util check irpcd(prod.acme.com)

check network [port]

This lists the machines that have an active daemon. You can list all machines or specific machines, based on a specified port number.

Example 37-6 check network Syntax

Windows, OpenVMS, and HP NonStop Platforms:
nav_util check network(<port>)

UNIX Platforms:
nav_util check network(<port>)

z/OS Platforms:
NAVROOT.USERLIB(NAVCMD)
At the prompt, enter:
CHECK NETWORK (<port>)

OS/400 Platforms:
call pgm(navroot/navutil) parm (check network(port))

check irpcdstat

This checks the status of a daemon for all workspaces, including active server processes (both those connected to a client and those that are available) and the name and location of the log file and the IRPCD configurations. Use this option to identify server processes that need terminating. You can also check the status of a specific daemon workspace.

Windows, OpenVMS, and HP NonStop Platforms:
nav_util check irpcdstat(<daemon_location>, <workspace> [,<username>, <password>])

UNIX Platforms:
nav_util check irpcdstat(<daemon_location>, <workspace> [,<username>, <password>])

z/OS Platforms:
NAVROOT.USERLIB(NAVCMD)
At the prompt, enter:
CHECK IRPCDSTAT(<daemon_location>, <workspace> [,<username>, <password>])

OS/400 Platforms:
pgm(navutil) parm(check irpcdstat(<daemon_location> <workspace> [<username> <password>]))

Where:

daemon_location: The host name with an optional port number, where the port number is specified after a colon, as follows: machine[:port].
workspace: The name of a workspace defined in the daemon configuration.
username: A user name with permission to access the server.
password: The user's password.

check tcpip
This checks the basic TCP/IP configuration on the machine (as far as AIS can check it).

check server

This checks whether a client can access a specific workspace and the details of the workspace configuration.

Windows, OpenVMS, and HP NonStop Platforms:
nav_util check server(<daemon_location>, <workspace> [,<username>, <password>])

UNIX Platforms:
nav_util check "server(<daemon_location>, <workspace> [,<username>, <password>])"

z/OS Platforms (under TSO):
NAVROOT.USERLIB(NAVCMD)
At the prompt, enter:
CHECK SERVER(<daemon_location>, <workspace> [,<username>, <password>])

OS/400 Platforms:
pgm(navutil) parm(check server("<daemon_location>" "<workspace>" ["<username>" "<password>"]))

Where:

daemon_location: The host name with an optional port number, where the port number is specified after a colon, as follows: machine[:port].
workspace: The name of a workspace defined in the daemon configuration.
username: A user name with permission to access the server.
password: The user's password.
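For example, a sketch that checks access to a workspace on the prod.acme.com daemon (the workspace name Navigator is an assumption; substitute a workspace defined in your daemon configuration):

nav_util check server(prod.acme.com, Navigator)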

check license
This checks the license details. You can also check the license details for a specific remote machine.

check datasource

This tests the connection to a specific data source, defined in the default local binding configuration.

Windows, OpenVMS, and HP NonStop Platforms:
nav_util check datasource(<ds_name>[,<connect_info>])

UNIX Platforms:
nav_util check "datasource(<ds_name>[,<connect_info>])"

z/OS Platforms (under TSO):
NAVROOT.USERLIB(NAVCMD)
At the prompt, enter:
CHECK DATASOURCE(<ds_name>[,<connect_info>])

OS/400 Platforms:
pgm(navutil) parm(check datasource("<ds_name>"[,"<connect_info>"]))

Where:

ds_name: The name of the data source to test, as defined in the binding configuration.
connect_info: Any specific connection information to test.
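For example, to test the connection to the navdemo sample data source, assuming it is defined in the local binding configuration:

nav_util check datasource(navdemo)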

CODEPAGE
CODEPAGE is used to generate a binary file from a text file with a view to mapping to and from an unsupported codepage. The result is a codepage that AIS supports.
Example 37-7 CODEPAGE Syntax

nav_util codepage <text_file>

Where:


text_file: The text file that maps the unsupported codepage to the supported codepage.

DELETE
DELETE is used to remove the following objects from the repository:

Binding
User Profile
Daemon
Application Adapter Definition

There is a separate syntax for Deleting Data Source Objects.


Example 37-8 DELETE Syntax

nav_util [<options>] delete <obj_type> <obj_name>

Where:

options: See Activating NAV_UTIL.

obj_type: The type of object to be deleted. You can specify any of the following:

adapter_def[inition]: Application adapter definition.
adapters: The adapters specified in the binding information.
binding: A particular set of binding information.
daemon: General daemon configuration settings.
datasources: The data sources specified in a binding.
env[ironment]: Environment properties for a particular binding.
remote_machines: Remote machines defined in the binding.
user: A user profile definition.

obj_name: The name of the specific object (of the type specified in the obj_type parameter) to be deleted. Use the following list to determine the obj_name to supply, depending on the value of obj_type:

adapter_def[inition]: The name of the application adapter definition to be deleted.
binding, datasources, remote_machines, environment and adapters: The name of the binding in which these objects are defined.
daemon: The daemon name.
user: The user name that identifies the user profile.
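For example, to delete the user profile of a hypothetical user named john:

nav_util delete user john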

Deleting Data Source Objects


You can delete the following information for a given data source from the repository:

Tables that rely on ADD metadata
ADD metadata for a table generated by the LOCAL_COPY command
Stored procedures that rely on ADD metadata
ADD metadata for a stored procedure generated by the LOCAL_COPY command
Views
Synonyms

Example 37-9 DELETE Data Source Objects Syntax

nav_util [<options>] delete <obj_type> <ds_name> <obj_name>

Where:

options: See Running NAV_UTIL.

obj_type: The type of object to be deleted. You can specify any of the following:

table: Deletes the information for the specified table.
local_table: Deletes a local copy of a table.
procedure: Deletes an AIS procedure.
local_procedure: Deletes a local copy of a stored procedure.
view: Deletes an AIS view.
synonym: Deletes an AIS synonym.

ds_name: The name of the data source, as specified in the binding configuration, for the data source object that is deleted.

obj_name: The name of the specific object (of the type specified in the obj_type parameter) to be deleted. Use the following list to determine the obj_name to supply, depending on the value of obj_type:

table: The name of the table to be deleted, or * to delete all the tables for the specified ds_name.
local_table: The name of a local copy of a table to be deleted, or * to delete all the local copy tables for the specified ds_name.
procedure: The name of an AIS procedure for the specified ds_name.
local_procedure: The name of a local copy of a procedure to be deleted, or * to delete all the local copy procedures for the specified ds_name.
view: The name of the view to be deleted, or * to delete all the views for the specified ds_name.
synonym: The name of the synonym to be deleted, or * to delete all the synonyms for the specified ds_name.
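For example, to delete the ADD metadata of a hypothetical table named orders from a data source named mydsn:

nav_util delete table mydsn orders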

EDIT
The EDIT command enables you to modify the contents of a repository. You can directly edit the following types of repository objects:

All configuration information for a particular machine, including all the other elements listed below
User profile definitions
The list of available bindings
Information for a particular binding, which can include information about the following:
  Data sources
  Remote machines
  Environment settings
  Attunity Connect adapters
Information about the available daemons
Information about the following for a particular data source:
  Tables that rely on ADD metadata
  ADD metadata for a table generated by the LOCAL_COPY command
  Stored procedures that rely on ADD metadata
  ADD metadata for a stored procedure generated by the LOCAL_COPY command
  Views
  Synonyms

Application adapter definitions

The object is exported to an XML file that is automatically displayed in a text editor. When the text editor is closed, the XML file is saved back to the repository. However, you cannot use this command to delete a repository entry from the text editor. To delete a repository entry, use the DELETE command. The text editor used is the native text editor for the operating system. You can change the editor in the miscellaneous environment settings, either using Attunity Studio or by adding misc edit=<path_and_name_of_editor> directly to the binding environment information.
HP NonStop Platforms:
navedit <obj_type> [<ds_name> [-native]] <obj_name>

All Other Platforms:
nav_util [<options>] edit <obj_type> [<ds_name> [-native]] <obj_name>

Where:

options: See Running NAV_UTIL.

obj_type: The type of object to be edited. You can specify the following types of objects:

adapter_def[inition]: Application adapter definition.
adapters: The adapters specified in the binding information.
bindings: All available bindings and their environments.
binding: A particular set of binding information.
daemon: General configuration settings of a specific daemon.
daemons: General configuration settings of all daemons.
datasources: The data sources specified in a binding.
remote_machines: Remote machines defined in the binding.
env[ironment]: Environment properties for a particular binding.
table: Table definitions that rely on ADD metadata per data source.
local_procedure: ADD metadata for a stored procedure generated by the LOCAL_COPY command.
local_table: ADD metadata for a table generated by the LOCAL_COPY command.
machine: All configuration information for a particular machine.
procedure: Stored procedure definitions that rely on ADD metadata.
synonym: Synonym definitions per data source.
user: A user profile definition.
users: All user profile definitions.
view: An AIS view on a data source.

ds_name: The name of the data source for the object to be edited, as specified in the binding configuration, when the obj_type is any of: table, local_table, view, procedure, local_procedure, and synonym.

-native: Extracts metadata from the native data source. This option is relevant only for viewing the definition of a local table or procedure (when the obj_type value is local_table or local_procedure).

obj_name: The name of the specific object (of the type specified in the obj_type parameter) that is edited. Use the following list to confirm the obj_name to supply, according to the value of obj_type, or use * for all of the objects of the specified type:

adapter_def[inition]: The name of the application adapter definition to be edited.
adapters: The name of the binding configuration.
binding: The name of the binding. If not provided, the default binding (NAV) is used.
bindings: No value necessary.
datasources: The name of the binding configuration.
daemon: The name of the daemon.
daemons: No value necessary.
env[ironment]: The name of the binding configuration for this working environment.
local_procedure: The name of a local copy of a procedure to be edited, or * to edit all the local copy procedures for the specified ds_name.
local_table: The name of a local copy of a table to be edited, or * to edit all the local copy tables for the specified ds_name.
machine: No value necessary.
procedure: The name of the procedure to be edited, or * to edit all the procedures for the specified ds_name.
remote_machines: The name of the binding configuration.
synonym: The name of the synonym to be edited, or * to edit all the synonyms for the specified ds_name.
table: The name of the table to be edited, or * to edit all the tables for the specified ds_name.
user: The name of the user that identifies the user profile.
view: The name of the view to be edited, or * to edit all the views for the specified ds_name.

Supplying a value for obj_name that does not exist in the repository creates a template, based on the default object (such as NAV for a binding or IRPCD for a daemon).
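For example, to open the default binding configuration in the text editor (NAV is the default binding name):

nav_util edit binding NAV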

EXECUTE
This section contains information on the following topics:

EXECUTE Overview
NavSQL Environment
Executing SQL Statements
NavSQL Commands

EXECUTE Overview
Use the EXECUTE command to test data connections and SQL statements in the interactive NavSQL environment. Running the EXECUTE command gives you the NavSQL prompt. An example of when to use the EXECUTE command is to check the available data types supported by the data source. For example, if a table in the data source requires a float, the SQL must specify a float rather than a string.
z/OS Platforms:
NAVROOT.USERLIB(NAVSQL)
Where NAVROOT is the high-level qualifier where AIS is installed.

All Other Platforms:
nav_util execute [-P<password>] [-W<workspace>] <ds_name> [<filename>]

Where:

password: The master password that was specified for the user profile. If the password is not supplied, you are prompted for it.
workspace: The name of the binding that is used as the basis for information. If the binding is not supplied, the default AIS binding is used.
ds_name: The name of the data source, as specified in the binding configuration. If you do not supply this parameter, you are prompted for it.
filename: The name of a file that contains SQL statements. The SQL statements in the file are run immediately. The file is a text file (with any extension). Multiple SQL statements in the file must be separated by semicolons (;).
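For example, to open an interactive NavSQL session against the navdemo sample data source:

nav_util execute navdemo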

NavSQL Environment
Within the NavSQL environment, you can perform the following tasks:

Execute SQL statements.
Request Help and information about a data source.
Change the name of the default data source: enter the command tdp with the new name that you want as the default data source. This name must have been defined in the binding configuration.
Exit the NavSQL environment: enter quit or exit.

Each command entered in the NavSQL environment can span a number of lines. End the command with a semi-colon (;).
Figure 37-1 NavSQL Environment

Executing SQL Statements

You can write and execute SQL in the NavSQL environment using the following methods:

On the fly: Compose an SQL statement and end it with a semicolon. Press Enter to execute the statement. If the SQL contains data from more than one data source, use a colon (:) to identify the data source (that is, datasource_name:Table_name).

From a file: Enter the full name of a file that contains the SQL, prefixed by @. Press Enter to execute the SQL. For example:
NavSQL> @C:\sql\sql-query.sql;
Note: For z/OS systems, use single quotes ('') around the filename. For example: @'NAVROOT.TMP.SQL1'
You can access the NavSQL environment and run a file immediately by entering the following command:
nav_util execute <data_source> <file>
where data_source is the name of the data source as defined in the binding and file is the name of the SQL file. If you want to run all the queries in the file without the overhead of displaying query information on the screen for each query, enter the following command:
nav_util execute <data_source> -quiet <file>
In this case, only queries that fail cause information to be displayed on the screen during the run. A message is displayed after all the queries have been run, stating the number of queries that succeeded and the number that failed.

From within a transaction: Enter the command begin-transaction (optionally with either read-only or write permission) to start a transaction in which you can commit a number of SQL statements together. Use commit to update the data sources with any changes, or rollback if you decide that you do not want to accept the changes.

NavSQL Commands
From within the NavSQL environment, use the HELP command to list all the available NavSQL environment commands. The following transaction-based commands are available for use within the NavSQL environment:

Begin-transaction
Commit
Rollback

The following command can be used to change the default data source from within the NavSQL environment:

tdp <ds_name> or tdp-default <ds_name>

The following NavSQL environment commands can be used to extract information related to the data source:

describe [<ds-name>:]<table-name> [full] [index]: Provides table information. If full is specified, additional column information is provided. If index is specified, a visual representation of the record structure is displayed where available (this structure can be made available by running the NAV_UTIL EXPORT command). desc is a short form of the describe command.

describe @<proc_name>: Provides a description of a stored procedure and/or procedures that are included in an AIS procedure (the type is Application Connection (Procedure) or Natural/CICS in the binding configuration). desc is a short form of the describe command.

list catalogs [<mask>]: Lists details about all the catalogs, or a subset of the catalogs when a mask is supplied. list cata or list catas are short forms of the list catalogs command.

list columns [<table-mask>] [<column-mask>]: Lists details about the columns of the data source. You can list details about specific columns of the data source and about columns in specific tables belonging to the data source. You must also specify if the data source management system is case sensitive.

list procedures [<mask>]: Lists details of all the AIS procedures, or a subset of the procedures when a mask is supplied.

list procedure_col [<proc-mask>] [<column-mask>]: Lists details about the columns referenced by the AIS procedures. You can list details about specific columns and about columns in specific procedures. You must also specify if the data source management system is case sensitive.

list special-col [<mask>]: Lists details about all the columns with special characteristics (for example, key fields), for the data source or a specific table belonging to the data source when a mask is supplied.

list statistics [<mask>]: Lists statistics about all the tables, or a subset of the tables when a mask is supplied.

list synonyms: Lists details about all the synonyms.

list tables [<mask>]: Lists details about all the tables, identified by the type of table: views, synonyms and system tables. A subset of the tables is displayed when a mask is supplied. list tab or list tabs are short forms of the list tables command.

list tables @*: Lists all procedures included in an AIS procedure (the type is Application Connection (Procedure) in the binding configuration). list tab or list tabs are short forms of the list tables command.

show datatype [<dt-id>]: Lists details about all the data types available, or a specific data type when a number (the dt-id parameter) is supplied.

list views: Lists details about all the views.

native_describe [<ds-name>:]<table-name> [full] [index]: Runs the describe command of the data source. If full is specified, additional column information is provided.

query[_describe] <query>: Provides query information, including the number of fields in the query with the field descriptions and the number of parameters expected by the query.
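For example, from within the NavSQL environment, and assuming the navdemo sample data source, you might enter:

NavSQL> describe navdemo:nation full;
NavSQL> list tables;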

EXPORT
The EXPORT command enables you to export the contents of a repository to an XML document. You can export the following types of objects from the repository to an XML file:

All configuration information for a particular machine
User profile definitions
The list of available bindings


Information for a particular binding, which can include information about the following:
  Data sources
  Remote machines
  Environment settings
  AIS adapters
Information about the available daemons
Information about the following for a particular data source:
  Tables that rely on ADD metadata
  ADD metadata for a table generated by the LOCAL_COPY command
  Stored procedures that rely on ADD metadata
  ADD metadata for a stored procedure generated by the LOCAL_COPY command
  Views
  Synonyms
Application adapter definitions

In addition, you can use the EXPORT utility to export metadata from a data source where the metadata is readable by AIS (such as Oracle or Sybase metadata). The metadata is converted to XML, which is editable. When running EXPORT, use the -native option, as described below. After editing, import the metadata to a local repository for the data source. For information about setting up this feature in Attunity Studio, see Extended Native Data Source Metadata.
Example 37-10 EXPORT Syntax

nav_util [<options>] export <obj_type> [<ds_name> [-native]] <obj_name> <xml_file> | con:

Where:

options: See Running NAV_UTIL.

obj_type: The type of object to be exported. You can specify the following types of objects:

adapter_def[inition]: Application adapter definition.
adapters: The adapters specified in the binding information.
all: All configuration information for a data source.
bindings: All available bindings and their environments.
binding: A particular set of binding information.
daemon: General configuration settings of a specific daemon.
daemons: General configuration settings of all daemons.
datasources: The data sources specified in a binding.
remote_machines: Remote machines defined in the binding.
env[ironment]: Environment properties for a particular binding.
table: Table definitions per data source.
local_procedure: ADD metadata for a data source's stored procedure generated by the LOCAL_COPY command.
local_table: ADD metadata for a table generated by the LOCAL_COPY command.
machine: All configuration information for a particular machine.
procedure: Stored procedure definitions that rely on ADD metadata.
synonym: Synonym definitions per data source.
user: A user profile definition.
users: All user profile definitions.
view: An AIS view on a data source.

ds_name: The name of a data source for the object to be exported, as specified in the binding configuration, when the obj_type is any of: table, local_table, view, procedure, local_procedure, and synonym.

-native: Extracts metadata from the native data source where the metadata is readable by AIS (such as Oracle or Sybase metadata). The metadata is converted to XML, which is editable. Use the -native option to view the native metadata. This option is relevant only for exporting a table or stored procedure (when the obj_type parameter is table or procedure). For information about setting up this feature in Attunity Studio, see Extended Native Data Source Metadata. If the data source is an ADD data source, the metadata is extracted from the repository and from information specific to the driver for that data source, which is usually retrieved from the data source at runtime (for example, the ISN value in Adabas or the RFA column in RMS).

obj_name: The name of the specific object (of the type specified in the obj_type parameter) that is exported. Use the following list to confirm the obj_name to supply, depending on the value of obj_type, or use * for all of the objects of the specified type:

adapter_def[inition]: Application adapter definition.
adapters: The adapters specified in the binding information.
all: All configuration information for a data source.
bindings: All available bindings and their environments.
binding: A particular set of binding information.
daemon: General configuration settings of a specific daemon.
daemons: General configuration settings of all daemons.
datasources: The data sources specified in a binding.
remote_machines: Remote machines defined in the binding.
env[ironment]: Environment properties for a particular binding.
table: Table definitions per data source.
local_procedure: ADD metadata for a data source's stored procedure generated by the LOCAL_COPY command.
local_table: ADD metadata for a table generated by the LOCAL_COPY command.
machine: All configuration information for a particular machine.
procedure: Stored procedure definitions that rely on ADD metadata.
synonym: Synonym definitions per data source.
user: A user profile definition.
users: All user profile definitions.
view: An AIS view on a data source.

xml_file: The XML file to which the specified object is exported (output). If a file name is not specified, the output is displayed on the terminal.
con: Sends the output to the console instead of to an XML file.
Note: con: is for Windows platforms only. The colon (:) is a necessary part of the syntax.
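For example, to export the ADD metadata of the nation table in the navdemo sample data source to a file named nation.xml (a hypothetical output file name):

nav_util export table navdemo nation nation.xml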

To back up Attunity server definitions:

1. Run the following command:
nav_util export all sys out.xml
where out.xml is the output file (including the path) with the saved configuration. The output file contains the complete configuration settings for the server machine, with the exception of the metadata definitions for data sources that require AIS metadata.

2. Run the following command:
nav_util export all <ds_name> * out1.xml
where ds_name is the name of a data source in the binding with AIS metadata defined for it.

3. Repeat the previous step for every data source with AIS metadata defined for it, changing the name of the output file for each data source. The collection of output files together constitutes a complete backup of all the AIS definitions on the machine.

GEN_ARRAY_TABLES
The GEN_ARRAY_TABLES command creates virtual tables for Adabas, CISAM, DBMS, DISAM, Enscribe, RMS, and VSAM arrays from existing metadata. The Adabas database can be accessed using ADD or Predict. Virtual tables are created automatically by AIS when the metadata is created for the data source. For more information, see Using Virtual Tables to Represent Hierarchical Data.
Example 37-11 GEN_ARRAY_TABLES Syntax

nav_util gen_array_tables <ds_name> <table>

Where:


ds_name: The data source name, as specified in the binding configuration.
table: The name of the table in the repository that is defined with an array. Use wildcards if you want to generate virtual tables for more than one table.
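For example, to generate virtual array tables for all tables of a hypothetical ADD-based data source named mydsn:

nav_util gen_array_tables mydsn *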

IMPORT
The IMPORT command enables you to import the contents of a valid XML document (formatted correctly for AIS) to the repository. You can import the following types of objects to the repository from an XML file:

User profile definitions
Binding information
Environment settings (per workspace)
Daemon configuration information
Table definitions that rely on ADD metadata (per data source)
View definitions (per data source)
Stored procedures that rely on ADD metadata
Synonym definitions (per data source)
Adapter definitions
Metadata generated by the LOCAL_COPY command

Example 37-12 IMPORT Syntax

nav_util [<options>] import <name> <xml_file> | con:

Where:

options: See Activating NAV_UTIL.

name: The name of the application adapter or data source for the object to be imported, as specified in the binding configuration, when the object is any of: table, local_table, view, procedure, local_procedure, synonym, or adapter. The value of this parameter is used rather than the value of the data source attribute in the XML file (the data source value is generated when using NAV_UTIL EXPORT). Thus, for example, if you export a table definition and then want to import the definition to another data source, you do not need to change the data source attribute value in the XML file before importing the file.

xml_file: The XML file that contains the object to be imported.
con: Reads the input from the console instead of from an XML file.
Note: con: is for Windows platforms only. The colon (:) is a necessary part of the syntax.

When importing the following types of objects, you must specify SYS as the name entry:

Binding information
Daemon configuration information
User profiles
Working environment configuration
Adapter definitions
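For example, to import a table definition from the nation.xml file (produced by the EXPORT example above) into the navdemo data source:

nav_util import navdemo nation.xml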

IRPCDCMD
IRPCDCMD is a z/OS utility that is used to perform management tasks on the daemon. The utility is provided as the IRPCDCMD REXX script, which is located in the navroot\userlib directory. To use this utility, execute the IRPCDCMD script. When you get the prompt, you can invoke the required command. For example:

> -l 183.22.12.10 status

The IRPCDCMD utility uses the following syntax:

irpcd [-l daemon_location] [-u username] [-p password] command [arguments]

The following commands are available:

APPLIST [app-name or app-mask]
RELOADINI
RESETLOG
SHUTDOWN [<ABORT|OPERATOR> ["why..."]]
STATUS [workspace-name]
REFRESH [workspace-name]
KILL [workspace-name]
TEST
ENABLE [workspace-name]
DISABLE [workspace-name]

LOCAL_COPY
The LOCAL_COPY command extracts the data definition of a table or stored procedure from the data source catalogs and saves it to the repository. This utility enables you to improve query performance by creating a copy (snapshot) of the data source metadata, which is used instead of the data source metadata. The copy must be on the same machine as the data.
Example 37-13 LOCAL_COPY Syntax

nav_util local_copy <ds_name> <src_table>

Where:

ds_name: The data source name, as specified in the binding configuration.
src_table: The source table name (wildcards are allowed).
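For example, to create a local copy of the metadata of the nation table in the navdemo sample data source:

nav_util local_copy navdemo nation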

PASSWORD
The PASSWORD command allows you to define a master password.
Example 37-14 PASSWORD Syntax

nav_util password [-u<username>] <new_password>

If you have an existing password, you are prompted to specify it before defining the new master password.

PROTOGEN
The PROTOGEN command generates C/C++ header or DTD/XSD files for an adapter schema.
Example 37-15 PROTOGEN Syntax

nav_util protogen <xml_schema_file> [<output_directory>] [-verbose]
nav_util protogen <xml_schema_file> -dtd <dtd_file> [<dtd_root>]
nav_util protogen <xml_schema_file> -xsd <xsd_file>
nav_util protogen <xml_schema_file> -wsdl <wsdl_file>

REGISTER
Use this command to register the software directly. You need to register the software before you can access data sources on this machine. To register the software, you must have a Product Authorization Key (PAK) file, called license.pak. A PAK is normally supplied by the Attunity vendor. It contains details such as the product expiration date (if any), the maximum number of concurrent sessions allowed, which drivers you are authorized to use, and other information. The PAK is supplied to you in electronic form, and you must register it before you can use the product.
Note: You cannot register two licenses at the same time. If you register a product, the new license will overwrite the old license. If you want to register a new product and continue using the previously registered products, then request a single license for all of the products.

For details, refer to the post-installation part of the Attunity Server Installation Guide for the platform on which you installed the software.

SERVICE
The SERVICE command manages NAV_UTIL service registration, allowing it to be a service, similar to a daemon.
Note: This command is only valid on Windows platforms.

Example 37-16 SERVICE Syntax

nav_util service register <service-name> <nav_util-options...>
nav_util service unregister <service-name>
nav_util service start <service-name>
nav_util service stop <service-name>

SVC
The SVC command starts a server on the port specified.
Example 37-17 SVC Syntax

nav_util svc :<port-number>

TEST
For use only when instructed by Attunity Support.

UPDATE
The UPDATE command collects information about tables, indexes, and, optionally, column cardinalities for use by the AIS Query Optimizer. Each time the utility is run, the resulting statistics overwrite previous statistics. This command can be used for all data sources (both those that require ADD metadata and relational data sources). For relational data sources, an entry is created in the AIS repository for the data source. An example of when statistics would be used for a relational driver is with SQL/MP, to generate index statistics in addition to the column statistics generated by SQL/MP.
Caution: Executing the UPDATE command with the reset option deletes all statistics on the specified table.

z/OS Platforms:
NAVROOT.USERLIB(NAVCMD)
At the prompt, enter:
update[_statistics] <ds_name> <table_name> [EXACT | rows <row_num>] [+All | [column-options] [index-options]]

OS/400 Platforms:
call pgm(navutil) parm(update <ds_name> <table_name>)

All Other Platforms:
nav_util update[_statistics] <ds_name> <table_name> [EXACT | rows <row_num>] [+All | [column-options] [index-options]]

Removing Metadata Statistics

The UPDATE command can also be used to remove metadata statistics.

Example 37-18 UPDATE Syntax to Remove Metadata Statistics

nav_util update[_statistics] <ds_name> <table_name> reset

Where:


ds_name: The name of the data source, as specified in the binding configuration.

Note: The data source must be local. For a remote data source, run the utility on the remote machine.

table_name: The name of the table. You can specify wildcards as part of the table name, as follows:

Table 37-1 Wildcards per Platform

Platform(s)           Wildcard Symbols
Windows               * and ?
UNIX                  '*' and '?' (the single quotes are part of the syntax)
OpenVMS and z/OS      * and %

Note: If you use a wildcard as part of the table name, only the default +All parameter is available (the column-options and index-options parameters are invalid).
EXACT: The exact statistical information is returned. Note that this option does not work with large tables.

rows row_num: The number of rows in the table. This value is used to shorten the time needed to produce the statistics, assuming that the value specified here is the correct value, or close to the correct value. It is recommended to specify a value for rows. The number of unique values per index is also returned. When the number of rows in the table is not provided, the number of rows used is determined as the maximum value between the value specified in the dsmMaxBufferSize parameter (in the tuning section) of the environment settings and the value set in the nRows attribute (specified as part of the metadata for the data source).

+All: Information about the table, indexes, partial indexes and columns is included in the output. The default is that only information about the table and indexes is included in the output, and not information for partial indexes and columns.

column-options: The following column options can be specified:

+fcol_name1 +fcol_name2 ...: Returns information only about the specified table columns.
+f*: Returns information about all the table columns (on UNIX, specify +f'*').

index-options: The following index options can be specified:

+i1 +i2 ...: Returns information only about the specified indexes and partial indexes.
+i*: Returns information about all the table indexes (on UNIX, specify +i'*').

If you want information about all the indexes and only some of the partial indexes, you can run the utility twice: once with the +All option and once with the +i1, +i2,... option for the required partial indexes.

Example 37-19 UPDATE Statistics Samples

nav_util update disam nation

Estimates the number of rows in the NATION table of the data source. The result is based on the nRows value specified as part of the metadata for the data source and the amount of available memory, as specified by the dsmMaxBufferSize parameter of the environment settings.
nav_util update disam nation rows 100

Estimates the number of rows in the NATION table of the data source. The result is based on the number of rows specified (100). If the value specified here is the correct value, or close to the correct value, the time to calculate the statistics is shortened.
nav_util update disam nation EXACT

Exact statistics for the NATION table of the data source are returned.

UPD_DS
To update the default binding configuration, use the UPD_DS command. This command updates the binding only with changes that involve the connection information.

Example 37-20 UPD_DS Syntax

nav_util [<options>] upd_ds <ds_name> <ds_type> <connect_string>

Where:

options: See Activating NAV_UTIL.
ds_name: The name of the data source to be added to the binding configuration.
ds_type: The name of the driver that is used when you access the data source.
connect_string: The connect string to be used to access the data source.
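For example, a sketch with hypothetical values (the driver name and connect string depend on the data source type being configured):

nav_util upd_ds mydsn oracle my_connect_string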

UPD_SEC
To update the default user profile, use the UPD_SEC command. This enables you to update the user name and password either for a specific data source or for a machine in a user profile. Use this command if you need to update a user profile on a non-Windows platform.

Example 37-21 UPD_SEC Syntax

nav_util [<options>] upd_sec <ds_name> | -machine <machine>[:<port>] [-u<username>] [-p<password>]

Where:

options: See Activating NAV_UTIL.
ds_name: The name of the data source, as specified in the binding configuration, to which the user profile is related.
machine[:port]: The name and, optionally, the port of the machine to which the user profile is related.
username: The user name to access the data source or machine.
password: The password to access the data source or machine.
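For example, to set hypothetical credentials (user scott, password tiger) for the navdemo data source:

nav_util upd_sec navdemo -uscott -ptiger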

VERSION
The VERSION command enables you to check which version of AIS is running on the machine. To display the version of the AIS installation, use the following command line:

Example 37-22 VERSION Syntax

nav_util version [-history]

Note: On UNIX platforms, the -history flag prints details of the version, such as the build and installation dates, and lists previous versions of AIS that were installed on the machine.

VERSION_HISTORY
The VERSION_HISTORY command returns a report of installations, upgrades, and patches installed on the machine.
Example 37-23 VERSION_HISTORY Syntax

nav_util version_history

VIEW
Note: This command is not supported on HP NonStop platforms.

The VIEW command enables you to view the contents of a repository. With this command, you can see the definitions of the following types of repository objects:

All configuration information for a particular machine, including all the elements listed below
User profile definitions
The list of available bindings
Information for a particular binding, which can include information about the following:
  Data sources
  Remote machines
  Environment settings
  Attunity Connect adapters
Information about the available daemons
Information about the following for a particular data source:
  Tables that rely on ADD metadata
  ADD metadata for a table generated by the LOCAL_COPY command
  Stored procedures that rely on ADD metadata
  ADD metadata for a data source's stored procedure generated by the LOCAL_COPY command
  Views
  Synonyms
Application adapter definitions

Example 37-24 VIEW Syntax

nav_util [<options>] view <obj_type> [<ds_name> [-native]] <obj_name>

Where:

options: See Activating NAV_UTIL.

obj_type: The type of object whose definition is displayed. You can specify the following types of objects:

adapter_def[inition]: Application adapter definition.
adapters: The adapters specified in the binding information.
binding: A particular set of binding information.
bindings: All available bindings and their environments.
datasources: The data sources specified in a binding.
daemon: General configuration settings of a specific daemon.
daemons: General configuration settings of all daemons.
env[ironment]: Environment properties for a particular binding.
local_procedure: ADD metadata for a stored procedure generated by the LOCAL_COPY command.
local_table: ADD metadata for a table generated by the LOCAL_COPY command.
machine: All configuration information for a particular machine.
procedure: Stored procedure definitions that rely on ADD metadata.
remote_machines: Remote machines defined in the binding.
synonym: Synonym definitions per data source.
table: Table definitions per data source.
user: A user profile definition.
view: An AIS view on a data source.

ds_name: The name of the data source, as specified in the binding configuration, for the object whose definition is displayed when the obj_type is any of: table, local_table, view, procedure, local_procedure, and synonym.

-native: Extracts metadata from the native data source. This option is relevant only for viewing the definition of a table or stored procedure (when the obj_type value is table or procedure). You usually define this feature in Attunity Studio.

obj_name: The name of the specific object (of the type specified in the obj_type parameter) that is displayed. Use the following list to confirm the obj_name to supply, depending on the value of obj_type, or use * for all of the objects of the specified type:

adapter_def[inition]: The name of the application adapter definition to be viewed.
adapters: The name of the binding configuration.
binding: The name of the binding. If not provided, the default binding (NAV) is used.
bindings: No value necessary.
datasources: The name of the binding configuration.
daemon: The name of the daemon.
daemons: No value necessary.
env[ironment]: The name of the binding configuration for this working environment.
local_procedure: The name of a local copy of a procedure to be viewed, or * to view all the local copy procedures for the specified ds_name.
local_table: The name of a local copy of a table to be viewed, or * to view all the local copy tables for the specified ds_name.
machine: No value necessary.
procedure: The name of the procedure to be viewed, or * to view all the procedures for the specified ds_name.
remote_machines: The name of the binding configuration.
synonym: The name of the synonym to be viewed, or * to view all the synonyms for the specified ds_name.
table: The name of the table to be viewed, or * to view all the tables for the specified ds_name.
user: The user name that identifies the user profile.
view: The name of the view to be viewed, or * to view all the views for the specified ds_name.

XML
The XML command sends an XML request directly to AIS for processing, much like EXECUTE sends an SQL query directly to AIS. The XML command is particularly suited to troubleshooting, by enabling system administrators and DBAs to check the AIS XML dispatcher's handling of queries specified in XML documents.

Example 37-25 XML Syntax

nav_util xml <fin>.xml | con: <fout>.xml | con:

Where:


fin.xml: The file name with the input XML.
con: Reads the input from the keyboard.

Note: con: is for Windows platforms only. The colon (:) is a necessary part of the syntax.

fout.xml: The file name of the output XML. If a file name is not specified, the output is displayed on the terminal.
con: Sends the output to the console instead of to an XML file.

Note: con: is for Windows platforms only. The colon (:) is a necessary part of the syntax.
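For example, to send the request contained in a hypothetical input file named request.xml and write the response to response.xml:

nav_util xml request.xml response.xml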

XML Samples
AIS processes XML requests (including queries) that are specified only in documents formatted in the syntax specific to AIS. The general structure of this syntax is as follows:
Example 37-26 XML Sample

<header>
  <request-step1></request-step1>
  ...
  <request-stepn></request-stepn>
</header>

The following input file is formatted according to the requirements of the AIS XML implementation and specifies the SQL query select * from navdemo:nation.

Example 37-27 XML Input File Sample

<?xml version="1.0"?>
<acx>
  <connect adapter="query" />
  <execute>
    <query id="1">
      select * from navdemo:nation
    </query>
  </execute>
  <disconnect/>
</acx>

Running the XML command with the above file as input generates the following output file:
Example 37-28 XML Output File Sample

<?xml version="1.0" encoding="ISO-8859-1"?>
<acx type="response">
  <connectResponse idleTimeout="0"></connectResponse>
  <executeResponse>
    <recordset id="1">
      <record N_NATIONKEY="0" N_NAME="ALGERIA" N_REGIONKEY="0" N_COMMENT="New Distributor" />
      <record N_NATIONKEY="1" N_NAME="ARGENTINA" N_REGIONKEY="1" N_COMMENT="Far Away" />
      <record N_NATIONKEY="2" N_NAME="BRAZIL" N_REGIONKEY="1" N_COMMENT="Nearby" />
      ...
    </recordset>
  </executeResponse>
</acx>


38
Using Attunity BASIC Import Utility
This section contains the following topics:

Overview
Using the Basic Import Utility

Overview
The BAS_ADL import utility produces ADD metadata from BASIC mapfiles. This utility currently runs on the OpenVMS platform only.

Using the Basic Import Utility


To generate ADD metadata, use the following command line (activated directly from DCL):
$ BAS_ADL <filelist> <ds_name> [<filename_table>] [<variant_table>] [<basic_map_statement>] [<basename>] [options]

Note: Activation of this utility is based on environment symbols defined by the login file residing in the BIN directory under the directory where AIS is installed. You can replace the environment symbol with the appropriate entry.

Where:

filelist: A list of BASIC files containing file descriptions and text libraries that will be converted into ADD. If you try to pass BASIC code that is not part of the file descriptions, you will receive errors as the utility tries to parse the additional information. Separate the files in this list with commas (white space is not allowed). You can use wildcards for files in the file list.

ds_name: The name of an AIS data source defined in a binding configuration. The imported metadata is stored as ADD metadata in the repository for this data source.

filename_table: A file containing a list of tables and their file names. If a table is not found in the file, or this argument is not specified (its use is optional), then the file name for the table is <table>_FIL.

variant_table: A file containing a list of variants with selector fields and their definitions. Each logical line has the following format:

<variant-field>, <selector-field>, <val1>, <val2>, ..., <valN>

All the val# arguments must be surrounded by double quotes. If a val# argument contains a comma or double quotes, then the character must be doubled. If a logical line is to be wrapped across physical lines, then the last non-whitespace character of the preceding line must be a comma separator.
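For example, a hypothetical variant_table entry for a variant field named REC_DATA whose selector field REC_TYPE takes the values A, B, or C might look as follows:

REC_DATA, REC_TYPE, "A", "B", "C"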

basic_map_statement: The ordinal number of the BASIC map statement containing the field definitions to be converted. (The default value is 3.)

basename: The basename for the ADL and temporary files.

options: Enables you to specify the following options:

d: Specifies that all intermediate files are saved (debug). You can check these files if problems occur in the conversion.
c: Specifies that the column name is used for an array name, instead of the concatenation of the parent table name with the child table name. If a column name is not unique in a structure (as when a structure includes another structure, which contains a column with the same name as a column in the parent structure), the nested column name is suffixed with the nested structure name.
s: Specifies that periods in BASIC variable names are replaced with underscores (_) in the ADD column names. If this option is not specified, all characters before and including the final period are removed when determining the ADD column names.
Note: To display online help for this utility, run the command without any parameters, as follows: BAS_ADL


Part VIII
Data Source Reference
This part contains the following topics:

Adabas C Data Source
CISAM/DISAM Data Source
DB2 Data Source
DBMS Data Source (OpenVMS Only)
Enscribe Data Source (HP NonStop Only)
Flat File Data Source
IMS/DB Data Sources
Sybase Data Source
Ingres II (Open Ingres) Data Source
ODBC Data Source
OLEDB-FS (Flat File System) Data Source
OLEDB-SQL (Relational) Data Source
Oracle Data Source
Oracle RDB Data Source (OpenVMS Only)
RMS Data Source (OpenVMS Only)
SQL Server Data Source (Windows Only)
SQL/MP Data Source (HP NonStop Only)
Text Delimited File Data Source
Virtual Data Source
VSAM Data Source (z/OS)
Informix Data Source

39
Adabas C Data Source
This section describes the Attunity Adabas C data source driver. It includes the following topics:

Overview
Functionality
Transaction Support
Security
Configuration Properties
Data Types
Platform-specific Information
Defining an Adabas Data Source
Setting Up Adabas Data Source Metadata (Using the Import Manager)
Setting Up Adabas Data Source Metadata (Traditional Method)
Testing the Adabas Data Source

Overview
There are two types of Adabas data sources: the Adabas data source and the ADD-Adabas data source. The Adabas data source uses Predict metadata, whereas the ADD-Adabas data source uses Attunity's internal repository (ADD), which is usually imported from Natural Data Definition Module (DDM) files. Alternatively, Predict metadata can be exported and subsequently imported into the ADD-Adabas data source. Both Adabas data sources provide very similar functionality. The ADD-Adabas data source enjoys some added flexibility and functionality resulting from the ability to customize the metadata in the ADD. Unless explicitly stated, all features and procedures described apply to both data sources. This overview also covers the following topics:

Supported Versions and Platforms
Supported Features
Limitations


Supported Versions and Platforms


For information on supported Adabas versions, see Attunity Integration Suite Supported Systems and Resources.

Supported Features
Adabas data sources support the following key features. More information on the details of support for some of these features will be provided in a separate topic within this section.

Read and write access to Adabas data
Database and file numbers greater than 255
Adabas security
Access to Adabas views as well as physical files
Reading of Predict metadata at runtime or importing of metadata from DDM files at design time
Predict files residing in a separate database, as well as multi-database Predict files
All basic Adabas data types
Natural date and time formats
ISN as a column, including reading a record by ISN
Multi-value (MU) and Periodic Group (PE) fields, including MUs within PEs
Transactions (1PC) using ET and BT commands
Multifetch
Support for complex multi-index strategies using S1-L1 with search expressions referring to several descriptors
Support for various descriptor types, including superdescriptors, hyperdescriptors and phonetic descriptors, as well as descriptors on MUs and PEs
Null suppression
Several logical tables in a single physical file
Record-level locking using the L4 and HI commands
Full support for XML features such as select xml and update xml

Limitations

The multiClient server mode is not supported for Adabas on any platform except MVS; all other server modes are supported. For directions on how to set up multiClient on MVS, see z/OS Platforms.
DDL operations (for example, CREATE TABLE and ALTER TABLE) are not supported for Adabas.

Functionality
This section describes the following aspects of Adabas functionality:

Optimizing Adabas Queries
Subdescriptors and Superdescriptors with Subfields


Phonetic-descriptors and Hyper-descriptors
Descriptors on MU and PE fields
Null Suppression Handling
Array Handling
Logical Tables
Locking Support

Optimizing Adabas Queries


Attunity Connect creates an optimization strategy based on the SQL statement the user provided and the metadata information. This optimization strategy attempts to use the best combination of descriptors and superdescriptors to yield good performance. An automatically generated access strategy may never equal the expertise of an experienced Adabas DBA, but Attunity Connect's optimizer nevertheless yields excellent results. Attunity Connect can construct relatively complex search expressions that use several descriptors and superdescriptors. For example, the search expression 'A5,4,S,A5,4,D,AB,11,GE' can be generated by the Adabas driver for an SQL statement that specifies a range on A5 and a greater-than-or-equal operator on AB. There is no limitation regarding the number of descriptors that may be involved in the search expression.

Statistical Information
The AIS optimizer is a cost-based optimizer that requires up-to-date statistical information regarding the cardinality of tables and indexes. AIS can automatically create this statistical information using the update_statistics utility. See the Statistics Tab for more information. Note that statistical information can be generated for both Predict and ADD data sources. Having no statistics, or having incomplete or inaccurate statistics, may cause the Attunity optimizer to create inefficient execution strategies.

Subdescriptors and Superdescriptors with Subfields


Adabas descriptors can be defined over parts of a field (subfields) rather than the complete field. There is no relational equivalent to this concept: in the relational world an index always includes complete fields. For this reason, the Attunity Connect Adabas driver adds virtual fields for these subfields. In the sample EMPLOYEES file provided by Software AG, the AO (DEPT) field is a 6-character alphanumeric field. The S1 subdescriptor, however, only includes AO(1-4). The Adabas driver will add a DEPT_1_4 field in addition to the original field (DEPT), and add an index using DEPT_1_4. The following shows the new field and index in their XML metadata format.

Example 39-1 New Field

<field name="DEPT_1_4" datatype="string" size="4" subfieldOf="DEPT" subfieldStart="0">
  <dbCommand>AO</dbCommand>
</field>

Example 39-2 Index Definition

<key name="DEPARTMENT" size="4">
  <segments>
    <segment name="DEPT_1_4"/>
  </segments>
  <dbCommand>S1</dbCommand>
</key>

This means that you need to write the SQL correctly in order to get the correct descriptor used. Both of the following queries are valid and will return the same result. The second, however, will use the S1 descriptor and will be faster.

SQL That Will Not Use the S1 Descriptor


Select * from employees where dept like 'ELEC%'

SQL That Will Use the S1 Descriptor


Select * from employees where dept_1_4='ELEC'

Note that for ADD-Adabas, the metadata generated by the DDM import looks a bit different with regard to subfields if you look at the generated XML. The ADD-Adabas driver, however, knows how to read the imported metadata and construct exactly the same view as the Predict counterpart. A nav_util export table -native command from an ADD-Adabas data source exports the metadata after the driver has normalized it to the same notation as the Predict notation shown here.

Phonetic-descriptors and Hyper-descriptors


It is not possible for the optimizer to automatically decide to use a phonetic or a hyper descriptor. The use of such a descriptor is, by definition, something that the author of the SQL statement must request. For example, the SQL statement needs to specify that all names sounding like 'Smithy' be retrieved, whether they are spelled as 'Smithi' or 'Smiti'. Hyper descriptors are exactly the same in this regard: the SQL statement must request that they be used. To allow the user writing the SQL statement to request the use of a particular phonetic/hyper descriptor, the Adabas drivers expose special search fields that can be used in a query. The EMPLOYEES sample table (provided by Software AG in its sample database), for example, includes a phonetic descriptor called PH on the NAME field (AE). The Adabas driver generates a search field for it called PH_PHONETIC_NAME. You can use it in a query like the following:
Select * from employees where ph_phonetic_name='Smithy'

Search fields have the following restrictions:

They can be used in a WHERE clause but not in a SELECT clause. There is no real meaning to a query like select ph_phonetic_name from employees. For this reason, all search fields are referred to as non-selectable (XML attribute nonSelectable="true").
They can only be used in an equal relationship. You cannot execute a query like select * from employees where ph_phonetic_name > 'Smithy'.
You cannot use a search field in any other part of the query (for example, in ORDER BY, GROUP BY, or HAVING clauses).
Unlike normal fields, once a filter is specified on a search field, the Attunity Query Processor cannot verify that the data returned matches the filter. The query processor recognizes that it must trust the data returned by Adabas implicitly. It cannot, for example, verify that 'Smiti' does sound like 'Smithy'.

To avoid generating search fields, set the configuration parameter disregardNonselectable to true. By default this flag is false and search fields are therefore generated.

Descriptors on MU and PE fields


MU and PE fields are not a part of the relational view of the table. They are relationally represented in virtual array tables. A maximum of two levels of nesting is allowed. Participation of these fields in descriptors does not have a natural relational representation. For this reason, a search field is added to the parent table for every MU/PE descriptor. The name of the search field has the format:
adabas_field_name_ACSEARCH_field_name

For example, the LANG (AZ) MU field in the EMPLOYEES sample table is also a descriptor. This descriptor allows a user to easily search for all employees that speak English, for example. The generated search field is AZ_ACSEARCH_LANG. The equivalent query using Attunity Connect will be:
select * from employees where az_acsearch_lang='ENG'

Note that the following query will return the same result, but will not use the AZ descriptor. Rather, it will use L2 to scan the EMPLOYEES table. See Handling Arrays for more information about working with arrays and virtual array tables and views.
select e.* from employees e, employees_lang el where e.lang=el._parent and el.lang='ENG'

Null Suppression Handling


Adabas fields can be marked in Adabas as 'null-suppressed'. The Attunity Connect Adabas driver executes an LF command on every Adabas file the first time it is accessed in a server. The NULL suppressed fields are marked as such and handled by Attunity Connect in the following manner:

- They are exposed as NULLABLE fields.
- Depending on their datatype, either zeros or spaces are treated as the NULL value. That means that if a field of type A has spaces in it, an SQL user selecting data from it will get a NULL value.
- When the Attunity query optimizer considers possible execution strategies for a query, it rules out any execution strategy that uses a descriptor in which at least one NULL suppressed field is not qualified. The reason for this behavior is that Adabas does not include all records in every descriptor inverted list. NULL suppressed fields with 'NULL values' (zeros or spaces) cause the record to be excluded from any inverted list based on this field.

For example, suppose a table has two text fields, AA and AB, with a superdescriptor S1 defined on AA and AB, where AB is NULL suppressed. The user issues the following query:
select * from a_table where aa='XYZ'

The optimizer will not use the S1 superdescriptor for this query because AB is NULL suppressed. Had it used S1, it would not have read records where AA='XYZ' and AB is empty. Because of the NULL suppression, Adabas does not include these records in the S1 inverted list. Because AB is not qualified, the Attunity optimizer knows that this is not a valid execution plan. On the other hand, the following query would use the S1 superdescriptor:
select * from a_table where aa='XYZ' and ab is not null

This is because ab was qualified in the query. Note that qualifying the field with the NULL value (ab=0 or ab=' ') is not considered valid and is not supported. So qualifying a text field with ab<>' ' may return incorrect results, whereas ab is not null is handled correctly. The behavior of the Adabas driver regarding NULL suppressed fields can be modified using the nullSuppressionMode configuration attribute. It can be partially or fully disabled. Care must be taken when deciding to disable the default NULL suppression handling, as incomplete results may be returned for queries, as the above example showed. See Configuration Properties for further details.

Array Handling
Attunity Connect fully supports MU and PE fields (including an MU inside a PE). These constructs are mapped as arrays and can be used with the standard AIS array handling infrastructure. See Handling Arrays for more information about Attunity's array handling support. Some points specific to Adabas arrays are listed below:

- A counter field is automatically added to the table for every array. For the LANG (AZ) MU field in the EMPLOYEES sample table, Attunity Connect automatically adds a field called C_LANG, which translates to AZC in the Adabas format buffer (see the query sketch after this list). Whenever any data from an MU/PE is included in the query, Attunity Connect automatically adds the relevant counter field to the format buffer so that the correct number of rows is returned.
- For an MU inside a PE, a counter field is created for the MU field. The Adabas format buffer interpretation for this field is a bit more complex, as it generates an explicit format buffer field for every PE instance (for example, AA1C, AA2C, AA3C).
- Adabas arrays support INSERT/UPDATE/DELETE operations when using the virtual array tables. All these operations are implemented using the Adabas A1 command. The following notes apply:
  - A DELETE operation sets MU/PE fields to their empty value (zeros or spaces). So a delete of the 3rd member of the AZ MU causes an A1 operation with format buffer AZ3 and a record buffer containing spaces.
  - An INSERT operation adds another member to the end of the array. The _ROWNUM provided is ignored.
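As a minimal sketch of querying an automatically generated counter field (column names are taken from the Software AG EMPLOYEES sample; the threshold is illustrative), the following returns each employee's name together with the number of entries in the LANG array, restricted to employees who speak more than two languages:

select name, c_lang from employees where c_lang > 2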

Adabas arrays can have up to 191 members. It is, however, highly recommended to use a lower dimension that represents the maximum expected number of members. When using Adabas Predict, this applicative dimension is read from Predict. If Predict specifies no dimension, the defaultOccurrences configuration attribute is used (10 if unspecified). Note that if a record is read with a larger number of array members than the given dimension, the driver returns an error (Bad metadata on array for column.). Attunity Connect arrays are naturally row-wise arrays. When accessing PE members, Adabas returns the results in column-wise order. For example, a PE called INCOME (AQ) in the EMPLOYEES sample table has members such as CURR_CODE (AR) and SALARY (AS). The format buffer includes AR1-40 and AS1-40, which is a column-wise ordering (all instances of AR followed by all instances of AS). This is different from equivalent definitions in VSAM, for example, where you would have AR1, AS1, AR2, AS2, and so on. The Attunity Connect Adabas driver represents the metadata as an array of CURR_CODE followed by an array of SALARY to match the physical layout of the buffer, but it adds a groupEntry attribute on the structure in order to expose this logically as a single array. Note that this different ordering is transparent to the user. The following is the metadata for this sample:
Example 39-3 Adabas Array Metadata

<group name="INCOME" groupEntry="true">
  <fields>
    <field name="CURR_CODE" datatype="string" size="3" nullSupressed="true" dimension1="40" counterName="C_INCOME">
      <dbCommand>AR</dbCommand>
    </field>
    <field name="SALARY" datatype="unsigned_decimal" size="9" nullSupressed="true" dimension1="40" counterName="C_INCOME">
      <dbCommand>AS</dbCommand>
    </field>
    <field name="BONUS" datatype="unsigned_decimal" size="9" nullSupressed="true" dimension1="40" dimension2="12" counterName="C_INCOME">
      <dbCommand>AT</dbCommand>
    </field>
  </fields>
  <dbCommand>AQ</dbCommand>
</group>

Logical Tables
Storing several tables in the same physical Adabas file was a common practice, especially when file numbers were limited to 255 files per database. When accessing these tables using AIS, users usually prefer to map the single physical file to several logical tables. Normally, Predict views or DDM definitions exist with this logical mapping. Attunity Connect uses these definitions and only sees the columns and descriptors relevant to the logical table being accessed. Since it is common practice to make all the fields in such a file NULL suppressed, using a descriptor is also fine, because only records of that logical table are part of the descriptor's inverted list. The only problem in providing this logical mapping is when the Attunity Query Processor decides to scan such a logical table. Using L2 returns many empty rows corresponding to the records of the other logical tables in the file. To solve this problem, it is possible to specify that a scan strategy use L3 with a particular descriptor instead of L2. This option is currently supported only using the ADD-Adabas driver. To set it, specify the descriptor name to be used for scanning the logical table in the table's dbCommand, next to the file number. For example, <dbCommand>17;AA</dbCommand> causes L3 on the AA descriptor to be used for scanning the table. Care must be taken to make sure that the descriptor selected is such that all records belonging to the logical table are accessible using the descriptor's inverted list.
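As an illustrative sketch (the table name is hypothetical; 17 is the file number and AA the scan descriptor from the example above), the relevant part of such a table definition in the ADD metadata might look as follows:

<table name="INVOICES">
  <dbCommand>17;AA</dbCommand>
  <!-- field and key definitions omitted -->
</table>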


Locking Support
The Adabas data source supports lock operations as part of an UPDATE/DELETE SQL statement, or as part of a SELECT FOR UPDATE operation. Records are released when the transaction is committed (in autocommit mode, that means immediately following the operation). When attempting to lock a record that is held by another user, the default behavior is to return an error without waiting. You can control this behavior with the lockWait configuration attribute.
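For example, a statement like the following (a sketch; the filter value comes from the Software AG EMPLOYEES sample used earlier) locks the qualifying records until the transaction is committed or rolled back:

select * from employees where dept like 'ELEC%' for update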

Transaction Support
The Adabas data source supports one-phase commit. Transactions are implemented using the Adabas ET and BT commands. The following should be noted regarding transactions:

- An Adabas data source can participate in a distributed transaction if it is the only one-phase commit data source being updated in the transaction. See Attunity Connect Data Source Driver Capabilities for more information on setting up two-phase commit transactions.
- There is no control over the isolation level when accessing Adabas. By default, when scanning a table using L2, Adabas provides you with uncommitted data from other users that are in the process of modifying the same records you are reading. In cases where this is not acceptable, it is recommended that the scanUsingL1 configuration attribute be set to true. Adabas then provides only committed data for all records accessed. Note that using L1 to scan a file is more expensive than using L2.

Security
All the standard infrastructure provided by AIS regarding data source-level security is applicable to Adabas data sources. If a username/password is provided for an Adabas data source, the Adabas driver ignores the username specification and provides the password on the ADDITIONS 3 field of the Adabas control block. For more information, see Managing Security.

Data Types
The following table shows how AIS maps Predict data types to ADD data types.
Table 39-1 Predict Data Types

Predict                    ADD
Alphanumeric (A)           string
Binary (B1)                unsigned_int1
Binary (B2)                unsigned_int2
Binary (B4)                unsigned_int4
Binary (B8)                int8 (1)
Binary (B*)                unspecified
DATE (D)                   ada_d_time
Floating (F4)              dfloat/gfloat
Floating (F8)              double
Integer (I1)               int1
Integer (I2)               int2
Integer (I4)               int4
Integer (I8)               int8
Integer (I1.n)             decimal 3 digits
Integer (I2.n)             decimal 5 digits
Integer (I4.n)             decimal 10 digits
Integer (I8.n)             decimal 20 digits
Logical (L)                string size 1
Packed Decimal (P)         decimal
Unpacked Decimal (N,U)     ada_numstr_s (MVS), numstr_zoned (Others)
TIME (T)                   ada_time

(1) Attunity Connect does not support unsigned int8.

The following table shows how AIS maps Adabas data types to ADD data types.
Table 39-2 Adabas Data Types

Adabas                     ADD
Alphanumeric (A)           string
Binary (B1)                unsigned_int1
Binary (B2)                unsigned_int2
Binary (B4)                unsigned_int4
Binary (B8)                int8 (1)
Binary (B*)                unspecified
Floating (F1)              int1
Floating (F2)              int2
Floating (F4)              int4
Floating (F8)              int8
Floating (F*)              unspecified
Floating (G4)              float
Floating (G8)              dfloat/gfloat
Packed Decimal (P)         decimal
Unpacked Decimal (N,U)     ada_numstr_s (MVS), numstr_zoned (Others)

(1) Attunity Connect does not support unsigned int8.

See also ADD Supported Data Types.


Configuration Properties
The following properties can be configured for the Adabas data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

- dbNumber: (Predict, ADD) The Adabas database number.
- predictFileNumber: (Predict only) The Predict file number.
- predictDbNumber: (Predict only) In some cases the Predict file resides in a different database than the data. If so, use this attribute to specify the database number in which the Predict file resides.
- svcNumber: (Predict, ADD; MVS only) The installation on MVS places the SVC number of Adabas in the GBLPARMS file. Alternatively, you can specify the SVC number using this attribute. This simplifies configuration at sites where several Adabas installations with different SVC numbers must be accessed from a single installation. Each SVC still requires a different workspace, but the same GBLPARMS file and the same RACF profile can be used for the different workspaces.
- addMuInPeCounter: (Predict, ADD) Until version 4.6, AIS did not support counters for MUs inside PEs. Version 4.6 added this support, but because it changes behavior for existing users, this attribute allows existing users to turn off the new feature to preserve compatibility. Default: addMuInPeCounter='true'.
- defaultOccurrences: (Predict only) If the Predict occurrences field for multiple value fields or periodic group fields is not specified, the value of this parameter is used. If a record is retrieved with more occurrences than specified, an error is returned. Default: defaultOccurrences='10'.
- disableExplicitSelect: (Predict, ADD) This attribute indicates whether or not the Explicit Select option is disabled. If disabled, a select * query on an Adabas table returns all fields in the table, including the ISN and subfields, which are normally suppressed unless explicitly requested in the query (for example, select ISN, *). Default: disableExplicitSelect='false'.
- disregardNonselectable: (Predict, ADD) This attribute enables you to configure the data source to ignore descriptors defined on a multiple value (MU) field, a periodic group (PE) field, or phonetic/hyper descriptors. The special ACSEARCH fields that are normally created for a table are referred to as non-selectable because you cannot specify them in the select list of a query. Setting the disregardNonselectable attribute to 'true' prevents these fields from being created. Default: disregardNonselectable='false'.
- fileList: (Predict, ADD) This attribute is passed as the record buffer to the OP command. Adabas allows a list of file numbers to be provided in the record buffer of the OP command, along with the operations allowed on each file. Using this attribute, a user can restrict access to the database, allowing only specific operations on specific files. See the Software AG documentation of the OP command for more information on the allowed syntax. Note that the value provided in this attribute is passed as-is to Adabas; no validation is performed. Default: fileList='.' (that is, unrestricted access to all files in the database).
- lockWait: (Predict, ADD) This attribute specifies whether the data source waits for a locked record to become unlocked or returns a message that the record is locked. In Adabas terms, if this attribute is set to true, a space is passed in command option 1 of the HI/L4 commands; otherwise, an 'R' is passed in command option 1. Default: lockWait='false'.

- multiDatabasePredict: (Predict only) Turn this flag on if your Predict file includes metadata for several different databases. This has two effects on the way that the Predict information is read:
  - Only tables that belong to the current database are returned in the table list.
  - The file number for a table is read separately from the metadata, as different databases may include the same table using a different file number.

- multifetch: (Predict, ADD) This parameter controls the number of records to be retrieved in a single read command (L2, L3, S1-L1). The value provided in this attribute controls the value passed in the ISN lower limit control block field. By default, no multifetch is used. The multifetch buffer size can be controlled as follows:
  - multifetch='0': Lets the driver decide the number of records to retrieve. The driver generally retrieves rows to fill a 10k buffer. No more than 15 rows are fetched at once.
  - multifetch='n': Causes n rows to be read at a time, where n is a number from 2 to 15.
  - multifetch='-n': Defines a read-ahead buffer with a fixed size, where n is less than or equal to 10000 bytes.
  - multifetch='1': Disables the read-ahead feature. (default)

- nullSuppressionMode: (Predict, ADD) This attribute controls the behavior of the Adabas driver with regard to Null Suppression Handling, allowing a user to change the default NULL suppression policy. Note that changing this setting improperly may result in incomplete query results. The following values can be selected:
  - full: (default) NULL suppressed fields are exposed as NULLABLE and must be qualified for the Attunity optimizer to consider using a descriptor based on a NULL suppressed field.
  - disabled: NULL suppressed fields are handled like any other field. Use this setting only if you completely understand the potential implications, as incomplete query results may be returned.
  - indexesOnly: Only NULL suppressed fields that are part of a descriptor/super-descriptor are exposed as NULLABLE. Other NULL suppressed fields are handled normally. This setting is as safe as the full setting and does not carry the risk of incomplete results that the disabled option does.

- scanUsingL1: (Predict, ADD) A scan strategy on a table is normally implemented by an L2 command. It is possible, however, to turn on this attribute in order to scan using the L1 command. This has the advantage of providing better data consistency, at some performance penalty. Default: scanUsingL1='false'.
- supportL3Range: (Predict, ADD) Older versions of Adabas did not allow a range specification on an L3 command (for example, AA,S,AA in the search buffer); only the lower limit could be provided. If your version of Adabas supports a range in the L3 command, you can turn on this attribute to enjoy better performance in some queries. Default: supportL3Range='false'.
- traceValueBuffer: This is a debugging tool to be used in conjunction with driverTrace='true' in the environment. Turning on driverTrace records the Adabas commands executed in the server log file. If you also want a binary dump of the value buffer and record buffer, set this attribute to true. Default: traceValueBuffer='false'.

- userInfo: (Predict, ADD) This attribute specifies the value passed as a null-terminated string to Adabas as the seventh parameter on the Adabas call. The value provided is then available in Adabas user exits. This has no effect at all on Attunity Connect, but some users have taken advantage of this feature to implement specific types of auditing. Note that it is possible to control the value of the userInfo attribute dynamically at runtime using the nav_proc:sp_setprop stored procedure. Default: userInfo=''.
- useUnderscore: (Predict, ADD) This attribute indicates whether or not to convert hyphens (-) in table and column names into underscores (_). The inclusion of hyphens in Adabas table names and field names poses an inconvenience when accessing these tables from SQL, because names that include a hyphen need to be surrounded with double quotes. To avoid this inconvenience, the data source can translate all hyphens into underscores. Default: useUnderscore='true'.
- verifyMetadata: (Predict, ADD) This attribute indicates whether or not to cross-check the Predict or ADD metadata against the LF command. Resulting discrepancies are written to the log and removed from the metadata at runtime. It is usually unnecessary to use this attribute. Default: verifyMetadata='false'.
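The following is an illustrative sketch only - the data source name and attribute values are hypothetical, and the exact XML layout of a data source definition in your binding may differ. It is meant only to show how several of the attributes above combine in one configuration:

<datasource name="ADAEMP" type="ADABAS">
  <config dbNumber="3" multifetch="0" lockWait="true" nullSuppressionMode="indexesOnly"/>
</datasource>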

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Platform-specific Information
This section includes Adabas-related information and procedures as they pertain to specific platforms, as follows:

- UNIX Platforms
- z/OS Platforms

UNIX Platforms
This section includes the following topics:

- Verifying Environment Variables
- Relinking to the Adabas Driver on UNIX Platforms
- Accessing 64 Bit Adabas

Verifying Environment Variables


The Attunity Connect server accesses Adabas just like any other Adabas application using the ADALNK interface. As such, Adabas requires some environment variables to be available to ADALNK. It is best to consult your Adabas documentation, or your Adabas DBA for the exact list of required environment variables and their values. The correct place to define these environment variables is in the site_nav_login script in the bin directory of your Attunity installation (see the Attunity Server Installation Guide). As a reference, the following is an example of environment variables defined in the AIS account of a Sun Solaris machine for an Adabas installation residing under /users/sag.
SAG=/users/sag

ADADIR=/users/sag/ada ADALNK=/users/sag/ada/v125/adalnk.so DICDIR=/users/sag/prd

Relinking to the Adabas Driver on UNIX Platforms


On UNIX machines, you must relink the Adabas driver. To do this, run the ada_relink script from navroot/bin. Make sure that the user account that executes this script has write permission to navroot/lib.

Accessing 64 Bit Adabas


Attunity Connect can work with 64-bit Adabas installations, even though the AIS server is 32-bit. To work in a 64-bit Adabas environment, you must set your ADALNK environment variable to point at the 32-bit version of ADALNK provided by Software AG. Note that some of the early 64-bit installations of Adabas did not include the 32-bit ADALNK library. If you do not find a 32-bit version of ADALNK, contact Software AG support to obtain this library. The following is a sample setting of ADALNK on a 64-bit AIX machine running Adabas 3.3105.
ADALNK=/users/sag/ada/v33105/libadalnk32.so

z/OS Platforms
Attunity Connect accesses Adabas on MVS platforms just as it does on other platforms. This section details a few MVS-specific notes. It includes the following topics:

- Specifying the Adabas SVC
- Configuring AIS to Run in multiClient Mode

Specifying the Adabas SVC


There are two options for specifying the Adabas SVC number:

- Adabas_SVC in GBLPARMS: This is the default option when providing an SVC number in the CUST Rexx script. All Adabas data sources then use this SVC number.
- svcNumber property on an Adabas data source: Using this option allows you to access several different Adabas instances with different SVC numbers, using the same GBLPARMS and the same RACF profile for the Attunity servers. Note, however, that you must separate the data sources into different workspaces. Each workspace can only access a single Adabas instance.

Configuring AIS to Run in multiClient Mode


Most databases provide an API interface that returns a connection handle when connecting to the database. This means that several clients in a multiClient scenario can have separate connections to the database, working in complete isolation from each other, as if they were running on separate servers. This is not the case for Adabas: Adabas does not provide an API for interfacing, but rather provides a link routine that is used to communicate with the Adabas region. The figure below illustrates how an Adabas program (in this case the Attunity server) connects to Adabas. The Attunity Connect Adabas driver calls ADALNK (the Software AG link routine), which in turn communicates with the Adabas nucleus.


Figure 39-1 Attunity to Adabas Connectivity

You can set up Adabas to work with a multiClient workspace on MVS. multiClient mode is not supported on other platforms.
Note: Currently multiClient cannot be used from an ACX client, i.e., any client using the database adapter or query adapter to get to Adabas.

multiClient provides a way to improve scalability when using Adabas by having several clients share the same Attunity server. This saving in started tasks comes at a price: the server is single-threaded, so users can affect each other's performance. Use the maxNClientsPerServer attribute of the workspace to control the maximum number of clients sharing a single server. Note that you can use multiClient in conjunction with subtasks to further reduce the number of started tasks. Use the following criteria to decide whether or not to implement multiClient with Adabas:

- If your client uses connection pools, there is usually no reason to use multiClient mode. In a well planned application using connection pools, the application picks up a connection only when it has something it needs to do.
- If your client does not use connection pools, then the client may be utilizing the connection only a small percentage of the time. Using multiClient may reduce the server resources required without impacting performance. Set the maxNClientsPerServer attribute for the workspace according to the percentage utilization; for example, for 10% utilization set maxNClientsPerServer=10.

When working in multiClient mode, the Attunity Adabas driver attempts to load a module called ADALNKR instead of ADALNK. If successful, it internally constructs a unique user ID for the session and continues working with ADALNKR. This internally generated user ID is the equivalent of the connection handle of Oracle or SQL Server. This is what makes it possible for multiple clients to work within a single started task (or subtask) against Adabas in isolation, just as though they were in separate started tasks. This, of course, means that each of these servers can perform updates in separate transactions. The figure below shows a multiClient scenario with ADALNKR. ADALNKR is the reentrant version of ADALNK supplied by Software AG. Normally, this module is available as source code (assembler) as part of the Adabas installation, but it needs to be compiled and linked after changing it to work with your local SVC number. This module, as opposed to ADALNK, can create several isolated sessions with the Adabas nucleus from the same started task.


Figure 39-2 ADALNKR multiClient Scenario

To use multiClient
1. Create the required ADALNKR by editing the assembler code and setting the SVC number to the Adabas SVC number of the site. The following figure shows a code snippet from ADALNKR that includes the SVC number setting (245 in this snippet).

Figure 39-3 SVC Number Setting

2. Build ADALNKR on the target MVS machine.
3. Open Attunity Studio, and find the workspace. See Editing a Workspace.
4. In the workspace Server Mode tab, select multiClient workspace.


Figure 39-4 multiClient Workspace

5. Ensure that the ADALNKR module you create is available in the server's STEPLIB. The normal place to put this load module is in the Adabas LOAD library.
6. It is possible to verify that you are indeed using separate connections to Adabas by looking at the list of Adabas users. Use the /F NMPM,DUQA command from SDSF and check the system log for the command output. You should see the following patterns for the user IDs when using multiClient. This figure shows a sample of the system log output.

Figure 39-5 System Log Command Output with multiClient


This figure shows the output of the same command when multiClient is not in use. Note that the user IDs are different.
Figure 39-6 System Log Command Output without multiClient

Defining an Adabas Data Source


The process of defining an Adabas data source consists of two tasks:

- Defining the Adabas Data Source Connection
- Configuring the Adabas Data Source Properties

Defining the Adabas Data Source Connection


The Adabas data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Adabas data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Adabas data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source screen is displayed.

7. In the Name field, enter a name for the new data source.
8. Select the import type from the Type list:
  - Adabas (Predict): to define an Adabas data source that is to use Predict metadata
  - Adabas (ADD): to define an Adabas data source that is to use Attunity metadata
9. Click Next. The Data Source Connect String page is displayed.

10. Enter the connect string according to the data source type selected:


- If you are defining an Adabas ADD data source, enter the Database number.
- If you are defining an Adabas Predict data source, enter the following:
  - Database number
  - Predict File Number: The Adabas Predict file number which describes the specified database.
  - Predict database number: Enter this field only if the Predict file does not reside in the same database as the data.

11. Click Finish.

Configuring the Adabas Data Source Properties


After defining the connection, you set the data source properties.

To configure the data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Adabas data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the Adabas data source and select Open. The Configuration editor is displayed.

Figure 39-7 Adabas Configuration Properties


7. If necessary, edit the information in the Connection section. For a description, see the connect string in Defining the Adabas Data Source Connection.
8. For Adabas (ADD), enter the information in the Authentication section, if necessary. You can define the following parameters:
  - User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
  - User name: Enter the name of a user with access to this data source.
  - Password: Enter the password for the user with access to this data source.
  - Confirm Password: Enter the password again, to ensure it was entered correctly.
9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.

Setting Up Adabas Data Source Metadata (Using the Import Manager)


Use the Import Manager in Attunity Studio to import metadata from Adabas DDM Declaration files. These files can have a file extension of .ddm or .nsd.
Note:

You can use the import manager with the Adabas (ADD) data source only.

The metadata import procedure has the following steps:

- Selecting the DDM Declaration files
- Applying Filters
- Selecting Tables
- Import Manipulation
- Metadata Model Selection
- Import the Metadata

The following sections describe each step, and the screens that appear for that step.

Selecting the DDM Declaration files


This section describes the steps required to select the DDM file that will be used to generate the metadata. The following procedure starts with a preliminary step, also described in Starting the Import Process.

To select the DDM Declaration files
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, right-click the data source and select Show Metadata View. The Metadata tab is displayed with the data source displayed in the Metadata view.
3. Expand the Adabas data source you are working with.
4. Right-click Imports and select New Import. The New Import screen is displayed, as shown in the following figure:


Figure 39-8 The New Import screen

5. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
6. Click Finish. The Metadata import wizard opens with the Get Input Files screen, as shown in the following figure:

Figure 39-9 The Get Input Files screen


7. Click Add. The Add Resource screen is displayed, as shown in the following figure:

Figure 39-10 The Add Resource screen

8. If the files are on another machine, then right-click My FTP Sites and select Add. The Add FTP Site screen is displayed, as shown in the following figure:

Figure 39-11 The Add FTP Site screen

9. Enter the server name where the DDM Declaration files are located and, if not using anonymous access, enter a valid username and password to access that computer. The username is then used as the high-level qualifier.
10. Click OK. After accessing the remote computer, you can change the high-level qualifier by right-clicking the machine in the Add Resource screen and selecting Change Root Directory.
11. Select the files to import and click Finish to start the file transfer. When complete, the selected files are displayed in the Get Input Files screen. To remove any of these files, select the required file and click Remove.
12. Click Next (the Apply Filters screen opens) to continue to the Applying Filters step.

Note: The Adabas Declaration files can have file extensions of .DDM or .NSD.


Applying Filters
This section describes the steps required to apply filters on the DDM Declaration files used to generate the metadata. It continues the Selecting the DDM Declaration files procedure.

To apply filters
1. Expand all in the Apply Filters screen, as shown in the figure below.
2. Apply the required filter attributes to the DDM Declaration files. The available filters are listed and described in the table below.
3. Click Next (the Select Tables screen opens) to continue to the Selecting Tables step.

The Apply Filters screen is shown in the following figure:


Figure 39-12 The Apply Filters screen

This screen lets you change reserved words used for the names of elements in an Adabas file in case a problem occurs. The following table describes the options:

Table 39-3 Reserved Word Replacements

- Replace Column CHAPTER for FIELD: If the column FIELD is called CHAPTER, the name is changed to CHAPTER0. You can change this value to CHAPTER(n).
- Replace Column END for FIELD and for INDEX: If either of the columns FIELD or INDEX is called END, the name is changed to END0. You can change this value to END(n).
- Replace Column VARIANTS for FIELD: If the column FIELD is called VARIENT, the name is changed to VARIENT0. You can change this value to VARIENT(n).

Selecting Tables
This section describes the steps required to select the tables from the DDM Declaration files. The import manager identifies the names of the records in the DDM files that will be imported as tables. The following procedure continues the Applying Filters procedure.

To select the required tables
1. From the Select Tables screen, select the tables that you want to access. To select all tables, click Select All. To clear all the selected tables, click Unselect All. The Select Tables screen is shown in the following figure:

Figure 39-13 The Select Tables screen

2. Click Next (the Import Manipulation screen opens) to continue to the Import Manipulation step.

Import Manipulation
This section describes the operations available for manipulating the imported records (tables). It continues the Selecting Tables procedure. The import manager identifies the names of the records in the DDM Declaration files that will be imported as tables. You can manipulate the general table data in the Import Manipulation Screen.

To manipulate the table metadata
1. From the Import Manipulation screen (see The Import Manipulation screen figure), right-click the table record marked with a validation error, and select the relevant operation. See the Table Manipulation Options table for the available operations.
2. Repeat step 1 for all table records marked with a validation error. You resolve the issues in the Import Manipulation Screen. Once all the validation error issues have been resolved, the Import Manipulation screen is displayed with no error indicators.
3. Click Next to continue to the Metadata Model Selection.

Import Manipulation Screen


The Import Manipulation screen is shown in the following figure:
Figure 39-14 The Import Manipulation screen

The upper area of the screen lists the DDM Declaration files and their validation status. The metadata source and location are also listed. The Validation tab at the lower area of the screen displays information about what needs to be resolved in order to validate the tables and fields generated from the DDM files. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location). The following operations are available in the Import Manipulation screen:

- Resolving table names, where tables with the same name are generated from different files during the import.
- Selecting the physical location for the data.
- Selecting table attributes.
- Manipulating the fields generated from the DDM files, as follows:
  - Merging sequential fields into one (for simple fields).
  - Resolving variants by either marking a selector field or specifying that only one case of the variant is relevant.
  - Adding, deleting, hiding, or renaming fields.
  - Changing a data type.
  - Setting the field size and scale.
  - Changing the order of the fields.
  - Setting a field as nullable.
  - Selecting a counter field for an array, for fields with dimensions (arrays). You can select the array counter field from a list of potential fields.
  - Setting column-wise normalization for fields with dimensions (arrays). You can create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
  - Creating arrays and setting the array dimension.

The following table lists and describes the operations available when you right-click a table entry:

Table 39-4 Table Manipulation Options

- Fields Manipulation: Customizes the field definitions, using the Field Manipulation screen. You can also access this screen by double-clicking the required table record.
- Rename: Renames a table. This option is used especially when more than one table with the same name is generated from the import.
- Set data location: Sets the physical location of the data file for the table.
- Set table attributes: Sets the table attributes.
- XSL manipulation: Specifies an XSL transformation or JDOM document that is used to transform the table definitions.
- Remove: Removes the table record.

You can manipulate the data in the table fields in the Field Manipulation Screen. Double-click a line in the Import Manipulation Screen to open the Field Manipulation Screen.

Field Manipulation Screen


The Field Manipulation screen lets you make changes to fields in a selected table. You get to the Field Manipulation screen through the Import Manipulation Screen. The Field Manipulation screen is shown in the following figure.


Figure 39-15 Field Manipulation Screen

You can carry out all of the available tasks in this screen through the menu or toolbar. You can also right-click anywhere in the screen and select any of the options available in the main menus from a shortcut menu. The following table describes the tasks that are done in this screen. If a toolbar button is available for a task, it is pictured in the table.

Table 39-5 Field Manipulation Screen Commands

General menu:

- Undo: Click to undo the last change made in the Field Manipulation screen.
- Select fixed offset: The offset of a field is usually calculated dynamically by the server at runtime according to the offset and size of the preceding column. Select this option to override this calculation and specify a fixed offset at design time. This can happen if there is a part of the buffer that you want to skip. When you select a fixed offset, you pin the offset for that column. The indicated value is used at runtime for the column instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.

- Test import tables: Select this to create an SQL statement to test the import table. You can base the statement on the Full table or Selected columns. When you select this option, a screen opens with an SQL statement based on the table or columns entered at the bottom of the screen. Enter the following in this screen:
  - Data file name: Enter the name of the file that contains the data you want to query.
  - Limit query results: Select this if you want to limit the results to a specified number of rows. Enter the number of rows you want returned in the following field. 100 is the default value.
  - Define Where Clause: Click Add to select a column to use in a Where clause. In the table below, you can add the operator, value, and other information. Click the columns to make the selections. To remove a Where clause, select the row with the Where clause you want to remove and then click Remove.
  The resulting SQL statement, with any Where clauses that you added, is displayed at the bottom of the screen. Click OK to send the query and test the table.

Attribute menu:

- Change data type: Select Change data type from the Attribute menu to activate the Type column, or click the Type column and select a new data type from the drop-down list.

- Create array: This command allows you to add an array dimension to the field. Select this command to open the Create Array screen. Enter a number in the Array Dimension field and click OK to create the array for the column.
- Hide/Reveal field: Select a row from the Field Manipulation screen and select Hide field to hide the selected field from that row. If the field is hidden, you can select Reveal field.
- Set dimension: Select this to change or set a dimension for a field that has an array. Select Set dimension to open the Set Dimension screen. Edit the entry in the Array Dimension field and click OK to set the dimension for the selected array.
- Set field attribute: Select a row to set or edit the attributes for the field in the row. Select Set field attribute to open the Field Attribute screen. Click in the Value column for any of the properties listed and enter a new value or select a value from a drop-down list.
- Nullable/Not nullable: Select Nullable to activate the Nullable column in the Field Manipulation screen. You can also click in the column. Select the check box to make the field Nullable; clear the check box to make the field Not Nullable.
- Set scale: Select this to activate the Scale column, or click in the column and enter the number of places to display after the decimal point in a data type.
- Set size: Select this to activate the Size column, or click in the column and enter the total number of characters for a data type.

Field menu:

- Add: Select this command or use the button to add a field to the table. If you select a row with a field (not a child of a field), you can add a child to that field. Select Add Field or Add Child to open a screen where you enter the name of the field or child; click OK to add the field or child to the table.
- Delete field: Select a row and then select Delete Field, or click the Delete Field button, to delete the field in the selected row.
- Move up or down: Select a row and use the arrows to move it up or down in the list.
- Rename field: Select Rename field to make the Name field active. Change the name and then click outside of the field.

Structures menu:

- Columnwise Normalization: Select Columnwise Normalization to create new fields instead of the array field, where the number of generated fields is determined by the array dimension.

- Combining sequential fields: Select Combining sequential fields to combine two or more sequential fields into one simple field. A dialog box opens; enter the following information in the Combining sequential fields screen:
  - First field name: Select the first field in the table to include in the combined field.
  - End field name: Select the last field to be included in the combined field. Make sure that the fields are sequential.
  - Enter field name: Enter a name for the new combined field.
- Flatten group: Select Flatten Group to flatten a field that is an array. This field must be defined as Group for its data type. When you flatten an array field, the entries in the array are spread into a new table, with each entry in its own field. A screen provides options for flattening; do the following in this screen:
  - Select Recursive operation to repeat the flattening process on all levels. For example, if there are multiple child fields in this group, you can place the values for each field into the new table when you select this option.
  - Select Use parent name as prefix to use the name of the parent field as a prefix when creating the new fields. For example, if the parent field is called Car Details and you have a child in the array called Color, when a new field is created in the flattening operation it is called Car Details_Color.

- Mark selector: Select Mark selector to select the selector field for a variant. This is available only for variant data types. Select the Selector field from the screen that opens.
- Replace variant: Select Replace variant to replace a variant's selector field.
- Select counter field: Select Counter Field opens a screen where you select a field that is the counter for an array dimension.

Metadata Model Selection


This section lets you generate virtual and sequential views for imported tables containing arrays. In addition, you can configure the properties of the generated views. It continues the Import Manipulation procedure. This allows you to flatten tables that contain arrays. In the Metadata Model Selection step, you can configure values that apply to all tables in the import or set specific settings for each table.

To configure the metadata model, select one of the following:


- Default values for all tables: Select this if you want to configure the same values for all the tables in the import. Make the following selections when using this option:
  - Generate sequential view: Select this to map non-relational files to a single table.
  - Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
  - Include row number column: Select one of the following:
    - true: Select true to include a column that specifies the row number in the virtual or sequential view. This is true for this table only, even if the data source is not configured to include the row number column.
    - false: Select false to not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
    - default: Select default to use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
  - Inherit all parent columns: Select one of the following:
    - true: Select true for virtual views to include all the columns in the parent record. This is true for this table only, even if the data source is not configured to include all of the parent record columns.
    - false: Select false so that virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
    - default: Select default to use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.

- Specific virtual array view settings per table: Select this to set different values for each table in the import. This will override the data source default for that table. Make the selections in the table under this selection. See the item above for an explanation.

The Metadata Model Selection screen is shown in the following figure:


Figure 39-16 The Metadata Model Selection Screen

Import the Metadata


This section describes the steps required to import the metadata to the target computer. It continues the Metadata Model Selection step. You can now import the metadata to the computer where the data source is located, or import it later (in case the target computer is not available).

To transfer the metadata
1. Select Yes to transfer the metadata to the target computer immediately, or No to transfer the metadata later.
2. Click Finish.

The Import Metadata screen is shown in the following figure:


Figure 39-17 The Import Metadata screen

Setting Up Adabas Data Source Metadata (Traditional Method)


This section includes the following topics:

- Importing Attunity Metadata from DDM Files
- Exporting Predict Metadata into Adabas ADD

Importing Attunity Metadata from DDM Files


If the metadata exists in DDM files, you can use the DDM_ADL import utility to import this metadata to Attunity metadata. This utility is available on Windows, UNIX, and OpenVMS, from the platform's command line interface. This utility is not available on MVS platforms; MVS users need to perform the import on one of the supported platforms and then move the generated metadata to MVS. The metadata is not imported using Attunity Studio. To display online help for this utility, run the command DDM_ADL HELP. To generate the ADD metadata, use the appropriate command according to the platform type. The following table lists the DDM file list format according to platform type.
Table 39-6 DDM File List Format

- OpenVMS: Separate the files in this list with commas. This parameter is at the end of the command.
- UNIX: Separate the files in this list with spaces. The name of the file containing the list and the names of the files in the list must be less than or equal to eight characters (with a suffix of three characters).
- Windows: Separate the files in this list with commas.

Exporting Predict Metadata into Adabas ADD


Some users who have Predict still prefer to use ADD to store metadata. The process of moving metadata from Predict to ADD is simple, although manual. It involves exporting from Predict and importing to ADD. The process is carried out using the NAV_UTIL command line interface. In the example procedure below, note that the -native qualifier is required. On the export side, the procedure generates all table definitions from a Predict data source called adapredict to an XML file. On the import side, the exported metadata is imported to a data source called adaadd.

To export Predict metadata into Adabas ADD
1. To export, execute the following NAV_UTIL command, according to your platform:

Windows:
c:\> nav_util export table -native adapredict * adapredict.xml

UNIX:
nav_util export table -native adapredict \* adapredict.xml

MVS (after executing the NAVCMD Rexx script in USERLIB):
Local> export table -native adapredict * 'ATTUNITY.XML.ADAPRED'

2. To import, execute the following NAV_UTIL command, according to your platform:

Windows:
c:\> nav_util import adaadd adapredict.xml

UNIX:
nav_util import table adaadd adapredict.xml

MVS (after executing the NAVCMD Rexx script in USERLIB):
Local> import adaadd 'ATTUNITY.XML.ADAPRED'

Testing the Adabas Data Source


Set the debug environment generalTrace parameter to true to generate entries in the standard log tracing the access to Adabas data.
Figure 39-18 Adabas Log Entry Format


40
DB2 Data Source
This section contains the following topics:

- Overview
- Functionality
- Configuration Properties
- Metadata
- Transaction Support
- Security
- DB2 Data Types
- Defining a DB2 Data Source

Overview
The DB2 Data Source driver provides a wide range of common standard relational functionality that complies with the Relational Data Source model. It implements connectivity to a DB2 database instance by means of embedded SQL techniques.

Supported Versions and Platforms


For information on supported DB2 versions, see Attunity Integration Suite Supported Systems and Resources.

Functionality
This section describes the following aspects of DB2 functionality:

- Stored Procedures
- Isolation Levels and Locking

Stored Procedures
The DB2 data source supports DB2 stored procedures, as follows:

- DB2 version 7 or higher must be installed to call a stored procedure.
- A DB2 stored procedure can only be called using a CALL statement, and not as part of a SELECT statement.
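For example, a call like the following is supported (the schema and procedure names are hypothetical), whereas referencing the same procedure inside a SELECT statement is not:

CALL HR.GET_EMPLOYEE('000010')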


Isolation Levels and Locking


The DB2 data source supports the following Isolation Levels:

- Dynamic isolation
- Uncommitted read
- Committed read
- Repeatable read
- Serializable

The isolation level is used only within a transaction. The behavior of updates on locked data depends on the value of the LOCKTIMEOUT variable of DB2. If this variable is set to -1, the update waits until the lock is released. If LOCKTIMEOUT is set to 0, the update fails immediately; if it is set to a number greater than 0, the update waits for the specified number of seconds before failing.
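For example, the following DB2 command line processor command (the database name SAMPLE is illustrative) sets LOCKTIMEOUT so that an update waits up to 10 seconds for a lock before failing:

db2 update database configuration for SAMPLE using LOCKTIMEOUT 10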

Update Semantics
For tables without a bookmark or other unique index, the data source returns as a bookmark a combination of most (or all) of the columns of the row. The data source does not guarantee the uniqueness of this bookmark; you must ensure that the combination of columns is unique.

Configuration Properties
This section lists the properties that can be configured for the DB2 data source. For information on how to set data source properties in Attunity Studio, see Adding Data Sources. The tables below show the properties for each operating system supported by the DB2 data source. DB2 data source properties on z/OS are listed in the following table:

Table 40-1 DB2 Data Source Properties on z/OS

- dbname: Enter the existing DB2 database name.
- isolationLevel: Specifies the default isolation level for the data source. Available values are:
  - dynamicIsolation: The isolation level is not set at the data source level. Setting this parameter to true allows the application to set this parameter dynamically as needed.
  - readUncommitted: Specifies that corrupt data is not read. This is the lowest isolation level.
  - readCommitted: Specifies that only the data committed before the query began is displayed.
  - repeatableRead: Specifies that the data used in a query is locked and cannot be used by another query, nor can it be updated by another transaction.
  - serializable: Specifies that the data is isolated serially. Handles the data as if transactions are executed sequentially.
  Note: If the specified isolation level is not supported by the data source, Attunity Connect defaults to the next highest level.
- location: The location of the library containing the database tables.
- noExtendedInfo: When set to true, no extended metadata information is read from the DB2 database, including indexes.
- statementCacheSize: Specifies the maximum number of SQL statements that are cached. The default is set to 0, indicating no maximum limit.

DB400 data source properties on OS/400 are listed in the following table:
Table 40-2 DB400 Data Source Properties on OS/400

- controlledCommit: When set to true, specifies that Attunity Connect handles transaction commitments; in addition, when set to true, OS/400 journaling must be set. When set to false (the default), specifies that the DB2 RDBMS handles the commitment control.
- dbname: The existing DB2 database name.
- isolationLevel: Specifies the default isolation level for the data source. Available values are:
  - dynamicIsolation: The isolation level is not set at the data source level. Setting this parameter to true allows the application to set this parameter dynamically as needed.
  - readUncommitted: Specifies that corrupt data is not read. This is the lowest isolation level.
  - readCommitted: Specifies that only the data committed before the query began is displayed.
  - repeatableRead: Specifies that the data used in a query is locked and cannot be used by another query, nor can it be updated by another transaction.
  - serializable: Specifies that the data is isolated serially. Handles the data as if transactions are executed sequentially.
  Note: If the specified isolation level is not supported by the data source, Attunity Connect defaults to the next highest level.
- library: The name of the library that contains the database tables.
- noExtendedInfo: When set to true, no extended metadata information is read from the DB2 database, including indexes.
- serverMode: Controls the setting for the SQL_ATTR_SERVER_MODE environment attribute. When set to false, the DB2 client processes the SQL statements of all connections within the same job. When set to true, SQL statements of each connection are processed in a separate job.
- statementCacheSize: Specifies the maximum number of SQL statements that are cached. The default is set to 0, indicating no maximum limit.

DB2 Data Source 40-3

DB2 data source properties on UNIX and Windows are listed in the following table:

Table 40-3 DB2 Data Source Properties on UNIX and Windows

dbname: Enter the existing DB2 database name.

isolationLevel: Specifies the default isolation level for the data source. Available values are:

    dynamicIsolation: The isolation level is not set at the data source level. Setting this parameter to true allows the application to set the isolation level dynamically as needed.
    readUncommitted: Specifies that corrupt data is not read. This is the lowest isolation level.
    readCommitted: Specifies that only the data committed before the query began is displayed.
    repeatableRead: Specifies that the data used in a query is locked and cannot be used by another query, nor can it be updated by another transaction.
    serializable: Specifies that the data is isolated serially. Handles the data as if transactions are executed sequentially.

    Note: If the specified isolation level is not supported by the data source, Attunity Connect defaults to the next highest level.

location: The location of the library containing the database tables.

noExtendedInfo: When set to true, no extended metadata information, including indexes, is read from the DB2 database.

statementCacheSize: Specifies the maximum number of SQL statements that are cached. The default is set to 0, indicating no maximum limit.

In addition, for UNIX platforms, set the shared library environment variable (such as SHLIB_PATH or LD_LIBRARY_PATH, depending on the platform) to include $DB2HOME/lib. Set the shared library environment variable in the AIS nav_login or site_nav_login file. For further details, see the Attunity AIS Installation Guide for UNIX.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source on all of the above platforms. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Metadata
The DB2 data source driver polls the DB2 database for the required metadata. Following the relational model, metadata definitions reside in a dedicated system table that is accessed by ordinary SQL queries.
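For illustration, the kind of catalog lookup involved resembles the following sketch. SYSIBM.SYSCOLUMNS is the standard DB2 catalog table on z/OS; the EMPLOYEE table name is hypothetical:

-- A sketch of a catalog query; EMPLOYEE is a hypothetical table name
SELECT NAME, COLTYPE, LENGTH
FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME = 'EMPLOYEE'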

Transaction Support
This section describes DB2 data source transaction support on the following platforms:

    z/OS
    OS/400
    UNIX and Windows

z/OS
The DB2 data source on z/OS systems supports two-phase commit and can fully participate in a distributed transaction when the following parameters are set:

    The transaction environment property convertAllToDistributed is set to true in the binding configuration.
    RRS is installed and configured.
    The MVSATTACHTYPE parameter is set to RRSAF in the NAVROOT.USERLIB(ODBCINI) member (where NAVROOT is the high-level qualifier where AIS is installed).

If RRS is not running, the data source or application adapter can participate in a distributed transaction as the only one-phase commit resource, provided that the logFile parameter is set to NORRS in the transactions node of the binding properties for the relevant binding configuration in the Design perspective, Configuration view in Attunity Studio. The XML representation is as follows:

<transactions logFile="log,NORRS" />

where log is the high-level qualifier and name of the log file. If this parameter is not specified, the format is as follows:

<transactions logFile=",NORRS" />

That is, the comma must be specified. For further details about setting up one-phase commit in a distributed transaction, refer to CommitConfirm Table.

To configure two-phase commit capability
To use two-phase commit capability to access data on a z/OS machine, define every library in the ATTSRVR JCL as an APF-authorized library.

To define a DSN as APF-authorized, enter the following command in the SDSF screen:

"/setprog apf,add,dsn=navroot.library,volume=ac002"

where ac002 is the volume where you installed AIS and NAVROOT is the high-level qualifier where AIS is installed.

If the AIS installation volume is managed by SMS, then when defining APF-authorization, enter the following command in the SDSF screen:

"/setprog apf,add,dsn=navroot.library,SMS"

Make sure that the library is APF-authorized, even after an IPL (reboot) of the machine.

To use distributed transactions from an ODBC-based application, ensure that AUTOCOMMIT is set to 0.


OS/400
The DB400 data source on OS/400 supports one-phase commit transactions. The data source can participate in a distributed transaction as a one-phase commit data source.

To configure one-phase commit capability
1. Set the controlledCommit data source parameter to true, as described in Configuration Properties.
2. Create a CMTCNFRM table.
3. Set journaling to the CMTCNFRM table.
4. Set the following binding parameter attributes for the client machine initiating the transaction (see the sketch after this list):

    useCommitConfirmTable=true
    convertAllToDistributed=true
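A minimal sketch of what step 4 might look like in the binding XML, assuming these transaction environment properties are expressed as attributes of the transactions element in the same way the logFile attribute is in the z/OS example above:

<!-- Assumed attribute placement; property names are from this section -->
<transactions useCommitConfirmTable="true" convertAllToDistributed="true" />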

UNIX and Windows

This section includes the following tasks:

    Configuring Transaction Support
    Configuring the Shared Library Environment Variable

Configuring Transaction Support

The DB2 data source supports two-phase commit and can fully participate in a distributed transaction when the transaction environment property convertAllToDistributed is set to true.

To set the DB2 client to work with the Attunity Connect Transaction Manager
1. From the DB2 client account, run the DB2 command line tool (DB2.exe on Windows platforms or the DB2 executable on UNIX platforms).
2. Enter the following:

   GET DATABASE MANAGER CONFIGURATION

   The list of available properties is returned.
3. Change the Transaction processor monitor name (TP_MON_NAME) property to the name of the DB2 data source (nvdbdb2 on Windows platforms or nvdb_db2 on UNIX platforms) by entering the following:

   For Windows: UPDATE DATABASE MANAGER CONFIGURATION USING TP_MON_NAME nvdbdb2
   For UNIX: UPDATE DATABASE MANAGER CONFIGURATION USING TP_MON_NAME nvdb_db2

Use DB2 with its two-phase commit capability through an XA connection. The daemon server mode must be configured to Single-client mode. For more information, see Server Mode. To use distributed transactions from an ODBC-based application, ensure that AUTOCOMMIT is set to 0.


Configuring the Shared Library Environment Variable

You can configure the shared library environment variable for the platform and AIS as follows:

    Set the shared library environment variable (such as SHLIB_PATH or LD_LIBRARY_PATH, depending on the platform) to include $DB2HOME/lib.
    Set the shared library environment variable in the AIS nav_login or site_nav_login file.

For details, refer to the Attunity Installation Guide for UNIX.

Security
The DB2 data source driver is not actively involved in applying or enforcing security policy. It conforms to the security policy and rules set at the database instance with which it interacts.

DB2 Data Types

This section describes how Attunity Connect maps various DB2 data types. The following table shows how Attunity Connect maps DB2 data types to OLE DB and ODBC data types.

Table 40-4 Mapping DB2 Data Types

DB2              OLE DB             ODBC
BIGINT           DBTYPE_I8          SQL_BIGINT
Char (<256)      DBTYPE_STR         SQL_VARCHAR
Char (>255)      DBTYPE_STR         SQL_LONGVARCHAR (2)
Date             DBTYPE_DATE        SQL_DATE
Decimal (p,s)    DBTYPE_NUMERIC     SQL_NUMERIC(p,s)
Double           DBTYPE_R8          SQL_DOUBLE
Float            DBTYPE_R8          SQL_REAL
Integer          DBTYPE_I4          SQL_INTEGER
Numeric (p,s)    DBTYPE_NUMERIC     SQL_NUMERIC(p,s)
Smallint         DBTYPE_I2          SQL_SMALLINT
Time             DBTYPE_TIMESTAMP   SQL_TIME
Timestamp        DBTYPE_TIMESTAMP   SQL_TIMESTAMP
Varchar (m<256)  DBTYPE_STR         SQL_VARCHAR
Varchar (m>255)  DBTYPE_STR         SQL_LONGVARCHAR (2)
BIGINT (1)       DBTYPE_I8          SQL_BIGINT

(1) This data type is supported only by the DB2/400 data source.
(2) SQL_LONGVARCHAR: Precision of 2147483647. If the <odbc longVarcharLenAsBlob> parameter is set to true in the AIS environment settings, then precision of m.

See also ADD Supported Data Types.



The following table shows how Attunity Connect maps data types in a CREATE TABLE statement to DB2 data types.

Table 40-5 CREATE TABLE Data Types

CREATE TABLE        DB2
Char(m)             Char(m)
Date                Date
Double              Float
Float               Float
Image               Long Varchar for Bit Data
Integer             Integer
Numeric [(p[,s])]   Numeric(p,s)
Smallint            Smallint
Text                Text
Time                Time
Timestamp           Timestamp
Tinyint             Smallint
Varchar(m)          Varchar(m)
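For example, a CREATE TABLE statement issued through Attunity Connect creates DB2 columns according to this mapping. The ORDERS table and its columns below are hypothetical:

CREATE TABLE ORDERS (          -- ORDERS is a hypothetical table
    ORDER_ID  Integer,         -- stored as DB2 Integer
    AMOUNT    Double,          -- stored as DB2 Float
    STATUS    Tinyint,         -- stored as DB2 Smallint
    NOTE      Varchar(100))    -- stored as DB2 Varchar(100)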

See also ADD Supported Data Types.

Defining a DB2 Data Source

This section describes how to define and configure the DB2 data source on the following platforms:

    z/OS
    OS/400
    UNIX and Windows

z/OS
This section describes how to define a DB2 data source on z/OS systems. Defining a DB2 data source involves the following procedures:

    Defining an ODBCINI file
    Defining the Data Source Connection
    Configuring the Data Source

Defining an ODBCINI file

The DB2 data source uses an ODBCINI file. During installation of AIS on a z/OS machine, an ODBCINI file is defined as a member in NAVROOT.USERLIB, where NAVROOT is the high-level qualifier where AIS is installed. The ODBCINI file is similar to the following:

; This is a comment line...
; Example COMMON odbcini
COMMON
MVSDEFAULTSSID=DSN1
; Example SUBSYSTEM odbcini for DSN1 subsystem
DSN1
MVSATTACHTYPE=CAF
PLANNAME=DSNACLI

The PLANNAME value is the default DB2 Call Level Interface (CLI) plan name. If a different plan is used, change the name accordingly. In the example above, CAF is used, so only one-phase commit transactions are supported. Two-phase commit transactions are supported when setting the MVSATTACHTYPE parameter to RRSAF.

Note: The AUTOCOMMIT initialization parameter is set automatically by Attunity Studio, and should not be set in the ODBCINI file.

You must bind to the plan specified in the PLANNAME parameter. Check that the DSNnnn.SDSNSAMP(DSNTIJCL) job includes the following line:

BIND PLAN (DSNACLI)

where DSNnnn is the DB2 high-level qualifier and DSNACLI is the plan name specified for the PLANNAME parameter. You can specify other parameters in this file, as described in the IBM ODBC Guide and Reference.

Defining the Data Source Connection

The DB2 data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To connect to DB2 data
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the mainframe computer where you want to add your DB2 data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the DB2 data source.
6. Right-click the Data sources folder and select New Data source. The New Data Source screen is displayed.
7. Enter a name for the data source in the Name field.
8. Select DB2 CLI (Mainframe) from the Type list.
9. Click Next. The Data Source connect string page is displayed.
10. Enter the connect string as follows:

    Location: The DB2 location name for the connected DB2 instance. This parameter should be specified if the connected DB2 instance is different from the instance defined in the MVSDEFAULTSSID parameter of the ODBCINI file.
    Database name: Enter the existing DB2 database name, only if you are creating new tables using AIS.

11. Click Finish.

Note: If the DB2 CLI driver cannot connect to DB2 (SQLConnect problem), check that ALL ODBC packages are bound.

See also: Adding Data Sources.

Configuring the Data Source

After defining the connection, you set the data source properties.

To configure the DB2 data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the mainframe machine with your DB2 data source.
4. Expand the Bindings folder, then expand the binding with your DB2 data source.
5. Expand the Data sources folder, then right-click the DB2 data source and select Open. The data source editor is displayed.

Figure 40-1 DB2 CLI Data Source Configuration Properties

6. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Data Source Connection.
7. Enter the information in the Authentication section, if necessary. You can define the following parameters:

    User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
    User name: Enter the name of a user with access to this data source.
    Password: Enter the password for the user with access to this data source.
    Confirm Password: Enter the password again, to ensure it was entered correctly.

8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.

OS/400
This section describes how to define a DB400 data source on the OS/400 platform. Defining a DB400 data source involves the following procedures:

    Defining the Data Source Connection
    Configuring the Data Source

Defining the Data Source Connection

Follow this procedure to set up the DB400 data source.

To connect to DB400 data
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the AS/400 machine with your DB2 data source.
3. Expand the Bindings folder.
4. Right-click the binding with the DB2 data source.
5. Right-click the Data sources folder and select New Data source.
6. Enter a name for the data source in the Name field.
7. Select DB400 from the Type field.
8. Click Next. The Data Source connect string screen is displayed.
9. Enter the connection string as follows:

    Database name: The name of the DB2 database.
    Library name: The library containing the database tables.

10. Click Finish.

See also: Adding Data Sources.

Configuring the Data Source

Do the following to configure the data source properties.

To configure the DB400 data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the AS/400 machine with your DB2 data source.
4. Expand the Bindings folder, then expand the binding with your DB2 data source.
5. Expand the Data sources folder, then right-click the DB2/400 data source in the Configuration view, and select Open. The data source editor is displayed.

Figure 40-2 DB2 400 Data Source Configuration Properties

6. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Data Source Connection.
7. Enter the information in the Authentication section, if necessary. You can define the following parameters:

    User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
    User name: Enter the name of a user with access to this data source.
    Password: Enter the password for the user with access to this data source.
    Confirm Password: Enter the password again, to ensure it was entered correctly.

8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.


UNIX and Windows

This section describes how to define a DB2 data source on UNIX and Windows platforms. Defining a DB2 data source involves the following procedures:

    Defining the Data Source Connection
    Configuring the Data Source

Defining the Data Source Connection

Follow this procedure to set up the DB2 data source.

To connect to DB2 data
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the UNIX or Windows machine with your DB2 data source.
4. Expand the Bindings folder.
5. Right-click the binding with the DB2 data source.
6. Right-click the Data sources folder and select New Data source.
7. Enter a name for the data source in the Name field.
8. Select DB2 from the Type field.
9. Click Next. The Data Source connect string screen is displayed.
10. Enter the connection string as follows:

    Database alias: The DB2 database alias.

11. Click Finish.

See also: Adding Data Sources.

Configuring the Data Source

Do the following to configure the data source properties.

To configure the DB2 data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the UNIX or Windows machine with your DB2 data source.
4. Expand the Bindings folder, then expand the binding with your DB2 data source.
5. Expand the Data sources folder, then right-click the DB2 data source, and select Open. The data source editor is displayed.

Figure 40-3 DB2 Configuration Properties

6. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Data Source Connection.
7. Enter the information in the Authentication section, if necessary. You can define the following parameters:

    User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
    User name: Enter the name of a user with access to this data source.
    Password: Enter the password for the user with access to this data source.
    Confirm Password: Enter the password again, to ensure it was entered correctly.

8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.


41
CISAM/DISAM Data Source
This section contains the following topics:

    Overview
    Configuration Properties
    Transaction Support
    Data Types
    Defining a CISAM/DISAM Data Source
    Setting Up the CISAM/DISAM Data Source Metadata

Overview
The following sections provide information about defining and configuring the Attunity Connect CISAM/DISAM data source driver. This includes information regarding the Metadata that must be defined.

Supported Features
The CISAM data source supports the following key features:

    Array handling (see Handling Arrays)
    Record-level locking

Limitations
The number of records that can be locked at the same time in the DISAM data source is no more than 32767. If more than 32767 records are locked at the same time, an overflow occurs.

Configuration Properties
The following parameters can be configured in Attunity Studio for the CISAM/DISAM data source in the Properties tab of the Configuration Properties screen. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

    audit=true|false: When set to true, this parameter activates an audit file for each table.
    auditFile: The audit file name, which is the concatenation of the value specified for the name attribute of the table statement with an aud suffix. This parameter is a string data type.
    disableExplicitSelect: When set to true, this parameter disables the ExplicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.
    filepoolCloseOnTransaction: When set to true, all files in the file pool for this data source close at the end of each transaction (commit or rollback).
    filepoolSize: The number of instances of a file from the file pool that can be open concurrently.
    filepoolSizePerFile: The number of instances of a file from the file pool that can be open concurrently for each file.
    lockWait: This parameter specifies whether the data source driver waits for a locked record to become unlocked or returns a message that the record is locked.
    newFileLocation=string: The Data directory in the connect string; this parameter specifies the location of the CISAM/DISAM files and indexes you create with CREATE TABLE and CREATE INDEX statements. You must specify the full path for the directory.
    transactionLogFile: This parameter specifies the name of the file where the transaction log is written. The data type for this parameter is string.
    useGlobalFilepool: When set to true, this parameter specifies that a global file pool that can span more than one session is used.
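For illustration, such properties appear in the binding configuration XML roughly as follows. This is a hypothetical fragment: the datasource and config element layout is assumed from the standard AIS binding format, and the data source name and paths are invented:

<!-- Hypothetical fragment; element layout assumed, name and paths invented -->
<datasource name="mydisam" type="DISAM">
    <config newFileLocation="/data/disam" lockWait="false"
            filepoolSize="10" transactionLogFile="/data/disam/tran.log" />
</datasource>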

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Transaction Support
The CISAM/DISAM data source driver supports one-phase commit for CISAM/DISAM version 7.6 and above. It can participate in a distributed transaction if it is the only one-phase commit data source being updated in the transaction. A log file must exist, and the path and name of this log file must be specified in the data source configuration within the binding configuration. For details, refer to Defining a CISAM/DISAM Data Source.

Note: Both the transaction environment properties convertAllToDistributed and useCommitConfirmTable must be set to true.

Data Types
The table below shows how Attunity Connect maps data types in a CREATE TABLE statement to CISAM/DISAM data types.

Table 41-1 CREATE TABLE Data Types

CREATE TABLE        CISAM/DISAM
Char[(m)]           Char[(m)]
Date                Date+time
Double              Double
Float               Float
Image               -
Integer             Integer
Numeric[(p[,s])]    Numeric(p,s)
Smallint            Smallint
Text                -
Tinyint             Tinyint
Varchar(m)          Varchar(m)
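A quick sketch of this mapping in practice, using a hypothetical STOCK table:

CREATE TABLE STOCK (           -- STOCK is a hypothetical table
    ITEM_ID  Integer,          -- stored as a CISAM/DISAM Integer
    ADDED    Date,             -- stored as CISAM/DISAM Date+time
    QTY      Tinyint,          -- stored as a CISAM/DISAM Tinyint
    DESCR    Varchar(60))      -- stored as CISAM/DISAM Varchar(60)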

See also ADD Supported Data Types.

Defining a CISAM/DISAM Data Source

The process of defining a CISAM/DISAM data source consists of two tasks:

    Defining the CISAM/DISAM Data Source Connection
    Configuring the CISAM/DISAM Data Source

Defining the CISAM/DISAM Data Source Connection

The CISAM/DISAM data source connection is defined using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your CISAM/DISAM data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the CISAM/DISAM data source.
6. Right-click the Data sources folder and select New Data source. The New Data Source screen is displayed.
7. Enter a name for the data source in the Name field.
8. Select CISAM/DISAM from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the CISAM/DISAM connect string as follows:

    Data Location: Enter the directory where the CISAM/DISAM files and indexes created with CREATE TABLE and CREATE INDEX statements reside. You must specify the full path. If a value is not specified in this field, the data files are written to the DEF directory under the directory where AIS is installed.

    Note: The data files can have a physical file name or an environment variable name (which is translated before accessing the data). This is useful if the data is distributed among several physical files. For example, under UNIX the environment variable can be similar to the following:

    setenv ALL_EMPLOYEES /users/db/boston/emp.dat,/users/db/paris/emp.dat

    when the employees table name is set in the data dictionary to $ALL_EMPLOYEES. The value specified is used for the Data File field in the Design perspective, Metadata tab in Attunity Studio.

11. Click Finish.

See also: Adding Data Sources.

Configuring the CISAM/DISAM Data Source

The CISAM/DISAM connection is set using the Design perspective, Configuration view in Attunity Studio.

To configure the CISAM/DISAM data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your data source.
4. Expand the Bindings folder and the binding with your CISAM/DISAM data source.
5. Expand the Data sources folder.
6. Right-click the CISAM/DISAM data source and select Open. The Configuration Properties screen is displayed.

Figure 41-1 CISAM/DISAM Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the CISAM/DISAM Data Source Connection.
8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.

Setting Up the CISAM/DISAM Data Source Metadata


The CISAM/DISAM data source requires Attunity Metadata. You define metadata and update the statistics for the data in the Design perspective, Metadata tab in Attunity Studio. When defining the CISAM/DISAM records as tables in Attunity metadata, the Data file value, which specifies where the physical CISAM/DISAM files are located, must not include the file suffix (this suffix is included with all the other data sources). For details of the filename attribute, see General Tab.



42
DBMS Data Source (OpenVMS Only)
This section contains the following topics:

    Overview
    Functionality
    Configuration Properties
    Data Types
    Transaction Support
    Platform-specific Information
    Defining the DBMS Data Source
    Setting Up the DBMS Data Source Metadata

Overview
The DBMS data source provides:

    Dynamic multi-streaming support: Any combination of databases and subschemas can be accessed, when needed.
    Minimal realm impact: Realms are readied only when you access their data, which ensures that you do not impact more resources than necessary.
    Full relational mapping: All DBMS operations are mapped to a relational model. This includes joining set members with their owners, joining owners with their set members, and using specific system sets. Update operations such as connecting, disconnecting, and reconnecting records and sets are also mapped to equivalent relational operations. This provides a host of client/server tools with access to DBMS data, without sacrificing DBMS functionality.

In addition, the DBMS data source provides array handling; see Handling Arrays. Hierarchical queries over owner and member sets, using the owner or member column to produce a chaptered result, are described in Producing Chapters.

Note: The DBMS access examples refer to the PARTS database that is described throughout the Oracle DBMS documentation. See the Introduction to Oracle DBMS for information about creating the PARTS database.


Prerequisites

    For information on supported DBMS versions, see Attunity Integration Suite Supported Systems and Resources.
    The DBMS data source does not require CDD.
    A DBMS runtime license is sufficient to use the DBMS data source. A development license is not necessary.

Functionality
This section describes the following aspects of DBMS functionality:

    Locking
    Update Semantics

Locking
Records are locked only when an explicit lock is requested by the client application or when a record is updated. This solves most of the locking problems that DBMS users frequently encounter. Locking behavior using the DBMS data source differs in some cases from that of typical DBMS strategies, because the DBMS data source relaxes some of the strict locking strategies DBMS usually imposes when reading records. However, update-related locks are imposed by DBMS regardless of the data source. Refer to the DBMS documentation set for further discussion of locking issues. Locking-related aspects of the DBMS data source include:

    No records are locked when read unless a lock is requested by the client application. Because DBMS automatically locks every record you read, the data source driver implements this by unlocking each record immediately after it is read.
    If a client explicitly requests a lock on a particular record, the data source driver implements this request by placing the record DBKEY in a keeplist (see the DBMS documentation set for details on keeplists). If the client subsequently unlocks the record, the data source driver removes the corresponding DBKEY from the keeplist.
    Updating a record causes the record to be locked for the duration of the transaction.
    Connecting or disconnecting a record from a set causes an update lock on the record itself, and possibly on the prior and next records in a chain set. These locks remain in effect for the duration of the transaction.

Update Semantics
Updating a column or deleting a row in SQL is very similar to DBQ, although the syntax differs slightly. For example, the following syntax is used when changing a part price:

UPDATE PART SET PART_PRICE = 50 WHERE PART_ID = 'BR890123'

For examples of comparisons between SQL and DBQ syntax, see SQL to DBQ Mapping Examples. Set operations, like connecting or disconnecting a record and an owner-member set or a system set, however, are not typical SQL operations. For these operations to work from any SQL-based client, they need to be mapped to standard SQL operations. The mapping implemented by the DBMS data source involves the same virtual columns used to map various set read operations.

Configuration Properties
The following properties can be configured for the DBMS data source in the Properties tab of the Configuration Properties screen. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

    disableExplicitSelect: When set to true, this parameter disables the explicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.
    rootFile: The root file in the connect string; this parameter specifies the database root file. This file may be referenced using a logical name.
    runMode: The Access Mode in the connect string; this parameter specifies the operation mode of the server (readOnly, readWrite, batchRetrieval, or reporting). The default setting is readOnly.
    subSchema: The Sub schema in the connect string; this parameter specifies the name of a subschema. A separate directory must be specified for each subschema.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Data Types
This table shows how the DBMS_ADL utility imports DBMS database field types to the Attunity Data Dictionary (ADD) data types, which Attunity Connect maps to ODBC and OLE DB data types:

Table 42-1 Mapping DBMS Data Types

DBMS                       ADD             ODBC           OLE DB
Character                  string          SQL_CHAR       DBTYPE_STR
D_Floating                 dfloat          SQL_DOUBLE     DBTYPE_R8
D_Floating Complex         Not Supported
Date                       vms_date        SQL_DATE       DBTYPE_DATE
F_Floating                 dfloat          SQL_DOUBLE     DBTYPE_R8
F_Floating Complex         Not Supported
G_Floating                 double          SQL_DOUBLE     DBTYPE_R8
G_Floating Complex         Not Supported
H_Floating                 Not Supported
H_Floating Complex         Not Supported
Left Overpunched Numeric   numstr_nlo      SQL_NUMERIC    DBTYPE_NUMERIC
Left Separate Numeric      numstr_nl       SQL_NUMERIC    DBTYPE_NUMERIC
Packed Decimal             decimal         SQL_NUMERIC    DBTYPE_NUMERIC
Right Overpunched Numeric  numstr_s        SQL_NUMERIC    DBTYPE_NUMERIC
Right Separate Numeric     numstr_nr       SQL_NUMERIC    DBTYPE_NUMERIC
Signed Byte                int1            SQL_TINYINT    DBTYPE_I1
Signed Longword            int4            SQL_INTEGER    DBTYPE_I4
Signed Octaword            Not Supported
Signed Quadword            int8            SQL_DOUBLE     DBTYPE_R8
Signed Word                int2            SQL_SMALLINT   DBTYPE_I2
Unsigned Byte              int1            SQL_SMALLINT   DBTYPE_I2
Unsigned Longword          int4            SQL_DOUBLE     DBTYPE_R8
Unsigned Numeric           numstr_u        SQL_NUMERIC    DBTYPE_NUMERIC
Unsigned Octaword          Not Supported
Unsigned Quadword          Not Supported
Unsigned Word              int2            SQL_INTEGER    DBTYPE_I4
Zoned Numeric              numstr_zoned    SQL_NUMERIC    DBTYPE_NUMERIC

The DBMS data types listed above as Not Supported are transferred into the Attunity data dictionary as placeholders to allow Attunity Connect to be used on data that is supported. See also ADD Supported Data Types.

Transaction Support
The DBMS data source supports one-phase commit. It can participate in a distributed transaction if it is the only one-phase commit data source being updated in the transaction. Both the transaction environment properties convertAllToDistributed and useCommitConfirmTable must be set to true.

Platform-specific Information
This section contains the following topics:

    Database Model Mapping Requirements
    Virtual Column Categories
    Virtual Columns and Indexes
    Using Virtual Columns
    Accessing DBMS Data
    DBMS Error Codes

Database Model Mapping Requirements

The DBMS data source must provide an accurate mapping between the CODASYL model which DBMS implements and the relational model that client/server tools use. An accurate mapping allows you to use a relational-based tool without losing any of the CODASYL functionality. This section describes the mapping implemented by the DBMS data source and illustrates how to implement common DBMS data access operations using SQL. The DBMS data source adds one or more virtual columns to every DBMS table in order to implement the CODASYL-RDBMS mapping. These columns, referred to as Virtual Columns, appear like any other column to the client application, even though they are not physically part of the database. The data source driver makes special use of these columns in both read and write operations. This section contains the following topics:

    Virtual Column Categories
    Virtual Columns and Indexes
    Using Virtual Columns
    SQL to DBQ Mapping Examples

Virtual Columns
Virtual columns are columns which are not part of the DBMS data record. Virtual columns are used to facilitate the mapping of sets to the relational model. This section has the following topics:

    Using Virtual Columns
    Virtual Columns and Indexes
    Virtual Column Categories

Using Virtual Columns

This section describes how to use the set of virtual columns and includes the following topics:

    Performing Owner-to-Member Set Joins
    Performing Member-to-Owner Set Joins
    Using System Sets
    Producing Chapters
    Using Reverse Fetch
    Connecting, Disconnecting, and Reconnecting Records and Owner Sets
    Connecting and Disconnecting Records and System Sets

Performing Owner-to-Member Set Joins
Users experienced in retrieving data from DBMS are familiar with the idea of traveling through the database. For example, you start from the CLASS record and travel to the PART record using the CLASS_PART set to join the tables. In DBQ commands, you would fetch the CLASS record and then fetch all the parts within CLASS_PART to join CLASS to PART using the CLASS_PART set. In the DBMS data source, you use the virtual columns to traverse owner-member set joins. To move from one table to another table, you must pick up the DBKEY column of the current table, and equate it to the virtual column of the set you want to use in the target table. For example, in SQL, using the virtual columns, write the following SQL statement:

SELECT * FROM CLASS CLASS, PART PART
WHERE (PART._M_CLASS_PART = CLASS.__CLASS)

Start from the CLASS table and take the __CLASS column. Then move to the PART table using the CLASS_PART set, and equate the __CLASS column with the _M_CLASS_PART column in the PART table. This yields the correct result because the _M_CLASS_PART is always the DBKEY of the owner CLASS record, and the __CLASS field is also the DBKEY of the CLASS record. This concept of data traversal can be demonstrated using graphical query tools such as Microsoft MS Query. This figure shows the MS Query display for a join between CLASS, PART and QUOTE. Note how the query starts with CLASS, moves to PART (CLASS.__CLASS = PART._M_CLASS_PART), and then moves from PART to QUOTE (PART.__PART = QUOTE._M_PART_INFO).

Figure 42-1 MS Query Example for Data Traversal

Performing Member-to-Owner Set Joins
Traversing the data from member-to-owner is specified just like the owner-to-member joins described in Performing Owner-to-Member Set Joins. For example, to start from PART and join to CLASS by equating PART.__PART and CLASS._O_CLASS_PART, use the following SQL syntax:

SELECT * FROM PART PART, CLASS CLASS
WHERE (PART.__PART = CLASS._O_CLASS_PART)

Even though the operations resemble the owner-to-member join, the processing of the member-to-owner join is slightly different. The _O_CLASS_PART is usually empty. When used in an SQL statement like the above example, the DBMS data source assigns _O_CLASS_PART to the DBKEY of the member record. This is done so that the equation will remain true; the _O_CLASS_PART column will have the value of the __PART column. This figure shows a combination of an owner-to-member join with a member-to-owner join. Note how the query starts with CLASS, moves to PART (owner-to-member), and then moves to EMPLOYEE (member-to-owner).

Figure 42-2 Member-to-Owner Query Example

Using System Sets
The virtual columns for system sets, such as _S_ALL_PARTS_ACTIVE, are assigned either 1 or 0 to indicate whether the row is a member of the set. To access the ALL_ACTIVE_PARTS set using this information, you write the following SQL:

SELECT * FROM PART PART WHERE ("_S_ALL_PARTS_ACTIVE" = 1)

You can also do the opposite, selecting all the records that are not members of the set:

SELECT * FROM PART PART WHERE ("_S_ALL_PARTS_ACTIVE" = 0)

Although the syntax is similar, the two operations are implemented differently: the first will actually use the ALL_PARTS_ACTIVE set, while the second may scan the table sequentially in order to find the matching rows. The Query Processor may use a system set without a virtual column even though the set was not explicitly requested in the SQL statement, if using such a set will yield better performance. For example, consider the following SQL statement:

SELECT * FROM PART PART WHERE (PART_ID = 'AZ456789')

The Query Processor will choose the ALL_PARTS system set because it is the equivalent of a normal index in a relational data source. The Query Processor factors these indexes into its execution strategy for the query when advantageous.

Producing Chapters
By using the virtual column of an owner record in the SQL, the DBMS data source retrieves all the set members for each specific owner record. In this example, the virtual column for the CLASS table is used to retrieve the parts for each class record:

Figure 42-3 Chapter Output View in the SQL Utility

In the SQL Utility, clicking on a chapter in the output displays all the set members for the specific owner. The chapterOf attribute must be specified for the member in the Attunity metadata. This attribute is generated automatically when generating Attunity metadata using the DBMS_ADL utility (see Metadata Considerations).

Using Reverse Fetch
The virtual column to enable a reverse fetch for a table is named _REVERSE_FETCH. Such columns are not assigned real values, but serve as flags indicating that rows should be retrieved in reverse order. To use reverse fetch, you must include _REVERSE_FETCH in the WHERE clause, assigning any value to the column. For example, to retrieve all the rows in PARTS in reverse order, write the following SQL:

SELECT * FROM PART WHERE _REVERSE_FETCH = 1

This results in the DBQ command FETCH LAST followed by one or more FETCH PRIOR commands, instead of FETCH FIRST and then FETCH NEXT.

Connecting, Disconnecting, and Reconnecting Records and Owner Sets
The connection between a record and an owner set can be manipulated by changing the value of the associated _M_ driver column. The following statement, for example, will disconnect the PART record from the RESPONSIBLE_FOR set:

UPDATE PART SET _M_RESPONSIBLE_FOR = NULL WHERE PART_ID = 'BR890123'

Similarly, using the following example you can connect the PART record to the EMPLOYEE record whose DBKEY is 4:2:1:

UPDATE PART SET _M_RESPONSIBLE_FOR = '4:2:1' WHERE PART_ID = 'BR890123'

The DBMS data source includes the following special features related to set connections:

    The data source driver validates any connection changes against the insertion and retention requirements defined in DBMS. For example, trying to disconnect from a mandatory retention set results in an appropriate error. Similarly, trying to insert a new record without supplying values for all the automatic insertion sets also fails.
    When changing a set connection, the data source driver checks whether the record was previously connected to the set. The data source driver then determines whether to use the DBQ CONNECT or RECONNECT command to implement the request.
    NULL values and blank values are treated the same for virtual columns; both cause the record to be disconnected from the set.
    An update command that tries to reconnect a record to its current set is ignored by the data source driver.
    You may connect a record to a specific position in a chained set by giving the currency of the preceding member instead of the currency of the owner as the value of the virtual column.
    The data source driver validates all supplied DBKEYs before performing any of the updates. If any element of the update request is invalid, the update is not performed. For example, when you issue an update that connects a PART to the RESPONSIBLE_FOR set and changes the PART_PRICE field, the entire operation fails if the DBKEY of the EMPLOYEE is invalid.

Connecting and Disconnecting Records and System Sets
As with owner-member sets, the membership of a record in a system set is manipulated by changing the value of the associated virtual column. Virtual columns for system sets accept either 1 or 0 to denote membership or non-membership, respectively. For example, the following statement can be used to make a PART record non-active, that is, disconnecting it from the ALL_PARTS_ACTIVE set:

UPDATE PART SET _S_ALL_PARTS_ACTIVE = 0 WHERE PART_ID = 'BR890123'

Similarly, you can make a PART active using the following syntax:

UPDATE PART SET _S_ALL_PARTS_ACTIVE = 1 WHERE PART_ID = 'BR890123'

SQL to DBQ Mapping Examples
The following examples show how the DBMS data source driver processes SQL statements into DBQ commands. Users familiar with DBQ may find these examples helpful in understanding how the data source driver works and how to best utilize it. Each example shows SQL text and the DBQ commands used by the data source driver to implement the request. (Tracing writes the actual DBQ commands to the server log file for the SQL.)

Example 42-1 Selecting from a Table Without Key Criteria

Selecting from a table without any key criteria (set) causes Attunity Connect to read through the records in DBMS chain (sequential or sorted) order.

(SQL)
SELECT PART.PART_ID, PART.PART_DESC FROM PART

(DBQ)
FIND FIRST PART
GET PART_ID PART_DESC
FREE ALL CURRENT

Example 42-2 Utilizing Key Columns Specified in a WHERE Statement

If any key columns are specified in the WHERE statement, Attunity Connect attempts to utilize the key (set).

(SQL)
SELECT PART.PART_ID, PART.PART_DESC FROM PART
WHERE (PART.PART_ID = 'BR890123')

(DBQ)
FIND FIRST PART WITHIN ALL_PARTS WHERE PART_ID EQ "BR890123"
GET PART_ID PART_DESC
FREE ALL CURRENT

Example 42-3 Referencing a DBMS Set With the _S_<SetName> Virtual Column Name

To reference a DBMS set that has the _S_<SetName> virtual column name, you must set the column equal to 1. This results in the Query Processor passing the column name to the data source driver. The data source driver then utilizes the set.

(SQL)
SELECT PART.PART_ID, PART.PART_DESC FROM PART PART
WHERE (PART."_S_ALL_PARTS_ACTIVE" = 1)

(DBQ)
FIND FIRST PART WITHIN ALL_PARTS_ACTIVE
GET PART_ID PART_DESC
FREE ALL CURRENT

Example 42-4 Joining an Owner Record to a Member Record

To select and join an owner record to a member record, set the _M_<SetName> virtual column in the member record equal to the virtual anchor __<RecordName> of the owner record. (Note the double underscore __ in the virtual anchor name.)

(SQL)
SELECT PART.PART_ID, COMPONENT.COMP_SUB_PART
FROM COMPONENT COMPONENT, PART PART
WHERE (COMPONENT."_M_PART_USES" = PART."__PART")

(DBQ)
FIND FIRST PART
GET PART_ID
FREE ALL CURRENT
FIND DBKEY
FIND FIRST COMPONENT WITHIN PART_USES
GET COMP_SUB_PART
FREE ALL CURRENT

To select and join a member record to an owner record, set the _O_<SetName> virtual column in the owner record equal to the virtual anchor __<RecordName> of the member record.

(SQL)
SELECT COMPONENT.COMP_SUB_PART, PART.PART_ID
FROM COMPONENT COMPONENT, PART PART
WHERE (PART."_O_PART_USED_ON" = COMPONENT."__COMPONENT")

(DBQ)
FIND FIRST COMPONENT
GET COMP_SUB_PART
FREE ALL CURRENT
FIND DBKEY
FIND OWNER WITHIN PART_USED_ON
GET PART_ID
FREE ALL CURRENT


Example 42-5 Adding a Record

To add a record (simple):

(SQL)
INSERT INTO CLASS (CLASS_CODE, CLASS_DESC, CLASS_STATUS, "__CLASS")
VALUES ('OL', 'OL DESC', 'N', NULL)

(DBQ)
STORE CLASS
FREE ALL CURRENT
COMMIT

To add a new record, all of the automatic insertion _M_<SetName> member virtual columns must be set to a valid DBKEY. The DBKEY can be that of a desired owner record or the DBKEY of an existing record in the table which has the owner that is needed, in the format Area:Page:Line. To add a new record with automatic insertion using an owner record and a system chain set:

(SQL)
INSERT INTO PART (PART_ID, PART_DESC, PART_STATUS, PART_PRICE, PART_COST, PART_SUPPORT, "_S_ALL_PARTS_ACTIVE", "_M_CLASS_PART", "_M_RESPONSIBLE_FOR", "__PART")
VALUES ('AA0001', 'DESC', 'G', 1.5, 0.5, 'Y', 1, '2:4:1', NULL, NULL)

(DBQ)
FIND DBKEY RETAINING ALL EXCEPT CLASS_PART
STORE PART
FREE ALL CURRENT
COMMIT

Example 42-6 Deleting a Record

To delete a record:

(SQL)
DELETE FROM CLASS WHERE (CLASS_CODE = 'OL')

(DBQ)
FIND FIRST CLASS WITHIN ALL_CLASS WHERE CLASS_CODE EQ "OL"
GET CLASS_CODE CLASS_DESC CLASS_STATUS
FREE ALL CURRENT
FIND DBKEY RETAINING ALL EXCEPT ALL_CLASS
FIND NEXT CLASS WITHIN ALL_CLASS WHERE CLASS_CODE EQ "OL"
FREE ALL CURRENT
FIND DBKEY
ERASE
FREE ALL CURRENT
COMMIT

To delete a record that has a mandatory member with records, the member records must be removed first. As shown in the following example, attempting to delete such a record fails, and the transaction is rolled back.

(SQL)
DELETE FROM CLASS WHERE (CLASS_CODE = 'PC')

This statement results in the following error:

Modify Rows failed: Table name = CLASS.

(DBQ)
FIND FIRST CLASS WITHIN ALL_CLASS WHERE CLASS_CODE EQ "PC"
GET CLASS_CODE CLASS_DESC CLASS_STATUS
FREE ALL CURRENT
FIND DBKEY RETAINING ALL EXCEPT ALL_CLASS
FIND NEXT CLASS WITHIN ALL_CLASS WHERE CLASS_CODE EQ "PC"
FREE ALL CURRENT
FIND DBKEY
ERASE
DB_FS_INTERFACE(35); Error: DB_DBMS_INTERFACE(2), %DBM-F-ERASEMANDT, MANDATORY member can be erased only with ERASE ALL; EXECUTE DB_DBMS_INTERFACE(2), ERASE(DELETE)
FREE ALL CURRENT
ROLLBACK

Example 42-7 Utilizing a Connect on the PART Record

The following example shows a connect on the PART record:

(SQL)
UPDATE PART SET "_M_RESPONSIBLE_FOR" = '4:8:1'
WHERE ("_M_RESPONSIBLE_FOR" IS NULL AND PART_ID = 'AZ000003')

(DBQ)
FIND FIRST PART WITHIN ALL_PARTS WHERE PART_ID EQ "AZ000003"
GET PART_ID PART_DESC PART_STATUS PART_PRICE PART_COST PART_SUPPORT
FIND CURRENT WITHIN ALL_PARTS_ACTIVE RETAINING ALL
FIND OWNER WITHIN CLASS_PART RETAINING ALL
FREE ALL CURRENT
FIND DBKEY RETAINING ALL EXCEPT ALL_PARTS
FIND NEXT PART WITHIN ALL_PARTS WHERE PART_ID EQ "AZ000003"
FREE ALL CURRENT
FIND DBKEY 1:57:4
FIND DBKEY RETAINING ALL EXCEPT RESPONSIBLE_FOR 4:8:1
FETCH DBKEY RETAINING ALL 1:57:4
CONNECT PART TO RESPONSIBLE_FOR
FREE ALL CURRENT
COMMIT

Note: If you place any member virtual columns _M_<SetName> into a SELECT, the data source driver reads the member records to get a DBKEY value for the member virtual column. This occurs regardless of whether the SELECT includes a specific value or a wildcard for the member virtual column. You should do this only if you either need information from the member record or will join to the member record.

Virtual Columns and Indexes

The following table summarizes the rules that the data source uses in exposing virtual columns and virtual indexes on these columns.

Table 42-2 Exposing Virtual Columns and Indexes

Set Type                    Insertion         Retention                 Virtual Column  Index
System sets                 Automatic         Fixed Mandatory           None            _K_<SetName>
System sets                 Manual            Fixed Mandatory Optional  _S_<SetName>    _S_<SetName>
System sets                 Automatic         Optional                  _S_<SetName>    _S_<SetName>
Owner/Member (Owner side)   N/A               N/A                       _O_<SetName>    _O_<SetName>
Owner/Member (Member side)  Automatic Manual  Fixed Mandatory Optional  _M_<SetName>    _M_<SetName>

An ordered system set that has insertion and retention such that every row is always a member of the set is considered to be the equivalent of a regular index in a relational data source. As such, it does not need a special virtual column. An index (_K_<SetName>) is created for it and the Query Processor utilizes it if it benefits query performance. An example is the ALL_PARTS system set in the PARTS table. All system sets that include only a subset of the table rows have a _S_ driver column created for them. An example is the ALL_PARTS_ACTIVE system set. In the Attunity metadata for the virtual columns, the explicitSelect clause should not be set in order to display the columns with a SELECT * statement. If the explicitSelect clause is set, the virtual columns must be explicitly stated in the SELECT statement (SELECT *, _M_aaa, ...). The following table shows the values returned in virtual columns for a simple SELECT * FROM PART SQL statement.

Table 42-3 Values Returned for a SELECT SQL Statement

PART_ID   _S_ALL_PARTS_ACTIVE  _M_CLASS_PART  _M_RESPONSIBLE_FOR  _O_PARTS_USES  _O_PART_USED_ON  _O_PART_INFO  __PART
BR890123  1                    1:53:2         4:82:1                                                            1:8:1
TE234567  0                    1:21:1         4:82:1                                                            1:13:1
TE217890  0                    1:21:1         4:23:1                                                            1:16:1

Note the following:

    BR890123 is the only active part. The others have 0 as the value of _S_ALL_PARTS_ACTIVE.
    1:53:2 is the DBKEY of the CLASS record that owns BR890123.
    4:82:1 is the DBKEY of the EMPLOYEE record that is responsible for part BR890123 and part TE234567.
    All of the owner virtual columns are empty unless used in a member-to-owner join.
    1:8:1 is the DBKEY of the BR890123 PART record.


Virtual Column Categories

All virtual columns fall into one of the categories listed in this table.

Table 42-4 Virtual Column Categories

Column Type: Table DBKEY
Naming Convention: __<table-name>
Description: This column contains the current row DBMS DBKEY (for example, 1:3:56). This column is central to many operations.

Column Type: System set
Naming Convention: _S_<set-name>
Description: This column is used to map system sets that cannot be defined as keys. Possible values are 0 or 1, where 0 means the row is not a member of the set and 1 indicates that the row is a member of the set.

Column Type: Owner of a set
Naming Convention: _O_<set-name>
Description: This column appears as part of the owner table of an owner-member type set. This virtual column is used in member-to-owner type joins. If selected in other cases it will always be empty.

Column Type: Member of a set
Naming Convention: _M_<set-name>
Description: This column appears as part of the member table of an owner-member type set. This virtual column is used in owner-to-member type joins. If selected it will always return the DBKEY of the parent row.

Column Type: Reverse Fetch flag
Naming Convention: _REVERSE_FETCH
Description: This column is used as a flag in WHERE clauses to cause rows to be retrieved (fetched) in reverse order. Values for this field do not signify anything.

The CLASS table of the sample PARTS database, for example, is the owner of the CLASS_PART set, while the PART table is the member of the CLASS_PART set, as well as the RESPONSIBLE_FOR set. It is also the owner of the PARTS_USES, PART_USED_ON and PART_INFO sets. This example shows the columns exposed by the data source for the CLASS and PART tables:

Figure 42-4 CLASS and PARTS Tables

Accessing DBMS Data

The DBMS User Work Area (UWA) is restricted to 150,000 bytes. This area includes all the records and subschemas loaded at any one time. If the DBMS data source link was not set during the AIS installation, then you need to link the data source before you can access DBMS data.

To link the DBMS data source
1. Link the DBMS data source using the following command:

   $ @NAVROOT:[BIN]NAV_DBMS_BUILD

   If you specified DBMS as one of the data sources during the installation, then continue to step 2.
2. If you want to install the DBMS data source (NAVROOT:[BIN]NVDB_DBMS.EXE) as a shareable image, add the DBMS data source to NAV_START.COM in NAVROOT:[BIN] and SYS$STARTUP so that stopping and restarting AIS will install the image. You must restart AIS after relinking the data source by executing the NAV_START.COM script.
3. If you are upgrading the DBMS data source from a previous version, the DBMS data source installation links the data source at the site with the current version of DBMS. If you upgrade the DBMS installation to a new version, then you may need to relink the DBMS data source (you must relink when upgrading from DBMS 4.3 to the 6.0 series). The following command relinks the data source:

   $ @NAVROOT:[BIN]NAV_DBMS_BUILD

Note: When accessing DBMS, do not specify multiClient as the server mode in the daemon workspace.

DBMS Error Codes


Whenever a DBMS error occurs, the log file lists the numeric code for the error. This table lists the symbolic name that corresponds to each numeric code. You can use the DBQ HELP ERRORS command to get more information about the error that you encountered based on the symbolic name.
Table 425 DBMS Error Codes DBMS Code Symbolic Name 2654228 2654244 2654260 2654276 2654292 2654308 2654324 2654340 2654356 2654372 2654388 2654404 DBM$_ALLREADY DBM$_ASTINPROG DBM$_BADBIND DBM$_BADDEVNAM DBM$_BADKUNBIND DBM$_BADVERSION DBM$_BOUND DBM$_CANTASSDBJ DBM$_CANTCRERUJ DBM$_CANTOPENDBS DBM$_CANTPUTRUJ DBM$_CHKITEM

DBMS Code Symbolic Name 2654220 2654236 2654252 2654268 2654284 2654300 2654316 2654332 2654348 2654364 2654380 2654396 DBM$_ABORT_WAIT DBM$_AREABUSY DBM$_BAD_ARGLST DBM$_BADDBNAME DBM$_BADKBIND DBM$_BADSSCLST DBM$_BADZERO DBM$_BUGCHECK DBM$_CANTBINDRT DBM$_CANTEXTDBS DBM$_CANTOPENOUT DBM$_CANTUSERUJ

DBMS Data Source (OpenVMS Only) 42-15

Table 425 (Cont.) DBMS Error Codes DBMS Code Symbolic Name 2654412 2654428 2654444 2654460 2654476 2654492 2654508 2654524 2654540 2654556 2654572 2654588 2654604 2654620 2654636 2654652 2654668 2654684 2654700 2654716 2654732 2654748 2654764 2654780 2654796 2654820 2654836 2654852 2654868 2654884 2654900 2654916 2654932 2654945 2654964 2654980 2654996 DBM$_CHKMEMBER DBM$_CKEYMOVE DBM$_COMPMOVE DBM$_CRELM_NULL DBM$_CRTYP_NULL DBM$_CRUN_NULL DBM$_CSTYP_NULL DBM$_DBBUSY DBM$_DUPNOTALL DBM$_ERASEMANDT DBM$_ID_MAP DBM$_INTERLOCK DBM$_NODEFVAL DBM$_NOLOCKAVAIL DBM$_NOMONITOR DBM$_NOROOTBIND DBM$_NOTIMPLYET DBM$_NOTWITHIN DBM$_NOT_MBR DBM$_NOT_OPTNL DBM$_NOT_UPDATE DBM$_OVERFLOW DBM$_REENTRANCY DBM$_SHUTDOWN DBM$_SKEYMOVE DBM$_TRAN_IN_PROG DBM$_UNDERFLOW DBM$_UNSCONV DBM$_USRFRCEXT DBM$_SSVERSION DBM$_CURDISPLA DBM$_BUGCHKDMP DBM$_ROLLBACK DBM$_TRUE DBM$_DORURECOV DBM$_CANTOPENIN DBM$_DATCNVERR DBMS Code Symbolic Name 2654420 2654436 2654452 2654468 2654484 2654500 2654516 2654532 2654548 2654564 2654580 2654596 2654612 2654628 2654644 2654660 2654676 2654692 2654708 2654724 2654740 2654756 2654772 2654788 2654804 2654828 2654844 2654860 2654876 2654892 2654908 2654924 2654937 2654956 2654972 2654988 2655004 DBM$_CHKRECORD DBM$_COMPLEX DBM$_CONVERR DBM$_CRELM_POS DBM$_CRTYP_POS DBM$_CRUN_POS DBM$_CSTYP_POS DBM$_DEADLOCK DBM$_END DBM$_FIXED DBM$_ILLNCHAR DBM$_NOCREMBX DBM$_NOLLBAVAIL DBM$_NONDIGIT DBM$_NOREALM DBM$_NOSSBIND DBM$_NOTOTYP DBM$_NOT_BOUND DBM$_NOT_MTYP DBM$_NOT_READY DBM$_NOW_MBR DBM$_READY DBM$_SETSELECT DBM$_SINGSTYP DBM$_STAREAFUL DBM$_TRUNCATION DBM$_UNSCOMP DBM$_USE_EMPTY DBM$_WRONGRTYP DBM$_SSVERSION2 DBM$_KPLDISPLA DBM$_ABORTED DBM$_FALSE DBM$_NOWILD DBM$_INVDBSFIL DBM$_CNVNUMDAT DBM$_MISMMORDD

42-16 AIS User Guide and Reference

Table 425 (Cont.) DBMS Error Codes DBMS Code Symbolic Name 2655012 2655028 2655044 2655236 2655252 2655268 2655284 2655300 2655316 2655332 2655348 2655364 2655380 2655396 2655412 2655428 2655444 2655460 2655476 2655492 2655508 2655524 2655540 2655556 2655572 2655588 2655604 2655620 2655636 2655652 2655668 2655684 2655700 2655716 2655732 2655748 2655764 DBM$_BDDATRANG DBM$_NOTSYSCONCEAL DBM$_NODBK DBM$_BAD_KCALL DBM$_KBOUND DBM$_NOTBOOL DBM$_UNSARITH DBM$_STRNOTFND DBM$_EXQUOTA DBM$_SIP DBM$_CANTOPENRUJ DBM$_MONMBXOPN DBM$_NOPRIV DBM$_MONTRMFOR DBM$_MONMBXDEL DBM$_MONDELLOG DBM$_CANTSNAP DBM$_BADAIJFILE DBM$_ROOMAJVER DBM$_BADAIJTAD DBM$_CANTCREMBX DBM$_CANTSENDMAIL DBM$_NOCHAR DBM$_CANTDELETE DBM$_LOGINISTA DBM$_LOGREINIT DBM$_CANTCREDBS DBM$_CANTGETRUJ DBM$_CANTREADDBS DBM$_CHECKSUM DBM$_BUFTOOSML DBM$_LOGCREATE DBM$_LOGSNPSTA DBM$_LOGSNPINI DBM$_INVREREADY DBM$_BADBNDPRM DBM$_AIJOPEN DBMS Code Symbolic Name 2655020 2655036 2655052 2655244 2655260 2655276 2655292 2655308 2655324 2655340 2655356 2655372 2655388 2655404 2655420 2655436 2655452 2655468 2655484 2655500 2655516 2655532 2655548 2655564 2655580 2655596 2655612 2655628 2655644 2655660 2655676 2655692 2655708 2655724 2655740 2655756 2655772 DBM$_BADDATDEF DBM$_LCKCNFLCT DBM$_NOUSERNAM DBM$_ISI DBM$_MONABORT DBM$_WASBOOL DBM$_STALL DBM$_AREA_CORRUPT DBM$_NOSIP DBM$_NOSNAPS DBM$_CANTOPENAIJ DBM$_NOMONLOGNAM DBM$_MONTRMDEL DBM$_MONTRMSUI DBM$_GBLSECDEL DBM$_CANTDELLOG DBM$_EMPTYAIJ DBM$_NOTROOT DBM$_MUSTRECDB DBM$_DBRABORTED DBM$_CANTREADAIJ DBM$_CANTCREDBR DBM$_LOGDELFIL DBM$_LOGINIDBS DBM$_LOGNEWDBS DBM$_CANTFLSHRUJ DBM$_CANTCONNRUJ DBM$_CANTTRNCRUJ DBM$_QIOXFRLEN DBM$_CANTWRITEDBS DBM$_CANTCREROO DBM$_LOGGBLSEC DBM$_LOGSNPFNM DBM$_ACCVIO DBM$_NONODE DBM$_BADBOUNDS DBM$_CORRUPT_ROOT


Table 42-5 (Cont.) DBMS Error Codes

DBMS Code  Symbolic Name          DBMS Code  Symbolic Name
2655780    DBM$_DUPGSDNAM         2655788    DBM$_INHIBRECOV
2655796    DBM$_INV_ROOT          2655804    DBM$_INV_SCDD
2655812    DBM$_INV_SSDD          2655820    DBM$_INV_STDD
2655828    DBM$_NORTUPB           2655836    DBM$_NOUSERPID
2655844    DBM$_RECOVERY          2655852    DBM$_ROOT_NOT_OPEN
2655860    DBM$_ROOT_OPEN         2655868    DBM$_SSNOTINROOT
2655876    DBM$_STOPPED           2655884    DBM$_NORESEXT
2655892    DBM$_DBJ_NOTREADY      2655900    DBM$_NOWILDAIJ
2655908    DBM$_BADAIJNAM         2655916    DBM$_NOLABEL
2655924    DBM$_FREE_VM           2655932    DBM$_GET_VM
2655940    DBM$_ROOTMAJVER        2655948    DBM$_MICROFAIL
2655956    DBM$_HARDERROR         2655964    DBM$_SUICIDE
2655972    DBM$_TERMINATE         2655980    DBM$_CANTWRITE
2655988    DBM$_CANTREAD          2655996    DBM$_MONOC
2656004    DBM$_BADFILTYP         2656012    DBM$_LOGMODIFY
2656020    DBM$_MODVALSTR         2656028    DBM$_CANTCREMON
2656036    DBM$_CANTQIOMBX        2656044    DBM$_AIJERREOF
2656052    DBM$_AIJSQ2EOF         2656060    DBM$_AIJSEQEOF
2656068    DBM$_AIJCLSTAD         2656076    DBM$_AIJRECFST
2656084    DBM$_AIJLIMBOC         2656092    DBM$_AIJLIMBOR
2656100    DBM$_AIJSTART2         2656108    DBM$_AIJLOGMSG
2656116    DBM$_AIJUNCOMR         2656124    DBM$_AIJERRSTP
2656132    DBM$_AIJLIMBOI         2656140    DBM$_AIJSUCCES
2656148    DBM$_AIJBADPID         2656156    DBM$_AIJBADMAI
2656164    DBM$_AIJSTART1         2656172    DBM$_CANTMAPTROOT
2656180    DBM$_BADROOTMATCH      2656188    DBM$_CANTGETEOF
2656196    DBM$_CANTMODFYEOF      2656204    DBM$_TOOMANYDUPS
2656212    DBM$_CANTMASTERDB      2656236    DBM$_YES
2656244    DBM$_NO                2656260    DBM$_MODVAL
2656284    DBM$_CANTADDUSER       2656292    DBM$_CANTCLOSEDB
2656300    DBM$_OPERSHUTDN        2656308    DBM$_CANTCREGBL
2656316    DBM$_CANTLCKTRM        2656324    DBM$_CANTOPENDB
2656332    DBM$_CANTREADDB        2656340    DBM$_DBACTIVE
2656356    DBM$_DBNOTACTIVE       2656364    DBM$_DBSHUTDOWN
2656372    DBM$_MONSTOPPED        2656380    DBM$_OPERCLOSE
2656388    DBM$_CANTASSMBX        2656404    DBM$_MODAREVAL
2656412    DBM$_FILACCERR         2656420    DBM$_NOAIJDIR


Table 42-5 (Cont.) DBMS Error Codes

DBMS Code  Symbolic Name          DBMS Code  Symbolic Name
2656428    DBM$_LOGFILACC         2656436    DBM$_LOGINIFIL
2656444    DBM$_NOAIJDEF          2656452    DBM$_BADASCTOID
2656460    DBM$_AIJENABLED        2656468    DBM$_LOGRECOVR
2656476    DBM$_NOTRANAPP         2656484    DBM$_AIJDISABLED
2656492    DBM$_LOGAIJBCK         2656500    DBM$_RESTART
2656508    DBM$_TADMISMATCH       2656516    DBM$_CANTSPAWN
2656524    DBM$_NOCONVERT         2656532    DBM$_LOGCONVRT
2656540    DBM$_NOTDSKFIL         2656548    DBM$_NOTIP
2656556    DBM$_NODEVDIR          2656564    DBM$_BADAIJVER
2656572    DBM$_QUIETPT           2656580    DBM$_DBNOTOPEN
2656588    DBM$_MODAREFLG         2656596    DBM$_PREMEOF
2656604    DBM$_LOGRECSTAT        2656612    DBM$_EMPTYFILE
2656620    DBM$_INVHEADER         2656628    DBM$_DELAREA
2656636    DBM$_BKUPEMPTYAIJ      2656644    DBM$_AREA_INCONSIST
2656652    DBM$_ERROPENIN         2656660    DBM$_ERROPENOUT
2656668    DBM$_ERRFOREIGN        2656676    DBM$_AIJDEVDIR
2656684    DBM$_RUJDEVDIR         2656692    DBM$_SNAPFULL
2656700    DBM$_LOGBCKAIJ         2656708    DBM$_LOGOPNAIJ
2656716    DBM$_LOGCREDB          2656724    DBM$_LOGCREAIJ
2656732    DBM$_LOGCRESTO         2656740    DBM$_LOGCRESNP
2656748    DBM$_LOGCREOUT         2656756    DBM$_LOGCREBCK
2656764    DBM$_LOGINISTO         2656772    DBM$_LOGINISNP
2656780    DBM$_LOGRECDB          2656788    DBM$_LOGPAGCNT
2656796    DBM$_LOGMODFLG         2656804    DBM$_LOGMODVAL
2656812    DBM$_LOGMODSTR         2656820    DBM$_READ_ONLY
2656828    DBM$_ERRWRITE          2656836    DBM$_LOGALGCNT
2656844    DBM$_LOGALGFAC         2656852    DBM$_SETWIDTH
2656860    DBM$_AREARSTR          2656868    DBM$_BADPARAM
2656876    DBM$_BADSPAMINT        2656884    DBM$_TIMEOUT
2656892    DBM$_MONFLRMSG         2656900    DBM$_PARTDTXNERR
2656908    DBM$_BADBUFSIZ         2656916    DBM$_LOGMODSTO
2656924    DBM$_LOGMODSPM         2656932    DBM$_GETTXNOPTION
2656940    DBM$_CONFTXNOPTION     2656948    DBM$_LOGRESOLVE
2658308    DBM$_GROUPNA           2658316    DBM$_SECURVIO
2658324    DBM$_NOTSTAREA         2658332    DBM$_BADSTRADDR
2658340    DBM$_BADUWALST         2658348    DBM$_NOSTREAM
2658356    DBM$_OBSRTDDCB         2658364    DBM$_NOROLLB


Table 42-5 (Cont.) DBMS Error Codes

DBMS Code  Symbolic Name            DBMS Code  Symbolic Name
2658372    DBM$_ONSTREAM            2658380    DBM$_NO_DBMREG
2658388    DBM$_OUTSTRCTX           2658396    DBM$_BADPROTOCOL
2658404    DBM$_NETERR              2658412    DBM$_BADDBKEY
2658420    DBM$_RECNOTINSS          2658428    DBM$_CURDISNUL
2658436    DBM$_UNIRECORD           2658444    DBM$_UNIMEMBER
2658452    DBM$_EPCBADCAL           2658460    DBM$_NOCMRLDTXN
2658468    DBM$_DTXNABORTED         2658476    DBM$_NOTSET
2658484    DBM$_NOBATUPD            2662403    DBM$_STAT_TRANS
2662411    DBM$_STAT_COMMIT         2662419    DBM$_STAT_CONNECT
2662427    DBM$_STAT_DISCONNECT     2662435    DBM$_STAT_ERASE
2662443    DBM$_STAT_FETCH          2662451    DBM$_STAT_FREE
2662459    DBM$_STAT_KEEP           2662467    DBM$_STAT_MODIFY
2662475    DBM$_STAT_READY          2662483    DBM$_STAT_ROLLBACK
2662491    DBM$_STAT_STORE          2662499    DBM$_STAT_IF_ALSO
2662507    DBM$_STAT_IF_EMPTY       2662515    DBM$_STAT_IF_MEMBER
2662523    DBM$_STAT_IF_NULL        2662531    DBM$_STAT_IF_OWNER
2662539    DBM$_STAT_IF_TENANT      2662547    DBM$_STAT_IF_WITHIN
2662555    DBM$_STAT_USE            2662563    DBM$_STAT_STAT
2664451    DBM$_STAT_DBM_VERBS      2664459    DBM$_STAT_DBM_VROLL
2664467    DBM$_STAT_LCK_LOCK       2664475    DBM$_STAT_LCK_DEMO
2664483    DBM$_STAT_PIO_DB_R       2664491    DBM$_STAT_PIO_DB_W
2664499    DBM$_STAT_RUJ_FLUSH      2664507    DBM$_STAT_RUJ_PUT
2664515    DBM$_STAT_LCK_CNFL       2664523    DBM$_STAT_LCK_HOLD
2664531    DBM$_STAT_PIO_FETCH      2668547    DBM$_STAT_PSII_BAL
2668555    DBM$_STAT_PSII_CRE       2668563    DBM$_STAT_PSII_DES
2668571    DBM$_STAT_PSII_INS       2668579    DBM$_STAT_PSII_MOD
2668587    DBM$_STAT_PSII_REM       2668595    DBM$_STAT_PSII_SEA
2668603    DBM$_STAT_PSII_DIST1     2668611    DBM$_STAT_PSII_DIST2
2668619    DBM$_STAT_PSII_DIST3     10059788   DBQ$_AMBIGITEM
10059796   DBQ$_BAD_ARGLST          10059804   DBQ$_BADELSE
10059812   DBQ$_CABORT              10059820   DBQ$_IGNRCAST
10059828   DBQ$_CANTDOIT            10059836   DBQ$_OBS_CONVERR
10059844   DBQ$_END                 10059852   DBQ$_EXIT
10059860   DBQ$_EXTRAINPUT          10059868   DBQ$_IGNORED
10059876   DBQ$_ILLCHAR             10059884   DBQ$_INCONRECITEM
10059892   DBQ$_LISTTOOBIG          10059900   DBQ$_NOCURREC
10059908   DBQ$_NOFILE              10059916   DBQ$_NOTBOUND


Table 42-5 (Cont.) DBMS Error Codes

DBMS Code  Symbolic Name            DBMS Code  Symbolic Name
10059924   DBQ$_OBS_NOTIMPLYET      10059932   DBQ$_NOTINCALL
10059940   DBQ$_NOTITEM             10059948   DBQ$_NOTPOSNUM
10059956   DBQ$_SYNTAX              10059964   DBQ$_SYNTXEOS
10059972   DBQ$_TOKTOOBIG           10059980   DBQ$_ZABORT
10059988   DBQ$_CANTBEUSED          10059996   DBQ$_CANTPRINT
10060004   DBQ$_EMPTYLOOP           10060012   DBQ$_MISMATMOV
10060020   DBQ$_OPANDOVR            10060028   DBQ$_OPATOROVR

Defining the DBMS Data Source


The process of defining a DBMS data source consists of two tasks:

- Defining the DBMS Data Source Connection
- Configuring the DBMS Data Source Properties

Defining the DBMS Data Source Connection


The DBMS data source driver connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the DBMS data source.
6. Right-click the Data sources folder and select New Data Source. The New Data Source screen is displayed.
7. Enter a name for the data source in the Name field.
8. Select DBMS from the Type list.
9. Enter the connect string as follows:
   - Root File: Specify the database root file. This file may be referenced using a logical name.
   - Subschema: Specify the name of a subschema. (You must specify a separate directory for each subschema.)
   - Access Mode: Enter the access mode for the connection (ReadOnly, readwrite, batchRetrieval, or reporting). The default value is reporting.

Note: You must specify a database definition for each database and subschema combination that you need to access.


10. Click Finish.

See also: Adding Data Sources.
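As an illustration of the connect string entered in step 9, for the PARTS database that also appears in the import example later in this section, the fields might be completed as follows (the values are illustrative):

Root File: PART$DIR:PARTS.ROO
Subschema: PARTSS1
Access Mode: reporting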

Configuring the DBMS Data Source Properties


After setting the binding, you set the data source properties.

To configure the DBMS data source properties
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the DBMS data source and select Open. The Configuration Properties screen is displayed.

Figure 42-5 DBMS Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the DBMS Data Source Connection.
8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.
Note:

You must specify a database definition for each database and subschema combination that you need to access.


9. After setting the binding, you must define Attunity metadata describing the DBMS data.

Setting Up the DBMS Data Source Metadata


The DBMS data source requires Attunity metadata. You can use the DBMS_ADL import utility to import this metadata into Attunity metadata. Use the Design perspective, Metadata tab of Attunity Studio to define new metadata and update the statistics for the data.

The DBMS_ADL import utility produces Attunity metadata from a DBMS database. The utility accepts a DBMS root file, a subschema name, and the name of the data source as specified in the binding. It then reads the metadata from DBMS and populates the relevant repository. Activation of this utility is based on environment symbols defined by the login file that resides in the BIN directory under the directory where AIS is installed. You can always replace the environment symbol with the appropriate entry.

To generate ADD metadata, use the following command line (activated directly from DCL):
$ dbms_adl root-file subschema ds_name [exp_select] [basename]

where:

- root-file: The name of the DBMS root file. If a logical is used for the root file, the extension .ROO must be part of the logical specification.
- subschema: The name of the DBMS subschema.
- ds_name: The name of a data source defined in the binding. The imported metadata is stored as ADD metadata in the repository for this data source.
- exp_select: When x is specified for this parameter, the explicit select attribute of virtual fields is disabled.
- basename: A user-defined name, used for the intermediate files created during the import operation.

The following example creates an ADD entry in the repository for the DBMS_PROD data source, for the PARTS.ROO root file and the PARTSS1 subschema:

$ CREATE/DIRECTORY DKA100:[PARTS.PARTSS1]
$ DBMS_ADL PART$DIR:PARTS.ROO PARTSS1 DBMS_PROD

Also refer to Data Types for how they are converted to Attunity metadata data types.



43
Enscribe Data Source (HP NonStop Only)
This section describes the Attunity Enscribe Data Source Driver. It includes the following topics:

- Overview
- Functionality
- Configuration Properties
- Metadata
- Transaction Support
- Security
- Enscribe Data Types
- Defining the Enscribe Data Source
- Setting up the Enscribe Data Source Metadata
- Testing the Enscribe Data Source

Overview
Attunity Connect supports the following Enscribe file types:

- Key-sequenced
- Entry-sequenced
- Relative: The Enscribe driver exposes a column called # for the relative record number. The # column can be used in SQL statements just like any other column. The Enscribe driver exposes a virtual index on the # column and implements the index functionality using the Enscribe API.
- Unstructured: Unstructured files that keep fixed-length records are supported. The unstructured file is read using a read-ahead buffer of 4K. The Enscribe driver also supports RBA usage. The RBA column holds the relative byte address in the file where the record begins, and can be used in SQL statements just like any other column. Set the record organization in the metadata to unstructured.
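For example, the # column of a relative file can be referenced in SQL directly. The following is a minimal sketch, assuming a relative file exposed as a table named ORDERS (a hypothetical name):

SELECT #, ORDER_DATE FROM ORDERS WHERE # BETWEEN 100 AND 200

Because the driver exposes a virtual index on the # column, a range filter such as this can be resolved through the Enscribe API rather than by scanning the whole file.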

Functionality
The Attunity Enscribe data source driver supports the following Enscribe key features:

- Record-level locking on Enscribe data for both audited and unaudited files, using the HP NonStop LOCKREC and UNLOCKREC calls


- Array structures within the Enscribe record
- Variant (redefined) structures within the Enscribe record

Supported Versions and Platforms


The Attunity Enscribe data source driver is supported on all HP NonStop platforms supported by AIS. See Attunity Integration Suite Supported Systems and Resources.

Configuration Properties
The following parameters can be configured in Attunity Studio for the Enscribe data source in the data source Properties tab. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

- createAuditFiles: Specifies that any Enscribe file created using the CREATE TABLE SQL statement is audited. The volume must be audited in order to create audited files. This parameter value can be set to True or False.
- disableExplicitSelect: Disables the field-level ExplicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement. This parameter value can be set to True or False.
- enscribeLockMode: Specifies the value of the SETMODE parameter of the Guardian Set Lock Mode Procedure. The default value is 3. Valid values are:
  - 0: Normal mode. A request is suspended when a read or lock is attempted and an established record lock or file lock is encountered.
  - 1: Reject mode. A request is rejected with file-system error 73 when a read or lock is attempted and an established record lock or file lock is encountered. No data is returned for the rejected request.
  - 2: Read-through/normal mode. READ or READUPDATE ignores record locks. Encountering a lock neither delays nor prevents the reading of a record. The locking response of LOCKFILE, LOCKREC, READLOCK, and READUPDATELOCK is identical to normal mode (mode 0).
  - 3: Read-through/reject mode. READ or READUPDATE ignores record locks and file locks. Encountering a lock neither delays nor prevents the reading of a record. The locking response of LOCKFILE, LOCKREC, READLOCK, and READUPDATELOCK is identical to reject mode (mode 1).
  - 6: Read-warn/normal mode. READ or READUPDATE returns data without regard to record and file locks. However, encountering a lock causes code 9 to be returned with the data. The locking response of LOCKFILE, LOCKREC, READLOCK, and READUPDATELOCK is identical to normal mode (mode 0).
  - 7: Read-warn/reject mode. READ or READUPDATE returns data without regard to record and file locks. However, encountering a lock causes code 9 to be returned with the data. The locking response of LOCKFILE, LOCKREC, READLOCK, and READUPDATELOCK is identical to reject mode (mode 1).

- enscribeLockType: Specifies whether record-level or file-level locking is used when a lock is requested. Valid values are:
  - -1: No locking.
  - 0 (default): Record locking.
  - 1: File locking.

- filepoolCloseOnTransaction: Specifies that all files in the file pool for this data source close at each end of transaction (commit or rollback). This parameter value can be set to True or False.
- filepoolSize: Specifies how many instances of a file from the file pool may be open concurrently.
- filepoolFilePerSize: Specifies how many instances of a file from the file pool may be open concurrently for each file.
- newFileLocation: Specifies the location of the Enscribe files and indexes you create with CREATE TABLE and CREATE INDEX statements. You must specify the full path for the subvolume. This parameter value is of a string type.
- supportFormat2: Specifies whether Format2 is supported by the version of the NSK operating system used. This property can be overridden in the metadata by setting dbCommand to FORMAT1 or FORMAT2. This parameter value can be set to True or False.
- transactions: Specifies whether or not TMF transactions are started. Set this property to False when dealing with unaudited files, as a TMF transaction is not required. This parameter value can be set to True or False.
- useGlobalFilepool: Specifies a global file pool that can span more than one session. This parameter value can be set to True or False.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Metadata
The Enscribe data source driver requires Attunity metadata. You can import the metadata from COBOL copybooks, a DDL subvolume, or TAL files. If no COBOL copybooks, DDL subvolume, or TAL files describing the Enscribe records exist, the metadata must be defined manually. If the metadata exists only as COBOL copybooks, you can import it using the Enscribe import utility in the Design perspective, Metadata tab of Attunity Studio. If the metadata exists in a DDL subvolume or as TAL data files, you can use the stand-alone ADDIMP and TALIMP import utilities to import it into Attunity metadata. You use the Metadata tab in Attunity Studio to maintain the Attunity metadata and update the statistics for the data.

Transaction Support
The Attunity Enscribe data source driver supports one-phase commit transactions. It can participate in a distributed transaction if it is the only one-phase commit data source being updated in the transaction, when the convertAllToDistributed and useCommitConfirmTable environment properties are set to true. The Enscribe and SQL/MP data sources and the Pathway adapter share the same transaction, which automatically provides consistency between Enscribe and SQL/MP.


As a result, you cannot start a new transaction for SQL/MP when one is open for Enscribe.
Note:

TMF is required when updating audited files.

Security
The Enscribe data source driver does not apply or enforce security policy.

Enscribe Data Types


The following table lists data types in a CREATE TABLE statement and how they are mapped to Enscribe data types:
Table 43-1 Create Table Data Types

Create Table         Enscribe
Char [(m)]           Char [(m)]
Date                 Date+time
Double               Double
FLOAT                Float
Image
Integer              Integer
Numeric [(p[,s])]    Numeric (p,s)
Smallint             Smallint
Text
Tinyint              Smallint
Varchar (m)          Varchar (m)

See also ADD Supported Data Types.
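For example, a CREATE TABLE statement such as the following sketch (the table and column names are hypothetical) produces Enscribe fields according to the mappings above:

CREATE TABLE EMP (
    EMP_ID INTEGER,
    NAME   CHAR(30),
    SALARY NUMERIC(9,2),
    HIRED  DATE
)

Here EMP_ID maps to an Enscribe Integer, NAME to Char(30), SALARY to Numeric(9,2), and HIRED to an Enscribe Date+time field.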

Defining the Enscribe Data Source


The definition of an Enscribe data source consists of the following tasks:

- Defining the Enscribe Data Source Connection
- Configuring the Enscribe Data Source

Defining the Enscribe Data Source Connection


The Enscribe data source connection is defined using the Design perspective, Configuration view in Attunity Studio.

To define the Enscribe data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the computer where you want to add your Enscribe data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Enscribe data source.
6. Right-click the Data sources folder and select New Data source. The New Data Source screen is displayed.
7. Enter a name for the data source in the Name field.


Note: When you enter a name for the data source, the name must begin with a letter. In addition, if you do not supply the Repository Information on the Advanced tab of the Data Source editor, the default names that AIS generates for the metadata files will be unique. AIS creates a NOS file and a BBNOS file. The file names are created by using the characters in the data source name and then by ensuring that the first letter is legal on the HP NonStop platform. To make sure that the generated files are unique, make the sixth through eighth alphanumeric characters of the data source name alphabetic and unique. If the data source name contains fewer than eight alphanumeric characters, the last three alphanumeric characters must be alphabetic and unique. If the data source name contains five or more alphanumeric characters, the last five cannot all be the letter B.

If you enter the Repository Information on the Advanced tab of the Data Source editor, make sure that you apply the rule above to the filenames created by the Repository directory (HP NonStop volume/subvolume) and name, not the data source name.
8. Select Enscribe from the Type list.
9. Click Next. The Data Source Connect String screen opens.
10. Enter the Data SubVolume where the Enscribe files and indexes created with CREATE TABLE and CREATE INDEX statements reside. If a value is not specified, created files are written to the subvolume where Attunity Connect is installed. The name of an index created in this subvolume must be different from the name of any table in the subvolume. If you are only accessing existing Enscribe files, this value does not need to be specified.
11. Click Finish.

The new Enscribe data source is now displayed in the Configuration view, and its properties tabs are displayed in the Editor pane. See also: Adding Data Sources.

Configuring the Enscribe Data Source


After defining the connection, you set the data source properties.

To configure the Enscribe data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Enscribe data source.
4. Expand the Bindings folder and the binding with your Enscribe data source.
5. Expand the Data sources folder.
6. Right-click the Enscribe data source and select Open. The Configuration Properties screen is displayed.

Figure 43-1 Enscribe Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Enscribe Data Source Connection.
8. Configure the Enscribe data source parameters as required. For a description of the available Enscribe parameters, see Configuration Properties.

Setting up the Enscribe Data Source Metadata


This section includes the following topics:

- Importing Metadata from COBOL
- Importing Metadata Using the ADDIMP Utility
- Importing Metadata Using the TALIMP Utility
- Maintaining Metadata

Importing Metadata from COBOL


If COBOL copybooks describing the Enscribe records are available, import the metadata by running the metadata import utility in Attunity Studio.


Starting the Import Process


This section describes the steps required to begin importing metadata from data sources that must use Attunity metadata, such as COBOL copybooks. This metadata is used to generate the Attunity metadata.

To begin the import process
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder and then expand the machine with the data source where you are importing the metadata.
3. Expand the Bindings folder and expand the binding with the data source metadata you are working with.
4. Expand the Data Source folder.
5. Right-click the data source that you are working with and select Show Metadata View. The Metadata view opens with the selected data source displayed.
6. Right-click Imports under the data source and select New Import. The New Import dialog box is displayed, as shown in the following figure:

Figure 43-2 The New Import screen

7. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
8. Select the Import type from the list. You can select:
   - Enscribe Import Manager
   - COBOL Import Manager for Databases
9. Click Finish.

See Importing Data Source Metadata with the Attunity Import Wizard for an explanation of how to use the import wizard.


Importing Metadata Using the ADDIMP Utility


The Enscribe ADDIMP import utility produces Attunity metadata for HP NonStop Enscribe data sources from a DDL subvolume and COBOL copybooks. If the metadata exists only as COBOL copybooks, you can import the metadata using the Enscribe import utility in the Design perspective, Metadata tab of Attunity Studio.

To generate the ADD metadata
1. From the Start menu, select Programs, Attunity, and then select Command Line Console.
2. Enter the following at the prompt:

RUN ADDIMP [-t template] [-r replace_string] -n ds_name [-e DDL_export_list] [-f filename_table] [-v variant_table] [-m rec_filter] [-b basename] [-d] [-c] [-g] [-p] [-6] [-z] files

Where:

- template: The template used in the COBOL COPYFROM statement.
- replace_string: The string used to replace all occurrences of template, if it has a value. If template has a value and replace_string is not set, all occurrences of template are made empty.
- ds_name: The name of the data source defined in the binding. The imported metadata is stored as ADD metadata in the repository for this data source.
- DDL_export_list: The records to be imported from the DDL dictionary. If the list contains more than one record, the list must be surrounded by double quotes (""). This parameter defaults to *, importing all records. To import DDL definitions, set the z parameter as described below.
- filename_table: A text file containing a list of records and the names of their data files. Each row in this file has two entries: record_name and physical_enscribe_data_file_name (used as the value for the Data file field for the table in the Attunity Studio Design perspective, Metadata tab). If a table is not listed in this text file, or if this text file does not exist, the entry for the Data file field for the table defaults to table_FIL, where table is the name of the table. The format of the file is tablename $vol.subvol.file. For example:
EMPLYEE $USER.PERS.EMPLOYEE

Notes: Fields mapped to a key segment must be contiguous in each table definition. When the filename defaults to table_FIL, you must change this name (using the Attunity Studio Design perspective, Metadata tab, or NAV_UTIL EDIT) to the correct name in order to access the data.

- variant_table: A list of variants with their selector fields and the valid values for each selector field. Each line in the list has the following format:

variant-field, selector-field, "val1", "val2", ..., "valN"


Note: All the val# arguments must be surrounded by double quotes. If a val# argument contains a comma or double quote, the character must be doubled.

If the variant line is too long, break the line at a comma separator. For example:
var_1,selector_1,"a","b","c"
var_2,selector_2,"a23456789012345","b23456789012345","c23456789012345","d23456789012345"

- rec_filter: Specifies the set of records you want to import, as an AWK regular expression. You can use special characters such as ., *, "[...]", "\{n,m\}", ^, $. For information on AWK regular expressions, see the AWK reference documentation.

- basename: Specifies the user-defined name of the intermediate files used during the import procedure. The following files are generated (the default file names appear in parentheses):
  - basenameA (IADDIMPA)
  - basenameF (IADDIMPF)
  - basenameL (IADDIMPL)
  - basenameC (IADDIMPC)
Note:

The basename entry must be no longer than 7 characters.

If these files already exist, write/purge access to them is required when you run the utility.

- d: Specifies that all intermediate files are saved. You can check these files if problems occur in the conversion.
- c: Specifies that the column name is used for an array name, instead of the concatenation of the parent table name with the child table name.

Note: If a column name is not unique in a structure (as when a structure includes another structure, which contains a column with the same name as a column in the parent structure), the nested column name is suffixed with the nested structure name.

- g: Specifies that a group name is prefixed to the column name.

Note: If a counter field is defined after the array field, check the resulting XML to ensure that the column name is fully qualified.


- p: Specifies that punch card formatting is implemented. Columns after column 72 in every line are ignored.
- 6: Specifies that the first 6 columns of every line are ignored (ANSI COBOL is assumed instead of extended COBOL).
- 7: Specifies that COBOL74 is used. If this is not specified, COBOL85 is used.
- z: Specifies that the metadata is exported from DDL definitions and not the DDL records (the default).
- files: Specifies the structure files for the Enscribe tables. Each file is in the format of a COBOL copylib file. Only one subvolume of a DDL dictionary is allowed. An intermediate filename file and an intermediate copylib file are generated from the dictionary. Separate the files in this list with blanks.

Note: The COBOL file must start at the 01 level (that is, 01 record-name) and include the entire record definition (and not a reference to a copybook file elsewhere).

Example
run addimp -n ENSDATA d0117.ddldata
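A fuller invocation might combine several of the switches described above; for example (the file names fnames and variants are hypothetical):

run addimp -n ENSDATA -f fnames -v variants -d d0117.ddldata

This imports the records described in d0117.ddldata into the ENSDATA data source, takes the physical data file locations from fnames, resolves variants according to variants, and keeps the intermediate files for inspection.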

To display online help for this utility, run the command with help as the only parameter, as follows:
ADDIMP -i

The files in the command line are processed in order. When the same table name occurs more than once, the latest definition of the table is used for the ADD. If a DDL dictionary is specified, the generated intermediate location file and structure file are placed first.

The ADDIMP utility enables you to overwrite the location (listed in the DDL dictionary) of an Enscribe table with a specific filename file, in cases where the dictionary entry is not current.

For unstructured Enscribe files, ADDIMP sets the Organization attribute in the generated metadata to Index. Edit the metadata from the Attunity Studio Design perspective, Metadata tab to change the Organization value from Index to Unstructured for each table, in the metadata editor General tab.
Note: You must include a filler field, of size one, when including an odd record size in an even unstructured Enscribe file.

Importing Metadata Using the TALIMP Utility


The Enscribe TALIMP import utility produces Attunity metadata for HP NonStop Enscribe data sources from TAL data files and a DDL subvolume. Use the TALIMP utility in preference to the ADDIMP utility in the following circumstances:

- You have a TAL data file instead of a COBOL copybook.
- The DDL uses identifiers that are restricted words in COBOL but not in TAL.


Note:

When using an HP NonStop Himalaya $CMON system monitor, the system monitor may conflict with the import utility and cause it to hang. In this case, temporarily stop the $CMON process.

To generate the metadata
1. From the Start menu, select Programs, Attunity, and then select Command Line Console.
2. Enter the following at the prompt:

RUN TALIMP -n ds_name [-e DDL_export_list] [-f filename_table] [-v variant_table] [-m rec_filter] [-p TAL_columns] [-b basename] [-q] [-d] [-c] [-z] files

Where:

- ds_name: The name of the data source defined in the binding. The imported metadata is stored as ADD metadata in the repository for this data source.
- DDL_export_list: The records to be imported from the DDL dictionary. If the list contains more than one record, the list must be surrounded by double quotes (""). This parameter defaults to *, importing all records. To import DDL definitions, set the z parameter as described below.
- filename_table: A text file containing a list of records and the names of their data files. Each row in this file has two entries: record_name and physical_enscribe_data_file_name (used as the value for the Data file field for the table in the Attunity Studio Design perspective, Metadata tab). If a table is not listed in this text file, or if this text file does not exist, the entry for the Data file field for the table defaults to table_FIL, where table is the name of the table. The format of the file is tablename $vol.subvol.file. For example:
EMPLYEE $USER.PERS.EMPLOYEE

Notes: Fields mapped to a key segment must be contiguous in each table definition. When the filename defaults to table_FIL, you must change this name (using the Attunity Studio Design perspective, Metadata tab, or NAV_UTIL EDIT) to the correct name in order to access the data.

- variant_table: A list of variants with their selector fields and the valid values for each selector field. Each line in the list has the following format:

variant-field, selector-field, "val1", "val2", ..., "valN"

Note: All the val# arguments must be surrounded by double quotes. If a val# argument contains a comma or double quote, the character must be doubled.

If the variant line is too long, break the line at a comma separator. For example:
var_1,selector_1,"a","b","c"
var_2,selector_2,"a23456789012345","b23456789012345","c23456789012345","d23456789012345"

- rec_filter: Specifies the set of records you want to import, as an AWK regular expression. You can use special characters such as ., *, "[...]", "\{n,m\}", ^, $. For information on AWK regular expressions, see the AWK reference documentation.

- TAL_columns: Specifies the value of the COLUMNS directive in the TAL compilation command. If a value is not specified, 132 is used (the TAL default).
- basename: Specifies the user-defined name of the intermediate files used during the import procedure. The following files are generated (the default file names appear in parentheses):
  - basenameA (TTALIMPA)
  - basenameF (TTALIMPF)
  - basenameL (TTALIMPL)
  - basenameC (TTALIMPC)
Note:

The basename entry must be no longer than 7 characters.

If these files already exist, write/purge access to them is required when you run the utility.

- q: Turns on the query mode, in which the import process pauses at each intermediate step.
- d: Specifies that all intermediate files are saved. You can check these files if problems occur in the conversion.
- c: Specifies that the column name is used for an array name, instead of the concatenation of the parent table name with the child table name.

Note: If a column name is not unique in a structure (as when a structure includes another structure, which contains a column with the same name as a column in the parent structure), the nested column name is suffixed with the nested structure name.

- z: Specifies that the metadata is exported from DDL definitions and not the DDL records (the default).
- files: Specifies the structure files for the Enscribe tables. Each file is in the format of a COBOL copylib file. Only one subvolume of a DDL dictionary is allowed. An intermediate filename file and an intermediate copylib file are generated from the dictionary. Separate the files in this list with blanks.

To display online help for this utility, run the command with help as the only parameter, as follows:

TALIMP -i

The files in the command line are processed in order. When the same table name occurs more than once, the latest definition of the table is used for the ADD. If a DDL dictionary is specified, the generated intermediate location file and structure file are placed first. Embedded COLUMNS are processed according to TAL language rules.

The TALIMP utility enables you to overwrite the location (listed in the DDL dictionary) of an Enscribe table with a specific filename file, in cases where the dictionary entry is not current.

For unstructured Enscribe files, the TALIMP utility sets the Organization attribute in the generated metadata to Index. Edit the metadata from the Attunity Studio Design perspective, Metadata tab to change the Organization value from Index to Unstructured for each table, in the metadata editor General tab. You must include a filler field, of size one, when including an odd record size in an even unstructured Enscribe file.

Maintaining Metadata
You can maintain the metadata and update the statistics for the data in Attunity Studio Design perspective, Metadata tab. The indexId attribute in Attunity metadata for an alternate index is the ASCII value corresponding to the 2 bytes of the key specifier. For example, for an alternate index in Enscribe described by fup as:
ALTKEY (2, FILE 0, KEYOFF 15, KEYLEN 12, UNIQUE)

you need the following ADD clause:


indexId is "21337"

21337 is derived from the ASCII values of S (0x53) and Y (0x59). Thus the ASCII value of SY is 0x5359 and its decimal equivalent is 21337 (the value in the indexId clause must be in quotes).
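By the same arithmetic, a hypothetical key specifier of AB would combine the ASCII values of A (0x41) and B (0x42) into 0x4142, whose decimal equivalent is 16706, so the clause would read:

indexId is "16706"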

Testing the Enscribe Data Source


You can perform the following tests on the Enscribe data source:

- Connection test: This tests the physical connection to the data source.
- Query test: This test runs an SQL SELECT query against the data source.
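For example, the query test issues a simple SELECT statement of the following general form (the table name is hypothetical):

SELECT * FROM EMPLOYEE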

These tests are described in the following sections.

To test the connection to the Enscribe data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Enscribe data source.
4. Expand the binding with the Enscribe data source.


5. Expand the Data sources folder.
6. Right-click the required Enscribe data source, and select Test. The Test Wizard screen opens.

7. Select Navigator from the Active Workspace Name list, and click Next. The system now tests the connection to the data source, and returns the test result status.

8. Click Finish to exit the Test wizard.

To test the Enscribe data source by query
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Enscribe data source.
4. Expand the binding with the Enscribe data source.
5. Expand the Data sources folder.
6. Right-click the required Enscribe data source entity, and select Query Tool. The Select Workspace screen opens.

7. Select Navigator and click OK. The Query Tool opens in the Editor pane, with the Build Query tab displayed (see step 10).

8. Select the required query type from the Query Type list. The default is a SELECT-type query.
9. Locate and expand the required Enscribe data source. The Enscribe data source tables are listed.

10. Drag the required table to the Table column, as shown in the following figure:


Figure 43-3 The Query Tool screen

11. Click Execute query. The Query Result tab opens, displaying the results of the query.
12. Close the Query Tool in the Editor pane.

Sample Log File Explained


Log files are used for troubleshooting and error handling. The log file is generated when the driverTrace debug binding parameter is set to True. It includes information about the functions used or called by the driver, the queries executed, the data sources accessed, and so on. The following is a sample log file output:
Attunity Server Log (V4.8.1.0, DEC-UNIX) Started at 2005-12-03T13:38:53 Licensed by ATTUNITY LTD. on 09-AUG-2000 (001001237) Licensed to ATTUNITY for <all providers> on 194.90.22.* (<all platforms>) binding.c (351): ; [B001] Binding to a datasource of type 'ADD-XML' cannot be performed binding.c (351): ; [B001] Binding to a datasource of type 'mf' cannot be performed binding.c (351): ; [B001] Binding to a datasource of type 'MEMORY_GDB' cannot be performed nvOUT (./qp_sqtxt.c 56): select * from nation limit to 3 rows nvRETURN (./qpsynon.c 1140): -1 SELECT T0000.n_nationkey AS c000, T0000.n_name AS c001, T0000.n_regionkey AS c002, T0000.n_comment AS c003 FROM 'inf9'.nation T0000

<<<<<<<<<<<<<<<<<<<

Execution Strategy Begin <<<<<<<<<<<<<<<<<<<<<<<<<<<<

Original SQL: select * from nation limit to 3 rows


Accessing Database 'informix' with SQL: SELECT T0000.n_nationkey AS c000, T0000.n_name AS c001, T0000.n_regionkey AS c002, T0000.n_comment AS c003 FROM 'inf9'.nation T0000

>>>>>>>>>>>>>>>>>>>> Execution Strategy End >>>>>>>>>>>>>>>>>>>>>>>>>>>> nvOUT (./qpsqlcsh.c 140): ---------------------------> Using Cached QSpec SELECT T0000.n_nationkey AS c000, T0000.n_name AS c001, T0000.n_regionkey AS c002, T0000.n_comment AS c003 FROM 'inf9'.nation T0000

nvRETURN (./drviunwn.c 804): -1210 (Last message occurred 2 times) Disabled FilePool Cleanup(DB=___sys, FilePool Size=0) FilePool Shutdown(DB=___SYS, FilePool Size=0) Closing log file at SAT DEC 3 13:39:12 2005


44
Flat File Data Source
This section contains the following topics:

- Configuration Properties
- Defining a Flat File Data Source
- Setting Up the Flat File Data Source Metadata

Configuration Properties
The following parameters can be configured in Attunity Studio for the Flat File data source in the Properties tab of the Configuration Properties screen. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

- disableExplicitSelect: When set to true, this parameter disables the ExplicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.
- headerRows: This parameter specifies the number of lines to be skipped at the beginning of each file. You can override this value by specifying a <dbCommand> statement in ADD.
- newFileLocation: The data location in the connect string. This parameter specifies the location of the flat files and indexes you create with CREATE TABLE statements. You must specify the full path for the directory.
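For example, with newFileLocation set, a CREATE TABLE statement such as the following sketch (the table and column names are hypothetical) writes its data file to that directory:

CREATE TABLE CUSTOMERS (
    CUST_ID INTEGER,
    NAME    CHAR(40)
)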

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Defining a Flat File Data Source


The process of defining a Flat File data source consists of two tasks:

- Defining the Flat File Data Source Connection
- Configuring the Flat File Data Source

Defining the Flat File Data Source Connection


The Flat File data source connection is set using the Design perspective, Configuration view in Attunity Studio.


To define the data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Flat File data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Flat File data source.
6. Right-click the Data sources folder and select New Data Source. The New Data Source screen is displayed.
7. Enter a name for the data source in the Name field.
8. Select Flat files from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.

10. Enter the Flat File connect string as follows:
    - Data Location: Specify the directory where the flat files and indexes created with CREATE TABLE and CREATE INDEX statements reside. You must specify the full path. If a value is not specified in this field, the data files are written to the DEF directory under the directory where AIS is installed.

Note: The value specified is used for the Data File field in the Design perspective, Metadata tab in Attunity Studio.

11. Click Finish.

See also: Adding Data Sources.

Configuring the Flat File Data Source


After defining the connection, you set the data source properties.

To configure the Flat File data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the Flat File data source and select Open. The Configuration Properties screen is displayed.


Figure 44-1 Flat File Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Flat File Data Source Connection.
8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.

Setting Up the Flat File Data Source Metadata


The Flat File data source requires Attunity metadata. If COBOL copybooks that describe the Flat File records do not exist, the metadata must be manually defined. For details on the metadata definition, see Managing Metadata.

This section includes the following topics:

- Importing Attunity Metadata from COBOL
- Maintaining Attunity Metadata

Importing Attunity Metadata from COBOL


If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective, Metadata tab. If the metadata is provided in a number of COBOL copybooks, with different filter settings (such as whether the first 6 columns are ignored or not), import the metadata from copybooks with the same settings and later import the metadata from the other copybooks. COBOL copybooks are required for the import. These copybooks are copied to the machine running Attunity Studio as part of the import procedure.


To define Flat File metadata
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, right-click the data source and select Show Metadata View. The Metadata tab is displayed with the data source displayed in the Metadata view.
3. Right-click Imports and select New Import.
4. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
5. Select COBOL Import Manager for Data Sources as the import type.
6. Click Finish. The Metadata Import Wizard is displayed.
7. Click Add in the Import Wizard to add COBOL copybooks. The Add Resource screen is displayed, providing the option of selecting files from the local machine or transferring the files using FTP from another machine, as shown in the following figure:

Figure 44-2 Add Resource Screen

8. If the files are on another machine, right-click My FTP Sites and select Add.
9. Set the FTP data connection by entering the server name where the COBOL copybooks reside and, if not using anonymous access, a valid username and password to access the machine.
10. To browse and transfer files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
11. Select the files to import and click Finish to start the transfer.

The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In such a case, repeat the import process.


You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. The selected files are displayed in the Get Input Files screen of the wizard, as shown in the following figure:

Figure 44-3 Get Input Files Screen

12. Click Next. The Apply Filters screen is displayed.


Figure 44-4 Apply Filters Screen

13. Apply filters to the copybooks, as needed. The following filters are available:

- COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-61 to treat COMP-6 as a COMP data type or COMP-62 to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor.
- Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
- Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
- Prefix nested column: Prefix all nested columns with the previous level heading.
- Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
- Case sensitive: Specifies whether to consider case sensitivity or not.
- Find: Searches for the specified value.
- Replace with: Replaces the value specified in the Find field with the value specified here.

14. Click Next. The Select Tables screen is displayed:


Figure 44-5 Select Tables Screen

The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables.
15. Select the tables that you want to access (and thus require Attunity metadata) and then click Next. The Import Manipulation screen opens.


Figure 44-6 Import Manipulation Screen

You can perform the following actions in the Import Manipulation screen:

- Resolve table names, where tables with the same name are generated from different COBOL copybooks specified during the import.
- Specify the physical location for the data.
- Specify table attributes.
- Manipulate the fields generated from the COBOL, as follows:
  - Merge sequential fields into one (for simple fields).
  - Resolve variants, either by marking a selector field or by specifying that only one case of the variant is relevant.
  - Add, delete, hide, or rename fields.
  - Change a data type.
  - Set a field size and scale.
  - Change the order of the fields.
  - Set a field as nullable.
  - Select a counter field for fields with dimensions (arrays).
  - Set column-wise normalization for fields with dimensions (arrays): create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
  - Create arrays and set the array dimension.

The Validation tab in the bottom half of the screen displays information about what must be done to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location).
16. To manipulate table information or the fields in the table, right-click the table and select the required option. The following options are available:

- Fields manipulation: Access the Fields Manipulation screen to customize the field definitions.
- Rename: Rename a table. This option is used especially when more than one table with the same name is generated from the COBOL.
- Set data location: Set the physical location of the data file for the table.
- Set table attributes: Set the table attributes.
- XSL manipulation location: Specify an XSL transformation or JDOM document that is used to transform the table definition.

17. Click Next. The Metadata Model Selection step is displayed. This step lets you generate virtual and sequential views for imported tables containing arrays, and configure the properties of the generated views. You can either configure values that apply to all tables in the import or set specific settings for each table.

To configure the metadata model, select one of the following:

- Default values for all tables: Select this option to configure the same values for all the tables in the import. Make the following selections when using this option:
  - Generate sequential view: Select this to map non-relational files to a single table.
  - Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
  - Include row number column: Select one of the following:
    - true: Include a column that specifies the row number in the virtual or sequential view. This applies to this table only, even if the data source is not configured to include the row number column.
    - false: Do not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
    - default: Use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
  - Inherit all parent columns: Select one of the following:
    - true: Virtual views include all the columns in the parent record. This applies to this table only, even if the data source is not configured to include all of the parent record columns.
    - false: Virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
    - default: Use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
- Specific virtual array view settings per table: Select this option to set different values for each table in the import, overriding the data source defaults for those tables. Make the selections in the table under this option; see the items above for an explanation.
The Metadata Model Selection screen is shown in the following figure:


Figure 44-7 The Metadata Model Selection Screen

18. Click Next. The final screen lets you import the metadata to the machine where the data source is located, or leave the generated metadata on the Attunity Studio machine to be imported later.
19. Specify that you want to transfer the metadata to the machine where the data source is located and click Finish. The metadata is imported to the machine where the data source is located.


Maintaining Attunity Metadata


You can maintain the metadata and update the statistics for the data in the Design perspective, Metadata tab in Attunity Studio.



45
IMS/DB Data Sources
This section contains the following topics:

- Overview
- Functionality
- Configuration Properties
- Transaction Support
- Defining the IMS/DB DLI Data Source
- Defining the IMS/DB DBCTL Data Source
- Defining the IMS/DB DBDC Data Source
- Setting Up IMS/DB Metadata

Overview
AIS supports SQL data access to IMS/DB data in the following three IMS/DB environments:

- IMS-DLI: Batch access. The AIS server issues direct DLI commands to retrieve data as a standalone batch program. This means that the database is accessed from the AIS-started task without going through any of the IMS control regions. This access method is suited to nightly processing, such as bulk loading when the IMS control region is down. It is usually not suited for general multi-user data access.
- IMS-DBCTL: This data source is suited to users employing CICS as their primary application platform for accessing IMS data. All AIS servers communicate with an AIS-supplied CICS program. This program accepts requests for scheduling PSBs and retrieving data through DBCTL services.
- IMS-DBDC: This data source is suited to users employing IMS/TM as their primary application platform for accessing IMS/DB data. All AIS servers communicate with an AIS-supplied IMS/TM transaction. This MPP transaction accepts requests from AIS servers and performs the DLI requests on their behalf.

Supported Versions and Platforms


For information on supported IMS/DB versions, see Attunity Integration Suite Supported Systems and Resources. For IMS/DBCTL access, the following versions of CICS are required:

- CICS version 4.1 or higher
- CICS Transaction Server version 1.3 or higher

Supported Features
The IMS/DB data sources support the following key features:

- A relational-like view of an IMS hierarchical database. For more information, see Functionality.
- Metadata import from a combination of DBD files, PSBs, and COBOL copybooks.
- DML access to IMS/DB data.
- Transactional support, including one-phase commit or two-phase commit.
- OCCURS clauses within segment data. For more information, see Handling Arrays.
- Primary key access support to IMS/DB data.

Environmental Prerequisites
Environmental prerequisites vary according to the specific data source:

- IMS-DLI Prerequisites
- IMS-DBCTL Prerequisites
- IMS-DBDC Prerequisites

IMS-DLI Prerequisites
There are no specific environmental prerequisites for IMS-DLI.

IMS-DBCTL Prerequisites
AIS uses EXCI to interface with CICS. EXCI requires some setup:

- IRC must be open. Use CEMT I IRC from the CICS screen to check your IRC status. If it is in the closed state, set it to open.
- A specific connection must be set up. Use CEMT I connection to get the list of available connections. Note that you can only use specific connections that have a VTAM netname associated with them. The default available on most systems is BATCHCLI. Attunity provides a JCL for defining an Attunity connection; see the CICS CONF member in the USERLIB.
- An EXCI mirror transaction ID must be available. The default on most systems is transaction ID EXCI. You can use CEMT I TRA PROG(DFHMIRS) to get the list of EXCI transaction IDs available on your system.
- The PSBs to be accessed through AIS must be available in the DBCTL environment, that is, the AIS CICS program must be able to do EXEC DLI SCHEDULE PSB.

IMS-DBDC Prerequisites

- OTMA must be installed and running. Execute the command /DISP OTMA to verify that this is the case.

Figure 45-1 OTMA Up and Running

- An XCF group and member must be defined for IMS.
- The OTMA C/I must be installed. Note that for OTMA C/I, the OTMAINIT job must be run following an IPL. See Appendix C of the OTMA Guide and Reference online at http://tinyurl.com/lrpn.

Limitations
When accessing IMS data, the following limitations apply:

- General IMS Limitations
- Limitations Specific to IMS/DLI
- Limitations Specific to IMS-DBCTL
- Limitations Specific to IMS/DBDC

General IMS Limitations


The following limitations apply to all IMS data sources:

- Only databases that have a unique key for every element in a hierarchy, except for end-segments, are supported. Support for end-segments without a unique key is limited to read-only access, with no array (OCCURS clause) support.
- DDL is not supported. CREATE TABLE operations are not supported.
- Logical databases are supported. Logical children are not supported.
- Secondary indexes are not supported.
- Segments whose key is partitioned into several fields in the COBOL layout are supported by exposing both the COBOL-level fields and an additional field that overlays these fields and spans the entire key. The following limitations apply: only queries that refer to the overlaid field in the WHERE clause develop an efficient execution strategy (see the sketch after this list), and for such a key, only alphanumeric field types are supported.
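For illustration, assume a hypothetical segment table MYSEG whose key is split into two COBOL-level fields, KEY_PART1 and KEY_PART2, overlaid by a spanning field MYSEG_KEY. All names here are invented for the sketch:

-- refers to the overlaid field: can develop an efficient execution strategy
SELECT * FROM myseg WHERE myseg_key = 'AB123';

-- refers only to the COBOL-level key fields: no efficient strategy
SELECT * FROM myseg WHERE key_part1 = 'AB' AND key_part2 = '123';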


Limitations Specific to IMS/DLI


The following limitations apply to the IMS/DLI data source (batch access) only:

- The IMS/DLI data source must be set up in a separate workspace for every PSB accessed, because the PSB is explicitly coded in the started task JCL.
- Transactional operations, such as COMMIT and ROLLBACK, are not supported. All DML operations are therefore non-transactional.
- UPDATE operations are supported, but it is not recommended to run them over several servers in parallel.

Limitations Specific to IMS-DBCTL


The following limitations apply to the IMS/DBCTL data source only:

- Segments within a non-unique index are not supported on any level.

Limitations Specific to IMS/DBDC


The following limitations apply to the IMS/DBDC data source only:

- Transactional operations, such as COMMIT and ROLLBACK, are not supported. All DML operations are therefore non-transactional.
- Every data source definition can work with a single PSB only. Multiple data sources can be created for multiple PSBs.
- Segments within a non-unique index are not supported at any level.

Functionality
Examples in this section use the Hospital Database Example. This section contains information on the following topics:

- Hierarchical Modelling
- Constructing DLI Commands from SQL Requests

Hierarchical Modelling
The IMS/DB data sources map the hierarchical model of IMS/DB to the relational model in the following manner:

- Every segment is mapped to a table.
- The fields in a table consist of the IMS segment buffer and the IMS key feedback area.
- The index for an IMS table consists of the key feedback, that is, the entire path leading to the specific segment.

The Hospital Database Example includes a simple hierarchy HOSPITAL > WARD > PATIENT. The following figures show the relational model of this three-level hierarchy in AIS.


Figure 45-2 HOSPITAL in Relational Model

Figure 45-3 WARD in Relational Model

Figure 45-4 PATIENT in Relational Model
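For illustration, with this mapping the PATIENT segment is queried like an ordinary relational table, and the inherited keys of its ancestors appear as regular columns. Column and table names below are taken from the Hospital Database Example:

SELECT patname, dateadmt
FROM patient
WHERE hospname = 'Spalding Rehabilitat'
  AND wardno = '3 ';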

Constructing DLI Commands from SQL Requests


This section contains the following topics:

- Selecting a PCB
- DLI Samples


Selecting a PCB
When accessing any segment, the data source driver first needs to select a PCB to be used for this purpose. The choice of PCB is made according to the metadata. The metadata import includes the PSB as one of the import sources. As a result, each table definition in the AIS data dictionary includes a list of PCB numbers that can be used for every table. For example, in the following figure, PCB0 will be used to access the HOSPITAL database. Note that you can have several PCBs for each table if your PSB includes several PCBs for the same database.
Figure 45-5 PCB Selection

DLI Samples
The Attunity IMS data source employs a small but effective vocabulary of IMS commands and SSA variations to satisfy incoming requests. The following list of examples shows SQL queries and the equivalent IMS commands used.
Table 45-1 SQL Queries with Respective IMS Commands

SQL Query:   select * from patient;
IMS Command: GU/GN HOSPITAL WARD PATIENT

SQL Query:   select * from patient where hospname='Spalding Rehabilitat';
IMS Command: GU/GN HOSPITAL*C(Spalding Rehabilitat) WARD PATIENT

SQL Query:   select * from patient where hospname='Spalding Rehabilitat' and wardno='3 ';
IMS Command: GU/GN HOSPITAL WARD*C(Spalding Rehabilitat3 ) PATIENT

SQL Query:   select * from patient where hospname='Spalding Rehabilitat' and wardno='3 ' and bedident='020';
IMS Command: GU/GN HOSPITAL WARD PATIENT*C(Spalding Rehabilitat3 020 )

SQL Query:   select * from patient where hospname>'Spalding Rehabilitat';
IMS Command: GU/GN HOSPITAL(HOSPNAME>=Spalding Rehabilitat) WARD PATIENT

SQL Query:   select * from patient where hospname='Spalding Rehabilitat' and wardno>'2 ';
IMS Command: GU/GN HOSPITAL*C(Spalding Rehabilitat) WARD(WARDNO>=2 ) PATIENT

SQL Query:   select * from patient where hospname='Spalding Rehabilitat' and wardno='3 ' and bedident>='020';
IMS Command: GU/GN HOSPITAL WARD*C(Spalding Rehabilitat3 ) PATIENT(BEDIDENT>=020 )

Configuration Properties
The properties configured for the IMS/DB data source vary according to the specific type of data source in use.

- IMS/DB DLI Configuration Properties
- IMS/DB DBCTL Configuration Properties
- IMS/DB DBDC Configuration Properties

For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

IMS/DB DLI Configuration Properties


The following properties can be configured for the IMS/DB DLI data source in the Properties tab of the Configuration Properties screen:

- disableExplicitSelect: When set to true, this parameter disables the explicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.
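For example (a sketch; the table name is taken from the Hospital Database Example), with disableExplicitSelect set to true the following statement returns every field of the segment, including any fields marked with the explicitSelect attribute:

SELECT * FROM patient;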

IMS/DB DBCTL Configuration Properties


The following properties can be configured for the IMS/DB DBCTL data source in the Properties tab of the Configuration Properties screen (see the sketch after this list):

- cicsProgname: This parameter specifies the ATYDBCTL program that is supplied with AIS to enable updating the IMS data source. To use the ATYDBCTL program, copy it from NAVROOT.LOAD to a CICS DFHRPL library (such as CICS.USER.LOAD), where NAVROOT is the high-level qualifier where AIS is installed, and then define the ATYDBCTL program under CICS using any available group, such as the ATY group. After defining the ATYDBCTL program to a group, install it as follows: CEDA IN G(ATY). For the full procedure, see Accessing IMS/DB Data under CICS.
- cicsTraceQueue: This parameter specifies the name of the queue for output that is defined under CICS when tracing the output of the ATYDBCTL program. When not defined, the default CICS queue is used.
- disableExplicitSelect: When set to true, this parameter disables the explicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.
- exciTransid: This parameter specifies the CICS TRANSID. This value must be EXCI or a copy of this transaction.
- psbName: The PSB Name in the connect string; this parameter specifies the PSB that contains details of all the IMS/DB databases that you want to access.
- targetSystemApplid: The Target System in the connect string; this parameter specifies the VTAM APPLID of the CICS target system. The default value is CICS. You can determine this value by activating the CEMT transaction on the target CICS system: the legend APPLID=target_system appears in the bottom right corner of the screen.
- vtamNetname: The VTAM NetName in the connect string; this parameter specifies the connection used by EXCI (and MRO) to relay the program call to the CICS target system. The default value is ATYCLIEN.
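As an illustration only, these properties typically end up in the data source definition in the binding. The XML below is a hypothetical sketch: the element and attribute names may differ from what Attunity Studio actually writes, so verify them against your binding (see Adding Data Sources):

<!-- hypothetical sketch; verify names against the definition created by Attunity Studio -->
<datasource name="ims_dbctl" type="IMS-DBCTL"
            connect="psbName=HOSPPSB;targetSystemApplid=CICS;vtamNetname=ATYCLIEN">
  <!-- exciTransid must be EXCI or a copy of that transaction -->
  <config exciTransid="EXCI" disableExplicitSelect="false"/>
</datasource>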

IMS/DB DBDC Configuration Properties


The following properties can be configured for the IMS/DB DBDC data source in the Properties tab of the Configuration Properties screen:

- disableExplicitSelect: When set to true, this parameter disables the explicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.
- imsTransname: This parameter specifies the name of the IMS transaction that points to the program used to access the PSB through which the IMS/DB data is accessed. The default name of the transaction is ATYIMSTM.
- maxSessions: This parameter specifies the maximum number of sessions allowed. The default value is 5.
- racfGroupId: This parameter specifies the security facility group identification (for example, the RACF group identification).
- racfUserId: This parameter specifies the security resource user name.
- tpipePrefix: The TPipe prefix in the connect string; this parameter is used to associate the transaction with the transaction pipe it is using. The default is ATTU.
- xcfClient: This parameter specifies the client name for the Cross System Coupling Facility to which the connection belongs.
- xcfGroup: The XCF group in the connect string; this parameter specifies the Cross System Coupling Facility collection of XCF members to which the connection belongs. A group may consist of up to eight characters, and may span multiple systems.
- xcfImsMember: This parameter specifies the Cross System Coupling Facility group member for IMS.
- xcfServer: The XCF server in the connect string; this parameter specifies the Cross System Coupling Facility group member for the server.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Transaction Support
The IMS-DBCTL data source supports transaction processing. The IMS/DB DLI and IMS/DB DBDC data sources do not support transaction processing.


Using Attunity Connect with One-phase Commit


The IMS-DBCTL data source can be set up as a one-phase commit data source. As such, CICS programs activated within the context of a transaction are activated with no SYNCONRETURN option in the EXCI DPL request. When the transaction is committed, ATRCMIT is called to trigger a sync point. Note the following points:

1. RRS must be configured and running on your system in order to use 1PC.
2. When working with 1PC, it is important to correctly configure the timeout of your EXCI mirror transaction. The DTIMEOUT parameter in the CEDA transaction definition must exceed the maximum expected transaction duration.
3. The default EXCI transaction is usually configured with a DTIMEOUT of 10 seconds, which may be too short (see the sketch after this list).
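For example, you might inspect and then raise the timeout from a CICS terminal roughly as follows. This is a sketch only: in CICS RDO the attribute is usually spelled DTIMOUT, the value format and the group name depend on your installation, and you should verify both before use:

CEMT I TRANSACTION(EXCI)
CEDA ALTER TRANSACTION(EXCI) GROUP(yourgroup) DTIMOUT(500)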

Hospital Database Example[1]

The Hospital database is referenced in various parts of this section. The full source files for the Hospital database are provided below.

[1] Kapp, Dan and Leben, Joe: IMS Programming Techniques. Van Nostrand Reinhold Company Inc., New York, 1986.

Example 45-1 Hospital COBOL Copybook

01  HOSPITAL.
    03 HOSPNAME           PIC X(20).
    03 HOSP-ADDRESS       PIC X(30).
    03 HOSP-PHONE         PIC X(10).
    03 ADMIN              PIC X(20).
01  WARD.
    03 WARDNO             PIC XX.
    03 TOT-ROOMS          PIC XXX.
    03 TOT-BEDS           PIC XXX.
    03 BEDAVAIL           PIC XXX.
    03 WARDTYPE           PIC X(20).
01  PATIENT.
    03 PATNAME            PIC X(20).
    03 PATADDRESS         PIC X(30).
    03 PAT-PHONE          PIC X(10).
    03 BEDIDENT           PIC X(4).
    03 DATEADMT           PIC X(6).
    03 PREV-STAY-FLAG     PIC X.
    03 PREV-HOSP          PIC X(20).
    03 PREV-DATE          PIC X(4).
    03 PREV-REASON        PIC X(30).
01  SYMPTOM.
    03 DIAGNOSE           PIC X(20).
    03 SYMPDATE           PIC X(6).
    03 PREV-TREAT-FLAG    PIC X.
    03 TREAT-DESC         PIC X(20).
    03 SYMP-DOCTOR        PIC X(20).
    03 SYMP-DOCT-PHONE    PIC X(10).
01  TREATMNT.
    03 TRTYPE             PIC X(20).
    03 TRDATE             PIC X(6).
    03 MEDICATION-TYPE    PIC X(20).
    03 DIET-COMMENT       PIC X(30).
    03 SURGERY-FLAG       PIC X.
    03 SURGERY-DATE       PIC X(6).
    03 SURGERY-COMMENT    PIC X(30).
01  DOCTOR.
    03 DOCTNAME           PIC X(20).
    03 DOCT-ADDRESS       PIC X(30).
    03 DOCT-PHONE         PIC X(10).
    03 SPECIALT           PIC X(20).
01  FACILITY.
    03 FACTYPE            PIC X(20).
    03 TOT-FACIL          PIC XXX.
    03 FACAVAIL           PIC XXX.

Example 45-2 Hospital DBD

        PRINT NOGEN
        DBD    NAME=HOSPDBD,ACCESS=HDAM,RMNAME=(DFSHDC40,40,100)
        DATASET DD1=PRIME,DEVICE=3390
        SEGM   NAME=HOSPITAL,PARENT=0,BYTES=80
        FIELD  NAME=(HOSPNAME,SEQ,U),BYTES=20,START=1,TYPE=C
        FIELD  NAME=ADMIN,BYTES=20,START=61,TYPE=C
        SEGM   NAME=WARD,PARENT=HOSPITAL,BYTES=31
        FIELD  NAME=(WARDNO,SEQ,U),BYTES=2,START=1,TYPE=C
        FIELD  NAME=BEDAVAIL,BYTES=3,START=9,TYPE=C
        FIELD  NAME=WARDTYPE,BYTES=20,START=12,TYPE=C
        SEGM   NAME=PATIENT,PARENT=WARD,BYTES=125
        FIELD  NAME=(BEDIDENT,SEQ,U),BYTES=4,START=61,TYPE=C
        FIELD  NAME=PATNAME,BYTES=20,START=1,TYPE=C
        FIELD  NAME=DATEADMT,BYTES=6,START=65,TYPE=C
        SEGM   NAME=SYMPTOM,PARENT=PATIENT,BYTES=77
        FIELD  NAME=(SYMPDATE,SEQ),BYTES=6,START=21,TYPE=C
        FIELD  NAME=DIAGNOSE,BYTES=20,START=1,TYPE=C
        SEGM   NAME=TREATMNT,PARENT=PATIENT,BYTES=113
        FIELD  NAME=(TRDATE,SEQ),BYTES=6,START=21,TYPE=C
        FIELD  NAME=TRTYPE,BYTES=20,START=1,TYPE=C
        SEGM   NAME=DOCTOR,PARENT=PATIENT,BYTES=80
        FIELD  NAME=DOCTNAME,BYTES=20,START=1,TYPE=C
        FIELD  NAME=SPECIALT,BYTES=20,START=61,TYPE=C
        SEGM   NAME=FACILITY,PARENT=HOSPITAL,BYTES=26
        FIELD  NAME=FACTYPE,BYTES=20,START=1,TYPE=C
        FIELD  NAME=FACAVAIL,BYTES=3,START=24,TYPE=C
        DBDGEN
        FINISH
        END

Example 45-3 Hospital PSB

        PRINT NOGEN
        PCB    TYPE=DB,DBDNAME=HOSPDBD,PROCOPT=AP,KEYLEN=32
*
        SENSEG NAME=HOSPITAL,PARENT=0
        SENSEG NAME=WARD,PARENT=HOSPITAL
        SENSEG NAME=PATIENT,PARENT=WARD
        SENSEG NAME=SYMPTOM,PARENT=PATIENT
        SENSEG NAME=TREATMNT,PARENT=PATIENT
        SENSEG NAME=DOCTOR,PARENT=PATIENT
        SENSEG NAME=FACILITY,PARENT=HOSPITAL
*
        PSBGEN LANG=COBOL,PSBNAME=HOSPPSB
        END

Defining the IMS/DB DLI Data Source


The IMS/DB DLI data source connects directly to the IMS/DB data. However, the IMS/DB DLI data source exclusively locks the database when it writes to IMS/DB. Therefore, observe the following:

- If you are writing to IMS/DB under CICS, use the IMS-DBCTL data source.
- If you are writing to IMS/DB under IMS/TM, use the IMS-DBDC data source.
- Change the allocation size of the IEFREDR data set, depending on server usage. If the data set is used too often, a dump results.

The process of defining an IMS/DB DLI data source consists of the following tasks:

- Defining the IMS/DB DLI Data Source Connection
- Configuring the IMS/DB DLI Data Source
- Setting Up the Daemon Workspace

Defining the IMS/DB DLI Data Source Connection


The IMS/DB DLI data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection:
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your IMS/DB DLI data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the IMS/DB DLI data source.
6. Right-click the Data sources folder and select New data source. The New Data Source screen is displayed.
7. Enter a name for the data source in the Name field.
8. Select IMS-DLI from the Type list.
9. Click Next. The Data source connect string screen is displayed. No connection information is required.
10. Click Finish.

See also: Adding Data Sources.

Configuring the IMS/DB DLI Data Source


After defining the connection, you set the data source properties.

To configure the IMS-DLI data source:
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the IMS/DB DLI data source and select Open. The Configuration Properties screen is displayed.

Figure 45-6 IMS/DB DLI Configuration Properties

7. Configure the data source parameters as required. For a description of the available parameters, see IMS/DB DLI Configuration Properties.

Setting Up the Daemon Workspace


To access IMS/DB, you need to configure a daemon workspace to run the IMS server.

Either use the default NVIMSSRV workspace, supplied as part of the AIS installation, or, in a workspace that you define, set the server type to IMS and the startup script to NVIMSSRV.XY. The NVIMSSRV workspace has the same settings as the normal Navigator default workspace, except that it uses the IMS server.

Notes:
- The suffix for the startup script enables instances of the server process for the workspace. Any suffix can be used, and Attunity Connect automatically extends the suffix for each instance.
- The remote machine specification in the binding setting for the IMS/DB data source on the client must include the workspace as part of the <remoteMachine> statement.
- The IMS server does not work with subtasks. Therefore, you must set the number of subtasks in the NsubTasks parameter to zero (see the sketch after these notes).
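A workspace definition of this kind might look roughly as follows in the daemon configuration. The XML attribute names below are illustrative assumptions, not verbatim syntax; only the values (the IMS server type, the NVIMSSRV.XY startup script, and zero subtasks) come from the text above:

<!-- illustrative sketch; attribute names are assumptions -->
<workspace name="NVIMSSRV" serverType="IMS"
           startupScript="NVIMSSRV.XY" NsubTasks="0"/>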

Defining the IMS/DB DBCTL Data Source


The IMS/DB DBCTL data source accesses IMS/DB data under CICS. If you are accessing IMS/DB data directly, use the IMS/DB DLI data source. If you are accessing IMS/DB data under IMS/TM, use the IMS-DBDC data source.

The process of defining an IMS/DB DBCTL data source consists of the following tasks:

- Defining the IMS/DB DBCTL Data Source Connection
- Configuring the IMS/DB DBCTL Data Source
- Accessing IMS/DB Data under CICS

Defining the IMS/DB DBCTL Data Source Connection


The IMS/DB DBCTL data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection:
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your IMS/DB DBCTL data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the IMS/DB DBCTL data source.
6. Right-click the Data sources folder and select New Data Source. The New Data Source screen is displayed.
7. Enter a name for the data source in the Name field.
8. Select IMS-DBCTL from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the IMS-DBCTL connect string as follows:
    - PSB Name: Specify the name of the PSB file that contains details of all the IMS/DB databases that you want to access.
    - Target System: Specify the VTAM APPLID of the CICS target system. The default value is CICS. You can determine this value by activating the CEMT transaction on the target CICS system: the legend APPLID=target_system appears in the bottom right corner of the screen.
    - VTAM NetName: The VTAM netname of the specific connection being used by EXCI (and MRO) to relay the program call to the CICS target system. For example, if you issue the command CEMT INQ CONN to CEMT, the display screen shows that the netname is BATCHCLI (this is the default connection supplied by IBM upon the installation of CICS). The default value is ATYCLIEN.
11. Click Finish.

See also: Adding Data Sources.

Configuring the IMS/DB DBCTL Data Source


After defining the connection, you can set the data source properties.

To configure the IMS/DB DBCTL data source:
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your IMS/DB DBCTL data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the IMS/DB DBCTL data source and select Open. The Configuration Properties screen is displayed.

Figure 45-7 IMS/DB DBCTL Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connect string in Defining the IMS/DB DBCTL Data Source Connection.
8. Configure the data source parameters as required. For a description of the available parameters, see IMS/DB DBCTL Configuration Properties.

Accessing IMS/DB Data under CICS


The IMS/DB DBCTL data source accesses IMS/DB data under CICS.

To access IMS/DB data under CICS:
1. Copy the ATYDBCTL member from NAVROOT.LOAD to a CICS DFHRPL library (such as CICS.USER.LOAD), where NAVROOT is the high-level qualifier where AIS is installed, and then define the ATYDBCTL program under CICS using any available group, such as the ATY group:

   CEDA DEF PROG(ATYDBCTL) G(ATY) LANG(C) DA(ANY) DE(ATY IMSDB CICS PROGRAM)

2. After assigning the ATYDBCTL program to a group, install it as follows:

   CEDA IN G(ATY)

3. Under CICS, run the CDBC transaction and select the first option (Connection). Provide the startup table suffix and DBCTL ID override value.


Defining the IMS/DB DBDC Data Source


The IMS/DB DBDC data source accesses IMS/DB data under IMS/TM. If you are accessing IMS/DB data directly, use the IMS/DB DLI data source. If you are accessing IMS/DB data under CICS, use the IMS/DB DBCTL data source. The process of defining an IMS/DB DBDC data source consists of the following tasks:

Defining the IMS/DB DBDC Data Source Connection Configuring the IMS/DB DBDC Data Source Accessing IMS/DB Data under IMS/TM

Defining the IMS/DB DBDC Data Source Connection


The IMS/DB DBDC data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection:
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your IMS/DB DBDC data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the IMS/DB DBDC data source.
6. Right-click the Data sources folder and select New Data Source. The New Data Source screen is displayed.
7. Enter a name for the data source in the Name field.
8. Select IMS-DBDC from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the IMS-DBDC connect string as follows:
    - XCF Group: Specify the Cross System Coupling Facility collection of XCF members to which the connection belongs. A group may consist of up to eight characters, and may span multiple systems.
    - XCF Server: Specify the Cross System Coupling Facility group member.
    - TPipe Prefix: Specify the transaction pipe prefix used to associate the transaction with the transaction pipe it is using. The default value is ATTU.
    - User Name: Specify the security facility user identification (for example, the RACF user identification).
    - Group Name: Specify the security facility group identification (for example, the RACF group identification).
11. Click Finish.

See also: Adding Data Sources.

Configuring the IMS/DB DBDC Data Source


After defining the connection, you can set the data source properties.

To configure the IMS/DB DBDC data source:
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your IMS/DB DBDC data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the data source and select Open. The Configuration Properties screen is displayed.

Figure 45-8 IMS/DB DBDC Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connect string in Defining the IMS/DB DBDC Data Source Connection.
8. Configure the data source parameters as required. For a description of the available parameters, see IMS/DB DBDC Configuration Properties.

Accessing IMS/DB Data under IMS/TM


The IMS/DB DBDC data source accesses IMS/DB data under IMS/TM.

To access IMS/DB data under IMS/TM:
1. Copy the ATYDBDC program from NAVROOT.LOAD to an IMS/TM program library (such as IMS.PGMLIB) with the name of the PSB used to access the IMS/DB data.
2. Define a transaction to point to the program, using statements similar to the following:

   APPLCTN PSB=ATYDBDC,SCHDTYP=PARALLEL
   TRANSACT CODE=ATYIMSTM,PRTY=(7,10,2),INQUIRY=NO,MODE=SNGL,EDIT=ULC

   The default transaction name is ATYIMSTM. If you use a different transaction, the transaction name must be eight characters or fewer, and you must specify this value in the imsTransname data source property in the binding.
3. Set up OTMA, as described in the Attunity Server Installation Guide for z/OS.

Setting Up IMS/DB Metadata


The IMS/DB data sources require Attunity metadata. Setting up the metadata in Attunity Connect to enable working with IMS/DB is the same for all the IMS/DB data sources. The mapping of IMS/DB metadata to Attunity metadata format follows these rules (see the illustrative sketch after this list):

- Segments are defined as tables within <table> elements.
- Tables inherit the key fields of their ancestors.
- Tables have indexes for their full hierarchical paths.
- Additional information is defined in dbCommand attributes specified within the <table> and <field> elements.
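As an illustrative sketch only, a WARD segment exposed as a table might be described along these lines. The <table> and <field> element names and the dbCommand attribute come from the rules above; every other attribute and value shown is a hypothetical placeholder:

<table name="WARD" dbCommand="...">
  <fields>
    <!-- hypothetical: inherited key field of the parent HOSPITAL segment -->
    <field name="HOSPNAME" datatype="string" size="20"/>
    <field name="WARDNO" datatype="string" size="2" dbCommand="..."/>
  </fields>
</table>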

If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective, Metadata tab. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first six columns are ignored or not), first import the metadata from the copybooks with the same settings, and then import the metadata from the other copybooks. If no COBOL copybooks exist that describe the IMS/DB records, you must manually define the metadata. For more information, see Managing Data Source Metadata.

The import uses the following input files:

- COBOL copybooks: These copybooks are copied to the machine running Attunity Studio as part of the import procedure.
- DBD files: These files are copied to the machine running Attunity Studio as part of the import procedure.
- PSB file: This file is copied to the machine running Attunity Studio as part of the import procedure. This step is optional.

The metadata import procedure has the following steps:

- Selecting the Input Files
- Applying Filters
- Selecting Tables
- Matching DBD to COBOL
- Import Manipulation
- Metadata Model Selection
- Import the Metadata


Selecting the Input Files


This section describes the steps required to select the input files that will be used to generate the metadata. The IMS/DB data source requires two types of files, DBD files and COBOL copybooks. In addition, a PSB file may also be necessary. See Setting Up IMS/DB Metadata for an explanation of these files.

Perform the following steps to enter the input files:
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, right-click the data source and select Show Metadata View. The Metadata tab is displayed with the data source displayed in the Metadata view.
3. Right-click Imports under the data source and select New Import.
4. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
5. Select the import type. There is only one choice available, depending on the type of IMS/DB data source you are using.
6. Click Finish. The Metadata Import Wizard is displayed.
7. Click Add in the Import Wizard to add DBD files. The Add Resource screen is displayed, providing the option of selecting files from the local machine or copying the files from another machine. The following figure shows the Add Resource screen.

Figure 45-9 Add Resource Screen

8. If the files are on another machine, right-click My FTP Sites and select Add.
9. Set the FTP data connection by entering the server name where the DBD files reside and, if not using anonymous access, a valid username and password to access the machine.
10. To browse and transfer files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
11. Select the files to import and click Finish to start the transfer.
12. Repeat the procedure for the COBOL copybooks.

    The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In such a case, repeat the import process. You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks.
13. Click Add in the Import Wizard to add a PSB file, if necessary. The selected files are displayed in the Get Input Files screen, as shown in the figure below.

Figure 45-10 Get Input Files Screen

14. Click Next to go to the Applying Filters step.

Applying Filters
This section describes the steps required to apply filters to the COBOL copybook files used to generate the metadata. It continues the Selecting the Input Files step.

Perform the following steps to apply filters:
1. Click Next. The Apply Filters step is displayed in the editor, as shown in the following figure.

Figure 45-11 Apply Filters Screen

2. Apply filters to the copybooks, as needed.

   The following COBOL filters are available:
   - COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
   - Compiler source: The compiler vendor.
   - Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
   - Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
   - Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
   - Prefix nested column: Prefix all nested columns with the previous level heading.
   - Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
   - Case sensitive: Specifies whether case sensitivity is considered.
   - Find: Searches for the specified value.
   - Replace with: Replaces the value specified in the Find field with the value specified here.

   The following DBD filters are available:
   - Ignore after column 72: Ignore columns 73 to 80 in the DBD files.
   - Ignore first 6 columns: Ignore the first six columns in the DBD files.
   - Ignore labels: Ignore labels in the DBD files.

   The following PSB filters are available:
   - Ignore after column 72: Ignore columns 73 to 80 in the PSB file.
   - Ignore first 6 columns: Ignore the first six columns in the PSB file.

3. Click Next to go to the Selecting Tables step.

Selecting Tables
This section describes the steps required to select the tables from the COBOL copybooks. The following procedure continues the Applying Filters procedure.

Perform these steps to select the tables:
1. From the Select Tables screen, select the tables that you want to access. To select all tables, click Select All. To clear all the selected tables, click Unselect All. The Select Tables screen is shown in the following figure:

Figure 45-12 Select Tables Screen

   The import manager identifies the names of the segments in the DBD files that will be imported as tables.
2. Click Next (the Import Manipulation screen opens) to continue to the Matching DBD to COBOL step.

Matching DBD to COBOL


This step lets you match the DBD file to your COBOL copybook. It is a continuation of the Selecting Tables step. The following figure shows the Match DBD to COBOL step as displayed in the editor.

Figure 45-13 Match DBD to COBOL Screen

1. Match each table selected from the DBD file with the COBOL copybook that contains the relevant table structure. Select the files and tables from the dropdown lists for each DBD entry.
2. Click Next (the Import Manipulation screen opens) to continue to the Import Manipulation step.

Import Manipulation
This section describes the operations available for manipulating the imported records (tables). It continues the Matching DBD to COBOL step. The import manager identifies the names of the records in the input files that will be imported as tables. You can manipulate the general table data in the Import Manipulation screen.

Perform the following steps to manipulate the table metadata:
1. From the Import Manipulation screen (see the Import Manipulation Screen figure), right-click a table record marked with a validation error, and select the relevant operation. See the Table Manipulation Options table for the available operations.
2. Repeat step 1 for all table records marked with a validation error. You resolve the issues in the Import Manipulation screen. Once all the validation error issues have been resolved, the Import Manipulation screen is displayed with no error indicators.
3. Click Next to continue to the Metadata Model Selection step.

Import Manipulation Screen


The Import Manipulation screen is shown in the following figure:


Figure 45-14 Import Manipulation Screen

The upper area of the screen lists the input files and their validation status. The metadata source and location are also listed. The Validation tab in the lower area of the screen displays information about what needs to be resolved in order to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location).

The following operations are available in the Import Manipulation screen:

- Resolving table names, where tables with the same name are generated from different files during the import.
- Selecting the physical location for the data.
- Selecting table attributes.
- Manipulating the fields generated from the COBOL, as follows:
  - Merging sequential fields into one (for simple fields).
  - Resolving variants by either marking a selector field or specifying that only one case of the variant is relevant.
  - Adding, deleting, hiding, or renaming fields.
  - Changing a data type.
  - Setting the field size and scale.
  - Changing the order of the fields.
  - Setting a field as nullable.
  - Selecting a counter field for fields with dimensions (arrays). You can select the array counter field from a list of potential fields.
  - Setting column-wise normalization for fields with dimensions (arrays). You can create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
  - Creating arrays and setting the array dimension.

The following table lists and describes the available operations when you right-click a table entry:
Table 45-2 Table Manipulation Options

Fields Manipulation: Customizes the field definitions, using the Field Manipulation screen. You can also access this screen by double-clicking the required table record.

Rename: Renames a table. This option is used especially when more than one table with the same name is generated from the COBOL.

Set data location: Sets the physical location of the data file for the table.

Set table attributes: Sets the table attributes.

XSL manipulation: Specifies an XSL transformation or JDOM document that is used to transform the table definitions.

Remove: Removes the table record.

You can manipulate the data in the table fields in the Field Manipulation screen. Double-click a line in the Import Manipulation screen to open the Field Manipulation screen.

Field Manipulation Screen


The Field Manipulation screen lets you make changes to fields in a selected table. You reach the Field Manipulation screen through the Import Manipulation screen. The Field Manipulation screen is shown in the following figure.

Figure 45-15 Field Manipulation Screen

You can carry out all of the available tasks in this screen through the menu or toolbar. You can also right-click anywhere in the screen and select any of the options available in the main menus from a shortcut menu. The following table describes the tasks that are done in this screen.

Table 45-3 Field Manipulation Screen Commands

General menu:

Undo: Click to undo the last change made in the Field Manipulation screen.

Select fixed offset: The offset of a field is usually calculated dynamically by the server at runtime according to the offset and size of the preceding column. Select this option to override this calculation and specify a fixed offset at design time. This can happen if there is a part of the buffer that you want to skip. When you select a fixed offset, you pin the offset for that column. The indicated value is used at runtime for the column instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.

Test import tables: Select this to create an SQL statement to test the import table. You can base the statement on the Full table or Selected columns. When you select this option, a screen opens with an SQL statement based on the table or columns entered at the bottom of the screen. Enter the following in this screen:
- Data file name: Enter the name of the file that contains the data you want to query.
- Limit query results: Select this if you want to limit the results to a specified number of rows. Enter the number of rows you want returned in the following field. 100 is the default value.
- Define Where Clause: Click Add to select a column to use in a Where clause. In the table below it, you can add the operator, value, and other information. Click the columns to make the selections. To remove a Where clause, select the row with the Where clause you want to remove and then click Remove.

The resulting SQL statement, with any Where clauses that you added, is displayed at the bottom of the screen. Click OK to send the query and test the table.

Attribute menu:

Change data type: Select Change data type from the Attribute menu to activate the Type column, or click the Type column and select a new data type from the drop-down list.


Create array: This command allows you to add an array dimension to the field. Select this command to open the Create Array screen. Enter a number in the Array Dimension field and click OK to create the array for the column.

Hide/Reveal field: Select a row from the Field Manipulation screen and select Hide field to hide the selected field from that row. If the field is hidden, you can select Reveal field.

Set dimension: Select this to change or set a dimension for a field that has an array. Select Set dimension to open the Set Dimension screen. Edit the entry in the Array Dimension field and click OK to set the dimension for the selected array.

Set field attribute: Select a row to set or edit the attributes for the field in the row. Select Set field attribute to open the Field Attribute screen. Click the Value column for any of the properties listed and enter a new value or select a value from a drop-down list.

Nullable/Not nullable: Select Nullable to activate the Nullable column in the Field Manipulation screen. You can also click in the column. Select the check box to make the field nullable; clear the check box to make the field not nullable.

Set scale: Select this to activate the Scale column, or click in the column and enter the number of places to display after the decimal point in a data type.

Set size: Select this to activate the Size column, or click in the column and enter the total number of characters for a data type.

Field menu:

Add: Select this command or use the button to add a field to the table. If you select a row with a field (not a child of a field), you can add a child to that field. Select Add Field or Add Child to open a screen where you enter the name of the field or child; click OK to add the field or child to the table.

Delete field: Select a row and then select Delete Field, or click the Delete Field button, to delete the field in the selected row.

Move up or down: Select a row and use the arrows to move it up or down in the list.

Rename field: Select Rename field to make the Name field active. Change the name and then click outside of the field.

Structures menu:

Columnwise Normalization: Select Columnwise Normalization to create new fields instead of the array field, where the number of generated fields is determined by the array dimension.

Combining sequential fields: Select Combining sequential fields to combine two or more sequential fields into one simple field. A dialog box opens; enter the following information in the Combining sequential fields screen:
- First field name: Select the first field in the table to include in the combined field.
- End field name: Select the last field to be included in the combined field. Make sure that the fields are sequential.
- Enter field name: Enter a name for the new combined field.

Flatten group: Select Flatten Group to flatten a field that is an array. This field must be defined as Group for its data type. When you flatten an array field, the entries in the array are spread into a new table, with each entry in its own field. A screen provides the following options for flattening:
- Select Recursive operation to repeat the flattening process on all levels. For example, if there are multiple child fields in this group, you can place the values for each field into the new table when you select this option.
- Select Use parent name as prefix to use the name of the parent field as a prefix when creating the new fields. For example, if the parent field is called Car Details and you have a child in the array called Color, a new field created in the flattening operation is called Car Details_Color.

Mark selector: Select Mark selector to select the selector field for a variant. This is available only for variant data types. Select the selector field from the screen that opens.

Replace variant: Select Replace variant to replace a variant's selector field.

Select counter field: Select Counter Field opens a screen where you select a field that is the counter for an array dimension.
Metadata Model Selection


This section lets you generate virtual and sequential views for imported tables containing arrays, and configure the properties of the generated views. It continues the Import Manipulation procedure. This allows you to flatten tables that contain arrays. In the Metadata Model Selection step, you can configure values that apply to all tables in the import, or set specific settings for each table.

To configure the metadata model, select one of the following:

- Default values for all tables: Select this if you want to configure the same values for all the tables in the import. Make the following selections when using this option:
  - Generate sequential view: Select this to map non-relational files to a single table.
  - Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
  - Include row number column: Select one of the following:
    - true: Select true to include a column that specifies the row number in the virtual or sequential view. This applies to this table only, even if the data source is not configured to include the row number column.
    - false: Select false to exclude the column that specifies the row number from the virtual or sequential view for this table, even if the data source is configured to include the row number column.
    - default: Select default to use the default data source behavior for this parameter. For information on how to configure this parameter for the data source, see Configuring Data Source Advanced Properties.
  - Inherit all parent columns: Select one of the following:
    - true: Select true for virtual views to include all the columns in the parent record. This applies to this table only, even if the data source is not configured to include all of the parent record columns.
    - false: Select false so that virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
    - default: Select default to use the default data source behavior for this parameter. For information on how to configure this parameter for the data source, see Configuring Data Source Advanced Properties.
- Specific virtual array view settings per table: Select this to set different values for each table in the import. This overrides the data source default for that table. Make the selections in the table under this selection. See the item above for an explanation.

The Metadata Model Selection screen is shown in the following figure:

Figure 45-16 The Metadata Model Selection Screen

Import the Metadata


This section describes the steps required to import the metadata to the target computer. It continues the Metadata Model Selection step. You can import the metadata to the computer where the data source is located now, or import it later (in case the target computer is not available).

Perform the following steps to transfer the metadata:
1. Select Yes to transfer the metadata to the target computer immediately, or No to transfer the metadata later.
2. Click Finish.

The Import Metadata screen is shown in the following figure:

Figure 45-17 The Import Metadata Screen

46
Informix Data Source
This section describes the Attunity Informix data source driver. It includes the following topics:

- Overview
- Functionality
- SQL Capability
- Configuration Properties
- Metadata
- Transaction Support
- Security
- Data Types
- Defining an Informix Data Source
- Testing the Informix Data Source

Overview
Attunity supports two Informix-type data sources, Informix 7.x and Informix 9.x. Both provide similar functionality and are bundled into two separate loadable libraries, which are loaded on demand.

Functionality
The Informix data source driver provides a wide range of common functionality that complies with the relational database model, as follows:

- Supports common RDBMS capabilities, including transaction management, logging, security, recovery, ordinary scalar data types, large data objects, locking, triggers, and stored procedures.
- Implements connectivity to an Informix database instance by means of embedded SQL techniques, through which it interacts with the database server for database access and data manipulation.
- Serves as a mediating layer between a consumer of database resources within Attunity AIS and an Informix database instance.
- Complies with appropriate database application design outlines (as a database process).
- As a runtime component, is subject to the deployment considerations that involve the available machines, I/O device characteristics, network throughput, and traffic.

Supported Versions and Platforms


For information on supported Informix versions, see Attunity Integration Suite Supported Systems and Resources.

Note: The connection to an Informix data source is implemented by an Informix client. If you need a direct connection from this machine, the Informix client must be installed locally. Otherwise, a connection to Informix data sources can be defined using remote computers where the Informix client is already installed.

SQL Capability
Attunity Connect SQL syntax and semantics are enhanced with extended functionality and post-relational capabilities. Attunity Connect delegates supported SQL to the Informix RDBMS engine for processing, assuming standard SQL is supported. Extended capabilities and unsupported functionality are processed internally by Attunity Connect. The following table lists and describes the Informix RDBMS SQL elements supported by Attunity Connect:

Table 46-1 Supported Informix SQL Elements

nvDB_NO_DISTINCT_HAVING: DB does not support the DISTINCT keyword in AGG HAVING expressions.
nvDB_GROUP_EXPR_BY_NUM: Supports GROUP BY <num> syntax, when num refers to an expression.
nvDB_NO_EXPR_IN_INSERT: Does not support expressions in INSERT: insert into values (10+10).
~nvDB_NO_LOJ_SUPPORT: LOJ is not supported.
nvDB_NO_LOJ_ID_IN_FILTER: IDs from the right LOJ side table can't participate in a filter expression.
nvDB_RS_LOJ_ONLY: Only one table can be on the right side of LOJ (A LOJ B).
nvDB_NO_NESTED_LOJ: DB does not support LOJ in father and subquery.
nvDB_NO_LOJ_ON_CONDITION: For LOJ that does not support the ON clause (ORACLE: =(+)).
nvDB_SUPPORT_2PHASE_COMMIT: DB supports distributed transactions.
nvDB_SET_SUPP_READ_UNCOMMITETED: DB supports read uncommitted transactions.
nvDB_SET_SUPP_REPEATABLE_READ: DB supports repeatable read transactions.
nvDB_SET_SUPP_SERIALIZABLE: DB supports serializable transactions.
nvDB_USE_QUOTED_OWNER: Quotes the owner name when creating the table name in SQL.

Table 46-1 (Cont.) Supported Informix SQL Elements

nvDB_USE_COL_PREFIX_IN_UPD_: Prefixes each column in UPDATE/DELETE statements with the full table name.
nvDB_SUPPORTS_UNION_: DB supports UNION.
nvDB_SUPPORTS_UNION_ALL_: DB supports UNION ALL.
nvDB_ORDER_BY_NUM_IN_UNION_: Supports only ORDER BY <num> syntax in a UNION SQL, when num is a column number.
nvDB_SUPPORTS_FOR_UPDATE_: DB supports FOR UPDATE.
nvDB_SUPPORTS_UPDATE_OF_: DB supports FOR UPDATE OF.
nvDB_NO_ORDER_FOR_UPDATE_: Does not support FOR UPDATE with a SELECT statement that includes ORDER BY.
nvDB_ONE_TAB_FOR_UPDATE_: Does not support FOR UPDATE with a statement that selects data from more than one table.

The following table lists the Informix data type textual representation, as expressed in its SQL:
Table 46-2 Informix Data Types and Textual Representation

Data Type              Informix SQL Textual Representation
DT_B_                  SMALLINT
DT_Q_                  INT
DT_F_                  DOUBLE PRECISION
DT_OPAQUE_             BYTE
DT_IMAGE_              Not Supported.
DT_ODBC_DATE_          DATETIME YEAR TO SECOND
DT_ODBC_TIME_          Not Supported.
DT_ODBC_TIMESTAMP_     Not Supported.

The following table lists the main Informix SQL functionalities and their symbolic syntax:
Table 46-3 Informix SQL Functions and Symbolic Syntax

Functional Notation          Symbolic Syntax
YACC_POSITION_               Not Supported
YACC_SUBSTR2_                Not Supported
YACC_SUBSTR3_                Not Supported
YACC_LTRIM_                  TRIM(LEADING " " FROM ~)
YACC_RTRIM_                  TRIM(TRAILING " " FROM ~)
YACC_LOWER_                  Not Supported
YACC_UPPER_                  Not Supported
YACC_MOD_                    MOD(~,~)
YACC_NVL_                    Not Supported
YACC_COS_                    COS(~)
YACC_SIN_                    SIN(~)
YACC_TAN_                    TAN(~)
YACC_ACOS_                   ACOS(~)
YACC_ASIN_                   ASIN(~)
YACC_ATAN_                   ATAN(~)
YACC_LOG10_                  LOG10(~)
YACC_LN_                     LOGN(~)
YACC_EXP_                    EXP(~)
YACC_POWER_                  POW(~,~)
YACC_ROUND_                  ROUND(~,~)
YACC_TRUNC_                  TRUNC(~,~)
YACC_FLOOR_                  TRUNC(~,0)
YACC_DATE_CONST_             DATE("1/2/0")
YACC_TIMESTAMP_CONST_        DATETIME(~-~-~ ~:~:~.~) YEAR TO FRACTION
YACC_CURRENT_DATE_           TODAY
YACC_CURRENT_TIMESTAMP_      CURRENT YEAR TO SECOND
YACC_DAYOFWEEK_              (WEEKDAY(~)+1)
Stored Procedures
The Attunity Informix data source driver supports Informix stored procedures. To retrieve output parameters and the return code from a stored procedure, use the ?=CALL syntax. In addition, the Attunity Informix data source driver is capable of handling a row set as output.
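For example, using the ODBC call escape syntax, a call against a hypothetical procedure my_proc with one input parameter might look as follows; the first parameter marker receives the return code, and the binding details depend on your client API (a sketch):

{ ? = CALL my_proc(?) }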

Limitations
The following are known restrictions when using Informix stored procedures:

- The LIKE keyword, which is supported by Informix SQL syntax, cannot be used when defining parameters. Thus, procedures that include the LIKE keyword in the parameter definitions are not supported, and errors can be anticipated. For example, the following stored procedure returns an error because the LIKE keyword is used:
CREATE PROCEDURE "inf92".nation_sp1( n_nkeyp like "inf92".nation.n_nationkey )
RETURNING int;
  define n_nkey int;
  foreach
    SELECT n_nationkey INTO n_nkey FROM "inf92".nation
    WHERE n_nationkey > n_nkeyp
    return n_nkey with resume;
  end foreach;
END PROCEDURE;


- Informix Stored Procedure Language (SPL) enforces certain limitations on SQL statements applicable within the stored procedure itself. For details, check with your specific Informix vendor.
- Informix identifier length is limited to 18 characters (inclusive). Therefore, procedure names must be 18 characters long or less.

Informix CLOB/BLOBs
The Informix driver provides support for the built-in, predefined opaque data types CLOB/BLOB. This is implemented by relying on the CLOCATORTYPE type, associated with a DESCRIBE-d data type at the ESQL/C programming interface.

Using Passthru Queries


Attunity SQL provides support for delegating SQL statements to be processed directly by the back-end RDBMS engine. This is useful in cases where high levels of accuracy are required in controlling the processed statements. The following example (using NAV_UTIL) shows how to create a table with explicit control of the Informix LOCK MODE clause, using passthru:

NavSQL> text = {{CREATE TABLE students (name VARCHAR(30), average REAL,
        birthdate DATETIME YEAR TO DAY) lock mode row}};
Executing: text = {{CREATE TABLE students (name VARCHAR(30),...
OK
0 rows affected

Configuration Properties
The following properties can be configured for the Informix data source. You set the properties in the Attunity Studio Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

- disableExplicitSelect: When set to true, this parameter disables the explicitSelect attribute; every field is returned by a SELECT * FROM... statement.
- isolationLevel: The isolation level to be applied to an Informix transaction.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Metadata
Informix metadata definitions reside in dedicated system tables, which are accessed by the Attunity Informix data source driver using standard SQL queries. The Informix systables table holds information regarding tables, and sysprocedures holds information regarding procedures. Similarly, sysindexes holds information regarding table indexes. The following example is a typical metadata query issued by the driver for table listing:

select tabname, owner, tabtype
from informix.systables
where tabid >= 100 and tabname like 'xxx'
order by tabname;

Specific table descriptions or record details, such as fields and types, are sampled by querying the table and using a DESCRIBE statement, per standard ESQL/C techniques, as in the following example:

set $sql_str to "SELECT * FROM MYtable"
$ PREPARE record_query FROM $sql_str;
$ DESCRIBE record_query INTO rec_sqlda;

Statistics
The Attunity Informix data source driver collects available statistics held in an Informix database instance. Statistics are used for developing and evaluating execution strategies in the query optimization stages. Statistics elements of interest include the number of rows in a table, the number of blocks occupied, index properties, and structural information. The following query provides an example of the way in which statistics are retrieved:
select t.nrows, t.npused from informix.systables t where t.tabname = ? and t.owner = ?;

Owner Support
In the Informix database, each table, view, index, procedure, and synonym is owned by a specific owner. Database object ownership is a key issue in maintaining privacy and privilege management. Usually, the owner of a database object is the person who created it, yet users with the DBA privilege can create objects to be owned by others. The Attunity Informix data source driver assumes ownership on an "AS IS" basis. It recognizes OWNER as an acceptable syntactical token wherever the underlying SQL syntax allows it, and accepts the ownership semantics as implemented by the Informix RDBMS.

Transaction Support
The Attunity Informix data source driver for Informix version 7.x supports the one-phase commit transaction protocol. The driver for Informix version 9.x supports two-phase commit transactions and can fully participate in a distributed transaction. Transaction logging must be enabled in the Informix database; otherwise, transactions are not supported by the driver (which is then always in auto-commit mode). You use Informix with its two-phase commit capability through an XA connection. The daemon server mode must be configured to single-client mode. To use distributed transactions with Informix, use DBAccess to change the locking level of tables to row-level locking (see the example below). In addition, from an ODBC-based application, ensure that AUTOCOMMIT is set to 0. An Informix client on a PC cannot use MTS as the transaction manager.
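For example, row-level locking can be set per table with standard Informix SQL from DBAccess; the table name below is taken from the passthru example above:

ALTER TABLE students LOCK MODE (ROW);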

Locking Levels
The Informix database supports a multi-user environment by using a locking mechanism to manage consistent, concurrent access to data. The Informix database management system provides run-time and design-time syntax for controlling the locking level. The Attunity Informix data source driver has no explicit control over the locking policy; it functions under the existing database settings. The locking mechanism includes four locking levels, which are applied to data items as follows:

Page: A page is a physical amount of data that Informix works with at any one time. Under this locking mode, a page is locked.
Row: A row is a logical record. Under this locking mode, the logical record is locked as an atomic unit.
Table: Under this locking mode, the table as a whole is locked as a complete unit.
Database: Under this locking mode, the entire database is locked.
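The locking level of an existing table can be changed with Informix's native syntax, for example through a NAV_UTIL passthru statement like the CREATE TABLE example at the start of this chapter. A minimal sketch, assuming the students table from that example:

NavSQL> text = {{ALTER TABLE students LOCK MODE (ROW)}};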

Isolation Levels
The Attunity Informix data source driver supports the following isolation levels, each controlled by an appropriate configuration setting. The isolation level is used only within a transaction. The terminology used in each configuration setting is based on the common descriptive keywords used by Attunity Connect. The following table summarizes the isolation level semantics applied by the Attunity Informix data source driver:
Table 46-4 Isolation Level Semantics

The AIS configuration setting is listed first, with the corresponding Informix transaction isolation level in parentheses:

readUncommitted ("Dirty Read"): Provides no isolation from locks. All locks are ignored by the reader. The reader process can view data freely, regardless of locks applied by others.
readCommitted ("Committed Read"): The default setting. Ensures that the reader only returns values that are committed to the database. Incompatible locks held by others cause the reader to wait or fail.
repeatableRead ("Cursor Stability"): Ensures that while stepping along a cursor, the current row is not changed by others. This is achieved by placing a shared lock on the current row, which prevents others from placing a "modifying" exclusive lock on that row. This shared lock is released as the cursor moves to the next row.
serializable ("Repeatable Read"): Ensures that once a cursor has been read, all consecutive reads pass back the same values. This is implemented by applying a locking policy similar to that of "Cursor Stability", except that the locks are not released.


Security
The Attunity Informix data source driver is not actively involved in applying or enforcing any security policy. It complies with the security policy and rules set in the Informix database instance with which it interacts.

Data Types
This section lists the Informix data types and how Attunity maps these data types to OLE DB and ODBC data types. The following table lists Informix data types and how they are mapped to Attunity data types:
Table 46-5 Informix Data Types

Informix     AIS                      Comments
BYTE         DT_TYPE_OPAQUE_          BLOB property is set.
CHAR         DT_TYPE_T_               Character-coded text; a single character or a string.
DATE         DT_TYPE_INF_DATE_        Informix date type implementation.
DATETIME     DT_TYPE_INF_DATETIME_    Informix date-time implementation.
DECIMAL      DT_TYPE_INF_DECIMAL_     Informix decimal type implementation.
FLOAT        DT_TYPE_G_               G_floating; 64-bit double-precision floating point.
INTEGER      DT_TYPE_L_               Long word integer; 32-bit signed 2s-complement integer.
INTERVAL     DT_TYPE_CSTRING_         A null-terminated string.
MONEY        DT_TYPE_INF_MONEY_       Informix money type implementation.
NCHAR        Not supported
NVARCHAR     Not supported
SERIAL       DT_TYPE_L_               Long word integer; 32-bit signed 2s-complement integer.
SMALLFLOAT   DT_TYPE_F_               F_floating; 32-bit single-precision floating point.
SMALLINT     DT_TYPE_W_               Word integer; 16-bit 2s-complement integer.
TEXT         DT_TYPE_T_               BLOB property is set.
VARCHAR      DT_TYPE_CSTRING_         A null-terminated string.

The following table shows how Attunity Connect maps Informix data types to ODBC and OLE DB data types.
Table 46-6 Informix Data Types Mapping to OLE DB and ODBC

Informix                         OLE DB               ODBC
BYTE                             DBTYPE_BYTES         SQL_LONGVARBINARY
Char(m<256), Character(m<256)    DBTYPE_STR           SQL_CHAR
Char(m>255), Character(m>255)    DBTYPE_STR           SQL_LONGVARCHAR (1)
Date                             DBTYPE_DBTIMESTAMP   SQL_TIMESTAMP
Datetime                         DBTYPE_DBTIMESTAMP   SQL_TIMESTAMP
Dec, Decimal                     DBTYPE_NUMERIC       SQL_NUMERIC
Double                           DBTYPE_R8            SQL_DOUBLE
Float                            DBTYPE_R8            SQL_DOUBLE
Int, Integer                     DBTYPE_I4            SQL_INTEGER
Interval                         DBTYPE_STR           SQL_CHAR
Money                            DBTYPE_NUMERIC       SQL_NUMERIC
Nchar                            Not supported
Numeric                          DBTYPE_NUMERIC       SQL_NUMERIC
Nvarchar                         Not supported
Precision                        DBTYPE_R8            SQL_DOUBLE
Real                             DBTYPE_R8            SQL_REAL
Serial                           DBTYPE_I4            SQL_INTEGER
Smallfloat                       DBTYPE_R4            SQL_REAL
Smallint                         DBTYPE_I2            SQL_SMALLINT
Text                             DBTYPE_STR           SQL_LONGVARCHAR (2)
Varchar(m<256)                   DBTYPE_STR           SQL_CHAR
Varchar(m>256)                   DBTYPE_STR           SQL_LONGVARCHAR (3)

Notes:
1. IS_LONG attribute is TRUE.
2. IS_LONG attribute is TRUE.
3. Precision of 2147483647. If the <odbc longVarCharLenAsBlob> parameter is set to true in the Server environment settings, then precision of m.

See also ADD Supported Data Types.

Defining an Informix Data Source


The process of defining an Informix data source consists of two tasks:

Defining the Informix Data Source Connection
Configuring the Informix Data Source Properties

Defining the Informix Data Source Connection


The Informix data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Informix data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Informix data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source screen is displayed.
7. In the Name field, enter a name for the new data source.
8. Select Informix from the Type list.
9. Click Next. The Data Source Connect String page is displayed.

10. Enter the connect string:
Database name: Enter the Informix database instance identification.

11. Click Finish.

Configuring the Informix Data Source Properties


After defining the connection, you set the data source properties.

To configure the data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Informix data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the Informix data source and select Open. The Configuration editor is displayed.

Figure 46-1 Informix Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Informix Data Source Connection.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:
User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
User name: Enter the name of the user with access to this data source.
Password: Enter the password for the user with access to this data source.
Confirm Password: Enter the password again, to ensure it was entered correctly.
9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.

Testing the Informix Data Source


You can perform the following tests on the Informix data source:

Connection test: This tests the physical connection to the data source.
Query test: This test runs an SQL SELECT query against the data source.

These tests are described in the following sections:

To test the connection to the Informix data source
1. Open Attunity Studio.


2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Informix data source.
4. Expand the binding with the Informix data source.
5. Expand the Data sources folder.
6. Right-click the required Informix data source, and select Test. The Test Wizard screen opens.
7. Select Navigator from the Active Workspace Name list, and click Next. The system now tests the connection to the data source, and returns the test result status.
8. Click Finish to exit the Test wizard.

To test the Informix data source by query
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Informix data source.
4. Expand the binding with the Informix data source.
5. Expand the Data sources folder.
6. Right-click the required Informix data source, and select Query Tool. The Select Workspace screen opens.
7. Select Navigator and click OK. The Query Tool opens in the Editor pane, with the Build Query tab displayed (see step 10).
8. Select the required query type from the Query Type list. The default is a SELECT-type query.
9. Locate and expand your Informix data source. The Informix data source tables are listed.

10. Drag the required table to the Table column, as shown in the following figure:


Figure 46-2 The Query Tool screen

11. Click Execute query.

The Query Result tab opens, displaying the results of the query.
12. Close the Query Tool in the Editor pane.

Sample Log File Explained


Log files are used for troubleshooting and error handling. The log file is generated when the driverTrace debug binding parameter is set to True. The log file includes various information concerning the functions used or called by the driver, queries executed, data sources accessed, etc. The following provides a sample log file output:
Attunity Server Log (V4.8.1.0, DEC-UNIX) Started at 2005-12-03T13:38:53
Licensed by ATTUNITY LTD. on 09-AUG-2000 (001001237)
Licensed to ATTUNITY for <all providers> on 194.90.22.* (<all platforms>)
binding.c (351): ; [B001] Binding to a datasource of type 'ADD-XML' cannot be performed
binding.c (351): ; [B001] Binding to a datasource of type 'mf' cannot be performed
binding.c (351): ; [B001] Binding to a datasource of type 'MEMORY_GDB' cannot be performed
nvOUT (./qp_sqtxt.c 56): select * from nation limit to 3 rows
nvRETURN (./qpsynon.c 1140): -1
SELECT T0000.n_nationkey AS c000, T0000.n_name AS c001, T0000.n_regionkey AS c002, T0000.n_comment AS c003 FROM 'inf9'.nation T0000

<<<<<<<<<<<<<<<<<<< Execution Strategy Begin <<<<<<<<<<<<<<<<<<<<<<<<<<<<

Original SQL: select * from nation limit to 3 rows


Accessing Database 'informix' with SQL: SELECT T0000.n_nationkey AS c000, T0000.n_name AS c001, T0000.n_regionkey AS c002, T0000.n_comment AS c003 FROM 'inf9'.nation T0000

>>>>>>>>>>>>>>>>>>>> Execution Strategy End >>>>>>>>>>>>>>>>>>>>>>>>>>>>
nvOUT (./qpsqlcsh.c 140): ---------------------------> Using Cached QSpec
SELECT T0000.n_nationkey AS c000, T0000.n_name AS c001, T0000.n_regionkey AS c002, T0000.n_comment AS c003 FROM 'inf9'.nation T0000

nvRETURN (./drviunwn.c 804): -1210
(Last message occurred 2 times)
Disabled FilePool Cleanup(DB=___sys, FilePool Size=0)
FilePool Shutdown(DB=___SYS, FilePool Size=0)
Closing log file at SAT DEC 3 13:39:12 2005


47
Ingres II (Open Ingres) Data Source
This section contains the following topics:

Supported Versions and Platforms
Functionality
Configuration Properties
Transaction Support
Data Types
Platform-Specific Information
Defining the Ingres II Data Source

Supported Versions and Platforms


For information on supported Ingres versions, see Attunity Integration Suite Supported Systems and Resources.

Functionality
This section describes the following aspects of Ingres II functionality:

Stored Procedures
Isolation Levels and Locking

Stored Procedures
The Ingres II Data Source supports Ingres II Stored Procedures. To retrieve output parameters and the return code from the stored procedure, use the ? = CALL syntax, described in the CALL Statement.
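For example, a minimal sketch, where the procedure name and parameter are hypothetical:

-- the leading ? receives the return code; the ? inside the parentheses binds an output parameter
? = CALL get_employee_count(?);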

Isolation Levels and Locking


The Ingres II data source supports the following isolation levels:

Dynamic isolation
Read uncommitted
Read committed
Repeatable read
Serializable read

Ingres II supports page-level locking. Updates are performed with the no wait flag; if one of the records is locked, the update operation fails. This behavior cannot be changed. Once a record is locked, all other update operations fail. In accordance with ANSI standards, if the user or application specifies the read-uncommitted isolation level, by default Ingres II grants read-only access. If the user or application specifies a different isolation level, by default Ingres II grants read-write permission.
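Ingres's native session-level locking behavior is controlled with SET LOCKMODE. For reference only, a hedged sketch of such a statement (the values shown are illustrative); the driver itself does not issue it:

SET LOCKMODE SESSION WHERE LEVEL = PAGE, READLOCK = NOLOCK, TIMEOUT = 10;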

Update Semantics
For tables with no bookmark or other unique index, the data source returns a combination of most (or all) of the columns of the row as a bookmark. The data source does not guarantee the uniqueness of this bookmark; you must ensure that the combination of columns is unique.

BLOBs
The Ingres II data source driver provides support for the built-in predefined opaque data types known as Segmented String/List Of Byte Varying. Both READ and WRITE operations are supported. BLOBs are addressed as ordinary fields; they are handled by dedicated cursors that step along their data. To handle a BLOB field successfully, the table must have a genuine unique key defined.
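A minimal sketch of defining such a key, where the table and column names are hypothetical:

-- a genuine unique key lets the dedicated BLOB cursors address individual rows
CREATE UNIQUE INDEX documents_key ON documents(doc_id);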

Passthru Queries
The SQL capabilities implemented internally by AIS include a means of delegating SQL statements as-is to the back-end engine. When using this technique, known as Passthru, SQL statements are passed as a whole and processed directly by the database engine, much as an ordinary interactive SQL utility would process them. Passthru processing is especially advantageous when you want explicit control over the processed statement in order to use proprietary features. For example, refined, database-specific nuances of a CREATE INDEX statement can be introduced using this technique. The following example demonstrates how to create an index with explicit control over specified attributes, with Passthru Using NAV_UTIL Utility.
NV_EMPLOY is defined as follows:

NavSQL> desc nv_employ;
Connecting ....
-----------------------------------------------------------------
Table NV_EMPLOY
There are 5 fields in the table:
#  Name         Datatype  Size  Width  Scl  Nullable
-----------------------------------------------------------------------
0  EMPLOYEE_ID  string    5     5      0    no
1  LAST_NAME    string    14    14     0    yes
2  CITY         string    20    20     0    yes
3  ROWID        string    18    18     0    yes
(2 indexes specified)
Index 1: length is 5, Unique
name is NV_EMPLOY_ID
segments are: EMPLOYEE_ID
Index 2: length is 18, Unique, Hashed


name is ROWID
segments are: ROWID
<END OF TABLE DESCRIPTION>
NavSQL>

Assume that LAST_NAME is a commonly referenced key that warrants an index. For usability, however, you want a case-blind index, to simplify formulating and processing queries where case sensitivity is of no importance. The Ingres II data source provides the following syntax for doing this:
CREATE INDEX lname_case_blind ON NV_EMPLOY(UPPER(LAST_NAME));

This syntax would take the following format when Using NAV_UTIL Utility:
NavSQL> text = {{CREATE INDEX lname_case_blind ON NV_EMPLOY(UPPER(LAST_NAME))}};
Executing: text = {{CREATE INDEX lname_case_blind ON NV_EMPLOY(UPPER(LAST_NAME))}}
OK
0 rows affected

This index would typically improve delegated queries like the following:
SELECT * FROM NV_EMPLOY WHERE UPPER(LAST_NAME) = 'SMITH';

NavSQL> select * from nv_employ;
EMPLOYEE_ID  LAST_NAME   CITY
00164        Toliver     Chocorua
00165        Smith       Chocorua
00166        Dietrich    Boscawen
00167        Kilpatrick  Marlow
00168        Nash        Meadows
00169        Gray        Etna
00170        Wood        Jefferson
7 rows returned

NavSQL> SELECT * FROM NV_EMPLOY WHERE UPPER(LAST_NAME) = 'SMITH';
EMPLOYEE_ID  LAST_NAME  CITY
00165        Smith      Chocorua
1 rows returned
NavSQL>

Recall that the SQL statement is passed through on an as-is basis, with no interpretation or any other intervention of any kind. Consequently, it should be phrased according to the lexical notations and syntax rules expected by the back-end Ingres II SQL engine.

Configuration Properties
The following properties can be configured for the Ingres II data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

isolationLevel: Specifies the default isolation level for the Data Source, as follows:


dynamicIsolation: The isolation level is not set at the data source level. Setting this parameter to true allows the application to set this parameter dynamically as needed.
readUncommitted: Specifies that corrupt data is not to be read. This is the lowest isolation level.
readCommitted: Specifies that only the data committed before the query began is displayed.
repeatableRead: Specifies that data used in a query is locked and cannot be used by another query nor updated by another Transaction.
serializable: Specifies that the data is isolated serially. Treats data as if transactions are executed sequentially.
Note: If the specified level is not supported by the data source, then Attunity Connect defaults to the next highest level.

lockWait: Specifies how many seconds a transaction waits before timing out when it encounters a locked row, as follows:
-1: Sets the transaction to wait indefinitely (the default).
0: Sets the transaction to wait the minimum amount of time possible.
n (>0): Specifies how many seconds the transaction waits.

openIngresConnect: Specifies the name of the virtual node used by the Ingres client to access a remote networked Ingres II server, and the name of the database, in the format vnode::database_name. You can specify a logical name (an environment variable on UNIX) instead of the database name if the logical database is distributed among several physical databases. The Ingres data source translates the logical name before binding. For example, a logical name (ALL_SITES) can be defined to use as the database name for a logical database distributed among two physical databases (BOSTON_DB and PARIS_DB), as follows:

For OpenVMS DCL: define ALL_SITES BOSTON_DB,PARIS_DB
For UNIX C-shell: setenv ALL_SITES BOSTON_DB,PARIS_DB

readLockMode: Specifies the lock mode as either read only or writable.
timezone: Sets the time (in hours) on the client to be the same as the time on the server, when the two times are different. For example, if the client time is 13:00 and the server time is 9:00, set <Properties timezone=4 />. A negative number sets the number of hours ahead of the client.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Transaction Support
The Ingres II data source supports Two-phase Commit and can fully participate in a distributed Transaction when the transaction environment property convertAllToDistributed is set to true.


You use Ingres II with its two-phase commit capability through an XA connection. The daemon server mode must be configured to Single Client mode (see Server Mode). To use distributed transactions with Ingres II from an ODBC-based application, ensure that AUTOCOMMIT is set to 0.

Data Types
The following table shows how Attunity Connect maps Ingres II data types to OLE DB and ODBC data types:
Table 47-1 Ingres II Data Types

Ingres              OLE DB               ODBC
Byte                DBTYPE_BYTES         SQL_LONGVARBINARY
Char (m<256), C     DBTYPE_STR           SQL_CHAR
Char (m>255), C     DBTYPE_STR           SQL_LONGVARCHAR (1)
Date                DBTYPE_DBTIMESTAMP   SQL_TIMESTAMP
Float, Float8       DBTYPE_R8            SQL_DOUBLE
Float4              DBTYPE_R4            SQL_REAL
Integer, Integer4   DBTYPE_I4            SQL_INTEGER
Integer1            DBTYPE_I1            SQL_TINYINT
Integer2            DBTYPE_I2            SQL_SMALLINT
Money               DBTYPE_R8            SQL_DOUBLE
Long Byte           Not supported
Long Varchar        Not supported
SmallInt            DBTYPE_I2            SQL_SMALLINT
Text                DBTYPE_STR           SQL_VARCHAR
Varchar (m<256)     DBTYPE_STR           SQL_CHAR
Varchar (m>255)     DBTYPE_STR           SQL_LONGVARCHAR (1)

Note:
1. Precision of 2147483647. If the <odbc longVarcharLenAsBlob> parameter is set to true in the AIS environment settings, then precision of m.

This table shows how Attunity Connect maps data types in a CREATE TABLE statement to Ingres II data types.
Table 47-2 CREATE TABLE Data Types

CREATE TABLE        Ingres
Binary              Byte
Char[(m)]           Char[(m)]
Date                Date
Double              Float
Float               Real
Image               Long Byte
Integer             Integer
Numeric [(p[,s])]   Float
Smallint            Smallint
Text                Long Varchar
Time                Date
Timestamp           Date
Tinyint             Integer1
Varchar(m)          Varchar(m)
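For example, a minimal sketch, with hypothetical table and column names, of how a CREATE TABLE issued through AIS maps to Ingres II types:

-- timestamp maps to the Ingres Date type; varchar(30) remains Varchar(30)
CREATE TABLE orders (order_id integer, placed_at timestamp, remark varchar(30));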

See also ADD Supported Data Types.

Platform-Specific Information
Checking Ingres Environment Variables on UNIX
Check that the II_SYSTEM environment variable is correctly set and that the Ingres database is readable by Attunity Connect. For example, set the following in the nav_login or site_nav_login file on a UNIX machine:

setenv II_SYSTEM /ingsw
setenv PATH $II_SYSTEM/ingres/utility:$II_SYSTEM/ingres/bin:$PATH
setenv PATH /usr/openwin/bin:$PATH
setenv LD_LIBRARY_PATH $II_SYSTEM/ingres/lib:$LD_LIBRARY_PATH

Make sure that the Ingres user has enough privileges; otherwise the data source returns the following error: II_SS01007_PRIV_NOT_GRANTED. Use the Ingres accessDB utility to add or modify users so that they have the correct privileges.

Defining the Ingres II Data Source


The process of defining an Ingres Data Source consists of two tasks:

Defining the Ingres II Data Source Connection
Configuring the Ingres II Data Source Properties

Defining the Ingres II Data Source Connection


The Ingres II data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Ingres data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Ingres data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source screen is displayed.
7. In the Name field, enter a name for the new data source.
8. Select Ingres from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter a connect string in the Connect string field. The connect string should contain the following information:
vnode::database_name
Where:
vnode: Specify the name of the virtual node used by the Ingres II client to access a remote networked Ingres II server. You can retrieve a list of the available nodes on the machine by running the Ingres net_util utility. If you specify only the database name, omitting the vnode name, the data source binds to the specified local database.
database_name: Specify the name of the database. You can specify a logical name (an environment variable on UNIX) instead of the database name if the logical database is distributed among several physical databases. The Ingres data source translates the logical name before binding. For example, a logical name (ALL_SITES) can be defined to use as the database name for a logical database distributed among two physical databases (BOSTON_DB and PARIS_DB), as follows:
OpenVMS DCL: define ALL_SITES BOSTON_DB,PARIS_DB
UNIX C-shell: setenv ALL_SITES BOSTON_DB,PARIS_DB
If you want to connect to a particular Ingres class-server instance that is already defined, specify the following in the Database name field:
database/Ingres_instance
Note: To access Ingres II on 64-bit operating systems (HP-UX 11 and higher, AIX 4.4 and higher, and Sun Solaris 2.8 and higher), the 32-bit data source client must be used.

11. Click Finish.

Configuring the Ingres II Data Source Properties


After defining the connection, you set the data source properties.

To configure the Ingres data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Ingres data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the Ingres data source and select Open. The Configuration editor is displayed.

Figure 47-1 Ingres Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Ingres II Data Source Connection.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:
User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
User name: Enter the name of the user with access to this data source.
Password: Enter the password for the user with access to this data source.
Confirm Password: Enter the password again, to ensure it was entered correctly.
9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.
After setting the binding, you must define Attunity Metadata describing the Ingres data.


48
ODBC Data Source
This chapter contains the following sections:

Overview
Functionality
SQL Capabilities
Configuration Properties
Metadata
Transaction Support
Security
Data Types
Platform-Specific Information
Defining the ODBC Data Source
Testing the ODBC Data Source

Overview
The ODBC Data Source provides a wide range of common standard relational functions that comply with the Relational Data Source model. The ODBC Data Source is a generic Driver for data providers that have an SQL processing capability and expose the ODBC API. Some capabilities are actively implemented as data source driver functions, while others are implied by the methods and techniques the data source driver uses for interacting with the ODBC Backend Database.

Supported Versions and Platforms


The ODBC data source driver can be used on all platforms where Attunity Connect servers can run. For information on supported ODBC versions, see Attunity Integration Suite Supported Systems and Resources.

Supported Features
The ODBC data source driver supports the core of the following common traditional relational capabilities:


Data manipulation (DML)
Data definition (DDL)
Transaction management
Logging
Security
Recovery
Ordinary scalar data types
Large data objects
Locking
Stored procedures

Functionality
This section describes the following aspects of ODBC functionality:

Stored Procedures
Isolation Levels

Stored Procedures
The ODBC data source supports Stored Procedures. You can use a SELECT statement only for a procedure that contains a single SELECT statement. To retrieve output parameters, multiple result sets, and the return code from a stored procedure, use the ? = CALL syntax as described in the CALL Statement.
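For example, a minimal sketch, where the procedure name is hypothetical:

-- the leading ? receives the return code; output parameters and result sets follow
? = CALL list_orders(?);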

Isolation Levels
The ODBC data source supports the following Isolation Levels:

Dynamic isolation
Uncommitted read
Committed read
Repeatable read
Serializable

If the back-end data source does not support a given isolation level, the data source supports only those isolation levels that are supported by the back-end data source. The isolation level is used only within a transaction.

SQL Capabilities
The ODBC data source conforms to ANSI 92 SQL. This standard includes both SQL syntax and semantics. AIS uses its own extended SQL (see Attunity SQL Syntax), endowed with advanced capabilities. In most cases, AIS attempts to delegate SQL portions to the ODBC back-end engine, for processing of those SQL features that are supported there.


Extended and/or unsupported functions are processed internally by AIS. This notion is part of the Attunity Query Processor and Query Optimizer core. Because the ODBC data source is a generic driver, it assumes only the following minimal set of supported SQL elements in the back end:

Basic SQL operators.
Any expressions allowed in GROUP BY clauses.
When a table name is quoted in a CREATE TABLE statement, the owner name is also quoted.

Configuration Properties
The following parameters can be configured for the ODBC Data Source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

disableExplicitSelect: Exposes explicit-select fields. The default setting is false.
disableExtendedFetch=true|false: Specifies whether extended fetch is used by the driver, regardless of whether extended fetch is supported by the Backend Database.
ignoreSqlState: A string property that can include a concatenation of SQLStates to ignore. Because this data source driver is ODBC-generic, some providers behave differently than others, so you may want to ignore certain SQLStates with one provider and others with another. For example: ignoreSqlState=S1104

isolationLevel: The default isolation level for the data source, as follows:
dynamicIsolation: Specifies that the isolation level is not set at the data source level. Setting this parameter to true allows the application to set this parameter dynamically as needed.
readUncommitted: When selected, corrupt data is not read. This is the lowest isolation level.
readCommitted: When selected, only the data committed before the query began is displayed.
repeatableRead: When selected, data used in a query is locked and cannot be used by another query nor updated by another transaction.
serializable: When selected, the data is isolated serially. Treats data as if transactions are executed sequentially.
Note: If the specified level is not supported by the data source, then AIS defaults to the next highest level.

notSupportSqlColumns: Specifies whether the driver uses the SQLColumns ODBC API. The default value is false.
optTrace: When set to true, this boolean field activates tracing by the ODBC provider.


Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Metadata
The ODBC data source uses the standard ODBC API to get metadata information from the back end database.

Statistics
The ODBC data source gathers available statistics held inside the database instance by using the SQLStatistics ODBC API. This information is used later by the run-time components for developing, evaluating, and choosing execution strategies during the query optimization phases.

Transaction Support
The ODBC data source supports one-phase commit if Transactions are supported in the Backend Database. It can participate in a distributed transaction if it is the only one-phase commit Data Source being updated in the transaction.
Note: When the ODBC data source participates in a distributed transaction, the transaction environment properties convertAllToDistributed and useCommitConfirmTable must be set to true.

Security
The ODBC data source driver is not actively involved in applying or enforcing a security policy. It complies with the security policy and rules as set at the database instance with which it interacts. The user name and password are passed to the provider when SQLConnect is called. See Managing a User Profile in Attunity Studio for more information.

Data Types
The following table shows how ODBC data types are mapped to OLE DB data types.
Table 48-1 Mapping ODBC Data Types

ODBC                 OLE DB
SQL_BINARY           DBTYPE_BYTES
SQL_BIT              DBTYPE_I1
SQL_CHAR             DBTYPE_STR
SQL_DATE             DBTYPE_DBDATE
SQL_DOUBLE           DBTYPE_R8
SQL_FLOAT            DBTYPE_R8
SQL_INTEGER          DBTYPE_I4
SQL_LONGVARBINARY    DBTYPE_BYTES
SQL_LONGVARCHAR      DBTYPE_STR
SQL_NCHAR            DBTYPE_STR
SQL_NTEXT            DBTYPE_STR
SQL_NVARCHAR         DBTYPE_STR
SQL_NUMERIC          DBTYPE_STR
SQL_REAL             DBTYPE_R4
SQL_SMALLINT         DBTYPE_I2
SQL_TIME             DBTYPE_DBTIME
SQL_TIMESTAMP        DBTYPE_DBTIMESTAMP
SQL_TINYINT          DBTYPE_I1
SQL_VARBINARY        DBTYPE_BYTES
SQL_VARCHAR          DBTYPE_STR

The following table shows how data types in a CREATE TABLE statement are mapped to ODBC data types.
Table 48-2 CREATE TABLE Data Types

CREATE TABLE      ODBC
Char[(m)]         SQL_CHAR[(m)]
Date              SQL_DATE
Double            SQL_DOUBLE
Float             SQL_DOUBLE
Image             SQL_LONGVARBINARY
Integer           SQL_INTEGER
Numeric           SQL_NUMERIC
Numeric(p,s)      SQL_DOUBLE
Numeric(p[,s])    SQL_DECIMAL
Smallint          SQL_SMALLINT
Text              SQL_LONGVARCHAR
Time              SQL_TIME
Timestamp         SQL_TIMESTAMP
Tinyint           SQL_TINYINT
Varchar(m)        SQL_VARCHAR(m)
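For example, a minimal sketch, with hypothetical table and column names, showing the ODBC type each column maps to:

-- id maps to SQL_INTEGER, note to SQL_LONGVARCHAR, price to SQL_NUMERIC
CREATE TABLE demo (id integer, note text, price numeric);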

See also ADD Supported Data Types.


Platform-Specific Information

When running a Microsoft Jet driver with the ODBC data source, to access BLOBs stored in an MS Access database, use Attunity Studio to set odbc fixAccessBug=true in the environment properties. To insert a BLOB into a table, the table must have a primary key.
When running a Microsoft Jet driver with the ODBC data source, an application accessing a Microsoft Access database might unexpectedly lock. In this case, use Attunity Studio to set queryProcessor noThreads=true in the environment properties (the application will run slower than it does when this parameter is not set).
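For example, a hedged sketch of a table that can accept BLOB inserts; the table and column names are hypothetical, and it assumes the back end accepts an inline PRIMARY KEY clause:

-- the primary key is what makes BLOB inserts possible here
CREATE TABLE docs (doc_id integer PRIMARY KEY, body image);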

Defining the ODBC Data Source


The process of defining an ODBC data source consists of two tasks:

Defining the ODBC Data Source Connection on a Windows Platform or Defining the ODBC Data Source Connection on a non-Windows Platform
Configuring the ODBC Data Source

Defining the ODBC Data Source Connection on a Windows Platform


The ODBC data source connection on a Windows platform is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your ODBC data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the ODBC data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source screen is displayed.
7. In the Name field, enter a name for the new data source.
8. Select ODBC from the Type list.
9. Click Next. The Data Source Connect String wizard is displayed.
10. Enter a connect string in the ODBC Connect String field. The connect string should contain the following information:
ODBC Connect String: Specify the User, System, or File DSN that was previously defined using the Microsoft ODBC Driver Manager. For a User or System DSN, specify the name of the DSN. For a File DSN, specify the following:
filedsn=dsn
where dsn is the name of the File DSN.


If you are connecting to the data source through the driver for that data source, precede the DSN with the name of the driver as it appears in the Registry, followed by a semicolon (;).
Note: Make sure that you specify the name exactly as it appears in the Registry (since the Registry is case sensitive).

11. Click Finish.

Defining the ODBC Data Source Connection on a non-Windows Platform


Access to the ODBC data source on non-Windows platforms depends on the version of ODBC on the non-Windows platform. If you are using ODBC version 2.5, then a driver manager, such as the driver manager provided by Intersolv, is not required.

To define the data source connection for ODBC 2.5 (without a driver manager)
1. Define a file with the following format:
[name]
TYPE=ODBC
SHAREABLE-NAME=odbc_backend
where:
name: Specifies the unique identifier for the ODBC data source. This value is used as the type attribute for a data source defined in the <datasource> section of the binding when you identify a data source to be accessed by this custom ODBC. Therefore, the name of an ODBC data source must be unique.
odbc_backend: Specifies the full path and name of the ODBC back-end sharable on the non-Windows platform.
2. Use NAV_UTIL ADDON to register the ODBC data source definition to Attunity Connect:
nav_util addon define_file
where define_file is the input text file containing the ODBC data source specification. Unless you specify a full path, NAV_UTIL searches for the file in the current working directory.
For z/OS systems, run the following command:
NAVROOT.USERLIB(NAVCMD)
and enter ADDON define_file at the prompt (where NAVROOT is the high-level qualifier specified during installation of AIS Server).
For OS/400 platforms, run the following command:
define_file
NAV_UTIL ADDON is used when developing with the developer SDK (to define a custom data type, data source, or application adapter). For details, see Attunity Developer SDK.
3. Specify a binding entry similar to the following:
<datasource name="PERSONNEL" type="datasource_type" connect="odbc_connect" syntaxName="ODBC" />


where:
datasource_type: The section name specified in the define_file file.
odbc_connect: The connect string for the ODBC back-end data source, as required by the ODBC back end on the non-Windows platform.

To define the data source connection for ODBC 2.5 (with a driver manager)
1. From applications that link to the ODBC driver manager provided by Intersolv, define an entry in the odbc.ini file of Intersolv (for example, /opt/odbc/odbc.ini) as follows:
[DS-NAME]
Driver = odbc_backend
where:
DS-NAME: The name of the data source as defined in the binding.
odbc_backend: The directory where AIS Server is installed.
2. Define a data source connection for ODBC 2.5 without a driver manager. Make sure to do the following:
Define the driver manager as the ODBC back end, that is, use the driver manager sharable as the back-end sharable.
Use the DS-NAME defined in step 1 as the ODBC connect attribute value in the binding's data source definition.

Configuring the ODBC Data Source


After defining the connection, you set the data source properties.

To configure the ODBC data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your ODBC data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the ODBC data source and select Open. The Configuration editor is displayed.

Figure 48-1 ODBC Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the ODBC Data Source Connection on a Windows Platform.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:
User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
User name: Enter the name of the user with access to this data source.
Password: Enter the password for the user with access to this data source.
Confirm Password: Enter the password again, to ensure it was entered correctly.
9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.

Testing the ODBC Data Source


You can perform the following tests on the ODBC data source:

Connection test: This tests the physical connection to the data source.
Query test: This test runs an SQL SELECT query against the data source.

These tests are described in the following procedures:

To test the connection to the ODBC data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your ODBC data source.
4. Expand the binding with the ODBC data source.
5. Expand the Data sources folder.
6. Right-click the required ODBC data source, and select Test. The Test Wizard screen opens.
7. Select the workspace that you want to work against from the Active Workspace Name list, and click Next. The system now tests the connection to the data source, and returns the test result status.
8. Click Finish to exit the Test wizard.

To test the ODBC data source by query
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your ODBC data source.
4. Expand the binding with the ODBC data source.
5. Expand the Data sources folder.
6. Right-click the required ODBC data source entity, and select Query Tool. The Select Workspace screen opens.
7. Select the workspace that you want to work against and click OK. The Query Tool opens in the Editor pane, with the Build Query tab displayed (see step 10).
8. Select the required query type from the Query Type list. The default is a SELECT-type query.
9. Locate and expand your ODBC data source. The ODBC data source tables are listed.

10. Drag the required table to the Table column, as shown in the following figure:


Figure 48-2 The Query Tool screen
The image shows the Query Tool screen when testing the ODBC data source.
11. Click Execute query.

The Query Result tab opens, displaying the results of the query.
12. Close the Query Tool in the Editor pane.

Logging
Log files are used for troubleshooting and error handling. The log file is generated when the driverTrace debug binding parameter is set to True. The log file includes various information concerning the functions used or called by the driver, queries executed, data sources accessed, and so on. First, you need to create the log file.

To create a log file
1. Open Attunity Studio.
2. From the main menu, click Windows, Preferences. The Preferences screen is displayed.

Figure 48-3 Studio Preferences
The image shows the Studio Preferences screen.

3. In the left pane, click Studio.
4. Select the Advanced tab.
5. Select the Show advanced environment parameters check box.
6. Click OK.
7. In the Design perspective, Configuration view, right-click the binding under which the ODBC data source resides, and select Open.
8. Expand the Debug category.
9. Select GDB Trace and General Trace.

10. Refresh the server.


49
OLEDB-FS (Flat File System) Data Source
This section contains the following topics:

Overview
Data Provider Requirements
Functionality
Transaction Support
Data Types
Configuration Properties
Defining the Data Source

Overview
The OLEDB-FS Data Source driver is a generic data source for data providers that do not have SQL processing capabilities but expose OLE DB Index interfaces. The data source is certified against JOLT, a Microsoft OLE DB interface over JET. The current version of this data source does not support OLE objects. It also does not support the Variant data type for columns. Attunity Connect passes any required username and password to the provider when the application calls IDBInitialize::Initialize(). See Managing a User Profile in Attunity Studio.

Supported Versions and Platforms


The OLEDB-FS data source driver can be used with all Microsoft Windows platforms supported by Attunity Connect.

Data Provider Requirements


Since the OLEDB-FS data source is generic, it can connect to a number of different data providers that expose OLE DB interfaces. The current version, however, has specific requirements that every such data provider must meet:

For tables to be updateable, the provider must expose bookmarks.
The provider must expose the following OLE DB interfaces.


This table lists the OLE DB interfaces.


Table 49-1 OLE DB Interfaces

Interface                      Methods
IAccessor                      CreateAccessor, ReleaseAccessor
IColumnsInfo                   GetColumnsInfo (Command and Rowset objects)
IOpenRowset                    OpenRowset
IDBCreateSession               CreateSession
IRowsetChange                  DeleteRows, SetData, InsertRow
IRowsetLocate                  GetRowsByBookmark
IRowsetUpdate                  Update (optional)
IDBInitialize                  Initialize, Uninitialize
IDBSchemaRowset                GetRowset (tables, columns, indexes; optionally also procedures, procedure parameters)
ILockBytes (OLE) (1)           Flush, ReadAt, SetSize, Stat, WriteAt
IRowsetIndex (2)               SetRange
IErrorInfo (3)                 GetDescription, GetSource
IErrorRecords                  GetErrorInfo
IRowset                        GetData, GetNextRows, ReleaseRows, RestartPosition
IStream (OLE) (1)              Read, Seek, SetSize, Stat, Write
ITransactionLocal (optional)   StartTransaction, Commit, Abort
ISupportErrorInfo              InterfaceSupportsErrorInfo
ITableDefinition               CreateTable, DropTable
IDBProperties                  SetProperties

Notes:
1. Required only if BLOBs are used in the OLE DB provider.
2. Required only if indexes are used in the OLE DB provider.
3. The IErrorLookup interface with the GetErrorDescription method can also be used.

Functionality
This section describes the following aspect of OLEDB-FS functionality:

Isolation Levels

Isolation Levels
The OLEDB-FS data source supports the following Isolation Levels:

Dynamic isolation
Uncommitted read
Committed read
Repeatable read
Serializable

If the back-end data source does not support a given isolation level, the data source supports only those isolation levels that are supported by the back-end data source. The isolation level is used only within a transaction.

Transaction Support
The OLEDB-FS data source supports one-phase commit if distributed transactions are supported in the back-end data source. It can participate in a distributed transaction if it is the only one-phase commit data source being updated in the transaction. Both the transaction environment properties convertAllToDistributed and useCommitConfirmTable must be set to true.

Data Types
SQL data types are mapped to OLE DB data types as described in OLE DB (ADO) Client Interface. See also ADD Supported Data Types.

Configuration Properties
The following properties can be configured for the OLEDB-FS data source in the Design perspective, Configuration view of Attunity Studio. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

disableExplicitSelect: This parameter indicates whether or not the Explicit Select option is disabled.
isolationLevel: This parameter specifies the default isolation level for the data source, as follows:
dynamicIsolation: The isolation level is not set at the data source level. Setting this parameter to true allows the application to set this parameter dynamically as needed.
readUncommitted: This value specifies that corrupt data is not read. This is the lowest isolation level.
readCommitted: This value specifies that only the data committed before the query began is displayed.
repeatableRead: This value specifies that data used in a query is locked and cannot be used by another query nor updated by another transaction.
serializable: This value specifies that the data is isolated serially. Data is treated as if transactions are executed sequentially.
Note: If the specified level is not supported by the data source, then Attunity Connect defaults to the next highest level.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Defining the Data Source


The process of defining an OLEDB-FS data source consists of two tasks:

Defining the OLEDB-FS Data Source Connection
Configuring the OLEDB-FS Data Source Properties

Defining the OLEDB-FS Data Source Connection


The OLEDB-FS data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your OLEDB-FS data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the OLEDB-FS data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.
7. In the Name field, enter a name for the new data source.
8. Select OLEDB-FS from the Type list.
9. Enter the connect string information, as follows:
OLEDB Provider: Enter the name of the provider as it appears in the registry (this value is case sensitive).
OLEDB Data Source Name: Enter the name of the data source.
Catalog Name (optional): Enter the name of the catalog.
Or, enter only the:
Data Link File: Specify the full path and name of the UDL file.

10. Click Finish.

Configuring the OLEDB-FS Data Source Properties


After defining the connection, you set the data source properties.

To configure the OLEDB-FS data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your OLEDB-FS data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the OLEDB-FS data source and select Open. The Configuration editor is displayed.

Figure 49-1 OLEDB-FS Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the OLEDB-FS Data Source Connection.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:
User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
User name: Enter the name of the user with access to this data source.
Password: Enter the password for the user with access to this data source.
Confirm Password: Enter the password again, to ensure it was entered correctly.
9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.



50
OLEDB-SQL (Relational) Data Source
This section contains the following topics:

Overview
Data Provider Requirements
Functionality
Transaction Support
Data Types
Configuration Properties
Defining the Data Source

Overview
The OLEDB-SQL Data Source is a generic driver for data providers that have an SQL processing capability and expose OLE DB interfaces. This data source has been tested with Microsoft Kagera (MSDASQL), enabling an OLE DB interface over the Microsoft SQL Server data provider. Attunity Connect passes the username and password to the provider when calling IDBInitialize::Initialize(). For more information, see Managing a User Profile in Attunity Studio. When working with a BRAZOS (with MS Access) database, the OLEDB-SQL data source driver does not support tables or stored procedures that have duplicate column names.

Supported Versions and Platforms


The OLEDB SQL data source driver can be used with all Microsoft Windows platforms supported by Attunity Connect.

Data Provider Requirements


The OLEDB-SQL is a generic data source driver that can connect to data providers that expose OLE DB interfaces. The current version however, has specific requirements that every such data provider must meet:

The provider must be registered with the OLE clsid.
The provider must have an SQL processing capability exposed via the ICommand interface.


Batch UPDATE commands in standard ANSI 92 SQL must be supported.
For tables to be updateable, at least one unique index with non-nullable key fields must be reported by the provider.
The provider must expose the following OLE DB interfaces.

This table lists the OLE DB Interfaces.


Table 50-1 OLE DB Interfaces

Interface                      Methods
IAccessor                      CreateAccessor, ReleaseAccessor
IColumnsInfo                   GetColumnsInfo (Command and Rowset objects)
ICommand                       Execute
ICommandPrepare                Prepare
ICommandProperties             SetProperties
ICommandText                   SetCommandText
ICommandWithParameters         GetParameterInfo
IDBCreateCommand               CreateCommand
IDBCreateSession               CreateSession
IDBInitialize                  Initialize
IDBSchemaRowset                GetRowset (tables, columns, indexes; optionally also procedures, procedure parameters)
IErrorInfo (1)                 GetDescription, GetSource
IErrorRecords                  GetErrorInfo
ILockBytes (OLE) (2)           Flush, ReadAt, SetSize, Stat, WriteAt
IRowset                        GetData, GetNextRows, ReleaseRows, RestartPosition
IStream (OLE) (2)              Read, Seek, SetSize, Stat, Write
ISupportErrorInfo              InterfaceSupportsErrorInfo
ITransactionLocal (optional)   StartTransaction, Commit, Abort

Notes:
1. IErrorLookup can be used with the GetErrorDescription method as well.
2. Required only if BLOBs are used in the OLE DB provider.

Functionality
This section describes the following aspects of OLEDB-SQL data source driver functionality:

Isolation Levels
Stored Procedures


Isolation Levels
The OLEDB-SQL data source supports the following Isolation Levels:

Dynamic isolation
Uncommitted read
Committed read
Repeatable read
Serializable

If the back-end data source does not support a given isolation level, the data source supports only those isolation levels that are supported by the back-end data source. The isolation level is used only within a transaction.

Stored Procedures
The OLEDB-SQL data source driver supports stored procedures. To retrieve output parameters, multiple resultsets, and the return code from the stored procedure, use the "? = CALL" syntax (see CALL Statement).

Transaction Support
The OLEDB-SQL data source driver supports one-phase commit if distributed transactions are supported in the back-end data source. It can participate in a distributed transaction if it is the only one-phase commit data source being updated in the transaction. Both the transaction environment properties convertAllToDistributed and useCommitConfirmTable must be set to true.

Data Types
This table shows how Attunity Connect maps OLE DB data types to SQL data types.
Table 50-2 Mapping OLE DB Data Types

OLE DB Data Type      SQL Data Type
DBTYPE_DBTIMESTAMP    SQL_TIMESTAMP
DBTYPE_DBDATE         SQL_DATE
DBTYPE_BYTES          SQL_BINARY
DBTYPE_DBTIME         SQL_TIME
DBTYPE_I1             SQL_TINYINT
DBTYPE_I2             SQL_SMALLINT
DBTYPE_I4             SQL_INTEGER
DBTYPE_R4             SQL_REAL
DBTYPE_R8             SQL_DOUBLE
DBTYPE_STR            SQL_CHAR
DBTYPE_STR            SQL_VARCHAR
DBTYPE_UI1            SQL_TINYINT
DBTYPE_UI2            SQL_SMALLINT
DBTYPE_UI4            SQL_INTEGER

The OLEDB-SQL data source driver does not support the Variant data type for columns. See also ADD Supported Data Types.

Configuration Properties
The following properties can be configured for the OLEDB-SQL data source in the Design perspective, Configuration view of Attunity Studio. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

disableExplicitSelect: This parameter indicates whether or not the Explicit Select option is disabled.
isolationLevel: This parameter specifies the default isolation level for the data source, as follows:
dynamicIsolation: The isolation level is not set at the data source level. Setting this parameter to true allows the application to set this parameter dynamically as needed.
readUncommitted: This value specifies that corrupt data is not read. This is the lowest isolation level.
readCommitted: This value specifies that only the data committed before the query began is displayed.
repeatableRead: This value specifies that data used in a query is locked and cannot be used by another query nor updated by another transaction.
serializable: This value specifies that the data is isolated serially. Data is treated as if transactions are executed sequentially.
Note: If the specified level is not supported by the data source, then Attunity Connect defaults to the next highest level.

Defining the Data Source


The process of defining an OLEDB-SQL data source consists of the following tasks:

Defining the OLEDB-SQL Data Source Connection
Configuring the OLEDB-SQL Data Source Properties

Defining the OLEDB-SQL Data Source Connection


The OLEDB-SQL data source connection is set using the Design perspective, Configuration view in Attunity Studio.
To define the data source connection
1. Open Attunity Studio.


2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your OLEDB-SQL data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the OLEDB-SQL data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.

7. In the Name field, enter a name for the new data source.
8. Select OLEDB-SQL from the Type list.
9. Enter the connect string information as follows:

   OLEDB Provider: Enter the name of the provider as it appears in the registry (this value is case sensitive).
   OLEDB Data Source Name: Enter the name of the data source.
   Catalog Name (optional): Enter the name of the catalog.

   Or, enter only the:

   Data Link File: Specify the full path and name of the UDL file.

10. Click Finish.

Configuring the OLEDB-SQL Data Source Properties

After defining the connection, you can set the data source properties.
To configure the OLEDB-SQL data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your OLEDB-SQL data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the OLEDB-SQL data source and select Open. The Configuration editor is displayed.


Figure 50-1 OLEDB-SQL Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the OLEDB-SQL Data Source Connection.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:

   User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
   User name: Enter the name of a user with access to this data source.
   Password: Enter the password for the user with access to this data source.
   Confirm Password: Enter the password again, to ensure it was entered correctly.

9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.


51
Oracle Data Source
This section contains the following topics:

Overview
Functionality
SQL Capabilities
Configuration Properties
Metadata
Transaction Support
Security
Data Types
Platform-Specific Information
Defining the Oracle Data Source
Testing the Oracle Data Source

Overview
The Oracle data source provides a wide range of common standard relational functionality that complies with the Relational Data Source model. The Oracle data source driver implements connectivity to an Oracle database instance by means of an embedded OCI interface. Some capabilities are actively implemented as data source driver functionality, while others are implied by the methods and techniques the data source driver uses for interacting with the Oracle back-end database.

Supported Versions and Platforms


For information on supported Oracle versions, see Attunity Integration Suite Supported Systems and Resources.

Supported Features
The Oracle data source driver covers the core of the following common traditional relational capabilities:

Data manipulation (DML)
Data definition (DDL)
Transaction management
Logging
Security
Recovery
Ordinary scalar data types
Large data objects
Locking
Triggers
Stored procedures

Functionality
This section covers the following aspects of Oracle functionality:

Stored Procedures
Isolation Levels and Locking
BLOBs
Passthru Queries

Stored Procedures
The Oracle data source driver supports Oracle stored procedures with scalar type parameters and output cursors (one or more). It also supports functions with a return type of cursor. The stored procedure name must be less than 27 characters. The Oracle data source driver carries out the invocation of stored procedures natively. To retrieve output parameters and the return code from the stored procedure, use the ? = CALL syntax. Oracle enforces certain limitations on the SQL statements and logic applicable within a stored procedure itself. See the specific product documentation for details.
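As an illustration, a minimal sketch of an Oracle function with a return type of cursor (all names are illustrative; on Oracle versions without SYS_REFCURSOR, a packaged REF CURSOR type is required instead):

CREATE OR REPLACE FUNCTION get_employees RETURN SYS_REFCURSOR AS
  emp_cur SYS_REFCURSOR;
BEGIN
  -- Open a cursor over a sample table; the caller fetches from it
  OPEN emp_cur FOR SELECT * FROM NV_EMPLOY;
  RETURN emp_cur;
END;

Such a function could then be invoked through the driver using the ? = CALL get_employees() form described above.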

Isolation Levels and Locking


In the Oracle multi-user environment, numerous users often attempt to update the same information simultaneously. Oracle uses a locking mechanism to coordinate these update activities, allowing only one user to update a particular data block at a time, while disallowing others from modifying the same data. Oracle supports two different levels of locking:

Row-level lock
Table-level lock

With a row-level locking strategy, each row within a table can be locked individually. Under a table-level locking strategy, the entire table is locked as an entity. The locks themselves are categorized into two modes:


The exclusive lock mode prevents locked objects from being shared. An exclusive lock is obtained for modifying data. The first transaction that applies an exclusive lock is the only one that can modify the locked object, until the lock is released.
The shared lock mode allows objects to be shared among several users, subject to the operation involved. Multiple users can read data in a shared mode while preventing access by writers who attempt to apply an exclusive lock.

Locks applied on database objects within a transaction scope are released only as the transaction ends, i.e. when changes are either committed or rolled back. Oracle automatically converts a table lock mode of lower restrictiveness to one of higher restrictiveness as appropriate. For example, assume that a transaction uses SELECT FOR UPDATE syntax to lock rows. If the transaction later UPDATEs locked rows, the shared lock is automatically converted to an exclusive lock on the row. Another approach, known as lock escalation, occurs when numerous locks are held at a given object level of granularity (i.e. rows) and a database raises the locks to a higher level of granularity (for example, table). The Oracle data source does not escalate locks, thereby reducing the chances for potential deadlocks where two or more users are waiting for data locked by each other in a circular fashion. See also:

Consistency
Attunity Connect Treatment of Locking
Attunity Connect Treatment of Isolation Levels

Consistency
The Oracle data source's optimized concurrent access policy for obtaining consistent read access can be briefly summarized as follows:

Readers do not wait for writers
Readers do not wait for other readers
Writers do not wait for readers (of the same data)
Writers wait for other writers only if they attempt to update identical rows in concurrent transactions

The Oracle data source controls read consistency by qualifying transactions with an appropriate isolation level. The isolation level can be set to either READ COMMITTED, which is the default, or SERIALIZABLE, in which case all queried data within that transaction reflects the state of the database as of the time that transaction began. Queries that do not modify any data can ask for a READ ONLY transaction.

Attunity Connect Treatment of Locking


The Oracle data source driver addresses multi-user issues by managing certain controlling syntax elements and internally implemented functionality. Generally, the decision to apply a locking mode is left to the Oracle engine itself. No configurable setting can affect Oracle policy in this regard. However, the data source driver implements LOCK_ROW / UNLOCK_ROW functionality internally. These functions are incorporated into AIS processing and are invoked upon UPDATE logic. This implementation addresses the lock issue in a precise manner by specifying individual rows to be locked upon demand at a specific point in time. The Oracle data source driver is sensitive to the transaction mode being applied. In cases where it senses a READ ONLY transaction, it composes a SET TRANSACTION statement qualified with a READ ONLY clause:
SET TRANSACTION READ ONLY ...

Attunity Connect Treatment of Isolation Levels


The isolation level is a configurable attribute that can be set to the standard values supported by AIS syntax, as shown below:

DynamicIsolation
ReadUncommitted
ReadCommitted
RepeatableRead
Serializable

The configured setting affects the ISOLATION LEVEL clause of the composed SET TRANSACTION statement. However, since the Oracle data source supports only READ COMMITTED and SERIALIZABLE, the following conversion is applied:

Table 51-1 Isolation Level Conversions

Configured Setting      Effective Isolation Level Clause
ReadUncommitted         READ COMMITTED
ReadCommitted           READ COMMITTED
RepeatableRead          SERIALIZABLE
Serializable            SERIALIZABLE

When no value is configured, ReadCommitted is taken as the default.
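For example, with the configured setting RepeatableRead, the statement composed by the driver would be equivalent to the following sketch (the exact statement text the driver emits may vary):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE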

BLOBs
The Oracle data source driver provides support for the built-in predefined opaque data types known as Segmented String/List Of Byte Varying. Both READ and WRITE operations are supported. BLOBs are addressed as ordinary fields. They are handled by dedicated cursors that step along their data. For handling a BLOB field successfully, the table must have a genuine unique key defined.

Passthru Queries
SQL capabilities, as implemented internally by AIS, are equipped with means for delegating SQL statements as-is to the back-end engine. When using this technique, known as Passthru, SQL statements are passed as a whole and processed directly by the database engine, much like the ordinary interactive SQL*Plus utility would process them. Passthru processing is especially advantageous in cases where one attempts to gain explicit control over the processed statement and facilitate proprietary features. For example, introducing refined specific nuances of a CREATE INDEX statement can be carried out using this technique. The following example demonstrates how one can create an index with explicit control over specified attributes, with Passthru, Using NAV_UTIL Utility.
NV_EMPLOY is defined as follows:

NavSQL > desc nv_employ;
Connecting ....
-----------------------------------------------------------------
Table NV_EMPLOY
There are 5 fields in the table:
#  Name         Datatype  Size  Width  Scl  Nullable
-----------------------------------------------------------------------
0  EMPLOYEE_ID  string    5     5      0    no
1  LAST_NAME    string    14    14     0    yes
2  CITY         string    20    20     0    yes
3  ROWID        string    18    18     0    yes
(2 indexes specified)
Index 1: length is 5, Unique
name is NV_EMPLOY_ID
segments are: EMPLOYEE_ID
Index 2: length is 18, Unique, Hashed
name is ROWID
segments are: ROWID
<END OF TABLE DESCRIPTION>
NavSQL >

Assume that LAST_NAME is a commonly referenced key that warrants an index. Nevertheless, for the sake of usability, one may want a case-blind index to ease formulating and processing queries where case sensitivity is of no importance. Oracle provides the following syntax for doing this:
CREATE INDEX lname_case_blind ON NV_EMPLOY(UPPER(LAST_NAME));

This syntax would take the following format when Using NAV_UTIL Utility:
NavSQL > text = {{CREATE INDEX lname_case_blind ON NV_EMPLOY(UPPER(LAST_NAME))}};
Executing: text = {{CREATE INDEX lname_case_blind ON NV_EMPLOY(UPPER(LAST_NAME))}}
OK
0 rows affected

This index would typically improve delegated queries like the following:
SELECT * FROM NV_EMPLOY WHERE UPPER(LAST_NAME) = 'SMITH';

NavSQL > select * from nv_employ;
EMPLOYEE_ID  LAST_NAME   CITY
00164        Toliver     Chocorua
00165        Smith       Chocorua
00166        Dietrich    Boscawen
00167        Kilpatrick  Marlow
00168        Nash        Meadows
00169        Gray        Etna
00170        Wood        Jefferson
7 rows returned

NavSQL > SELECT * FROM NV_EMPLOY WHERE UPPER(LAST_NAME) = 'SMITH';
EMPLOYEE_ID  LAST_NAME  CITY
00165        Smith      Chocorua
1 rows returned
NavSQL >

Recall that the SQL statement is passed through on an as-is basis, with no interpretation or any other intervention of any kind. Consequently, it should be phrased according to the lexical notation and syntax rules expected by the back-end Oracle SQL engine.

SQL Capabilities
The Oracle data source conforms to ANSI 92 SQL. This standard includes both the SQL syntax and semantics. AIS uses its own extended SQL (see Attunity SQL Syntax), endowed with advanced capabilities. In most cases, AIS attempts to delegate SQL portions to the Oracle back-end engine for the processing of those SQL features that are supported there. Extended and/or unsupported functionality is processed internally by AIS. This notion is part of the Attunity Query Processor and Query Optimizer core. To implement it, AIS is fluent in Oracle SQL syntax and functionality. The Oracle data source driver assumes the following Oracle SQL elements:

Allows any expressions in GROUP BY clauses
Allows any expressions in ORDER BY clauses
DB supports distributed transactions
DB supports LOJ transactions
LOJ queries are supported but subject to any Query Processor restrictions
When creating a table name in t2sql, quote also the owner name
DB supports UNION operator
DB supports UNION ALL
DB supports FOR UPDATE inclusion in statement
DB supports FOR UPDATE OF inclusion in statement
Oracle native RDBMS hints are supported

The following table lists the main Oracle functionality as symbolically expressed in its SQL.

Table 51-2 Oracle Functionality Expressed in SQL

Functionality           Symbolic Syntax
_POSITION_              Not Supported
_NVL_                   NVL(~, ~)
_CASE_                  DECODE(~, ~)
_DATE_CONST_            TO_DATE('~-~-~', 'YYYY-MM-DD')
_TIMESTAMP_CONST_       TO_DATE('~-~-~ ~:~:~', 'YYYY-MM-DD HH24:MI:SS')
_CURRENT_TIMESTAMP_     SYSDATE
_ADD_MONTHS_            ADD_MONTHS(~, ~)
_LAST_DAY_              LAST_DAY(~)
_MONTHS_BETWEEN_        MONTHS_BETWEEN(~, ~)
_NEXT_DAY_              NEXT_DAY(~, ~)
_CONTAINS2_             CONTAINS(~, ~)
_CONTAINS3_             CONTAINS(~, ~, ~)
_COS_                   COS(~)
_SIN_                   SIN(~)
_TAN_                   TAN(~)
_ACOS_                  ACOS(~)
_ASIN_                  ASIN(~)
_ATAN_                  ATAN(~)
_COSH_                  COSH(~)
_SINH_                  SINH(~)
_TANH_                  TANH(~)
_LOG10_                 LOG(10, ~)
_LN_                    LN(~)
_EXP_                   EXP(~)
_POWER_                 POWER(~)
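As a hedged illustration of this mapping (reusing the NV_EMPLOY sample table; the HIRE_DATE column is hypothetical), an AIS query such as

SELECT NVL(CITY, 'N/A') FROM NV_EMPLOY WHERE HIRE_DATE > DATE '1998-03-15'

can be delegated to Oracle with the date constant rendered through TO_DATE:

SELECT NVL(CITY, 'N/A') FROM NV_EMPLOY WHERE HIRE_DATE > TO_DATE('1998-03-15', 'YYYY-MM-DD')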

Using Oracle Hints in the SQL


AIS supports using Oracle hints in a query. Some reasons for using hints are:

To optimize the result of a query.
To force the query processor to read past any locked tables. This is useful when you need to scan tables during work hours and you must get information as quickly as possible.
To limit the rows of a table that are scanned.
To carry out an index scan.

AIS supports both its own hints and back-end relational database hints (currently only Oracle hints are supported). For more information on using hints in SQL statements, see Attunity SQL Syntax. See also:

Attunity Hints
Oracle Hints

Attunity Hints
Attunity hints indicate the optimization strategy. If more than one hint is entered in the statement, the optimizer uses the most efficient strategy. Before selecting an optimization strategy, you should check it with the Attunity Query Analyzer. The following hints are supported:

SCAN: Indexes are ignored and the data is scanned according to its physical order.
INDEX: The index identified by the indname parameter is used to find specific values in the WHERE clause. If the WITHOUT parameter is used, the index is ignored for the indicated values.
INDEXSCAN: The index identified by the indname parameter is used to seek for all values. If the WITHOUT parameter is entered, the index is ignored.
FIRST: The left table in the join strategy. This table will be the first table in the optimized tree (on the left side).
LAST: The table will be the last table in the optimized tree.
ON <cond>: A condition determining the results of the <join>.

The following parameters are used with hints in the SQL statement to define how the hint will work:

WITH: Use the optimization strategy indicated by the hint.
WITHOUT: Do not use the optimization strategy indicated by the hint.
indname: The name or an ordinal of an index.
<n>: The number of segments used with the INDEX hint. A value of zero (0) is the same as using the INDEXSCAN hint.

The following is an example of an SQL statement using Attunity hints:


Select * from T1 <Access (Index (emp_prim, 2))>
Where key = 323 or key = 512

In this Select statement, the hint Index is used; emp_prim is the indname parameter and 2 is the <n> parameter.

Oracle Hints
Attunity Connect also supports hints in Oracle syntax when using an Oracle data source. For a list of hints used with the Oracle syntax, see the Oracle documentation. You use hints with the following statements:

SELECT
UPDATE
DELETE
INSERT

You can use any supported Oracle hint. You can also create a statement that includes more than one table and more than one hint. In this case, AIS combines the hints and separates them with a space delimiter. AIS combines the hints from all the tables processed by the delegated SQL statement. The hints are added after the first keyword of the resulting statement. For example, when you enter the following SQL statement:
Select from A <DBHINT(hint1)> ;

It is converted to:
Select /*+ hint1 */ from A ;

When you enter the following SQL statement:


Select from A <DBHINT(hint1)>,B <DBHINT(hint2)> ;

It is converted to:
Select /*+ hint1 hint2*/ from A,B ;

When you enter the following SQL statement:


Insert into A <DBHINT(hint1)> select * from B <DBHINT(hint2)> ;

It is converted to:
Insert /*+ hint1 hint2*/into A select * from B ;

When a hint refers directly to a schema element (such as a table name, index, or column), the hint must contain an explicitly provided alias. For example:

select * from T1 b <dbhint ('index (B t1i2)')> where b.c = 1;

Notes:

If your hint refers to a schema and you do not provide an alias, AIS will generate its own alias and the hint will not work.
AIS does not check the content of Oracle hints.

Configuration Properties
The following properties can be configured for the Oracle data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

isolationLevel=value: This parameter specifies the default isolation level for the data source, as follows:
  readUncommitted: This value specifies that corrupt data is not to be read. This is the lowest isolation level.
  readCommitted: This value specifies that only the data committed before the query began is displayed.
  repeatableRead: This value specifies that data used in a query is locked and cannot be used by another query nor updated by another transaction.
  serializable: This value specifies that the data is isolated serially. Data is treated as if transactions were executed sequentially.

Note: If the specified level is not supported by the data source, then Attunity Connect defaults to the next highest level.

newDecimal=true|false: This parameter specifies whether the decimal data type is treated as a decimal or double data type. The default value is false, which is valid for users of Oracle 8.0. Users of Oracle 8i and higher should change the value to true.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.


Metadata
Within the relational model, metadata definitions are maintained and held in dedicated system tables that are accessed by ordinary SQL queries. The Oracle data source queries these system tables for its own metadata needs. A metadata query issued by the data source driver to display a list of tables matching a given pattern would have the following generic format:
select table_name, table_owner from all_synonyms
where synonym_name = :1 and (owner like :2 or owner='PUBLIC')

As an illustration, a query instance of this kind can be issued directly:


NavSQL > select table_name, table_owner from all_synonyms where synonym_name like '%JOB%' and (owner like '%' or owner='PUBLIC');
table_name        table_owner
DBA_JOBS_RUNNING  SYS
DBA_JOBS          SYS
USER_JOBS         SYS
USER_JOBS         SYS
DBMS_JOB          SYS
WK$JOB_INFO       WKSYS
WK_JOB            WKSYS
JOBS              HR
JOB_HISTORY       HR
9 rows returned
NavSQL >

Similarly, retrieving index information for a given table is achieved by the following compound generic statement:
select i.index_name, i.uniqueness, c.column_name, i.distinct_keys
from all_indexes i, all_ind_columns c
where i.index_name = c.index_name
  and i.table_name = c.table_name
  and i.owner = c.index_owner
  and i.table_owner = c.table_owner
  and c.table_name = :1
  and c.table_owner like :2
order by c.index_name, c.column_position

Specific details regarding the inner components of a given table, such as fields, data types and so forth are sampled by querying the table and describing its contents using standard programmed techniques. A sequence which is logically equivalent to the following example is applied programmatically:
Set recqry_str to "SELECT * FROM EMPLOYEES"
PREPARE record_query FROM recqry_str;
DESCRIBE record_query INTO rec_sqlda;

As a result, the rec_sqlda structure now holds the full mapping of the EMPLOYEES table.

Statistics
Statistical information is, in a sense, yet another piece of metadata maintained internally in dedicated system tables. Consequently, the same notions regarding metadata objects prevail for statistics as well.


The Oracle data source gathers available statistics held inside the database instance. This information is used later by the run-time components for developing, evaluating and choosing execution strategies at query optimization phases. Consider the following example:

select num_rows, blocks from all_tables where table_name = :1 and owner like :2

A particular instantiation of this query introduced to the driver will yield the following:

NavSQL > select num_rows, blocks from all_tables where table_name = 'EMPLOYEES' and owner like '%';
num_rows  blocks
107       5
1 rows returned
NavSQL >

Upon optimizing a query involving the EMPLOYEES table to achieve the best solution, these figures are considered, together with other structural and quantitative factors. In a similar manner, index statistics, which have an important role in the optimization phase, are also gathered from the metadata tables.

Transaction Support
The Oracle driver supports two-phase commit and can fully participate in a distributed transaction when the transaction environment property convertAllToDistributed is set to true. Use Oracle with its two-phase commit capability through an XA connection. The daemon server mode must be configured to Single-client mode (see Server Mode). To use distributed transactions from an ODBC-based application, ensure that AUTOCOMMIT is set to 0.

Security
The Oracle data source driver is not actively involved in applying or enforcing security policy. It conforms to the security policy and rules as set at the database instance with which it interacts.

Data Types
This table shows how Attunity Connect maps Oracle data types to OLE DB and ODBC data types.
Table 51-3 Mapping Oracle Data Types

Oracle              OLE DB           ODBC
BFILE               DBTYPE_BYTES     SQL_LONGVARBINARY
BLOB                DBTYPE_BYTES     SQL_LONGVARBINARY
CFILE               DBTYPE_STR       SQL_LONGVARCHAR
Char (m<256)        DBTYPE_STR       SQL_CHAR
Char (m>255)        DBTYPE_STR       SQL_LONGVARCHAR (1)
CLOB                DBTYPE_STR       SQL_LONGVARCHAR
Date                DBTYPE_DATE      SQL_TIMESTAMP
Float               DBTYPE_R8        SQL_DOUBLE
Long                DBTYPE_BYTES     SQL_LONGVARCHAR
Long Raw            DBTYPE_BYTES     SQL_LONGVARBINARY
Number(9<p<31)      DBTYPE_NUMERIC   SQL_NUMERIC(p)
Number(p,s)         DBTYPE_NUMERIC   SQL_NUMERIC(p,s)
Number(p<=4)        DBTYPE_I2        SQL_SMALLINT
Number(p<=9)        DBTYPE_I4        SQL_INTEGER
Number(p>31)        DBTYPE_R8        SQL_DOUBLE
Varchar2 (m<256)    DBTYPE_STR       SQL_VARCHAR
Varchar2 (m>255)    DBTYPE_STR       SQL_LONGVARCHAR (1)

Note: 1. Precision of 2147483647. If the <odbc longVarcharLenAsBlob> parameter is set to true in the Attunity Server environment settings, then precision of m.

This table shows how Attunity Connect maps data types in a CREATE TABLE statement to Oracle data types.
Table 51-4 CREATE TABLE Data Types

CREATE TABLE      Oracle
Binary            Raw
Char[(m)]         Char[(m)]
Date              Date
Double            Float
Float             Float
Image             Long Raw
Image(m)          Raw(m)
Integer           Number (10)
Numeric           Float
Numeric(p[,s])    Numeric(p,s)
Smallint          Number (5)
Text              Long
Time              Date
Timestamp         Date
Tinyint           Number (3)
Varchar(m)        Varchar2(m)


See also ADD Supported Data Types.

Platform-Specific Information
This section includes Oracle-related information and procedures as they pertain to specific platforms, as follows:

UNIX Platforms
OpenVMS Platform

UNIX Platforms
This section includes Oracle-related information and procedures as they pertain to UNIX platforms.

Verifying Environment Variables on UNIX Platforms


You can verify that the Oracle environment variables are correctly defined in the configuration settings.
To verify the environment variables
1. Add the $ORACLE_HOME/rdbms/lib and $ORACLE_HOME/lib directories to the shared library environment variable, where $ORACLE_HOME is the directory where Oracle is installed.
2. Make sure that the Oracle shared library directories come after $NAVROOT/lib in the UNIX shared library environment variable.

Oracle allows only the dba group or a superuser the right to access the Oracle libraries. In client/server mode, the server process owner allocated by the daemon must have dba as its main group, or be a superuser.

Linking to Oracle Libraries on UNIX Platforms


Before you link to Oracle libraries, make sure that you set the environment variables. Then link to Oracle libraries by running the ora8_build script from navroot/bin. Make sure that the user account that executes this script has write permission to navroot/lib.

OpenVMS Platform
This section includes Oracle-related information and procedures as they pertain to OpenVMS platforms. The following topics provide information about the Oracle Data Source on the OpenVMS Platform.

Verifying Environment Variables on OpenVMS Platforms
Linking to Oracle Libraries on OpenVMS Platforms

Verifying Environment Variables on OpenVMS Platforms


You can verify that the Oracle environment variables are correctly defined in the configuration settings. Usually, these variables are set at the system level; in this case, you do not need to set them at the process or job level. However, if they have not been set, or if you work with multiple versions of Oracle, you need to set the environment variables now.


To verify the environment variables
1. Add the ORACLE_HOME:[rdbms.lib] and ORACLE_HOME:[lib] directories to the shared library environment variable, where ORACLE_HOME is the directory where Oracle is installed.
2. Make sure that the Oracle shared library directories come after NAVROOT:[lib] in the OpenVMS shared library environment variable.

Oracle allows only the dba group or a superuser the right to access the Oracle libraries. In client/server mode, the server process owner allocated by the daemon must have dba as its main group, or be a superuser.

Linking to Oracle Libraries on OpenVMS Platforms


Before you link to Oracle libraries, make sure that you set the environment variables. If the necessary link was not configured when the Attunity Server was installed, you can link Attunity Connect with Oracle by running the necessary commands from a privileged and Oracle-enabled account.
To link Attunity Connect with Oracle
1. Run the following command from a privileged and Oracle-enabled account:

$ @SYS$MANAGER:NAV_SHUT

2. Run the login procedure NAV_LOGIN using its full pathname, and then run:

$ @NAVROOT:[BIN]NAV_ORA_BUILD
$ @SYS$STARTUP:NAV_START

Defining the Oracle Data Source


The process of defining an Oracle data source consists of the following tasks:

Defining the Oracle Data Source Connection
Configuring the Oracle Data Source Properties
Configuring Table and Column Names to be Case Sensitive
Checking Oracle Environment Variables

Defining the Oracle Data Source Connection


The Oracle data source connection is set using the Design perspective, Configuration view in Attunity Studio.
To define the data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Oracle data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Oracle data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.


7. In the Name field, enter a name for the new data source.
8. Select Oracle from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the Oracle connect string. See the Oracle documentation for the specific connect string.
11. Click Finish.

Note: The 32-bit data source client must be used to access Oracle on 64-bit operating systems (HP-UX 11 and higher, AIX 4.4 and higher, and Sun Solaris 2.8 and higher).

Configuring the Oracle Data Source Properties


After defining the connection, you set the data source properties.
To configure the Oracle data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Oracle data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the Oracle data source and select Open. The Configuration editor is displayed.


Figure 51-1 Oracle Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. See the Oracle documentation for the specific connect string.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:

   User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
   User name: Enter the name of a user with access to this data source.
   Password: Enter the password for the user with access to this data source.
   Confirm Password: Enter the password again, to ensure it was entered correctly.

9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.

10. Click Finish.

Configuring Table and Column Names to be Case Sensitive


To access Oracle tables and columns using case sensitive names, specify Oracle_SYNTAX in the Syntax name field in the Advanced tab of the Configuration Properties screen in Attunity Studio. When specifying case sensitive table and column names in SQL queries over Oracle data, use quotes (") to delimit the names. Make sure that the case of the names within the quotes is exact. When specifying a table owner name in a query, the case of the owner name must match that defined in the binding settings.
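For example, a short sketch of such a query (the table and column names are illustrative):

SELECT "EmpId", "LastName" FROM "Employees" WHERE "LastName" = 'Smith';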


Checking Oracle Environment Variables


Check that Oracle environment variables such as ORACLE_HOME and ORACLE_SID are correctly defined and readable by Attunity Connect. If necessary, define the variables in the startup script defined for the workspace in the daemon configuration, such as nav_server.script, or in the site_nav_login file. For more information see the AIS Installation Guide.

Testing the Oracle Data Source


You can perform the following tests on the Oracle data source:

Connection test: This tests the physical connection to the data source.
Query test: This test runs an SQL SELECT query against the data source.

These tests are described in the following procedures:
To test the connection to the Oracle data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Oracle data source.
4. Expand the binding with the Oracle data source.
5. Expand the Data sources folder.
6. Right-click the required Oracle data source, and select Test. The Test Wizard screen opens.
7. Select Navigator from the Active Workspace Name list, and click Next. The system now tests the connection to the data source, and returns the test result status.
8. Click Finish to exit the Test wizard.

To test the Oracle data source by query
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Oracle data source.
4. Expand the binding with the Oracle data source.
5. Expand the Data sources folder.
6. Right-click your Oracle data source, and select Query Tool. The Select Workspace screen opens.
7. Select Navigator and click OK. The Query Tool opens in the Editor pane, with the Build Query tab displayed (see step 10).
8. Select the required query type from the Query Type list. The default is a SELECT-type query.
9. Locate and expand your Oracle data source. The Oracle data source tables are listed.


10. Drag the required table to the Table column, as shown in the following figure:

Figure 51-2 The Query Tool screen

11. Click Execute query. The Query Result tab opens, displaying the results of the query.
12. Close the Query Tool in the Editor pane.

Sample Log File


Log files are used for troubleshooting and error handling. The log file is generated when the driverTrace debug binding parameter is set to True. The log file includes various information concerning the functions used or called by the driver, queries executed, data sources accessed, and so on. First, you need to create the log file.
To create a log file
1. Open Attunity Studio.
2. From the main menu, click Windows, Preferences. The Preferences screen is displayed.


Figure 51-3 Studio Preferences

3. In the left pane, click Studio.
4. Click the Advanced tab.
5. Click the Show advanced environment parameters check box.
6. Click OK.
7. In the Design perspective Configuration view, right-click the binding with the Oracle data source, and select Open.
8. Expand the Debug section.
9. Select GDB trace and General trace.
10. Execute the following query: Select * from nation limit to 3 rows.

The following is a sample log file output:


Example 51-1 Sample Oracle Log File

Attunity Server Log (V4.9.0.0, ALPHA-VMS) Started at 2006-04-25T12:51:12 Licensed by ISG LTD. on 02-APR-2000 (001001205) Licensed to ISG LTD for <all providers> on 194.90.22.* (<no platform>) Licensed by ATTUNITY LTD. on 09-AUG-2000 (001001237) Licensed to ATTUNITY for <all providers> on 194.90.22.* (<all platforms>) RDB Execute: CONNECT AS 'NAV_READmyRDBSQL'

RDB Execute: ATTACH 'ALIAS MYRDBSQL FILENAME US:[TEST.APTEST.APP.RDB70]MAIN_ DB.RDB' RDB Execute: SET TRANSACTION ON MYRDBSQL USING (READ ONLY NOWAIT )

COMMIT NAV_READmyRDBSQL


RDB Execute: SET TRANSACTION ON MYRDBSQL USING (READ ONLY NOWAIT

COMMIT NAV_READmyRDBSQL

nvOUT (VER:[WORK]QP_SQTXT.C;6 56): select * from nation limit to 3 rows nvRETURN (VER:[WORK]QPSYNON.C;6 1140): -1 RDB Execute: SET TRANSACTION ON MYRDBSQL USING (READ ONLY NOWAIT

SELECT T0000.N_NATIONKEY, T0000.N_NAME, T0000.N_REGIONKEY, T0000.N_COMMENT FROM MYRDBSQL.nation T0000

<<<<<<<<<<<<<<<<<<<

Execution Strategy Begin <<<<<<<<<<<<<<<<<<<<<<<<<<<<

Original SQL: select * from nation limit to 3 rows

Accessing Database 'myrdbsql' with SQL: SELECT T0000.N_NATIONKEY, T0000.N_NAME, T0000.N_REGIONKEY, T0000.N_COMMENT FROM MYRDBSQL.nation T0000

>>>>>>>>>>>>>>>>>>>> Execution Strategy End >>>>>>>>>>>>>>>>>>>>>>>>>>>> nvOUT (VER:[WORK]QPSQLCSH.C;6 140): ---------------------------> Using Cached QSpec SELECT T0000.N_NATIONKEY, T0000.N_NAME, T0000.N_REGIONKEY, T0000.N_COMMENT FROM MYRDBSQL.nation T0000 nvRETURN (VER:[WORK]DRVIUNWN.C;6 804): -1210 (Last message occurred 2 times) Disabled FilePool Cleanup(DB=___sys, FilePool Size=0) COMMIT NAV_READmyRDBSQL RDB Execute: disconnect 'NAV_READmyRDBSQL' FilePool Shutdown(DB=___SYS, FilePool Size=0) Closing log file at TUE APR 25 12:52:08 2006


52
Oracle RDB Data Source (OpenVMS Only)
This section contains the following topics:

Overview
Functionality
SQL Capabilities
Configuration Properties
Metadata
Transaction Support
Security
Oracle RDB Data Types
Defining the Oracle RDB Data Source
Testing the Oracle RDB Data Source

Overview
The Oracle RDB data source provides a wide range of common standard relational functions that comply with the Relational Data Source model. The Oracle RDB data source driver implements connectivity to an Oracle RDB database instance by means of an embedded SQL technique. Some capabilities are actively implemented as data source driver functions while others are implied by the methods and techniques the data source driver uses for interacting with the Oracle RDB back-end database.

Supported Versions and Platforms


For information on supported Oracle RDB versions, see Attunity Integration Suite Supported Systems and Resources.

Supported Features
The Oracle RDB data source driver covers the core of the following common traditional relational RDBMS capabilities:

Data manipulation (DML)
Data definition (DDL)
Transaction management
Logging
Security
Recovery
Ordinary scalar data types
Large data objects
Locking
Triggers
Stored procedures

Functionality
This section covers the following aspects of Oracle RDB functions:

Stored Procedures
Isolation Levels and Locking
BLOBs
Passthru Queries

Stored Procedures
The Oracle RDB data source driver supports Oracle RDB stored procedures; however, the stored procedure name must be less than 27 characters and the procedure cannot return a resultset. Invocation of stored procedures is carried out natively by the data source driver. To retrieve output parameters and the return code from the stored procedure, use the ? = CALL syntax. The Oracle RDB data source driver is not capable of handling a row set as output. Oracle RDB enforces certain limitations on SQL statements and logic applicable within a stored procedure itself. See the specific product documentation for details.

Isolation Levels and Locking


Oracle RDB uses a locking mechanism for controlling concurrency and enforcing logical and physical integrity of the database. The strategy for locking objects is as follows:

Lock the object
Perform the required work on the object
Unlock the object at a later time, most likely at the end of the transaction

Among lockable objects the following hierarchy is found:


Database
Table
Page
Row
Index node

Oracle RDB implements dedicated logic that chooses the appropriate object and adjusts the lock granularity. It selects a suitable lock object and level based on the operation being performed in a given context. All this is aimed at minimizing potential lock conflicts. Note that the locking scope is normally associated with a transaction. The SET TRANSACTION syntax contains elements for controlling the locking applied:

READ/WRITE
WAIT [x]/NOWAIT

Nevertheless, the Oracle RDB locking policy is rather strict. Lock conflicts are a matter of routine in a database that serves multiple users concurrently. A user can be a 'Blocker', while holding a locked resource required by others, or 'Blocked By', while waiting for a resource locked by others to be released, or both. Circular lock conflicts can lead to deadlock situations where User A locks Resource 1 and waits for Resource 2, which is locked by User B who is waiting for Resource 1. The Oracle RDB data source driver is aware of potential Oracle RDB locking conflicts and provides certain built-in relief measures:

The default transaction WAIT parameter is 0. This translates to NOWAIT, which actually means that a default transaction initiated by the Oracle RDB data source will not be blocked. Instead it will inform of a lock conflict if one is encountered.
The default proposed isolation level is READ COMMITTED, which reduces lock contention and increases the degree of concurrency. Note that this affects 'pure' transactional integrity, since data committed by others is visible in your transaction.
The Oracle RDB data source maintains dual (multiple) connections to the database. This notion is based on the fact that when snapshots are enabled for a database (the default), READ ONLY transactions do not lock the rows they read. Data is read from the snapshot maintained by Oracle RDB. Therefore the Oracle RDB data source driver normally holds two connections to the database. The first, denoted as NAV in the data source driver's log, is used for WRITE transactions, which are subject to LOCK conflicts. The second, denoted as NAVREAD, serves the READ transactions. This notion of duplicating connections is also extended to separating DDL and stored procedure locking activities from the main data source driver's course, which is READ/WRITE operations.
The Oracle RDB data source provides explicit control on elements that affect locking along the transaction. This is carried out by assigning appropriate values to configurable parameters, as described below. In particular, ISOLATION LEVEL can be set explicitly both for READ and WRITE transactions as follows:
  readCommitted: Allows your transaction to see all data committed by other transactions.
  repeatableRead: Guarantees that if you execute the same query again, your program receives the same rows it read the first time. However, you may also see rows inserted and committed by other transactions (known as phantoms).
  serializable: Guarantees that the operations of concurrently executed transactions are not affected by any other transaction.


BLOBs
The Oracle RDB data source driver provides support for the built-in predefined opaque data types known as Segmented String/List Of Byte Varying. Both READ and WRITE operations are supported. BLOBs are addressed as ordinary fields. They are handled by dedicated cursors that step along their data. For handling a BLOB field successfully, the table must have a genuine unique key defined.

Passthru Queries
Attunity SQL Syntax provides the ability to pass SQL code directly to the Oracle RDB engine. This technique, known as Passthru, is useful when you need precise control over the processed statement and want to facilitate proprietary features. For example, introducing refined specific nuances of a CREATE INDEX statement can be carried out using this technique. The following example shows how to create an index with explicit control on RANKED/COMPRESSED qualifiers, using Passthru with the NAV_UTIL utility (see Using NAV_UTIL Utility). In interactive SQL it would look as follows:

SQL> CREATE INDEX JH_EMP_ID
cont> ON JOB_HISTORY (EMPLOYEE_ID)
cont> TYPE IS SORTED RANKED
cont> DUPLICATES ARE COMPRESSED;

In NAV_UTIL it would have the following format:

NavSQL> text = {{CREATE INDEX JH_EMP_ID ON JOB_HISTORY (EMPLOYEE_ID) TYPE IS SORTED RANKED DUPLICATES ARE COMPRESSED}};

Remember that the SQL statement is passed through on an as-is basis, with no interpretation and/or any other intervention of any kind. Consequently, this statement should comply with the syntax rules of the Oracle SQL engine.

SQL Capabilities
The Oracle RDB data source conforms to the ANSI 92 SQL standard for both syntax and semantics. While AIS has its own extended SQL (see Attunity SQL Syntax), including advanced capabilities, it will endeavor to delegate all the SQL code that can be processed by Oracle RDB to the database engine. Extended or unsupported functions are processed internally by AIS. The Oracle RDB data source driver assumes the following Oracle RDB SQL elements:

Does not support aliases for columns
Allows an optional prefix before table name
Allows 'ORDER BY <column name>' when <column name> does not have to appear in SELECT
If a date/timestamp literal is set, this means that the month is JAN and not 01
No support for owners
DB supports UNION
DB supports UNION ALL
DB supports FOR UPDATE
DB supports FOR UPDATE OF
DB supports LOJ transactions
LOJ queries are supported but subject to any Query Processor restrictions

The following table lists the main Oracle RDB functions as symbolically expressed in its SQL.
Table 52-1 Oracle RDB Functions Expressed in SQL

Functional Notation       Symbolic Syntax
YACC_POSITION_            Not Supported
YACC_LENGTH_              CHAR_LENGTH(~)
YACC_SUBSTR2_             SUBSTRING(~ FROM ~ FOR 9999)
YACC_SUBSTR3_             SUBSTRING(~ FROM ~ FOR ~)
YACC_LTRIM_               TRIM(LEADING ' ' FROM ~)
YACC_RTRIM_               TRIM(TRAILING ' ' FROM ~)
YACC_ABS_                 CASE WHEN '0 >= 0 THEN '0 ELSE -('0) END
YACC_MOD_                 Not Supported
YACC_NVL_                 COALESCE(~, ~)
YACC_CASE_                CASE ~ {{WHEN ~ THEN ~}} [[ELSE ~]] END
YACC_CASE2_               CASE {{WHEN ~ THEN ~}} [[ELSE ~]] END
YACC_SQRT_                Not Supported
YACC_CONVERT_             CAST(~ AS ~)
YACC_DATE_CONST_          DATE VMS ''2-'1-'0 00:00:00.00''
YACC_TIMESTAMP_CONST_     DATE VMS ''2-'1-'0 '3:'4:'5.'6''
YACC_TIME_CONST_          TIME '~:~:~.~'
YACC_CURRENT_DATE_        CURRENT_DATE
YACC_CURRENT_TIMESTAMP_   CURRENT_TIMESTAMP(2)
YACC_CURRENT_TIME_        CURRENT_TIME(2)
YACC_DAYOFWEEK_           CASE EXTRACT(WEEKDAY FROM '0) WHEN 7 THEN 1 ELSE (EXTRACT(WEEKDAY FROM '0) + 1) END

The following table specifies Oracle data type SQL lexical notations as associated with the corresponding AIS types.
Table 52-2 Oracle and AIS Corresponding Data Types

Oracle Lexical Notation   AIS Symbolic Type
NUMBER(3)                 DT_B_
NUMBER(5)                 DT_W_
NUMBER(10)                DT_L_
NUMBER(20)                DT_Q_
FLOAT                     DT_D_
NUMBER(%d,%d)             DT_NUMERIC
LONG                      DT_TEXT_
VARCHAR2(%d)              DT_C_
RAW(%d)                   DT_OPAQUE_
LONG RAW                  DT_IMAGE_
Not Supported             DT_ODBC_TIME_
Not Supported             DT_ODBC_TIMESTAMP_

Configuration Properties
The following properties can be configured for the Oracle RDB data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.
Note: The dbName element identifies the Oracle RDB database instance, for example, dbName='personnel.rdb'.

accessModeThreshold: Setting this parameter to readOnlyThreshold restricts access to read only, while setting it to readWriteThreshold allows both read and write. No additional thresholds are currently supported.
commitReadOnly: Instructs the data source driver to place COMMIT between consecutive READ ONLY transactions. This has certain effects on the managed transaction and snapshot renewal as implemented by Oracle RDB. Further details are available in the product documentation.
constraintsMode: Does not affect the ordinary data source driver mode of execution.
defaultTransaction: Directs the data source driver as to how to handle situations where no explicit transaction is initiated. It may be either readOnly or readWrite.
disableExplicitSelect: Relevant if the data source driver allows for the suppressing of certain fields for Select *. Thus, if you disable this parameter, all fields will appear for Select *. For example, rowID and dbKey are explicit select.
imposedTransactionMode: Enables the enforcement of a transaction mode regardless of the specified transaction qualifiers. You can specify:
  none: This value specifies that the default access mode is used.
  read: This value specifies that the database is read only.
  write: This value specifies that the database is accessed in WRITE mode.
This is useful for imposing read access via the Oracle RDB data source driver.

52-6 AIS User Guide and Reference

isolationLevel: In charge of controlling the isolation level applied to an Oracle RDB transaction. The isolation level determines to what extent data manipulated in a given session is affected by changes made by others concurrently. Oracle RDB is fully compliant with the ANSI 92 standard in this regard. There are four supported levels that can be configured as attributes for this element:
  readUncommitted: This value specifies that corrupt data is not to be read. This is the lowest isolation level.
  readCommitted
  repeatableRead
  serializable
See Isolation Levels and Locking for additional descriptive information. Further details regarding the corresponding semantics can be found in the related Oracle RDB/ANSI 92 documentation.

lockWait: Controls the composition of the WAIT clause upon transaction start. The default setting of 0 results in the NOWAIT keyword. A positive integer X assembles a WAIT X clause.
reservingForReadTransaction/reservingForWriteTransaction: Can be set with a 'reserving clause' of choice, which is introduced upon starting a read/write transaction respectively.
showDbkey: Determines whether the DBKEY field is recognized by the Oracle RDB data source driver as a field associated with manipulated tables. A DBKEY is an artificial field managed for every table. It holds information regarding the physical addressing of a given record. By setting it appropriately, you can expose or hide the DBKEY. Thus, when setting showDbkey=true, the following is accepted:

select * from nation where DBKey = ?
select n_nationkey, dbkey from nation;

However, when set to false, these DMLs are rejected.

transactionsInSP: Addresses conflicts that may occur between the transactions managed by the driver and those possibly set in the stored procedure body. Since a stored procedure is an independent code section that can hold most of the SQL repertoire, there may be an internally initiated transaction that the data source driver is not aware of. This can put the managed transaction protocol out of synchronization. For these cases, setting this parameter to TRUE results in a dedicated connection for the stored procedure that does not interfere with the data source driver's transactional sequence.

useSeparateRWconnections: Manages only one RW connection to a database. The default is two RW connections.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.


Metadata
The Oracle RDB data source driver polls the Oracle RDB database for required metadata. Following the relational model, metadata definitions reside in dedicated system tables that are accessed by ordinary SQL queries. In this regard, the system tables of interest are RDB$RELATIONS, holding information about tables, and RDB$ROUTINES, where procedures are maintained. A metadata query issued by the data source driver for a table listing can look as follows:
SELECT RDB$RELATION_NAME, RDB$DESCRIPTION FROM RDB$RELATIONS

Detailed table/record information (fields, types, and the like) is sampled by querying the table and describing its contents by standard programmed techniques. The following statement sequence illustrates this symbolically:

Set recqry_str to "SELECT * FROM EMPLOYEES"
PREPARE record_query FROM recqry_str;
DESCRIBE record_query INTO rec_sqlda;

Now the rec_sqlda structure holds the mapping of the EMPLOYEES table.

Statistics
The same notions regarding metadata objects prevail for statistics as well. The Oracle RDB data source driver gathers available statistics held inside the database instance. This information is used later for developing, evaluating and choosing execution strategies at query optimization phases (unless dealing with pure Passthru Queries delegated as a whole). Oracle RDB maintains internal cardinality bookkeeping of various kinds. These figures provide an (estimated) indication as to 'how many pieces there are'. The following illustrates the retrieval of index statistics:

SELECT I.RDB$INDEX_NAME, F.RDB$FIELD_NAME, I.RDB$UNIQUE_FLAG, F.RDB$FLAGS, I.RDB$CARDINALITY
FROM RDB$INDICES I, RDB$INDEX_SEGMENTS F
WHERE I.RDB$INDEX_NAME = F.RDB$INDEX_NAME
AND I.RDB$RELATION_NAME = 'EMPLOYEES';

As shown, index properties and structural information are also queried and retrieved.

Transaction Support
The Oracle RDB data source provides support for managing transactions using the following transactional qualifiers:

READ/WRITE: Sets the access mode applied.
WAIT x/NOWAIT: Determines how the transaction reacts when a locked resource is required.
ISOLATION LEVEL: A configurable property for READ and WRITE transactions that controls the degree of sharing/concurrency among users.
Reserving clause: Provides refined instruction regarding the access/lock mode to be applied on individual listed tables.
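As an illustration, a transaction start that combines these qualifiers might look like the following sketch (the table name and values are illustrative, and the exact clause order may vary):

SET TRANSACTION READ WRITE WAIT 5
ISOLATION LEVEL SERIALIZABLE
RESERVING EMPLOYEES FOR EXCLUSIVE WRITE;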


Read/Write qualifiers are normally implied by the DML being processed. Other qualifiers are configured as described above. This relates to the ordinary fundamental one-phase commit transactional protocol as implemented by Oracle RDB. Starting from Oracle RDB V7.x running on Alpha OpenVMS 7.x, the driver supports two-phase commit and can fully participate in a distributed transaction. 2PC support is implemented via a dedicated AIS run-time component, namely the XAGW manager. The XAGW manager is a gateway to DECdtm, which is the common transaction manager in OpenVMS. The protocol implemented there is the XA interface, which is the X/OPEN de facto standard. An Oracle RDB instance plays the role of a Resource Manager that responds to AIS transaction management. The XAGW manager is an independent component which is not a part of the Oracle RDB data source driver.

Installing XA-related Shareable Libraries


In order to use two-phase commit, XA-related shareable libraries need to be installed.
To install XA-related shareable libraries
1. Install the HP DECdtm Distributed Transaction Manager.
2. Install the XA-related shareable libraries:

$ install add sys$share:ddtm$xg_ss.exe/prot/share
$ install add sys$share:ddtm$xa_ss.exe/prot/share

3. Use the XGCP control program utility to create a new XG gateway log with the same name as the name of the machine that is within the OpenVMS cluster:

$ run sys$system:xgcp
XA Gateway Control Program V1.0
XGCP> create_log ALPHA
XGCP>

where ALPHA is the machine name.
4. Run the server using the XGCP program utility:

XGCP> start_server

5. Define a logical name to point to the XAGW_Manager:

$ def XAGW_MANAGER navroot:[bin]XAGW_MANAGER.EXE

Security
The Oracle RDB data source driver is not actively involved in applying or enforcing security policy. All security policies and rules are defined in and applied by the Oracle database engine.

Oracle RDB Data Types


This table lists the generic Oracle RDB types as they are mapped to AIS types.
Table 52-3 Corresponding AIS and Oracle RDB Data Types

Common AIS Mapped Data Type   Generic Oracle RDB SQL Types
DT_TYPE_T_                    CHAR
DT_TYPE_RDB_DBKEY_            DBKEY
DT_TYPE_VT_                   VARCHAR
DT_TYPE_ADT_                  DATE, SQLDA2_DATETIME
DT_TYPE_G_, DT_TYPE_F_        FLOAT
DT_TYPE_P_, DT_TYPE_NL_       DECIMAL
DT_TYPE_L_                    INTEGER
DT_TYPE_LS_                   Scaled INTEGER
DT_TYPE_W_                    SMALLINT
DT_TYPE_WS_                   Scaled SMALLINT
DT_TYPE_B_                    BYTE
DT_TYPE_BS_                   Scaled BYTE
DT_TYPE_Q_                    QUADWORD
DT_TYPE_OPAQUE_               SEGSTRING
DT_TYPE_FILLER_               Others

This table shows how Attunity Connect maps data types in a CREATE TABLE statement to Oracle RDB data types.
Table 52-4 CREATE TABLE Data Types

AIS Generic Type      Oracle RDB CREATE TABLE
...                   Text
DATE of any type      DATE
DT_TYPE_OPAQUE_       LIST OF BYTE VARYING(x)
STRING (scaled)       LIST OF BYTE VARYING(x)
DT_TYPE_CSTRING_      VARCHAR(x)
DT_TYPE_T_            CHAR(x)
DT_TYPE_B_            TINYINT
DT_TYPE_W_            SMALLINT
DT_TYPE_L_            INTEGER
Scaled                DECIMAL(x, y)
DT_TYPE_F_            REAL
DT_TYPE_F_            DOUBLE PRECISION

Defining the Oracle RDB Data Source


The process of defining an Oracle RDB data source consists of the following tasks:

Defining the Oracle RDB Data Source Connection
Configuring the Oracle RDB Data Source Properties


Defining the Oracle RDB Data Source Connection


The Oracle RDB data source connection is set using the Design perspective, Configuration view in Attunity Studio.
To define the data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Oracle RDB data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Oracle RDB data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.

7. 8. 9.

In the Name field, enter a name for the new data source. Select RDBSQL from the Type list. Click Next. The Data Source Connect String screen is displayed.

10. Enter the connect string as follows:

Database file path: Specify the full path name of the database. You can specify a logical name. This is useful if the logical database is distributed across multiple physical databases. For example, if the logical database is distributed across two physical databases called BOSTON_DB and PARIS_DB, you can define the logical name as follows:
define ALL_SITES BOSTON_DB,PARIS_DB

After defining this logical, you can specify ALL_SITES as the Database file path. The driver translates the logical name before binding.
Note:

Attunity Connect does not natively support multi schema databases. As a workaround to use multi schema databases via their stored names, you must explicitly disable multi schema mode. You do this by specifying the following in the Database File Path:

database_name multischema is off

11. Click Finish.

Configuring the Oracle RDB Data Source Properties


After defining the connection, you set the data source properties.

To configure the Oracle RDB data source properties
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Oracle RDB data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the Oracle RDB data source and select Open. The Configuration editor is displayed.

Figure 52-1 Oracle RDB Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connect string in Defining the Oracle RDB Data Source Connection.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:

   User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
   User name: Enter the name of a user with access to this data source.
   Password: Enter the password for the user with access to this data source.
   Confirm Password: Enter the password again, to ensure it was entered correctly.

9. Configure the data source driver properties as required. For a description of the available parameters, see Configuration Properties.
10. Click Finish.

Testing the Oracle RDB Data Source


You can perform the following tests on the Oracle RDB data source:

Connection test: This tests the physical connection to the data source.
Query test: This test runs an SQL SELECT query against the data source.

These tests are described in the following procedures:

To test the connection to the Oracle RDB data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Oracle RDB data source.
4. Expand the binding with the Oracle RDB data source.
5. Expand the Data sources folder.
6. Right-click the required Oracle RDB data source, and select Test. The Test Wizard screen opens.
7. Select Navigator from the Active Workspace Name list, and click Next. The system now tests the connection to the data source, and returns the test result status.
8. Click Finish to exit the Test wizard.

To test the Oracle RDB data source by query
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Oracle RDB data source.
4. Expand the binding with the Oracle RDB data source.
5. Expand the Data sources folder.
6. Right-click your Oracle RDB data source, and select Query Tool. The Select Workspace screen opens.
7. Select Navigator and click OK. The Query Tool opens in the Editor pane, with the Build Query tab displayed (see step 10).
8. Select the required query type from the Query Type list. The default is a SELECT-type query.
9. Locate and expand your Oracle RDB data source. The Oracle RDB data source tables are listed.
10. Drag the required table to the Table column, as shown in the following figure:

Figure 52-2 The Query Tool screen

11. Click Execute query. The Query Result tab opens, displaying the results of the query.
12. Close the Query Tool in the Editor pane.

Sample Log File


Log files are used for troubleshooting and error handling. The log file is generated when the driverTrace debug binding parameter is set to true. The log file includes various information concerning the functions used or called by the driver, queries executed, data sources accessed, and so on. First, you need to create the log file.

To create a log file
1. Open Attunity Studio.
2. From the main menu, click Windows, Preferences. The Preferences screen is displayed.

Figure 52-3 Studio Preferences

3. In the left pane, click Studio.
4. Click the Advanced tab.
5. Select the Show advanced environment parameters check box.
6. Click OK.
7. In the Design perspective, Configuration view, right-click the binding with the Oracle RDB data source, and select Open.
8. Expand the Debug section.
9. Select GDB Trace and General Trace.
10. Execute the following query: select * from nation limit to 3 rows.

The following is a sample log file output:

Example 52-1 Sample Oracle RDB Log File

Attunity Server Log (V4.9.0.0, ALPHA-VMS) Started at 2006-04-25T12:51:12 Licensed by ISG LTD. on 02-APR-2000 (001001205) Licensed to ISG LTD for <all providers> on 194.90.22.* (<no platform>) Licensed by ATTUNITY LTD. on 09-AUG-2000 (001001237) Licensed to ATTUNITY for <all providers> on 194.90.22.* (<all platforms>) RDB Execute: CONNECT AS 'NAV_READmyRDBSQL'

RDB Execute: ATTACH 'ALIAS MYRDBSQL FILENAME US:[TEST.APTEST.APP.RDB70]MAIN_ DB.RDB' RDB Execute: SET TRANSACTION ON MYRDBSQL USING (READ ONLY NOWAIT )

COMMIT NAV_READmyRDBSQL


RDB Execute: SET TRANSACTION ON MYRDBSQL USING (READ ONLY NOWAIT

COMMIT NAV_READmyRDBSQL

nvOUT (VER:[WORK]QP_SQTXT.C;6 56): select * from nation limit to 3 rows nvRETURN (VER:[WORK]QPSYNON.C;6 1140): -1 RDB Execute: SET TRANSACTION ON MYRDBSQL USING (READ ONLY NOWAIT

SELECT T0000.N_NATIONKEY, T0000.N_NAME, T0000.N_REGIONKEY, T0000.N_COMMENT FROM MYRDBSQL.nation T0000

<<<<<<<<<<<<<<<<<<<

Execution Strategy Begin <<<<<<<<<<<<<<<<<<<<<<<<<<<<

Original SQL: select * from nation limit to 3 rows

Accessing Database 'myRDBSQL' with SQL: SELECT T0000.N_NATIONKEY, T0000.N_NAME, T0000.N_REGIONKEY, T0000.N_COMMENT FROM MYRDBSQL.nation T0000

>>>>>>>>>>>>>>>>>>>> Execution Strategy End >>>>>>>>>>>>>>>>>>>>>>>>>>>> nvOUT (VER:[WORK]QPSQLCSH.C;6 140): ---------------------------> Using Cached QSpec SELECT T0000.N_NATIONKEY, T0000.N_NAME, T0000.N_REGIONKEY, T0000.N_COMMENT FROM MYRDBSQL.nation T0000 nvRETURN (VER:[WORK]DRVIUNWN.C;6 804): -1210 (Last message occurred 2 times) Disabled FilePool Cleanup(DB=___sys, FilePool Size=0) COMMIT NAV_READmyRDBSQL RDB Execute: disconnect 'NAV_READmyRDBSQL' FilePool Shutdown(DB=___SYS, FilePool Size=0) Closing log file at TUE APR 25 12:52:08 2006


53
RMS Data Source (OpenVMS Only)
This section contains the following topics:

Overview
Functionality
Configuration Properties
Transaction Support
Data Types
Defining the RMS Data Source
Setting Up the RMS Data Source Metadata with the Import Manager

Overview
The following sections provide information about defining and configuring the RMS Data Source.

Supported Versions and Platforms


RMS data sources can be used with OpenVMS platforms only. For information on which OpenVMS versions AIS supports, see Attunity Integration Suite Supported Systems and Resources.

Supported Features
The RMS data source supports the following key features:

Hierarchical Queries
RFA Usage: The RFA can be used as a column in a WHERE clause when the RMS record is indexed (see the sketch below).
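The following is a minimal hedged sketch of RFA usage in a WHERE clause. The table name EMP, its columns, and the assumption that the record's file address is exposed as a queryable column named RFA are illustrative only, not confirmed identifiers from the product documentation.

    -- Hedged sketch: EMP is a hypothetical indexed RMS table; RFA is assumed
    -- to be exposed as a queryable column.
    SELECT RFA, EMP_NAME FROM EMP WHERE EMP_ID = 7
    -- Re-address the same record directly by the saved RFA value.
    SELECT EMP_NAME FROM EMP WHERE RFA = ?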

Functionality
Record level locking is supported. The lock is released only when the file is closed or when the record is re-read without a lock being applied.


Configuration Properties
The following properties can be configured for the RMS data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

disableExplicitSelect: When true, this parameter disables the ExplicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.
filepoolCloseOnTransaction: When true, this parameter specifies that all files in the filepool for this data source close at each end of transaction (commit or rollback).
filepoolSize: This parameter specifies how many instances of a file from the filepool may be open concurrently.
filepoolSizePerFile: Specifies how many instances of a file from the filepool may be open concurrently for each file.
lockWait: Specifies whether or not the data source waits for a locked record to become unlocked, or returns a message that the record is locked.
newFileLocation: The Data directory in the connect string. This parameter specifies the location of the RMS files and indexes you create with CREATE TABLE and CREATE INDEX statements. You must specify the full path for the directory.
useGlobalFilepool: When true, this parameter specifies that a global filepool that can span more than one session is used.
useRmsJournal: Specifies whether or not RMS journalling is enabled when the SET FILE/RU_JOURNAL command is issued under OpenVMS. The SET FILE/RU_JOURNAL command marks an RMS file for recovery unit journalling. Any RMS table used in a transaction where journalling applies must be defined with an index. The following table lists the SQL statements that are used with RMS journalling and their OpenVMS equivalents (a hedged usage sketch follows the table).

Table 53-1 OpenVMS Equivalents of SQL Statements

SQL         OpenVMS
Begin       SYS$START_TRANS
Commit      SYS$END_TRANS
Rollback    SYS$ABORT_TRANS
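The following is a hedged sketch of the mapping above: EMP is a hypothetical indexed RMS table already marked with SET FILE/RU_JOURNAL, and the exact begin/commit verbs depend on the client interface used.

    BEGIN TRANSACTION                                -- issued as SYS$START_TRANS
    UPDATE EMP SET SALARY = 1000 WHERE EMP_ID = 7
    COMMIT                                           -- issued as SYS$END_TRANS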

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Transaction Support
The RMS Data Source supports Two-phase Commit and can fully participate in a distributed Transaction when the transaction environment property convertAllToDistributed is set to true. You use RMS with its two-phase commit capability through an XA connection using the OpenVMS integrated DTM transaction processing manager.


Multiple concurrent transactions are not supported. Also refer to the useRmsJournal data source property described in Configuration Properties.

Data Types
This table shows how Attunity Connect maps data types in a CREATE TABLE statement to RMS data types:

Table 53-2 CREATE TABLE Data Types

CREATE TABLE        RMS
Binary              -
Char[(m)]           Char[(m)]
Date                Date+time
Double              Double
Float               Float
Image               -
Integer             Integer
Numeric[(p[,s])]    Numeric(p,s)
Smallint            Smallint
Text                -
Tinyint             Tinyint
Varchar(m)          Varchar(m)
See also ADD Supported Data Types.
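As a hedged illustration of the Table 53-2 mapping (the table name EMP and its columns are hypothetical):

    -- Creates an RMS file in the directory given by the Data Location /
    -- newFileLocation setting.
    CREATE TABLE EMP (
        EMP_ID    INTEGER,       -- created as RMS Integer
        EMP_NAME  CHAR(30),      -- created as RMS Char(30)
        HIRED     DATE,          -- created as RMS Date+time
        SALARY    NUMERIC(8,2)   -- created as RMS Numeric(8,2)
    )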

Defining the RMS Data Source


The process of defining an RMS data source consists of two tasks:

Defining the RMS Data Source Connection
Configuring the RMS Data Source

Defining the RMS Data Source Connection


The RMS data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your RMS data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the RMS data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.
7. In the Name field, enter a name for the new data source.
8. Select RMS from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the RMS connect string as follows:

    Data Location: Enter the directory where the RMS files and indexes you create with CREATE TABLE and CREATE INDEX statements reside. You must specify the full path for the directory. If a value is not specified, created files are written to the DEF directory under the directory where AIS is installed. The value specified is used for the Data file field of the Design perspective, Metadata tab in Attunity Studio.

11. Click Finish.

Configuring the RMS Data Source


After setting the binding, you set the data source properties.

To configure the RMS data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your RMS data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the RMS data source and select Open. The Configuration editor is displayed.

Figure 53-1 RMS Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connect string in Defining the RMS Data Source Connection.
8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties. After you set up the data source, you must define Attunity metadata describing the RMS data.

Setting Up the RMS Data Source Metadata with the Import Manager
The RMS data source requires Attunity metadata. You can import the metadata from COBOL copybooks or CDD metadata. If no COBOL copybooks or CDD metadata exist that describe the RMS records, the metadata must be defined manually. For more information on defining metadata, see Managing Data Source Metadata and Data Source Metadata Overview.

If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective, Metadata tab. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first six columns are ignored or not), first import the metadata from the copybooks with the same settings, and then import the metadata from the other copybooks.

COBOL copybooks are required for this import procedure. These copybooks are copied to the machine running Attunity Studio. This process has the following steps:

Selecting the Input Files
Applying Filters
Selecting Tables
Import Manipulation
Metadata Model Selection
Import the Metadata

Selecting the Input Files


This section describes the steps required to select the input files that are used to generate the metadata.

To select the input files
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, right-click the data source and select Show Metadata View. The Metadata tab is displayed with the data source displayed in the Metadata view.
3. Right-click Imports and select New Import.
4. Enter a name for the import. The name can contain letters, numbers and the underscore character.
5. Select one of the following import types:

   RMS Import Manager
   COBOL Import Manager for Data Sources

6. Click Finish. The Metadata Import wizard is displayed.
7. Click Add in the Import wizard to add COBOL copybooks. The Add Resource screen is displayed, providing the option of selecting files from the local machine or copying the files from another machine using FTP. The following figure shows the Add Resource screen.

Figure 53-2 Add Resource Screen

8. If the files are on another machine, right-click My FTP Sites and select Add.
9. Set the FTP data connection by entering the server name where the COBOL copybooks reside and, if not using anonymous access, enter a valid username and password to access the machine.
10. To browse and transfer files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
11. Select the files to import and click Finish to start the transfer.

The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In such a case, repeat the import process. You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. The selected files are displayed in the Get Input Files screen of the wizard, as shown in the following figure.

Figure 53-3 Import Wizard

12. To manipulate table information or the fields in the table, right-click the table and select the option you want. The following options are available:

    Fields manipulation: Access the Fields Manipulation screen to customize the field definitions.
    Rename: Rename a table name. This option is used especially when more than one table is generated from the COBOL with the same name.
    Set data location: Set the physical location of the data file for the table.
    Set table attributes: Set the table attributes. The table attributes are described in Table Attributes.
    XSL manipulation: Specify an XSL transformation or JDOM document that is used to transform the table definition.

13. Click Next to go to the Applying Filters step.

Applying Filters
This section describes the steps required to apply filters on the COBOL copybook files used to generate the metadata. It continues the Selecting the Input Files step.

To apply filters
1. Click Next. The Apply Filters step is displayed in the editor.

Figure 53-4 Apply Filters

2. Apply filters to the copybooks, as needed. The following COBOL filters are available:

   COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-61 to treat COMP-6 as a COMP data type or COMP-62 to treat COMP-6 as a COMP-3 data type.
   Compiler source: The compiler vendor.
   Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
   Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
   Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
   Prefix nested column: Prefix all nested columns with the previous level heading.
   Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
   Case sensitive: Specifies whether to consider case sensitivity or not.
   Find: Searches for the specified value.
   Replace with: Replaces the value specified in the Find field with the value specified here.

3. Click Next to go to the Selecting Tables step.

Selecting Tables
This section describes the steps required to select the tables from the COBOL copybooks. The following procedure continues the Applying Filters step.

To select the tables
1. From the Select Tables screen, select the tables that you want to access. To select all tables, click Select All. To clear all the selected tables, click Unselect All. The following figure shows the Select Tables screen.

Figure 53-5 Select Tables

The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables.

2. Select the tables that you want to access (that require Attunity metadata) and then click Next to go to the Import Manipulation step.

Import Manipulation
This section describes the operations available for manipulating the imported records (tables). It continues the Selecting Tables step. The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables. You can manipulate the general table data in the Import Manipulation Screen.

To manipulate the table metadata
1. From the Import Manipulation screen (see the Import Manipulation Screen figure), right-click a table record marked with a validation error, and select the relevant operation. See the Table Manipulation Options table for the available operations.
2. Repeat step 1 for all table records marked with a validation error. You resolve the issues in the Import Manipulation Screen. Once all the validation error issues have been resolved, the Import Manipulation screen is displayed with no error indicators.
3. Click Next to continue to the Metadata Model Selection step.

Import Manipulation Screen


The Import Manipulation screen is shown in the following figure:

Figure 53-6 Import Manipulation Screen

The upper area of the screen lists the imported files and their validation status. The metadata source and location are also listed. The Validation tab at the lower area of the screen displays information about what needs to be resolved in order to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location).

The following operations are available in the Import Manipulation screen:

Resolving table names, where tables with the same name are generated from different files during the import.
Selecting the physical location for the data.
Selecting table attributes.
Manipulating the fields generated from the COBOL, as follows:
  Merging sequential fields into one (for simple fields).
  Resolving variants by either marking a selector field or specifying that only one case of the variant is relevant.
  Adding, deleting, hiding, or renaming fields.
  Changing a data type.
  Setting the field size and scale.
  Changing the order of the fields.
  Setting a field as nullable.
  Selecting a counter field for fields with dimensions (arrays). You can select the array counter field from a list of potential fields.
  Setting column-wise normalization for fields with dimensions (arrays). You can create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
  Creating arrays and setting the array dimension.

The following table lists and describes the available operations when you right-click a table entry:

Table 53-3 Table Manipulation Options

Fields Manipulation: Customizes the field definitions, using the Field Manipulation screen. You can also access this screen by double-clicking the required table record.
Rename: Renames a table. This option is used especially when more than one table with the same name is generated from the COBOL.
Set data location: Sets the physical location of the data file for the table.
Set table attributes: Sets the table attributes.
XSL manipulation: Specifies an XSL transformation or JDOM document that is used to transform the table definitions.
Remove: Removes the table record.

You can manipulate the data in the table fields in the Field Manipulation Screen. Double-click a line in the Import Manipulation Screen to open the Field Manipulation Screen.


Field Manipulation Screen


The Field Manipulation screen lets you make changes to fields in a selected table. You get to the Field Manipulation screen through the Import Manipulation Screen. The Field Manipulation screen is shown in the following figure.
Figure 53-7 Field Manipulation Screen

You can carry out all of the available tasks in this screen through the menu or toolbar. You can also right-click anywhere in the screen and select any of the options available in the main menus from a shortcut menu. The following table describes the tasks that are done in this screen.

Table 53-4 Field Manipulation Screen Commands

General menu

Undo: Click to undo the last change made in the Field Manipulation screen.

Select fixed offset: The offset of a field is usually calculated dynamically by the server at run time according to the offset and size of the preceding column. Select this option to override this calculation and specify a fixed offset at design time. This can happen if there is a part of the buffer that you want to skip. When you select a fixed offset, you pin the offset for that column. The indicated value is used at run time for the column instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.

Test import tables: Select this option to create an SQL statement to test the import table. You can base the statement on the Full table or Selected columns. When you select this option, a screen opens with an SQL statement based on the table or column entered at the bottom of the screen. Enter the following in this screen:
  Data file name: Enter the name of the file that contains the data you want to query.
  Limit query results: Select this if you want to limit the results to a specified number of rows. Enter the number of rows you want returned in the following field. 100 is the default value.
  Define Where Clause: Click Add to select a column to use in a WHERE clause. In the table below, you can add the operator, value, and other information. Click the columns to make the selections. To remove a WHERE clause, select the row with the WHERE clause you want to remove and then click Remove.
  The resulting SQL statement, with any WHERE clauses that you added, is displayed at the bottom of the screen. Click OK to send the query and test the table.

Attribute menu

Change data type: Select Change data type from the Attribute menu to activate the Type column, or click the Type column and select a new data type from the drop-down list.

Create array: This command allows you to add an array dimension to the field. Select this command to open the Create Array screen. Enter a number in the Array Dimension field and click OK to create the array for the column.

Hide/Reveal field: Select a row from the Field Manipulation screen and select Hide field to hide the selected field from that row. If the field is hidden, you can select Reveal field.

Set dimension: Select this to change or set a dimension for a field that has an array. Select Set dimension to open the Set Dimension screen. Edit the entry in the Array Dimension field and click OK to set the dimension for the selected array.

Set field attribute: Select a row to set or edit the attributes for the field in the row. Select Set field attribute to open the Field Attribute screen. Click the Value column for any of the properties listed and enter a new value or select a value from a drop-down list.

Nullable/Not nullable: Select Nullable to activate the Nullable column in the Field Manipulation screen. You can also click in the column. Select the check box to make the field nullable; clear the check box to make the field not nullable.

Set scale: Select this to activate the Scale column, or click in the column and enter the number of places to display after the decimal point in a data type.

Set size: Select this to activate the Size column, or click in the column and enter the total number of characters for a data type.

Field menu

Add: Select this command or use the Add button to add a field to the table. If you select a row with a field (not a child of a field), you can add a child to that field. Select Add Field or Add Child to open a screen where you enter the name of the field or child; click OK to add the field or child to the table.

Delete field: Select a row and then select Delete Field, or click the Delete Field button, to delete the field in the selected row.

Move up or down: Select a row and use the arrows to move it up or down in the list.

Rename field: Select Rename field to make the Name field active. Change the name and then click outside of the field.

Structures menu

Columnwise Normalization: Select Columnwise Normalization to create new fields instead of the array field, where the number of generated fields is determined by the array dimension.

Combining sequential fields: Select Combining sequential fields to combine two or more sequential fields into one simple field. A dialog box opens; enter the following information:
  First field name: Select the first field in the table to include in the combined field.
  End field name: Select the last field to be included in the combined field. Make sure that the fields are sequential.
  Enter field name: Enter a name for the new combined field.

Flatten group: Select Flatten Group to flatten a field that is an array. This field must be defined as Group for its data type. When you flatten an array field, the entries in the array are spread into a new table, with each entry in its own field. A screen provides the following flattening options:
  Select Recursive operation to repeat the flattening process on all levels. For example, if there are multiple child fields in this group, you can place the values for each field into the new table when you select this option.
  Select Use parent name as prefix to use the name of the parent field as a prefix when creating the new fields. For example, if the parent field is called Car Details and you have a child in the array called Color, a new field created in the flattening operation is called Car Details_Color.

Mark selector: Select Mark selector to select the selector field for a variant. This is available only for variant data types. Select the selector field from the screen that opens.

Replace variant: Select Replace variant to replace a variant's selector field.

Select counter field: Select counter field opens a screen where you select a field that is the counter for an array dimension.

Metadata Model Selection


This section lets you generate virtual and sequential views for imported tables containing arrays. In addition, you can configure the properties of the generated views. It continues the Import Manipulation step. This allows you to flatten tables that contain arrays. In the Metadata Model Selection step, you can configure values that apply to all tables in the import, or set specific settings for each table.

To configure the metadata model, select one of the following:

Default values for all tables: Select this if you want to configure the same values for all the tables in the import. Make the following selections when using this option:
  Generate sequential view: Select this to map non-relational files to a single table.
  Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
  Include row number column: Select one of the following:
    true: Select true to include a column that specifies the row number in the virtual or sequential view. This applies to this table only, even if the data source is not configured to include the row number column.
    false: Select false to not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
    default: Select default to use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
  Inherit all parent columns: Select one of the following:
    true: Select true for virtual views to include all the columns in the parent record. This applies to this table only, even if the data source is not configured to include all of the parent record columns.
    false: Select false so that virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
    default: Select default to use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.

Specific virtual array view settings per table: Select this to set different values for each table in the import. This overrides the data source default for that table. Make the selections in the table under this selection. See the item above for an explanation.

The Metadata Model Selection screen is shown in the following figure:

Figure 53-8 The Metadata Model Selection Screen

Import the Metadata


This section describes the steps required to import the metadata to the target computer. It continues the Metadata Model Selection step. You can import the metadata to the computer where the data source is located now, or import it later (in case the target computer is not available).

To transfer the metadata
1. Select Yes to transfer the metadata to the target computer immediately, or No to transfer the metadata later.
2. Click Finish.

The Import Metadata screen is shown in the following figure:

Figure 53-9 The Import Metadata screen

Importing Attunity Metadata Using the RMS_CDD Import Utility


The RMS data source requires Attunity Metadata. You can import the metadata from COBOL copybooks or CDD metadata. If the metadata exists in a CDD data dictionary, you can use the RMS_CDD import utility to import this metadata to Attunity Metadata. To generate Attunity metadata, use the following command line (activated directly from DCL):
$ CDD_ADL [cdd_dir] [record_spec] ds_name [filename_table] [CDD/Plus_version_major] [organization] [basename] [options]

Note: Use double quotes (") to hold the position of a parameter when you do not specify its value and it is followed by a parameter whose value is specified. For example, $ CDD_ADL "" sales. Activation of this utility is based on environment symbols defined by the login file that resides in the BIN directory under the directory where AIS is installed. You can always replace the environment symbol with the appropriate entry.

where:

cdd_dir: The location of the CDD records.

record_spec: The set of records that you want converted. All CDD records matching record_spec in the specified CDD directory are converted. If you want all the record information in the directory converted, do not specify a value for record_spec.

ds_name: The name of a data source defined in the binding. The imported metadata is stored as ADD metadata in the repository for this data source.

filename_table: A text file containing a list of records and the names of their data files. Each row in this file has two entries: record_name and physical_Cdd_data_file_name (which is used as the value for the Data file field for the table in the Design perspective, Metadata tab in Attunity Studio). If a table is not listed in this text file, the entry for the Data file field for the table defaults to table_FIL, where table is the name of the table. If this text file does not exist, the names for the Data file field specifying the tables default to table_FIL, where table is the name of the table. The text file specified in this parameter contains entries similar to the following on OpenVMS platforms:

orders us5:[attunity.acme]orders.inx
purchase us5:[attunity.acme]purchase.inx

Note: In cases where the filename defaults to table_FIL, the filename must be changed to the correct name in order to access the data. This is done using the Design perspective, Metadata tab of Attunity Studio, or NAV_UTIL EDIT.

CDD/Plus_version_major: The CDD/Plus version major, which determines the separator between the record_name and version_number in the output CDD records, and the repository metadata format. The values available for this parameter are:
  0: Specifies that the metadata is in DMU format and the semicolon (;) is the separator between record name and version number.
  4: Specifies that the metadata is in CDO format and the semicolon (;) is the separator between record name and version number.
  x (anything else, or blank): Specifies that the metadata is in CDO format and the open parenthesis (() is the separator between record name and version number.
  If you are using a version of CDD prior to V4.0, specify "0" (since CDO format was introduced in version 4.0).

basename: Specifies the user-defined name of the intermediate files used during the import operation.

options: Enables you to specify the following options:
  d: Specifies that all intermediate files are saved. You can check these files if problems occur in the conversion.
  c: Specifies that the column name is used for an array name, instead of the concatenation of the parent table name with the child table name. If a column name is not unique in a structure (as when a structure includes another structure, which contains a column with the same name as a column in the parent structure), the nested column name is suffixed with the nested structure name.

Example
$ CDD_ADL cdd$top.personnel sales rmsdemo

To display online help for this utility, run the command with help as the only parameter.


54
SQL Server Data Source (Windows Only)
This section includes the following topics:

Overview
Supported Versions and Platforms
Configuration Properties
Transaction Support
Data Types
Defining an SQL Server Data Source

Overview
Attunity provides an ODBC-based SQL Server driver. In Attunity Studio, this driver appears as SQL Server; in the binding configuration, it appears as MSSQLODBC. The SQL Server driver is supported for existing applications that use it.
Note: The SQL Server driver requires an SQL Server connection per SQL statement. This can cause problems when you work with multiple data sources referencing more than a single SQL Server database, or when you work with Microsoft's InterDev application (since it issues an SQL_CONNECT command every time it executes a query). You can use the SQL Server Enterprise Manager to increase the number of available connections.

Table names in SQL Server must be less than or equal to 64 characters to be usable.

The security information required to access SQL Server is taken from the machine where AIS and SQL Server are installed. The drivers support:

Integrated (trusted) security mode
Standard security mode

Supported Versions and Platforms


SQL Server data sources can be used with Windows platforms only.

For information on supported SQL Server versions, see Attunity Integration Suite Supported Systems and Resources.

Configuration Properties
The following properties can be configured for the SQL Server data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

ansiNull: This parameter specifies whether or not the treatment of NULLs is compliant with the ANSI standard. The default is true.

cursorMode: The cursor type that is used. Select one of the following modes:
  clientMultipleConnections: This lets you work with client-side cursors with multiple connections.
  serverMultipleConnections: This lets you work with server-side cursors with multiple connections.
  clientSingleConnection: This lets you work with client-side cursors with a single connection.
  serverSingleConnection: This lets you work with server-side cursors with a single connection.
  marsConnection: This lets you work with multiple active result sets (MARS).

isolationLevel: This parameter specifies the default isolation level for the data source, as follows:
  readUncommitted: This value specifies that corrupt data is not to be read. This is the lowest isolation level.
  readCommitted: This value specifies that only the data committed before the query began is displayed.
  serializable: This value specifies that the data is isolated serially. Treats data as if transactions are executed sequentially.
  repeatableRead: This value specifies that data used in a query is locked and cannot be used by another query nor updated by another transaction.

  Note: If the selected level is not supported by the data source, then the next highest level is used.

numericNullable: This parameter specifies that a numeric field can receive null values.

timeout: This parameter specifies the timeout on any SQL Server API call, via the dbsettime API.
Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.


Transaction Support
The SQL Server drivers support two-phase commit and can fully participate in a distributed transaction when the transaction environment parameter convertAllToDistributed is set to true. You can use SQL Server with its two-phase commit capability both under MTS and directly through an XA connection. In both cases, Microsoft DTC must be running on the server. If you are working under MTS, start an OLE transaction; the SQL Server data source is automatically included in the distributed transaction. If the connection to the data is through an XA connection, the connection is made automatically. The daemon server mode must be configured to Single-client mode (see Server Mode). To use distributed transactions from an ODBC-based application, ensure that AUTOCOMMIT is set to 0.

Data Types
This table shows how Attunity Connect maps SQL Server data types to OLE DB and ODBC data types.

Table 54-1 Mapping SQL Server Data Types

SQL Server        OLE DB                ODBC
Binary            DBTYPE_BYTES          SQL_BINARY
Bit               DBTYPE_I2             SQL_TINYINT
Char(m<256)       DBTYPE_STR            SQL_CHAR
Char(m>255)       DBTYPE_STR            SQL_LONGVARCHAR (1)
Datetime          DBTYPE_DBTIMESTAMP    SQL_TIMESTAMP
Decimal           DBTYPE_NUMERIC        SQL_NUMERIC
Float             DBTYPE_R8             SQL_DOUBLE
Image             DBTYPE_BYTES          SQL_LONGVARBINARY
Integer           DBTYPE_I4             SQL_INTEGER
Money             DBTYPE_NUMERIC        SQL_NUMERIC(19,4)
Nchar             DBTYPE_STR            SQL_VARCHAR
Ntext             DBTYPE_STR            SQL_LONGVARCHAR (1)
Numeric           DBTYPE_NUMERIC        SQL_NUMERIC
Nvarchar          DBTYPE_STR            SQL_VARCHAR
Real              DBTYPE_R4             SQL_REAL
Small Datetime    DBTYPE_DBTIMESTAMP    SQL_TIMESTAMP
Small Int         DBTYPE_I2             SQL_SMALLINT
Small Money       DBTYPE_NUMERIC        SQL_NUMERIC(10,4)
Text              DBTYPE_STR            SQL_LONGVARCHAR (1)
Timestamp         DBTYPE_BYTES          SQL_BINARY
TinyInt           DBTYPE_I2             SQL_TINYINT
Varbinary         DBTYPE_BYTES          SQL_BINARY
Varchar(m<256)    DBTYPE_STR            SQL_VARCHAR
Varchar(m>255)    DBTYPE_STR            SQL_LONGVARCHAR (1)

Notes:
1. Precision of 2147483647. If the <odbc longVarcharLenAsBlob> parameter is set to true in the AIS environment settings, then precision of m.
2. Supported by the SQL Server (ODBC) or MSSQLODBC driver.
3. The column definition is returned as varchar(255) and the data is truncated to 255 characters.

This table shows how Attunity Connect maps data types in a CREATE TABLE statement to SQL Server data types.

Table 54-2 CREATE TABLE Data Types

CREATE TABLE      SQL Server
Binary            Raw
Char[(m)]         Char[(m)]
Date              Datetime
Double            Float
Float             Real
Image             Image
Image (m)         Binary(m)
Integer           Integer
Numeric           Float
Numeric(p[,s])    Numeric(p,s)
Smallint          Smallint
Text              Text
Time              Datetime
Timestamp         Datetime
Tinyint           Tinyint
Varchar(m)        Varchar(m)
See also ADD Supported Data Types.
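As a hedged illustration of the Table 54-2 mapping (the table name ORDERS and its columns are hypothetical):

    CREATE TABLE ORDERS (
        ORDER_ID  INTEGER,        -- created as SQL Server Integer
        PLACED    DATE,           -- created as SQL Server Datetime
        TOTAL     NUMERIC(10,2),  -- created as SQL Server Numeric(10,2)
        MEMO      VARCHAR(200)    -- created as SQL Server Varchar(200)
    )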

Defining an SQL Server Data Source


The process of defining an SQL Server data source consists of two tasks:

Defining the SQL Server Data Source Connection
Configuring the SQL Server Data Source Properties

Defining the SQL Server Data Source Connection


The SQL Server data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your SQL Server data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the SQL Server data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.
7. In the Name field, enter a name for the new data source.
8. Select SQL Server from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the connect string as follows:

    SQL Server name: Enter the name of the server machine for the SQL Server data.
    Database name: Enter the name of the database.

    Note: If you enter the Database name only, without the Server name, the driver uses the following subtree of the Windows registry:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo

11. Click Finish.

Configuring the SQL Server Data Source Properties


After defining the connection, you set the data source properties.

To configure the data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your SQL Server data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the SQL Server data source and select Open. The Configuration editor is displayed.

Figure 54-1 SQL Server Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connect string in Defining the SQL Server Data Source Connection.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:

   User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
   User name: Enter the name of a user with access to this data source.
   Password: Enter the password for the user with access to this data source.
   Confirm Password: Enter the password again, to ensure it was entered correctly.

9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.


55
SQL/MP Data Source (HP NonStop Only)
This section contains the following topics:

Overview
Functionality
Configuration Properties
Transaction Support
Data Types
Defining the SQL/MP Data Source

Overview
The SQL/MP and Enscribe data sources and Pathway adapter share the same transaction, which automatically provides consistency between Enscribe and SQL/MP. As a result, you cannot start a new transaction for Enscribe when one is open for SQL/MP.
Note: When specifying a passthru query to SQL/MP, if the query is not within a transaction, you must include the words BROWSE ACCESS at the end of the query. Fully qualified table names in non-returning rowsets (such as UPDATE, INSERT, DELETE and DDL statements) must be delimited by double quotes ("). A hedged sketch of both rules follows this note.
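The following sketch illustrates both rules; the machine, volume, and table names are hypothetical.

    -- Passthru query outside a transaction: append BROWSE ACCESS.
    SELECT * FROM ODETAIL BROWSE ACCESS
    -- Non-returning statement with a fully qualified table name: quote the name.
    UPDATE "\MACH1.$DATA1.SALES.ODETAIL" SET QTY_ORDERED = 5 WHERE ORDERNUM = 100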

The SQL/MP driver uses the maxSqlCache parameter, which is set in the queryProcessor category of the binding environment. In addition to the Query Processor cache, the SQL/MP driver also caches SQL/MP queries for reuse. For details of the Query Processor parameter, see the queryProcessor category in Environment Properties.

Limitations

If you are using a view with a Left Outer Join, note that a Left Outer Join cannot contain multiple tables or a view.
Functions are not supported in SQL/MP; therefore, there is no delegation for queries that have functions in the select list.


Functionality
This section describes the following aspects of SQL/MP functionality:

Mapping SQL/MP Table Names
SQL/MP Primary Keys
Partitioned Tables
Transaction Support
Isolation Levels and Locking

Mapping SQL/MP Table Names


Since SQL/MP table naming conventions restrict names to eight characters and limit the character set that can be used, you can create a mapping file to map SQL/MP table names to logical names. Create a mapping file called NAVMAP in the subvolume where AIS is installed. In the NAVMAP file you define a section for each SQL/MP data source (defined in the binding settings) for which you want a mapping. Within each section you specify the mapping you want for each table name in the database, as it is identified in Attunity Connect. Enter the following syntax in the NAVMAP file for each table name:
table_alias = \machine_name.$volume_name.subvolume_name.filename

For example:
a1 = \mach1.$D3018.sqlmp.emp
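With this mapping in place, queries reference the alias; a minimal hedged sketch using a1 from the example above:

    -- a1 resolves to \mach1.$D3018.sqlmp.emp through the NAVMAP file.
    SELECT * FROM a1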

SQL/MP Primary Keys


SQL tables in SQL/MP always have a primary key. When you create an SQL table with no primary key, SQL/MP automatically adds a hidden column named SYSKEY, which forms the primary key for the table. This column is not included in SELECT * clauses or INSERT VALUES ( ) statements. It is recommended always to have a user-defined primary key, to improve access time to the data.

Note: When a CREATE TABLE statement is immediately followed by a CREATE UNIQUE INDEX statement, a table with a primary key is created, and therefore a SYSKEY column is not created.

Since SQL tables always have a primary key (either user-defined or SYSKEY), a unique index is always defined. The SQL/MP driver always generates table bookmarks consisting of the fields of the first unique index, thereby guaranteeing the uniqueness of the bookmark.
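Following the note above, a minimal hedged sketch of avoiding the hidden SYSKEY column (all names are hypothetical): because the CREATE UNIQUE INDEX statement immediately follows the CREATE TABLE statement, the table is created with a primary key and no SYSKEY column is added.

    CREATE TABLE $DATA1.sales.CUSTOMER (
        CUSTNUM  NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL,
        CUSTNAME CHAR (20) NO DEFAULT NOT NULL );
    CREATE UNIQUE INDEX XCUST ON $DATA1.sales.CUSTOMER (CUSTNUM);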

Partitioned Tables
You can create SQL/MP tables that are partitioned. The first (head) partition is always the partition with the lowest key range. Different partitions of one table can be registered in different catalogs, the only restriction being that a partition must be registered in a catalog of the same system as the partition itself.


In handling partitioned tables, you follow standard SQL/MP behavior. This includes the ability to refer to any one of the partitions and not necessarily to the head partition. For example, when an SQL/MP database has more than one partitioned table with the same table name, for the first table you use the short name and for the other tables you use the full path name of any one of the partitions. (Attunity Connect refers to the full path name by dropping the $ prefix and replacing the periods with underscores, as in volume_subvolume_tablename.) For information about defining an alias for the full path name, see Mapping SQL/MP Table Names.

Example

The following example creates a new SQL/MP catalog in $D0117.partcat and a table $DSMSCM.orders.ODETAIL that consists of two partitions. $DSMSCM.orders.ODETAIL is the first (head) partition and $D0117.orders.ODETAIL is the second partition.

CREATE TABLE $DSMSCM.orders.ODETAIL (
    ORDERNUM NUMERIC (6) UNSIGNED NO DEFAULT NOT NULL,
    PARTNUM NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL,
    UNIT_PRICE NUMERIC (8,2) NO DEFAULT NOT NULL,
    QTY_ORDERED NUMERIC (5) UNSIGNED NO DEFAULT NOT NULL,
    PRIMARY KEY ( ORDERNUM, PARTNUM ) )
CATALOG $D0117.partcat
ORGANIZATION KEY SEQUENCED
PARTITION ( $D0117.orders.ODETAIL
    CATALOG $D0117.partcat
    EXTENT (16368,64)
    MAXEXTENTS 650
    FIRST KEY 450000 )
EXTENT (16368,64)
MAXEXTENTS 650
BUFFERED
NO AUDIT;

Isolation Levels and Locking


The SQL/MP driver supports the following isolation levels:

Browse access
Stable access
Repeatable access

The isolation levels supported can be overwritten in the binding settings. This table lists the SQL/MP isolation levels for the IsolationLevel attribute.

Table 55-1 Isolation Level Attributes

IsolationLevel Attribute    Equivalent SQL/MP Isolation Level
readUncommitted             BROWSE ACCESS
readCommitted               STABLE ACCESS
repeatableRead              REPEATABLE ACCESS
serializable                REPEATABLE ACCESS


To change the isolation level, set the data source isolation level field to the desired isolation level. Once the isolation level is changed, all statements sent to SQL/MP are sent with the corresponding SQL/MP isolation level, whether under a transaction or not. When setting the SQL/MP isolation level to either REPEATABLE ACCESS or STABLE ACCESS, a transaction must be started explicitly. Otherwise, SQL/MP returns the following error:
8312 The statement cannot be executed because no TMF transaction is currently in progress. The error was detected for value-1.

The isolation level is used only within a Transaction. The SQL/MP data source supports locking a single row in the database table. SELECT statements are performed with BROWSE ACCESS, so there is no wait for the data to be unlocked by another application.

Configuration Properties
The following properties can be configured for the SQL/MP data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

disableExplicitSelect=true|false: When set to true, this parameter disables the explicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.

isolationLevel=value: This parameter specifies the default isolation level for the data source, as follows:
  readUncommitted: This value specifies that corrupt data is not to be read. This is the lowest isolation level.
  readCommitted: This value specifies that only the data committed before the query began is displayed.
  repeatableRead: This value specifies that data used in a query is locked and cannot be used by another query nor updated by another transaction.
  serializable: This value specifies that the data is isolated serially. Treats data as if transactions are executed sequentially.

  Note: If the specified level is not supported by the data source, then Attunity Connect defaults to the next highest level.

statementCacheSize=value: This parameter specifies the maximum number of SQL statements that are cached. The default value is 6.
Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Transaction Support
The SQL/MP driver supports one-phase commit. It can participate in a distributed transaction if it is the only one-phase commit data source being updated in the Transaction.


Note: Both the transaction environment properties convertAllToDistributed and useCommitConfirmTable must be set to true.

If Using Attunity Connect as a Stand-alone Transaction Coordinator is necessary, set convertAllToDistributed to true in the Configuration Properties.
If you want to use SQL/MP as a one-phase commit data source participating in a two-phase commit distributed transaction, set useCommitConfirmTable to true in the Configuration Properties.
Other than in the above conditions, do not set either of the above-mentioned parameters to true.

Data Types
This table shows how Attunity Connect maps SQL/MP data types to OLE DB and ODBC data types.

Table 55-2 Mapping SQL/MP Data Types

SQL/MP                       OLE DB                ODBC
Char(m<256)                  DBTYPE_STR            SQL_CHAR
Char(m>255)                  DBTYPE_STR            SQL_LONGVARCHAR (1)
Date                         DBTYPE_DBTIME         SQL_DATE
Datetime year to day         DBTYPE_DBTIMESTAMP    SQL_TIMESTAMP
Datetime year to minute      DBTYPE_DBTIMESTAMP    SQL_TIMESTAMP
Datetime year to fraction    DBTYPE_DBTIMESTAMP    SQL_TIMESTAMP
Double Precision             DBTYPE_R8             SQL_DOUBLE
Float                        DBTYPE_R4             SQL_REAL
Integer                      DBTYPE_I4             SQL_INTEGER
Large Integer                DBTYPE_NUMERIC        SQL_NUMERIC
Numeric (x,y)                DBTYPE_NUMERIC        SQL_NUMERIC
Real                         DBTYPE_R4             SQL_REAL
Small Integer                DBTYPE_I2             SQL_SMALLINT
Time                         DBTYPE_DBTIME         SQL_TIME
Timestamp                    DBTYPE_DBTIMESTAMP    SQL_TIMESTAMP
Varchar(m<256)               DBTYPE_STR            SQL_CHAR
Varchar(m>255)               DBTYPE_STR            SQL_LONGVARCHAR (1)

Note:
1. Precision of 2147483647. If the <odbc longVarcharLenAsBlob> parameter is set to true in the AIS environment settings, then precision of m.


Other SQL/MP data types (such as Interval and Multibyte string) are not supported. When retrieving a table that includes columns with unsupported data types, only columns with supported data types are retrieved correctly in all circumstances. This table shows how Attunity Connect maps data types in a CREATE TABLE statement to SQL/MP data types.
Table 55-3 CREATE TABLE Data Types

CREATE TABLE     SQL/MP
Binary           -
Char[(m)]        Char[(m)]
Date             Date
Double           Float
Float            Real
Image            -
Integer          Integer signed
Numeric          Float
Numeric(p)       Numeric(p)
Numeric(s)       Numeric(s)
Smallint         Smallint signed
Text             Text
Time             Time
Timestamp        Datetime year to fraction
Tinyint          Smallint signed
Varchar(m)       Varchar(m)

See also ADD Supported Data Types.
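To illustrate the mapping above, the following sketch (the data source name sqlmp and all table and column names are hypothetical) creates a table whose Char, Integer, Numeric, and Timestamp columns are stored by SQL/MP as Char, Integer signed, Numeric, and Datetime year to fraction, respectively:

create table sqlmp:employees (emp_name char(30), emp_id integer, salary numeric(9,2), last_update timestamp)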

Defining the SQL/MP Data Source


The process of defining an SQL/MP data source consists of the following tasks:

Defining the SQL/MP Data Source Connection
Configuring the SQL/MP Data Source Properties

Defining the SQL/MP Data Source Connection


The SQL/MP data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your SQL/MP data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the SQL/MP data source.


6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.
7. In the Name field, enter a name for the new data source.

Note: If you use a local copy of the metadata or extended metadata, the data source name must begin with a letter. In addition, if you do not supply the Repository Information on the Advanced tab of the Data Source editor, the default names that AIS generates for the metadata files must be unique. AIS creates a NOS file and a BBNOS file. The file names are created by using the characters in the data source name and then by ensuring that the first letter is legal on the HP NonStop platform. To make sure that the generated files are unique, you must make the sixth through eighth alphanumeric characters of the data source name alphabetic and unique. If the data source name contains fewer than eight alphanumeric characters, the last three alphanumeric characters must be alphabetic and unique. If the data source name contains five or more alphanumeric characters, the last five cannot all be the letter B. If you enter the Repository Information on the Advanced tab of the Data Source editor, make sure that you apply the rule above to the filenames created by the Repository directory (HP NonStop volume/subvolume) and name, not to the data source name.

8. Select SQL/MP from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Specify the connect string as follows:

Catalog name: Specify the subvolume used as the default catalog for new tables.

11. Click Finish.

Configuring the SQL/MP Data Source Properties


After defining the connection, you set the data source properties.
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your SQL/MP data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the SQL/MP data source and select Open. The Configuration editor is displayed.


Figure 55-1 SQL/MP Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the SQL/MP Data Source Connection.
8. Configure the driver parameters as required. For a description of the available parameters, see Configuration Properties.
9. Click Finish.


56
Sybase Data Source
This section contains the following topics:

Overview
Supported Versions and Platforms
Functionality
Configuration Properties
Transaction Support
Data Types
Platform-Specific Information
Defining the Sybase Data Source

Overview
The following sections provide information about defining and configuring the Sybase Data Source. This includes information regarding the Metadata that must be defined.

Supported Versions and Platforms


For information on supported Sybase versions, see Attunity Integration Suite Supported Systems and Resources.

Functionality
This section describes the following aspects of Sybase functionality.

Stored Procedures
Isolation Levels and Locking

Stored Procedures
The Sybase data source driver supports Sybase stored procedures, including procedures that return multiple result sets. To retrieve output parameters, multiple result sets, and the return code from the stored procedure, use the ? = CALL syntax.
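For example, using the standard ODBC call escape form of this syntax with a hypothetical stored procedure get_employee that takes one input parameter and returns both a result set and a return code (the first ? is bound to the return code and the second to the input parameter):

{? = call get_employee(?)}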


Isolation Levels and Locking


The Sybase data source supports the following isolation levels:

Uncommitted read
Committed read
Serializable

The isolation level is used only within a transaction. Sybase supports page-level locking. Updates in Sybase are blocking; that is, if another connection tries to access a locked record, AIS is locked.

Update Semantics
For tables with no bookmark or other unique index, the data source driver returns a combination of most (or all) of the columns of the rows as a bookmark. The driver does not guarantee the uniqueness of this bookmark; you must ensure that the combination of columns is unique.

Configuration Properties
The following properties can be configured for the Sybase data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

ansiNull: This parameter determines whether or not the treatment of NULLs is compliant with the ANSI standard. The default setting is true.

chained: This parameter determines whether or not the Sybase transaction mode is set to chained transactions. This is equivalent to the Sybase command set chained on. The default setting is false.

cursorRows=n: This parameter specifies the number of rows retrieved in a read-ahead buffer. This value controls the CS_CURSOR_ROWS parameter in the Sybase OpenClient CTLIB. If n is negative, the number of rows read at one time is the number that will fill the buffer. If n is 0, read-ahead is disabled.

dbName: The Database name in the connect string, this parameter specifies the name of the database. If you omit the Database name, the data source driver binds to the default Sybase database.

disableCursors: This parameter specifies whether or not CTLIB cursors are used. When set to true, performance of retrieval-based queries is improved. However, parameters and BLOBs cannot be used in the queries and only one open statement can be used. The setting is ignored if the query is included in a started transaction. The default setting is false.

interfaceFile: The Interface file in the connect string, this parameter specifies the full path and name of a Sybase interface file. If you omit the interface file, the default Sybase interface file is used.

isolationLevel=value: This parameter specifies the default isolation level for the data source, as follows:

readUncommitted: This value specifies that corrupt data is not to be read. This is the lowest isolation level.
readCommitted: This value specifies that only the data committed before the query began is displayed.


repeatableRead: This value specifies that data used in a query is locked and cannot be used by another query nor updated by another transaction.
serializable: This value specifies that the data is isolated serially; data is treated as if transactions were executed sequentially.
Note: If the specified level is not supported by the data source, then Attunity Connect defaults to the next highest level.

packetSize: This parameter specifies the size of a Sybase packet. Valid values are 1 to n, where n is the number of units for the packet. Each unit is 512 bytes.

parmWithAt: This parameter specifies whether or not all parameters in Sybase stored procedures begin with the symbol @. The default setting is false.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Transaction Support
The Sybase data source supports one-phase commit. It can participate in a distributed transaction if it is the only one-phase commit data source being updated in the transaction.
Note: Both the transaction environment properties convertAllToDistributed and useCommitConfirmTable must be set to true.

Data Types
This table shows how Attunity Connect maps Sybase data types to OLE DB and ODBC data types.

Table 56-1 Mapping Sybase Data Types

Sybase             OLE DB               ODBC
Bit                DBTYPE_I2            SQL_TINYINT
Binary             DBTYPE_BYTES         SQL_BINARY
Char(m<256)        DBTYPE_STR           SQL_CHAR
Char(m>255)        DBTYPE_STR           SQL_LONGVARCHAR (1)
Datetime           DBTYPE_DBTIMESTAMP   SQL_TIMESTAMP
Decimal            DBTYPE_NUMERIC       SQL_NUMERIC
Double Precision   DBTYPE_R4            SQL_REAL
Float              DBTYPE_R8            SQL_DOUBLE
Image              DBTYPE_BYTES         SQL_LONGVARBINARY
Integer            DBTYPE_I4            SQL_INTEGER
Money              DBTYPE_NUMERIC       SQL_NUMERIC(19,4)
Numeric            DBTYPE_NUMERIC       SQL_NUMERIC
Real               DBTYPE_R4            SQL_REAL
Small Datetime     DBTYPE_DBTIMESTAMP   SQL_TIMESTAMP
Small Int          DBTYPE_I2            SQL_SMALLINT
Small Money        DBTYPE_NUMERIC       SQL_NUMERIC(10,4)
Text               DBTYPE_BYTES         SQL_LONGVARCHAR
TinyInt            DBTYPE_I2            SQL_SMALLINT
Varbinary          DBTYPE_BYTES         SQL_BINARY
Varchar(m<256)     DBTYPE_STR           SQL_CHAR
Varchar(m>255)     DBTYPE_STR           SQL_LONGVARCHAR (1)

Note: (1) Precision of 2147483647. If the <odbc longVarcharLenAsBlob> parameter is set to true in the AIS environment settings, then the precision is m.

This table shows how Attunity Connect maps data types in a CREATE TABLE statement to Sybase data types.

Table 56-2 CREATE TABLE Data Types

CREATE TABLE     Sybase
Binary           Raw
Char[(m)]        Char[(m)]
Date             Datetime
Double           Float
Float            Real
Image            Image
Image(m)         Binary(m)
Integer          Integer
Numeric          Float
Numeric(p[,s])   Numeric(p,s)
Smallint         Smallint
Text             Text
Time             Datetime
Timestamp        Datetime
Tinyint          Tinyint
Varchar(m)       Varchar(m)

See also ADD Supported Data Types.


Platform-Specific Information
This section includes Sybase-related information and procedures as they pertain to specific platforms, as follows:

Verifying Environment Variables on UNIX Platforms

Verifying Environment Variables on UNIX Platforms


Make sure that the OCS directory is placed at the beginning of the shared library environment variable (before the ASE and FTS directories). The Attunity Sybase data source looks for the nvdb_syb.so file. This is the sharable image file needed to use this data source. Some Sybase versions come with other files. To use these files with the Attunity Sybase data source, do the following:

For Sybase version 12.5, rename the nvdb_syb125.so file to nvdb_syb.so.
For Sybase version 15, rename the nvdb_syb150.so file to nvdb_syb.so.

Replace the original nvdb_syb.so with the renamed file.

Defining the Sybase Data Source


The process of defining a Sybase data source consists of the following tasks:

Defining the Sybase Data Source Connection
Configuring the Sybase Data Source Properties
Checking Sybase Environment Variables

Defining the Sybase Data Source Connection


The Sybase data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Sybase data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Sybase data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.

7. In the Name field, enter a name for the new data source.
8. Select Sybase from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the connect string as follows:

Server name: Enter the name of the Sybase server. If you omit the Server name, the data source driver binds to Sybase's default server as specified in the Sybase interface file.

Database name: Enter the name of the database. If you omit the Database name, the data source driver binds to Sybase's default database as specified in the Sybase interface file.

Note: The entries for both the Server name and Database name fields are case sensitive.

Interface file: Enter the full path and name of a Sybase interface file. If you omit the interface file, the default Sybase interface file is used.

11. Click Finish.

Note: To access Sybase on 64-bit operating systems (HP-UX 11 and later, AIX 4.4 and later, and Sun Solaris 2.8 and later), the 32-bit data source client must be used.

Configuring the Sybase Data Source Properties


After defining the connection, you set the data source properties.
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Sybase data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the Sybase data source and select Open. The Configuration editor is displayed.


Figure 56-1 Sybase Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Sybase Data Source Connection.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:

User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
User name: Enter the name of the user with access to this data source.
Password: Enter the password for the user with access to this data source.
Confirm Password: Enter the password again, to ensure it was entered correctly.

9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.
10. Click Finish.

Checking Sybase Environment Variables


Check that the SYBASE environment variable is correctly set and that the Sybase database is readable by Attunity Connect. If necessary, define the variable in the startup script (such as nav_server.script) defined for the workspace in the daemon configuration information, or in the nav_login or site_nav_login file. For more information, see the Attunity Server Installation Guide.

Sybase's CTLIB library files must be installed on the host, and the directory containing the Sybase System client-shared libraries must be included in the library path of the operating system. For UNIX-specific instructions, refer to Verifying Environment Variables on UNIX Platforms.


57
Text Delimited File Data Source
This section contains the following topics:

Overview
Configuration Properties
Defining the Text Delimited File Data Source
Setting Up the Text Delimited Data Source Metadata

Overview
Text files are called text delimited when:

The text fields are delimited by a specified character
Rows are delimited by new lines
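For example, a file whose fields are delimited by commas (the records shown here are hypothetical) would look as follows, with each line forming one record:

John Smith,1001,Sales
Mary Jones,1002,Marketing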

Features
The Text Delimited File data source supports the following key features:

Variable length records
Files larger than 2 GB (on UNIX platforms only)

The Text Delimited File data source does not support the following SQL statements:

UPDATE statements
DELETE statements

Limitations
The Text Delimited File data source does not support transactions.

Configuration Properties
The following properties can be configured for the Text Delimited data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

disableExplicitSelect: When set to true, this parameter disables the ExplicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.

headerRows=n: This parameter specifies the number of lines to be skipped at the beginning of each file.


Note: You can override this value by specifying a <dbCommand> statement in ADD.

newFileLocation: The Data directory in the connect string, this parameter specifies the location of the text-delimited files and indexes you create with CREATE TABLE statements. You must specify the full path for the directory.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Defining the Text Delimited File Data Source


The process of defining a Text Delimited File data source consists of two tasks:

Defining the Text Delimited File Data Source Connection
Configuring the Text Delimited File Data Source

Defining the Text Delimited File Data Source Connection


The Text Delimited File data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Text Delimited data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Text Delimited data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.

7. In the Name field, enter a name for the new data source.
8. Select Delimited Text Files from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Specify the connect string as follows:

Data Location: Enter the directory where the Text Delimited files and indexes you create with CREATE TABLE statements reside. You must specify the full path for the directory. If a value is not specified, created files are written to the DEF directory under the directory where AIS is installed. The value specified is used for the Data file field of the Design perspective, Metadata tab of Attunity Studio.

11. Click Finish.

Configuring the Text Delimited File Data Source


After defining the connection, you set the data source properties.


To configure the Text Delimited File data source

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Text Delimited data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the Text Delimited data source and select Open. The Configuration editor is displayed.

Figure 57-1 Text Delimited Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Text Delimited File Data Source Connection.
8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.
9. Click Finish. After you define the data source, you must define Attunity metadata describing the Text Delimited File data.

Setting Up the Text Delimited Data Source Metadata


The Text Delimited File data source requires Attunity Metadata. You can import the metadata from COBOL copybooks.

Text Delimited File Data Source 57-3

If COBOL copybooks do not exist that describe the Text Delimited File records, the metadata must be manually defined. For more information about the metadata definition, see Managing Data Source Metadata. This section includes the following topics:

Importing Attunity Metadata from COBOL
Maintaining Attunity Metadata

Importing Attunity Metadata from COBOL


If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective, Metadata tab. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first 6 columns are ignored or not), first import the metadata from copybooks with the same settings, and then import the metadata from the other copybooks. COBOL copybooks are required for the import. These copybooks are copied to the machine running Attunity Studio as part of the import procedure.

To define Text Delimited File metadata

1. In the Configuration view, right-click the data source and select Edit Metadata. The Metadata tab is displayed with the data source displayed in the Metadata view.
2. Right-click Imports under the data source and select New Import.
3. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
4. Select COBOL Import Manager for Data Sources as the import type.
5. Click Finish. The Metadata Import Wizard is displayed.
6. Click Add in the Import Wizard to add COBOL copybooks. The Add Resource screen is displayed, providing the option of selecting files from the local machine or copying the files from another machine using FTP.


This figure shows the Add Resource screen.

Figure 57-2 Add Resource Screen

7. If the files are on another machine, then right-click My FTP Sites and select Add.
8. Set the FTP data connection by entering the server name where the COBOL copybooks reside and, if not using anonymous access, enter a valid username and password to access the machine.
9. To browse and transfer files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
10. Select the files to import and click Finish to start the transfer.

The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In this type of case, repeat the import process. You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks.


The selected files are displayed in the Get Input Files screen of the wizard, as shown in this figure.

Figure 57-3 Import Wizard

11. Click Next. The Apply Filters screen is displayed.

Figure 57-4 Apply Filters Screen

12. Apply filters to the copybooks, as needed. The following filters are available:

COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-61 to treat COMP-6 as a COMP data type or COMP-62 to treat COMP-6 as a COMP-3 data type.
Compiler source: The compiler vendor.
Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
Prefix nested column: Prefix all nested columns with the previous level heading.
Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
Case sensitive: Specifies whether to consider case sensitivity or not.
Find: Searches for the specified value.
Replace with: Replaces the value specified in the Find field with the value specified here.

13. Click Next. The Select Tables screen is displayed.

Figure 57-5 Select Tables Screen

The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables.

14. Select the tables that you want to access (and thus require Attunity metadata) and then click Next. The Import Manipulation screen is displayed.


Figure 57-6 Import Manipulation Screen

You can perform the following actions in the Import Manipulation screen:

Resolve table names, where tables with the same name are generated from different COBOL copybooks specified during the import.
Specify the physical location for the data.
Specify table attributes.
Manipulate the fields generated from the COBOL, as follows:
- Merge sequential fields into one, for simple fields.
- Resolve variants, either by marking a selector field or by specifying that only one case of the variant is relevant.
- Add, delete, hide, or rename fields.
- Change a data type.
- Set a field size and scale.
- Change the order of the fields.
- Set a field as nullable.
- Select a counter field for an array, for fields with dimensions (arrays).
- Set column-wise normalization for fields with dimensions (arrays).
- Create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
- Create arrays and set the array dimension.

The Validation tab in the bottom half of the screen displays information about what must be done to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location).
15. To manipulate table information or the fields in the table, right-click the table and select the option you want. The following options are available:

Fields manipulation: Access the Fields Manipulation screen to customize the field definitions.
Rename: Rename a table name. This option is used especially when more than one table is generated from the COBOL with the same name.
Set data location: Set the physical location of the data file for the table.
Set table attributes: Set table attributes. The table attributes are described in Managing Metadata.
XSL manipulation location: Specify an XSL transformation or JDOM document that is used to transform the table definition.

16. Click Next. The final window enables you to import the metadata to the machine where the data source is located, or leave the generated metadata on the Attunity Studio machine, to be imported later.
17. Specify that you want to transfer the metadata to the machine where the data source is located and click Finish. The metadata is imported to the machine where the data source is located.

Maintaining Attunity Metadata


You can maintain the metadata and update the statistics for the data in the Design perspective, Metadata tab in Attunity Studio. To use the Text Delimited File data source, you need the ADD, which you use to store metadata. In the Attunity metadata, use the delimited and quoteChar table attributes to specify the delimiting character and the character used for quotations. See Table Attributes for a description of the delimited and quoteChar attributes. Attunity Connect assumes that the data in the physical file is always of type string and converts this string to the data type specified in the metadata. When this is not the case, for example when the data in the physical file is of type nls_string, you can prevent the data in the physical file from being treated as a string and converted by Attunity Connect by using the DISABLE_CONVERT dbCommand, as follows:

<field name="name" datatype="string" size="25">
  <dbCommand>DISABLE_CONVERT</dbCommand>
</field>
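Similarly, the following is a minimal sketch of how the delimiting characters mentioned above might be specified on the <table> statement, assuming the delimited and quoteChar attributes take the delimiting character and the quote character as their values (the table name is hypothetical):

<table name="employees" delimited="," quoteChar="'">
...
</table>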


58
Virtual Data Source
This section contains the following topics:

Overview
Configuration Properties
Defining the Virtual Data Source

Overview
Attunity Connect includes a data source driver enabling you to access AIS proprietary data sources. These data sources are accessed using the Virtual data source driver. You can use a Virtual data source to store the following in a location other than the default SYS data source:

Creating Views
Defining Stored Procedures
Creating Synonyms

The following statement, for example, creates a view on Oracle data and stores it in a Virtual data source named oraviews (rather than in the default SYS Virtual data source):

create view oraviews:emps as select * from ora:employees_us, ora:employees_uk
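Once created, the view can be queried like any other table by qualifying it with the Virtual data source name, for example:

select * from oraviews:emps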

The Virtual data source is also used for Using a Virtual Database.

Configuration Properties
The following properties can be configured for the Virtual data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

audit: Activates an audit file. This property must be set when including a Virtual data source in a distributed transaction.

Note: On HP (Compaq) NonStop platforms, the volume must be audited in order to create audited files.

auditFile: The audit filename is the concatenation of the value specified for the name attribute of the <table> statement and an .aud suffix.


disableExplicitSelect: When set to true, this parameter disables the ExplicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.

filepoolCloseOnTransaction: This parameter specifies that all files in the file pool for this data source close at each end of transaction (commit or rollback).

filepoolSize: This parameter specifies how many instances of a file from the file pool may be open concurrently.

filepoolSizePerFile=n: This parameter specifies how many instances of a file from the file pool may be open concurrently for each file.

lockWait: This parameter specifies whether the data source driver waits for a locked record to become unlocked or returns a message that the record is locked.

maxSecondsLockingWait: This parameter specifies the maximum amount of time (in seconds) to wait before a locked record becomes unlocked or returns a message that the record is locked.

newFileLocation: The Data location in the connect string, this parameter specifies the location where the views and stored procedure definitions reside. The connect attribute is optional. You must specify the full path for the directory.

transactionLogFile: This parameter specifies the name of the file where the transaction log is written.

transactions: This parameter specifies whether to start the TMF transactions. Use this property when dealing with unaudited files.

useGlobalFilepool: This parameter specifies whether a global file pool that can span more than one session is used.
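For reference, the following is a minimal sketch of how a Virtual data source with some of these properties might appear in the binding XML, assuming the conventions shown elsewhere in this guide, where driver properties appear as attributes of a config element (the data source name and the property values are hypothetical):

<datasource name="oraviews" type="VIRTUAL">
  <config audit="true" newFileLocation="/users/nav/def" />
</datasource>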

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Platform-specific Configuration Properties


Certain parameters are applicable to specific platforms only. This section describes the platform-specific configuration parameters, according to platform type:

HP NonStop Platforms
z/OS Platforms
OpenVMS Platforms

HP NonStop Platforms
The following parameters are unique to HP NonStop platforms:

enscribeLockMode=read|write: This parameter specifies the lock mode as either read only or writable.

enscribeLockType=value: This parameter specifies the LockType property of the Recordset object:

adLockReadOnly (default): Read-only mode.
adLockPessimistic: Pessimistic locking.
adLockOptimistic: Optimistic locking.
adLockBatchOptimistic: Optimistic batch updates. Required for batch update mode as opposed to immediate update mode.

newFileVolume=string: The Data disk in the connect string, this parameter specifies the file volume where the file is catalogued.

z/OS Platforms
The following parameters are unique to z/OS systems:

newFileSMSDataClass: This parameter specifies the data class where views and stored procedure definitions reside.

newFileSMSStorageClass: This parameter specifies the storage class where views and stored procedure definitions reside.

OpenVMS Platforms
The following parameter is unique to OpenVMS platforms:

useRmsJournal: This parameter enables the use of RMS journaling, when SET FILE/RU_JOURNAL is issued under OpenVMS. The SET FILE/RU_JOURNAL OpenVMS command marks an RMS file for recovery unit journaling. Any RMS table used in a transaction where journaling applies must be defined with an index. The following SQL statements are used with RMS journaling, with their OpenVMS equivalents:

Begin: SYS$START_TRANS
Commit: SYS$END_TRANS
Rollback: SYS$ABORT_TRANS

Defining the Virtual Data Source


The process of defining a Virtual data source consists of two tasks:

Defining the Virtual Data Source Connection
Configuring the Virtual Data Source

Defining the Virtual Data Source Connection


The Virtual data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the Virtual data source

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Virtual data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Virtual data source.
6. Right-click the Data Source folder and select New Data Source.


The New Data Source wizard is displayed.

7. Select Virtual from the Type list.
8. Click Next. The Data Source Connect String screen is displayed.
9. Specify the connect string as follows:

Data location: Specify the full path for the directory where the views and stored procedure definitions reside. If a value is not specified, the views and stored procedures are stored in data source files in the DEF directory under the directory where AIS is installed.

10. Click Finish.

Configuring the Virtual Data Source


After defining the connection, you set the data source properties.

To configure the data source properties

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Virtual data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the Virtual data source and select Open. The Configuration editor is displayed.


Figure 58-1 Virtual Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Virtual Data Source Connection.
8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties. After setting the binding, you must define Attunity Metadata.


59
VSAM Data Source (z/OS)
This section includes the following topics:

Overview
Configuration Properties
Metadata
Transaction Support
Data Types
Defining a VSAM Data Source
Setting Up the VSAM Data Source Metadata

Overview
Attunity Connect supports two types of VSAM data sources:

VSAM (CICS): The VSAM under CICS data source accesses VSAM by making EXCI calls to a CICS program provided as part of the AIS installation. This agent CICS program does the actual VSAM reads and writes from within CICS. When accessing VSAM data using this data source, the following restrictions apply:

- SQL DELETE operations are not supported for ESDS files.
- Using an alternate index to access an ESDS file is not supported.
- A non-unique alternate index for a KSDS file is not supported.

VSAM: This data source connects directly to the VSAM data and is limited if the VSAM files are managed by CICS. In this case, it is recommended to use this data source for read-only access to VSAM files. However, this may not give you adequate read integrity if some changes are buffered by CICS. Another alternative is to use the VSAM under CICS data source. When accessing VSAM data using the VSAM data source, the following restrictions apply:

- Transactions are not supported when accessing VSAM directly. When accessing VSAM under CICS, two-phase commit transactions are supported.
- Locking is not supported.
- You cannot update an array value (a child record in a hierarchical table) when the parent record is included in the SQL in a subquery.
- SQL DELETE operations are not supported for ESDS files.
- An RRDS file cannot have an alternate index.
- The primary key of a KSDS file must be one segment only (however, it can be several consecutive fields).
- You cannot modify the primary key value of a KSDS file.

To enable you to create and delete VSAM data under z/OS, submit the following JCL:

// IDCSYSIN DD DSN=&&VSAM,DISP=(NEW,DELETE,DELETE),
//          SPACE=(TRK,(1)),UNIT=SYSDA,
//          DCB=(BLKSIZE=3200,LRECL=80,RECFM=FB)

Supported Versions and Platforms


For information on supported VSAM versions and the supported operating systems for this CDC agent, see Attunity Integration Suite Supported Systems and Resources.

Environmental Prerequisites
AIS uses EXCI to interface to CICS. EXCI requires some setup:

- IRC must be open. Use CEMT I IRC from the CICS screen to check your IRC status. If it is in closed state, set it to open.
- A specific connection must be set up. Use CEMT I CONNECTION to get the list of available connections. Note that you can only use specific connections that have a VTAM netname associated with them. The default available on most systems is BATCHCLI. Attunity provides a JCL for defining an Attunity connection; see the CICSCONF member in the USERLIB.
- An EXCI mirror transaction ID must be available. The default on most systems is transaction ID EXCI. You can use CEMT I TRA PROG(DFHMIRS) to get the list of EXCI transaction IDs available on your system.

Configuration Properties
This section includes the following topics:

VSAM Data Source Parameters
VSAM (CICS) Data Source Parameters

VSAM Data Source Parameters


The following properties can be configured for the VSAM data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

disableExplicitSelect: When set to true, this parameter disables the ExplicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.

filepoolCloseOnTransaction: This parameter specifies that all files in the file pool for this data source close at each end of transaction (commit or rollback).

filepoolSize: This parameter specifies how many instances of a file from the file pool may be open concurrently.

newFileSMSStorageClass: This parameter specifies the storage class when SMS is used to manage volumes.

newFileSMSDataClass: This parameter specifies the data class when SMS is used to manage volumes.

trigger: The name of a user-defined trigger or user exit that can be set up. User code is activated on specific events, such as PRE-UPDATE and POST-READ. Triggers are normally used for either compression/decompression code or advanced logic for filtering. See the Attunity SDK book for further information.

useGlobalFilepool: This parameter specifies whether or not a global file pool that can span more than one session is used.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

VSAM (CICS) Data Source Parameters


The following properties can be configured for the VSAM (CICS) data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Data Sources.

allowUpdateKey: When set to true, this parameter specifies that the key is updatable.

cicsProgname: The ProgramName in the connect string, this parameter specifies the UPDTRNS program that is supplied with AIS to enable updating VSAM data.

cicsTraceQueue: The TraceQueue in the connect string, this parameter indicates the name of the queue for output that is defined under CICS when tracing the output of the UPDTRNS program. When not defined, the default CICS queue is used.

disableExplicitSelect: When set to true, this parameter disables the ExplicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.

trigger: The name of a user-defined trigger or user exit that can be set up. User code is activated on specific events, such as PRE-UPDATE and POST-READ. Triggers are normally used for either compression/decompression code or advanced logic for filtering. See the Attunity SDK book for further information.

exciTransid=string: The Transaction ID in the connect string, this parameter indicates the CICS TRANSID. This value must be EXCI or a copy of this transaction.

targetSystemApplid=string: The CICS Application ID in the connect string, this parameter specifies the VTAM applid of the CICS target system.

vtamNetname=string: The VTAM Netname in the connect string, this parameter specifies the connection being used by EXCI (and MRO) to relay the program call to the CICS target system.

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.


Metadata
The VSAM and VSAM (CICS) data source drivers require Attunity metadata. COBOL copybooks are required for the import. These copybooks are copied to the machine running Attunity Studio as part of the import procedure. If COBOL copybooks describing the data source records are available, then you can import the metadata by running the metadata import in Attunity Studio Design perspective, Metadata tab. If COBOL copybooks that describe the VSAM records are not available, then you must manually define the metadata. For information about the metadata definition, see Managing Data Source Metadata. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first 6 columns are ignored or not), first import the metadata from copybooks with the same settings, and then import the metadata from the other copybooks. This section describes specific metadata requirements for the following data sources:

VSAM Metadata Requirements
VSAM (CICS) Metadata Requirements

VSAM Metadata Requirements


The following are specific metadata requirements for the VSAM data source:

- Index entries.
- The dbCommand in the <table> statement must specify the volume for VSAM files created via Attunity Connect:

<dbCommand>volume</dbCommand>

- The dbCommand of all alternate keys must include the filename of the alternate key. For example:

<key name="AccountNo" size="4">
  <dbCommand>VSAM.DATA.ACCOUN01.IO.PATH</dbCommand>
  <segments>
  ...
  </segments>
</key>

For information about the metadata definition, see Managing Data Source Metadata.

VSAM (CICS) Metadata Requirements


The following are specific metadata requirements for the VSAM (CICS) data source:

- Index entries. The filename attribute must specify the CICS logical name.
- The dbCommand in the <table> statement must specify the VSAM file type:

<dbCommand>file_type</dbCommand>

where file_type can be either ESDS, RRDS or KSDS.

- The dbCommand of all primary and alternate keys must include the CICS logical name of the alternate key. For example:

<key name="EMP-ID" size="8" unique="true">
  <dbCommand>CICS_logical_filename</dbCommand>
  <segments>
  ...
  </segments>
</key>

For information about the metadata definition, see Managing Data Source Metadata.

Transaction Support
The VSAM (CICS) data source supports two-phase commit and can fully participate in distributed transactions when the transaction environment parameter convertAllToDistributed is set to true. To use Attunity Connect with 2PC, you must have RRS installed and configured and have CICS TS 1.3 or above installed.

If RRS is not running, then the data source can participate in a distributed transaction as the only one-phase commit data source if the logFile parameter is set to NORRS in the Transactions section of the binding properties for the relevant binding configuration in the Design perspective, Configuration tab in Attunity Studio. The XML representation is as follows:

<transactions logFile="log,NORRS" />

where log is the high-level qualifier and name of the log file. If this parameter is not specified, then the format is the following (the comma must be specified):

<transactions logFile=",NORRS" />

For further details about setting up a data source as one-phase commit in a distributed transaction, see CommitConfirm Table.

To use the two-phase commit capability to access data on the z/OS machine, define every library in the ATTSRVR JCL as an APF-authorized library. To define a DSN as APF-authorized, enter the following command in the SDSF screen:

/setprog apf,add,dsn=navroot.library,volume=ac002

where ac002 is the volume where you installed AIS and NAVROOT is the high-level qualifier where AIS is installed. If the AIS installation volume is managed by SMS, then when defining APF-authorization enter the following command in the SDSF screen:

/setprog apf,add,dsn=navroot.library,SMS

Make sure that the library is APF-authorized, even after an IPL (reboot) of the machine.

The VSAM file participating in the 1PC or 2PC transaction must be defined as recoverable. To use distributed transactions from an ODBC-based application, ensure that AUTOCOMMIT is set to 0.


Using Attunity Connect with One-phase Commit


The VSAM under CICS data source can be set up as a one-phase commit data source. As such, CICS programs activated within the context of a transaction are activated with no SYNCONRETURN option in the EXCI DPL request. When the transaction is committed, ATRCMIT will be called to trigger a sync point. Note the following points:

1. RRS must be configured and running on your system in order to use 1PC.
2. When working with 1PC, it is important to correctly configure the timeout of your EXCI mirror transaction. The DTIMEOUT parameter in the CEDA transaction definition must exceed the maximum expected transaction duration.
3. The default EXCI transaction is usually configured with a DTIMEOUT of 10 seconds, which may be problematic in terms of its short duration.

Data Types
This table shows how Attunity Connect maps data types in a CREATE TABLE statement to VSAM data types.

Table 59-1 CREATE TABLE Data Types

CREATE TABLE       VSAM
Char[(m)]          Char[(m)]
Date               Date+time
Double             Double
Float              Float
Image              -
Integer            Integer
Numeric[(p[,s])]   Numeric(p,s)
Smallint           Smallint
Text               -
Tinyint            Tinyint
Varchar(m)         Varchar(m)

See also ADD Supported Data Types.

Defining a VSAM Data Source


The process of defining a VSAM data source consists of two tasks:

Defining the VSAM Data Source Connection or Defining the VSAM (CICS) Data Source Connection
Configuring the VSAM Data Source Properties or Configuring the VSAM (CICS) Data Source Properties

Defining the VSAM Data Source Connection


The VSAM data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your VSAM data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the VSAM data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.

7. In the Name field, enter a name for your data source.

Note: The default names that AIS generates for the metadata files must be unique. AIS creates files for the data source; the file names are created by using the characters in the data source name and then by ensuring that the first character in the file name is legal. If the first character is not legal, it is replaced by a G. To make sure that the generated files are legal, you must make the last eight characters of the name alphabetic and unique.

8. Select VSAM from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the connect string as follows:

Data HLQ: The high-level qualifier where the data files are located.
Disk Volume Name: The high-level qualifier (volume) where the data resides.

The values specified are used in the Data File field in the Attunity Studio Design perspective, Metadata tab. For tables created using the CREATE TABLE statement, the values specified are used to create the data files. If values are not specified, then data files are written to the DEF high-level qualifier under the high-level qualifier where AIS is installed. When SMS is used to manage the volumes, leave this value empty and set the newFileSMSStorageClass and newFileSMSDataClass properties as described in VSAM Data Source Parameters.

11. Click Finish.

Defining the VSAM (CICS) Data Source Connection


The VSAM (CICS) data source connection is set using the Design perspective, Configuration view in Attunity Studio.

To define the data source connection

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your VSAM (CICS) data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the VSAM (CICS) data source.
6. Right-click the Data Source folder and select New Data Source. The New Data Source wizard is displayed.

7. In the Name field, enter a name for your data source.

Note: The default names that AIS generates for the metadata files must be unique. AIS creates files for the data source; the file names are created by using the characters in the data source name and then by ensuring that the first character in the file name is legal. If the first character is not legal, it is replaced by a G. To make sure that the generated files are legal, you must make the last eight characters of the name alphabetic and unique.

8. Select VSAM (CICS) from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the connect string as follows:

CICS Application ID: The VTAM applid of the CICS target system. The default value is CICS. This parameter is used when updating VSAM data. You can determine this value by activating the CEMT transaction on the target CICS system. On the bottom right corner of the screen appears the legend APPLID=target_system.

Transaction ID: The mirror transaction within CICS that receives control via MRO, which transfers the transaction from the AIS environment to CICS. The default value is EXCI.

VTAM Netname: The VTAM netname of the specific connection being used by EXCI (and MRO) to relay the program call to the CICS target system. For example, if you issue the following command to CEMT:

CEMT INQ CONN

then you see on the display screen that the netname is BATCHCLI (this is the default connection supplied by IBM upon the installation of CICS). The default value is ATYCLIEN. If you plan to use the IBM defaults, then specify BATCHCLI as the VTAM netname parameter; otherwise, define a specific connection (with the EXCI protocol) and use the netname you provided there for this parameter.

Attunity provides a netname, ATYCLIEN, that can be used after the following procedure is followed:

- Either use the JCL in the NAVROOT.USERLIB(CICSCONF) member to submit the DFHCSDUP batch utility program to add the resource definitions to the DFHCSD dataset (see the IBM CICS Resource Definition Guide for further details),
- or use the instream SYSIN control statements in the NAVROOT.USERLIB(CICSCONF) member as a guide to defining the resources online using the CEDA facility.

59-8 AIS User Guide and Reference

After the definitions have been added (via batch or using the CEDA facility), log on to CICS and issue the following command to install the resource definitions under CICS:

CEDA INST GROUP(ATYI)

Henceforth, specify ATYCLIEN as the netname.

Program Name: The UPDTRNS program that is supplied with AIS to enable updating VSAM data. To use the UPDTRNS program, copy the program from NAVROOT.LOAD to a CICS DFHRPL library (such as CICS.USER.LOAD) and then define the UPDTRNS program under CICS using any available group, such as the ATY group:

CEDA DEF PROG(UPDTRNS) G(ATY) LANG(C) DA(ANY) DE(ATTUNIT VSAM UPDATE PROG)

NAVROOT is the high-level qualifier where AIS is installed. After defining the UPDTRNS program, install it as follows:

CEDA IN G(ATY)

Trace Queue: The name of the queue for output that is defined under CICS when tracing the output of the UPDTRNS program. When not defined, the default CICS queue is used.

11. Click Finish.

Configuring the VSAM Data Source Properties


After defining the connection, you set the data source properties.

To configure the data source

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your VSAM data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the VSAM data source and select Open. The Configuration editor is displayed.


Figure 59-1 VSAM Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the VSAM Data Source Connection.
8. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.

Configuring the VSAM (CICS) Data Source Properties


After defining the connection, you set the data source properties.

To configure the data source

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your VSAM (CICS) data source.
4. Expand the Bindings folder and the binding with your data source.
5. Expand the Data sources folder.
6. Right-click the VSAM (CICS) data source and select Open. The Configuration editor is displayed.


Figure 59-2 VSAM (CICS) Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the VSAM (CICS) Data Source Connection.
8. Enter the information in the Authentication section, if necessary. You can define the following parameters:

User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
User name: Enter the name of the user with access to this data source.
Password: Enter the password for the user with access to this data source.
Confirm Password: Enter the password again, to ensure it was entered correctly.

9. Configure the data source parameters as required. For a description of the available parameters, see Configuration Properties.

Setting Up the VSAM Data Source Metadata


The metadata import procedure is comprised of the following steps:

Selecting the COBOL files
Applying Filters
Selecting Tables
Import Manipulation
Create VSAM Indexes (VSAM (ADD) only)
Assigning File Names (VSAM CICS only)
Assigning Index File Names (VSAM CICS only)
Metadata Model Selection
Importing the Metadata

The following sections describe each step, and the screens that appear for that step.

Selecting the COBOL files


This section describes the steps required to select the COBOL copybooks that will be used to generate the metadata. The following procedure starts with a preliminary step, also described in Starting the Import Process.

To select the COBOL files

1. Open Attunity Studio.
2. In the Design perspective, Configuration view, right-click the data source and select Show Metadata View. The Metadata tab is displayed with the data source displayed in the Metadata view.
3. Right-click Imports and select New Import. The New Import screen is displayed, as shown in the following figure:

Figure 59-3 The New Import screen

4. Enter a name for the import. The name can contain letters, numbers and the underscore character.
5. Select VSAM Import Manager or COBOL Import Manager for Data Sources from the list.
6. Click Finish. The Metadata import wizard opens with the Get Input Files screen, as shown in the following figure:


Figure 59-4 The Get Input Files screen

7. Click Add. The Add Resource screen is displayed, as shown in the following figure:

Figure 59-5 The Add Resource screen

8. If the files are on another machine, then right-click My FTP Sites and select Add. The Add FTP Site screen is displayed, as shown in the following figure:


Figure 59-6 The Add FTP Site screen

9. Enter the server name where the COBOL copybooks are located and, if not using anonymous access, enter a valid username and password to access that computer. The username is then used as the high-level qualifier.
10. Click OK. After accessing the remote computer, you can change the high-level qualifier by right-clicking the machine in the Add Resource screen and selecting Change Root Directory.
11. Select the files to import and click Finish to start the file transfer. When complete, the selected files are displayed in the Get Input Files screen. To remove any of these files, select the required file and click Remove.
12. Click Next (the Apply Filters screen opens) to continue to the Applying Filters step.

Applying Filters
This section describes the steps required to apply filters on the COBOL copybooks used to generate the metadata. It continues the Selecting the COBOL files procedure.

To apply filters
1. Expand all in the Apply Filters screen.
2. Apply the required filter attributes to the COBOL copybooks. The available filters are listed and described in the Metadata Filters table.
3. Click Next (the Select Tables screen opens) to continue to the Selecting Tables step.

This figure shows the Apply Filters screen.


Figure 59-7 The Apply Filters screen

The following table lists the available filters:

Table 59-2 Metadata Filters

- COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor. The following are available: z/OS, AS400, HP NonStop, VMS, MICROFOCUS, Hardware (UNIX vendor), Default/Not known/Other.
- Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: When set to true, ignores columns 73 to 80 in the COBOL copybook.
- IgnoreFirst6: When set to true, ignores the first six columns in the COBOL copybook.
- Replace hyphens (-) in record and field names with underscores (_): When set to true, replaces all hyphens in either the record or field names in the metadata generated from the COBOL with underscore characters.
- Prefix nested columns: When set to true, prefixes all nested columns with the previous level heading.
- Case sensitive: Specifies whether to be sensitive to the search string case.
- Find: Searches for the specified value.
- Replace with: Replaces the value specified for Find with the value specified here.

Selecting Tables
This section describes the steps required to select the tables from the COBOL copybooks. The import manager identifies the names of the records in the COBOL copybooks that will be imported as tables. The following procedure continues the Applying Filters procedure.

To select the required tables
1. From the Select Tables screen, select the tables that you want to access. To select all tables, click Select All. To deselect all selected tables, click Unselect All.

Figure 59-8 The Select Tables screen

2. Click Next (the Import Manipulation screen opens) to continue to the Import Manipulation step.

Import Manipulation
This section describes the operations available for manipulating the imported records (tables). It continues the Selecting Tables procedure.

The import manager identifies the names of the records in the DDM Declaration files that will be imported as tables. You can manipulate the general table data in the Import Manipulation Screen.

To import the metadata
1. From the Import Manipulation screen (see the Import Manipulation Screen figure), right-click a table record marked with a validation error, and select the relevant operation. See the Table Manipulation Options table for the available operations.
2. Repeat step 1 for all table records marked with a validation error. You resolve the issues in the Import Manipulation Screen. Once all the validation error issues have been resolved, the Import Manipulation screen is displayed with no error indicators.
3. Click Next.

- If you are importing metadata for a VSAM (ADD) data source, continue to Create VSAM Indexes.
- If you are importing metadata for a VSAM CICS data source, continue to Assigning File Names.

Import Manipulation Screen


The Import Manipulation screen is shown in the following figure:
Figure 59-9 Import Manipulation Screen

The upper area of the screen lists the COBOL files and their validation status. The metadata source and location are also listed. The Validation tab at the lower area of the screen displays information about what needs to be resolved in order to validate the tables and fields generated from the COBOL. The Log tab displays a log of what has been performed (such as renaming a table or specifying a data location).

The following operations are available in the Import Manipulation screen:

- Resolving table names, where tables with the same name are generated from different files during the import.
- Selecting the physical location for the data.
- Selecting table attributes.
- Manipulating the fields generated from the COBOL, as follows:
  - Merging sequential fields into one (for simple fields).
  - Resolving variants by either marking a selector field or specifying that only one case of the variant is relevant.
  - Adding, deleting, hiding, or renaming fields.
  - Changing a data type.
  - Setting the field size and scale.
  - Changing the order of the fields.
  - Setting a field as nullable.
  - Selecting a counter field for fields with dimensions (arrays). You can select the array counter field from a list of potential fields.
  - Setting column-wise normalization for fields with dimensions (arrays). You can create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
  - Creating arrays and setting the array dimension.

The following table lists and describes the available operations when you right-click a table entry:

Table 59-3 Table Manipulation Options

- Fields Manipulation: Customizes the field definitions, using the Field Manipulation screen. You can also access this screen by double-clicking the required table record.
- Rename: Renames a table. This option is used especially when more than one table with the same name is generated from the COBOL.
- Set data location: Sets the physical location of the data file for the table.
- Set table attributes: Sets the table attributes.
- XSL manipulation: Specifies an XSL transformation or JDOM document that is used to transform the table definitions.
- Remove: Removes the table record.

You can manipulate the data in the table fields in the Import Manipulation Screen. Double-click a line in the Import Manipulation Screen to open the Field Manipulation Screen.

Field Manipulation Screen


The Field Manipulation screen lets you make changes to fields in a selected table. You get to the Field Manipulation screen through the Import Manipulation Screen. The Field Manipulation screen is shown in the following figure.


Figure 59-10 Field Manipulation Screen

You can carry out all of the available tasks in this screen through the menu or toolbar. You can also right-click anywhere in the screen and select any of the options available in the main menus from a shortcut menu. The following table describes the tasks that are done in this screen. If a toolbar button is available for a task, it is pictured in the table.

Table 59-4 Field Manipulation Screen Commands

General menu

- Undo: Click to undo the last change made in the Field Manipulation screen.
- Select fixed offset: The offset of a field is usually calculated dynamically by the server at runtime according to the offset and size of the preceding column. Select this option to override this calculation and specify a fixed offset at design time. This can happen if there is a part of the buffer that you want to skip. When you select a fixed offset, you pin the offset for that column. The indicated value is used at runtime for the column instead of a calculated value. Note that the offsets of following columns that do not have a fixed offset are calculated from this fixed position.
- Test import tables: Select this option to create an SQL statement to test the import table. You can base the statement on the Full table or Selected columns. When you select this option, a screen opens with an SQL statement based on the table or column entered at the bottom of the screen. Enter the following in this screen:
  - Data file name: Enter the name of the file that contains the data you want to query.
  - Limit query results: Select this if you want to limit the results to a specified number of rows. Enter the number of rows you want returned in the following field. 100 is the default value.
  - Define Where Clause: Click Add to select a column to use in a Where clause. In the table below, you can add the operator, value, and other information. Click on the columns to make the selections. To remove a Where clause, select the row with the Where clause you want to remove and then click Remove.
  The resulting SQL statement, with any Where clauses that you added, is displayed at the bottom of the screen. Click OK to send the query and test the table.

Attribute menu

- Change data type: Select Change data type from the Attribute menu to activate the Type column, or click the Type column and select a new data type from the drop-down list.
- Create array: This command allows you to add an array dimension to the field. Select this command to open the Create Array screen. Enter a number in the Array Dimension field and click OK to create the array for the column.
- Hide/Reveal field: Select a row from the Field Manipulation screen and select Hide field to hide the selected field from that row. If the field is hidden, you can select Reveal field.
- Set dimension: Select this to change or set a dimension for a field that has an array. Select Set dimension to open the Set Dimension screen. Edit the entry in the Array Dimension field and click OK to set the dimension for the selected array.
- Set field attribute: Select a row to set or edit the attributes for the field in the row. Select Set field attribute to open the Field Attribute screen. Click in the Value column for any of the properties listed and enter a new value or select a value from a drop-down list.
- Nullable/Not nullable: Select Nullable to activate the Nullable column in the Field Manipulation screen. You can also click in the column. Select the check box to make the field Nullable. Clear the check box to make the field Not Nullable.
- Set scale: Select this to activate the Scale column, or click in the column and enter the number of places to display after the decimal point in a data type.
- Set size: Select this to activate the Size column, or click in the column and enter the total number of characters for a data type.

Field menu

- Add: Select this command or use the button to add a field to the table. If you select a row with a field (not a child of a field), you can add a child to that field. Select Add Field or Add Child to open a screen where you enter the name of the field or child; click OK to add the field or child to the table.
- Delete field: Select a row and then select Delete Field, or click the Delete Field button, to delete the field in the selected row.
- Move up or down: Select a row and use the arrows to move it up or down in the list.
- Rename field: Select Rename field to make the Name field active. Change the name and then click outside of the field.

Structures menu

- Columnwise Normalization: Select Columnwise Normalization to create new fields instead of the array field, where the number of generated fields is determined by the array dimension.
- Combining sequential fields: Select Combining sequential fields to combine two or more sequential fields into one simple field. A dialog box opens; enter the following information in the Combining sequential fields screen:
  - First field name: Select the first field in the table to include in the combined field.
  - End field name: Select the last field to be included in the combined field. Make sure that the fields are sequential.
  - Enter field name: Enter a name for the new combined field.
- Flatten group: Select Flatten Group to flatten a field that is an array. This field must be defined as Group for its data type. When you flatten an array field, the entries in the array are spread into a new table, with each entry in its own field. A screen provides options for flattening. Do the following in this screen:
  - Select Recursive operation to repeat the flattening process on all levels. For example, if there are multiple child fields in this group, you can place the values for each field into the new table when you select this option.
  - Select Use parent name as prefix to use the name of the parent field as a prefix when creating the new fields. For example, if the parent field is called Car Details and you have a child in the array called Color, a field created in the flattening operation will be called Car Details_Color.
- Mark selector: Select Mark selector to select the selector field for a variant. This is available only for variant data types. Select the Selector field from the screen that opens.
- Replace variant: Select Replace variant to replace a variant's selector field.
- Select counter field: Select Counter Field opens a screen where you select a field that is the counter for an array dimension.

Create VSAM Indexes


This wizard page is used to create VSAM indexes. It is valid only for VSAM (ADD) data sources. If you are using a VSAM CICS data source, go to the Assigning File Names step. This step continues the Import Manipulation step. This figure shows the Create VSAM Indexes step.


Figure 59-11 Create Indexes (For VSAM (ADD) only)

To create VSAM indexes
Click Next to go to the Metadata Model Selection step. The indexes are created automatically.

Note: Make sure that the data location is supplied in the Import Manipulation step.

Assigning File Names


This section describes the steps required to specify the physical file name (including the high-level qualifier) and the CICS logical file name for each record. It is valid only for VSAM CICS data sources. If you are using a VSAM (ADD) data source, go to the Create VSAM Indexes step. This step continues the Import Manipulation step.

To assign the file names
1. In the Assign File Names screen, enter the physical file name, including the high-level qualifier, for each record listed.
2. Specify the CICS logical file name for each record listed.
3. Click Next (the Assign Index File Names screen opens) to continue to the Assigning Index File Names step.


This figure shows the Assign File Names step.


Figure 59-12 The Assign File Names screen

Assigning Index File Names


This section describes the steps required to specify the physical index file name and the CICS logical index file name for each record. It continues the Assigning File Names step.

To assign the index file names
1. In the Assign Index File Names screen, enter the physical index file name for each record listed.
2. Specify the CICS logical index file name for each record listed.
3. Click Next (the Import Metadata screen opens) to continue to the Metadata Model Selection step.


This figure shows the Assign Index File Names screen.


Figure 59-13 The Assign Index File Names screen

Metadata Model Selection


This section lets you generate virtual and sequential views for imported tables containing arrays. In addition, you can configure the properties of the generated views. It continues the Create VSAM Indexes procedure if you are using a VSAM (ADD) data source, or it follows the Assigning Index File Names procedure if you are using a VSAM CICS data source. This step allows you to flatten tables that contain arrays. In the Metadata Model Selection step, you can configure values that apply to all tables in the import, or set specific settings for each table.

To configure the metadata model
Select one of the following:

- Default values for all tables: Select this if you want to configure the same values for all the tables in the import. Make the following selections when using this option:
  - Generate sequential view: Select this to map non-relational files to a single table.
  - Generate virtual views: Select this to have individual tables created for each array in the non-relational file.
  - Include row number column: Select one of the following:
    - true: Select true to include a column that specifies the row number in the virtual or sequential view. This is true for this table only, even if the data source is not configured to include the row number column.
    - false: Select false to not include a column that specifies the row number in the virtual or sequential view for this table, even if the data source is configured to include the row number column.
    - default: Select default to use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.
  - Inherit all parent columns: Select one of the following:
    - true: Select true for virtual views to include all the columns in the parent record. This is true for this table only, even if the data source is not configured to include all of the parent record columns.
    - false: Select false so that virtual views do not include the columns in the parent record for this table, even if the data source is configured to include all of the parent record columns.
    - default: Select default to use the default data source behavior for this parameter. For information on how to configure these parameters for the data source, see Configuring Data Source Advanced Properties.

- Specific virtual array view settings per table: Select this to set different values for each table in the import. This will override the data source default for that table. Make the selections in the table under this selection. See the item above for an explanation.

The Metadata Model Selection screen is shown in the following figure:


Figure 59-14 The Metadata Model Selection screen


Importing the Metadata


This section describes the steps required to import the metadata to the target computer. It continues the Metadata Model Selection step. You can now import the metadata to the computer where the data source is located, or import it later (in case the target computer is not available).

Note: If you want to update the metadata using the VSAM data source, and then access the data under CICS, you must:
1. Close the file in CICS.
2. Make the updates with the VSAM data source.
3. Exit navcmd. This closes the file in VSAM.
4. Return to CICS and open the file there. The data should be available and you can now read the file.

To import the metadata
1. Specify Yes to transfer the metadata to the target computer, or No to transfer the metadata later.
2. Click Finish.

If you specified Yes, the metadata is imported to the target computer immediately. The Import Metadata screen is shown in the following figure:

Figure 59-15 The Import Metadata screen



Part IX
Procedure Data Source Reference
This part contains the following topics:

- Procedure Data Sources
- Natural/CICS Procedure Data Source (z/OS)
- Procedure Data Source (Application Connector)
- CICS Procedure Data Source


60
Natural/CICS Procedure Data Source (z/OS)
This section includes the following topics:

- Overview
- Supported Platforms and Versions
- Metadata
- Security
- Defining the Natural/CICS Procedure Data Source
- Writing a Natural Remote Procedure Call
- Maintaining the CICS Environment for the Natural Agent

Overview
Attunity AIS provides a Natural data source driver and application agent to execute Natural subprograms under CICS as remote procedure calls. The returned rowset is handled in the same way that data from any data source is handled, using the relational model: the procedure can be used in SQL, and the resulting rowset can be joined with other rowsets from other data sources or Attunity Connect procedures. The following diagram shows a simplified model of how the Natural data source and application agent work within a CICS environment:
Figure 60-1 Natural Data Source and Application Agent within CICS Environment


The Natural/CICS procedure data source receives a remote procedure call from the client and passes it via a CICS EXCI-interface transaction to the agent in CICS (ATYNAGNT). The agent receives control and assigns a server task to process the client request. The server executes the subprogram call and passes the results back to the agent. The agent packages the results for the EXCI interface, which passes them back to the Natural data source, which then returns the results to the client. You can execute the Natural/CICS transaction by calling it either directly in a CALL statement, or within a SELECT statement. For example:

CALL NATPROC:TESTLSTN(1234.123,'STRINGA',123.123)
SELECT * FROM NATPROC:TESTLSTN(1234.123,'STRINGA',123.123)

Environmental Prerequisites
AIS uses EXCI to interface to CICS. EXCI requires some setup:

- IRC must be open. Use CEMT I IRC from the CICS screen to check your IRC status. If it is in a closed state, set it to open.
- A specific connection must be set up. Use CEMT I CONNECTION to get the list of available connections. Note that you can only use specific connections that have a VTAM netname associated with them. The default available on most systems is BATCHCLI. Attunity provides a JCL for defining an Attunity connection. See the CICS CONF member in the USERLIB.
- An EXCI mirror transaction ID must be available. The default on most systems is transaction ID EXCI. You can use CEMT I TRA PROG(DFHMIRS) to get the list of EXCI transaction IDs available on your system.

Supported Platforms and Versions


For information on supported Natural/CICS versions, see Attunity Integration Suite Supported Systems and Resources.

Configuration Properties
The following properties can be configured for the Natural/CICS procedure data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Procedure Data Sources.

- disableExplicitSelect: Set to true to disable the ExplicitSelect ADD attribute; every field is returned by a SELECT * FROM... statement.
- exciTransid: The CICS TRANSID. This value must be EXCI or a copy of this transaction.
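For reference, a data source entry with these properties set might look something like the following in the binding configuration XML. This is a minimal, hypothetical sketch: the type value and attribute spellings are illustrative assumptions, so check the definition that Attunity Studio generates for the authoritative form.

<datasource name="NATPROC" type="NATURAL_CICS">
    <!-- illustrative property settings; the connect string parameters
         (target system, VTAM netname) are defined alongside them -->
    <config exciTransid="EXCI" disableExplicitSelect="true"/>
</datasource>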

Configuring Advanced Data Source Properties


You can set advanced properties for procedure data sources. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Metadata
The metadata describes the input and output structures for the Natural/CICS transaction to be executed.


To define the metadata, define a new procedure in the Attunity Studio Design perspective Metadata tab, and provide the following information by editing the XML in the Source tab, displayed after the procedure is generated.

- The program name to be run in the Natural/CICS transaction.
- The input parameters and output data. Attunity Connect treats the input variables as parameters, and the output variables are treated as a rowset produced by the Natural/CICS transaction.

To define metadata for a Natural procedure, right-click the Procedures folder, under the data source, and select New Procedure. Once the procedure has been defined in Attunity Studio, you can define the metadata for it, in XML.

Specifying the Program to Execute


The following information is specific to the metadata definition for a Natural/CICS transaction. Within a <procedure> statement you must include a <dbCommand> statement to specify the program to run, and the TRANSID.

Syntax
<dbCommand>
  PROGRAM=ATYLSTN;TRANSID=ATYI;
  SUBPROGRAM=natural_subprogram;
  LIBRARY=library_of_subprogram;
  CONTEXT=context_data_field;
</dbCommand>

Where:
Table 60-1 <dbCommand> Statement Parameters

- PROGRAM (optional): The main program of the Natural agent. The default value is ATYLSTN.
- TRANSID (optional): The mirror transaction within CICS that receives control through MRO, which transfers the Natural transaction from the Attunity AIS environment to CICS. The default value is ATYI.
- SUBPROGRAM: The Natural subprogram that is executed.
- LIBRARY (optional): The library where the Natural subprogram resides.
- CONTEXT: A user area, used by the subprogram to read and save the state between successive calls to the subprogram.

Specifying Input and Output Parameters


Within a <parameters> statement, which includes the <field> statements that define the input parameters, you can include a <dbCommand> statement, for use when accessing a transaction with a context. Within a <field> statement, which defines the output parameters, you can include a <dbCommand> statement, for use when accessing a transaction with a context.

Syntax

For the <field> statement:


<dbCommand>EOS_VALUE=value</dbCommand>


Where EOS_VALUE is the value that signals the end of the transaction, assuming that the transaction has a context. The value assigned can be a string or an integer value that, when encountered, causes the transaction to end. If you specify more than one EOS_VALUE, then a logical OR condition is implied.

Note: If EOS_VALUE is specified neither for a field nor a parameter (see below), then the transaction is assumed not to have a context and only one row is returned. If a value is specified both for a field and a parameter, then the first value encountered causes the transaction to end.

For the <parameters> statement:


<dbCommand>EOS_VALUE=value;REAPPLY;OUTPUTOFFSET=offset</dbCommand>

Where:
Table 60-2 <dbCommand> Statement Parameters

- EOS_VALUE: The value that signals the end of the transaction, assuming that the transaction has a context. The value assigned can be either a string or an integer value that, when encountered, causes the transaction to end. If you specify more than one EOS_VALUE, a logical OR condition is implied. Note: If EOS_VALUE is specified neither for a parameter nor a field (see above), the transaction is assumed not to have a context and only one row is returned. If a value is specified both for a parameter and a field, the first value encountered causes the transaction to end.
- REAPPLY: The original value supplied for the parameter is reapplied when the transaction modifies the parameter value. This attribute is only relevant when executing stream-type transactions.
- OUTPUTOFFSET: The offset of the output parameter from the input parameter. If OUTPUTOFFSET=0, the same field is used for both the input and output parameters, with the value updated for the output. If OUTPUTOFFSET is not specified, you cannot configure input parameters to also be output parameters, and all the input parameters must physically precede the output fields in the COMMAREA itself.

Example 60-1 ADD Metadata

<?xml version="1.0" encoding="US-EBCDIC"?>
<navobj>
  <procedure name="testlstn">
    <dbCommand>
      program=atylstn;transid=atyi;
      subprogram=testlstn;
    </dbCommand>
    <parameters>
      <field name="p1" datatype="ada_numstr_s" size="15" scale="3"/>
      <field name="p2" datatype="string" size="39"/>
      <field name="p3" datatype="ada_decimal" size="17" scale="4"/>
      <field name="p4" datatype="string" size="10"/>
    </parameters>
    <fields>
      <field name="f1" datatype="ada_numstr_s" size="10" scale="3"/>
      <field name="f2" datatype="string" size="45"/>
      <field name="f3" datatype="ada_decimal" size="17" scale="4"/>
    </fields>
  </procedure>
</navobj>
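Example 60-1 defines a transaction without a context. As an illustration only, the following hypothetical fragment sketches where the EOS_VALUE, REAPPLY, and OUTPUTOFFSET settings described above would be placed for a stream-type (context) transaction; the procedure name, field names, and values are invented for the sketch:

<procedure name="teststrm">
  <dbCommand>
    program=atylstn;transid=atyi;
    subprogram=teststrm;
    context=ctxarea;
  </dbCommand>
  <parameters>
    <dbCommand>EOS_VALUE=EOF;REAPPLY;OUTPUTOFFSET=0</dbCommand>
    <field name="p1" datatype="string" size="20"/>
  </parameters>
  <fields>
    <field name="f1" datatype="string" size="45"/>
  </fields>
</procedure>

With a definition along these lines the subprogram is called repeatedly, and rows continue to be returned until the value EOF is encountered, which ends the transaction.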

Security
The required security authorizations depend on the specific site. The following rules should be applied:

- Check the list of resources defined to CICS in the CICSDEF member in the NatAgent source library (programs, transactions, connections, etc.) and decide what to authorize in order for the Natural/CICS agent to work.
- The ATYN, ATYP, and ATYM transactions are all started as asynchronous non-terminal tasks and should be defined to the security tool (for example, RACF) accordingly.
- To activate the installed groups ATYI and ATYL in the applid of CICSPRDC, add these groups to the applid of CICSPRDC list LISTC, where LISTC includes all groups that will be active or included at CICS startup. For example, the applid of CICSPRDC startup parms can have the parameter specified as the following:

GRPLIST=(DFHLIST, LISTC316)

- The ATYCLIEN connection for EXCI has to be defined to the security tool (for example, RACF).

Defining the Natural/CICS Procedure Data Source


The process of defining a Natural/CICS procedure data source consists of two tasks:

- Defining the Natural/CICS Procedure Data Source Connection
- Configuring the Natural/CICS Data Source

Defining the Natural/CICS Procedure Data Source Connection


The Natural/CICS procedure data source connection is set using the Attunity Studio, Design perspective Configuration view.

To define a Natural/CICS procedure data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Natural/CICS procedure data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Natural/CICS procedure data source.
6. Right-click Data sources and select New Data source.


7. Enter a name for the procedure data source in the Name field.
8. Select Natural/CICS from the Type list.
9. Click Next. The Data Source Connect String screen is displayed.
10. Enter the connect string as follows:
    - Target system: The VTAM applid of the CICS target system.
    - VTAM Netname: The VTAM netname of the specific connection used by the ATYI transaction (and MRO) to relay the program call to the CICS target system. For example, if you issue the following command to CEMT, you will see ATYCLIEN for the ATYS connection on the display screen:

    CEMT INQ CONN

11. Click Finish.

Configuring the Natural/CICS Data Source


After you define the connection, you set the data source properties.

To set the data source properties
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Natural/CICS procedure data source.
4. Expand the Bindings folder and the binding with your procedure data source.
5. Expand the Data sources folder.
6. Right-click the procedure data source and select Open. The Configuration editor is displayed.


Figure 60-2 Natural/CICS Procedure Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string in Defining the Natural/CICS Procedure Data Source Connection.
8. Configure the data source driver properties as required. For a description of the available parameters, see Configuration Properties.
9. Click Finish.

Writing a Natural Remote Procedure Call


A Natural Remote Procedure Call is a Natural subprogram, which uses an API supplied as part of the kit to receive input parameters from and return output parameters to the Natural Agent. In addition, the API may be used to retrieve context data from and return it to the Agent, as a means of saving the state between procedure calls. Every Natural subprogram called by the Natural Agent must include a Local Data Area (LDA) called ATYPARMS. This LDA was stored in the ATTUNITY library during installation of the Natural Agent. To make it available to the user, copy ATYPARMS to the SYSTEM library or to the user's application library, or, alternatively, the ATTUNITY library can be added to the user's STEPLIB. The programmer codes the following in the data definition portion of the subprogram:
LOCAL USING ATYPARMS

Include the ATYLRTN routine as part of the subprogram. This is the Natural Agent general service routine to implement the API. The routine includes a number of call requests available for use by the user. These include the call requests listed in the following table:


Table 60-3 Call Requests within ATYLRTN Routine

GETPARMS: Retrieves input parameters from the Natural agent. It has the following format:

CALL 'ATYLRTN' 'GETPARMS' ATYL_STRUCT len1 inparm1 len2 inparm2 len3 inparm3 ...

The first parameter is the group-field ATYL_STRUCT, which is defined in the LDA mentioned above. Following this field are a series of one or more parameter definitions, where each definition consists of an optional length override field and the actual input parameter field. The length override field is a four-byte signed binary integer (I4 in Natural terminology), which is normally set to zero but may be set to an override value. If the length override is set to zero, the length of the input parameter is taken from its Natural data definition. If the length override is set to a value greater than zero, that value defines the length of the input parameter, as in the following example, where the data definition portion of the subprogram contains the following:

DEFINE DATA
LOCAL USING ATYPARMS
LOCAL
1 #LAST-NAME (A20)
1 #JOB-TITLE (A20)
1 #ADDRESS (A30)
1 #CITY (A20)
1 #L70 (I4) CONST <70>
1 #L0 (I4) CONST <0>
...
END-DEFINE

The subprogram could receive from the Natural agent a 70-byte group field containing the elementary fields #JOB-TITLE, #ADDRESS, and #CITY, as well as a 20-byte elementary field containing #LAST-NAME, using the following call:

CALL 'ATYLRTN' USING 'GETPARMS' ATYL_STRUCT #L70 #JOB-TITLE #L0 #LAST-NAME

Note: The specification of the input parameters in the call must match their specification in the ADD metadata defined for the Natural procedure call on the client side (see Metadata). You can pass a maximum of 50 parameters (where a parameter is defined as the parameter and its length).

GETCTEXT: Retrieves context data from the Natural agent. It has the following format:

CALL 'ATYLRTN' 'GETCTEXT' ATYL_STRUCT len cdata

The first parameter is the group-field ATYL_STRUCT, which is defined in the LDA mentioned above. Following this field is a parameter definition consisting of an optional length override field and the actual context data field. The length override field is a four-byte signed binary integer (I4 in Natural terminology), which is normally set to zero but may be set to an override value. If the length override is set to zero, the length of the context data is taken from its Natural data definition. If the length override is set to a value greater than zero, that value defines the length of the context data, as in the following example, where the data definition portion of the subprogram contains the following:

DEFINE DATA
LOCAL USING ATYPARMS
LOCAL
1 #CONTEXT-DATA1 (A30)
1 #CONTEXT-DATA2 (A20)
1 #L50 (I4) CONST <50>
...
END-DEFINE

The subprogram could receive 50 bytes of context data from the Natural agent, using the following call:

CALL 'ATYLRTN' USING 'GETCTEXT' ATYL_STRUCT #L50 #CONTEXT-DATA1

Note: The length of the context data in the call must match its length specification in the ADD metadata defined for the Natural procedure call on the client side (see Metadata).

GETUINFO: Retrieves user information from the Natural agent. The user information is defined at the data source level on the client side, and is passed to every procedure call in the data source. Its use is optional, and it is often used to implement user security at the application (subprogram) level. The GETUINFO call has the following format:

CALL 'ATYLRTN' 'GETUINFO' ATYL_STRUCT userid password len userinfo

The first parameter is the group-field ATYL_STRUCT, which is defined in the LDA mentioned above. The second parameter is an eight-byte field which receives the userid that was defined in the procedure call. The third parameter is an eight-byte field which receives the password that was defined in the procedure call, provided that SECMODE=0 or SECMODE=1; if SECMODE=2, the password field is filled with blanks for security reasons. Following the password field is a parameter definition consisting of an optional length override field and the actual user information field. The length override field is a four-byte signed binary integer (I4 in Natural terminology), which is normally set to zero but may be set to an override value. If the length override is set to zero, the length of the user information is taken from its Natural data definition. If the length override is set to a value greater than zero, that value defines the length of the user information, as in the following example, where the data definition portion of the subprogram contains the following:

DEFINE DATA
LOCAL USING ATYPARMS
LOCAL
1 #USERID (A8)
1 #PASSWORD (A8)
1 #USER-INFO
  2 #USER-INFO-1 (A20)
  2 #USER-INFO-2 (A40)
1 #L60 (I4) CONST <60>
...
END-DEFINE

The subprogram could receive 60 bytes of user information from the Natural agent using the following call:

CALL 'ATYLRTN' USING 'GETUINFO' ATYL_STRUCT #USERID #PASSWORD #L60 #USER-INFO-1

Note: The length of the user information in the call from the Natural subprogram must not be smaller than its length specification as defined at the data source level on the client side (see Metadata).

PUTPARMS: Returns output parameters to the Natural agent. It has the following format:

CALL 'ATYLRTN' 'PUTPARMS' ATYL_STRUCT len1 outparm1 len2 outparm2 len3 outparm3 ...

The first parameter is the group-field ATYL_STRUCT, which is defined in the LDA. Following this field are a series of one or more parameter definitions, where each definition consists of an optional length override field and the actual output parameter field. The length override field is a four-byte signed binary integer (I4 in Natural terminology), which is normally set to zero but may be set to an override value. If the length override is set to zero, the length of the output parameter is taken from its Natural data definition. If the length override is set to a value greater than zero, that value defines the length of the output parameter. See the GETPARMS call request above for an example.

PUTCTEXT: Returns context data to the Natural agent. It has the following format:

CALL 'ATYLRTN' 'PUTCTEXT' ATYL_STRUCT len cdata

The first parameter is the group-field ATYL_STRUCT, which is defined in the LDA mentioned above. Following this field is a parameter definition consisting of an optional length override field and the actual context data field. The length override field is a four-byte signed binary integer (I4 in Natural terminology), which is normally set to zero but may be set to an override value. If the length override is set to zero, the length of the context data is taken from its Natural data definition. If the length override is set to a value greater than zero, that value defines the length of the context data. See the GETCTEXT call request above for an example.

REPLY: Returns a return code and an optional error message to the Natural agent. It has the following format:

CALL 'ATYLRTN' 'REPLY' ATYL_STRUCT

Prior to the call, the return code and error message are set, as in the following example:

IF AS_RETURN_CODE = 0
  COMPRESS 'PROGRAM EXECUTED SUCCESSFULLY' INTO AS_ERROR_TEXT
END-IF
CALL 'ATYLRTN' 'REPLY' ATYL_STRUCT

In the above example, if the previous call to ATYLRTN was unsuccessful, the return code and error message set by ATYLRTN are not altered and are returned to the Natural agent unchanged. Otherwise, a new success message is created and returned together with the zero return code. Note: The fields AS_RETURN_CODE and AS_ERROR_TEXT are defined in the group ATYL_STRUCT supplied in the ATYPARMS LDA.
The Subprogram General Structure


The general structure of a subprogram written as a Natural Remote Procedure Call is as follows:

1. Declaration of the ATYPARMS LDA in the data definition.
2. Declaration of input and output parameters, length override fields, and other work fields.
3. CALL 'ATYLRTN' USING 'GETPARMS' etc. to retrieve input parameters from the Natural agent.
4. Optionally, CALL 'ATYLRTN' USING 'GETCTEXT' etc. to retrieve context data from the Natural agent. On the first procedure call the Natural subprogram receives the context data initialized to binary zeros.
5. Processing section of the subprogram.
6. CALL 'ATYLRTN' USING 'PUTPARMS' etc. to return output parameters to the Natural agent, assuming the subprogram logic leads to a successful outcome. Note: This call is not required if the subprogram does not execute successfully, that is, if the return code does not equal zero.
7. Optionally, CALL 'ATYLRTN' USING 'PUTCTEXT' etc. to return context data to the Natural agent.
8. CALL 'ATYLRTN' USING 'REPLY' etc. to return a return code and optional error message to the Natural agent.

A minimal skeleton following these steps is sketched below.
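The following sketch is illustrative only: the field names and sizes are invented, the literal quotes around the routine name and request codes are shown explicitly, and context handling (steps 4 and 7) as well as real processing logic are omitted.

DEFINE DATA
LOCAL USING ATYPARMS            /* step 1: agent API structures
LOCAL
1 #L0       (I4) CONST <0>      /* step 2: length override (0 = use field length)
1 #IN-PARM  (A20)               /* illustrative input parameter
1 #OUT-PARM (A45)               /* illustrative output field
END-DEFINE
* Step 3: retrieve the input parameters from the Natural agent
CALL 'ATYLRTN' USING 'GETPARMS' ATYL_STRUCT #L0 #IN-PARM
* Step 5: processing section (trivial here)
MOVE #IN-PARM TO #OUT-PARM
* Step 6: return the output parameters on success
CALL 'ATYLRTN' USING 'PUTPARMS' ATYL_STRUCT #L0 #OUT-PARM
* Step 8: return the return code and optional error message
CALL 'ATYLRTN' USING 'REPLY' ATYL_STRUCT
END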

Maintaining the CICS Environment for the Natural Agent


To provide maximum performance, a number of Natural server tasks are maintained in the CICS region, waiting for work in threads maintained by the Natural agent. Aside from the first procedure call that a server task handles, all subsequent procedure calls are handled immediately by the server without requiring re-initialization of the Natural/CICS environment, providing a significant performance advantage. The system programmer can configure system parameters, including non-activity limits and the maximum number of threads (that is, server tasks), to tailor the system to the installation's specific requirements.
Note: Transids, enqueue names, program names, TSQ prefixes, TDQ names, etc. should be left unchanged unless this creates a conflict with already existing names and ids in the system.

The following system parameters can be tailored:

- TTENQ: Enqueue name for the Attunity AIS Natural/CICS thread table. Default value: ATYLTHTB.
- PTASK: Transid responsible for purging a runaway server task. Default value: ATYP. If changed, the corresponding CICS PCT entry must be modified accordingly.
- NTASK: Transid activating the Natural server front end, which in turn activates the Natural server nucleus. Default value: ATYN. If changed, the corresponding CICS PCT entry must be modified accordingly.
- MTASK: Transid activating the Attunity Connect message handler program. Default value: ATYM. If changed, the corresponding CICS PCT entry must be modified accordingly.
- NFRONT: Name of the server front end program. Default value: ATYFRONT. If changed, the actual program must also be renamed and the corresponding PCT and PPT entries modified accordingly.
- NBACK: Name of the server back end program. Default value: ATYBACK. If changed, the actual program must also be renamed and the corresponding PPT entry modified accordingly.
- NATNUC: Name of the Natural/CICS nucleus, to which the server front end does an XCTL. This should be the name of the standard Natural/CICS nucleus installed on the system. Default value: NC314RE.
- PDELAY: The delay in seconds allowed for the Natural server to perform the remote procedure call. Default value: 60. If the time set is exceeded, it is assumed that the Natural server is in a runaway loop and it is purged from the system. Note: In most environments, PDELAY and LDELAY (see below) should be identical.
- LDELAY: The delay in seconds allowed for the agent to wait for the Natural server to perform the remote procedure call. Default value: 60. If the time set is exceeded, the agent returns a non-zero response code to the client, indicating that the server has not responded. Note: In most environments, PDELAY (see above) and LDELAY should be identical.
- PXINTQ: The prefix used to build, together with the thread number, the name of a temporary storage queue used to pass the input parameters to the procedure call. Default value: ATYIN. For example, assuming the default, the TSQs will be named ATYIN001, ATYIN002, etc.
- PXOUTQ: The prefix used to build, together with the thread number, the name of a temporary storage queue used to pass the output parameters from the procedure call back to the agent. Default value: ATYOU. For example, assuming the default, the TSQs will be named ATYOU001, ATYOU002, etc.
- PXCLTQ: The prefix used to build, together with the thread number, the name of a temporary storage queue used to pass control data from the agent to the subprogram and back. Default value: ATYCL. For example, assuming the default, the TSQs for the respective threads will be named ATYCL001, ATYCL002, etc.
- PXPTRQ: The prefix used to build, together with the thread number, the name of a request id (REQID) used to identify the wait of the task responsible for purging the Natural server should it enter a runaway loop. Default value: ATYPT. For example, assuming the default, the REQIDs will be named ATYPT001, ATYPT002, etc.
- PXLWRQ: The prefix used to build, together with the thread number, the name of a request id (REQID) used to identify the wait into which the agent enters while awaiting a reply from the Natural server. Default value: ATYLW. For example, assuming the default, the REQIDs will be named ATYLW001, ATYLW002, etc.
- PXNWRQ: The prefix used to build, together with the thread number, the name of a request id (REQID) used to identify the wait into which the Natural server task enters after it has concluded a procedure call and waits for its thread to be chosen for a subsequent procedure call. Default value: ATYNW. For example, assuming the default, the REQIDs will be named ATYNW001, ATYNW002, etc.
- MAXTH: The maximum number of threads (and therefore the maximum number of Natural server tasks available for work) that can be activated per CICS address space. This value should not exceed the number of sessions available for specific (non-generic) EXCI connections to the CICS address space. Default value: 10.
- MSG: The destination for error messages written by the Natural/CICS agent or its component modules. Permissible values are:
  - JOBLOG: Error messages are written as operator messages to the CICS job log.
  - TDQ (the default value): Messages are written to the transient data queue specified by the TDQID parameter.
  - BOTH: Messages are written to both JOBLOG and TDQ destinations.
- TDQID: The name of the transient data queue to which error messages are written if the MSG parameter (see above) is specified as TDQ or BOTH. Default value: ATYL.
- MAXNTM: Maximum number of "no thread available" error messages that will be written within the period of an hour. Default value: 20.
- MAXMSG: Maximum number of all error messages that will be written within the period of an hour. Default value: 300.
- MXINAC: Maximum inactivity in minutes permitted to a Natural server task before it terminates and is purged from the system. Default value: 15.
- SECMODE: A flag specifying which security strategy the user has opted to implement for the Natural server task. The options are:
  - 0 (No server-side security): Natural Security is not installed. A library parameter, if passed, will cause a logon to that library if it is not yet the current library-id for the server task. All userid and password passed parameters are optional and have no effect on the agent per se; the subprogram may interrogate them, as well as additional user information, for the purpose of enforcing homemade security as it sees fit.
  - 1 (Minimal server-side security (trusted mode)): This is the default value. Natural Security is installed. The Natural server task will be initialized with a trusted userid and password (specified in the configuration parameters) with which it will work throughout the life of the server task. All libraries from which subprograms are to be invoked must be authorized for use with this trusted user. A library parameter, if passed, will cause a logon to that library if it is not yet the current library-id for the server task. All userid and password passed parameters are optional and have no effect on the agent per se; the subprogram may interrogate them, as well as additional user information, for the purpose of enforcing homemade security as it sees fit.
  - 2 (Maximum server-side security): Natural Security is installed. The Natural server task will be initialized with a userid of limited authorization. Each procedure call must supply its own library/userid/password combination as part of the call. This will enable a high level of server-side security but will incur considerable overhead during the repeated authorization work performed by Natural Security. The Natural agent will try to minimize this overhead as much as possible by attempting to dedicate a separate thread for each library/userid/password combination, up to the maximum thread limit defined in the configuration parameters. In addition, if no new threads are available, the agent will attempt to locate an available thread (server) currently logged onto the same userid, requiring only a change to the current library. If no such threads are available, only then will the agent choose the oldest inactive thread and cause its associated server to incur the full overhead of a library/userid/password re-logon. Nonetheless, you should expect additional processing overhead using SECMODE=2.


Note: If AUTO=ON is specified, then SECMODE=2 will be flagged as a configuration parameter error, since AUTO=ON does not provide a means to alter the userid in the middle of the Natural session.

- AUTO: The value to which the Natural Security dynamic parameter AUTO should be set. This parameter is specified only when SECMODE=2.
  - ON (the default value): Userid and password are not entered during Natural logon; the userid is taken from the CICS External Security Interface.
  - OFF: Userid and password are entered during Natural logon.
  Note: Changes have to be made to the Natural assembler exit NCIUIDEX in order that an asynchronous (non-terminal) Natural task will obtain the userid externally. To avoid this problem, even users that normally implement Natural Security with AUTO=ON should consider using AUTO=OFF for the asynchronous Natural server tasks.

- SENDER: The value to which the Natural dynamic parameter SENDER should be set for asynchronous Natural tasks. This should be the name of a transient data queue. Default value: CSSL.
- OUTDEST: The value to which the Natural dynamic parameter OUTDEST should be set for asynchronous Natural tasks. This should be the name of a transient data queue. Default value: CSSL.
- TBTCH: Specifies whether the Natural server task issues the command SET CONTROL 'T=BTCH' during its initialization, in order to operate in-line support mode for messages and other output sent to the SENDER and OUTDEST destinations.
  - YES: When the Natural server task initializes, the command SET CONTROL 'T=BTCH' is issued. This enables error messages and messages issued by the WRITE and DISPLAY commands in called subprograms to be output successfully to the SENDER or OUTDEST destinations. Note: The NATBTCH module must be installed in the Natural CICS nucleus when the nucleus is link-edited. If the NATBTCH module is not present and TBTCH=YES, an error will occur during server task initialization.
  - NO (the default value): The command SET CONTROL 'T=BTCH' is not issued during Natural server task initialization. TBTCH should be allowed to default to NO if the NATBTCH module is not present in the link-edit of the Natural CICS nucleus. Natural error messages and WRITE or DISPLAY messages output by subprograms are not written to the SENDER or OUTDEST destinations (that is, the messages will be lost, but the server task will continue to operate normally).
- LOGON: The startup library to which Natural should log on when the server task is initiated. This library is specified in the LOGON command specified in the dynamic STACK parameter when Natural is started. Default value: ATYLSTN.


- USERID: The userid with which Natural should log on when the server task is initiated. This userid is specified in the LOGON command specified in the dynamic STACK parameter when Natural is started. Default value: ATTY. This parameter is specified only when SECMODE=2 and AUTO=OFF (see above).
- PASSWD: The password with which Natural should log on when the server task is initiated. This password is specified in the LOGON command specified in the dynamic STACK parameter when Natural is started. Default value: ATTY. This parameter is specified only when SECMODE=2 and AUTO=OFF (see above).
- MONITR: Name of the Natural dispatcher program. This program receives control when the Natural server task is initiated and is responsible for dispatching each subsequent procedure call in its thread, error handling, interaction with the agent, etc. Default value: ATYNDISP. If changed from the default value, the actual program must be renamed as well.
- ADDPARM: A string of additional dynamic parameters for the Natural server task, which may optionally be added to the dynamic parameters mentioned above. Default value: (null string). For example:

ADDPARM=(MADIO=0,LT=99999,TD=2)

Note: Use this parameter with great care. You must make sure that the correct library is logged onto upon initialization of the Natural server session and that the correct Natural agent dispatcher program is activated. Normally this parameter is used to specify a profile defined by SYSPARM or NTSYS, where the actual list of dynamic parameters is defined.

- TRSRECL: The maximum length of a record that may be written to main temporary storage. Default value: 32748.
- DYNPARM: A string of dynamic parameters for the Natural server task, which will replace the dynamic parameters mentioned above. Default value: (null string). For example:

DYNPARM=(SYS=NATAGENT)

Note: Use this parameter with great care. You must make sure that the correct library is logged onto upon initialization of the Natural server session and that the correct Natural agent dispatcher program is activated. Normally this parameter is used to specify a profile defined by SYSPARM or NTSYS, where the actual list of dynamic parameters is defined.


61
Procedure Data Source (Application Connector)
This section includes the following topics:

- Overview
- Configuration Properties
- Transaction Support
- Security
- Data Types
- Platform-specific Information
- Defining the Procedure Data Source
- Setting Up Procedure Data Source Metadata
- Testing the Procedure Data Source
- Executing a Procedure

Overview
This section contains information on the following topics:

- Introduction
- Supported Versions and Platforms
- Supported Features
- Limitations

Introduction
The Procedure data source deals with activating any piece of 3GL code (COBOL, C, RPG, etc.) as an SQL stored procedure. The metadata for such a stored procedure includes a description of the input and output parameters, as well as the DLL name and entry name where the code resides. The Procedure data source is activated when the client application executes one of its stored procedures. It uses its metadata to construct the calling stack for the back-end 3GL code. It loads the user DLL, locates the entry point, runs the function, and returns the result as SQL stored procedure output parameters.
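Once such a procedure has been defined, a client can execute it with a CALL statement or within a SELECT statement, in the same style used for other procedure data sources in this guide. The data source and procedure names below are invented for illustration:

CALL PROCDS:MYFUNC('input value',42)
SELECT * FROM PROCDS:MYFUNC('input value',42)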


Note: Attunity Connect includes separate modules for data access, as opposed to application access. This distinction, however, is drawn more along the lines of client interface and access method than whether a database or application is being activated. The Procedure data source is part of the data access component, although it deals strictly with activating applications. This is because the model used is SQL stored procedures. If SQL language and/or interfaces are not required, it is recommended to use the Legacy Plug Application Adapter instead of the Procedure data source.

Supported Versions and Platforms


The Procedure data source works on all platforms and operating systems. For limitations, see Platform-specific Information.

Supported Features
The Procedure data source supports the following main features:

- Exposes user-written functions and/or procedures as stored procedures that can be activated from any supported platform through any Attunity-supported interface.
- Parameter-passing mechanisms by value, reference, and descriptor. On OpenVMS, the descriptor-passing mechanism is especially useful because the Attunity descriptor structure matches the OpenVMS descriptor structure. You can view the C-language version of the Attunity descriptor structure in the gdb_val_desc struct and GDB_VAL_DESC typedef contained in the C header file dbgdb.h. This file is part of the AIS kit for each platform.

- Functions optionally returning a value.
- Complex structures passed as parameters.
- Multiple levels of indirection (for example, in C, char***).
- Triggers activated on session and transactional events.
- Different alignments supported when using structures.
- Multi-record result sets, composed by activating the user procedure repeatedly until an end condition is satisfied.

Limitations

- No array support. If array support is required, you must use the Legacy Plug Adapter.

For platform-specific limitations, see Platform-specific Information.

Configuration Properties
This section contains the following topics:

- Overview
- Parameter Descriptions


Overview
The simple task of activating user 3GL code does not require any configuration parameters to be set. All the information regarding the DLL name, symbol name, and parameter-passing mechanisms is defined as part of the stored procedure metadata. The following set of configuration properties deals with setting up triggers on a Procedure data source driver. The need for these properties is best explained by an example. For the purposes of this example, the 3GL code being activated by the Procedure data source driver accesses an Oracle database. This type of use case raises the following challenges:

- At what point does the user code connect to the Oracle database, and when can it disconnect from the database?
- How does a user of this data source control the transactional boundaries?
- How does one roll back changes performed by activating one or more of the stored procedures?

To support this type of use case, the Procedure data source allows for the registration of user triggers/user exits to be called on specific events related to the data source.

Parameter Descriptions
The following properties can be configured for the Procedure data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Procedure Data Sources.

commitTransaction: The name of a symbol within the triggerShareable attribute, to be activated when the client commits the transaction. The function prototype is GDB_STATUS commit_transaction_trigger(void *connect_handle);. This trigger has one argument and a return value:

Table 61-1 commitTransaction Attribute Arguments
- connect_handle (Input, void *): The handle returned by the connect trigger, or NULL if no such trigger was provided.
- return-value (Output, GDB_STATUS): The trigger should normally return GDB_OK_ in case of success, or GDB_NOT_ in case of failure. These enumeration codes can be found in the DBGDB.h header file in the include directory.

connect: The name of a symbol within the triggerShareable attribute, to be activated when the client application connects to the data source. Note that in a reusable server, this trigger may be called several times as different clients connect and disconnect from the server. The function prototype is GDB_STATUS connect_trigger(void **connect_handle, char *datasource_name, void *unused, char *username_password);. The function has four arguments and a return value:

Table 61-2 connect Attribute Arguments
- connect_handle (Output, void **): The address of a handle that can be created by the connect trigger to store and pass information between the triggers. The handle provided by the connect trigger is passed as input to all the other triggers. The use of different connection handles is important in a multi-client scenario, to separate the environments of different clients (for example, in the Overview example, to maintain separate connections to Oracle for each of the clients).
- datasource_name (Input, char *): A NULL-terminated string containing the name of the data source.
- unused (Input, void *): Not currently in use; reserved for future use.
- username_password (Input, char *): If a user name and password were set up for the Procedure data source, these are passed to the connect trigger. A Procedure data source does not require a user name or password for itself, but the back-end user code may require them (for example, in the Overview example, the user name and password of the Oracle database).
- return-value (Output, GDB_STATUS): The trigger should normally return GDB_OK_ in case of success, or GDB_NOT_ in case of failure. These enumeration codes can be found in the DBGDB.h header file in the include directory.

disableExplicitSelect: For future use.

disconnect: The name of a symbol within the triggerShareable attribute, to be activated when the client application disconnects from the data source. Note that in a reusable server, this trigger may be called several times as different clients connect and disconnect from the server. This trigger is normally used for clean-up operations. The function prototype is GDB_STATUS disconnect_trigger(void *connect_handle);. This trigger has one argument and one return value:

Table 61-3 disconnect Attribute Arguments
- connect_handle (Input, void *): The handle returned by the connect trigger, or NULL if no such trigger was provided.
- return-value (Output, GDB_STATUS): The trigger should normally return GDB_OK_ in case of success, or GDB_NOT_ in case of failure. These enumeration codes can be found in the DBGDB.h header file in the include directory.

rollbackTransaction: The name of a symbol within the triggerShareable attribute, to be activated when the client rolls back the transaction. The function prototype is GDB_STATUS rollback_transaction_trigger(void *connect_handle);. This trigger has one argument and a return value:

Table 61-4 rollbackTransaction Attribute Arguments
- connect_handle (Input, void *): The handle returned by the connect trigger, or NULL if no such trigger was provided.
- return-value (Output, GDB_STATUS): The trigger should normally return GDB_OK_ in case of success, or GDB_NOT_ in case of failure. These enumeration codes can be found in the DBGDB.h header file in the include directory.

startTransaction: The name of a symbol within the triggerShareable attribute, to be called when a transaction is started. The function prototype is GDB_STATUS start_transaction_trigger(void *connect_handle, GDB_TRANS_MODE mode);. This trigger has two arguments and a return value:

Table 61-5 startTransaction Attribute Arguments
- connect_handle (Input, void *): The handle returned by the connect trigger, or NULL if no such trigger was provided.
- mode (Input, GDB_TRANS_MODE): The mode of the transaction, either read-only or read-write. The values of this enumeration can be found in the DBGDB.h header file in the include directory.
- return-value (Output, GDB_STATUS): The trigger should normally return GDB_OK_ in case of success, or GDB_NOT_ in case of failure. These enumeration codes can be found in the DBGDB.h header file in the include directory.

triggerShareable: The name of the DLL/shareable image containing the triggers. This DLL may be separate from the procedural 3GL code being activated or, more often, within the same DLL as the user code being activated.
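The trigger properties map onto a small C module that is built into the DLL or shareable image named by triggerShareable. The following is a minimal sketch only, assuming the GDB_STATUS, GDB_TRANS_MODE, GDB_OK_, and GDB_NOT_ definitions from the dbgdb.h header supplied with AIS; the my_context structure and the back-end comments are hypothetical placeholders for real user code, not part of the AIS API:

#include <stdlib.h>
#include "dbgdb.h"   /* GDB_STATUS, GDB_TRANS_MODE, GDB_OK_, GDB_NOT_ */

/* Hypothetical per-client state, passed between the triggers. */
typedef struct {
    void *backend_session;   /* e.g. a handle to an Oracle connection */
} my_context;

GDB_STATUS connect_trigger(void **connect_handle, char *datasource_name,
                           void *unused, char *username_password)
{
    my_context *ctx = (my_context *)malloc(sizeof(my_context));
    if (ctx == NULL)
        return GDB_NOT_;
    /* User code would log on to the back end here, using the
       username_password string supplied for the data source. */
    ctx->backend_session = NULL;
    *connect_handle = ctx;   /* handed to all subsequent triggers */
    return GDB_OK_;
}

GDB_STATUS start_transaction_trigger(void *connect_handle, GDB_TRANS_MODE mode)
{
    /* User code would begin a back-end transaction here, honoring
       the read-only/read-write mode. */
    return GDB_OK_;
}

GDB_STATUS commit_transaction_trigger(void *connect_handle)
{
    /* User code would issue the back-end COMMIT here. */
    return GDB_OK_;
}

GDB_STATUS rollback_transaction_trigger(void *connect_handle)
{
    /* User code would issue the back-end ROLLBACK here. */
    return GDB_OK_;
}

GDB_STATUS disconnect_trigger(void *connect_handle)
{
    /* Clean-up: close the back-end session and release the handle. */
    free(connect_handle);
    return GDB_OK_;
}

With such a module, the data source would be configured with triggerShareable naming the compiled DLL, and connect, startTransaction, commitTransaction, rollbackTransaction, and disconnect naming the corresponding symbols.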

Configuring Advanced Data Source Properties


You can set advanced properties for this data source. For information on setting advanced properties, see Configuring Data Source Advanced Properties.


Transaction Support
There is no generic meaning to transaction support when connecting to 3GL code. In some cases, however, the back-end code being activated does require a transactional context. For example, 3GL code connecting to an Oracle database would potentially require the client application to control the transactional boundaries. Support for such back-end systems can be achieved using the trigger mechanisms described in Configuration Properties.

Security
There is no generic meaning to security when connecting to 3GL code. If the back-end system being activated requires security credentials, these can be supplied using the connect trigger mechanism described in Configuration Properties.

Data Types
All available Attunity Connect data types can be used as parameters to the Procedure data source's stored procedures. Support for passing parameters by value is somewhat limited: you may only pass fields by value if their size does not exceed the pointer size of the machine, that is, four bytes on 32-bit AIS and eight bytes on 64-bit AIS. Also refer to Passing Parameters in OS/400.
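For example, on a 32-bit AIS a four-byte integer fits in a pointer-sized slot and may be passed by value, while a string field exceeds the pointer size and must be passed by reference (or by descriptor). The following C signature is a hypothetical sketch only; the actual calling stack is determined by the stored procedure metadata:

/* Hypothetical 3GL entry point: 'amount' fits in a pointer-sized slot
   and can use MECHANISM=VALUE; 'customer_name' is larger and must use
   MECHANISM=REFERENCE (or DESCRIPTOR). */
long my_proc(long amount, char *customer_name)
{
    /* ... user logic ... */
    return 0;
}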

Platform-specific Information
This section contains the following topics:

- Windows Platforms and AIS Procedures (ADO Considerations)
- HP NonStop Platforms and Attunity Connect Procedures
- Load Modules and DLLs on MVS
- Descriptors on OpenVMS
- OS/400 Issues

Windows Platforms and AIS Procedures (ADO Considerations)


When you call an Attunity Connect procedure in ADO, you must include parentheses after the name of the Attunity Connect procedure. For example, the following sample executes an Attunity Connect procedure named MyProcedure that takes no parameters:

cmd.CommandText = "MyProcedure()"
cmd.CommandType = adCmdStoredProc
Set rs = cmd.Execute

To call an Attunity Connect procedure that does include parameters, specify a question mark within the parentheses for each parameter and supply the parameter values manually to Attunity Connect:

' Set parameters
Dim con As ADODB.Connection
Dim cmd As ADODB.Command
Dim prm As ADODB.Parameter
...
Set prm = cmd.CreateParameter("Julian", adVarChar, adParamInput, 15, "Empty")
cmd.Parameters.Append prm
Set prm = cmd.CreateParameter("White", adVarChar, adParamInput, 10, "Empty")
cmd.Parameters.Append prm
...
' Execute the procedure with parameters
cmd.CommandText = "MyProcedure(?,?)"
cmd.CommandType = adCmdStoredProc
Set rs = cmd.Execute

HP NonStop Platforms and Attunity Connect Procedures


HP NonStop platforms have only limited support for DLLs. The preferred way to use Attunity Connect procedures on an HP NonStop platform is to statically link the Attunity Connect procedures with an Attunity Connect library. Use the following Attunity Connect function to register the procedures:
long nav_register_function(STRING fnc_name, STRING fnc_class, FNC_PT fnc_pt)

Where:

- STRING fnc_name: The name of the function to be registered.
- STRING fnc_class: The name of the group you want the function to be in (this is equivalent to the DLL module name). The name specified for this parameter is the same name specified in the filename attribute in the ADD metadata definition XML file.
- FNC_PT fnc_pt: A pointer to the user function.

To register the symbols, call this function within the main() function of the procedure, as shown in the following example:

#include "utlmain.h"

main(int argc, char *argv[])
{
    /* calls to nav_register_function() with all the new procedures */
    return(nav_util_main(argc, argv));
}

The AIS installation package includes several files that you can use to become familiar with Attunity Connect procedures. These are:

- mathc: This file contains sample procedures. It includes main() as in the above example.
- prcsmxml: ADD metadata XML files for the math procedures.
- utlmainh: A header file that should be included in the main program.
- bldproc: A procedure that links the libnava library with the math module. This file generates a new mathmain file that is used instead of the navutil call. That is, 'run mathmain execute MyProc' is used to call Attunity Connect with the MyProc data source (the Attunity Connect procedure definition in the binding configuration).


Notes:

- When working in client/server mode, change the bldproc procedure to reproduce navutil, instead of mathmain.
- When using COBOL for the Attunity Connect procedure, you can only use HP NonStop NMCOBOL.

To use Attunity Connect procedures on an HP NonStop platform
1. Create a custom version of the main() function for the NAV_UTIL program.
2. Add a call to the NAVSETUP() function at the beginning of your main. This is an internal function that loads the AIS server environment.
3. In the custom main() function, register all user functions that will be referenced by the procedure driver. Use the following Attunity Connect function to register the functions:

   long nav_register_function(STRING fnc_name, STRING fnc_class, FNC_PT fnc_pt)

   where:
   - fnc_name: The name of the "external" function to be registered (that is, the LOAD function for a driver, the registration function for a data type, or a startup function).
   - fnc_class: The name of the group you want the function to be in (this is equivalent to the DLL module name on other platforms).
   - fnc_pt: A pointer to the user function.
4. Statically link the custom functions with the Attunity Connect library LIBNAVA, located in the subvolume where Attunity Connect is installed, to create a custom version of the NAV_UTIL program.

Example 61-1

#include "utlmain.h"

main(int argc, char *argv[])
{
    NAVSETUP();
    nav_register_function("AddEmployee", "CLASS1", add_employee);
    return(nav_util_main(argc, argv));
}

Load Modules and DLLs on MVS


The Procedure data source supports load modules and DLLs on MVS.

- Load module: This is a program and, as such, has one entry point. There is no need for a symbol name, since there is only one function.
- DLL: This includes several entry points, each of which can be mapped as a stored procedure using the Procedure data source. Therefore, every stored procedure must specify the entry point name. Note that the DLL is only loaded once into memory.


Descriptors on OpenVMS
Parameters passed by descriptor are always set up as static text descriptors:

- DSC$K_CLASS_S
- DSC$K_DTYPE_T
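As a rough illustration, a static text descriptor carries the string length, type, class, and a pointer to the data. The following C sketch assumes the standard OpenVMS <descrip.h> definitions; my_proc is a hypothetical procedure receiving a string by descriptor:

#include <descrip.h>   /* OpenVMS descriptor definitions */

long my_proc(struct dsc$descriptor_s *name_dsc)
{
    /* For a static text descriptor:
       name_dsc->dsc$b_dtype == DSC$K_DTYPE_T (text)
       name_dsc->dsc$b_class == DSC$K_CLASS_S (static)
       and dsc$w_length bytes of data are at dsc$a_pointer. */
    return 0;
}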

OS/400 Issues
Service Programs
Currently activation of programs is not supported. You can only use the Procedure data source to activate service programs.

Passing Parameters
Passing parameters by value is not supported on OS/400. If required, you can pass parameters by reference or descriptor.

Defining the Procedure Data Source


The process of defining a Procedure data source consists of two tasks:

- Defining the Procedure Data Source Connection
- Configuring the Procedure Data Source

Defining the Procedure Data Source Connection


The Procedure data source connection is set using Attunity Studio, in the Design perspective, Configuration view.

To define a Procedure data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your Procedure data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Procedure data source.
6. Right-click the Data sources folder and select New Data source.
7. Enter a name for the procedure data source in the Name field.
8. Select Procedure (Application Connector) from the Type field.
9. Click Finish.

Configuring the Procedure Data Source


After setting the binding, you can set data source properties, as follows:

To set the data source properties
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your Procedure data source.


4. Expand the Bindings folder and the binding with your procedure data source.
5. Expand the Data sources folder.
6. Right-click the Procedure data source and select Open. The Configuration editor is displayed.

Figure 61-1 Procedure Data Source Configuration Properties

7. Enter the information in the Authentication section, if necessary. You can define the following parameters:
   - User profile: Select the user profile that has permissions to access this data source. The available user profiles are in the list. For information on user profiles, see User Profiles.
   - User name: Enter the name of the user with access to this data source.
   - Password: Enter the password for the user with access to this data source.
   - Confirm Password: Enter the password again, to ensure it was entered correctly.
8. Configure the Procedure data source driver properties as required. For a description of the available parameters, see Configuration Properties.

Setting Up Procedure Data Source Metadata


Attunity metadata describes the procedure.

To define the Procedure data source
1. In the Design perspective Configuration view, right-click the procedure for which you want to manage the metadata, and select Edit metadata.

   The Metadata tab is displayed with the selected procedure data source highlighted in the tree.
2. Right-click Procedures, and select New Procedure.
3. Enter the procedure name that the data source will search for in the given DLL.
4. Enter the path and/or name for the DLL or click Browse and locate the DLL.
5. Select the source language of the DLL.
6. Click Finish. The procedure is now displayed in the Metadata view and the procedure opens for editing.

Defining Return Values


Return values for the procedure are specified in the General tab, as shown in the following figure:
Figure 61-2 The Procedure Properties General tab

Note: The Language field in the Procedure Properties General tab is intended for future implementation.

To specify return values
1. Click Add. The New Field screen opens.
2. Enter the return value name and click OK.
3. The default return data type is string. Edit this value by clicking the data type and selecting the relevant data type from the list.

The following table lists and describes the default properties that are displayed for each return value:


Table 61-6 Default Field Properties
- MECHANISM: The method by which this argument is passed/received by the procedure. Valid values are VALUE, DESCRIPTOR, and REFERENCE. For outer-level (non-nested) arguments, structure arguments (for the structure itself, not structure members), and variant arguments, the default value is REFERENCE. The default for all other columns is VALUE. A DESCRIPTOR can only be a static descriptor containing strings. Note: For OS/400 platforms, any parameter that is an argument to the function (that is, contains a dbCommand statement with a non-zero ORDER) cannot have a MECHANISM of VALUE. This arises from the fact that the natural size of an integer on the OS/400 stack is smaller than a pointer.
- ORDER: The ORDER for a return value is zero (0). Additional return value properties can be manually added by clicking Value for a property, and then clicking the ellipsis button. The DB Command string is defined as follows: property=value;property=value;...
- BASE_ALIGNMENT: The value for nested structure/variant fields. Valid values are SYSTEM (the default) or an integer greater than 0. This value is not case sensitive. Specifying an integer as the value for BASE_ALIGNMENT sets the initial offset, in increments of bytes, of the alignment of every structure and variant (until reset). For example, setting BASE_ALIGNMENT to 2 aligns structures and variants on a word boundary.
- MEMBER_ALIGNMENT: The value (until reset) for structure/variant fields. Valid entries are SYSTEM, YES, and NO. This value is not case sensitive. The default is the value at the table attribute level. Setting MEMBER_ALIGNMENT to YES specifies that the start of every structure member and variant member (until reset) is aligned on a boundary equivalent to the atomic size of that member. For example, a word is aligned on a word boundary.

Defining Input and Output Arguments


Input and output arguments for the procedure are specified in the Arguments tab, as shown in the following figure:


Figure 61-3 The Procedure Properties Arguments tab

To specify arguments
1. Click Add. The New Field screen opens.
2. Enter the argument name and click OK.
3. Click the Type field, and set the type of the argument (input, output, or input/output).

   Note: If an argument is defined as Input/Output, an additional argument is created in order to set the parameter's input properties.

4. The default argument data type is string. Edit this value by clicking the Data Type area and selecting the relevant data type from the list.
5. Use the Up and Down buttons to set the order of the arguments.

The following table lists and describes the default properties that are displayed for each argument:

Table 61-7 Input/Output Argument Default Properties
- MECHANISM: The method by which this argument is passed/received by the procedure. Valid values are VALUE, DESCRIPTOR, and REFERENCE. This entry is not case sensitive. For outer-level (non-nested) arguments, structure arguments (for the structure itself, not structure members), and variant arguments, the default value is REFERENCE. The default for all other columns is VALUE. A DESCRIPTOR can only be a static descriptor containing strings. Note: For OS/400 platforms, any parameter that is an argument to the function (that is, contains a dbCommand statement with a non-zero ORDER) cannot have a MECHANISM of VALUE. This arises from the fact that the natural size of an integer on the OS/400 stack is smaller than a pointer.
- ORDER: The procedure argument number. The order can be changed using the Up and Down buttons. Additional argument properties can be manually added by clicking Value for a property, and then clicking the ellipsis button. The DB Command string is defined as follows: property=value;property=value;...
- BASE_ALIGNMENT: The initial value for the current argument, and all subsequent structure/variant arguments. Valid values are SYSTEM (the default) or an integer greater than 0. This value is not case sensitive. Specifying an integer as the value for BASE_ALIGNMENT sets the initial offset, in increments of bytes, of the alignment of every structure and variant (until reset). For example, setting BASE_ALIGNMENT to 2 aligns structures and variants on a word boundary.
- MEMBER_ALIGNMENT: The initial value for the current argument and all subsequent structure/variant arguments. Valid entries are SYSTEM, YES, and NO. This value is not case sensitive. The default is the value at the table attribute level. Setting MEMBER_ALIGNMENT to YES specifies that the start of every structure member and variant member from this point on is aligned on a boundary equivalent to the atomic size of that member. For example, a word is aligned on a word boundary.
- LEVEL: The number of levels of indirection of the field pointer. This argument applies only to fields passed/received by REFERENCE. The argument value must be greater than 0 and defaults to 1 if not specified. Setting this token to 1 indicates that the address of the column is passed to the procedure. Setting the token to 2 indicates that you are using a pointer to another pointer.
- NULL: Used only for arguments passed by REFERENCE or DESCRIPTOR. This argument's value specifies what to pass to the procedure when the Attunity Connect buffer contains a null value for a nullable parameter. Valid entries differ depending on how the parameter is passed:
  - For parameters passed by REFERENCE, the valid entries are any integer from 0 through the value of the LEVEL parameter, where 0 passes a pointer to the data type's null value, and n passes a pointer that, when dereferenced (n-1) times, is NULL. This defaults to the value of the LEVEL parameter for REFERENCE parameters.
  - For parameters passed by DESCRIPTOR, the valid entries are 0 and 1, where 0 passes a pointer to the descriptor to the data type's null value, and 1 passes a NULL pointer. This defaults to 1 for DESCRIPTOR parameters.
  For example, assuming foo = NULL, a parameter with LEVEL=2 and NULL=1 would pass &foo, while a parameter with LEVEL=2 and NULL=2 would pass &&foo.
- EOS_VALUE: Optional, for output arguments. The value that marks the end of the stream. By default, the stream ends after each fetch. If there are any EOS_VALUE arguments, then all must match their respective field values to end the stream. This entry is case sensitive.
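To illustrate the LEVEL property, the following hypothetical C receivers show what the procedure sees for each indirection level (the names are illustrative only):

/* LEVEL=1: the address of the data is passed. */
void proc_level1(char *arg)
{
    /* arg points directly at the field data */
}

/* LEVEL=2: a pointer to a pointer is passed. */
void proc_level2(char **arg)
{
    /* *arg points at the field data; with NULL=2, a null value
       arrives as *arg == NULL */
}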


Testing the Procedure Data Source


For testing the Procedure data source, perform the following steps. For an actual implementation sample, see the following directory on the Attunity installation CD: Samples\Drivers\StoreProc

To test the Procedure data source
1. In the Design perspective Configuration view, right-click the data source and select Test.
2. Click Next. The wizard indicates whether the test succeeded.
3. Click Finish.

Executing a Procedure
You can execute a 3GL procedure by calling it as follows:
Call MyProc:MATH_SIMPLE(12,33)

where MyProc is treated as a data source name and MATH_SIMPLE is the name of a function defined in MyProc. You can also use the procedure in a select statement, even performing joins between the results returned by the procedure and other tables. The following example shows the procedure used in a SELECT statement:
SELECT * from MyProc:MATH_SIMPLE(12,33)

Attunity Connect treats the input variables of the procedure as parameters, and the output variables as a rowset produced by the procedure. For an actual implementation sample, see the following directory on the Attunity installation CD: Samples\Drivers\StoreProc


62
CICS Procedure Data Source
This section describes the Attunity CICS Procedure data source driver. It includes the following topics:

- Overview
- Configuration Properties
- Metadata
- Transaction Support
- Security
- Data Types
- Defining the CICS Procedure Data Source
- Setting-up the CICS Procedure Data Source Metadata
- Editing the XML in the Source Code

Overview
The CICS Procedure data source enables you to execute a CICS program using standard SQL from any supported Attunity client platform and API (for example, from Windows using ODBC). The data source uses the External CICS Interface (EXCI) and the EXCI mirror transaction to execute programs within a CICS region. The CICS Procedure data source uses the COMMAREA to pass input values to the CICS program and to receive the output, and therefore can only be used to execute COMMAREA programs. 3270-based CICS programs are not supported. After you define the new CICS program to the data source, it is listed as a stored procedure to SQL clients, enabling you to execute the program by calling it either directly in an SQL CALL statement, or within an SQL SELECT statement. For example:
CALL CICSPROG1 (100, TEST1);

Or
SELECT * FROM CICSPROG1 (100, TEST2);

Supported Versions and Platforms


Attunity CICS Procedure data source can be used with z/OS systems only. The following versions of CICS are supported:


- CICS version 4.1 or higher.
- CICS Transaction Server version 1.3 or higher.

Environmental Prerequisites
AIS uses EXCI to interface to CICS. EXCI requires some set up:

- IRC must be open. Use CEMT I IRC from the CICS screen to check your IRC status. If it is in closed state, set it to open.
- A specific connection must be set up. Use CEMT I CONNECTION to get the list of available connections. Note that you can only use specific connections that have a VTAM netname associated with them. The default available on most systems is BATCHCLI. Attunity provides a JCL for defining an Attunity connection; see the CICSCONF member in the USERLIB.
- An EXCI mirror transaction ID must be available. The default on most systems is transaction ID EXCI. You can use CEMT I TRA PROG(DFHMIRS) to get the list of EXCI transaction IDs available on your system.

Limitations

- The CICS Procedure data source does not support arrays. If the CICS program returns an array, you must use the CICS application adapter. For more information, see CICS Application Adapter (z/OS Only).
- The CICS Procedure data source does not support conversational programs.

Design Considerations
A single CICS Procedure data source can support one CICS region and is defined by a CICS APPLID and an IRC connection name. When creating the metadata for multiple CICS programs, it is recommended to use one data source unless you have different requirements of the different programs, such as:

- Programs from different CICS regions
- Using a different mirror transaction (for monitoring, workload, or security reasons)
- A different level of transaction support (one-phase commit vs. two-phase commit)

Configuration Properties
The following properties can be configured for the CICS procedure data source. You set the properties in Attunity Studio, Design perspective. For information on how to set data source properties in Attunity Studio, see Adding Procedure Data Sources.

- disableExplicitSelect: Relevant if the data source driver allows suppressing certain fields in SELECT * queries. If you disable this property, all fields appear in SELECT * results.
- exciTransid: Specifies the EXCI mirror transaction ID in the CICS region. On most systems you will usually find a default mirror transaction called EXCI. You can find out the transaction ID on your system using the following command from the CICS screen: CEMT INQUIRE TRANS PROG(DFHMIRS). The default value is EXCI.


- reusePipe: Specifies whether a PIPE on the EXCI connection is reused for multiple DPL requests. Valid values are true and false.
- transactionSupport: The CICS data source normally works without a transactional context. This means that CICS programs are activated within their own unit of work (SYNCONRETURN). The data source is capable, however, of working in 1PC mode or even 2PC mode. See Transaction Support.
- userInfo: Not currently used.

To get the list of connections defined on your system, you can issue the following command to CEMT: CEMT INQ CONN. The default connection supplied by IBM has a VTAM netname of BATCHCLI.
Note: The AIS installation includes a JCL for defining a CICS connection to be used with AIS. If you choose to use the AIS connection (netname ATYCLIEN), perform the following procedure:

- Either use the JCL in the NAVROOT.USERLIB(CICSCONF) member to submit the DFHCSDUP batch utility program to add the resource definitions to the DFHCSD dataset (see the IBM CICS Resource Definition Guide for further details), or use the instream SYSIN control statements in the NAVROOT.USERLIB(CICSCONF) member as a guide to defining the resources online using the CEDA facility.
- After the definitions have been added (via batch or using the CEDA facility), log on to CICS and issue the following command to install the resource definitions under CICS:

  CEDA INST GROUP(ATYI)

Henceforth, specify ATYCLIEN as the NETNAME.

Configuring Advanced Data Source Properties


You can set advanced properties for procedure data sources. For information on setting advanced properties, see Configuring Data Source Advanced Properties.

Metadata
The CICS Procedure data source requires Attunity metadata. The metadata describes the program to be executed, along with its input and output. Each CICS program to be activated requires a procedure definition in the data source ADD. You need to supply the following information:

- The program name to be executed in CICS.
- The layout of the COMMAREA (the input parameters and output data).

Attunity Connect treats the input variables as parameters, and the output variables as a rowset produced by the CICS program. If COBOL copybooks that describe the CICS program input and output records are available, you can use the Attunity Studio Metadata Import wizard to create the metadata for these programs; for more information, see Importing Procedure Metadata. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first 6 columns are ignored or not), first import the metadata from the copybooks that share the same settings, and then import the metadata from the other copybooks. Otherwise, the metadata must be manually defined, as described in Setting-up the CICS Procedure Data Source Metadata.

Transaction Support
The Attunity CICS Procedure data source can work with either one-phase commit or two-phase commit transactions, and can fully participate in a distributed transaction. By default, the Attunity CICS Procedure data source is configured as zero-phase commit. This means that transactions are not supported: rollbacks are not available, and all operations via EXCI are executed as SYNCONRETURN invocations.

Using Attunity Connect with One-phase Commit


The CICS Procedure data source can be set up as a one-phase commit data source. As such, CICS programs activated within the context of a transaction are activated with no SYNCONRETURN option in the EXCI DPL request. When the transaction is committed, ATRCMIT is called to trigger a sync point. Note the following points:

1. RRS must be configured and running on your system in order to use 1PC.
2. When working with 1PC, it is important to correctly configure the timeout of your EXCI mirror transaction. The DTIMEOUT parameter in the CEDA transaction definition must exceed the maximum expected transaction duration.
3. The default EXCI transaction is usually configured with a DTIMEOUT of 10 seconds, which may be problematic in terms of its short duration.

To enable one-phase commit
1. In Attunity Studio, in the Configuration perspective, double-click the relevant data source to open the Data Source Editor.
2. At the bottom of the editor, click the Advanced tab.
3. In the Transaction Type field, select 1PC.
4. Click Save.

Using Attunity Connect with Two-phase Commit


The following scenarios enable a CICS data source to participate in two-phase commit transactions:

1. As a full two-phase commit resource: In this scenario you must use CICS TS 1.3 or higher, and have the Resource Recovery Services (RRS) configured for your CICS region.
2. As a one-phase commit resource: In this scenario you must use CICS TS 1.3 or higher, but you do not need to have RRS installed. Attunity Connect can still manage a two-phase commit scenario in which one of the data sources participating in the transaction is a one-phase commit resource.
3. To use two-phase commit capability to access data on the z/OS computer, you must define each library in the ATTSRVR JCL as APF-authorized. To define a DSN as APF-authorized, enter the following command in the SDSF screen:

"/setprog apf, add, dsn=navroot.library, volume=ac002"

Where ac002 is the volume where AIS is installed and NAVROOT is the high-level qualifier where AIS is installed. If the AIS installation volume is managed by SMS, enter the following command in the SDSF screen when defining APF authorization:

"/setprog apf, add, dsn=navroot.library, SMS"

Make sure that you add these libraries to your APF list so they remain APF-authorized after IPL.

To use the CICS Procedure data source in two-phase commit transactions, you must make the following configuration changes:
1. Edit the following parameters under the Transactions section of the binding properties for the relevant binding configuration in the Attunity Studio Design perspective:

   - Set convertAllToDistributed to true.
   - Set logFile to provide the recovery log file dataset name.

   If RRS is not running, add ,NORRS to the logFile parameter, as follows:

   logFile=log,NORRS

   Where log is the log file dataset name. If this parameter is not specified, then the format is:

   logFile=,NORRS

   Note that the comma must be specified.

2. Edit the CICS Procedure data source in the Attunity Studio Design perspective, Configuration tab, and change the transactionSupport parameter value to 2PC or 1PC, according to the level of participation in the two-phase commit scenario.

Security
Depending on the security level on your mainframe, you may need to grant the following permissions to the Attunity server, in addition to the general security requirements:

- READ access to the EXCI library (e.g. SYS1.CICSTS.SDFHEXCI).
- UPDATE authority on the IRC connection (e.g. DFHAPPL.ATYCLIEN).
- Execute permission on the CICS programs you want to call and all their dependent resources.

Data Types
The CICS Procedure data source supports all ADD data types. For a list of all ADD data types, see ADD Supported Data Types. For further information on mapping COBOL types to ADD data types, see COBOL Data Types to Attunity Data Types. Input parameters can only be simple (real) data types. Output fields can be of simple types and variants (COBOL REDEFINES).

Defining the CICS Procedure Data Source


The process of defining a CICS procedure data source consists of two tasks:

- Defining the CICS Procedure Data Source Connection
- Configuring the CICS Procedure Data Source

Defining the CICS Procedure Data Source Connection


To define the CICS procedure data source connection
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine where you want to add your CICS procedure data source.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the CICS procedure data source.
6. Right-click the Data sources folder and select New Data Source. The New Data Source wizard opens.
7. Enter a name for the new CICS Procedure data source in the Name field.
8. Select CICS from the Type list.
9. Click Next. The Data Source Connect String screen opens.

10. Enter the connection parameters as follows:

   - Target System: Specify the VTAM APPLID of the CICS region.
   - Vtam NetName: Specify the VTAM netname of the IRC connection being used by the EXCI transaction. If you do not know the netname you are using in the CICS region, ask your CICS administrator, or issue the following CEMT command:

     CEMT INQ CONN
     STATUS: RESULTS - OVERTYPE TO MODIFY
      Con(ATYS) Net(ATYCLIEN)   Ins Irc Exci
      Con(EXCG)                 Ins Irc Exci
      Con(EXCS) Net(BATCHCLI)   Ins Irc Exci

The netname displayed on the screen is BATCHCLI (this is the default connection supplied by IBM upon installation of CICS). If you plan to use IBM defaults, then specify BATCHCLI as the Vtam_netName parameter. Otherwise, define a special connection (with EXCI protocol) and use the netname you provided there for this parameter.


Note: Attunity provides the netname ATYCLIEN, which can be used after the following procedure is carried out:

1. Use the JCL in the NAVROOT.USERLIB(CICSCONF) member to submit the DFHCSDUP batch utility program to add the resource definitions to the DFHCSD dataset (see the IBM CICS Resource Definition Guide for further details), or use the instream SYSIN control statement in the NAVROOT.USERLIB(CICSCONF) member as a guide to defining the resources online using the CEDA facility.
2. After the definitions have been added (via batch or using the CEDA facility), log on to CICS, and issue the following command to install the resource definitions under CICS: CEDA INST GROUP(ATYI)

11. Click Finish.

Configuring the CICS Procedure Data Source


To configure the CICS Procedure data source
1. Open Attunity Studio.
2. In the Design perspective, Configuration view, expand the Machines folder.
3. Expand the machine with your CICS procedure data source.
4. Expand the Bindings folder and the binding with your procedure data source.
5. Expand the Data sources folder.
6. Right-click the CICS Procedure data source in the Configuration view, and select Open. The Configuration editor is displayed.


Figure 62-1 CICS Procedure Data Source Configuration Properties

7. If necessary, edit the information in the Connection section. For a description, see the connection string parameters in Defining the CICS Procedure Data Source Connection.
8. Configure the CICS Procedure data source parameters as required. For a description of the available CICS Procedure parameters, see Configuration Properties.

Setting-up the CICS Procedure Data Source Metadata


This section includes the following topics:

- Importing Metadata from COBOL
- Editing the XML in the Source Code

Importing Metadata from COBOL


The following information is needed during the import:

- COBOL copybooks that describe the program's COMMAREA for input and output. The copybooks need to be made available, either by copying them to the computer on which you use Attunity Studio, or via an FTP connection to a remote location.
- The names of the CICS programs to be executed via the CICS Procedure data source.
- A basic knowledge of the programs and their COMMAREA structure for input/output.

To define CICS metadata
1. Open Attunity Studio.


2. Expand the Machines folder and then expand the machine you are using.
3. Expand the binding where you want to import the metadata.
4. Expand the Data sources folder.
5. Right-click the CICS Procedure data source where you want to import the metadata, and select Show Metadata View. The Metadata tab is displayed with the data source displayed in the Metadata view.

6. Right-click Imports, and select New Import. The New Import screen opens.

7. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
8. Select one of the following from the Import Type list:
   - CICS Import Manager
   - COBOL Import Manager for Data Sources

9. Click Finish. The Metadata Import wizard is displayed.

10. Click Add. The Add Resource dialog box opens. This screen lets you select files from the local computer, or copy the files from another computer.
Figure 62-2 The Add Resources screen

11. If the files are on another computer, right-click My FTP Sites and select Add. The Add FTP Site screen opens (see step 12).


12. Enter the server name or IP address where the COBOL copybooks reside and, if not using an anonymous connection, enter a valid username and password to access the computer. Once the required information is entered, the Add FTP Site screen should look like the following figure:

Figure 62-3 The Add FTP Site screen

13. Click OK. The new FTP site is added to the list of available sites.
14. To browse for and transfer the metadata definition files, access the remote computer using the username as the high-level qualifier. You can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
15. Select the required files for the import and click Finish to start the transfer. The selected files are now displayed in the Get Input Files screen of the Metadata Import wizard.
16. Click Next. The Apply Filters screen is displayed, as shown in the following figure:


Figure 62-4 The Apply Filters screen

17. Apply filters to the copybooks if required. The following COBOL filters are available:

Table 62-1 Available Filters
- Compiler source: The compiler vendor.
- Ignore after column 72: Ignores columns 73 to 80 in the COBOL copybook.
- Ignore first 6 columns: Ignores the first six columns in the COBOL copybook.
- Replace hyphens (-) in record and field names with underscores (_): Replaces all hyphens in either the record or field names in the metadata generated from the COBOL with underscore characters.
- Prefix nested columns: Prefixes all nested columns with the previous level heading.

In addition, you can specify a search string and the string that will replace it in the generated metadata, and whether the replacement is dependent on the case of the found string:
- Case sensitive: Specifies whether the search is sensitive to the case of the search string.
- Find: Searches for the specified value.
- Replace with: Replaces the value specified for Find with the value specified here.

18. Click Next. The Create Procedures screen is displayed (see step 21).
19. Click Add to add a procedure.
20. Specify the following for each procedure:


   - Procedure Name: A name for the procedure. This name is used by Attunity Connect to access the procedure. A default, editable procedure name is automatically provided.
   - Record Name: The record name of the I/O structure. The record is selected from a list of records extracted from the designated COBOL copybooks. You must specify a record for each procedure.
   - Program Name: The name of the CICS program to be executed by the procedure. You must specify a program for each procedure.
   - EXCI TRANSID: The mirror transaction ID in the CICS region (the default value is EXCI).

21. The fields from the selected record are displayed in the Table Details area. Check the fields from this record that you want to use for the input and output of the program that is called by the procedure. Input can only be set for simple field types. Output can be set for all field types except arrays (COBOL OCCURS). The following figure shows two procedures that call a program called PROG1, which expects to receive the customer key (CUST_KEY) as input and returns the CUSTOMERS record without the MKT_SEGMENT value as output:

Figure 62-5 The Create Procedures screen

22. Add as many procedures as necessary, and then click Next.


23. The last two steps let you determine how you want to handle arrays in your metadata, and then import the metadata. For more information, see:

   - Metadata Model Selection
   - Import the Metadata

The metadata is imported to the server computer.

Editing the XML in the Source Code


After defining the new procedure in the Metadata tab, you need to edit various elements in the XML source code. This section includes the following topics:

- Editing the <procedure> Statement
- Editing the <field> Statement
- Editing the <parameters> Statement
- Sample ADD Metadata

Editing the <procedure> Statement


The <procedure> statement specifies the program to run. Within a <procedure> statement you must include a <dbCommand> statement to specify the program to run, the TRANSID, and the COMMAREA size. Syntax:

<dbCommand>PROGRAM=value; TRANSID=value; COMMAREASIZE=value; OUTPUTOFFSET=value</dbCommand>

Where:

- PROGRAM: The name of the program to run in the COMMAREA.
- TRANSID: The TRANSID for the program. The TRANSID is the mirror transaction ID in the CICS region, such as EXCI, or a copy of this transaction. If a value is not specified, then EXCI is used.
- COMMAREASIZE: The size required to hold the output from the program in the COMMAREA buffer. If a value is not specified, the size of the record is dynamically determined. Setting this value allows greater control over the COMMAREA size in case all the data is mapped.
- OUTPUTOFFSET: The offset in the COMMAREA buffer where the mapping of the stored procedure fields begins. If not specified, the output offset is dynamically set to immediately follow the input.

Editing the <field> Statement


The <field> statement defines the layout of the output COMMAREA buffer. Within a <field> statement you can include a <dbCommand> statement for use when accessing a CICS program with a context. Syntax:
<dbCommand>EOS_VALUE=value; OFFSET=offset;</dbCommand>


Where:

- EOS_VALUE: Specifies the value that signals the end of the program, assuming that the program has a context. The value assigned can be either a string or an integer value that, when encountered, causes the program to end. Use this property when you need to make consecutive calls to the CICS program in order to return multiple records and the program can provide a context field to be passed as input on the next call, or has its own context. If you specify more than one EOS_VALUE, then a logical OR condition is implied. If an EOS_VALUE is specified neither for a field nor for a parameter, then the CICS program is assumed not to have a context and only one row is returned. If a value is specified both for a field and for a parameter, then the first value encountered causes the program to end.
- OFFSET: The offset attribute allows you to control the field offset of an output field within the COMMAREA buffer. If a value is not specified for the offset, it is dynamically evaluated to immediately follow the preceding field. If an offset is not specified for the first output field, the offset is dynamically evaluated either to immediately follow the last input parameter, or according to the output offset attribute in the procedure <dbCommand>. Note that by setting the offset of an input parameter and an output field to the same value, you can map an input/output area.

Editing the <parameters> Statement


The <parameters> statement defines the layout of the input COMMAREA buffer. Within a <parameters> statement you can include a <dbCommand> statement for use when accessing a transaction with a context. You do not need to specify a <dbCommand> statement for a transaction without a context. Syntax:
<dbCommand>EOS_VALUE=value;REAPPLY; OFFSET=offset;</dbCommand>

Where:

- EOS_VALUE: Specifies the value that signals the end of the program, assuming that the program has a context. The value assigned can be either a string or an integer value that, when encountered, causes the program to end. Use this property when you need to make consecutive calls to the CICS program in order to return multiple records and the program can provide a context field to be passed as input on the next call, or has its own context. If you specify more than one EOS_VALUE, then a logical OR condition is implied. If an EOS_VALUE is specified neither for a field nor for a parameter, then the CICS program is assumed not to have a context; the program will only be executed once and one row returned. If a value is specified both for the parameter and for a field, then the first value encountered causes the transaction to end.
- REAPPLY: The original value supplied for the parameter is reapplied when the program modifies the parameter value. This attribute is only relevant when executing stream-type programs.
- OFFSET: The offset attribute allows you to control the field offset of an input field within the COMMAREA buffer. If a value is not specified for the offset, it is dynamically evaluated to immediately follow the preceding parameter. If an offset is not specified for the first input field, the offset is dynamically evaluated to zero. Note that by setting the offset of an input parameter and an output field to the same value, you can map an input/output area.

Sample ADD Metadata


The following sample shows the XML representation of a procedure named proc1:

<procedure name="proc1">
  <dbCommand>PROGRAM=PROG1;TRANSID=EXCI;COMMAREASIZE=220</dbCommand>
  <fields>
    <field name="CUST_KEY" datatype="numstr_u" size="4">
      <dbCommand>OFFSET=0</dbCommand>
    </field>
    <field name="NAME" datatype="string" size="25">
      <dbCommand>OFFSET=4</dbCommand>
    </field>
    <field name="ADDRESS" datatype="string" size="40">
      <dbCommand>OFFSET=29</dbCommand>
    </field>
    <field name="NATION_KEY" datatype="numstr_u" size="4">
      <dbCommand>OFFSET=69</dbCommand>
    </field>
    <field name="PHONE" datatype="string" size="15">
      <dbCommand>OFFSET=73</dbCommand>
    </field>
    <field name="ACCT_BAL" datatype="numstr_u" size="8">
      <dbCommand>OFFSET=88</dbCommand>
    </field>
    <field name="MKT_SEGMENT" datatype="string" size="10">
      <dbCommand>OFFSET=96</dbCommand>
    </field>
    <field name="COMMENT" datatype="string" size="114">
      <dbCommand>OFFSET=106</dbCommand>
    </field>
  </fields>
  <parameters>
    <field name="CUST_KEY" datatype="numstr_u" size="4">
      <dbCommand>OFFSET=0</dbCommand>
    </field>
  </parameters>
</procedure>



Part X
Adapters Reference
This part contains the following topics:

- CICS Application Adapter (z/OS Only)
- COM Adapter (Windows Only)
- Pathway Application Adapter (HP NonStop Only)
- IMS/TM Adapter (z/OS Only)
- Legacy Plug Application Adapter
- Tuxedo Application Adapter (UNIX and Windows Only)

63
CICS Application Adapter (z/OS Only)
This section includes the following topics:

- Overview
- Transaction Support
- Configuration Properties
- Defining the CICS Application Adapter
- Setting Up CICS Application Metadata
- Testing CICS Application Adapter Interactions

Overview
The CICS Application Adapter allows users of the AIS application engine to activate CICS programs from any of the supported AIS application interfaces, including JCA, .NET, COM, XML, and 3GL. The CICS Application Adapter accepts client requests, constructs a COMMAREA, activates the CICS program using EXCI, and translates the response back to XML. Each program to be activated must be predefined in the Attunity Data Dictionary (ADD). The COMMAREA layout is usually imported from COBOL copybooks. Several options exist regarding unit-of-work (UOW) scope, ranging from each program activation residing in its own UOW, to larger UOWs that include other data sources in a two-phase commit transaction.

Supported Versions and Platforms


CICS Application Adapters can be used in conjunction with z/OS systems only. The following versions of CICS are supported:

- CICS version 4.1 or higher.
- CICS Transaction Server version 1.3 or higher.

Supported Features
The CICS application adapter supports the following key features:

- Activates any COMMAREA-based CICS program.
- Supports zero-phase commit, one-phase commit, and two-phase commit transactions.
- All AIS data types and constructs are supported, including OCCURS clauses and REDEFINES.

Environmental Prerequisites
AIS uses EXCI to interface to CICS. EXCI requires some set up:

- IRC must be open. Use CEMT I IRC from the CICS screen to check your IRC status. If it is in closed state, set it to open.
- A specific connection must be set up. Use CEMT I CONNECTION to get the list of available connections. Note that you can only use specific connections that have a VTAM netname associated with them. The default available on most systems is BATCHCLI. Attunity provides a JCL for defining an Attunity connection; see the CICSCONF member in the USERLIB.
- An EXCI mirror transaction ID must be available. The default on most systems is transaction ID EXCI. You can use CEMT I TRA PROG(DFHMIRS) to get the list of EXCI transaction IDs available on your system.

Limitations

- 3270 programs are not supported.
- Conversational transactions are not supported.
- The CICS adapter requires the use of a specific connection; generic connections are not supported (e.g. the default EXCS connection is supported via NETNAME BATCHCLI, while the default generic connection EXCG is not supported).
- REDEFINES are supported on both input and output as the AIS variant metadata construct. However, when specifying a variant without a selector field in the program output, only the first variant case is returned.

Transaction Support
In the context of transaction support, the CICS Application Adapter can be configured in one of three ways:

- Zero-phase commit adapter (default): Each CICS program invocation resides within its own unit of work. CICS programs are activated using EXCI with SYNCONRETURN.
- One-phase commit adapter: Requires CICS TS 1.3 or higher. Requires RRS to be installed and running. CICS programs are activated using EXCI with NOSYNCONRETURN. The client indicates a commit or a rollback using the appropriate client interface. A commit request triggers an ATRCMIT call; a rollback request triggers an ATRBACK call. This allows the client application to control the unit-of-work boundaries. When working with 1PC, it is important to correctly configure the time-out of your EXCI mirror transaction. The DTIMEOUT parameter in the CEDA transaction definition must exceed the maximum expected transaction duration. The default EXCI transaction is usually configured with a DTIMEOUT of 10 seconds, which may be problematic in terms of its short duration.
- Two-phase commit adapter: Requires CICS TS 1.3 or higher. Requires RRS to be installed and running. The Attunity server-started task communicates with the NAVCRM server to receive an RRS context ID. All CICS program invocations are activated as part of that RRS context. The governing transaction manager (usually an application server transaction manager) views this application adapter as a 2PC resource manager. The required mechanisms for preparing the transaction, recovery of the transaction after failure, and transaction status inquiries are all handled by the application adapter.

Note: The term transaction in this section refers to the unit of work in the database sense of the word. This should not be confused with the use of the term transaction in the CICS world.

Configuration Properties
The following parameters can be configured for the CICS adapter in the Attunity Studio Design perspective, Configuration view, Configuration Properties editor. For information on how to add adapters to Attunity Studio, see Adding Application Adapters.

- exciTransid: The four-character EXCI MRO transaction ID (program DFHMIRS). The IBM default for the EXCI transaction ID is EXCI.
- targetSystemApplid: The VTAM applid of the target CICS system. You can determine this value by activating the CEMT transaction on the target CICS system; the legend APPLID=target_system appears in the bottom right corner of the screen.
- transactionSupport: The level of transaction support for this adapter. To set transaction support, refer to Transaction Support.
- vtamNetname: The VTAM netname of the specific connection used by EXCI (and MRO) to relay the program call to the target CICS system. You can issue the command CEMT INQ CONN to get the list of connections defined on your system. The default connection supplied by IBM has a VTAM netname of BATCHCLI.

A sample configuration sketch follows the note below.
Note: The AIS installation includes a JCL for defining a CICS connection to be used with AIS. If you choose to use the AIS connection (netname ATYCLIEN), perform the following procedure:

1. Either use the JCL in the NAVROOT.USERLIB(CICSCONF) member to submit the DFHCSDUP batch utility program to add the resource definitions to the DFHCSD dataset (see the IBM CICS Resource Definition Guide for further details), or use the instream SYSIN control statements in the NAVROOT.USERLIB(CICSCONF) member as a guide to defining the resources online using the CEDA facility.
2. After the definitions have been added (via batch or using the CEDA facility), log on to CICS and issue the following command to install the resource definitions under CICS:

   CEDA INST GROUP(ATYI)

3. Henceforth, specify ATYCLIEN as the NETNAME.
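To make the configuration properties above concrete, the following is a minimal sketch of how a CICS adapter entry might look in the binding configuration XML. The container element names (adapter, config) and the transactionSupport value are assumptions made for illustration, and the applid and adapter names are invented; in practice you set these properties in Attunity Studio rather than by editing the XML by hand:

<adapter name="cicsadp" type="cics">
    <config exciTransid="EXCI"
            targetSystemApplid="CICSPROD"
            transactionSupport="0PC"
            vtamNetname="ATYCLIEN"/>
</adapter>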

Defining the CICS Application Adapter


The process of defining a CICS application adapter consists of the following tasks:

- Defining the CICS Application Adapter Connection
- Configuring the CICS Application Adapter
- Setting Up CICS Application Metadata
- Refining CICS Application Adapter Metadata

Defining the CICS Application Adapter Connection


The CICS adapter connection is set using the Design perspective Configuration view in Attunity Studio.

To define the adapter connection:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the CICS adapter.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the CICS adapter.
6. Right-click the Adapters folder and select New Adapter. The New Adapter dialog box is displayed.
7. Enter a name for the adapter in the Name field.

   Note: The word event is a reserved word and cannot be used when naming an application adapter.

8. Select CICS from the Type list.
9. Click Finish.

Configuring the CICS Application Adapter


After defining the connection, you set the adapter properties.

To configure the CICS adapter:
1. Open Attunity Studio.
2. Expand the Machines folder.
3. Expand the machine with your CICS adapter.
4. Expand the Bindings folder.
5. Expand the binding with the CICS adapter.
6. Expand the Adapters folder.
7. Right-click the CICS adapter that you want to work with and select Open. The adapter Configuration editor is displayed.

Figure 63-1 CICS Adapter Configuration Properties

8. Configure the adapter configuration parameters as required. For a description of the available parameters, see Configuration Properties.
9. Click Finish.

Setting Up CICS Application Metadata


After setting up the Binding, define the Metadata (an Adapter Definition) for the CICS adapter. The adapter definition includes two main sections:

- A list of interactions. Each interaction corresponds to a single CICS program to be invoked. It includes details such as the eight-character CICS program name and references to the input and output records describing the COMMAREA used for the program activation.
- A schema section, including a group of record definitions, usually derived from a COBOL copybook, describing the layout and structure of the COMMAREA input and output.

If COBOL copybooks describing the COMMAREA layout are available, you can import the metadata using the import utility in the Design perspective Metadata tab in Attunity Studio. See also Importing Attunity Metadata from COBOL. A hedged sketch of such a definition follows.
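As a rough sketch only (the interaction, record, and program names here are invented, and the exact attribute set is produced by the import rather than written by hand), an adapter definition pairing one interaction with its COMMAREA record might look like this:

<interaction name="getCustomer" mode="sync-send-receive"
             program="CUSTPGM" input="CUSTOMER_REC" output="CUSTOMER_REC"/>
<schema>
    <record name="CUSTOMER_REC">
        <field name="CUST_ID" type="string" size="8"/>
        <field name="CUST_NAME" type="string" size="30"/>
        <field name="BALANCE" type="int" nativeType="int4"/>
    </record>
</schema>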

Importing Attunity Metadata from COBOL


AIS provides an import utility for COBOL copybooks describing the COMMAREA layout. The AIS application connectivity infrastructure provides the ability to specify different input and output records for an interaction. In the case of CICS, the COMMAREA layout described in the COBOL copybook is typically both the input and the output; during the import process, users therefore usually provide the same record name for input and output. Some further manual refinement can be done outside the import process, as described in Refining CICS Application Adapter Metadata. The following information is needed during the import:

- COBOL copybooks, which are copied to the machine running Attunity Studio as part of the import procedure.
- The names of the CICS programs to be executed.

To define CICS application adapter metadata:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, right-click the adapter and select Edit Metadata. The Metadata tab is displayed with the CICS adapter displayed in the Metadata view.
3. Right-click Imports under the adapter in the view and select New Import.
4. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
5. Select CICS Import Manager as the import type.
6. Click Finish. The Metadata Import Wizard is displayed.
7. Click Add in the Import Wizard to add COBOL copybooks. The Add Resource screen is displayed, providing the option of selecting files from the local machine or copying the files from another machine using FTP. (If the metadata is provided in a number of COBOL copybooks with different filter settings, such as whether the first six columns are ignored or not, first import the metadata from copybooks with the same settings, and then import the metadata from the other copybooks.)

Figure 63-2 Add Resource Screen

8. If the files are on another machine, right-click My FTP Sites and select Add.


9. Set the FTP data connection by entering the server name where the COBOL copybooks reside and, if not using anonymous access, a valid username and password to access the machine.
10. To browse and transfer the files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.

Figure 63-3 Add Resource Screen

11. Select the files to import and click Finish to start the transfer.

The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In such a case, repeat the import process. You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. The selected files are displayed in the Get Input Files screen.


Figure 63-4 Get Input Files Screen

12. Click Next. The Apply Filters screen is displayed.

Figure 63-5 Apply Filters Screen


13. Apply filters to the copybooks, as needed. The following filters are available:

- COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor.
- Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
- Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
- Prefix nested column: Prefix all nested columns with the previous level heading.
- Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
- Case sensitive: Specifies whether to consider case sensitivity or not.
- Find: Searches for the specified value.
- Replace with: Replaces the value specified in the Find field with the value specified here.

14. Click Next. The Add Interactions screen is displayed.

Figure 63-6 Add Interactions Screen


15. Click Add to add interactions for the CICS adapter. You can change the default name that is specified for the interaction and then specify the mode of the interaction, which can be one of the following:

- sync-receive: A response is expected from the program; that is, the program returns a result.
- sync-send: The program expects an input.
- sync-send-receive: The program expects an input and returns a result. This is the default mode.

You specify an input record used by the program associated with the interaction from the drop-down list in the Input column. This list is generated from the COBOL copybooks specified at the beginning of the procedure. Select a relevant record for the interaction.

Note: You must specify an input record for each interaction before clicking Next. If the interaction does not require an input record (sync-receive mode), the record specified here is ignored.

If the mode is either sync-receive or sync-send-receive, you must also specify an output record used by the program from the drop-down list in the Output column. Select a relevant record for the interaction. Finally, for each interaction, specify the program that you want associated with the interaction.

Note: You must specify a program name for each interaction.

The following figure shows an interaction that calls a program called CUST, which expects to receive input and returns a result.


Figure 63-7 Add Interactions Screen

16. Add as many interactions as necessary and then click Next.

The Import Metadata screen lets you import the metadata to the z/OS machine or leave the generated metadata on the Attunity Studio machine, to be imported later.


Figure 63-8 Import Metadata Screen

17. Select Yes to transfer the metadata to the mainframe machine and click Finish. The metadata is imported to the mainframe machine.

Note: After performing the import, you can view the metadata in the Metadata view. You can also make any fine adjustments to the metadata and maintain it, as necessary. See Working with Application Adapter Metadata.

Refining CICS Application Adapter Metadata


The automated import process has no way of distinguishing between the input portions and output portions of the COMMAREA layout. You can further refine the generated definition by splitting the interaction input and output into two separate records, as follows.

To refine the CICS application adapter metadata:
1. In the Design perspective Configuration view, expand the Machines folder and then expand the machine you are working with.
2. Expand the Bindings folder and then expand the binding with the adapter metadata you are working with.
3. Expand the Adapters folder.
4. Right-click the required adapter and select Show Metadata View. The Metadata view opens.
5. Expand the adapter.


6. Under Schema, right-click the relevant record and select Copy.
7. Right-click Schema and select Paste. You are prompted to provide a new name for the replicated record.
8. Add _output (or another appropriate differentiator) to the end of the record name and click OK.
9. Under Interactions, double-click the imported interaction(s).
10. Modify the Output to reflect the name of the replicated record. At this point you can start refining the input and output records separately.
11. To refine the input record, double-click it. For input fields, you can optionally provide a default value by specifying the default property in the Property sheet; the default value is used if the client application does not provide a value as part of the request. For output fields, in the Property sheet, ensure that the private field is set to true.
12. To refine the output record, in the Property sheet, ensure that the private field is set to false.

Note: You can also use this technique to hide output fields that are of no interest to the client application.
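As a rough sketch of the result (the field names are invented, and the default and private attributes shown correspond to the Property-sheet settings described above), a split pair of records might end up looking like this in the adapter definition:

<record name="CUST_REC">
    <field name="CUST_ID" type="string" size="8" default="00000000"/>
    <field name="STATUS" type="string" size="2" private="true"/>
</record>
<record name="CUST_REC_output">
    <field name="CUST_ID" type="string" size="8" private="true"/>
    <field name="STATUS" type="string" size="2"/>
</record>

Here CUST_ID is used only on input (so it is hidden in the output record), and STATUS is returned only on output (so it is hidden in the input record).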

Testing CICS Application Adapter Interactions


Follow these steps to test adapter interactions.

To test adapter interactions:
1. In the Configuration view, right-click the application adapter and select Edit Metadata. The Metadata tab is displayed with the CICS application adapter displayed in the Metadata view.
2. Expand the Interactions folder.
3. Right-click the interaction you want to test and select Test. The Test Interaction wizard is displayed.
4. Click Next. The parameters specification wizard is displayed.
5. Specify values for the parameters as necessary, and click Next. The test result is displayed.
6. Click Finish.


64
COM Adapter (Windows Only)
This section includes the following topics:

- Overview
- Data Types
- Registering the COM Application
- Defining the COM Application Adapter
- Setting Up COM Application Interactions
- Defining COM Data Types

Overview
The COM Application Adapter provides connectivity to simple COM-based applications. COM objects are defined with interfaces, which group public methods into sets. The COM application adapter treats each COM method as an interaction, specified in an adapter definition. The definition functions as the glue linking the COM application adapter and the COM object accessed by the adapter. This adapter definition provides the run-time executable of the adapter with definitions for the following:

- The COM object the adapter accesses.
- The interactions/methods the adapter invokes (the COM adapter can run interactions with up to 10 parameters).
- The properties of the COM object that are accessible to the adapter.
- The parameters of the COM object.

Supported Versions and Platforms


The COM application adapter can be used with all Windows operating systems and platforms. For information on which Windows versions are supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

Data Types
The following table lists the supported COM data types and their equivalent data types in AIS:


Table 64-1 COM Adapter Data Types Mapping

COM Named Type      Value  C/Windows Type      ACX Generated Type
VT_I1               16     char                type="int" nativeType="int1"
VT_I2               2      short               type="int" nativeType="int2"
VT_I4               3      long                type="int" nativeType="int4"
VT_I8               20     __int64             type="int" nativeType="int8"
VT_R4               4      float               type="float"
VT_R8               5      double              type="double"
VT_CY               6      CY                  N/A
VT_DATE (1)         7      DATE                type="date" nativeType="ole_date"
VT_BSTR             8      BSTR                type="string"
VT_ERROR            10     SCODE               type="int" nativeType="uint4"
VT_BOOL             11     VARIANT_BOOL        type="Boolean"
VT_DECIMAL (2)      14     DECIMAL             type="ole_decimal"
VT_UI1              17     BYTE                type="int" nativeType="uint1"
VT_UI2              18     USHORT              type="int" nativeType="uint2"
VT_UI4              19     ULONG               type="int" nativeType="uint4"
VT_UI8              21     unsigned __int64    N/A
VT_INT              22     int                 type="int" nativeType="int4"
VT_UINT             23     UINT                type="int" nativeType="uint4"
VT_USERDEFINED (3)  29     struct              type=nameOfUserDefinedType
VT_HRESULT          25     HRESULT             type="int" nativeType="uint4"
VT_LPSTR            30     char *              type="string"
VT_LPWSTR           31     wchar_t *           type="string"

(1) Dates are interpreted according to the user-defined locale definitions.
(2) Decimal does not support the ByReference attribute.
(3) Defining a user-defined data type is described below.


Registering the COM Application


In order to use the COM adapter, you must first register the COM application in Windows.

To register the COM application:
1. From the Windows Start menu, select Run. The Run screen is displayed.
2. Run the following command:

   regsvr32 dll

   where dll is the name of the COM application DLL.

Defining the COM Application Adapter


The COM adapter is set using the Design perspective Configuration view in Attunity Studio.

To define the adapter:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the COM adapter.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the COM adapter.

   Note: Adding a machine is described in Setting up Machines.

6. Right-click the Adapters folder and select New Adapter. The New Adapter dialog box is displayed.
7. Enter a name for the adapter in the Name field.
8. Select COM from the Type list.
9. Click Finish.

Setting Up COM Application Interactions


After setting up the binding, define the metadata (an adapter definition) for the COM adapter. The adapter definition describes, for each required interaction, the COM object and method to be executed. AIS provides a utility to generate a template adapter definition, which you can modify and then import as a valid adapter definition to the repository.

To generate the adapter definition:
1. Run NAV_UTIL AUTOGEN with the -new option, as follows:

   Nav_Util autogen com_adapter -new answer.xml

   where adapter is the name specified in the binding configuration for the adapter. An XML file called answer.xml is generated with the following content:

   <autogen ProgId="sample_Prog_Id"/>
   <autogen ProgId="sample_Prog_Id" typeLib= ignoreList="Equals,GetHashCode,GetType,ToString"/>

   where:
   - ProgId: The program ID of the COM object, as defined in the Windows registry.
   - typeLib (optional): A type library that overrides the default (which is the one held in the registry).
   - ignoreList (optional): A comma-delimited (case-sensitive) list of functions to be ignored upon adapter schema generation. It defaults to the internal interfaces contributed by .NET interop.

2. Change sample_Prog_Id to the program ID of the COM object, as defined in the Windows registry. For example:

   <autogen ProgId="trig.TrigCls"/>

3. Import the definition to the repository:

   Nav_Util autogen adapter answer.xml

   where adapter is the name specified in the binding configuration for the adapter.

An adapter definition called adapter is generated and imported to the repository.

COM Adapter Attributes


The adapter definition describes the COM object whose program ID is specified in the answer file. The methods specified in the COM object are listed as interactions in the adapter definition. There are two groups of definitions:

- Super-methods by which the application can get and put property values collectively:

  <interaction name='comPropertyGet' description='Get properties from COM' mode='sync-send-receive' input='comPropertyGet' output='comPropertyGetResponse'/>
  <interaction name='comPropertyPut' description='Put properties onto COM' mode='sync-send-receive' input='comPropertyPut' output='comPropertyPutResponse'/>

- A list of the interactions that correspond to methods found in the COM object.

The definition uses the XML protocol described in Application Adapter Definition.

Record Level Attributes


The following attributes are specific to the COM adapter and are used at the record level:

- EntryRef: The name of the method to be invoked within the object.
- IID: The UUID of a user-defined type. This attribute is used for user-defined data types only.
- libIID: The UUID of the library in which the user-defined data type is defined. This attribute is used for user-defined data types only.
- ObjectRef: A ProgID or UUID of the COM object to which this input record refers.
- ParamCount: The number of parameters passed to the method.


Field Level Attributes


The following attributes are specific to the COM adapter and are used at the field level:

- Usage: Specifies what the COM adapter does with this field. One of the following:
  - InstanceTag: Names an object instance.
  - Property: Treats the field as a property.
  - Parameter: Passes the field value to/from a method as a parameter.
  - RetVal: Specifies that the field holds the return value of a method.
- COMtype: Specifies the data type of the field as recognized by COM, using explicit COM enumeration values (for details, see Data Types).

Note: After performing the import, you can view the metadata in the Metadata tab. You can also make any fine adjustments to the metadata and maintain it, as necessary. For details, see Adapter Metadata General Properties.
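Tying the record-level and field-level attributes together, the following is a hedged sketch of what a method interaction's input record might look like. The object, method, and field names below are invented for illustration, and the generated definition produced by NAV_UTIL AUTOGEN should be taken as the authoritative form:

<record name="Add" ObjectRef="math.MathCls" EntryRef="Add" ParamCount="2">
    <field name="a" type="int" nativeType="int4" Usage="Parameter" COMtype="3"/>
    <field name="b" type="int" nativeType="int4" Usage="Parameter" COMtype="3"/>
    <field name="result" type="int" nativeType="int4" Usage="RetVal" COMtype="3"/>
</record>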

Defining COM Data Types


The COM adapter supports user-defined data types. The data type is defined as a record structure in the schema part of the adapter definition. The following is a Visual Basic definition of a user-defined type, which is part of a COM object:

Public Type PersonRecord
    ID As Integer
    firstName As String
    lastName As String
    birthDate As Date
End Type

Running NAV_UTIL AUTOGEN against this code produces the following definition:

<record name="PersonRecord" IID="{F81A190B-F903-424F-898D-9F23B7BD5EB6}" libIID="{C140C9C8-0F9C-41DA-ADBA-B367DA66791B}">
    <field name="ID" type="int" nativeType="int2" required="true" COMtype="2"/>
    <field name="firstName" type="string" nativeType="string" required="true" COMtype="8"/>
    <field name="lastName" type="string" nativeType="string" required="true" COMtype="8"/>
    <field name="birthDate" type="date" nativeType="ole_date" required="true" COMtype="7"/>
</record>


65
IMS/TM Adapter (z/OS Only)
This section includes the following topics:

- Overview
- Transaction Support
- Configuration Parameters
- Defining the IMS/TM Application Adapter
- Setting Up the IMS/TM Application Metadata

Overview
You can execute a program via IMS/TM using the IMS/TM Application Adapter.

Supported Versions and Platforms


IMS/TM application adapters can be used with z/OS systems only. For information on supported IMS/TM versions, see Attunity Integration Suite Supported Systems and Resources.

Transaction Support
The IMS/TM Application Adapter supports Two-phase Commit and can fully participate in a distributed Transaction when the transaction environment property convertAllToDistributed is set to true. To use the IMS/TM application adapter with 2PC, you must have RRS installed and configured.

Note: If RRS is not running, the Data Source can participate in a distributed transaction, as the only one-phase commit data source, if the logFile parameter is set to NORRS in the Transaction section of the binding properties for the relevant binding configuration, in the Attunity Studio Design perspective Configuration view. The XML representation is as follows:

<transactions logFile="log,NORRS"/>

where log is the high-level qualifier and name of the log file. If a log file is not specified, the format is the following:

<transactions logFile=",NORRS"/>

That is, the comma must be specified. For further details about setting up a data source to be one-phase commit in a distributed transaction, refer to the CommitConfirm Table. To use two-phase commit capability to access data on the z/OS machine, define every library in the ATTSRVR JCL as an APF-authorized library.
Note: To define a DSN as APF-authorized, enter the following command in the SDSF screen:

/setprog apf,add,dsn=navroot.library,volume=ac002

where ac002 is the volume where you installed AIS and NAVROOT is the high-level qualifier where AIS is installed. If the AIS installation volume is managed by SMS, enter the following command in the SDSF screen when defining APF authorization:

/setprog apf,add,dsn=navroot.library,SMS

Make sure that the library is APF-authorized, even after an IPL (reboot) of the machine.

Configuration Parameters
The following parameters can be configured for the IMS/TM Application Adapter in Attunity Studio Design perspective, Configuration view, Configuration Properties editor. For information on how to add adapters to Attunity Studio, see Adding Application Adapters.

- cacheLastTpipe: Specifies whether or not to cache the last transaction pipe used.
- cacheXcfConnection: Specifies whether or not to cache the XCF connection information.
- maxSessions: Specifies the maximum number of sessions allowed. The default value is 5.
- racfGroupId: Specifies the RACF facility group identification.
- racfUserId: Specifies the RACF facility user identification.
- tpipePrefix: Specifies the transaction pipe prefix used to associate a transaction with the transaction pipe it is using. The default value is ATTU.
- xcfClient: Specifies the Cross System Coupling Facility client to which the connection belongs.
- xcfGroup: Specifies the Cross System Coupling Facility collection of XCF members to which the connection belongs. A group may consist of up to eight characters, and may span multiple systems.
- xcfImsMember: Specifies the Cross System Coupling Facility group member.
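As with the other adapters, these properties are normally set in Attunity Studio. Purely as an illustrative sketch (the container element names and all values below are assumptions, not a documented syntax), an IMS/TM adapter entry in the binding XML might look like this:

<adapter name="imstm" type="imstm">
    <config maxSessions="5"
            tpipePrefix="ATTU"
            xcfGroup="IMSGRP1"
            xcfImsMember="IMSA"
            xcfClient="ATYCLNT"/>
</adapter>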

Defining the IMS/TM Application Adapter


You define the IMS/TM Application Adapter using the following tasks:

- Defining the IMS/TM Application Adapter Connection
- Configuring the IMS/TM Application Adapter

Defining the IMS/TM Application Adapter Connection


The IMS/TM adapter connection is set using the Design perspective Configuration view in Attunity Studio.

To define the adapter connection:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the IMS/TM adapter.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the IMS/TM adapter.
6. Right-click the Adapters folder and select New Adapter. The New Adapter dialog box is displayed.
7. Enter a name for the adapter in the Name field.

   Note: The word event is a reserved word and cannot be used when naming an adapter.

8. Select IMS/TM from the Type list.
9. Click Finish.

Configuring the IMS/TM Application Adapter


After you define the connection, edit the adapter properties.

To configure the IMS/TM adapter:
1. Open Attunity Studio.
2. Expand the Machines folder.
3. Expand the machine with your IMS/TM adapter.
4. Expand the Bindings folder.
5. Expand the binding with the IMS/TM adapter.
6. Expand the Adapters folder.
7. Right-click the IMS/TM adapter that you want to work with and select Open. The adapter Configuration editor is displayed.

Figure 65-1 IMS/TM Adapter Configuration Properties

8. Configure the adapter parameters as required. For a description of the available parameters, see Configuration Parameters.

Setting Up the IMS/TM Application Metadata


After setting up the Binding, define the Metadata (an Adapter Definition) for the IMS/TM application adapter. The adapter definition describes the program that should be executed via an IMS/TM transaction for each required interaction. If COBOL copybooks describing the procedure input and output structures are available, you can import the metadata using the import utility in the Design perspective Metadata tab in Attunity Studio. If COBOL copybooks that describe the IMS/TM records do not exist, the metadata must be defined manually. For information about defining metadata, see Adapter Metadata General Properties. This section includes the following topic: Importing Attunity Metadata from COBOL

65-4 AIS User Guide and Reference

Importing Attunity Metadata from COBOL


If COBOL copybooks describing the procedure input and output structures are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first six columns are ignored or not), first import the metadata from copybooks with the same settings, and then import the metadata from the other copybooks. The following information is needed during the import:

- COBOL copybooks, which are copied to the machine running Attunity Studio as part of the import procedure.
- The names of the IMS/TM programs to be executed.

To define IMS/TM adapter metadata:
1. In the Configuration view, right-click the adapter and select Edit Metadata. The Metadata tab is displayed with the IMS/TM adapter displayed in the Metadata view.
2. Right-click Imports under the adapter and select New Import.
3. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
4. Select IMS/TM Import Manager as the import type.
5. Click Finish. The Metadata Import Wizard is displayed.
6. Click Add in the Import Wizard to add COBOL copybooks. The Add Resource screen is displayed. In this screen, you select files from the local machine or copy the files from another machine using FTP.

Figure 65-2 Add Resource Screen

7. If the files are on another machine, right-click My FTP Sites and select Add.
8. Set the FTP data connection by entering the server name where the COBOL copybooks reside and, if not using anonymous access, a valid username and password to access the machine.


9. To browse and transfer the files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.

Figure 65-3 Add Resource Screen

10. Select the files to import and click Finish to start the transfer.

The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In such a case, repeat the import process. You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. The selected files are displayed in the Get Input Files screen.


Figure 65-4 Get Input Files Screen

11. Click Next. The Apply Filters screen is displayed.

Figure 65-5 Apply Filters Screen

12. Apply filters to the copybooks, as needed.


The following filters are available:

- COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor.
- Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
- Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
- Prefix nested column: Prefix all nested columns with the previous level heading.
- Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
- Case sensitive: Specifies whether to consider case sensitivity or not.
- Find: Searches for the specified value.
- Replace with: Replaces the value specified in the Find field with the value specified here.

13. Click Next. The Add Interactions screen is displayed.

Figure 65-6 Add Interactions Screen

14. Click Add to add interactions for the IMS/TM adapter. You can change the default name that is specified for the interaction and then specify the mode of the interaction, which can be one of the following:

- sync-receive: A response is expected from the program; that is, the program returns a result.
- sync-send: The program expects an input.
- sync-send-receive: The program expects an input and returns a result. This is the default mode.

You specify an input record used by the program associated with the interaction from the drop-down list in the Input column. This list is generated from the COBOL copybooks specified at the beginning of the procedure. Select a relevant record for the interaction.

Note: You must specify an input record for each interaction before clicking Next. If the interaction does not require an input record (sync-receive mode), the record specified here is ignored.

If the mode is either sync-receive or sync-send-receive, you must also specify an output record used by the program from the drop-down list in the Output column. Select a relevant record for the interaction. Finally, for each interaction, specify the program that you want associated with the interaction.

Note: You must specify a program name for each interaction.

The maxSegmentSize property enables dynamically splitting large messages into smaller segments. By default, the largest segment size is 32763. This value can be modified by setting the maxSegmentSize property. However, the logic of the IMS/TM transaction must correspond to this behavior: the transaction must perform a GU call followed by a series of GN calls in order to assemble the entire input message.

Note: There is no effect on the output message of the transaction, which can be larger than 32K and composed of several segments (the OTMA C/I interface already performs the task of assembling the output segments into a single buffer).
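As a hedged illustration only (the placement of maxSegmentSize as an interaction attribute, and all names below, are assumptions; consult the generated metadata for the authoritative form), an interaction that lowers the segment limit might look like this:

<interaction name="bigInput" mode="sync-send-receive"
             program="BIGPGM" input="BIG_IN" output="BIG_OUT"
             maxSegmentSize="16384"/>

With such a setting, an input message larger than 16384 bytes would be split into several segments, which the IMS/TM transaction must reassemble with a GU call followed by GN calls, as described above.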

The following figure shows an interaction that calls a program called STUD, which expects to receive input and returns a result.


Figure 65-7 Add Interactions Screen

15. Add as many interactions as necessary and then click Next. The Mark First Data Field page is displayed.

Figure 65-8 Mark First Data Field

65-10 AIS User Guide and Reference

16. Select the first data field to be used for inputs and outputs. You can select one of the following:

- Enter a line number for both the First input field and the First output field. The data at that line number is the first data used for the input or output.
- Manually mark the first data field of the transaction following the LL, ZZ, and TRANSNAME fields: When you select this option, select the first line to use for the inputs and outputs from the field below.

17. Click Next. The Import Metadata screen lets you import the metadata to the mainframe machine or leave the generated metadata on the Attunity Studio machine, to be imported later.

Figure 65-9 Import Metadata Screen

18. Select Yes to transfer the metadata to the mainframe machine and click Finish. The metadata is imported to the mainframe machine.

Note: After performing the import, you can view the metadata in the Metadata tab. You can also make any fine adjustments to the metadata and maintain it, as necessary. For more information, see Adapter Metadata General Properties.


66
Legacy Plug Application Adapter
This section includes the following topics:

- Overview
- Configuration Parameters
- Defining the Legacy Plug Application Adapter
- Setting Up Legacy Application Metadata

Overview
The Legacy Plug Application Adapter provides access to legacy applications using interfaces such as XML, JCA, and COM.

Note: The Legacy Plug application adapter enables executing a program via JCA or XML, or, on a Microsoft Windows platform, from a COM-based application, unlike the Procedure Data Source (Application Connector), which provides an SQL front end to the program. If the procedure returns an array, you must use the Legacy Plug application adapter and not the Procedure Data Source.

Configuration Parameters
The following parameters can be configured for the Legacy Plug adapter in the Attunity Studio Design perspective, Configuration view, Configuration Properties editor. For information on how to add adapters to Attunity Studio, see Adding Application Adapters.

- dllName (optional): Specifies the path and name of the DLL. If the DLL information is not specified here, it must be specified for each interaction in the Adapter Definition.
- triggerDllName (optional): Specifies the path and name of the DLL that is used to trigger the adapter. For further details, refer to Configuring a Trigger for the Legacy Plug Adapter.
- triggerSymbolName (optional): Specifies the function in the DLL that triggers the adapter. If a value is not specified, triggerSymbolName defaults to the triggerDllName value. For further details, refer to Configuring a Trigger for the Legacy Plug Adapter.
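Purely as an illustrative sketch (the container element names, the type value, and the file paths below are assumptions; in practice these properties are set in Attunity Studio), a Legacy Plug adapter entry in the binding XML might look like this:

<adapter name="legacyadp" type="legacyPlug">
    <config dllName="C:\legacy\mathsamp.dll"
            triggerDllName="C:\legacy\trigger.dll"
            triggerSymbolName="trigger_sample"/>
</adapter>

The triggerSymbolName value shown here matches the trigger_sample function used in the sample trigger code later in this section.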


Defining the Legacy Plug Application Adapter


The process of defining a Legacy Plug adapter consists of two tasks:

- Defining the Legacy Plug Application Adapter Connection
- Configuring the Legacy Plug Application Adapter

Defining the Legacy Plug Application Adapter Connection


The Legacy Plug adapter connection is set using the Design perspective Configuration view in Attunity Studio.

To define the adapter connection:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the Legacy Plug adapter.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Legacy Plug adapter.
6. Right-click the Adapters folder and select New Adapter. The New Adapter dialog box is displayed.
7. Enter a name for the adapter in the Name field.

   Note: The word event is a reserved word and cannot be used when naming an adapter.

8. Select Legacy Plug from the Type list.
9. Click Finish.

Configuring the Legacy Plug Application Adapter


After defining the Legacy Plug adapter connection, you can edit the adapter properties.

To configure the Legacy Plug adapter:
1. Open Attunity Studio.
2. Expand the Machines folder.
3. Expand the machine with your Legacy Plug adapter.
4. Expand the Bindings folder.
5. Expand the binding with the Legacy Plug adapter.
6. Expand the Adapters folder.
7. Right-click the Legacy Plug adapter that you want to work with and select Open. The adapter Configuration editor is displayed.

Figure 66-1 Legacy Plug Adapter Configuration Properties

8. Configure the adapter parameters as required. For a description of the available parameters, see Configuration Parameters.
9. Click Finish.

Setting Up Legacy Application Metadata


After you define the connection and configure the adapter, define the Metadata (an Adapter Definition) for the Legacy Plug application adapter. The adapter definition describes the program to be executed for each required interaction. You can import metadata for the adapter from PCML (Program Call Markup Language) files, generated by an iSeries RPG compiler, using the import utility in the Design perspective Metadata tab in Attunity Studio. If PCML files that describe the input and output structures do not exist, the metadata must be defined manually. For information about defining the metadata definition, see Adapter Metadata General Properties. This section includes the following topics:

- Importing Attunity Metadata from PCML Files
- Defining Interactions and Records
- Configuring a Trigger for the Legacy Plug Adapter

Importing Attunity Metadata from PCML Files


During the import you will need the PCML files. These files are copied to the machine running Attunity Studio as part of the import procedure.


To import Legacy Plug adapter metadata:
1. In the Configuration view, right-click the adapter and select Edit Metadata. The Metadata tab is displayed with the Legacy Plug adapter displayed in the Metadata view.
2. Right-click Imports under the adapter and select New Import.
3. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
4. Select Legacy Plug Import Manager Using PCML Files as the import type.
5. Click Finish. The Metadata Import Wizard is displayed.
6. Click Add in the Import Wizard to specify the PCML files. The Add Resource screen is displayed. In this screen you select files from the local machine or copy the files from another machine using FTP.

Figure 66-2 Add Resource Screen

7. If the files are on another machine, right-click My FTP Sites and select Add.
8. Set the FTP data connection by entering the server name where the PCML files reside and, if not using anonymous access, a valid username and password to access the machine.
9. To browse and transfer the files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
10. Select the files to import and click Finish to start the transfer.

Note: Importing metadata can be done as often as necessary.

The selected file is displayed in the Get Input Files step.


Figure 66-3 Get Input Files

11. Click Next. The Apply Filters screen is displayed. You do not have to do anything in this step.
12. Click Next. The source files are analyzed. When this is done, the Select Interactions step is displayed.


Figure 66-4 Select Interactions

13. Select the interactions for which you want to import metadata, and click Next. If you are importing data types that are not supported by PCML (Date, Time, Timestamp, Pointer, Procedure Pointer, 1-byte Integer, or 8-byte Unsigned Integer), a message indicating an error in the PCML file is displayed. This error message can be ignored, since AIS deals with these data types independently of PCML when they are extracted from the PCML file. The Import Metadata window lets you import the metadata to the computer where the application adapter is defined or leave the generated metadata on the Attunity Studio machine, to be imported later.
14. Click Next to display the Import Metadata step.


Figure 66-5 Import Metadata

15. Select Yes to import the metadata now, or select No to leave the generated metadata on the Attunity Studio machine, and click Finish. The Legacy Plug Adapter Configuration Properties editor opens. New interactions and schema records are added to the Metadata view.

Defining Interactions and Records


This section describes how to configure the interactions and records in the Legacy Plug metadata.

Defining Interaction Properties


You define the interaction properties in the Design perspective Metadata view in Attunity Studio.

To define the interaction properties:
1. Finish Importing Attunity Metadata from PCML Files, or open Attunity Studio.
2. In the Design perspective Metadata view, expand the Interactions folder for the Legacy Plug adapter with the metadata you are working with.
3. Right-click the interaction you want to define and select Refresh.
4. Right-click the interaction and select Open. The Interactions editor for the selected interaction is displayed.

Legacy Plug Application Adapter 66-7

Figure 66-6 Interaction Editor

5. Enter the properties for the interaction as follows:

- dllName (optional): The path and name of the DLL, or of a program when the value of symbolName is @main (in that case the value of dllName is a program name, including the full path). If the information is not specified here, it must be specified in the binding, as described in Defining the Legacy Plug Application Adapter.
- symbolName (optional): The function in the program to be executed. If a value is not specified, symbolName defaults to the interaction name. Working with multiple entry points requires that aliases be constructed for any entry point that is not the main entry point of the program.

A hedged interaction sketch follows the note below.

Note: For z/OS systems, an alias is specified for a load module within the load library (or PDS). Each new alias creates an additional entry in the load library directory pointing to an additional entry point within a load module. A load module can have many aliases, each pointing to a different entry point within that module. An alias is established during the link-edit step of the build of the module. For example, using the MATHSAMP sample supplied with AIS, the link-edit control cards can be set up as follows:

//SYSLIN DD *
  INCLUDE SYSLIB(MATHSAMP)
  ALIAS MATHSTRU
  ALIAS MATHASTR
  ALIAS MATHVARI
  ALIAS MATHINOU
  ALIAS MATHSTRM
  ALIAS MATHEOSV
  ENTRY MATHSIMP
  NAME MATHSIMP(R)
/*

There are seven entries in the load library directory (one main entry name and six aliases), corresponding to each of the seven entry points in the load module.
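Putting the two interaction properties together, the following is a hedged sketch (not a verbatim sample from the product) of an interaction definition that invokes the MATHSTRU alias from the note above. The attribute placement and the input and output record names are illustrative assumptions:

<interaction name="MATHSTRU" mode="sync-send-receive"
             dllName="NAVROOT.LOAD(MATHSAMP)" symbolName="MATHSTRU"
             input="MATH_IN" output="MATH_OUT"/>

Because symbolName here matches the interaction name, it could also be omitted, in which case it would default to MATHSTRU.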

Defining Schema Records


You define the schema record properties in the Design perspective Metadata view in Attunity Studio.

To define the schema record properties:
1. Finish Importing Attunity Metadata from PCML Files, or open Attunity Studio.
2. In the Design perspective Metadata view, expand the Schema for the Legacy Plug adapter with the metadata you are working with.
3. Right-click the record you want to define and select Refresh.
4. Right-click the record and select Open to open the Schema Record editor and define any parameters.

Legacy Plug Application Adapter 66-9

Figure 66-7 Schema Record Editor

The schema definitions define the input and output structure fields and records for each interaction. The field and record parameters contain information regarding the method by which a field is passed to or received from the procedure (using the mechanism attribute), and the argument number of the procedure (using the paramNum attribute).

5. Enter the paramNum for the record or any field in the record, as relevant. The paramNum attribute can be specified as 0 to indicate a return value, 1 to indicate the first argument, and so on. If paramNum is specified at the record level, it cannot be specified for any of the record members (at the field level).
6. Enter the mechanism by editing the source XML for the adapter. The mechanism is the method by which the field is passed to or received from the procedure. The mechanism attribute can be set to byValue or byReference. For outer-level (non-nested) parameters, structure parameters (for the structure itself, not structure members), and variant parameters, the default value is byReference. When a parameter is used for both input and output, the mechanism must be the same for both the input and the output. For example:

<schema version="1.0">
    <record name="MATH_INOUT_OUT">
        <field name="SUM1" type="int" paramNum="4"/>
        <field name="SUBTRACT" type="int" paramNum="1"/>
        <field name="MULTIPLY" type="int" paramNum="2"/>
        <field name="DIVIDE" type="int" paramNum="3"/>
    </record>
    <record name="MATH_RETURN_OUT">
        <field name="SUM1" type="int" paramNum="0" mechanism="byValue"/>
        <field name="SUBTRACT" type="int" paramNum="1"/>
        <field name="MULTIPLY" type="int" paramNum="2"/>
        <field name="DIVIDE" type="int" paramNum="3"/>
    </record>
</schema>

Configuring a Trigger for the Legacy Plug Adapter


To configure a trigger for the Legacy Plug adapter, specify the triggerDllName and, optionally, the triggerSymbolName in the configuration properties for the adapter, as described in Defining the Legacy Plug Application Adapter. The code for the trigger itself must include the gap.h header file (located in the NAVROOT\include directory, where NAVROOT is the directory where the Attunity server is installed) and match the following prototype:

/************* Legacy Plug Triggers ***************/
typedef enum {
    GAP_LP_TRIGGER_EVENT_CONNECT_ = 0,
    GAP_LP_TRIGGER_EVENT_DISCONNECT_ = 1,
    GAP_LP_TRIGGER_EVENT_TRANSACTION_START_ = 2,
    GAP_LP_TRIGGER_EVENT_TRANSACTION_COMMIT_ = 3,
    GAP_LP_TRIGGER_EVENT_TRANSACTION_ROLLBACK_ = 4
#ifdef _VAR_ENUM_MACHINE
    ,GAP_LP_TRIGGER_LAST_VALUE = 0x7FFFFFFF
#endif
} GAP_LP_TRIGGER_EVENT;

typedef struct {
    void *pPrivate;
    GAP_STATUS eStatus;
    char szException[132+1];
    char szExceptionText[256+1];
} GAP_LP_TRIGGER_CONTEXT;

/* Some input params passed by reference for easy COBOL implementation */
typedef GAP_STATUS (*GAP_FNC_LEGACY_PLUG_TRIGGER) (
    /* in/out */ GAP_LP_TRIGGER_CONTEXT *pContext,
    /* in */     GAP_LP_TRIGGER_EVENT   *peEvent);

where:

- pContext: Provides a way for the trigger to keep some context. For example, if the connect trigger establishes a connection with a back-end application, it may want to keep a handle to that connection to be used with the other triggers. The pPrivate pointer in the pContext structure is used to keep that context. The context structure is allocated by Attunity Connect and passed to the trigger with every call. The pPrivate pointer can be used by the trigger for any purpose. The other exception fields can be used for error reporting.
- peEvent: A pointer to an enumeration that specifies what event occurred. This is passed by reference to make it easier for a COBOL implementation.

Note that the szException text must be one of the following exceptions to allow the client application to behave correctly:

- server.internalError
- client.requestError
- client.xmlError
- client.noActiveConnection
- server.resourceLimit
- server.redirect
- client.noSuchResource
- client.authenticationError
- client.noSuchInteraction
- client.noSuchConnection
- server.notImplemented
- server.xaProtocolError
- server.xaUnknownXID
- server.xaDuplicateXID
- server.xaInvalidArgument
- client.autogenRejected
- server.xaTransactionTooFresh
- server.resourceNotAvailable
- client.authorizationError
- server.configurationError

Further information about these exceptions is documented in the Attunity Developer SDK.

Sample

In the following sample, the trigger allows the math_all_structs function to pass back an error condition with a text message when the second input parameter is zero (which would cause an error when trying to divide the value specified as the first input parameter by zero).
#ifdef WIN32
# define EXPORT_SYMBOL __declspec(dllexport)
#else
# define EXPORT_SYMBOL
#endif

#include <stdio.h>
#include <string.h>
#include "gap.h"

/* Global variable used to keep the context in case of an exception */
static GAP_LP_TRIGGER_CONTEXT *pGlobalContext = NULL;

typedef struct {
    int oper1;
    int oper2;
} MATH_IN_STRUCTURE;

typedef struct {
    int sum;
    int subtract;
    int multiply;
    int divide;
} MATH_STRUCTURE;

EXPORT_SYMBOL GAP_STATUS trigger_sample(GAP_LP_TRIGGER_CONTEXT *pContext,
                                        GAP_LP_TRIGGER_EVENT *peEvent)
{
    /* On every connect the context changes, so keep it globally */
    if (*peEvent == GAP_LP_TRIGGER_EVENT_CONNECT_)
        pGlobalContext = pContext;
    return GAP_STATUS_OK;
}

EXPORT_SYMBOL void math_all_structs(MATH_STRUCTURE *m_struct,
                                    MATH_IN_STRUCTURE *m_in)
{
    /* This shows how an error condition is reported */
    if (m_in->oper2 == 0) {
        pGlobalContext->eStatus = GAP_STATUS_ERROR;
        strcpy(pGlobalContext->szException, "client.requestError");
        strcpy(pGlobalContext->szExceptionText, "division by zero error");
        return;
    }
    m_struct->sum = m_in->oper1 + m_in->oper2;
    m_struct->subtract = m_in->oper1 - m_in->oper2;
    m_struct->multiply = m_in->oper1 * m_in->oper2;
    m_struct->divide = m_in->oper1 / m_in->oper2;
}


67
Pathway Application Adapter (HP NonStop Only)
This section includes the following topics:

- Overview
- Transaction Support
- Pathway Adapter Configuration Parameters
- Defining the Pathway Application Adapter
- Setting Up Pathway Application Metadata

Overview
The Pathway environment is used to manage online Transaction processing applications. You can execute a program via Pathway using the Pathway application adapter. The application adapter supports TMF transactions, enabling a connection between Enscribe, SQL/MP, and Pathway in a single transaction.

Note: If a TMF transaction is started before a Pathway server is activated, all changes made by that server are also part of the TMF transaction.

Supported Versions and Platforms


Pathway adapters can be used HP NonStop platforms only. For information on which HP NonStop versions are supported, see Attunity Integration Suite Supported Systems and Resources.

Transaction Support
An ACX request can explicitly specify transaction tokens (transactionStart, transactionCommit, transactionRollback) that cause a TMF Transaction to be started, committed or aborted. All Pathway servers activated after a transaction is started run under that TMF transaction. This enables full control over the transaction boundary. By using persistent connections you can also have the TMF transaction span more than one ACX request.


Note: The ACX request can be generated from any supported front-end application: JCA, COM, or an XML file using the ACX protocol. For more information, see Implementing an Application Access Solution.
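Schematically, a request that brackets two interactions in a single TMF transaction would carry the tokens around the interaction invocations. The sketch below is illustrative only: the execute element and the interaction names are placeholders, and the actual ACX document syntax is described in Implementing an Application Access Solution:

<acx>
    <transactionStart/>
    <execute interaction="debitAccount">...</execute>
    <execute interaction="creditAccount">...</execute>
    <transactionCommit/>
</acx>

Both interactions then run under one TMF transaction, which is committed only when the transactionCommit token is processed.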

If no transaction tokens are provided in the ACX request, the ACX dispatcher assumes you are working in autocommit mode. In this case, the dispatcher calls the adapter to start a transaction before the first interaction is run and commits the transaction after the last interaction is run. Thus, each ACX request is a single TMF transaction, whether it is a single-interaction request or a batch request with several interactions in it.

If you do not want a TMF transaction to be started by AIS (for example, with Pathway servers that start and commit their own transactions), then transaction support should be disabled in the adapter configuration.

TMF transactions are coordinated across the server, so that you can use the same server to run a Pathway server as well as access SQL/MP and Enscribe audited files directly from Attunity Connect, all within the same TMF transaction. The dispatcher rolls back a TMF transaction if the Pathway server returns SERVERCLASS_SEND with a non-zero status.
Note: An error condition that should cause a rollback of the TMF transaction should be handled by the Pathway server returning the actual error.

Pathway Adapter Configuration Parameters


The following parameters can be configured for the Pathway adapter in the Attunity Studio Design perspective, Configuration view, Configuration Properties editor. For information on how to add adapters to Attunity Studio, see Adding Application Adapters.

- pathmonProcess: Specifies the Pathway monitor process name. This parameter must be specified for each interaction in the transaction component of the adapter definition.
- transaction: Specifies whether or not a TMF transaction is started when activating a Pathway interaction, either explicitly by the user in the ACX XML request document or implicitly by the server using the autocommit feature. When transaction is set to false, a TMF transaction is not started, even if the user explicitly specifies a startTransaction statement in the ACX XML request document. This is useful when accessing Pathway servers that start their own TMF transactions.
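Purely as an illustrative sketch (the container element names, the type value, and the process name below are assumptions; in practice these properties are set in Attunity Studio), a Pathway adapter entry in the binding XML might look like this:

<adapter name="pathadp" type="pathway">
    <config pathmonProcess="$PMON" transaction="true"/>
</adapter>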

Defining the Pathway Application Adapter


You define the Pathway adapter using the following tasks:

- Defining the Pathway Application Adapter Connection
- Configuring the Pathway Application Adapter


Defining the Pathway Application Adapter Connection


The Pathway adapter connection is set using the Design perspective Configuration view in Attunity Studio.
To define the adapter connection
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the Pathway adapter.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Pathway adapter.
6. Right-click the Adapters folder and select New Adapter. The New Adapter dialog box is displayed.
7. Enter a name for the adapter in the Name field.
Note: The word event is a reserved word and cannot be used when naming an application adapter.
8. Select Pathway from the Type list.
9. Click Finish.

Configuring the Pathway Application Adapter


After setting the binding, you can edit the adapter properties.
To configure the Pathway adapter
1. Open Attunity Studio.
2. Expand the Machines folder.
3. Expand the machine with your Pathway adapter.
4. Expand the Bindings folder.
5. Expand the binding with the Pathway adapter.
6. Expand the Adapters folder.
7. Right-click the Pathway adapter that you want to work with and select Open. The adapter Configuration editor is displayed.
Figure 67-1 Pathway Adapter Configuration Properties
8. Configure the adapter parameters as required. For a description of the available parameters, see Pathway Adapter Configuration Parameters.
9. Click Finish.

Setting Up Pathway Application Metadata


After setting up the binding, define the metadata (an adapter definition) for the Pathway application adapter. The adapter definition describes the program that should be executed via a Pathway transaction for each required interaction. If COBOL copybooks describing the procedure input and output structures are available, you can import the metadata using the import utility in the Design perspective Metadata tab in Attunity Studio. If no COBOL copybooks describing the Pathway records exist, the metadata must be defined manually. For more information, see Adapter Metadata General Properties.
Note: The Pathway application adapter utilizes the Pathway PATHSEND interface, enabling the Pathway environment to handle non-screen COBOL clients.

This section includes the following topic: Importing Attunity Metadata from COBOL


Importing Attunity Metadata from COBOL


If COBOL copybooks describing the procedure input and output structures are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. If the metadata is provided in a number of COBOL copybooks with different filter settings (such as whether the first 6 columns are ignored or not), first import the metadata from copybooks with the same settings, and then import the metadata from the other copybooks. During the import you will need the COBOL copybooks, which are copied to the machine running Attunity Studio as part of the import procedure.
To define Pathway adapter metadata
1. In the Configuration view, right-click the data source and select Edit Metadata. The Metadata tab is displayed with the Pathway adapter displayed in the Metadata view.
2. Right-click Imports under the data source and select the New Import menu option.
3. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
4. Select Cobol Import Manager as the import type.
5. Click Finish. The Metadata Import Wizard is displayed.
6. Click Add in the Import Wizard to add COBOL copybooks. The Add Resource screen is displayed, providing the option of selecting files from the local machine or copying the files from another machine using FTP. This figure shows the Add Resource screen.

Figure 67-2 Add Resource Screen

7. If the files are on another machine, right-click My FTP Sites and select Add.
8. Set the FTP data connection by entering the server name where the COBOL copybooks reside and, if not using anonymous access, enter a valid username and password to access the machine.


9. To browse and transfer files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory. This figure shows the Add Resource screen detailing the FTP sites.

Figure 67-3 Add Resource Screen

10. Select the files to import and click Finish to start the transfer.

The format of the COBOL copybooks must be the same. For example, you cannot import a COBOL copybook that uses the first six columns together with a COBOL copybook that ignores the first six columns. In such a case, repeat the import process. You can import the metadata from one COBOL copybook and later add to this metadata by repeating the import using different COBOL copybooks. The selected files are displayed in the Get Input Files screen.


Figure 67-4 Get Input Files Screen

When importing using the VSAM Under Pathway Import Manager there is an additional step (described later in this procedure), and hence the step shown in the top right of the screen is step 1 of 6.
11. Click Next. The Apply Filters screen is displayed.


Figure 67-5 Apply Filters Screen

12. Apply filters to the copybooks, as needed.

The following filters are available:

- COMP_6 switch: The MicroFocus COMP-6 compiler directive. Specify either COMP-6'1' to treat COMP-6 as a COMP data type or COMP-6'2' to treat COMP-6 as a COMP-3 data type.
- Compiler source: The compiler vendor.
- Storage mode: The MicroFocus Integer Storage Mode. Specify either NOIBMCOMP for byte storage mode or IBMCOMP for word storage mode.
- Ignore after column 72: Ignore columns 73 to 80 in the COBOL copybooks.
- Ignore first 6 columns: Ignore the first six columns in the COBOL copybooks.
- Prefix nested column: Prefix all nested columns with the previous level heading.
- Replace hyphens (-) in record and field names with underscores (_): A hyphen, which is an invalid character in Attunity metadata, is replaced with an underscore.
- Case sensitive: Specifies whether to consider case sensitivity or not.
- Find: Searches for the specified value.
- Replace with: Replaces the value specified in the Find field with the value specified here.

13. Click Next. The Add Interactions screen is displayed.


Figure 67-6 Add Interactions Screen

14. Click Add to add interactions for the Pathway adapter. You can change the default name that is specified for the interaction and then specify the mode of the interaction, which can be one of the following:
- sync-receive: A response is expected from the program. That is, the program returns a result.
- sync-send: The program expects an input.
- sync-send-receive: The program expects an input and returns a result. This is the default mode.
You specify an input record used by the program associated with the interaction from the drop-down list in the Input column. This list is generated from the COBOL programs specified at the beginning of the procedure. Select a relevant record for the interaction.
Note: You must specify an input record for each interaction before clicking Next. If the interaction does not require an input record (sync-receive mode), the record specified here is ignored.

If the mode is either sync-receive or sync-send-receive, you must also specify an output record used by the program from the drop down list in the Output column. Select a relevant record for the interaction.
15. Add as many interactions as necessary and then click Next.

The Import Metadata screen lets you import the metadata to the machine where the application adapter is defined or leave the generated metadata on the Attunity Studio machine, to be imported later.

Figure 67-7 Import Metadata Screen

16. Select Yes to transfer the metadata to the machine where the application adapter is defined and click Finish. The metadata is imported to the machine.


17. Right-click the Interactions folder for the adapter in the Metadata tab and select Refresh.
18. Right-click each interaction and select View Interaction. The Adapter Metadata General Properties editor for that interaction opens.


19. Define the properties for the interaction.

- pathmonProcess: (Optional) The Pathway monitor process name. This is a system process that manages all Pathway objects, including the SERVER objects that the requester program activates. If the name is not specified here, it must be specified in the binding configuration (via the pathmonProcess parameter), as described in Pathway Adapter Configuration Parameters.
Note: The Pathway application adapter uses the PATHSEND interface. Pathway provides two ways to activate servers: the Screen COBOL SEND command and the PATHSEND interface, which is available to non-Screen COBOL clients.
- serverClass: Specifies a single server class that provides several services (such as insert, read, and delete services).
Note: The interaction serverClass value is often the destination of the Pathway SEND command.


The record statements for both the input and output must not be aligned. Alignment is set in the source XML. An example record definition for a Pathway adapter is similar to the following:

<record name="read_request" noAlignment="true">
   <field ... />
   ...
</record>
<record name="read_reply" noAlignment="true">
   <field ... />
   ...
</record>

where:
noAlignment: A Boolean value used to determine whether or not buffers are aligned.
Note: noAlignment is essential for Pathway, as HP NonStop COBOL does not expect aligned structures. By default, all buffers constructed are aligned.

Structure initializations prior to the SEND command are often values that can be used for the default attribute in a field statement.
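For example, if the COBOL requester moves a literal operation code into the request buffer before issuing the SEND, that literal can be expressed as a default. The field names, sizes, and values below are illustrative only:

<record name="read_request" noAlignment="true">
   <!-- default captures the value the client would MOVE before the SEND -->
   <field name="op_code" type="string" size="2" default="RD"/>
   <field name="cust_id" type="string" size="8"/>
</record>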


68
Tuxedo Application Adapter (UNIX and Windows Only)
This section includes the following topics:

- Overview of the Tuxedo Application Adapter
- Configuration Properties
- Metadata
- Transaction Support
- Data Types
- Security
- Checking the Tuxedo Environment Variables
- Defining the Tuxedo Adapters
- Setting Up Tuxedo Application Adapter Interactions
- Testing Tuxedo Application Interactions

Overview of the Tuxedo Application Adapter


The Tuxedo Application Adapter is used to execute BEA Tuxedo services, store messages in queues, and post events. There are two types of Tuxedo adapters:
- Attunity Tuxedo Application Adapter: This adapter supports inbound transactions. It provides direct access to the platform where Tuxedo runs.
- Attunity Tuxedo Queue: This adapter supports outbound transactions by pulling messages from a queue.

Supported Versions and Platforms


The Tuxedo adapters can be used with UNIX and Windows platforms only. For information on supported Tuxedo versions, see Attunity Integration Suite Supported Systems and Resources.

Feature Highlights
The Tuxedo adapter interacts with Tuxedo using the Tuxedo ATMI (Application-to-Transaction Monitor Interface) procedure library. In addition, the adapter uses the Tuxedo System Field Manipulation Language (FML) to support related data transfer needs.


Configuration Properties
The following parameters can be configured for the Tuxedo Application Adapter and the Tuxedo Queue adapter in Attunity Studio, in the Properties tab of the Configuration Properties screen:
- applicationPassword: The application password, in unencrypted format, that is used for validation against the application password (mapped to the Tuxedo TPINIT password).
- clientName: The client name. The semantics are application defined (mapped to the Tuxedo TPINIT cltname).
The following parameters can be configured for the Tuxedo Queue adapter only (a hedged configuration sketch follows this list):
- queueName: The name of the Tuxedo queue. Queue spaces are used for mapping requests in the Tuxedo queue.
- queueRetryInterval: When sending an interaction (for example, get time), this is the amount of time that the Tuxedo Queue adapter waits for an answer. If no answer is received in this interval, the adapter sends the interaction again, and continues to send it at the defined interval until an answer is received.
- queueSpaceName: The name of the Tuxedo queue space.
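As an illustration only (the XML shape and the adapter type string are assumptions, and the queue names are placeholders), a Tuxedo Queue adapter entry in the binding might look like the following:

<adapter name="tuxqueue" type="tuxedoQueue">
   <!-- queue space, queue within that space, and resend interval -->
   <config queueSpaceName="QSPACE" queueName="EVENT_Q" queueRetryInterval="10"/>
</adapter>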

Metadata
After setting up the binding, define the metadata (an adapter definition) for the Tuxedo adapter. The adapter definition describes the program that should be executed via a Tuxedo transaction for each required interaction. If either a BEA Jolt bulk loader file or Tuxedo configuration and FML/VIEW source files describing the adapter are available, you can import the adapter definition by running the metadata import in the Design Perspective Metadata view in Attunity Studio.
Note:

Jolt files are files where definitions of Tuxedo services are bulked into one file along with the metadata using the BEA Jolt Bulk Loader. FML and VIEW files are metadata files used by Tuxedo, often in conjunction with a configuration file that includes the Tuxedo services.

If BEA Jolt bulk loader files, Tuxedo configuration and FML/VIEW source files do not exist that describe the input and output structures, the metadata must be manually defined. For details on the metadata definition, see:

- Adapter Metadata General Properties
- Setting Up Tuxedo Application Adapter Interactions

Transaction Support
The Tuxedo adapter supports one-phase commit transactions. You can limit the transaction duration before timing out at the adapter level.


Data Types
The Tuxedo adapter provides support for the following Tuxedo data types as Input/Output:

- STRING: Null-terminated character array.
- CARRAY: Array of uninterpreted arbitrary binary data.
- XML: XML-formatted data.
- VIEW: C structure layout.
- VIEW32: C structure layout with 32-bit FML identifiers.
- FML: A Tuxedo system type that provides transparent data portability.
- FML32: The FML type where 32-bit FML identifiers are used.
Note:

Synonyms for the above list (such as X_C_TYPE, X_OCTET) are also recognized.

Security
The Attunity Tuxedo adapter works with the BEA Tuxedo security parameters. See Configuration Properties for a list of these parameters. For more information on using security in Tuxedo, see the BEA Tuxedo documentation.

Checking the Tuxedo Environment Variables


You need to check that the following Tuxedo environment variables are correctly set:

- TUXDIR is set to the Tuxedo root directory.
- WSNADDR is set to the Tuxedo Workstation network address.
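For example, with illustrative values (the directory and address shown here are placeholders for your installation):
TUXDIR = /disk2/users/tuxedo/tuxedo8.0
WSNADDR = //tuxhost:65535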

To verify the Tuxedo environment variables
On UNIX platforms, verify that the shared library environment variable includes the path to the Tuxedo bin directory, as in the following example:
LD_LIBRARY_PATH = /disk2/users/tuxedo/tuxedo8.0/bin

Instead of LD_LIBRARY_PATH, set LIBPATH on IBM AIX and SHLIB_PATH on HP-UX.

On Windows platforms, verify that the PATH environment variable includes the path to the Tuxedo bin directory, as in the following example:
PATH=C:\tuxedo\tuxedo8.0\bin

Defining the Tuxedo Adapters


This section describes how to define the Tuxedo adapters. It contains the following topics:

- Defining the Tuxedo Application Adapter
- Defining the Tuxedo Queue Adapter


Defining the Tuxedo Application Adapter


The Tuxedo adapter is set using the Design perspective Configuration view in Attunity Studio.
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the Tuxedo adapter.
Note: If the machine is not displayed in Attunity Studio, see Setting up Machines for information on how to add machines to the Configuration view.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Tuxedo adapter.
6. Right-click the Adapters folder and select New Adapter.
7. Enter a name for the adapter in the Name field.
8. Select Tuxedo from the Type list.
9. Click Finish.

When you finish defining the connection, you must set up the interactions. For information on how to set up Tuxedo adapter interactions, see Setting Up Tuxedo Application Adapter Interactions.

Defining the Tuxedo Queue Adapter


The Tuxedo Queue adapter is set using the Design perspective Configuration view in Attunity Studio.
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine where you want to add the Tuxedo Queue adapter.
Note: If the machine is not displayed in Attunity Studio, see Setting up Machines for information on how to add machines to the Configuration view.
4. Expand the Bindings folder.
5. Expand the binding where you want to add the Tuxedo Queue adapter.
6. Right-click the Events folder and select New Event.
7. Enter a name for the adapter in the Name field.
8. Select Tuxedo Queue from the Type list.
9. Click Finish.

When you finish defining the connection, you must set up the interactions. For information on how to set up Tuxedo adapter interactions, see Setting up Tuxedo Queue Adapter Interactions.


Setting Up Tuxedo Application Adapter Interactions


This section includes the following topics:

- Importing Metadata Using a BEA Jolt Bulk Loader File
- Importing Metadata Using FML/VIEW Files

Importing Metadata Using a BEA Jolt Bulk Loader File


During the import you will need the BEA Jolt bulk loader file or Tuxedo configuration and FML/VIEW source files. These files are copied to the computer running Attunity Studio as part of the import procedure.
To define Tuxedo adapter metadata using a BEA Jolt bulk loader file
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with your Tuxedo adapter.
3. Expand the Bindings folder and then expand the binding with the Tuxedo adapter you are working with.
4. Expand the Adapters folder.
5. Right-click your Tuxedo application adapter and select Show Metadata View. The Metadata view is displayed with the Tuxedo adapter selected.
6. Right-click Imports and select New Import.
7. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
8. Select Tuxedo Import Manager Using Jolt Bulk Loader File as the import type.
9. Click Finish. The Metadata Import Wizard Get Input File step is displayed.

10. Click Add in the Import Wizard to add a Jolt bulk loader file.

The Add Resources screen is displayed, as shown in the following figure:


Figure 68-1 Add Resources Screen

11. If the files are on another machine, then right-click My FTP Sites and select Add. The Add FTP Site screen is displayed.
Figure 68-2 Add FTP Site Screen



12. Enter the server name where the Jolt bulk loader file resides, and if not using anonymous access, enter a valid username and password to access the machine. The user name provided is used as the high-level qualifier.
13. Click OK.
14. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.


15. Select the files to import and click Finish to start the transfer. The Select screen is displayed with the list of transferred files in the right-hand pane.


16. Click Finish.

The selected files are displayed in the Get Input File step, as shown in the following figure:


Figure 68-3 Get Input File Screen

17. Click Next.

The Applying filters step is displayed. There is nothing to add in this step.
18. Click Next. The import wizard analyzes and converts the source files. When this is complete, the Select Interactions step is displayed, as shown in the following figure:


Figure 68-4 Select Interactions Screen

The window lists the Services from the input Jolt file as interactions, along with the input and output structures to use with each interaction.
19. Select the interactions you want to implement from the list and click Next.

The Configure Tuxedo Records step is displayed, as shown in the following figure:


Figure 68-5 Configure Tuxedo Records Screen

20. Define how the Tuxedo records should be configured and click Next to generate the metadata. The Import Metadata screen lets you import the metadata to the target computer or leave the generated metadata on the computer running Attunity Studio, to be imported later.


Figure 68-6 Import Metadata Screen

21. Specify that you want to transfer the metadata to the remote computer and click Finish. The metadata is imported to the target machine.


Note:

After importing the metadata, you can access the metadata in the Design perspective, Metadata view. You can make any fine adjustments to the metadata and maintain it, as necessary. For more information, see Working with Application Adapter Metadata.

Importing Metadata Using FML/VIEW Files


This section describes the procedure required when using FML/VIEW files to generate the metadata for the Tuxedo adapter.
To define Tuxedo adapter metadata using FML/VIEW files
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with your Tuxedo adapter.
3. Expand the Bindings folder and then expand the binding with the Tuxedo adapter you are working with.
4. Expand the Adapters folder.
5. Right-click your Tuxedo application adapter and select Show Metadata View. The Metadata view is displayed with the Tuxedo adapter selected.
6. Right-click Imports and select New Import.


7. Enter a name for the import. The name can contain letters, numbers, and the underscore character.
8. Select Tuxedo Import Manager Using VIEW Files as the import type.
9. Click Finish. The Metadata Import Wizard Get Input Files step is displayed.
10. Click Add next to one or more of the sections in the Get Input Files step of the Import Wizard to add the configuration files. You can add the following types of files:
- Select Tuxedo Records Definition FML files
- Select Tuxedo Records Definition VIEW files
- Select Tuxedo Configuration files
Configuration files are used to get the information necessary for starting application servers and initializing the bulletin boards in an orderly sequence in Tuxedo. Configuration files have a number of sections, including a SERVICES section, which provides information on services used by the application, from which interactions are generated during the import procedure. The FML files contain metadata used by Tuxedo and are used to provide the data structures for the interactions.
The Add Resource window is displayed. In this window you can select files from the local machine or copy the files from another machine using FTP.
Figure 68-7 Add Resource Screen

11. If the files are on another machine, then right-click My FTP Sites and select Add.
12. Set the FTP data connection by entering the server name where the FML/VIEW files reside and, if not using anonymous access, enter a valid username and password to access the machine.
13. To browse and transfer files required to generate the metadata, access the machine using the username as the high-level qualifier. After accessing the machine, you can change the high-level qualifier by right-clicking the machine and selecting Change Root Directory.
14. Select the files to import and click Finish to start the transfer.


The selected files are displayed in the Get Input File step, as shown in the following figure:
Figure 68-8 Get Input Files

15. Click Next. The Add FML Records step is displayed. Carry out the procedures in this step only if you are using FML records.


Figure 68-9 Add FML Records

Provide the following information in this step:
- Create records: Click Add to create a new record, and enter a name for the record. You add fields from the FML fields selected in the previous step. After you add the record, it is added to the Records list. When you click the record, the fields you added to it are displayed in the Selected Fields list in the top right section of this step. If you have a long list of records, use the field at the top of the Records list to filter the records.
- Add fields to the record: Select a record from the Records list. You can add fields to the record by selecting one or more fields in the Available Fields list and adding them to the Selected Fields list. Use the triangle buttons on the screen to add fields to the Selected Fields list or to remove them. Use the double-triangle buttons to add all of the fields in the Available Fields list to the Selected Fields list or to remove all fields from the Selected Fields list. If you have a long list of fields, use the field at the top of the Available Fields list to filter them.
- View the fields for a record: Select a record from the Records list. The fields added to that record are displayed in the Selected Fields list. If you have a long list of fields, use the field at the top of the Selected Fields list to filter them.

16. Click Next. The Get Tuxedo Records step is displayed.


Figure 68-10 Get Tuxedo Records

Add additional simple record definitions, if necessary. These records are stored in the following Tuxedo buffer types:
- XML data
- String data
- Carrays

17. Enter the name of the record and select the buffer type from the Field list.
Unstructured message buffers are wrapped within a record as follows (see the sketch after this list):
- A message buffer of type STRING is wrapped within a record containing a single field of type string.
- A message buffer of type CARRAY is wrapped within a record containing a single field of type binary with a fixed size.
- A message buffer of type XML is wrapped within a record containing a single field of type XML.
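For example, a STRING buffer could be wrapped as follows; the record and field names, and the exact attribute names, are illustrative assumptions:

<record name="string_msg">
   <!-- a single string field holds the entire STRING buffer -->
   <field name="data" type="string" size="1024"/>
</record>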

18. Click Next.

The Add Interactions step is displayed, as shown in the following figure:


Figure 68-11 Add Interactions

19. Enter the adapter interactions:

- Name: The name of the interaction. If a configuration file was included as one of the input files for the import, the name is selected from the drop-down list, which is generated based on the services specified in the configuration file.
- Mode: The interaction mode. Your options are:
  - sync-receive: A response is expected from the program. That is, the program returns a result.
  - sync-send: The interaction sends a request and does not expect to receive a response.
  - sync-send-receive: The interaction sends a request and expects to receive a response.

- Input: Identifies an input record. The input record is the data structure for the outbound interaction. The records generated from the FML/VIEW files specified at the beginning of the procedure are included in a drop-down list. Select the relevant record for the interaction.
Note:

You must specify an input record for each interaction before you can click Next. If the interaction does not require an input record, the record specified here is ignored.

- Output: Identifies an output record. The output record is the data structure for the results of the outbound interaction. The records generated from the FML/VIEW files specified at the beginning of the procedure are included in a drop-down list. Select the relevant record for the interaction.


Note: You must specify an output record for the interaction if the mode is set to sync-send-receive or sync-receive, before you can click Next.
- Description: Free text describing the interaction.

20. Enter the following interaction-specific parameters:
- Input Buffer Type: The type of data used for the input.
- Output Buffer Type: The type of the buffer to use for the results of an outbound interaction.
- No Transaction: Enables a service to be executed, regardless of transaction context. This parameter should always be checked.
- No Reply Expected: For future use.
- No Blocking Request: Avoids submitting a request if a blocking condition exists.
- No Timeouts: Ignores blocking timeouts.
- Signal Restart: If selected, whenever a signal interrupts an underlying system call, the call is reissued.
- Interaction Type: Select any of the following:
  - service: Enables service interaction (default).
  - enqueue: Stores a message on the queue that is specified by the information provided in the Queue Name box and the Queue Space Name box.
  - post: Posts an event, whose name is specified by the Event Name box, and any related data.
- Queue Space Name: Specifies the name of the queue space. Only enabled with enqueue interactions.
- Queue Name: Specifies the name of the queue. Only enabled with enqueue interactions.
- Event Name: Specifies the name of the event. Only enabled with post interactions.

21. Click Next.

The Configure Tuxedo Records step is displayed, as shown in the following figure:


Figure 68-12 Configure Tuxedo Records

22. Define how the Tuxedo records should be configured and click Next.
23. Click Next to generate the metadata. The Import Metadata screen lets you import the metadata to the remote computer or leave the generated metadata on the computer running Attunity Studio to be imported later.


Figure 68-13 Import Metadata

24. Select Yes to transfer the metadata to the target computer and click Finish.

The metadata is imported to the target computer.


Note:

After performing the import, you can view the metadata in the Metadata tab. You can also make any fine adjustments to the metadata and maintain it, as necessary. For more information, see Adapter Metadata General Properties.

Setting up Tuxedo Queue Adapter Interactions


The queue adapter requires metadata describing the inbound interaction, including its structure. During the import procedure, Tuxedo FML or VIEW configuration source files are copied to the computer running Attunity Studio. Alternatively, this procedure enables you to manually define the queue adapter metadata. Before generating the interactions, note the following:

- All the events described in a single Tuxedo Queue adapter should have the same Tuxedo buffer type.
- If the FML/FML32 buffer type is used, a common field with the same FBName must be included in all events. This field should contain the record/event name.
- The interaction is of async-send type (it does not expect to receive a response).

Carry out the following steps to generate inbound interaction metadata:


1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with your Tuxedo Queue adapter.
3. Expand the Bindings folder and then expand the binding with the Tuxedo Queue adapter you are working with.
4. Expand the Events folder.
5. Right-click the Tuxedo Queue adapter and select Show Metadata View. The Metadata view is displayed, with the Tuxedo Queue adapter displayed.

6. Right-click Imports and select New Import. The New Import dialog box is displayed.

7. Enter a name for the import. The name can contain letters, numbers, and the underscore character only.
8. Select the import type from the Import Type list. You have the following options:
- Tuxedo Queue Import Manager for XML/STRING/CARRAY Buffers: This option lets you manually define the required Tuxedo record.
- Tuxedo Queue Import Manager for VIEW/VIEW32 Buffers
- Tuxedo Queue Import Manager for FML/FML32 Buffers
- Tuxedo Queue Import Manager for FML/FML32 Buffers by VIEW

9. Click Finish. The next step depends on the selected import type. If the VIEW/VIEW32 Buffers, FML/FML32 Buffers by VIEW, or FML/FML32 Buffers option is selected, the Metadata Import wizard opens; in this case, proceed to the following step. If the XML/STRING/CARRAY Buffers option is selected, go to Defining the Tuxedo Queue Unstructured Records for an explanation of the manual metadata import.

10. In the Get Input Files step of the import wizard, click Add.
11. The Select Resources dialog box is displayed, which provides the option to select files from the local computer or copy the files from another computer.
12. If the files are on another computer, right-click My FTP Sites and select Add. Optionally, double-click Add FTP Site. The Add FTP Site screen is displayed.
13. Enter the server name or IP address where the required files reside and enter a valid username and password to access the computer (if anonymous access is used, select Anonymous connection), then click OK. The FTP site is added to the list of available sites.


Figure 68-14 The Select Resources Screen

14. Right-click the computer and select Set Transfer Type. Enter the appropriate transfer type, and click OK.
15. Expand the added site and locate the files. To change the directory, right-click the computer and select Change Root Directory. Enter the new directory name, and click OK.
16. Select the required file or files and click Finish.

The selected file or files are displayed in the Metadata Import wizard, Get Input Files step, as shown in the following figure:
Figure 68-15 Get Input Files


17. Click Next.

The Configure Tuxedo Records screen is displayed, as shown in the following figure:
Figure 68-16 Configure Tuxedo Queue Records

18. Ensure that the settings are correct for the following properties:

- Buffer type: Indicates the buffer type, as read from the FML/VIEW file. This property should not be modified.
- Get Tuxedo Queue header field in the output record: Indicates that the header field of each record is read. The default setting is true.
- Read strings from buffer as null terminated: Indicates that strings are handled as null terminated. The default setting is true.

19. Specify the FBName of the field that contains the event name. This field is common to all incoming events and should include the record name. This property is required only for FML/FML32 files.
20. Click Next to generate the metadata definitions for the Tuxedo Queue adapter, and display the Import Metadata step.


Figure 68-17 Import Metadata

21. Select Yes to transfer the data to the server, then click Finish.

The import wizard generates the record structures used for the inbound interactions. The metadata is imported based on the options specified and is stored on the target platform. An XML representation of the metadata is also generated. After performing the import, you can view the metadata in the Attunity Studio Design perspective Metadata tab, under Imports of the Queue adapter. You can also make any fine adjustments to the metadata.
Note:

See Working with Application Adapter Metadata for details about fine tuning the adapter metadata.

Defining the Tuxedo Queue Unstructured Records


You can manually define the required Tuxedo records when all the events in the Tuxedo queue are of the same type and are unstructured. Only one record is defined.
Carry out the following steps to manually define the required Tuxedo records:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder and then expand the machine with your Tuxedo Queue adapter.
3. Expand the Bindings folder and then expand the binding with the Tuxedo Queue adapter you are working with.
4. Expand the Events folder.
5. Right-click the Tuxedo Queue adapter and select Show Metadata View.


The Metadata view is displayed, with the Tuxedo Queue adapter displayed.
6. Right-click Imports and select New Import. The New Import screen is displayed.
7. Enter a name for the import. The name can contain letters, numbers, and the underscore character only.
8. Select Tuxedo Queue Import Manager for XML/STRING/CARRAY Buffers from the Import Type list.
9. Click Finish. The Get Tuxedo Records screen is displayed, as shown in the following figure:

Figure 68-18 The Get Tuxedo Records Screen

10. Click Add Record. A new record entry is added to the records list, with a default type.
11. Select the field type from the Field Type list. Your options are:
- STRING (the default)
- CARRAY
- XML
- X_OCTET

12. Specify the maximum buffer size in the Size column. This is not required if XML was selected as the field type.


13. Click Next.

The Configure Tuxedo Records screen is displayed, as shown in the following figure:


Figure 68-19 Configure Tuxedo Queue Records

14. Ensure that the settings are correct for the following properties:

- Buffer type: Indicates the buffer type, as read from the FML/VIEW file. This property should not be modified.
- Get Tuxedo Queue header field in the output record: Indicates that the header field of each record is read. The default setting is true.
- Read strings from buffer as null terminated: Indicates that strings are handled as null terminated. The default setting is true.

15. Specify the FBName of the field that contains the event name. This field is common to all incoming events and should include the record name. This property is required only for FML/FML32 files.
16. Click Next to generate the metadata definitions for the Tuxedo Queue adapter, and open the Import Metadata step.


Figure 68-20 Import Metadata

17. Select Yes to import the data to the target platform, then click Finish.

The record structure is generated, and the metadata is imported to and stored on the target platform. An XML representation of the metadata is also generated. After performing the import, you can view the metadata in the Attunity Studio Design perspective Metadata tab, under Imports of the Queue adapter. You can also make any fine adjustments to the metadata.
Note:

See Working with Application Adapter Metadata for details about fine tuning the adapter metadata.

Testing Tuxedo Application Interactions


Carry out the following steps for testing adapter interactions:
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine with the Tuxedo interactions you want to test.
4. Expand the Bindings folder, then expand the binding with the Tuxedo adapter you are testing.
5. Expand the Adapters folder.
6. Right-click the Tuxedo application adapter with the interactions you want to test and select Show Metadata View. The Metadata view opens, with the Tuxedo Application Adapter selected.


7. Expand the Interactions folder.
8. Right-click the interaction you want to test and select Test.
9. Click Next. The parameters specification wizard opens.
10. Specify any values for the parameters, as necessary, and click Next.
11. View the test result and click Finish.


Part XI
Non-Application Adapters Reference
This part contains the following topics:

- Database Adapter
- Query Adapter
- Managing the Execution of Queries over Large Tables

69
Database Adapter
This section includes the following topics:

- Overview
- Configuration Properties
- Metadata
- Security
- SQL Interaction Types
- Transaction Support
- Interaction Parameters
- Defining the Database Adapter
- Configuring Database Adapter Interactions
- Testing Database Adapter Interactions
- Creating SQL Queries

Overview
The Database Adapter enables access to all Attunity Connect Data Sources using predefined SQL statements, via JCA, XML, COM or .NET.
Note:

If you do not have predefined SQL, use the Query Adapter.

Supported Versions and Platforms


The Database Adapter is supported by all Attunity Connect versions and on all platforms where AIS runs. For information on the operating systems and data sources supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

Supported Features
The Database Adapter supports generating automatic and manual SQL interactions that, once generated, can be executed via JCA, XML, COM or .NET to access Attunity Connect data sources.


Configuration Properties
The following properties can be configured for the Database Adapter in the Properties tab of the Configuration Properties screen, when Configuring the Database Adapter (a hedged configuration sketch follows this list):
- connectString: This parameter specifies the connect string used to access the data source. For backward compatibility only.
- defaultDatasource: This parameter specifies the name of a data source in the binding configuration.
- multipleResult: This parameter specifies whether or not multiple results are returned. The default value is true. It applies to Stored Procedure Call interactions (see Stored Procedure Call Interaction), where the called procedure includes multiple SQL statements (either batched one after the other or in a loop, such as a WHILE loop).
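As a sketch only (the serialized form produced by Attunity Studio may differ, and the names are placeholders), a Database Adapter entry in the binding might look like this:

<adapter name="dbadapter" type="database">
   <!-- defaultDatasource names a data source defined in the same binding -->
   <config defaultDatasource="legacy" multipleResult="true"/>
</adapter>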

Metadata
The Database Adapter has a set of built-in schemas and interactions. These can be viewed and used from the Design Perspective Metadata tab.

Security
There are no specific security requirements for this adapter. The security for this adapter is governed by the administration authorization parameters for the current machine, and the relevant data source. For more information, see Add Authenticators.

SQL Interaction Types


The Database Adapter supports the following SQL interaction types when Manually Creating Interactions.

- Database Query Interaction
- Database Modification Interaction
- Stored Procedure Call Interaction

Database Query Interaction


The query interaction specifies a SELECT statement and includes the following information:
- SQL Statements: The SQL query that is performed when executing the interaction.
- Pass Through: Whether the query is passed directly to the backend database for processing or is processed by the Query Processor.
- Reuse compiled query: Whether or not the query objects created in the previous execution are saved in a cache for reuse.
- Encoding: The encoding used to return binary data in text format. The options are:
  - base64: Sets base 64 encoding.
  - hex: Sets hexadecimal encoding.
- Event: The interaction mode (async-send or sync-receive).


- Fail on no rows returned: Whether or not an error is returned if no data is returned.
- Root element: The root element name for records returned by the query.
- Record element: The record element name for records returned by the query.
- Max. records: The maximum number of records returned by the query execution.
- NULL string: The string returned in place of a null value. If not specified, the column is skipped.
- Interaction Parameters: The input parameters to the query in sequential order (as they appear in the SQL statement). For details about parameter definition, refer to Specifying Parameters.
A hedged sketch of a query interaction follows.
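The following sketch shows a query interaction over a hypothetical employees table. The XML shape and attribute names are assumptions for illustration; only the property names listed above are taken from the text.

<interaction name="getEmployee" mode="sync-send-receive">
   <!-- the ? placeholder corresponds to one interaction parameter (emp_id) -->
   <query maxRecords="100" nullString="N/A">
      SELECT emp_id, last_name FROM employees WHERE emp_id = ?
   </query>
</interaction>

The parameter for the ? placeholder would be defined as described in Specifying Parameters.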

Database Modification Interaction


The modification interaction specifies an SQL batch update statement (INSERT, DELETE or UPDATE), which includes the following information:

- SQL Statement: The SQL batch update statement that you want to execute. For example, an INSERT INTO or DELETE statement.
- Pass Through: Whether the batch update statement is passed directly to the backend database for processing or processed by the Query Processor.
- Reuse compiled query: Whether or not the query objects created at previous execution are saved in a cache for reuse.
- Fail on Zero Affected: Whether or not an error is returned if no rows were affected as a result of the batch update execution.
- Interaction Parameters: The input parameters to the query in sequential order (as they appear in the SQL statement). For details about parameter definition, refer to Specifying Parameters.

Stored Procedure Call Interaction


The Stored Procedure Call interaction enables a stored procedure to be called. It includes the following information:

- Datasource: The name of the data source, as defined in the binding configuration, where the stored procedure is found.
- Procedure Name: The name of the stored procedure.
- Procedure Return Value: The procedure return value attribute name (optional; the default is RETURNVALUE).
- Return Results: Whether to fetch the procedure results (default is false).
- Encoding: The encoding used to return binary data in text format. The options are:
  - base64: Sets base 64 encoding.
  - hex: Sets hexadecimal encoding.

- Root Element: The root element name and the record element name for records returned by the query, using the format <root>\<record> (optional; the default is root\record).
- NULL string: The string returned in place of a null value. If not specified, the column is skipped.


- Interaction Parameters: The input parameters for the stored procedure, in the sequential order defined in the creation of the procedure. For details about parameter definition, refer to Specifying Parameters.
A hedged sketch of a stored procedure call interaction follows.
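In the sketch below, the element and attribute names are illustrative assumptions, and GET_ORDERS on the ora1 data source is a placeholder:

<interaction name="callGetOrders" mode="sync-send-receive">
   <!-- returnValue names the attribute for the procedure return value;
        returnResults asks the adapter to fetch the procedure results -->
   <procedure datasource="ora1" name="GET_ORDERS"
              returnValue="RETURNVALUE" returnResults="true"/>
</interaction>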

Transaction Support
The Database adapter supports working in the following modes.

- Non-transacted mode: The adapter works in auto-commit mode.
- Local transaction mode: The first interaction starts a transaction that lasts until an explicit commit or an explicit rollback occurs.
- Distributed transaction operation: The ACX adapter participates in a distributed transaction by exposing the appropriate XA methods.

For more information on how to work with transactions, see Transaction Support.

Interaction Parameters
Any input parameters you add when Manually Creating Interactions must be specified. The following information is configurable for a parameter.

- name: The name of the parameter. This will be the attribute name in the input record.
- type: The type of the parameter, which can be any one of the following: string, number, timestamp, binary, xml.
- nullable (boolean): Whether the value can be null or not. The default is true.
- default: A default value for the parameter, used if the parameter attribute is missing in the input record.
Notes:

- If a field is not nullable and a default value is not supplied in the schema part of the adapter definition, an error occurs if the parameter attribute is missing in the input record.
- The parameters must be defined in the same order as they are used in the SQL statement.
- With the Stored Procedure Call interaction, if the database reports output parameters as input-output, make sure to provide an input value.
A hedged example of a parameter definition follows.
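For illustration, the emp_id parameter of the earlier query sketch could be described as follows; the XML shape is an assumption:

<parameter name="emp_id" type="number" nullable="false" default="0"/>

Because nullable is false, the default of 0 is used whenever the emp_id attribute is missing from the input record.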

Defining the Database Adapter


The process of defining a Database Adapter consists of the following tasks:

Defining the Database Adapter Connection


Configuring the Database Adapter

Defining the Database Adapter Connection


A Database Adapter connection is set using the Design perspective Configuration view in Attunity Studio. Follow these steps for defining the adapter connection.
1. Open Attunity Studio.
2. In the Design perspective Configuration view, expand the Machines folder.
3. Expand the machine with the adapter you are working with.
4. Expand the Bindings folder.
5. Expand the binding with the adapter you are working with.
6. Right-click Adapters and select New Adapter. The New Adapter dialog box is displayed.
7. Enter a name for the adapter in the Name field.


Note:

The word event is a reserved word and cannot be used when naming an adapter.

8. Select Database from the Type list.
9. Click Finish.

Configuring the Database Adapter


After defining the connection, you edit the adapter properties. Follow these steps for configuring the Database Adapter.
1. Right-click the adapter and select Open. The Configuration Properties screen is displayed.
2. Select the Properties tab. The configuration properties are displayed.
3. Configure the adapter parameters as required. For a description of the available parameters, see Configuration Properties.

Configuring Database Adapter Interactions


After setting up the Binding, define the Metadata (an Adapter Definition) for the Database Adapter. The adapter definition describes the SQL statement that should be executed for each required interaction. Interactions can be generated automatically, or defined manually.
Note:

The Database Adapter requires predefined SQL statements, as opposed to the Query Adapter, where the SQL statement is specified at runtime.

This section includes the following topics:


- Automatically Creating Interactions
- Manually Creating Interactions

Automatically Creating Interactions


Automatic generation of interactions enables the following SQL to be executed for each table:

- SELECT
- INSERT
- UPDATE
- DELETE

Follow these steps for automatically generating interactions.
1. In the Configuration view, right-click Adapters and select Show Metadata View. The Metadata tab is displayed with the Database Adapter displayed in the Metadata view.

2. Right-click the Interactions folder and select New. The New Interaction wizard is displayed.

Figure 69-1 New Interaction Creation Mode

3. To automatically generate interactions, select Automatic.
4. Click Next. The Select Tables screen is displayed, enabling you to add tables from any of the data sources in the same binding as the Database Adapter that you want to access with the interaction.
5. Click Add to add tables.


Expand the data sources and select the tables that you want to access with the interaction and click the right arrow to move these tables to the right-hand pane.
Figure 69-2 Select Tables for Interaction

6. Click Finish. The selected data sources and tables are displayed. Optionally, you can select a table and set the following for the SQL statements generated for that table:
- Use XML Field: The output is formatted as XML according to its actual structure. This is useful for tables which include variant fields and arrays.
Note:

For this field to be available, the exposeXmlField in the misc section of the binding environment properties must be set to true.

- Use Key in Select: The table key columns are used in the SELECT statement, in the WHERE section, as column1 = ? AND column2 = ? AND so on. If no key exists, all columns are added.


- Upsert: This property has a default of false. When set to true, the generated insert and update interactions behave as follows: when inserting a row, if the row does not exist, it is inserted; otherwise, the row is updated with the new information. When updating a row, if the row exists, it is updated; otherwise, a new row is inserted with the new information.

7. Click Finish. Four interactions are generated for each table selected, together with the input and output record structures to support the interactions.
Note:

If the Upsert property is checked, only three interactions are generated. An insert interaction is generated with an update property instead of both an insert and update interaction.

8. Click Yes to complete the task. The interactions and the record structures that relate to the interactions are displayed in the Metadata tab.

Manually Creating Interactions


Manual creation of interactions generates one interaction, based on the type of SQL selected: Database Query (a SELECT statement), Database Modification (an INSERT, UPDATE, or DELETE statement), or a Stored Procedure Call. For further details on the interaction types, refer to SQL Interaction Types.
Follow these steps for manually creating query interactions.
1. In the Configuration view, right-click Adapters and select Show Metadata View. The Metadata tab is displayed with the Database Adapter displayed in the Metadata view.
2. Right-click Interactions and select New. The New Interaction wizard is displayed.

Figure 69-3 New Interaction Creation Mode

3. To manually generate interactions, select Manual.
4. Select the type of SQL as Query for the interaction and click Next. You are prompted to provide a name for the interaction.


Figure 69-4 Interaction Name

5. In the Query Group area, use the radio buttons to select whether you want to create a new query or load an existing one. To select a previously saved query, click Browse and select the SQL.
Note: Only queries saved on the current machine running Attunity Studio and defined for data sources in the same binding as the database adapter can be used. Queries can be saved when they are created using the Query Tool in Attunity Studio by right-clicking a data source and selecting Query Tool.

6. Click Next. The Define Interaction step is displayed.


Figure 69-5 Define Interaction

7. Create the SQL query. This can be done as described in Creating SQL Queries, or the query can be written manually by selecting Enable manual query editing and writing the query in the bottom pane.
8. Click Next. The Interaction Properties screen is displayed.


Figure 69-6 Interaction Properties

9. If necessary, configure the interaction properties as described in Database Query Interaction.
10. Click Next. The Interaction Parameters screen is displayed.


Figure 69-7 Interaction Parameters

11. Specify the input parameters for the interaction, as detailed in Specifying Parameters.
12. Click Finish to generate the interaction, including the record schema required to support the interaction input and output.
The SQL used in the interactions can be modified to the exact application requirements. For more information, see Adapter Metadata General Properties. The SQL can be changed as long as the changes are supported by the schema.
Follow these steps for manually creating modification interactions.
1. In the Configuration view, right-click Adapters and select Show Metadata View. The Metadata tab is displayed with the Database Adapter displayed in the Metadata view.
2. Right-click the Interactions folder and select New. The New Interaction wizard is displayed.


Figure 69-8 New Interaction Creation Mode

3. To manually generate interactions, select Manual.
4. Select the type of SQL as Modification for the interaction and click Next. You are prompted to provide a name for the interaction.


Figure 69-9 Interaction Name

5. In the Query Group area, use the radio buttons to select whether you want to create a new query or load an existing one. To select a previously saved query, click Browse and select the SQL.
Note: Only queries saved on the current machine running Attunity Studio and defined for data sources in the same binding as the database adapter can be used. Queries can be saved when they are created using the Query Tool in Attunity Studio by right-clicking a data source and selecting Query Tool.

6. Click Next. The Define Interaction screen is displayed.


Figure 69-10 Define Interaction

7. Select the query type. Your choices are:
- Insert
- Update
- Delete
8. Create the SQL query. This can be done as described in Creating SQL Queries, or the query can be written manually by selecting Enable manual query editing and writing the query in the bottom pane.
9. At the bottom of the screen, under Interaction properties, you can optionally configure the interaction properties as described in Database Modification Interaction.
10. Click Next. The Interaction Parameters screen is displayed.


Figure 69-11 Interaction Parameters

11. Specify the input parameters for the interaction, as detailed in Specifying

Parameters.
12. Click Finish to generate the interaction, including the record schema required to

support the interaction input and output. The SQL used in the interactions can be modified to the exact application requirements. For more information, see Adapter Metadata General Properties. Follow these steps for manually creating stored procedure interactions.
1. In the Configuration view, right-click Adapters and select Edit Metadata. The Metadata tab is displayed with the Database Adapter displayed in the Metadata view.
2. Right-click Interactions and select New. The New Interaction wizard is displayed.


Figure 69-12 New Interaction Creation Mode

3. To manually generate interactions, select Manual.
4. Select Stored Procedure Call as the type of SQL and click Next. The Interaction Name screen is displayed.

Figure 69-13 Interaction Name

5. Provide a name for the interaction.


6. Click Next. The Define Interaction screen is displayed.

Figure 69-14 Define Interaction

7. Specify the data source and procedure name, and optionally the other procedure interaction properties. For more information about these properties, see Stored Procedure Call Interaction.
8. Click Next. The Interaction Parameters screen is displayed.


Figure 69-15 Interaction Parameters

9. Specify the input parameters for the interaction, as detailed in Specifying Parameters.
10. Click Finish to generate the interaction, including the record schema required to support the interaction input and output.

Specifying Parameters
When Manually Creating Interactions, if you enable manual query editing and your SQL statement contains parameters, you must define the Interaction Parameters for each input parameter entered. Follow these steps to add a parameter.
1. Click Add. You are prompted to name the parameter.
2. Enter a unique name and click OK.
3. Edit the parameter's properties, as necessary. Refer to Interaction Parameters for details.
4. Click Finish.


Testing Database Adapter Interactions


Follow these steps for testing adapter interactions.
1. In the Configuration view, right-click the data source and select Show Metadata View. The Metadata tab is displayed with the Database Adapter displayed in the Metadata view.
2. Expand Interactions.
3. Right-click the interaction you want to test and select Test. The Test Interaction wizard is displayed.
4. Click Next. The parameters specification wizard is displayed.
5. Specify any values for the parameters as necessary, and click Next. The test result is displayed.
6. Click Finish.

Creating SQL Queries


When creating Query Interactions or Modification Interactions, you have the option of manually defining your SQL statements. For both interaction types, this is done in the Define Interaction screen shown in the figures above. Follow these steps to create SQL queries.
1. Select tables.
In the left pane, expand the data source where the table resides. Select the table and click the right arrow to move the table to the right pane of selected tables.

2. Select columns.
In the left pane, expand the data source and the table containing the column. Open the Columns tab in the right pane. Select the column and click the right arrow to move the column to the right pane.

3. Join columns from different tables. The Create Joint tables pane is displayed.
Expand the table and select the column you want to join. Click the right arrow to move the column to the right pane. Optionally, click Next and edit the join statement. Click Finish.

4. Add conditions in a WHERE clause. WHERE clauses are set in the Where tab.
Select and move the column you are setting the WHERE clause for to the right pane.


Set the operator and value conditions as needed.

5. Filter results using a HAVING clause. The HAVING clause provides conditions for grouping columns. HAVING clauses are set in the Having tab.
Select and move the column you are filtering to the right pane. Set the operator and value conditions as needed.

6. Sort results. Query results are sorted in the Sort tab.
Select and move the column whose query result you want to sort to the right pane. Select the sorting order as either ascending or descending.
For a modification query, the following options are also available, and an illustrative query follows this list:
Pass Through: Whether the query is passed directly to the back-end database for processing or processed by the Query Processor.
Reuse compiled query: Whether the query is saved in a cache for reuse.
Fail on Zero Affected: Whether an error is returned if no rows are affected.
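A query assembled through these steps is ordinary SQL. For illustration only (this query is not part of the original guide; it uses the nv_dept sample table that appears elsewhere in this guide):

select dept_id, dept_budget
from nv_dept
where dept_budget > 0
order by dept_id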


70
Query Adapter
This section includes the following topics:

Overview
Metadata
Security
Predefined Interactions
Transaction Support
Using the Query Adapter

Overview
The Query Adapter enables access to all of the AIS data source drivers for executing SQL statements via JCA, XML, COM, or .NET. The Query adapter is automatically set up and configured as part of the AIS installation. If you have predefined SQL, use the Database Adapter. The Query Adapter can be used in an application without any definition in a binding configuration. However, if you want to specify a default data source to use with the adapter (so that the data source does not have to be included in SQL statements), you can define another adapter in a binding configuration, of type "query".

Supported Versions and Platforms


The Query Adapter is supported by all AIS versions and on all platforms where the AIS server can run. For information on the operating systems and data sources supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

Features Highlights
The Query Adapter supports a set of predefined interactions that enable you to access AIS using SQL statements through applicative interfaces.

Metadata
The Query adapter has a set of built-in schemas and interactions, which compose its metadata.


Security
There are no specific security requirements for this adapter. The security for this adapter is governed by the administration authorization parameters for the current machine and the relevant data source. For more information, see Add Authenticators.

Transaction Support
The Query adapter supports the following transaction modes:

Non-transacted: The adapter operates in auto-commit mode.
Local transaction: The first interaction starts a transaction that lasts until an explicit commit or an explicit rollback occurs.
Distributed transaction: The ACX adapter participates in a distributed transaction by exposing the appropriate XA methods.

For further information on working with transactions, see Transaction Support.

Predefined Interactions
The following interactions can be used with the ACX EXECUTE verb on data sources and AIS procedures which are defined in the binding configuration:

callProcedure Interaction
ddl Interaction
getSchema Interaction
query Interaction
setErrorAction Interaction
update Interaction

callProcedure Interaction
The callProcedure interaction enables calling a stored procedure. If the stored procedure has multiple results, then all the recordsets are returned in the output. The last recordset is always dedicated to the return-value and output parameters.
Note: Several databases report output parameters as input-output. If this is the case with the procedure you are running, ensure that you provide an input value for this parameter.

Input Record
The callProcedure interaction gets the input attributes listed in the following table:
Table 70-1 callProcedure Interaction Attributes

id (string) (Required): A string which uniquely identifies the interaction.
datasource (Optional): The name of the data source, as defined in the binding configuration, where the stored procedure is found.
name (Required): The name of the stored procedure.
inputParameter (Required): The input parameters to the stored procedure in sequential order (as defined in the createProcedure statement). You can specify the following attributes for each parameter:
value: A value for the parameter.
type: The parameter type. Your options are string|number|timestamp|binary|xml.
null (boolean): Specifies whether the value is null or not.
xmlValue: Specifies the value when the parameter is of XML type.
outputFormat (Optional): Specifies how the query results are formatted. Your options are:
attributes (the default)
elements
msado (specifies the MS ADO XML recordset persistence format)
binaryEncoding (Optional): Specifies the encoding used to return binary data in text format. Your options are:
base64 (the default): Sets base 64 encoding.
hex: Sets hexadecimal encoding.
metadata (boolean) (Optional): Specifies whether the interaction returns metadata for the retrieved recordset. The default is set to false.
nullString (string) (Optional): Specifies the string returned in place of a null value. If not specified, the column is skipped.
outputRoot (Optional): Specifies the root element name and the record element name for records returned by the query, using the <root>\<record> format.
returnResults (boolean) (Optional): Specifies whether to fetch the procedure results. The default is set to false.
returnValueName (string) (Optional): Specifies the procedure return value attribute name. The default is RETURNVALUE.

Output Record
The callProcedure interaction output record looks as follows:
<record name="multipleResultset"> <field name="id" type="string"/> <field name="sqlCode" type="string"/> <field name="recordset" type="recordset" reference="true" array="*"/> </record> <record name="recordset"> <field name="data" type="xml" array="*"/> <field name="id" type="string"/> <field name="sqlCode" type="string"/> </record>


Where:

data: An array of XML sub elements which are the query execution results in the appropriate format.
id: The id attribute specified in the input record.
sqlCode: Status code indicating the query execution success or failure.
Example 70-1 A Simple callProcedure Interaction

The following XML request calls a stored procedure called in_out_multi, stored in a database specified as dbsql in the binding settings:
<?xml version="1.0" encoding="UTF-8"?> <acx> <connect adapter="query"/> <execute> <callProcedure datasource="dbsql" name="in_out_multi" metadata="true" returnResults="true"> <inputParameter value="1" type="number"/> <inputParameter value="2" type="number"/> </callProcedure> </execute> <disconnect/> </acx>

The procedure results are returned as one recordset per result set, with the last recordset dedicated to the return value and output parameters. A minimal sketch of the general response shape follows (based on the documented output record; the column names and values shown are illustrative only):
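<?xml version="1.0" encoding="ISO-8859-1"?>
<acx type="response">
    <connectResponse idleTimeout="0"></connectResponse>
    <executeResponse>
        <multipleResultset id="1" sqlCode="0">
            <!-- one recordset per procedure result set (illustrative columns) -->
            <recordset id="1" sqlCode="0">
                <record COL1="1" COL2="2"/>
            </recordset>
            <!-- the last recordset carries the return value and output parameters -->
            <recordset id="1" sqlCode="0">
                <record RETURNVALUE="0" OUT_PARAM="3"/>
            </recordset>
        </multipleResultset>
    </executeResponse>
</acx>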

ddl Interaction
The ddl interaction specifies a DDL statement.

Input Record
The ddl interaction gets the input attributes listed in the following table:


Table 70-2 ddl Interaction Attributes

id (string) (Optional): A string which uniquely identifies the DDL SQL.
sql (string) (Required): The DDL SQL that you want to execute. It can be specified as an attribute or as a text sub element.
passThrough (boolean) (Optional): Specifies whether the SQL is passed directly to the back-end database for processing, or processed by the Query Processor. The default is set to false.
datasource (string) (Optional): Specifies the default data source against which to execute the query. In the non-pass-through case, the query itself may include the data source.

Output Record
The ddl interaction output record looks as follows:
<record name="status"> <field name="id" type="string"/> <filed name="sqlStatus" <field name="sqlCode" type="string"/> <field name="rowsAffected" type="int"/> </record>

Where:

id: The id attribute specified in the input record.
sqlStatus: The SQL status. It can be one of ok|error|duplicateKey|constraintViolation|permissionError.
sqlCode: Status code indicating the query execution success or failure.
rowsAffected: Indicates the number of rows affected.
Example 70-2 A Simple ddl Interaction

A simple ddl interaction request takes the following general form (a hedged sketch based on the attributes in Table 70-2; the data source name and table definition are illustrative):
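<?xml version="1.0"?>
<acx>
    <connect adapter="query"/>
    <execute>
        <!-- the table definition below is illustrative only -->
        <ddl id="1" datasource="DISAM">
            <sql> create table nv_tmp (tmp_id char(4), tmp_budget float) </sql>
        </ddl>
    </execute>
    <disconnect/>
</acx>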

The response follows the status output record described above; a sketch (values illustrative):
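<?xml version="1.0" encoding="ISO-8859-1"?>
<acx>
    <connectResponse></connectResponse>
    <executeResponse>
        <!-- sqlStatus and rowsAffected values are illustrative -->
        <status id="1" sqlStatus="ok" sqlCode="0" rowsAffected="0"/>
    </executeResponse>
</acx>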

A pass-through ddl interaction differs only in that the SQL is handed directly to the back-end database; a sketch (again with an illustrative statement):
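<?xml version="1.0"?>
<acx>
    <connect adapter="query"/>
    <execute>
        <!-- passThrough="true" sends the statement directly to the back-end database -->
        <ddl id="2" datasource="DISAM" passThrough="true">
            <sql> drop table nv_tmp </sql>
        </ddl>
    </execute>
    <disconnect/>
</acx>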



getSchema Interaction
The getSchema interaction retrieves schema information for different data source objects.

Input Record
The getSchema interaction gets the input attributes listed in the following table:
Table 70-3 getSchema Interaction Attributes

id (string): A string which uniquely identifies the interaction call.
type (Required): The type of object whose schema you want to retrieve. One of the following must be specified:
datasource: Retrieves schema information on a data source.
tables: Retrieves schema metadata on all tables.
table: Retrieves schema metadata on a specific table.
procedures: Retrieves schema metadata on all procedures.
procedure: Retrieves schema metadata on a specific procedure.
columns: Retrieves schema metadata on all columns.
indexes: Retrieves schema metadata on all indexes.
procedureColumns: Retrieves schema metadata on all procedure columns.
primaryKeys: Retrieves schema metadata on all primary keys.
foreignKeys: Retrieves schema metadata on all foreign keys.
datasource: The name of the data source whose schema is retrieved.
owner: The name of the owner of the object whose schema is retrieved.
table: The name of the table whose schema is retrieved.
tableType: The type of table whose schema will be retrieved, such as synonym, system, or table.
column: The name of the column whose schema is retrieved. This attribute is required only when columns is specified for the type attribute.
procedure: The name of the procedure whose schema is retrieved. This attribute is required only when procedure is specified for the type attribute.
foreignOwner: The owner of the table that is referenced in the foreignKeys specification for the type attribute. This attribute can be specified only if foreignKeys is specified for the type attribute.
foreignTable: The name of the table referenced in the foreignKeys specification of the type attribute. This attribute can be specified only if foreignKeys is specified for the type attribute.

Note: For each unused attribute, a schema is returned for all optional values.
Example 70-3 A tables getSchema Interaction Call

<getSchema id="1" type="tables" datasource="DISAM"/>

The following is returned:


<schema id="1"> <tables> <table name="ACCOUNT" type="TABLE" owner="" datasource="DISAM"/> <table name="NATION" type="TABLE" owner"" datasource="DISAM"/> ... </tables> </schema>

Example 70-4 A table getSchema Interaction Call

<getSchema id="1" type="table" datasource="DISAM" table="nv_dept"/>

The following is returned:


<schema id="1"> <table name="nv_dept" datasource="DISAM"> <column name="dept_id" type="string" nullable="true" maxLength="4" precision="4" scale="0"/> <column name="dept_budget" type="float" nullable="true" maxLength="8" precision="0" scale="0"/> <column name="XML" type="string" nullable="false" maxLength="65" precision="64" scale="0"/> <index name="dept_prim" unique="true" primary="false" cardinality="0" sort="ascending" <segment name="dept_id"/> </index>

Query Adapter 70-7

</table> </schema>
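The type-specific attributes in Table 70-3 combine in the same way for the other object types. For example, a hypothetical foreignKeys call (the table names here are illustrative, not taken from the original examples):

<getSchema id="2" type="foreignKeys" datasource="DISAM" table="nv_emp" foreignTable="nv_dept"/>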

query Interaction
The query interaction specifies an SQL query, which is executed by the ACX EXECUTE verb.

Input Record
The query interaction gets the input attributes listed in the following table:
Table 70-4 query Interaction Attributes

id (string) (Optional): A string which uniquely identifies the query.
sql (string) (Required): The SQL query that you want to execute. It can be specified as an attribute or as a text sub element.
outputFormat (Optional): Specifies how the query results are formatted. Your options are:
attributes (the default)
elements
msado (specifies the MS ADO XML recordset persistence format)
xml (used to get the full table schema when a SELECT XML statement is executed; this is the standard ACX format)
Note: For details, see Output Data Formats.
outputRoot (Optional): Specifies the root element name and the record element name for records returned by the query, using the <root>\<record> format.
binaryEncoding (Optional): Specifies the encoding used to return binary data in text format. Your options are:
base64 (the default): Sets base 64 encoding.
hex: Sets hexadecimal encoding.
metadata (boolean) (Optional): Specifies whether the query returns metadata for the retrieved recordset. The default is set to false.
maxRecords (integer) (Optional): Specifies the maximum number of records returned by the query. The default value is set to zero, indicating no limitation.
nullString (string) (Optional): Specifies the string returned in place of a null value. If not specified, the column is skipped.
passThrough (boolean) (Optional): Specifies whether the query is passed directly to the back-end database for processing or processed by the Query Processor. The default is set to false.
datasource (string) (Optional): Specifies the default data source against which to execute the query. In the non-pass-through case, the query itself may include the data source.
inputParameter (Required): The input parameters to the query in sequential order (as they appear in the SQL statement). You can specify the following attributes for each parameter:
value: A value for the parameter.
type: The parameter type. Your options are string|number|timestamp|binary|xml.
null (boolean): Specifies whether the value is null or not.
xmlValue: Specifies the value when the parameter is of XML type.
failOnNoRowsReturned (boolean) (Optional): Specifies whether the query execution should fail in case no rows are returned from it. The default is set to false.

Output Record
The query interaction output record looks like the following:
<record name="recordset"> <field name="data" type="xml" array="*"/> <field name="id" type="string"/> <field name="sqlCode" type="string"/> </record>

Where:

data: An array of XML sub elements which are the query execution results in the appropriate format.
id: The id attribute specified in the input record.
sqlCode: Status code indicating the query execution success or failure.

Output Data Formats


This section describes the output format for the following:

Attributes: Each column is represented by an attribute in the row element. Recordsets which contain chapter columns are formatted as child elements. For example, the format for a chapter column (columnX), is as follows:
<record column1="value1" ... columnN="valueN"> <columnX> <rowchild child_column1="value1" ... child_columnN="valueN"/> ... </columnX> </record>

Recordsets containing BLOB columns are formatted as child elements with encoded binary text content. For example, the format for a BLOB column (columnX), is as follows:


<record column1="value1" ... columnN="valueN"> <columnX encoding="base64|hex"> ... encoded_binary_data ... </columnX> </record>

The binary data can be either of base64 encoding or hex encoding (two hexadecimal digits per data byte). Recordsets containing CLOB columns are formatted as child elements with CDATA content.

Elements: Each column is represented by a sub element in the row element. For example:
<records>
    <record>
        <column1>value1</column1>
        ...
        <columnN>valueN</columnN>
    </record>
    ...
</records>

If the value of a column is NULL, then the corresponding child element is omitted. This format takes more space than the attributes format, but it lends itself more easily to representing hierarchical data. For example, the format for a chapter (columnX) is as follows:
<record>
    <column1>value1</column1>
    ...
    <columnX>
        <record>...child_row...</record>
        ...child_rows...
    </columnX>
    ...
    <columnN>valueN</columnN>
</record>

Recordsets containing BLOB columns are formatted as child elements with encoded binary text content. For example, the format for a BLOB column (columnX), is as follows:
<record>
    <column1>value1</column1>
    ...
    <columnX encoding="base64|hex">
        ...encoded_binary_data...
    </columnX>
    ...
    <columnN>valueN</columnN>
</record>

The binary data can be either of base64 encoding or hex encoding (two hexadecimal digits per byte). Recordsets containing CLOB columns are formatted as child elements with CDATA content.


Example 70-5 An ACX query interaction that returns a recordset formatted in the attributes (default) format

<?xml version="1.0"?>
<acx>
    <connect adapter="query"/>
    <execute>
        <query id="1">
            <sql> select * from navdemo:nation </sql>
        </query>
    </execute>
    <disconnect/>
</acx>

The output is formatted in the default format, as follows:


<?xml version="1.0" encoding="ISO-8859-1"?>
<acx type="response">
    <connectResponse idleTimeout="0"></connectResponse>
    <executeResponse>
        <recordset id="1">
            <record N_NATIONKEY="0" N_NAME="ALGERIA" N_REGIONKEY="0" N_COMMENT="New Distributor"/>
            <record N_NATIONKEY="1" N_NAME="ARGENTINA" N_REGIONKEY="1" N_COMMENT="Far Away"/>
            ...
        </recordset>
    </executeResponse>
</acx>

Example 70-6 An ACX query interaction that returns a recordset which includes metadata

<?xml version="1.0"?>
<acx>
    <connect adapter="query"/>
    <execute>
        <query id="1" metadata="true">
            <sql> select * from navdemo:nation </sql>
        </query>
    </execute>
    <disconnect/>
</acx>

The following output includes metadata:


<?xml version="1.0" encoding="ISO-8859-1"?>
<acx type="response">
    <connectResponse idleTimeout="0"></connectResponse>
    <executeResponse>
        <recordset id="1">
            <metadata>
                <element name="record" type="record">
                    <attribute name="N_NATIONKEY" type="int" maxLength="4" nullable="false" ordinal="1"/>
                    <attribute name="N_NAME" type="string" maxLength="25" nullable="false" ordinal="2"/>
                    ...
                </element>
            </metadata>
            <record N_NATIONKEY="0" N_NAME="ALGERIA" N_REGIONKEY="0" N_COMMENT="New Distributor"/>
            <record N_NATIONKEY="1" N_NAME="ARGENTINA" N_REGIONKEY="1" N_COMMENT="Far Away"/>
            ...
        </recordset>
    </executeResponse>
</acx>

Example 70-7 An ACX query interaction that returns a recordset formatted in the elements format and using input parameters

<?xml version="1.0"?>
<acx>
    <connect adapter="query"/>
    <execute>
        <query id="1" outputFormat="elements">
            <sql> select * from navdemo:nation where N_NATIONKEY > ? </sql>
            <inputParameter value="-1" type="number"/>
        </query>
    </execute>
    <disconnect/>
</acx>

The output recordset is formatted in the elements format, as follows:


<?xml version="1.0" encoding="ISO-8859-1"?>
<acx type="response">
    <connectResponse idleTimeout="0"></connectResponse>
    <executeResponse>
        <recordset id="1">
            <record>
                <N_NATIONKEY>0</N_NATIONKEY>
                <N_NAME>ALGERIA</N_NAME>
                <N_REGIONKEY>0</N_REGIONKEY>
                <N_COMMENT>New Distributor</N_COMMENT>
            </record>
            <record>
                <N_NATIONKEY>1</N_NATIONKEY>
                <N_NAME>ARGENTINA</N_NAME>
                <N_REGIONKEY>1</N_REGIONKEY>
                <N_COMMENT>Far Away</N_COMMENT>
            </record>
            ...
        </recordset>
    </executeResponse>
</acx>

Example 70-8 An ACX query interaction that executes a hierarchical query and returns a recordset formatted in the default format

<?xml version="1.0"?>
<acx type="request" id="5169729">
    <connect adapter="query" persistent="false"/>
    <execute>
        <query outputFormat="attributes">
            <sql>
                select r.r_name as "Region",
                    {select n.n_name as "Name" from navdemo:nation n
                     where n.n_regionkey = r.r_regionkey} as "Nations"
                from navdemo:region r
            </sql>
        </query>
    </execute>
    <disconnect/>
</acx>

The following output recordset is returned:


<?xml version="1.0" encoding="ISO-8859-1"?>
<acx>
    <connectResponse></connectResponse>
    <executeResponse>
        <recordset>
            <record Region="AFRICA">
                <Nations>
                    <record Name="ALGERIA"/>
                    ...
                </Nations>
            </record>
            <record Region="AMERICA">
                <Nations>
                    ...
                </Nations>
            </record>
        </recordset>
    </executeResponse>
</acx>
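The remaining attributes in Table 70-4 are used in the same request form. For instance, a hypothetical request that caps the number of returned records and renames the output elements (not one of the original examples; attribute usage follows the table above):

<?xml version="1.0"?>
<acx>
    <connect adapter="query"/>
    <execute>
        <!-- return at most 10 records, wrapped as <nations><nation .../></nations> -->
        <query id="2" maxRecords="10" outputRoot="nations\nation">
            <sql> select * from navdemo:nation </sql>
        </query>
    </execute>
    <disconnect/>
</acx>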

setErrorAction Interaction
The setErrorAction interaction sets the default adapter behavior on error while performing an interaction. The value set is valid for the current connection. This interaction has no output.

Input Record
<record name="setErrorAction"> <field name="onError" type="onErrorAction"/> </record>

This interaction gets a single attribute, which can take one of the following values:

abort: Indicates that an error is returned. This is the default behavior.
next: Causes the adapter to perform the next interaction on error.
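A minimal sketch of a request that sets the error action for the connection, assuming the same ACX request form used by the other interactions in this chapter:

<?xml version="1.0"?>
<acx>
    <connect adapter="query"/>
    <execute>
        <!-- continue with subsequent interactions even if one fails -->
        <setErrorAction onError="next"/>
    </execute>
    <disconnect/>
</acx>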

update Interaction
The update interaction specifies an SQL batch update statement (such as INSERT or UPDATE), executed by the ACX EXECUTE verb.

Input Record
The update interaction gets the input attributes listed in the following table:


Table 70-5 update Interaction Attributes

id (string) (Optional): A string which uniquely identifies the update SQL query.
sql (string) (Required): The batch update SQL query that you want to execute. It can be specified as an attribute or as a text sub element.
passThrough (boolean) (Optional): Specifies whether the query is passed directly to the back-end database for processing or processed by the Query Processor. The default is set to false.
failOnZeroAffected (boolean) (Optional): Specifies whether the query execution should fail in case the batch update SQL did not affect any row. The default is set to false.
datasource (string) (Optional): Specifies the default data source against which to execute the query. In the non-pass-through case, the query itself may include the data source.
inputParameter (Required): The input parameters to the query in sequential order (as they appear in the SQL statement). You can specify the following attributes for each parameter:
value: A value for the parameter.
type: The parameter type. Your options are string|number|timestamp|binary|xml.
null (boolean): Specifies whether the value is null or not.
xmlValue: Specifies the value when the parameter is of XML type.

Output Record
The update interaction output record looks like the following:
<record name="status"> <field name="id" type="string"/> <field name="sqlStatus"/> <field name="sqlCode" type="string"/> <field name="rowsAffected" type="int"/> </record>

Where:

id: The id attribute specified in the input record.
sqlStatus: One of ok|error|duplicateKey|constraintViolation|permissionError.
sqlCode: The status code, indicating the query execution success or failure.
rowsAffected: The number of rows affected.
Example 70-9 A simple update interaction

<?xml version="1.0"?>
<acx type="request" id="5169729">
    <connect adapter="query" persistent="false"/>
    <execute>
        <update datasource="DISAM">
            <sql> Update nv_dept set dept_budget = 2 where dept_budget = 0 </sql>
        </update>
    </execute>
    <disconnect/>
</acx>

The following response is returned:


<?xml version="1.0" encoding="ISO-8859-1"?>
<acx>
    <connectResponse></connectResponse>
    <executeResponse>
        <status sqlStatus="ok" rowsAffected="1"/>
    </executeResponse>
</acx>

Example 70-10 An update interaction with XML parameter

<?xml version="1.0"?>
<acx type="request" id="5169729">
    <connect adapter="query" persistent="false"/>
    <execute>
        <update datasource="DISAM">
            <sql> Update nv_dept set XML = ? where dept_id = 'DP00' </sql>
            <inputParameter type="XML">
                <nv_dept dept_id="DP00" dept_budget="5"/>
            </inputParameter>
        </update>
    </execute>
    <disconnect/>
</acx>

The following response is returned:


<?xml version="1.0" encoding="ISO-8859-1"?>
<acx>
    <connectResponse></connectResponse>
    <executeResponse>
        <status sqlStatus="ok" rowsAffected="1"/>
    </executeResponse>
</acx>

Interactions for Internal Use


The following interactions are for internal use and should not be used unless specifically instructed by Attunity Support.

testDatasource: This interaction tests the connection to the specified data source. It attempts to retrieve a list of tables.
transactionGetStatus: This interaction retrieves status information on failed transactions that need to be recovered.
transactionHeuristicRecover: This interaction heuristically recovers a transaction.
updateStatistics: This interaction updates the specified statistics information.
prepareSql: This interaction prepares an SQL query and returns its metadata.
localCopy: This interaction makes a local copy of a specified table.
export: This interaction exports the native schema of a specified table.

Using the Query Adapter


This section includes two examples of the use of the Query adapter.
Example 70-11

The following XML request uses the query adapter to access the nation table, which is part of the NAVDEMO demo database, supplied as part of the AIS Server installation. The query returns a recordset that is formatted in the default format.
<?xml version="1.0"?> <acx> <connect adapter="query" /> <execute> <query> select * from navdemo:nation </query> </execute> <disconnect/> </acx>

Note that the table name is qualified by the data source (navdemo). The output is formatted in the default format:
<?xml version="1.0" encoding="ISO-8859-1"?>
<acx type="response">
    <connectResponse idleTimeout="0"></connectResponse>
    <executeResponse>
        <recordset id="1">
            <record N_NATIONKEY="0" N_NAME="ALGERIA" N_REGIONKEY="0" N_COMMENT="New Distributor"/>
            <record N_NATIONKEY="1" N_NAME="ARGENTINA" N_REGIONKEY="1" N_COMMENT="Far Away"/>
            <record N_NATIONKEY="2" N_NAME="BRAZIL" N_REGIONKEY="1" N_COMMENT="Nearby"/>
            ...
        </recordset>
    </executeResponse>
</acx>

Example 70-12

An adapter is defined with a default data source. Thus, the data source does not need to be specified in the SQL statement in the XML document.
<adapter name="demo" type="query" connect="defaulttdp=navdemo" />

The adapter demo can be used to access the nation table using the following XML:
<?xml version="1.0"?> <acx> <connect adapter="demo" /> <execute>

70-16 AIS User Guide and Reference

<query> select * from nation </query> </execute> <disconnect/> </acx>

The same results are produced as shown above.


71
Managing the Execution of Queries over Large Tables
This section includes the following topics:

Overview of Query Governing
Configuring Query Governing

Overview of Query Governing


Query governing is defined in Attunity Studio and enables you to manage the way queries are executed. Query governing parameters are defined at the workspace level of the Daemon. Thus, any specified restrictions apply to all queries for all Data Sources that require Attunity Metadata (i.e., File-system Data Sources) and which are defined in the Binding associated with the Workspace.

Configuring Query Governing


You can configure the parameters for query governing in the WS Governing tab of the Edit Workspace screen.

To configure query governing
1. Open Attunity Studio.
2. In the Design Perspective Configuration view, expand the machine with the workspace you are working with.
3. Expand the Daemons folder, then expand the daemon with the workspace you are working with.
4. Right-click the Workspace you want to edit, and select Open.
5. Click the General tab and find the Query governing restrictions section at the bottom of the editor.


Figure 71-1 Query Governing Restrictions

6. Enter the relevant values for the following parameters:
Max Number of Rows in a Table That Can be Read: Specify the number of table rows that can be read in a query. When the number of rows read from a table during query execution exceeds the number stated, the query returns an error.
Max Number of Rows Allowed in a Table Before Scan is Rejected: Specify the number of table rows that can be scanned during query execution. This parameter also has an impact during query optimization.
Note: After changing the values in the WS Governing tab, you must refresh the daemon and reload the configuration.


Part XII
CDC Agents Reference
This part contains the following topics:

Adabas CDC on z/OS Platforms
Adabas CDC on UNIX Platforms
Adabas CDC for OpenVMS
DB2 CDC (z/OS)
DB2 CDC (OS/400 Platforms)
Enscribe CDC (HP NonStop Platforms)
IMS/DB CDC on z/OS Platforms
Microsoft SQL Server CDC
Oracle CDC (on UNIX and Windows Platforms)
Query-Based CDC Agent
SQL/MP CDC on HP NonStop
VSAM Under CICS CDC (on z/OS)
VSAM Batch CDC (z/OS Platforms)

72
Adabas CDC on z/OS Platforms
This section includes the following topics:

Overview
Functionality
Configuration Properties
Change Metadata
Transaction Support
Security
Platform Specific Information
Data Types
Configuring the Adabas CDC
Setting up the Adabas Agent in Attunity Studio

Overview
The Attunity Stream CDC solution for Adabas on mainframe systems captures changes made to the Adabas files that are written to archive files using the User Exit 2 (UE2) procedure. The UE2 procedure is activated by Adabas when the current PLOG file is full. The Attunity Adabas agent for the mainframe maintains its own tracking file to register the archive files that are created after configuring the UE2 procedure. For information on how to configure the UE2 procedure, see Adding the Tracking File Usage Step to the UE2 Procedure. The Adabas CDC solution works with Adabas data sources that use either ADD or Predict. Adabas versions 6.2 and 7.4 are supported.

Functionality
The Adabas agent supports the basic functionality for all AIS CDC agents. You should note that the behavior of this agent when setting the Set Stream Position parameter by Time Stamp is different: the time stamp is defined per block, not per event. The timestamp of a block is defined as the timestamp of the last event in the block. When you configure Set Stream Position by Timestamp, it is possible to get events that occurred before the requested event and reside in the same block as the event requested by the timestamp. If you want to capture all changes, this will return all the changes from all Adabas archive files registered in the Attunity tracking file. When capturing changes from a specific time stamp, you can select a time that is later than the creation time of the last archive file created.

The Tracking File


The Attunity Adabas CDC solution uses a tracking file to register the archive files that are created after configuring the UE2 procedure. The tracking file specifies the following information regarding the changes:

The dataset names of archived Adabas PLOGs.
The timestamp indicating the starting time of each archived PLOG.
The Adabas session number.
The starting block counter.

Configuration Properties
The following parameters can be configured for the Adabas CDC agent:

Data Source Properties
CDC Logger Properties

Data Source Properties


The following are the Data Source Properties:

dbNumber: The Adabas database number.
predictFileNumber: (Predict only) The Predict file number.
predictDbNumber: (Predict only) When the Predict file resides in a different database than the data, indicate the database number in which the Predict file resides.

CDC Logger Properties


Tracking File name: The name of the mainframe file that is used to register the Adabas archive files. This must be the same name that is defined in the UE2 procedure.
Adabas Version: The version of Adabas you are using. All versions earlier than version 8 are supported.

This agent also supports the standard AIS Agent configuration properties. For more information, see Creating a CDC with the Solution Perspective.

Change Metadata
Changes are captured and maintained in a change table. The table contains the original table columns and CDC header columns. The header columns are described in the following table:


Table 72-1 Header Columns

context: The current context.
timestamp: The date and time of the occurrence.
tableName: The name of the table where the change was made.
operation: This column lists the operations available for the CDC agent. The available operations are:
BEFOREIMAGE
UPDATE
INSERT
DELETE
COMMIT
Note: All operations for Adabas Mainframe appear as committed in the PLOG. In case of rollback, a delete will appear before the commit and no rollback event is generated.
transactionID: The operation's transaction ID.
fullTransactionID: The untruncated full transaction ID of the operation, with all 64 bytes.
sequence: The Adabas input record sequence.
BEFZ: This is an Adabas PLOG header field.
indicator: This is an Adabas PLOG header field.
recordType: The Adabas record type (INCLUDE or EXCLUDE).
userID: The Adabas user ID number.
fileNumber: The Adabas file number.
RABN: The Relative Adabas Block Number.
imageType: The captured image type (BEFORE or AFTER).
workRabChain: This is an Adabas PLOG header field.

The data portion is an exact copy of the back-end table layout.

Transaction Support
The Adabas CDC agent for mainframe systems supports transactions. The rollback event is not supported; instead, compensating records are supplied.

Security
The user profile for the Attunity Server (ATTSRV) must have read privileges for the archive files.

Platform Specific Information


For information on the Adabas versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources.


Data Types
The Adabas CDC agent for mainframe systems supports both Adabas and ADD-Adabas data sources. For more information, see Adabas Data Types.

Configuring the Adabas CDC


Perform the following tasks to use the Adabas CDC:

Setting up the ATTSRVR Started Task
Setting up the Tracking File
Adding the Tracking File Usage Step to the UE2 Procedure
Note: Before carrying out the following tasks, be sure that the PLOG (Protection log) is active in the used Adabas instance. You may need to consult the Adabas system administrator.

Setting up the ATTSRVR Started Task


In the ATTSRVR started task STEPLIB, check that there is a DD card that defines the used Adabas load library.
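For example, the STEPLIB concatenation might look as follows (a sketch only; navroot stands for the Attunity high-level qualifier, and the Adabas load library dataset name is site-specific):

//STEPLIB DD DISP=SHR,DSN=navroot.LOAD
//        DD DISP=SHR,DSN=<Adabas load library>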

Setting up the Tracking File


To enable Adabas CDC, you must first create a tracking file and then change the Adabas UE2 procedure to activate the tracking file usage. To create the tracking file Edit and submit the JOB from the BADATRF member of NAVROOT.USERLIB. The BADATRF member is shown below:
//BADATRF JOB 'RR','TTT',MSGLEVEL=(1,1),CLASS=A,
// MSGCLASS=A,NOTIFY=&SYSUID,REGION=8M
//DEFTRF EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
 DEFINE CLUSTER (NAME(navroot.DEF.ASADATRF.DBXXX)
   INDEXED UNIQUE VOL(DEV001) TRACKS(10 1)
   RECORDSIZE(256 1024) KEYS(14 0)
   SHAREOPTIONS(3 3))
   DATA (NAME(navroot.DEF.ASADATRF.DBXXX.DATA))
   INDEX (NAME(navroot.DEF.ASADATRF.DBXXX.INDEX))
//VERTRF EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
 LISTCAT ENTRIES('navroot.DEF.ASADATRF.DBXXX') ALL
/*

To edit this file, you should:


Change navroot to the used Attunity HLQ.
Change DBXXX so that the XXX specifies the used Adabas database number.
Change the JOB card according to your site demands.

Adding the Tracking File Usage Step to the UE2 Procedure


If your Adabas uses the UE2 procedure for the archiving process, change it according to the example below. If your Adabas uses any other technique to archive PLOG files, change this technique to be consistent with the UE2 example. To change the UE2 procedure Add the following step to the end of the UE2 procedure:
//name EXEC PGM=UADATRF,PARM='<parameters>'
//STEPLIB DD DISP=SHR,DSN=navroot.LOAD
//ASADTRF DD DISP=SHR,DSN=<tracking file name>

Provide two positional parameters to the UADATRF program:

The name of the new archive file.
The length of the STCK (store clock), depending on the Adabas version. If using Adabas version 7.4 or later, set this parameter to 8, indicating that the store clock uses 8 bytes. For Adabas versions earlier than 7.4, use a value of 4.

The following is an example of this step.
//ASUPDBSD EXEC PGM=UADATRF,
//         PARM='ADB.PLOG.D&SDATE..T&STIME 8'
//STEPLIB  DD DISP=SHR,DSN=Attunity.LOAD
//ASADTRF  DD DISP=SHR,DSN=Attunity.DEF.ADATRF.DB005

Setting up the Adabas Agent in Attunity Studio


You set-up the Adabas CDC agent for mainframe systems by creating a CDC solution in the Attunity Studio Solution Perspective. Follow the directions for Creating a CDC with the Solution Perspective. The Adabas agent configuration uses the standard solution except for:

Configuring the Data Source
Configuring the CDC Service


Notes:

When Creating a New Project, if you are using Adabas with ADD metadata, select ADD-Adabas (mainframe). If you are using Adabas with Predict metadata, select Adabas (mainframe).
The CDC solution does not support views when using Adabas with Predict metadata.
When selecting tables in a CDC solution using Adabas with Predict metadata, you must use tables that include the full physical file. For information on selecting tables for the CDC solution, see Stream Service.


Configuring the Data Source


For configuring the Adabas data source as part of the Adabas CDC solution, carry out the following procedure: To configure the data source 1. In the Solution perspective, click Implement.
2.

In the Server Configuration section, click Data Source. The Data Source Configuration window is displayed. The following figure shows the Data Source Configuration window when using Predict. If you are using ADD data, this dialog box only contains the Database number field.

Figure 721 Data Source Configuration when using Predict

3.

Enter the following information in the Data Source Configuration window:


Database number: The Adabas database number. PREDICT File Number: (Predict only) The Predict file number. predict database Number: (Predict only) When the Predict file resides in a different database than the data indicate the database number in which the Predict file resides. If the Predict file resides in the same database, enter -1.

4.

Click Finish. The window closes. Continue with Configuring the CDC Service.

Configuring the CDC Service


For configuring the Adabas CDC Service, carry out the following procedure:


To configure the CDC Service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the Change Capture starting point:
All changes recorded to the journal
On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.
4. Click Next to define the logger. The following is displayed.

Figure 72-2 CDC Logger Definition Window

5. Enter the following information:
Tracking file name: The name of the tracking file used in the UE2 procedure. See The Tracking File and Setting up the Tracking File for more information.
Adabas version: Select the Adabas version that you are using. If you are using a version earlier than version 7.4, select V62; if you are using version 7.4, select V74.
6. Click Finish.

To set up the stream service, follow the instructions in Creating a CDC with the Solution Perspective.


73
Adabas CDC on UNIX Platforms
This section contains the following topics:

Overview
Functionality
Configuration Properties
Change Metadata
Transaction Support
Data Types
Security
Platform Specific Information
Configuring the Adabas CDC
Defining Adabas CDC in Attunity Studio

Overview
The Attunity Stream CDC solution for Adabas captures changes made to the Adabas files that are written to PLOG (Protection log) files. The PLOG files are polled for changes. The Adabas CDC solution works with Adabas ADD and Adabas Predict data sources.

Functionality
The Adabas agent supports the basic functionality for all AIS CDC agents. You should note that the behavior of this agent when setting the Set Stream Position parameter by Time Stamp is different: the time stamp is defined per block, not per event. The timestamp of a block is defined as the timestamp of the last event in the block. When you configure Set Stream Position by Timestamp, it is possible to get events that occurred before the requested event and reside in the same block as the event requested by the timestamp.

Adabas CDC on UNIX Platforms 73-1

Configuration Properties
The following are configuration properties that you use when configuring this agent in Attunity Studio.

PLOG Name: The full path to the PLOG file without the session number.
Starting Session: The PLOG file session number where the event logging begins.
Adabas Version: Indicate the version of Adabas you are using. For information on the Adabas versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources.
Virtual Block Len: The size of a PLOG file virtual block in bytes. The default is 512.

This agent also supports the standard AIS Agent configuration properties. For more information, see Creating a CDC with the Solution Perspective. For information on how to enter these properties, see Defining Adabas CDC in Attunity Studio.

Change Metadata
Changes are captured and maintained in a change table. The table contains the original table columns and CDC header columns. The header columns are described in the following table:
Table 73-1 Header Columns

timestamp: The date and time of the occurrence.
tableName: The name of the table where the change was made.
operation: This column lists the operations available for the CDC agent. The available operations are:
BEFOREIMAGE
UPDATE
INSERT
DELETE
COMMIT
ROLLBACK
transactionID: The operation's transaction ID.
context: The current context.

The data portion returns a copy of the back-end table layout.

Transaction Support
The Adabas for UNIX agent supports transactions. It uses the Transaction ID to identify the transaction. This agent uses transaction demarcation. It does not use compensating records.


Data Types
The Adabas for UNIX agent supports both Adabas and ADD-Adabas data sources. For more information, see Adabas Data Types.

Security
There are no special security requirements for this agent.

Platform Specific Information


The Adabas agent for UNIX works like all standard agents and has no platform specific information.

Configuring the Adabas CDC


To run CDC for Adabas, you need to configure the PLOG file for AIS. Defining information for Adabas is done via the Adabas system and Attunity Studio. The PLOG file used must be a physical file. Views cannot be used for CDC. Perform the following tasks to use Adabas CDC:

Identifying the Adabas CDC in the Adabas System

Identifying the Adabas CDC in the Adabas System


Follow these steps for defining the Adabas CDC in the Adabas system.

To define Adabas CDC in the Adabas system
1. Make sure that the PLOG (Protection log) is turned on. You may need to consult the Adabas system administrator.
2. Note the following information:
Find the directory where the PLOG is created and note the session number that appears at the end of the PLOG name. For example: SAG$DEVICE:[SAG.ADABAS.DB001]PLOG.DAT;18
Find the Adabas version being used. For information on the Adabas versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

You will need this information when you define the Adabas CDC in Attunity Studio.

Defining Adabas CDC in Attunity Studio


You set up the Adabas CDC agent for UNIX by creating a CDC solution in the Attunity Studio Solution Perspective. Follow the directions for Creating a CDC with the Solution Perspective. The Adabas for UNIX agent uses the standard solution except for:

Configuring the Data Source
Configuring the CDC Service


Note: When Creating a New Project, if you are using Adabas with ADD metadata, select ADD-Adabas (Unix). If you are using Adabas with Predict metadata, select Adabas (Unix).

Configuring the Data Source


For configuring the Adabas data source as part of the Adabas CDC solution, carry out the following procedure:

To configure the data source
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click Data Source. The Data Source Configuration window is displayed. The following figure shows the Data Source Configuration window when using Predict. If you are using ADD data, this dialog box only contains the Database number field.

Figure 73-1 Data Source Configuration

3. Enter the following information in the Data Source Configuration window:
Database number: The Adabas database number.
PREDICT File Number: (Predict only) The Predict file number.
Predict database number: (Predict only) When the Predict file resides in a different database than the data, indicate the database number in which the Predict file resides. If the Predict file resides in the same database, enter -1.
4. Click Finish. The window closes. Continue with Configuring the CDC Service.


Configuring the CDC Service


For configuring the Adabas CDC Service, carry out the following procedure:

To configure the CDC Service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the Change Capture starting point:
All changes recorded to the journal
On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.
4. Click Next for the PLOG configuration. The following is displayed.

Figure 73-2 CDC Logger Definition Window

5. Enter the following information for the Adabas for UNIX agent:
Plog name: The directory where the PLOG is created and the PLOG name. Omit the session number that appears at the end of the PLOG name. For example, for SAG$DEVICE:[SAG.ADABAS.DB001]PLOG.DAT;18, the PLOG name should be: SAG$DEVICE:[SAG.ADABAS.DB001]PLOG.DAT
Starting session number: The session number that appears at the end of the PLOG name. For example, if the PLOG name is SAG$DEVICE:[SAG.ADABAS.DB001]PLOG.DAT;18, the Starting Session should be 18.
Adabas version: 221 (to indicate Version 2.1.1). Versions 2.1.1, 3.1.1, and 3.3.1 are supported.
6. Click Finish.

To set up the stream service, follow the instructions in Creating a CDC with the Solution Perspective.


74
Adabas CDC for OpenVMS
This chapter has the following sections:

Overview
Functionality
Configuration Properties
Change Metadata
Transaction Support
Data Types
Security
Platform Specific Information
Configuring the Adabas CDC
Defining Adabas CDC in Attunity Studio

Overview
The Attunity Stream CDC solution for Adabas on OpenVMS captures changes made to the Adabas files that are written to PLOG (Protection log) files. The PLOG files are polled for changes. The Adabas CDC solution is relevant for Adabas ADD and Adabas Predict.

Functionality
The Adabas agent supports the basic functionality for all AIS CDC agents. Note that the behavior of this agent when setting the Set Stream Position parameter by Time Stamp is different: the time stamp is defined per block, not per event. The timestamp of a block is defined as the timestamp of the last event in the block. When you configure Set Stream Position by Timestamp, it is possible to get events that occurred before the requested event and reside in the same block as the event requested by the timestamp.

Configuration Properties
The following are configuration properties that are used when you configure this agent in Attunity Studio:

PLOG Name: The full path to the PLOG file without the version number.


Starting Session: The PLOG file version number where the event logging begins.
Adabas Version: Indicate the version of Adabas you are using. Currently only version 4.1.1 is supported.
Virtual Block Len: The size of a PLOG file virtual block in bytes. The default value is 512.

This agent also supports the standard AIS Agent configuration properties. For more information, see Creating a CDC with the Solution Perspective. For information on how to enter these properties, see Defining Adabas CDC in Attunity Studio.

Change Metadata
Changes are captured and maintained in a change table. The table contains the original table columns and CDC header columns. The header columns are described in the following table.
Table 74-1 Header Columns

timestamp: The date and time of the occurrence.
tableName: The name of the table where the change was made.
operation: This column lists the operations available for the CDC agent. The available operations are:
BEFOREIMAGE
UPDATE
INSERT
DELETE
COMMIT
ROLLBACK
transactionID: The operation's transaction ID.
context: The current context.

The data portion returns a copy of the back-end table layout.

Transaction Support
The Adabas for OpenVMS agent supports transactions. It uses the Transaction ID to identify the transaction. This agent uses transaction demarcation. It does not use compensating records.

Data Types
The Adabas agent for OpenVMS supports both Adabas and ADD-Adabas data sources. For more information, see Adabas Data Types.

Security
There are no special security requirements for this agent.

Platform Specific Information


The Adabas Open VMS Agent works like all standard agents and has no platform specific information.

Configuring the Adabas CDC


To run CDC for Adabas, you need to configure the PLOG file for AIS. Defining information for Adabas is done in the Adabas system and Attunity Studio. The PLOG file used must be a physical file. Views cannot be used for CDC. Perform the following tasks to use Adabas CDC:

Identifying the Adabas CDC in the Adabas System

Identifying the Adabas CDC in the Adabas System


Follow these steps for defining the Adabas CDC in the Adabas system.

To define Adabas CDC in the Adabas system
1. Make sure that the PLOG (Protection log) is turned on. You may need to consult the Adabas system administrator.
2. Note the following information:
Find the directory where the PLOG is created and note the session number that appears at the end of the PLOG name. For example: SAG$DEVICE:[SAG.ADABAS.DB001]PLOG.DAT;18
Find the Adabas version being used. Currently only Version 4.1.1 is supported.

You will need this information when you define the Adabas CDC in Attunity Studio.

Defining Adabas CDC in Attunity Studio


You set up the Adabas CDC agent for OpenVMS by creating a CDC solution in the Attunity Studio Solution Perspective. Follow the directions for Creating a CDC with the Solution Perspective. The Adabas for OpenVMS agent uses the standard solution except for:

Configuring the Data Source
Configuring the CDC Service


Note: When Creating a New Project, if you are using Adabas with ADD metadata, select ADD-Adabas (VMS). If you are using Adabas with Predict metadata, select Adabas (VMS).

Configuring the Data Source


For configuring the Adabas data source as part of the Adabas CDC solution, carry out the following procedure:

To configure the data source
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click Data Source. The Data Source Configuration window is displayed. The following figure shows the Data Source Configuration window when using Predict. If you are using ADD data, this dialog box only contains the Database number field.

Figure 74-1 Data Source Configuration

3. Enter the following information in the Data Source Configuration window:
Database number: Enter the Adabas database number.
4. Click Finish. The window closes. Continue with Configuring the CDC Service.

Configuring the CDC Service


For configuring the Adabas CDC Service, carry out the following procedure:

To configure the CDC Service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the Change Capture starting point:
All changes recorded to the journal
On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.
4. Click Next for the PLOG configuration. The following is displayed.

Figure 74-2 CDC Logger Definition Window

5. Enter the following information for the Adabas for OpenVMS agent:
Plog name: The directory where the PLOG is created and the PLOG name. Omit the session number that appears at the end of the PLOG name. For example, for SAG$DEVICE:[SAG.ADABAS.DB001]PLOG.DAT;18, the PLOG name should be: SAG$DEVICE:[SAG.ADABAS.DB001]PLOG.DAT
Starting session number: The session number that appears at the end of the PLOG name. For example, if the PLOG name is SAG$DEVICE:[SAG.ADABAS.DB001]PLOG.DAT;18, the Starting Session should be 18.
Adabas version: 411 (to indicate Version 4.1.1). Only version 4.1.1 is currently supported.
Virtual block length: Enter the size (in bytes) of the virtual block in the PLOG file. For example, 512.
6. Click Finish.

To set up the stream service, follow the instructions in Creating a CDC with the Solution Perspective.


75
DB2 CDC (z/OS)
This section includes the following topics:

- Overview
- Functionality
- Configuration Properties
- Change Metadata
- Transaction Support
- Security
- Data Types
- Configuring the DB2 Tables for CDC
- Configuring the ATTSRVR Started Task
- Setting up the DB2 Agent in Attunity Studio

Overview
Attunity Stream CDC agent for DB2 on mainframe systems captures changes that are written to DB2 log and archive files. You connect to the agent in the same way that you connect with the DB2 data source. For more information on connecting to a DB2 database, see DB2 Data Source. The DB2 CDC agent uses the DSNJU004 module internally to find an archive or log file that contains the first LRBA corresponding to the provided timestamp, based on the provided bootstrap data set. See Configuring the CDC Service for more information. The DB2 CDC agent uses the IFI interface to capture the records from the log. The monitoring process is started using the IFI command, -STA TRA(MON) CLASS(1) IFCID(306). Creating the CDC solution for DB2 is performed in Attunity Studio using a wizard as described in Creating a CDC with the Solution Perspective.

Functionality
The DB2 CDC for mainframe systems supports the basic functionality for all AIS CDC agents with the exception of the Limitations listed in the section below.


Limitations
The DB2 agent has the following limitations:

- The DB2 agent does not support the All changes recorded in the journal mode.
- If you choose to consume changes from a specific date and time (timestamp), the DB2 agent uses the DSNJU004 module internally to find an archive or active log file that contains the corresponding log records. It then reads the changes sequentially from the beginning of the file until the log record corresponding to the provided timestamp is found.
- The DB2 agent does not support DB2 Data Sharing (also known as DB2plex).

Supported Versions and Platforms


The Attunity DB2 CDC agent is supported on the following platform:

- z/OS

For more information, see Attunity Integration Suite Supported Systems and Resources.

Configuration Properties
The following parameters can be configured for the DB2 CDC agent:

- Location: The DB2 location name for the connected DB2 instance. This parameter should be specified if the connected DB2 instance is different from the instance defined in the MVSDEFAULTSSID parameter of the ODBCINI file.
- Database name: Enter the existing DB2 database name, only if you are creating new tables using AIS.
- Bootstrap dataset name: This dataset is used to keep track of the DB2 logs.

Change Metadata
Changes are captured and maintained as a change table. The table contains the original table columns and CDC header columns. The header columns are described in the following table:

Table 75-1 Header Columns

- timestamp: The date and time of the event.
- tableName: The name of the table where the change was made.
- operation: The operation that generated the change event. The available operations are (set as hex values): BEFOREIMAGE, UPDATE, INSERT, DELETE, BEGIN, COMMIT, ROLLBACK.
- transactionID: The identifier for the transaction.
- RBA: The event LRBA.
- context: The current internal context.

Transaction Support
The Attunity DB2 agent for mainframe systems supports transactions.

Security
To work with Attunity Stream and DB2, the following requirements must be met:

- All the libraries in the ATTSRVR STEPLIB must be APF-authorized.
- The following grants must be provided to the owner of the ATTSRVR started task:
  - GRANT TRACE: Grants the -start trace() privilege.
  - GRANT MONITOR2: Grants the privilege to issue READA and READS IFI requests.
- The owner of the ATTSRVR started task must have privileges in DB2 to run offline.

For details about setting DB2 security to use DB2 with Attunity Stream, refer to the Attunity Server Installation Guide for z/OS.
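As an illustrative sketch, the authorization ID ATTSTC below is a hypothetical owner of the ATTSRVR started task (substitute your own), and the grants might be issued as:

GRANT TRACE TO ATTSTC;
GRANT MONITOR2 TO ATTSTC;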

Platform Specific Information


The DB2 agent for mainframe systems does not support DB2 instances that use the coupling facility.

Data Types
The following data types are supported by the DB2 CDC agent:

All standard DB2 data types are supported by the DB2 agent for mainframe except for large objects (BLOBs and CLOBs). User defined data types are not supported.

Configuring the DB2 Tables for CDC


In this task, you set the DATA CAPTURE CHANGES attribute on the tables whose changes are captured using the DB2 agent. You can use the following DB2 command:
ALTER TABLE <TABLE_NAME> DATA CAPTURE CHANGES;
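For example, for a hypothetical table HR.EMPLOYEES (substitute your own schema and table name):

ALTER TABLE HR.EMPLOYEES DATA CAPTURE CHANGES;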

Configuring the ATTSRVR Started Task


In the ATTSRVR started task STEPLIB, check that there is a DD card that defines the DB2 load library that is used (usually expressed as HLQ.SDSNLOAD).


Configure the ODBCINI file defined in the DSNAOINI DD card of the ATTSRVR started task. In most cases, you can use the default configurations in the ODBCINI file; however, you can make changes to the file, if needed. The following is an example of the ODBCINI file:

; This is a comment line...
; Example COMMON odbcini
COMMON
MVSDEFAULTSSID=DSN1
; Example SUBSYSTEM odbcini for DSN1 subsystem
DSN1
MVSATTACHTYPE=CAF
PLANNAME=DSNACLI

The following table describes the configurations that are important for the DB2 CDC agent.

Table 75-2 ODBCINI Configuration Values

- MVSDEFAULTSSID: The Sub-System ID (SSID) of the default DB2 instance that is used.
- MVSATTACHTYPE: Use CAF only. If MVSATTACHTYPE has the RRSAF value, the DB2 agent will not work.
- PLANNAME: The DB2 Call Level Interface (CLI) plan name. Usually the name of the CLI plan is DSNACLI (as defined in the HLQ.SDSNSAMP(DSNTIJCL) job).

You can specify other parameters in this file, as described in the IBM ODBC Guide and Reference.

Setting up the DB2 Agent in Attunity Studio


You set up a DB2 CDC agent for mainframe systems by creating a CDC solution in the Attunity Studio Solution perspective. Follow the directions for Creating a CDC with the Solution Perspective. The DB2 agent configuration uses the standard solution except for:

- Configuring the Data Source
- Configuring the CDC Service

Configuring the Data Source


For configuring the DB2 data source as part of the DB2 CDC solution, carry out the following procedure:

To configure the data source
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click Data Source. The following is displayed.

Figure 75-1 The DB2 Data Source Configuration

3. Enter the following information in the Data Source Configuration window:
   - Location: Enter the DB2 location name for the connected DB2 instance. This parameter should be specified if the connected DB2 instance is different from the instance defined in the MVSDEFAULTSSID parameter of the ODBCINI file.
   - Database name: Enter the existing DB2 database name, only if you are creating new tables using AIS.
4. Click Next. Enter the User Name and Password, if you need to provide security credentials to the DB2 database.
5. Click Finish.

Configuring the CDC Service


For configuring the DB2 CDC service, carry out the following procedure:

To configure the CDC service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the change capture starting point:
   - On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
   - Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.
4. Click Next to define the logger. The following is displayed.

Figure 75-2 CDC Logger Definition

5. Enter the following information:
   - Bootstrap dataset name: This dataset is used to keep track of the DB2 logs.
6. Click Next and select one of the following from the drop-down list to define the logging level:
   - None
   - API
   - Debug
   - Info
   - Internal Calls
7. Click Finish.

To set up the stream service, follow the instructions in Creating a CDC with the Solution Perspective.


76
DB2 CDC (OS/400 Platforms)
This section describes the DB2 CDC agent on OS/400. It includes the following topics:

- DB2 CDC Agent Overview
- Functionality
- Configuration Properties
- Change Metadata
- Transaction Support
- Data Types
- Security
- Platform Specific Information
- Setting-up the DB2 Journal on OS/400
- Setting-up the DB2 for OS/400 Agent in Attunity Studio

DB2 CDC Agent Overview


The Attunity CDC solution for AS/400 captures changes that are written to a journal. This journal is polled for changes. Before setting up the CDC agent, you must set up the journal on the OS/400 platform. Creating a CDC solution for AS/400 is performed in Attunity Studio.

Functionality
The Attunity DB2 for OS/400 CDC agent supports the basic functionality for all AIS CDC agents with the exception of the Limitations listed.

Limitations
The DB2 for AS/400 agent has the following limitations:

- The DB2 for AS/400 agent does not handle rollback to savepoint type transactions correctly, because some of the rolled-back events are reported.

Configuration Properties
The following parameters must be configured for the DB2 CDC agent:

- Journal file name: The physical file name of the journal.
- Journal library name: The name of the library where the journal is located.

Change Metadata
Changes are captured and maintained in the journal as events. The journal contains the original table columns and CDC header columns. The header columns are described in the following table:

Table 76-1 Header Columns

- timestamp: The date and time of the occurrence.
- tableName: The name of the table where the change was made.
- operation: The operation that generated the change event. The available operations are (set as hex values): BEFOREIMAGE, UPDATE, INSERT, DELETE, BEGIN, COMMIT, ROLLBACK.
- transactionID: The identifier for the transaction.
- context: The current context.
- Headers (these headers are also returned, and can be used in a consumer application to filter the information retrieved): journalCode, jobName, jobNumber, fileName, memberName, userProfile, entryType, userName, programName, libraryName, RRN, systemName, referentialConstraint, objectNameIndicator, trigger.

The data portion is an exact copy of the back-end table layout.

Transaction Support
The DB2 for OS/400 agent supports single-phase transactions only.


Data Types
The DB2 for OS/400 agent can use all of the data types supported by the DB2 data source. For information on the supported data types, see DB2 Data Types.

Security
No specific security measures are required for this agent.

Platform Specific Information


For information on the DB2 versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

Setting-up the DB2 Journal on OS/400


This section describes how to set up a DB2 journal on OS/400 platforms for use with the Attunity Stream CDC agent.

Note: If you are upgrading from a previous version that supported user-managed journals only, be sure to consume all changes and redeploy the solution in the current version to use system-managed journals.

The following steps are used to configure the journal and make it accessible to an application.

To set up the journal
1. In the primary DB2 screen, create a journal receiver by executing the following command:
crtjrnrcv

The Create Journal Receiver screen is displayed, as shown in the following figure:

Figure 76-1 Create Journal Receiver Screen

2. Set the required values for the following parameters, then press Enter (a one-line command equivalent follows the list).
   - Journal receiver: The journal receiver name (up to 10 characters).
   - Library: An existing library name to create the receiver in (up to 10 characters).
   - ASP number: The ASP number. Leave the default value.
   - Journal receiver threshold: A storage space threshold value (in KB) for the journal receiver. Enter a value ranging between 100,000 and 1,000,000,000 KB of storage.
     Note: A value less than 100,000 is automatically reset to 100,000. When the size of the space for the journal receiver is larger than the size specified by this value, a message is sent to the identified message queue if appropriate, and journaling continues.
   - Text description: A text description of the connection.
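For illustration, the same definition can also be entered as a single CL command; the library and receiver names (CDCLIB, CDCRCV01) are hypothetical:

CRTJRNRCV JRNRCV(CDCLIB/CDCRCV01) THRESHOLD(100000) TEXT('Receiver for Attunity CDC')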

3. Create a journal by executing the following command:

crtjrn

The Create Journal screen is displayed, as shown in the following figure:

Figure 76-2 Create Journal Screen

4. Set the required values for the following parameters, then press Enter (a one-line command equivalent follows the list).
   - Journal: The journal name (up to 10 characters).
   - Library: The existing library name (up to 10 characters). This name can be the same name used for the journal receiver.
   - Journal receiver: The journal receiver name, which was created in the previous step.
   - Library: The name of the journal receiver library.
   - Manage receivers: The journal manager. Set this field to User.
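Equivalently, as a single CL command with hypothetical names (note MNGRCV(*USER), matching the Manage receivers setting above):

CRTJRN JRN(CDCLIB/CDCJRN) JRNRCV(CDCLIB/CDCRCV01) MNGRCV(*USER) TEXT('Journal for Attunity CDC')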

5. Start journaling the DB2 tables by executing the following command:

strjrnpf

The Start Journal Physical Files screen is displayed, as shown in the following figure:

Figure 76-3 Start Journal Physical File Screen

6. Set the required values for the following parameters, then press Enter (a one-line command equivalent follows the list).
   - Physical file to be journaled: The physical file name of the table to be journaled. Only tables specified in the journal are captured, regardless of the tables listed to be captured in Attunity Stream.
     Note: The physical file name differs from the logical file name if the SQL name exceeds 10 characters. For example, the THE_NEW_TEST_TABLE logical name can have THE_N00001 as the physical file name. To retrieve system table names, use the following SQL query:
     select system_table_name from qsys2.systables where table_name like 'THE_NEW_TEST_TABLE' and system_table_schema like 'NEWLIB'
     To retrieve logical table names, use the following SQL query:
     select table_name from qsys2/systables where system_table_name like 'THE_N00001' and system_table_schema like 'NEWLIB'
   - Library: The library name where the database files reside.
   - (+ for more values): Add multiple tables by entering + in this field. A new screen opens where up to 50 files can be added to the journal.
   - Journal: The name of the journal created in the previous step.
   - Library: The library name where the journal was created.
   - Record images: Set *AFTER to record only after-image events. Set *BOTH for both before- and after-image events.
   - Journal entries to be omitted: Leave this field with its default setting.
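Again as a one-line illustration, with hypothetical file, library, and journal names:

STRJRNPF FILE(NEWLIB/THE_N00001) JRN(CDCLIB/CDCJRN) IMAGES(*BOTH)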


Note: When selecting the tables captured in the CDC agent, all the database tables are displayed, whether specified in the journal or not. If you choose tables not defined in the journal, their updates will not be captured.

Setting-up the DB2 for OS/400 Agent in Attunity Studio


You set up the DB2 CDC agent for OS/400 systems by creating a CDC solution in the Attunity Studio Solution perspective. Follow the directions for Creating a CDC with the Solution Perspective. The DB2 agent configuration uses the standard solution except for:

- Configuring the Data Source
- Configuring the CDC Service

Configuring the Data Source


For configuring the DB2 data source as part of the DB2 CDC solution, carry out the following procedure:

To configure the data source
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click Data Source. The following is displayed.

Figure 76-4 The DB2 Data Source Configuration

3. Enter the following information in the Data Source Configuration window:
   - Database name: Enter the existing DB2 database name, only if you are creating new tables using AIS.
   - Library name: The name of the library that contains the database tables.
4. Click Next. The following is displayed:

Figure 76-5 Define the DB2 Data Source

5. At the top of the window, enter the Default table owner. This is the name of the person that owns the DB2 table where the changes are consumed.
6. Enter the User Name and Password, if you need to provide security credentials to the DB2 database.
7. Click Finish.

Configuring the CDC Service


For configuring the DB2 CDC service, carry out the following procedure:

To configure the CDC service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the change capture starting point:
   - All changes recorded to the journal
   - On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
   - Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.
   Select Include capture of before-image records if you want before-image records to be captured as well.
4. Click Next to define the logger. The following window is displayed:

Figure 76-6 CDC Logger Definition

5. Enter the following information about your DB2 database journal:
   - Journal file name: Enter the physical file name of the journal file.
   - Journal library name: Enter the name of the library that has the journal.
6. Click Next and select one of the following from the drop-down list to define the logging level:
   - None
   - API
   - Debug
   - Info
   - Internal Calls
7. Click Finish.

To set up the stream service, follow the instructions in Stream Service.


77
Enscribe CDC (HP NonStop Platforms)
This section contains the following topics:

- Overview
- Functionality
- Configuration Properties
- Change Metadata
- Transaction Support
- Data Types
- Security
- Platform-specific Information

Overview
Attunity Stream CDC solution for Enscribe captures changes that are written to the Enscribe database guarded by the Transaction Management Facility (TMF).

Functionality
The Enscribe CDC agent supports the basic functionality for all AIS agents.
Note: The Enscribe agent does not fully support unstructured files. If you want to use the Enscribe CDC to capture an unstructured file, you must make sure that any application that changes this file defines its metadata (or at least the buffer size) the same way it is defined in Attunity Connect.

Configuration Properties
The Enscribe CDC agent supports the standard configuration properties for all AIS agents.

Change Metadata
Changes are captured and maintained in a change table. The table contains the original table columns and CDC header columns. The header columns are described in the following table:

Table 77-1 Header Columns

- timestamp: The date and time of the occurrence.
- tableName: The name of the table where the change was made.
- operation: The operation that generated the change event. The available operations are: BEFOREIMAGE, UPDATE, INSERT, DELETE, COMMIT, ROLLBACK.
- context: The current context.
- TransactionID: The operation's transaction ID.
- filename: The name of the file where changes are made.

In the data portion, the following data is received from the TMF file:

- Sequential file: The RBA field that contains the changed record number.
- Relative file: The RRN field that contains the relative record number of the changed record.
- Unstructured file: The RBA field that contains the changed record number.

The data portion is an exact copy of the back-end table layout.

Transaction Support
The Enscribe CDC agent supports transactions.

Data Types
The Enscribe CDC supports all Enscribe data types.

Security
There are no specific security requirements for the Enscribe CDC agent.

Platform-specific Information
There is no platform specific information for this CDC agent.


Setting Up Enscribe to use the Attunity Enscribe CDC Agent


This section describes how to set up Enscribe to use the Attunity CDC solution. Before you define a new Enscribe agent, carry out the following procedure.

To set up Enscribe
1. Make sure that the TMF environment is configured and running for the table being monitored for changes.
2. Set the following attributes for the tables that are captured (see the example after these commands):

FUP ALTER filename, AUDIT
FUP ALTER filename, NO AUDITCOMPRESS
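For instance, for a hypothetical Enscribe file $DATA1.APPDB.ORDERS:

FUP ALTER $DATA1.APPDB.ORDERS, AUDIT
FUP ALTER $DATA1.APPDB.ORDERS, NO AUDITCOMPRESS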

Note: This agent does not support the use of partitioned tables.

3. Copy the script that was generated after the deployment to the Tandem terminal prompt.

Adding the Enscribe Agent to Attunity Studio


You set up an Enscribe agent by creating a CDC solution in the Attunity Studio Solution perspective. Follow the directions for adding a CDC agent to Attunity Studio in Creating a CDC with the Solution Perspective. The Enscribe agent configuration uses the standard solution except for:

- Configuring the Data Source
- Configuring the CDC Service

Configuring the Data Source


For configuring the Enscribe data source as part of the Enscribe CDC solution, carry out the following procedure:

To configure the data source
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click Data Source. The following is displayed.

Figure 77-1 The Enscribe Data Source Configuration

3. Enter the following information in the Data Source Configuration window:
   - Data sub-volume:
4. Click Finish.

Configuring the CDC Service


For configuring the Enscribe CDC service, carry out the following procedure:

To configure the CDC service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the change capture starting point:
   - On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
   - Master audit trail sequence number
   - Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.
4. Click Next to define the logging level. The following is displayed.

Figure 77-2 CDC Logger Definition

5. Select one of the following from the drop-down list:
   - None
   - API
   - Debug
   - Info
   - Internal Calls
6. Click Finish.

To set up the stream service, follow the instructions in Creating a CDC with the Solution Perspective.


78
IMS/DB CDC on z/OS Platforms
This section describes the Attunity IMS/DB CDC agent. It includes the following topics:

- Overview
- Functionality
- Configuration Properties
- Change Metadata
- Transaction Support
- Security
- Data Types
- Configuring the DFSFLGX0 Exit
- Setting-up the IMS/DB CDC Agent in Attunity Studio
- Troubleshooting

Overview
The Attunity Stream CDC solution for IMS/DB captures changed IMS/DB segments that are passed to the DFSFLGX0 IMS user exit, and saves them in an MVS logstream. The IMS/DB CDC agent polls the logstream for the changes. Creating a CDC solution for IMS/DB is done in Attunity Studio. A staging area is used, which prevents uncommitted changes from being captured and reduces the number of change events generated. For more information, see The Staging Area.

Functionality
The Attunity Stream CDC IMS/DB Batch solution uses its own DFSFLGX0 IMS/DB exit routine for capturing the IMS/DB changes. If another DFSFLGX0 user exit is used, the CDC solution cannot work. The IMS/DB CDC agent supports the basic functionality for all CDC agents. To enable writing IMS/TM internal buffers, the agent sends the IMS/TM CHECKPOINT command, using MCS, or replying to an IMS/TM DFS996I message.


Supported Platforms and Versions


The Attunity Stream IMS/DB CDC agent is supported on the following platforms and management systems:

- z/OS
- IMS/TM

For information on the operating system versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

Configuration Properties
The following properties can be configured for the IMS/DB CDC solution:

- CDC Logger Properties
- CDC$PARM Properties
- Agent Properties

CDC Logger Properties


- Logger Name: The name of the MVS logstream used for the data capture.

CDC$PARM Properties
CDC$PARM is the name of the DD card that defines a QSAM data set or PDS member that contains the parameters for the DFSFLGX0 user exit. For an explanation of how to create this data set and its syntax, see Creating and Configuring the CDC$PARM Data Set. The following list describes the CDC$PARM properties:

- BUFFER_NUM: The logstream buffer number. The default value is 30.
- BUFFER_SIZE: The logstream buffer size. The default value is 22550 bytes.
- DEBUG: If this is ON, the debug information is printed using WTO. The default value is OFF.
- LOGSTREAM: The logstream name. The default value is ATTUNITY.IMS.DCAPDATA.

Agent Properties
The agent properties described below are configured if you want the CHECKPOINT command to be sent to the IMS/TM instance. If the checkpoint is not configured and the IMS/TM Control Region executes a small number of updates, the changes may be captured by the DFSFLGX0 exit with a delay. The CDC agent properties are configured after the deployment of the solution, using the Attunity Studio Design perspective.

- envImsBatch: Set to false to execute the CHECKPOINT command. The default value is true.
- checkPointFrequency: The frequency for issuing checkpoints. The default value is 60 (seconds). The smallest time frequency supported is 10 seconds.
- consoleCheckPoint: Set to true to use an extended MCS console. If false, a reply to WTOR is used. The default value is true.
- imsJobName: The IMS job name for the IMS/TM instance to which the WTOR reply to the DFS996I message should be sent. This must be provided if more than one IMS/TM instance runs on a z/OS system.
- consoleCheckPointCommand: The command that should be sent to the MCS console. The default value is /CHE.

Change Metadata
Changes are captured and maintained in a change table. The table contains the original table columns and CDC header columns. The header columns are described in the following table:

Table 78-1 Header Columns

- context: The record's current context.
- operation: The operation that generated the change event. The available operations are: INSERT, DELETE, UPDATE, BEFOREIMAGE, COMMIT, ROLLBACK.
- transactionID: The operation's transaction ID.
- tableName: The name of the table where the change was made. For INSERT, UPDATE, and BEFOREIMAGE operations, the owner name and then the table name are displayed. For COMMIT and ROLLBACK operations, this value is the same as the OPERATION value.
- timestamp: The date and time of the occurrence.

Transaction Support
IMS/DB CDC supports transactions within IMS/DB transaction boundaries. However, no compensating records are available in the log in case of rollback.

Security
The IMS/DB CDC adapter connects to the MVS logstream with an authorization level of READ. The DFSFLGX0 user exit connects to the logstream with an authorization level of WRITE. To determine the proper security authorizations, see the MVS Auth Assm Services Reference ENF-IXG IBM manual.


Notes:

- To access a logstream in an application with a READ authorization level, set READ access to RESOURCE(<logstream name>) in SAF class CLASS(LOGSTRM).
- To update a logstream in a program with a WRITE authorization level, set ALTER access to RESOURCE(<logstream name>) in SAF class CLASS(LOGSTRM).
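For illustration, assuming RACF is the security product, the logstream is the default ATTUNITY.IMS.DCAPDATA, and AGENTUSR and EXITUSR are hypothetical user IDs for the adapter and the exit respectively, the permissions might be granted as follows:

PERMIT ATTUNITY.IMS.DCAPDATA CLASS(LOGSTRM) ID(AGENTUSR) ACCESS(READ)
PERMIT ATTUNITY.IMS.DCAPDATA CLASS(LOGSTRM) ID(EXITUSR) ACCESS(ALTER)
SETROPTS RACLIST(LOGSTRM) REFRESH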

Data Types
All data types supported by the IMS/DB data source are supported by the IMS/DB CDC solution.

Configuring the DFSFLGX0 Exit


To use the DFSFLGX0 exit, carry out the following procedures:

- MVS Logstream Creation
- Creating and Configuring the CDC$PARM Data Set
- Update the IMS Environment
- Adjust the DBD for the Relevant Databases

MVS Logstream Creation


A sample job for the creation of the DASD MVS logstream called ATTUNITY.IMS.DCAPDATA is supplied in the <HLQ>.USERLIB(LOGCRIMS) member. For additional information, see the MVS Setting Up a Sysplex IBM manual.
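Use the supplied member as your starting point. Purely as a hedged sketch of what such a job typically contains (the step name, sizes, and high-level qualifier below are illustrative assumptions, not values taken from the supplied member), an IXCMIAPU definition of a DASD-only logstream looks like this:

//DEFLOGR  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(YES)
  DEFINE LOGSTREAM NAME(ATTUNITY.IMS.DCAPDATA)
         DASDONLY(YES)
         STG_SIZE(4096)
         LS_SIZE(4096)
         HLQ(IXGLOGR)
/*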

Managing the MVS Logstream


The ATYLOGR program that is provided is used to manage MVS logstreams. It provides the following options:

- Delete all events
- Delete events up to a specific timestamp
- Print events between two timestamps
- Print all events from the oldest to a selected timestamp
- Print all events from the newest to a selected timestamp
- Print all events

A sample job for managing MVS logstreams, called ATTUNITY.CDC.VSAMBTCH, is supplied in the <HLQ>.USERLIB(RUNLOGR) member.

Creating and Configuring the CDC$PARM Data Set


The CDC$PARM is the DD card name used for configuring the DFSFLGX0 exit. It can be any QSAM data set or member with the LRECL=80 definition. For example, you can build it as a member of the <HLQ>.USERLIB library.


The data set contains parameters, one parameter per line, according to the following syntax:

<parameter name>=<parameter value>

The parameters and their valid values are described in CDC$PARM Properties.
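For example, a minimal CDC$PARM member that restates the documented defaults (adjust the logstream name if yours differs) might contain:

LOGSTREAM=ATTUNITY.IMS.DCAPDATA
BUFFER_NUM=30
BUFFER_SIZE=22550
DEBUG=OFF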

Update the IMS Environment


You must do the following to update the IMS environment:

- Copy the supplied DFSFLGX0 exit module from the Attunity-supplied <HLQ>.LOADCDIM library to the IMS RES library.
- If necessary, add the CDC$PARM DD card to the IMS Control Region and batch jobs.
- Restart the IMS Control Region.

Adjust the DBD for the Relevant Databases


You must do the following to adjust the DBD for the relevant databases:

- Adjust the DBD for each IMS/DB database that is included in your CDC solution, defining the usage of the DFSFLGX0 exit, by adding the following parameter to the DBD macro (see the fragment after this list):

EXIT=(*,KEY,NOPATH,DATA,LOG,(CASCADE,KEY,NODATA,NOPATH))

- Recompile the DBD and the corresponding PSB and ACB objects, then restart the IMS Control Region.
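For orientation only, here is a schematic DBD source fragment. The database name and access method are illustrative assumptions, and the continuation-column alignment is not shown to scale; only the EXIT= parameter is taken from this guide:

       DBD   NAME=MYHDAM,ACCESS=(HDAM,OSAM),                           X
             EXIT=(*,KEY,NOPATH,DATA,LOG,(CASCADE,KEY,NODATA,NOPATH))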

Setting-up the IMS/DB CDC Agent in Attunity Studio


You set up the IMS/DB CDC agent by creating a CDC solution in the Attunity Studio Solution perspective. Follow the directions for Creating a CDC with the Solution Perspective. The IMS/DB agent configuration uses the standard solution except for:

- Configuring the CDC Service

After you set up the IMS/DB CDC agent, follow the directions in Setting the envImsBatch Property. This is done in the Attunity Studio Design perspective. Before setting up the IMS/DB CDC agent, make sure that:

- The IMS system and logstream are properly configured, as described in Configuring the DFSFLGX0 Exit.
- The security measures are implemented, as described in Security.
- If you did not import the metadata while creating the CDC solution, see Working with Metadata.

Configuring the CDC Service


For configuring the IMS/DB CDC service, carry out the following procedure.


Note: When you set up an IMS/DB CDC solution, you must know what type of IMS data source you are using.

- For more information on using the IMS/DB DLI data source, see Defining the IMS/DB DLI Data Source.
- For more information on using the IMS/DB DBCTL data source, see Defining the IMS/DB DBCTL Data Source.
- For more information on using the IMS/DB DBDC data source, see Defining the IMS/DB DBDC Data Source.

To configure the CDC service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the change capture starting point:
   - All changes recorded to the journal
   - On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
   - Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.
4. Click Next to define the logger. The following is displayed.

Figure 78-1 CDC Logger Definition Window

5. Enter the Change Logger Name, specifying the name for the logger as entered in the fixed-80 CDC$PARM parameter file in the IMS system. This is configured when Configuring the DFSFLGX0 Exit. The default name for the logger is ATTUNITY.IMS.DCAPDATA. If you changed the name when configuring IMS, enter the new name in this field.
6. Click Next to go to the next step, where you set the CDC Service logging. Enter the CDC logging level in this step.

Figure 78-2 Logging Level

Select one of the following from the drop-down list:
   - None
   - API
   - Debug
   - Info
   - Internal Calls
7. Click Finish.

To set up the stream service, follow the instructions in Stream Service.

Setting the envImsBatch Property


If you are using the IMS/DB DBDC or IMS/DB DBCTL data source, set the envImsBatch agent property to False to receive the latest changes to the data. Carry out this procedure in the Design perspective of Attunity Studio. Before you begin, make sure you are in the Design perspective.

To set the envImsBatch property to False
1. Expand the new binding created when you set up the IMS CDC solution and expand the Adapter, as shown in the following figure:


Figure 78-3 The Configuration View

2. Right-click the adapter for the change data capture, and select Edit Adapter.
3. Select the Properties tab from the adapter editor.
4. Change the value for the envImsBatch property to False, as shown in the following figure:

Figure 78-4 The Adapter Properties Tab

5. Click the Save button in the toolbar to save the change.

Working with Metadata


If you do not carry out a full import of the metadata as part of the CDC solution, you must make sure that the capturedTable->dbdName attribute is set explicitly in the table dbCommand. You do this using the following NavUtil command: nav_util edit table. Edit the table, and set the dbdName attribute in the dbCommand element for each captured table. To import the metadata from another source, click the Metadata link when Creating a CDC with the Solution Perspective. For more information on importing IMS metadata, see Setting Up IMS/DB Metadata.


Troubleshooting
This section describes how to troubleshoot the IMS/DB CDC agent. Review the following checklist:

- Look for any errors in the IMS job.
- Ensure that a message similar to the following appears under the IMS DD JESYSMSG:

DFSFLGX0 Attunity CDC *Active*

- Use the IMS RUNLOGR utility, which is available at:

HLQ.USERLIB(RUNLOGR)

Where HLQ is the high-level qualifier where Attunity Server is installed, as shown in the following example:

//RUNLOGR JOB 'RR','TTT',MSGLEVEL=(1,1),CLASS=A,
// MSGCLASS=X,NOTIFY=&SYSUID,REGION=8M
//*
//LOGR EXEC PGM=ATYLOGR,
//*PARM=('/DEBUG ATTUNITY.IMS.DCAPDATA MAXLEN 1024 ',
// PARM=('/NAME ATTUNITY.IMS.DCAPDATA MAXLEN 1024 ',
// 'PRINT FROM 2005-03-13,02:13:57 TO 2007-10-27,02:38:23')
//* 'DELETE ALL')
//* 'DELETE TO 2004-10-28,02:17:11')
//* 'DELETE TO YOUNGEST')
//* 'PRINT FROM 2003-12-23,22:07:16 TO 2004-10-27,02:38:23')
//* 'PRINT FROM OLDEST TO 2007-10-27,02:38:23')
//* 'PRINT FROM 2004-10-27,02:51:57 TO YOUNGEST')
//* 'PRINT FROM OLDEST TO YOUNGEST')
//STEPLIB DD DISP=SHR,DSN=TEST.AC4800.LOADCDCY

To use RUNLOGR, un-comment the option you want to use and submit the member. The following options are available:

- Delete all events.
- Delete events up to a specific timestamp.
- Delete the newest events.
- Print events between two timestamps.
- Print all the events from the oldest to a specified timestamp.
- Print all the events from the newest to a specified timestamp.
- Print all the events.


79
Microsoft SQL Server CDC
This section contains the following topics:

- Overview
- Functionality
- Supported Versions and Platforms
- Configuration Properties
- Change Metadata
- Transaction Support
- Data Types
- Security
- Platform Specific Information
- Setting up the SQL Server CDC in Attunity Studio
- Enabling MS SQL Replication
- Configuring Security Properties
- Setting up Log On Information
- Setting up the Database
- Setting Up the TLOG Miner (LGR)
- Testing Attunity's Microsoft SQL Server CDC Solution
- Handling Metadata Changes
- Environment Verification

Overview
Attunity's Microsoft SQL Server CDC solution is based on the MS SQL database's transaction logs (TLOG). TLOG records are identified by a Log Sequence Number (LSN). The LSN is used by Attunity's Microsoft SQL Server CDC agent to identify log stream records. When logging an UPDATE operation, the MS SQL Server records only the changed data to the TLOG. This is not enough information for the CDC agent to provide before and after images for UPDATE statements, nor is it enough to provide the values of changed columns together with primary keys, which is the minimum requirement for a CDC agent.


To solve this problem, you must turn Replication on in the MS SQL Server. The log will then report update changes as usual. In addition, before-image results are also supported in this mode. Replication is valid only for tables with primary keys. Therefore, the MS SQL Server CDC agent works only with tables that have a primary key. The Microsoft Replication solution used by Attunity's MS SQL Server CDC must be enabled by a qualified system administrator. The system administrator must use the tools provided with the MS SQL Server to enable replication. For information, see Enabling MS SQL Replication.

The Microsoft SQL Server handles logs in a way that is not fully compatible with standard Attunity CDC solutions. For example, the MS SQL Server will truncate a TLOG after a period of inactivity to make more space available for logging operations. Uncontrolled log truncation could cause a loss of the truncated data. To solve these problems, Attunity's MS SQL Server CDC solution uses a TLOG miner. This component is initiated as a Microsoft Windows service. It mines the data and sends it to a Transient Storage area. The MS SQL Server CDC agent uses the data in Transient Storage to consume changes. For detailed information on the flow and architecture of this solution, see Microsoft SQL Server CDC Solution.

Microsoft SQL Server CDC Solution


The following figure shows the architecture and flow for a Microsoft SQL Server solution.

Figure 79-1 MS SQL Server CDC Flow

This figure shows that, in order to capture the changes in the MS SQL Server TLOGs, a TLOG miner mechanism is used to extract the necessary data and place it into Transient Storage. The data in the Transient Storage logs is used to carry out the standard CDC solution. The following sections explain the main blocks in the diagram:

- MS SQL Server
- TLOG Miner
- Transient Storage

MS SQL Server
The Microsoft SQL Server creates transaction logs that record server activity for recovery purposes. When using Replication, the logs hold information in a format that is consumable by Attunity's CDC agents. For more information, see Enabling MS SQL Replication. The TLOGs are divided into two sections. The active section of the TLOG contains the changes made by the currently active transactions. The reusable, or inactive, section has the information from older transactions, which do not require further processing by the MS SQL Server; this space is reusable. It is possible to back up the data, or the MS SQL Server might truncate the log to create more space. When a TLOG is truncated, some of the data is dropped and is no longer available for the CDC agent to use. The Attunity CDC solution for the MS SQL Server is designed to prevent this potential data loss.

TLOG Miner
The TLOG miner (LGR) is an Attunity component and is installed as a stand-alone Microsoft Windows service. It reads the TLOG file, extracts (mines) the data, and sends it to the Transient Storage area. It has two parts:

- TLOG Detainer: The Microsoft SQL Server management policy periodically reorganizes the data files when necessary. In this case, the data files are truncated when data is no longer active. The truncated data is erased from the system and cannot be used. Occasional truncation of the transaction log can expose the LGR to potential loss of data, and no appropriate means are provided for controlling these activities. The detainer is used to prevent TLOG data loss from truncation. Truncation only takes place in the non-active section of the TLOG. The detainer places a detained transaction behind the logged records to be read. This creates a limit for the TLOG's active portion, which protects records from being truncated before they are read.
- TLOG Parser: The parser parses the TLOG information and then writes it into Transient Storage.

The TLOG miner must have high availability. To achieve this:

- The miner process must be active at all times to prevent loss of changes
- Change records must be flushed to disk before releasing the TLOG detainer
- Start the service when the machine is started or restarted
- The miner must be able to restart automatically (see note below)

Note: Manage this service with the Windows Services utility. Set the recovery options to Restart the Service for all failures (see Setting the Recovery Policy).

Transient Storage
The LGR sends the data it mines from the TLOG into Transient Storage. The log records are kept in Transient Storage according to the LGR cleanup policy. The CDC consumer reads the changes from Transient Storage, not from the data source itself. The Microsoft SQL Server agent uses the data in Transient Storage to consume the changes. Transient Storage is implemented as sequential-variable flat files. These files must have a defined size limit. You must also indicate a working folder to store the Transient Storage files. For more information, see Configuration Properties.

Functionality
The Microsoft SQL Server CDC agent supports the basic functionality for all CDC agents, with the exception of specific Limitations listed in the following section.

Limitations
The MS SQL Server CDC agent has the following limitations:

- Savepoints are not supported. Therefore, you cannot roll back to a specific savepoint.
- Compensating records are not handled.
- The timestamp used is not exact. It provides an approximate value.
- When using the Microsoft SQL Server 2005 on the backend, the Attunity MS SQL CDC agent does not support the MS SQL 2005 Large Row feature, which allows variable data to overflow beyond 8KB page limits.

  Note: If you use varchar or nvarchar data types, and the amount of data is more than 8KB, the CDC will read the field as vacant. The data is replaced by an ~Overflow:Vacant~ designator.

  This is because, when consuming changes, the CDC must have all data present. In some cases, the MS SQL Server will truncate data. When the CDC encounters an MS Large Row instance, it acts as follows: if the data in a row is greater than 8KB, then the overflowed data (which was moved by the MS SQL Server to an overflow page and handled like a BLOB) is read by the CDC as vacant and designated as such. However, if the amount of data fits the 8KB page limit, the CDC will handle it normally.
- When using the Microsoft SQL Server 2000, in very few cases an UPDATE operation is executed as a DELETE/INSERT pair. This is related to internal MS SQL Server behavior. The MS SQL Server agent will report this as it was executed by the SQL Server (a DELETE/INSERT pair).
- The Microsoft SQL Server CDC agent does not support filtering of events. This is because of the need to preserve consistency under all circumstances. An UPDATE may, in very few cases, be executed by the SQL Server as a DELETE/INSERT pair. In this case, the filtering of the DELETE values will break consistency with the logical UPDATE that was originally executed. The same is true for column values, where filtering by a given value can match a DELETE event associated with an UPDATE but not its INSERT pair.
- When using the Microsoft SQL Server 2000, problems may occur while capturing changes to tables with a clustered non-unique index. Therefore, data captures in tables with this type of index are not supported by the MS SQL Server on the backend.
- When using Microsoft SQL Server 2005, there may be infrequent instances of truncation. This may occur due to combinations of record fields of various data types when their total actual storage size and internal overhead space is more than 8000 bytes.
- Captured table names cannot be more than 31 characters in length.

Supported Versions and Platforms


For information on the SQL Server versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources. See also: Platform Specific Information.

Configuration Properties
Configure the following environment property in Attunity Studio:

- transientStorageDirectory: Enter the full path to the folder with the Transient Storage file.

In addition, you must install the TLOG miner, create the TLOG Miner (LGR) service, and enable MS SQL replication. See the following sections for more information:

- Setting Up the TLOG Miner (LGR)
- Enabling MS SQL Replication

Change Metadata
Changes are captured and maintained in a change table. The table contains the original table columns and CDC header columns. The header columns are described in the following table:

Table 79-1 Header Columns

- context: The record's current context; the change record stream position in the Staging Area. The column is defined as a primary unique index. For more information, see Change Tables.
- agent_context: The original change record stream position from the agent (non-numeric). This column is defined as an alternate, descending unique index. It is used for the following:
  1. It ensures that a change event does not appear more than once in the change table.
  2. It allows scanning of a change table backwards, peeking easily at the last N change events.
  3. When working with complex records, multiple records may result from a single back-end change record. This column enables the user to associate these records with the single change record.
- operation: The operation that generated the change event. The available operations are: INSERT, DELETE, COMMIT, ROLLBACK, BEFOREIMAGE, UPDATE.
- transactionID: The operation's transaction ID. The transaction ID is increased each time a new transaction starts and a BEGIN_AXACT record is logged.
- tableName: The name of the table where the change was made. For the DML operations, the owner name and the table name are displayed. For COMMIT and ROLLBACK operations, this value is the same as the OPERATION value (either COMMIT or ROLLBACK).
- rowID: An identifier for the row.
- timestamp: The date and time of the occurrence. In MS SQL Server, the timestamp is an approximate value, which comes from the most recent MS SQL Server COMMIT/ROLLBACK records scanned where a timestamp is recorded.

The data portion returns a copy of the backend table layout.

Transaction Support
The Microsoft SQL Server agent supports transactions.

Data Types
The following table shows the AIS data types and their SQL equivalents that are supported by the Microsoft SQL Server CDC agent. In addition to the data types listed in this table, there is some limited support for User Defined Data Types (UDT).

Table 79-2 Supported Data Types

- Char: SQLCHAR
- Datetime: SQLDATETIME
- Decimal: SQLDECIMAL
- Double: SQLFLT8
- Float: SQLFLT8
- Int: SQLINT4
- Money: SQLMONEY
- Nchar: SQLNCHAR
- Nvarchar: SQLNVARCHAR
- Numeric: SQLNUMERIC
- Real: SQLFLT4
- Smalldatetime: SQLDATETIM4
- Smallint: SQLINT2
- Smallmoney: SQLMONEY4
- Tinyint: SQLINT1
- Bigint: SQL_BIGINT
- Binary: SQLBINARY
- Varchar: SQLCHAR
- Bit (limited to one column per table): SQLBIT
- Ntext (value processing is skipped; the field is exposed as NULL)
- Text (value processing is skipped; the field is exposed as NULL)
- Uniqueidentifier: SQL_UNIQUEIDENTIFIER

Note: The Microsoft SQL Server agent supports only the data types listed in the Supported Data Types table.

The following data types are supported by Attunity's MS SQL driver; however, the CDC agent does not support them:

- Varbinary
- Image
- Timestamp: The MS SQL timestamp data type is a type of automatic record change sequencer. It is not handled by the driver or the CDC agent.
- Cursor
- SQL_VARIANT
- Table
- XML

User Defined Data Types (UDT)


The Attunity MS SQL Server CDC agent does not provide complete support for UDT. To handle a UDT, AIS maps it onto its base type and handles it as if it were the base data type.


Security
The following are some specific security requirements for this CDC agent:

- You must have an account with administrator rights to run the MS SQL Server and MS SQL Server CDC components.
- The CDC agent and the TLOG Miner (LGR) must be executed as members of the sysadmin server role.

Platform Specific Information


The following is required to install and work with the MS SQL Server agent.

A full (thick) installation of AIS. This should include Attunity Studio and the Attunity Server. Make sure that you have a valid licence for all Attunity products. Microsoft SQL Server 2000 or 2005 (Standard, Enterprise, or SBS editions). For MS SQL Server 2005 you must use Service Pack 1 or higher.

The Microsoft SQL Server agent works on Windows platforms only. It works according to standard Windows requirements.

Setting up the SQL Server CDC in Attunity Studio


You add a SQL Server agent to Attunity Studio by creating a new CDC solution in the Solution perspective. Follow these steps for adding an MS SQL CDC solution to Attunity Studio.

To add an MS SQL CDC solution to Attunity Studio
1. In the Solution perspective, click the Create new project link.
2. In the Create new project screen, type a Project Name.
3. Select Change Data Capture and then select SQL Server as shown in this figure:

Figure 79-2 New SQL Server CDC

4. Click Finish to close the New Project screen and then click Design from the Project Guide.
5. Enter the details for your project.
6. When you select the machines in your design, use one machine. You select machines in the following screen.

Figure 79-3 Define Machine Names

If you select a Staging Area machine, you should select Server Machine. This will create the Staging Area on the same machine specified as the Server Machine. Although this is not the default selection, and you can select a different machine, for guaranteed delivery reasons Attunity recommends a one-machine solution when working with the SQL Server CDC. For more information on how to enter the information for the Design window, see Design Wizard.
7. Click the links under Implement to enter the information requested for the database and stream service configurations.

When you enter the information about the data source:
- The SQL Server Name and dbName must be in the same literal form as the server and database names given when Setting Up the TLOG Miner (LGR).

When you enter the information about the CDC Service:
- Select Include capture of before image records if you want to include these records in the CDC solution.
- Enter the path to the folder where the transient storage is located. This should be the same location as defined in the LGR setup. The transient storage should be on the same machine where the CDC agent is defined. The following figure shows the Transient Storage section of the CDC Service window.

Figure 79-4 Enter Path to Transient Storage

For more information on how to enter the information in these windows, see Implementation Guide.
8. Deploy the solution. For more information, see Deployment Guide.

Note: Before activating the solution, carry out the following tasks:

- Enabling MS SQL Replication
- Configuring Security Properties
- Setting up Log On Information
- Setting up the Database
- Setting Up the TLOG Miner (LGR)
- Run the SQL Server Replication Script that appears in the Deployment Summary. For more information, see Deployment Guide.

Enabling MS SQL Replication


An MS SQL Server system administrator must set up the SQL Server for replication, using the tools provided with the MS SQL Server. The following sections explain how to set up replication for SQL Server 2000 and SQL Server 2005.

MS SQL Server 2000 Replication

In SQL Server 2000, open the MS SQL Server's Publishing wizard in the Microsoft SQL Server's Enterprise Manager and follow the instructions provided by the wizard, or see the MS SQL Server documentation. The following should be added to the database's definitions:

- A new Distribution database
- A replication entry
- A replication monitor entry

MS SQL Server 2005 Replication

In SQL Server 2005, in the Microsoft SQL Server's Management Studio, follow the instructions provided by the Configure Distribution wizard to set up replication, or see the MS SQL Server documentation. To open the wizard from Microsoft SQL Server 2005:

- In the Microsoft SQL Server Management Studio, right-click the Replication folder and select Configure Distribution.

The Configure Distribution wizard opens. You should make the following selections in the wizard:

- In the Distributor step, select <SQL Server Name> will act as its own distributor; SQL Server will create a distribution database and log.
- In the SQL Server Agent Start step, select Yes, configure the SQL Server agent to start automatically.

Configuring Security Properties


You configure the security properties from the SQL Server Properties dialog box. The following figure shows the Security tab. The dialog box may look different on your machine depending on the version of SQL Server you are using; the following example is from MS SQL Server 2005.

Figure 79-5 Security Settings

- Set the Authentication settings to SQL Server and Windows.
- Set the Audit level to None.

Setting up Log On Information


The SQL Service login account information must match the configuration information for the Attunity IRPCD and Attunity LGR services. The logon setup should be entered in a way that allows the Attunity services to access the SQL Server database. In most cases, the services log on at the Local System account. You should enter the following information in the SQL Server (MSSQLSERVER) Properties Log On tab, which you access through the Windows Services control panel.

To access the SQL Server (MSSQLSERVER) service properties
1. From the Windows Start menu, select Control Panel.
2. Double-click Administrative Tools.
3. Double-click Services. The Services control panel is displayed.
4. From the Services list, right-click SQL Server (MSSQLSERVER) and select Properties.
5. Configure the system as shown in the figure below.

Figure 79-6 Log On Properties

The configuration shown is the default for the IRPCD and LGR services, which allows anonymous access. If you want the SQL Server (MSSQLSERVER) to log on using Windows authentication, the system administrator must enter the correct settings to log on to accounts for Attunity services.

Setting up the Database


You must make sure that the database setup and configuration allow Attunity's Microsoft SQL Server CDC agent to consume the changes made to the database. This section describes the properties that must be set for the correct operation of the CDC agent.

MS SQL Server 2000 Settings


Set the following properties in the SQL Server Enterprise Manager:

- In the database properties Options tab, set the Recovery Model to Full. In this mode, the transaction log is more durable and truncation occurs less frequently. Create enough log space to handle the size of the published database.
- In the database properties Transaction Log tab, select the correct setting for File Growth based on the application's capacity profile.
- Set the trunc. log on chkpt property to FALSE. To set this property, enter the following in the SQL Query Analyzer (see the example after this list):

EXEC sp_dboption '<database name>', 'trunc. log on chkpt.', 'FALSE'

- Make sure that all tables that will be consumed by the SQL Server CDC have a primary key.


Note:

See the documentation provided with Microsoft SQL Server for information on how to set the above properties correctly.

MS SQL Server 2005 Settings


Set the following properties in the SQL Server Management Studio.

- From the Object Explorer, right-click the database and select Properties. In the Options tab, set the Recovery model to Full. In this mode, the transaction log is more durable and truncation occurs less frequently. Create enough log space to handle the size of the published database.
- From the Object Explorer, right-click the database and select Properties. In the Files tab, set the initial size and growth parameters for the log files based on the application's capacity profile.
- Set the trunc. log on chkpt property to FALSE. To set this property, run the following query:
EXEC sp_dboption '<database name>', 'trunc. log on chkpt.', 'FALSE'

Make sure that all tables that will be consumed by the SQL Server CDC have a primary key.
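As a quick sanity check before deployment, you can run the following queries against the published database. This is a convenience sketch, not part of the Attunity tooling; it relies only on the standard DATABASEPROPERTYEX and OBJECTPROPERTY functions, which are available in both SQL Server 2000 and 2005:
-- Confirm that the recovery model is FULL
SELECT DATABASEPROPERTYEX('<database name>', 'Recovery')
-- List user tables that lack a primary key and therefore cannot be consumed
SELECT name FROM sysobjects
WHERE xtype = 'U'
AND OBJECTPROPERTY(id, 'TableHasPrimaryKey') = 0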
Note:

See the documentation provided with Microsoft SQL Server for information on how to set the above properties correctly.

Setting Up the TLOG Miner (LGR)


Attunity's Log Reader (LGR) is the component that actually reads the MS SQL Server transaction log. All logged data that is affected by MS replication is read and placed in the transient storage folder. The LGR is implemented as an independent, standalone Windows service. Because its functionality is highly sensitive, it includes high-availability and fault-tolerance features and attempts to stay up at all times. The following sections describe the procedures necessary for setting up the TLOG Miner service:

- Call the LGR Service Interface
- Configuring the Template Input File
- Registering the TLOG Miner (LGR) Service
- Setting the Recovery Policy

Call the LGR Service Interface


You must call the service interface. Enter the following command at the service command prompt to call the service interface.
>>>sqlcdclgr -?

The service interface is displayed, showing the commands that you can use. The following is an example of the service interface:
SQLCDCLGR Transaction LOG mining service controller:
----------------------------------------------------
sqlcdclgr -s register -a <service-name> <input-file>
                          Register a service and its input file
sqlcdclgr -s unregister -a <service-name>
                          Unregister a service
sqlcdclgr -s start -a <service-name>
                          Start service execution
sqlcdclgr -s stop -a <service-name>
                          Stop service execution
sqlcdclgr -s restart -a <service-name>
                          Restart service execution (=refresh parameters)
sqlcdclgr -p name <service-name>
                          Display input file P_arameter name registered for a service
sqlcdclgr -p contents <service-name>
                          Display input file P_arameter contents registered for a service
sqlcdclgr -p help         Display help for parameters values assignment
sqlcdclgr -t              T_ype an input file template
sqlcdclgr -b <input-file>
                          Run the service in an online 'B_locking' mode, using input file
sqlcdclgr [-h|-?]         Display this H_elp banner
Service input is held at:
HKEY_LOCAL_MACHINE\SOFTWARE\Attunity\Attunity Server\Services

Configuring the Template Input File


The configuration template defines some basic configuration parameters. You must define some of these parameters manually in the template. Generate the template and then edit the parameters. Enter the following at the command prompt to generate the configuration template.
<your drive>:\<full path>\sqlcdclgr>sqlcdclgr -t >sqlcdclgr_pars.xml

The following is an example of the configuration template that opens.


<serviceConfig>
  <cdcOrigin server='?xxx?' database='?xxx?' user='sa' password=''
             useWindowsAuthentication='false' defaultOwner='dbo'/>
  <transientStorage directory='?xxx?' maxFileSize='1' totalSize='100'
                    lowThreshold='65' highThreshold='85'/>
  <logging directory='?xxx?'/>
  <control batchSize='50000' retryInterval='1' debugLevel='none'
           traceDBCC='false' traceStatistics='false'/>
  <detainer detainingTimeInterval='300' detainerTxnDurationLimit='2147483647'
            traceActivity='false'/>
</serviceConfig>

You must enter the correct values for some of the parameters in this file. These parameters are shown as placeholders ?xxx? in the example above. Enter the current information for your system where the placeholders are shown. The following table describes the parameters to be changed.
Table 79-3 Configuration Parameters

cdcOrigin:
- database: Enter the name of the MS SQL Server database you are using. The name given for the database must be in the same literal form as the name given to the dbName when Setting up the SQL Server CDC in Attunity Studio.
- server: Enter the name of the server machine where the MS SQL Server is installed. The name given for the server must be in the same literal form as the name given to the SQL Server Name when Setting up the SQL Server CDC in Attunity Studio.
- user: Enter the name of the authorized user for the server. Note: The user entered must have sysadmin permissions in the MS SQL database.
- password: Enter the password for the user entered in the user parameter.
- useWindowsAuthentication: The default value for this property is false. Change this property to true if you want to use Windows authentication. In this case, when you start the LGR service you do not need to provide credentials to sign in to the MS SQL Server.

transientStorage:
- directory: Enter the full path to the directory where the transient storage files are located.
- maxFileSize: The maximum size (in MB) allowed for a single transient storage file. You can change the default value for this parameter.
- totalSize: The maximum size (in MB) allowed for all of the transient storage. You can change the default value for this parameter.
- lowThreshold: You can change the default value for this parameter.
- highThreshold: You can change the default value for this parameter.

logging:
- directory: Enter the full path to the directory where the log files are located. Log files are named by adding the leading prefix SQLCDCLGR, then the server machine identifier and the database name. An example of an LGR log file name is: SQLCDCLGR-192_168_165_167+CDClog5#0002.log. You can view the information about the log file for an LGR instance in the Windows Event Properties dialog box.

control:
- batchSize: The limit of the batch size for records being read in a single LGR scan pass.
- retryInterval: The time interval, in seconds, that the system waits for additional information when the number of lines is less than defined in the batchSize parameter.
- debugLevel: Set the level for the debugging log.
- traceDBCC: Set true or false to determine whether trace information is returned to the log.
- traceStatistics: Set true or false to determine whether tracing statistics are returned to the log.

detainer:
- detainingTimeInterval: Set the amount of time, in seconds, that data is held in the TLOG before it can be truncated. You can change the default value for this parameter.
- detainerTxnDurationLimit: This is not an active parameter.
- traceActivity: Set true or false to determine whether to trace the detainer activity and send the information to the log.
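As a convenience, here is the template filled in with hypothetical values. The server name MYSERVER, database Northwind, and the directory paths are placeholders invented for this sketch; everything else keeps the defaults from the generated template:
<serviceConfig>
  <cdcOrigin server='MYSERVER' database='Northwind' user='sa' password=''
             useWindowsAuthentication='false' defaultOwner='dbo'/>
  <transientStorage directory='C:\Attunity\cdc\storage' maxFileSize='1'
                    totalSize='100' lowThreshold='65' highThreshold='85'/>
  <logging directory='C:\Attunity\cdc\logs'/>
  <control batchSize='50000' retryInterval='1' debugLevel='none'
           traceDBCC='false' traceStatistics='false'/>
  <detainer detainingTimeInterval='300' detainerTxnDurationLimit='2147483647'
            traceActivity='false'/>
</serviceConfig>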

Transient storage management is space oriented. It is based on a maximum allowed allocated space (default: 100 MB) and upper/lower thresholds. After every log scan, the LGR checks whether the transient storage space is close to exceeding its upper threshold (by default, it checks whether the storage space is at 85% or more). If it is close to exceeding the maximum space, a cleanup is started. The cleanup reduces the occupied space to the lowThreshold parameter (default 65%) of the total size specified. Cleanup activity is always reported in the LGR log file for all debug/trace settings.


Notes:
- The -p help option displays a list of these parameters and an explanation for each.
- All paths must be fully qualified. You cannot use logical names.

Registering the TLOG Miner (LGR) Service


To register the LGR service
1. Provide a name for the service. You should use the same name as the name of the database that you are using.
2. Register the service by entering the following at the system prompt:
C:\Program Files\Attunity\Server\tmp>sqlcdclgr -s register -a <service name> C:\Program Files\Attunity\Server\def\sqlcdclgr_pars.xml

Note:

You must enter the full path to the configuration template file as the last parameter, as shown above.

The following is an example of the system response:


+----------
| SQLCDCLGR Transaction LOG mining feature.
| Associated program is : C:\Program Files\Attunity\Server\BIN\sqlcdclgr.exe
+----------
Install(): Service 'SQLCDC' installed
setServiceParameter(): Parameter 'C:\Program Files\Attunity\Server\def\sqlcdclgr_pars.xml' has been set for Service 'SQLCDC' at HKEY_LOCAL_MACHINE\SOFTWARE\Attunity\Attunity Server\Services
addEventSource(): Key (+values) added : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\SQLCDC
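Once the service is registered, it can be controlled with the -s commands shown in the help banner earlier, for example:
sqlcdclgr -s start -a <service name>
sqlcdclgr -s stop -a <service name>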

Setting the Recovery Policy


You must also set the recovery policy for the service. The recovery policy is set in the service properties. Follow these steps to set the recovery policy for the new TLOG Miner (LGR) service:
To set the recovery policy
1. From the Windows Start menu, select Control Panel, Administrative Tools and double-click the Services icon.
2. In the Windows Services control panel, right-click your new TLOG Miner service and select Properties.
3. In the Properties screen, click the Recovery tab.
4. Select the following computer response for each failure:
- First failure: Restart the Service
- Second failure: Restart the Service
- Subsequent failures: Restart the Service


Figure 79-7 Recovery Tab

Testing Attunity's Microsoft SQL Server CDC Solution


Check the following to ensure that the Microsoft SQL Server agent will operate correctly.

- The system contains a temporary transient working folder.
- All consumed tables are "articled" within at least one replication/publication definition.
- All consumed tables have a primary key.
- Verify that the TLOG Miner components are running (see Environment Verification).

Handling Metadata Changes


When you make changes to the source tables in your SQL Server CDC solution, you need to be sure that the CDC solution can recognize the changes and work with them. This section provides a procedure for handling the metadata in your Microsoft SQL Server CDC solution if changes are made after deploying the solution. You should carry out these steps at a time when there is little or no activity in the system. If you want to receive new events with a new structure, consume the changes for the table you are updating before carrying out any of the steps in this process.
To handle changes to metadata
1. Deactivate the solution using Attunity Studio.
2. Update the metadata on the backend database for the table you are working with. In Microsoft SQL Server 2005, an inconsistency between the modified metadata and the data layout can appear because of the changes made to the metadata. To handle this inconsistency:
- If a clustered index is defined for the table, run:
DBCC DBREINDEX ('<table name>',<clustered index>)
where <table name> is the table with updated metadata, and <clustered index> is the name of its clustered index.
- If no clustered index is defined, reload the table.


3. Update the metadata in the Staging Area by doing one of the following:
- If you made manual changes to the CDC solution after deployment, or if you do not want to redeploy the solution, then on the Router's (Staging Area) machine, do the following: Run Attunity Studio and open the Design perspective. Edit the metadata for the Router's data source. Expand the table list and edit the metadata for the table. If you are adding a new column, make sure to add it to the end of the column list (this operation can also be done using the Source view). Make sure you select the correct data type. If you are modifying a data type, make sure to select the corresponding data type when making the modification. Save the metadata. For more information, see Working with Metadata in Attunity Studio.
- For cases where you can redeploy the solution: Run Attunity Studio and open the Solution perspective. Open the CDC solution project. Click Implement and then click Stream Service. Run the wizard. Redeploy the solution, but do not activate it. For more information, see Creating a CDC with the Solution Perspective.
4. Delete the physical files that represent the modified tables from the Staging Area. Make sure not to delete the SERVICE_CONTEXT and CONTROL_TABLE files.
5. Reactivate the solution using Attunity Studio.

Environment Verification
The following topics show how to ensure that the SQL Server CDC solution components are configured properly. This section has the following topics.

- Verify the MS SQL Server Version
- Ensure that the Service is Registered
- Verify that the LGR Service is Running
- Viewing the Service Greetings
- Check the Output Files

Verify the MS SQL Server Version


During setup, the LGR checks which version of Microsoft SQL Server you are using, to be sure that it works with the correct standards. You may need to verify that the correct SQL Server version is recognized by the LGR service. To verify the version, do one of the following:

- In the initial setup section of the LGR log file, find the backend version stamping as follows:


<<20070315-113327>> Module:sqlcdclgr/Line:697 MS-SQL version sampled:
Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86)
Oct 14 2005 00:33:37
Copyright (c) 1988-2005 Microsoft Corporation
Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 1)

- In the Windows Event Viewer, do the following: Find the lgrdev source for any Information entry. Double-click the entry to display the following. Check whether the message describes the SQL version. If not, try another entry.

Figure 79-8 Verify MS SQL Server Version

Ensure that the Service is Registered


Use the System Registry (REGEDIT) to ensure that:

- The TLOG Miner service is registered
- The TLOG Miner service is assigned as a Windows event log source

To check that the TLOG Miner service is registered
In the System Registry (REGEDIT), verify the service and its parameters. To access the registry:
1. Click Start, click Run, type regedit, and then click OK.
2. Scroll through the registry tree by expanding the folders that lead to the root folder where you installed the Attunity Server. The path listed here assumes that you installed the Attunity Server in the default location: HKEY_LOCAL_MACHINE\SOFTWARE\Attunity\Attunity Server\Services.
3. Be sure that the LGR service registration is listed on the right side. The following is an example of how the registry may look:


Figure 79-9 Registry
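If you prefer a command prompt over REGEDIT, an equivalent check can be made with the standard Windows reg utility. The command itself is a suggestion, not part of the AIS tooling; the key path is the one given above:
reg query "HKLM\SOFTWARE\Attunity\Attunity Server\Services"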

To check whether the TLOG Miner service is assigned as a Windows event log source, follow this procedure.
To check that the LGR service is assigned as a Windows event log source
In the System Registry (REGEDIT), browse to the CDClog folder. To access the registry:
1. Click Start, click Run, type regedit, and then click OK.
2. Scroll through the registry tree by expanding these folders: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\CDClog5.
3. Be sure that the LGR service registration is listed on the right side. The following is an example of how the registry may look:

Figure 79-10 Registry CDClog Folder

Verify that the LGR Service is Running


You should carry out the following to be sure the TLOG Miner service is running:

- Ping the Service
- Start the service for a period of time, and then stop it. To start and stop the service:


1. From the Windows Start menu, select Control Panel, Administrative Tools and double-click the Services icon.
2. Find the service in the Name list and click Stop the service.
3. To start the service, click Restart the service. See the example below.

Figure 79-11 Start and Stop the Service

Viewing the Service Greetings


Open the Event Viewer and view the messages in the Event Properties dialog box.
To view the event properties
1. From the Windows Start menu, select Control Panel, Administrative Tools and double-click the Event Viewer icon.
2. Select System on the left side of the viewer.
3. From the right pane, right-click an event from the SQL Server CDC and select Properties. The following figure is a sample of the information that is displayed:
Figure 79-12 Service Greeting

Check the Output Files


You should check the following files:

- LGR service log files: These files are in the folder or directory that is selected in the logging parameter of the template input file.
- Transient storage output file: This file is in the folder or directory that is selected in the transientStorage parameter of the template input file.

For information on where to define these parameters, see Configuring the Template Input File.


80
Oracle CDC (on UNIX and Windows Platforms)
This section contains the following topics:

- Overview
- Functionality
- Supported Versions and Platforms
- Configuration Properties
- Change Metadata
- Transaction Support
- Data Types
- Security
- Setting-up the Oracle REDO Log
- Testing the Database Logging Settings
- Changing the Operation Mode for Metadata Changes
- Setting up the Oracle CDC in Attunity Studio

Overview
The Attunity Stream CDC solution for Oracle captures changes that are written to the Oracle REDO log by polling the log for changes. The CDC solution for Oracle supports online and archived REDO logs.

Functionality
The following describes the Oracle CDC agent functionality.

The Oracle CDC agent supports the basic functionality for all CDC agents. Basic CDC functionality does not support real-time metadata changes. However, the Oracle CDC solution can be configured to handle the following real-time metadata change operations:
- Adding a column to a captured Oracle table
- Changing columns to be nullable
- Dropping and renaming captured tables
- Changing the column metadata type

When these operations are disregarded, the solution continues to work; however, the data for the tables affected by these operations may not be accurate. See Changing the Operation Mode for Metadata Changes for more information on how to activate this mode.

For UNIX, link your Oracle libraries by running the ora8_build and oracdc_build scripts from navroot/bin. The user account where these scripts are executed must have WRITE permission to navroot/lib.

Supported Versions and Platforms


For information on the Oracle versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

Configuration Properties
The following parameters can be configured for the Oracle CDC agent:

- useOnlineRedoLogsOnly: This property forces the agent to capture changes from the online REDO logs only. Archived REDO logs are ignored.
- dictinaryFilename: Enter the full path to the dictionary file you want to use in this property. If no dictionary file is indicated, the Oracle CDC agent uses the dictionary file used by the REDO logs.
- timeDelay: This property is used only if you are using Oracle version 9.x with a RAC configuration. The timeDelay property determines the amount of time (in seconds) to wait from the time an operation occurs until it can be returned by the agent. For example, if you set 7 in this parameter, the agent will not return the event from the log miner for at least seven seconds from the time the change was made.
Note: This parameter cannot be less than the time delay indicated in the Oracle RAC configuration. For example, if you set 7 for the time delay, the time delay set in the Oracle configuration must be no more than seven seconds. It is important to note that the Oracle configuration is in hundredths of a second; therefore, a seven-second delay in the Oracle RAC configuration is set to 700. For additional information, refer to the Oracle documentation.

- maxRollbackedTransactionMinutesDuration: The maximum time (in minutes) a transaction can be open before issuing a rollback to guarantee that the CDC solution continues to work. The default value is 60.

Change Metadata
Changes are captured and maintained in a change table. The table contains the original table columns and CDC header columns. The header columns are described in the following table:


Table 80-1 Header Columns

context: The record's current context.
operation: This column lists the operations available for the CDC agent. The available operations are:
- INSERT
- DELETE
- UPDATE
- BEFOREIMAGE
- XINSERT
- XDELETE
- XUPDATE
- XBEFOREIMAGE
- COMMIT
- ROLLBACK
The X operations listed above refer to compensating records. The X operations do not have actual data. These operations are not written to the change tables; however, they are used to filter the corresponding operation.
transactionID: The operation's transaction ID.
tableName: The name of the table where the change was made. For INSERT, DELETE, UPDATE, and BEFOREIMAGE operations, the owner name and the table name are displayed. The same is true for the X operations. For COMMIT and ROLLBACK operations, this value is the same as the OPERATION value.
timestamp: The date and time of the occurrence.
rowID: The identification number for the rows in the change record.

The data portion returns a copy of the back-end table layout.

Transaction Support
The Oracle agent supports transactions. It uses the Transaction ID to identify the transaction. This agent uses transaction demarcation. Compensating records are marked as X operations. See Change Metadata.

Data Types
The following data types are supported by the Oracle CDC agent:

- ROWID
- CHAR
- VARCHAR2
- DATE
- NUMBER(p,s)


Other data types are either set to NULL (if they are NULLABLE) or set to empty values. Multi-byte character sets are not supported.

Security
The Oracle account defined in the CDC solution must be granted the following privileges:

- SELECT ANY TABLE
- EXECUTE on DBMS_LOGMNR
- SELECT on V$LOGMNR_LOGS
- SELECT on V$LOGMNR_CONTENTS
- SELECT on V$ARCHIVED_LOG
- SELECT on V$LOG
- SELECT on V$LOGFILE
- SELECT on V$DATABASE
- SELECT on V$PARAMETER
- SELECT on DBA_REGISTRY

If any of these privileges cannot be granted on a V$xxx view, grant it on the corresponding V_$xxx view instead. For Oracle 10g, the account should also be granted the following privilege:

SELECT ANY TRANSACTION
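A minimal sketch of the corresponding GRANT statements, assuming a hypothetical CDC account named cdcuser; per the note above, the grants are issued on the V_$xxx views, which back the V$xxx synonyms:
GRANT SELECT ANY TABLE TO cdcuser;
GRANT EXECUTE ON DBMS_LOGMNR TO cdcuser;
GRANT SELECT ON V_$LOGMNR_LOGS TO cdcuser;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO cdcuser;
GRANT SELECT ON V_$ARCHIVED_LOG TO cdcuser;
GRANT SELECT ON V_$LOG TO cdcuser;
GRANT SELECT ON V_$LOGFILE TO cdcuser;
GRANT SELECT ON V_$DATABASE TO cdcuser;
GRANT SELECT ON V_$PARAMETER TO cdcuser;
GRANT SELECT ON DBA_REGISTRY TO cdcuser;
GRANT SELECT ANY TRANSACTION TO cdcuser;   -- Oracle 10g only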

Setting-up the Oracle REDO Log


The following section describes how to define an Oracle REDO log for use with the Attunity Stream CDC agent. Oracle can run in two different modes: ARCHIVELOG mode and NOARCHIVELOG mode. To use the REDO logs with Attunity Stream and to ensure the CDC agent's integrity, run the database in ARCHIVELOG mode. A script file is generated at the end of the CDC setup wizard in Attunity Studio. For information on editing this information in Attunity Studio, see Deployment Guide. This file includes statements for every table for which you specified in the wizard that you want to capture changes. The statements set the table for logging, along with all the table columns. These statements must be run against the Oracle database to set the tables for logging.
Note:

When you edit this script information, table names, owner names, and column names should be in double quotes, for example, ALTER TABLE "SYSTEM"."TEST" ADD. This is how Oracle enforces case sensitivity. Without quotes, all names are translated as uppercase. Therefore, if lowercase naming is used, edit the script manually and add the double quotes. Make sure that the case used for table and column names is correct.
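Before running the script, you can confirm that the database is actually in ARCHIVELOG mode. This is a standard Oracle administration check rather than part of the generated script:
SELECT log_mode FROM v$database;
-- The result should be ARCHIVELOG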


The following is an example of the script generated by Attunity Studio:


ALTER TABLE SCOTT.NV_DEPT LOGGING;
ALTER TABLE SCOTT.NV_DEPT drop supplemental log group SCOTT_NV_DEPT_gr;
ALTER TABLE SCOTT.NV_DEPT ADD supplemental log group SCOTT_NV_DEPT_gr(DEPT_ID,DEPT_BUDGET);
ALTER TABLE SCOTT.NV_SAL LOGGING;
ALTER TABLE SCOTT.NV_SAL drop supplemental log group SCOTT_NV_SAL_gr;
ALTER TABLE SCOTT.NV_SAL ADD supplemental log group SCOTT_NV_SAL_gr(EMP_ID,MONTH,SAL);
ALTER TABLE SCOTT.NV_EMPLOY LOGGING;
ALTER TABLE SCOTT.NV_EMPLOY drop supplemental log group SCOTT_NV_EMPLOY_gr;
ALTER TABLE SCOTT.NV_EMPLOY ADD supplemental log group SCOTT_NV_EMPLOY_gr(EMPLOYEE_ID,LAST_NAME,CITY);

Testing the Database Logging Settings


Follow these steps to test whether the database logging is set up correctly:
To test the database logging settings
1. Run the following query:
SELECT name, value, description FROM v$parameter WHERE name = 'compatible';

The returned result should be greater than or equal to 9.0.0.


2. Run the following query:
SELECT supplemental_log_data_min FROM v$database;

The returned result should be YES or IMPLICIT. Use the command ALTER DATABASE ADD SUPPLEMENTAL LOG DATA to change the value of this property.
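For example, if the check returns NO, you can apply the ALTER DATABASE command mentioned above and then repeat the check:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SELECT supplemental_log_data_min FROM v$database;
-- The result should now be YES or IMPLICIT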

Changing the Operation Mode for Metadata Changes


You can set your solution to disregard metadata changes. In this case, when you change a table's metadata, the solution continues, although the data in the table that you manipulated may not be accurate. For more information, see Functionality. By default, real-time metadata changes are not allowed. If you want to allow real-time metadata changes, carry out the following procedure.
To enable real-time metadata changes
1. Deactivate/disable the router and agent workspaces.
2. Add a temp feature called cdcIgnoreMetadataChanges, with a value of true, to both the router and agent environments.
3. Reactivate/reenable the router and agent workspaces.

Setting up the Oracle CDC in Attunity Studio


You set up the Oracle CDC agent by creating a CDC solution in the Attunity Studio Solution perspective. Follow the directions for Creating a CDC with the Solution Perspective. The Oracle agent configuration uses the standard solution except for:

- Configuring the Data Source
- Configuring the CDC Service

Configuring the Data Source


For configuring the Oracle data source as part of the Oracle CDC solution, carry out the following procedure:
To configure the data source
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click Data Source. The Data Source Configuration window is displayed.

Figure 80-1 Connection Information

3. Enter the Oracle connect string for the Oracle database where you are consuming changes.
4. Click Next. The Define Data Source window is displayed.


Figure 80-2 Database Information

5. Enter the default table owner for the Oracle database where you are consuming changes.
6. Enter a user name and password if the database where you are consuming changes requires it.
7. Click Finish. The window closes. Continue with Configuring the CDC Service.

Configuring the CDC Service


For configuring the Oracle CDC Service, carry out the following procedure:
To configure the CDC Service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the change capture starting point:
- All changes recorded to the journal
- On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
- Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.


Select Include capture of before-image records if you want before-image records to be logged.

4. Click Next to define the logging level. The following is displayed.

Figure 80-3 CDC Logger Definition Window

5. Select one of the following from the drop-down list:
- None
- API
- Debug
- Info
- Internal Calls

6. Click Finish.

To set up the stream service, follow the instructions in Stream Service.

Troubleshooting
When deploying a solution, it is possible that no new events are received in the Staging Area, and the following error message appears in the router log:
"[C044] Select() failed"

The reason for this is that the query sent to Oracle by the CDC agent takes too long to execute. To prevent this from happening and to be able to receive new events, do the following:
1. Increase the value of the eventWait router property.
2. Reactivate the CDC solution.

After getting new events, the eventWait parameter can be changed back to its original value.


81
Query-Based CDC Agent
This section contains the following topics:

- Overview
- Setting-up Query-based CDC Agent
- Changing a Query-based CDC Agent Definition

Overview
With query-based change data capture (CDC), the change data capture mechanism polls the database using a specified query, and when a change is encountered that meets the set criteria, the relevant data is written to an event queue, where it can then be further processed. When the query-based CDC is set up, a separate binding is created containing the following:

- A data source events adapter. This adapter polls the data source for updates and is defined with an interaction type of async-send.
- A copy of the data source to be polled.

In addition, a new workspace to manage the change data capture event queue is created. The data source is polled using the specified query and when a change is encountered that meets the criteria, the relevant data is written to the event queue. During the query-based CDC agent set-up, an SQL statement is formulated to return all modified records. For example, if timestamps are used as part of the database to mark the last change date to the record, you can formulate a statement similar to the following:
select * from table where last_change_timestamp > ?

Note:

If the SQL statement will not return all changes to the table, then the query-based CDC will fail.
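For example, suppose a hypothetical ORDERS table carries a LAST_CHANGE_TIMESTAMP column that is set on every insert and update (the table and column names here are illustrative only). The interaction query follows the pattern above:
select * from ORDERS where LAST_CHANGE_TIMESTAMP > ?
The parameter marker is bound to the context value maintained for the event queue; because the context advances after each poll (see the Context Field step below), each poll returns only the rows changed since the previous one.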

Setting-up Query-based CDC Agent


This section describes how to set up a query-based CDC agent.
To define change data capture:
1. Open Attunity Studio.

2. In the Design perspective Configuration view, expand the computer where the required data source is located.
Note:

You can add the data source in offline design mode, in a design machine and later drag-and-drop the adapter to this machine, as described in Using an Offline Design Machine to Create Attunity Definitions.

3. Expand the Binding folder and then expand the binding configuration where you want to add the data source.
4. Right-click Data sources and select New Data source.
5. Enter a unique name to identify the new data source.
6. Select the data source type from the Type list.
7. Click Next.
8. Enter the connect string required to access the data source. The connect string is data source dependent.
9. Click Finish.
10. Right-click the data source which includes tables for which you want to capture changes.
11. Select Add Change Data capture, and then select By Query from the popup menu.
12. Specify a unique name for the change data capture and click Finish.

A popup message is displayed, informing you that changes have been made to the daemon configuration and prompts you to reload the daemon configuration. The change to the daemon is the addition of the new workspace to manage the data source event queue.
13. Click Yes.

The CDC mechanism is defined and the objects required to support the CDC mechanism are created.
14. Right-click the adapter under the change data capture binding and select Edit metadata from the popup menu.


15. Right-click the Interaction under the CDC adapter and select New.

The Interaction Name screen is displayed, as shown in the following figure:


Figure 81-1 The Interaction Name screen

16. Enter a name for the interaction.
17. Click Next.

The Define Interaction screen is displayed, as shown in the following figure:


Figure 81-2 The Define Interaction screen

You can now build a SELECT statement that checks for changes to a data source table. Build the query as follows:
1. Selecting tables:
- Expand the required data source in the left pane.
- Select the required table and click the right-pointing arrow button to move the table to the right-hand pane, where the selected tables are listed.
2. Selecting columns:
- Select the Columns tab in the right-hand pane.
- Expand the data source and the tables containing the required column in the left pane.
- Select the required column and click the right-pointing arrow button to move the column to the right-hand pane.
3. Adding conditions in a WHERE clause:
- Select the Where tab in the right-hand pane.
- Select and move the column you are setting the WHERE clause for to the right-hand pane.
- Set the operator and value conditions as needed.

18. Expand the required data source in the left pane.
19. Select the required table and click the > button. The selected table now appears in the Tables tab in the right-hand pane.
20. Select the Columns tab in the right-hand pane.
21. Expand the data source table in the left pane.
22. Select the required column and click the > button. The selected column now appears in the Columns tab in the right-hand pane.
23. Select the Where tab in the right-hand pane.
24. Select the required column and click the > button. The selected column now appears in the Where tab in the right-hand pane.
25. Set the operator and value conditions in the Where tab.

Note:

Other features available (such as sorting the results) are not relevant to building the query which checks for changes to the data.

26. Click Next.

The Interaction Properties screen is displayed, as shown in the following figure:


Figure 81-3 The Interaction Properties screen

27. Specify the following properties:
- Pass Through: Indicates whether the query is passed directly to the back-end database for processing or processed by the Query Processor.
- Reuse compiled query: Indicates whether the query is saved in a cache for reuse.
- Encoding: Sets one of the following as the encoding method used to return binary data in text format:
  - base64: Sets the base 64 encoding method.
  - hex: Sets the hexadecimal encoding method.
- Event: Indicates the interaction mode is async-send.
- Fail on no Rows Returned: Indicates whether an error is returned if data is not returned.
- Root Element: The root element name for records returned by the query, using the format <root>\<record>.
- Record Element: The record element name for records returned by the query, using the format <root>\<record>.
- Max. records: The maximum number of records returned by the query.
- Null string: The string returned in place of a null value. If not specified, the column is skipped.

28. Click Next.

The Context Field screen is displayed as shown in the following figure:


Figure 81-4 The Context Field screen

29. Select a field from the table from the Field list. The selected field will be used for the initial context.
30. Select the operator to use with this field from the Operator list.

Note: When the initial context is satisfied, it is incremented. Thus, the next check of the data source for an update is based on the updated context.

31. Define the initial value in the Initial Value field.
32. Click Finish.

A popup message is displayed, stating the interaction generation status and the number of records generated.
Figure 81-5 The Automatic Generation Results message

33. Click Yes to update the adapter definitions.

Changing a Query-based CDC Agent Definition


Once you have set up a query-based CDC agent you cannot make any changes to the definition, with the exception of adding new interactions. The interactions specify the queries used to capture changes.


82
SQL/MP CDC on HP NonStop
This section contains the following topics:

- Overview
- Functionality
- Supported Versions and Platforms
- Configuration Properties
- Change Metadata
- Transaction Support
- Data Types
- Security
- Defining the SQL/MP Agent
- Setting up the SQL/MP Agent in Attunity Studio

Overview
The Attunity Stream CDC solution for SQL/MP captures changes that are written to the SQL/MP databases guarded by the TMF (Transaction Management Facility).

Functionality
The SQL/MP agent supports the basic functionality for all AIS CDC agents.

Supported Versions and Platforms


For information on the SQL/MP versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

Configuration Properties
The SQL/MP agent supports the standard Configuration Properties for the SQL/MP data source.


Change Metadata
Changes are captured and maintained as an event in the journal. The journal contains the original table columns and CDC header columns. The header columns are described in the following table.

Table 82-1 Header Columns

timestamp: The date and time of the occurrence.
tableName: The name of the table where the change was made.
operation: This column lists the operations available for the CDC agent. The available operations are:
- BEFOREIMAGE
- UPDATE
- INSERT
- DELETE
- COMMIT/ROLLBACK
fileName: The name of the file where the changes were made.
transactionID: The operation's transaction ID.
context: The current context.

The following is an example of the data portion of the journal. It is an exact copy of the backend table layout:
<event name="tableA" timestamp="2004-03-18 11:57:04.748320"> <tableA> <header timestamp="2004-03-18 11:57:04.748784" tableName="tableA" operation="update" context="E8406789A0"> </header> <data name="Joe" dept_id="DP02"></data> </tableA> </event>

Transaction Support
The SQL/MP agent supports transactions.

Data Types
The SQL/MP agent supports all SQL/MP Data Types.

Security
The SQL/MP agent has no specific security requirements.

Defining the SQL/MP Agent


Before you define a new SQL/MP agent in AIS, make sure the following is done.


To set up SQL/MP
1. Make sure that the TMF environment is configured and running for the table being followed by CDC.
2. Open the SQLCI utility to set the following attributes for the tables:
ALTER TABLE <tablename> NO AUDITCOMPRESS
ALTER TABLE <tablename> AUDIT
If partitioned tables are used, run ALTER TABLE on all partitions.
3. Copy the script generated post deployment to the HP NonStop terminal prompt.

Setting up the SQL/MP Agent in Attunity Studio


You set up an SQL/MP agent by creating a CDC solution in the Attunity Studio Solution Perspective. Follow the directions for adding a CDC agent to Attunity Studio in Creating a CDC with the Solution Perspective. The SQL/MP agent configuration uses the standard solution except for:

- Configuring the Data Source
- Configuring the CDC Service

Configuring the Data Source


For configuring the SQL/MP data source as part of the SQL/MP CDC solution, carry out the following procedure:
To configure the data source
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click Data Source. The following is displayed.

Figure 82-1 The SQL/MP Data Source Configuration

3. Enter the following information in the Data Source Configuration window:
- Catalog Name: Enter the subvolume used as the default catalog for new tables.
4. Click Finish.

Configuring the CDC Service


For configuring the SQL/MP CDC Service, carry out the following procedure:
To configure the CDC service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the change capture starting point:
- On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
- Master audit trail sequence number
- Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.

4. Click Next to define the logging level. The following is displayed.

Figure 82-2 CDC Logger Definition

5. Select one of the following from the drop-down list:
- None
- API
- Debug
- Info
- Internal Calls

6. Click Finish.

To set up the stream service, follow the instructions in Creating a CDC with the Solution Perspective.


83
VSAM Under CICS CDC (on z/OS)
This section describes the VSAM under CICS CDC agent. It includes the following topics:

- Overview
- Functionality
- Configuration Properties
- Change Metadata
- Transaction Support
- Security
- Data Types
- Managing the CICS User Journal

Overview
The Attunity Stream CDC solution for VSAM under CICS captures changes that are written to the CICS User Journal defined for all captured VSAM clusters. Before setting up the CDC agent, you must define the same CICS journal name for each VSAM cluster with changes to be captured, as described in this chapter. Creating a CDC solution for VSAM under CICS is performed using Attunity Studio.

Functionality
The VSAM CICS CDC agent supports the basic functionality for all CDC agents.

Configuration Properties
This section describes the configuration properties for the VSAM CICS CDC agent. There are two types of properties:

- Data Source Properties
- CDC Service Properties

Data Source Properties


The following are data source properties:

- CICS Application ID (targetSystemApplid): The VTAM ID of the CICS target system (mandatory).

- Transaction ID (exciTransid): EXCI or another CICS transaction that activates the DFHMIRS CICS program.
- VTAM NetName (vtamNetname): The VTAM network name of the specific connection being used by EXCI (and MRO) to relay the program call to the CICS target system (mandatory). BATCHCLI is the default connection supplied by IBM when installing CICS. If you use the IBM defaults, enter BATCHCLI as the VTAM_netname parameter. If not, define a specific connection (with the EXCI protocol) and use the netname you provided for this parameter.
Note: Attunity provides a netname, ATYCLIEN, which can be installed by one of the following methods:
  - Configure and submit a JOB from the NAVROOT.USERLIB(CICSCONF) member to submit the DFHCSDUP batch utility program to add the resource definitions to the DFHCSD dataset (see the IBM CICS Resource Definition Guide for further details).
  - Use the NAVROOT.USERLIB(CICSCONF) member as a guide to define the resources online using the CEDA facility.

After the new netname is defined in CICS, issue the following CICS command to install the new resource definitions under CICS:
CEDA INST GROUP(<used CICS group>)

- Program Name (cicsProgname): The UPDTRNS program (supplied by Attunity), if the CDC data source is used to access VSAM under CICS. If the data source is not used, you may not have a UPDTRNS program; however, you must enter a value in this property to continue.

- Trace Queue (cicsTraceQueue): The name of the queue for output that is defined under CICS when tracing the output of the UPDTRNS program. When not defined, the default CICS queue is used.

CDC Service Properties


The following is the CDC Service property for this CDC agent.

Logger Name: The name of the MVS logstream used as the CICS user journal for the data capture.

Change Metadata
The VSAM (CICS) drivers require Attunity metadata. You can import the metadata from COBOL copybooks. If COBOL copybooks that describe the VSAM records do not exist, manually define the metadata. For information on creating a data source definition, see Managing Data Source Metadata. If COBOL copybooks describing the data source records are available, you can import the metadata by running the metadata import in the Attunity Studio Design perspective Metadata tab. For more information, see Importing Data Source Metadata with the Attunity Import Wizard and Setting Up the VSAM Data Source Metadata. If the metadata is provided in a number of COBOL copybooks, with different filter settings (such as whether the first 6 columns are ignored or not), you import the


metadata from copybooks with the same settings and later import the metadata from the other copybooks. Changes are captured and maintained in the CICS journal. The journal contains the original table columns and CDC header columns. The header columns are described in the following table:
Table 83-1 Header Columns

context: The record's current context.
operation: This column lists the operations available for the CDC agent. The available operations are:
- INSERT
- DELETE
- UPDATE
- BEFOREIMAGE
- COMMIT
- ROLLBACK
transactionID: The operation's transaction ID.
terminalID: The terminal ID that originated the change.
taskID: The task ID originating the change.
tableName: The name of the table where the change was made. For INSERT, UPDATE, and BEFOREIMAGE operations, the owner name and then the table name are displayed. For COMMIT and ROLLBACK operations, this value is the same as the OPERATION value.
timestamp: The date and time of the occurrence.

The data portion is an exact copy of the back-end table layout. Each change in the journal is captured as an event with the following format:
<event name="table_name" timestamp="...">
  <table_name>
    <header ...></header>
    <data ...></data>
  </table_name>
</event>

Transaction Support
The VSAM CICS CDC agent supports transactions in their CICS boundaries.

Security
The VSAM CICS CDC agent connects to the MVS logstream with an authorization level of READ. All security authorizations need to be set as described in IBM's MVS Auth Assm Services Reference ENF-IXG manual.


Note: To permit access to a logstream with a READ authorization level, set the READ access to RESOURCE(<logstream name>) in the SAF class CLASS(LOGSTRM).

Data Types
The VSAM CICS CDC agent supports all data types supported by the Attunity VSAM CICS data source. For more information see the VSAM Data Types.

Managing the CICS User Journal


The following are the tasks used for managing the VSAM CDC agent:
- Setting up the CICS User Journal for VSAM
- Print out the CICS User Journal Content

Setting up the CICS User Journal for VSAM


This section describes how to set up the CICS user journal for VSAM.
To set up the CICS user journal for VSAM
1. Check if a relevant CICS user journal is available by entering the following CICS command:
CEMT I JO

This displays the list of all the available CICS Journals, for example:
Jou(DFHJ77) Mvs Ena Str(CICSTS13.CICS.DFHJ77)
Jou(DFHJ66) Mvs Ena Str(CICSTS13.CICS.DFHJ66)
Jou(DFHLOG) Mvs Ena Str(CICSTS13.CICS.DFHLOG)

Each user journal has the CICS name DFHJxx, where xx is the number of the journal. You can use any of them as the CDC logger. Its logstream name (for example, CICSTS13.CICS.DFHJ77) should be provided as the CDC Logger Name property. If you cannot use one of the available journals, go to steps 2 and 3. If you can use a journal in the list, go to step 4.
2. Create an MVS logstream that can be used as a CICS journal. A sample job for the creation of a DASD MVS logstream called ATTUNITY.CDC.VSAMBTCH is supplied in the <HLQ>.USERLIB(LOGCRVSM) member. For additional information, see IBM's MVS Setting Up a Sysplex manual.
3. Define and install the logstream as a user journal by entering the following CICS commands:
CEDA DEF JO(<journal name>) GR(<CICS group name>) TY(MVS) STR(<logstream name>)
CEDA INST JO(<journal name>) GR(<CICS group name>)
4. For each VSAM cluster to be captured, use the CICS command CEDA DI FI, and edit the properties as shown in the following example:
+ JOurnal      : <journal number>
  JNLRead      : Updateonly
  JNLSYNCRead  : Yes
  JNLUpdate    : Yes
  JNLAdd       : AFter
  JNLSYNCWrite : Yes
RECOVERY PARAMETERS
  RECOVery     : Backoutonly
  Fwdrecovlog  : No
  BAckuptype   : Static
SECURITY
  RESsecnum    : 00

The value JNLRead means that before images are also written to the journal. The value JNLSYNCWrite means that CICS writes the changes to the journal immediately, instead of saving them in a buffer and writing them to the journal in blocks.
5. Close the VSAM cluster by using the following CICS command:
CEMT S F(<CICS file name>) CLOSE
6. Reinstall the cluster by using the following CICS command:
CEDA INST F(<CICS file name>) GR(<CICS group name>)
7. Open the cluster by using the following CICS command:
CEMT S F(<CICS file name>) OPEN

Print out the CICS User Journal Content


You can print out CICS journal content by running a job as described in the following example:
//PRINTLOG JOB 'RR','TTT',MSGLEVEL=(1,1),CLASS=C
//         MSGCLASS=X,NOTIFY=&SYSUID
//PRNTJNL  EXEC PGM=DFHJUP
//STEPLIB  DD DSNAME=<HLQ>.SDFHLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=A,DCB=RECFM=FBA
//SYSUT1   DD DSNAME=<logstream name>,
//         DCB=BLKSIZE=32760,
//         SUBSYS=(LOGR,DFHLGCNV,
//*        'FROM=(2003/261,22:07:16),TO=(2004/007,23:59:15),LOCAL')
//         'FROM=(2003/261,22:07:16),TO=YOUNGEST,LOCAL')
//*        09/18/2003 22:07:15
//*        09/19/2003 15:35:34
//SYSIN    DD *
*-----------------------------------------------------*
*  CONTROL STATEMENT : DEFAULTS                        *
*    INPUT  = SYSUT1                                   *
*    OUTPUT = SYSPRINT                                 *
*    SELECTION QUALIFIERS :                            *
*      1. DEFAULT = ALL INPUT RECORDS                  *
*-----------------------------------------------------*
OPTION PRINT
END
*-----------------------------------------------------*
/*

where:
- DSNAME: The name of the journal logstream.
- FROM: The earliest change you want to print out.
- TO: The latest change you want to print out (TO=YOUNGEST prints all the entries up to the last change logged).


Setting up the VSAM CICS Agent in Attunity Studio


You set up a VSAM CICS CDC agent by creating a CDC solution in the Attunity Studio Solution Perspective. Follow the directions for Creating a CDC with the Solution Perspective. The VSAM CICS configuration uses the standard solution except for:

- Configuring the Data Source
- Configuring the CDC Service

Configuring the Data Source


For configuring the VSAM data source as part of the VSAM CICS CDC solution, carry out the following procedure:
To configure the data source
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click Data Source. The Data Source Configuration window is displayed. The following figure shows the Data Source Configuration window.

Figure 83-1 Data Source Configuration

3. Enter the following information in the Data Source Configuration window:
- CICS Application ID: Enter the VTAM ID of the CICS target system (mandatory).
- Transaction ID: Enter the EXCI or other CICS transaction that activates the DFHMIRS CICS program.
- VTAM NetName: Enter the VTAM network name of the specific connection being used by EXCI (and MRO) to relay the program call to the CICS target system (mandatory).
- Program Name: Enter the UPDTRNS program (supplied by Attunity), if the CDC data source is used to access VSAM under CICS. If the data source is not used under CICS, you may not have a UPDTRNS program; however, you must enter a value in this property to continue.
- Trace Queue: Enter the name of the queue for output that is defined under CICS when tracing the output of the UPDTRNS program. When not defined, the default CICS queue is used.

4. Click Finish. The window closes. Continue with the rest of the configuration.

Configuring the CDC Service


For configuring the VSAM CICS CDC Service, carry out the following procedure:
To configure the CDC Service
1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the change capture starting point:
- All changes recorded to the journal
- On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)
- Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.

4. Click Next for the Logger configuration. The following is displayed.

Figure 83-2 CDC Logger Definition Window

5. Enter the following information:
- Logger name: Enter the name of the MVS logstream used for the data capture.
6. Click Finish.

To set up the stream service, follow the instructions in Creating a CDC with the Solution Perspective.


84
VSAM Batch CDC (z/OS Platforms)
This section includes the following topics:

- Overview
- Functionality
- Configuration Properties
- Change Metadata
- Transaction Support
- Data Types
- Security
- Platform Specific Information
- Configuring the Logger
- Setting up the VSAM Batch Agent in Attunity Studio

Overview
The Attunity Stream CDC solution for VSAM clusters updated by batch programs captures changed VSAM records that are passed to the JRNAD VSAM user exit, and saves them in an MVS logstream. The Attunity VSAM Batch CDC agent polls the logstream for the changes. The solution supports genuine VSAM clusters only. You create a CDC solution for VSAM Batch with Attunity Studio.

Functionality
The Attunity Stream CDC VSAM Batch solution automatically sets its own JRNAD user exit routine during the open process of a VSAM cluster (if another JRNAD exit is already in use, the solution cannot work). A customer makes changes in the corresponding jobs to:
- Provide the Attunity VSAM hook for automatic JRNAD definition
- Manage logical transactions (see Logical Transaction Manager)

The VSAM Batch CDC agent supports the basic functionality for all CDC agents.


Configuration Properties
This section describes the configuration properties for the VSAM Batch CDC agent. There are two types of properties:

- CDC Service Properties
- CDC$PARM Properties

CDC Service Properties


The following is the CDC Service property for this CDC agent:

Logger name: The name of the MVS logstream used for the data capture.

CDC$PARM Properties
CDC$PARM is the name of the DD card that defines a QSAM data set or PDS member that contains the parameters for the JRNAD exit and logical transaction management. For more information on the creation and syntax, see Configuring the Logger.
Table 84-1 CDC$PARM Values

BEFORE_IMAGE (valid values: YES/NO; default: YES): Write before images to the logstream.
BLOCKING (valid values: YES/NO; default: YES): Use blocking write.
DEBUG (valid values: OFF/ON; default: OFF): Print debug information using WTO.
DSNAME (valid values: */<cluster name>; default: *): The VSAM cluster that should be captured. An asterisk (*) indicates that all the VSAM clusters opened with the Attunity JRNAD should be captured. Each DSNAME defines only one cluster. You may provide up to 50 clusters.
ERROR (valid values: IGNORE/ABEND; default: ABEND): When IGNORE is used, most abnormal situations cause a warning message and the process terminates.
LOGICAL_TRANSACTION (valid values: YES/NO; default: YES): Allows the use of the Logical Transaction Manager.
LOGSTREAM (valid values: <logstream name>; default: ATTUNITY.CDC.VSAMBTCH): The name of the MVS logstream used.
OPER (valid values: UPD/DEL/INS; default: all): Defines the types of operations that are written to the logstream.
SHOW_DUMMY_RECORDS (valid values: OFF/ON; default: OFF): Ignore dummy records used for empty KSDS cluster access.
SYNC_WRITE (valid values: YES/NO; default: YES): Use synchronized logstream write.
Change Metadata
VSAM batch CDC events contain the original table columns and CDC header columns. The header columns are described in the following table:


Table 84-2 Header Columns

timestamp: The date and time of the occurrence.
tableName: The name of the table where the change was made.
operation: This column lists the operations available for the CDC agent. The available operations are:
- BEFOREIMAGE
- UPDATE
- INSERT
- DELETE
- DELETEALL (available in cases where a COBOL output file is opened in CLEAN ALL mode and an event with a DELETEALL operation is initiated)
- COMMIT
- ROLLBACK
context: The current context.
jobName: The name of the job that instigated the VSAM update.
programName: The name of the program that changed the VSAM data.
userName: The name of the user running the job.
stepName: The name of the step in the job.
procedureStepName: The name of the procedure run by the step.
programStartTimestamp: The time when the program started executing.

The data portion is an exact copy of the back-end table layout.

Transaction Support
The VSAM Batch CDC Agent supports Transactions. There are two types of batch transaction management:

Single Program Transaction Manager
Logical Transaction Manager

Single Program Transaction Manager


By default, changes that are made by a single program (PGM) are designated as a transaction. In this case, the Attunity VSAM CDC hook sets its own LE termination routine to get the program severity and user return codes. The return codes determine whether the transaction is terminated with COMMIT or ROLLBACK. By default, any severity code less than 2 and any return code less than or equal to 4 (that is a warning return code or a successful return code) result in COMMIT. All other values result in ROLLBACK. Use the CDC agent properties commitMaxTerminationSeverityCode and commitMaxTerminationUserCode to adjust the default behavior.


Logical Transaction Manager


It is common practice to set up nightly batch jobs to update VSAM clusters and to ensure consistency by maintaining a copy of the VSAM data before the job is run and restoring the previous copy if the batch job terminates abnormally for any reason. This practice can be viewed as an implementation of a logical transaction that ensures that an entire batch job runs as a single unit of work. With the Attunity VSAM Batch CDC solution, it is important to maintain the same unit of work. This means that the changes should not be delivered to the client application until the entire logical transaction completes successfully. Failure to maintain such a unit of work may result in inconsistencies between the VSAM data that was restored to the original version and the change consumer. The Attunity-supplied ATYLTRAN program provides complete control over the transactional boundaries of captured changes. ATYLTRAN should be called as a separate STEP when:

The logical transaction is started
The logical transaction is terminated, using either COMMIT or ROLLBACK
The logical transaction is delayed (and should be continued later in another JOB)
The logical transaction is continued in another JOB

ATYLTRAN receives the following parameters:

The logical transaction operation:

BEGIN: Indicates a new logical transaction. If a logical transaction with the same name exists, the old transaction is terminated using ROLLBACK.
COMMIT/ROLLBACK: Terminates the logical transaction.
CONTINUE: The default. This should be used at the end of each JOB that does not terminate the current logical transaction, and at the beginning of each JOB that continues a logical transaction initiated by another JOB.

The logical transaction name: By default, the BEGIN operation initiates a single-job logical transaction with the same name as the JOB. If the logical transaction is continued in another JOB, or the transaction name is changed, the transaction name must be provided explicitly with the BEGIN operation. The same name should be provided with the CONTINUE operation at the beginning of the other JOB to continue the transaction. The transaction name can be up to 15 characters long.

Data Types
The VSAM Batch CDC agent supports all data types supported by the Attunity VSAM Batch data source. For more information see the VSAM Data Types.

Security
The VSAM Batch CDC adapter connects to the MVS logstream with an authorization level of READ. Batch applications updating VSAM files and the VSAM adapter (when updating VSAM files) connect to the logstream with an authorization level of WRITE, if change data capture is active. The proper security authorizations should be set as described in the IBM MVS Authorized Assembler Services Reference ENF-IXG manual.
Notes:

To access a logstream using an application with a READ authorization level, set the READ access to RESOURCE(<logstream name>) in SAF class CLASS(LOGSTRM).
To update a logstream using a program with a WRITE authorization level, set the ALTER access to RESOURCE(<logstream name>) in SAF class CLASS(LOGSTRM).

Platform Specific Information


The Attunity VSAM Batch CDC solution runs on the z/OS operating system. For information on the VSAM versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

Configuring the Logger


To use the VSAM Batch CDC Solution, carry out the following tasks:

Creating the Logstream
Creating the CDC$PARM Data Set
Updating Jobs and Scripts

Creating the Logstream


A sample job for the creation of a DASD MVS logstream called ATTUNITY.CDC.VSAMBTCH is supplied in the <HLQ>.USERLIB(LOGCRVSM) member. For additional information, see IBM's MVS Setting Up a Sysplex manual.

Managing the MVS Logstream


The ATYLOGR program provided is used to manage MVS logstreams. It has the following options:

Delete all events
Delete events up to a specific timestamp
Delete the newest events
Print events between two timestamps
Print all the events from the oldest to a specified timestamp
Print all the events from the newest to a specified timestamp
Print all the events

A sample job for managing the default VSAM Batch CDC MVS Logstream called ATTUNITY.CDC.VSAMBTCH is supplied in the <HLQ>.USERLIB(RUNLOGR) member.


Creating the CDC$PARM Data Set


CDC$PARM is the name of the DD card used to configure the JRNAD and Logical Transaction Manager. It can be any QSAM data set or member with the LRECL=80 definition. For example, you can build it as a member of the <HLQ>.USERLIB library. The data set contains parameters, one per line, according to the following syntax:
<parameter name>=<parameter value>

The parameters and their valid values are described in CDC$PARM Properties. The minimal set of parameters for any solution can be taken from the Project Information section of the Deployment Summary screen of the Attunity Studio Solution perspective. If the CDC$PARM data set is not provided, the JRNAD and Logical Transaction Manager use the following defaults:
DSNAME=*
LOGSTREAM=ATTUNITY.CDC.VSAMBTCH

Updating Jobs and Scripts


To make sure that the changes are captured, you have to update the jobs and REXX scripts used to change the VSAM data.

Updating Jobs for Activating CDC JRNAD
Updating Jobs for Using the Logical Transaction Manager
Update the REXX Scripts

Updating Jobs for Activating CDC JRNAD


To have the Attunity CDC JRNAD program called, add three DD cards to each step that runs a program that updates the VSAM data, as shown in the example below:
//STEPLIB  DD DSN=<HLQ>.LOADCDCY,DISP=SHR
//ATYLIB   DD DSN=<HLQ>.LOADAUT,DISP=SHR
//CDC$PARM DD DSN=<CDC$PARM DS name>,DISP=SHR

Make sure that the <HLQ>.LOADAUT load library is APF authorized.

Updating Jobs for Using the Logical Transaction Manager


To use the Logical Transaction Manager, add additional steps to the jobs used as part of the logical transaction.

Start a Logical Transaction: To begin a logical transaction, add the following step as the first step in a job:
//BEGINLT EXEC PGM=ATYLTRAN,
// PARM='BEGIN,TRAN_NAME=<logical transaction name>'
//STEPLIB DD DSN=<HLQ>.LOADAUT,DISP=SHR
//CDC$PARM DD DSN=<CDC$PARM DS name>,DISP=SHR

Move a Logical Transaction to another job: To move a logical transaction to another job, add the following step as the last step of the job:
//MOVELT EXEC PGM=ATYLTRAN,COND=(4,LT)
//STEPLIB DD DSN=<HLQ>.LOADAUT,DISP=SHR
//CDC$PARM DD DSN=<CDC$PARM DS name>,DISP=SHR


Continue Logical Transaction: To continue a logical transaction in another job, add the following step as the first step of this job:
//CONTLT EXEC PGM=ATYLTRAN,
// PARM='TRAN_NAME=<logical transaction name>'
//STEPLIB DD DSN=<HLQ>.LOADAUT,DISP=SHR
//CDC$PARM DD DSN=<CDC$PARM DS name>,DISP=SHR

Terminate Logical Transaction: To terminate a logical transaction, add the following two steps as the last steps of this job:
//COMMITLT EXEC PGM=ATYLTRAN,COND=(4,LT),
// PARM='COMMIT'
//STEPLIB DD DSN=<HLQ>.LOADAUT,DISP=SHR
//CDC$PARM DD DSN=<CDC$PARM DS name>,DISP=SHR
//RLBCKLT EXEC PGM=ATYLTRAN,COND=(EVEN,(0,EQ,COMMIT)),
// PARM='ROLLBACK'
//STEPLIB DD DSN=<HLQ>.LOADAUT,DISP=SHR
//CDC$PARM DD DSN=<CDC$PARM DS name>,DISP=SHR

The COND clause in the COMMITLT step should provide the condition under which the logical transaction terminates successfully with COMMIT. The COND clause in the RLBCKLT step ensures that a ROLLBACK is written to the logstream whenever COMMITLT is not executed or fails.

Update the REXX Scripts


If you use REXX scripts to run VSAM update programs, you must update the scripts so that the changes are captured. The member NVCMDCDC in <HLQ>.USERLIB is supplied as an example of such a REXX script.

Setting up the VSAM Batch Agent in Attunity Studio


You set up a VSAM Batch CDC agent by creating a CDC solution in the Attunity Studio Solution Perspective. Follow the directions for Creating a CDC with the Solution Perspective. The VSAM Batch configuration uses the standard solution except for:

Configuring the Data Source
Configuring the CDC Service

Configuring the Data Source


No data source configuration is required for the VSAM data source as part of the VSAM Batch CDC solution.

Configuring the CDC Service


To configure the VSAM Batch CDC Service, carry out the following procedure.

To configure the CDC Service:

1. In the Solution perspective, click Implement.
2. In the Server Configuration section, click CDC Service. The CDC Service wizard is displayed.
3. In the first screen, select one of the following to determine the change capture starting point:

All changes recorded to the journal


On first access to the CDC (immediately when a staging area is used; otherwise, when a client first requests changes)

Changes recorded in the journal after a specific date and time. When you select this option, click Set time, and select the time and date from the dialog box that is displayed.

4. Click Next to proceed to the Logger configuration. The following is displayed.

Figure 84-1 CDC Logger Definition Window

5. Enter the following information:

Logger name: Enter the name of the MVS logstream used for the data capture.

6. Click Finish.

To set up the stream service, follow the instructions in Creating a CDC with the Solution Perspective.


Part XIII
Interface Reference
This part contains the following topics:

C and COBOL 3GL Client Interfaces
JCA Client Interface
JDBC Client Interface
ODBC Client Interface
OLE DB (ADO) Client Interface
XML Client Interface

85
C and COBOL 3GL Client Interfaces
This section contains the following topics:

Overview of the C and COBOL 3GL APIs to Applications
APIs and Functions
Using APIs to Invoke Application Adapters - Examples
CICS as a Client Invoking an Application Adapter (z/OS Only)
CICS Connection Pooling under CICS
IMS/TM as a Client Invoking an Application Adapter (z/OS Only)

Overview of the C and COBOL 3GL APIs to Applications


AIS includes APIs that enable invoking AIS application adapters, either locally or on a remote machine, directly from a C or COBOL program. You can also invoke application adapters from an RPG program on an OS/400 platform by linking to the COBOL APIs from the RPG program. Transactions calling the APIs are provided for CICS and IMS/TM. For details, see CICS as a Client Invoking an Application Adapter (z/OS Only) and IMS/TM as a Client Invoking an Application Adapter (z/OS Only).

Using the 3GL API to Invoke Application Adapters


The APIs may be used with:

C Programs (see Using the API with C Programs)
COBOL Programs (see Using the API with COBOL Programs)

Using the API with C Programs


To use the APIs in a C program, you must include the following in the program:

The gap.h header: The GAP API declarations.
The GAP_LOAD function: This function loads the API functions.

The function does not have any parameters. A non-zero value is returned for success and 0 for failure.

When called, the function searches for the AIS shared library and loads it. The current path and NAVROOT environment variable are used in the function. On Windows, the NVBASE environment variable is the first to be searched. On OpenVMS, the NVBASESHR logical name is the first to be searched. This function must be exposed.
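The following is a minimal sketch of the load step; it assumes only that gap.h is on the include path, mirroring the full example in Using APIs to Invoke Application Adapters - Examples:

#include <stdio.h>
#include "gap.h"
GAP_HELP_DEFINE;

int main(void)
{
    /* Load the AIS shared library before calling any other API function */
    if (!GAP_LOAD()) {
        printf("Failed to load the ACX API\n");
        return 1;
    }
    /* ... call ACXAPI_CONNECT and execute interactions here ... */
    return 0;
}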

Using the API with COBOL Programs


Since the COBOL language does not contain macros like the C language does, the programmer must directly invoke the ACXAINIT function. This function loads the API functions and does not have any parameters. GAP_LOAD is a C macro that invokes the ACXAINIT function.

Supported Interfaces
z/OS Platforms

To use the APIs under CICS, link the program with the stub NAVROOT.FIXLIB(ACX3GL).

APIs and Functions


This section describes the following APIs and functions:

Connection APIs
Transaction APIs
Execution APIs
Get Adapter Schema Function
Get Event Function
Ping Function
Get Error Function

The syntax uses C terminology, followed by the equivalent COBOL function name.

Connection APIs
The following functions handle the connection and connection context for a request:

The Connect Function
The Clean Connection Function
The Disconnect Function
The Retry Connection Function

There are two kinds of connections:

Transient connections are created for use within a single request. A transient connection is disconnected when a request ends, or when the connection context changes (that is, with the connect, setConnection, or disconnect functions).

Persistent connections can persist across multiple requests or connection context changes. Persistent connections are disconnected upon an explicit disconnect function or when a connection idle timeout expires.


The Connect Function


The Connect function establishes a new connection context. All the interactions defined take place within a connection context. Upon a successful connect, a connection context with a matching connection ID is established. This connection ID is used with later requests that use the same connection context (an implicit setConnection is performed with the newly created connection ID). A failed connect call leaves the request connection context with an error (that is, if a connection context was established prior to invoking the connect function, that connection context will no longer be in effect). The function returns an integer of 1 (TRUE) or 0 (FALSE) to indicate the success of the function. The following is a set of examples for this function. The examples are both in C and COBOL.
Example 85-1 Connect Function Syntax

ACXAPI_CONNECT(
    char*               ServersUrls,
    char*               Username,
    char*               Password,
    char*               Workspace,
    char*               AdapterName,
    int                 Persistent,
    long                IdleTimeout,
    ACXAPI_CONNECT_MODE ConnectMode,
    char*               DefinitionFileName,
    char*               KeyName,
    char*               KeyValue,
    void                **ConnectHandle)

Example 85-2 Connect Function called from C

nvBOOL ACXACNCT(
    STRING_256 sServersUrls,   /* IP1:port[,IP2:port] [,...] */
    STRING_64  sUsername,
    STRING_64  sPassword,
    STRING_64  sWorkspace,
    STRING_64  sAdapterName,
    nvINT4     *pbPersistent,
    nvINT4     *piIdleTimeout,
    nvINT4     *piConnectMode,
    STRING_256 sSchemaFileName,
    STRING_64  sEncKeyName,
    STRING_256 sEncKeyValue,
    nvINT4     *phConnectHandle)

Example 85-3 Connect Function called from COBOL

The function name in COBOL is ACXACNCT.

03 CP-SERVERS-URL   pic x(256) value is "206.32.128.167:3300".
03 CP-USERNAME      pic x(64)  value is low-value.
03 CP-PASSWORD      pic x(64)  value is low-value.
03 CP-WORKSPACE     pic x(64)  value is low-value.
03 CP-ADAPTER       pic x(64)  value is "orders".
03 CP-PERSISTENT    pic s9(8)  value is 0.
03 CP-IDLE-TIMEOUT  pic s9(8)  value is 0.
03 CP-CONNECT-MODE  pic s9(8)  value is 0.
03 CP-SCHEMA-FILE   pic x(256) value is "tcobxml".
03 CP-ENC-KEY-NAME  pic x(64)  value is low-value.
03 CP-ENC-KEY-VALUE pic x(256) value is low-value.
03 CP-CON-ID        pic s9(8)  comp.

This table describes the Connect function parameters.


Table 85-1 Connect Function Parameters

ServersURLs (C: string_256; COBOL: SERVERS-URL pic x(256)), Input: The address of the server(s) to which the connection is made. A series of servers, separated by commas, can be specified. The connection is made to the first server in the list that is up. If a server is down, the next server is tried. One of the following formats is used:
server_name:port,server_name:port,...
or:
TCP/IP_address:port,TCP/IP_address:port,...
or:
acx://[user:password@]server[:port]/workspace/adapter;...
In the last format, the username, password, workspace, and adapterName parameters are specified as part of the URL and any values passed for those parameters are ignored.

Username (C: string_64; COBOL: pic x(64)), Input: The username required by the adapter.

Password (C: string_64; COBOL: pic x(64)), Input: The user password required by the adapter.

Workspace (C: string_64; COBOL: pic x(64)), Input: The name of the workspace where the adapter associated with the connection runs. The default workspace is Navigator.

AdapterName (C: string_64; COBOL: pic x(64)), Input: The name of the adapter to which the connection is made.

Persistent (C: nvINT4; COBOL: pic s9(8)), Input: This parameter should be set to true, indicating a persistent connection request. Non-persistent connections are costly and are not recommended.

IdleTimeout (C: nvINT4; COBOL: IDLE-TIMEOUT pic s9(8)), Input: A per-connection client idle timeout setting (in seconds). If the client does not use the connection for the specified amount of time, the connection is disconnected by the server and its associated resources are released. This setting is limited by the server-side maximum idle connection timeout setting. (This parameter represents a common behavior within application servers, limiting the amount of time a resource can be tied up by a client.)

ConnectMode (C: nvINT4; COBOL: CONNECT-MODE pic s9(8)), Input: The mode of the connection. 0 - Immediate connection: a connection attempt is made during this call. 1 - Deferred connection: a connection attempt is made with the first request over the connection.

DefinitionFileName (C: string_256; COBOL: SCHEMA-FILE-NAME pic x(256)), Input: The name and path of the local adapter definition file used for calling the adapter. This parameter must be provided for remote adapters.

KeyName (C: string_64; COBOL: ENC-KEY-NAME pic x(64)), Input: The name of the encryption key.

KeyValue (C: string_256; COBOL: ENC-KEY-VALUE pic x(256)), Input: The value associated with the encryption key.

ConnectHandle (C: void**; COBOL: CON-ID pic s9(8)), Output: A pointer to the connection. A pointer is always returned, even when the connection fails. This enables calling the getError function to determine what caused the error. The Disconnect function must always be called to clear the connection handle.

Identifying the Adapter Schema


The definitionFileName parameter provides the client API with information on how to invoke the adapter. The API needs this parameter to convert the input and output buffers to and from XML. The adapter definition file can be produced (on the server) using a command similar to the command in the example below:
$ NAV_UTIL EXPORT ADAPTER_DEF <adapter-name> <xml-file>

The definition produced using this command matches the server environment and may need editing to be appropriate for the client API. Usually the editing that is necessary is related to datatypes. For example, a database adapter may have an interaction input record with a field called QUANTITY of type Double, but the interaction is invoked from a COBOL program with a buffer where the QUANTITY field is a PIC S9(11)V9(2) COMP-3 (a packed decimal). In this case, change the QUANTITY field native type in the adapter definition from DOUBLE to DECIMAL(13,2).
Note:

When the API is used and the server is accessed through CICS (z/OS or OS/390), there is a problem specifying the adapter definition file because CICS does not support dynamically opening files. You can provide the adapter definition file by creating an ESDS VSAM file and filling it with the contents of the adapter definition, then adding this file to CICS and using its CICS name as the value for the definitionFileName parameter.


The Clean Connection Function


This function is not available with COBOL. The Clean Connection function indicates that the client is working with connection pooling and that the connection is being soft-closed, that is, the connection is being placed in a connection pool. The connection is still valid but various resources associated with it are freed (for example, objects related to local interactions). The function returns an integer, used to determine the success or failure of the function.

Syntax
ACXAPI_CLEAN_CONNECTION(
    void* ConnectHandle,
    int forgetAuthentification)

This table describes the Clean Connection function parameters.


Table 85-2 Clean Connection Function Parameters

ConnectHandle, Input: A pointer to the connection.
forgetAuthentification, Input: Indicates that the adapter should forget the authentication information. This behavior is reflected in the adapter metadata.

The Disconnect Function


The Disconnect function destroys the current connection context. All the resources associated with the current connection (persistent or transient) are released.

Syntax
ACXAPI_DISCONNECT( void* ConnectHandle)

Function name in COBOL: ACXADSCO

C function called from COBOL


nvBOOL ACXADSCO( nvINT4 *phConnectHandle)

This table describes the Disconnect function parameters.


Table 85-3 Disconnect Function Parameters

ConnectHandle (C; COBOL: *phConnectHandle (nvINT4)), Input: A pointer to the connection.

The Retry Connection Function


This function is not available with COBOL.


The RetryConnection function is used to retry the Connect function when there has been a timeout to all the servers listed in the Connect function. The user can set the RetryConnection function to call a user function that returns TRUE in order to repeat the connection attempt or FALSE to end the process. This enables the user to decide in real-time whether or not to reattempt the connection.

Syntax
ACXAPI_SET_RETRYABLE_CONNECTION_HANDLER(
    void* ConnectHandle,
    void* Retry)

This table describes the RetryConnection function parameters.


Table 85-4 RetryConnection Function Parameters

ConnectHandle, Input: A pointer to the connection.
Retry, Input: Calls a user function that returns a boolean result.

Transaction APIs
Transaction APIs are used in the following scenarios:

Non-transacted operation: The adapter works in auto-commit mode. Work is committed immediately and automatically upon execution. This operation mode is the default operation mode when no transaction APIs are used, or when the setAutoCommit function is set to True.

Local transaction operation: When auto-commit is set to False, the first interaction starts a transaction that lasts until an explicit commit (using the transactionCommit function) or an explicit rollback (using the transactionRollback function) occurs. All interactions performed in between are part of that transaction. Note that local is used here to indicate the scope of the transaction, rather than its location: using ACX, the local transaction may be running on a remote machine.

ACX defines the following functions that handle transaction operations:


Set Autocommit Function
Transaction Commit Function
Transaction Rollback Function

Set Autocommit Function


The Set Autocommit function sets the auto-commit mode of the connection.

Syntax
ACXAPI_SET_AUTO_COMMIT(
    void* ConnectHandle,
    int AutoCommit)

Function name in COBOL: ACXASCMT


C function called from COBOL


nvBOOL ACXASCMT( nvINT4 *phConnectHandle, nvINT4 *pbAutoCommit)

This table describes the Set Autocommit function parameters.


Table 85-5 Set Autocommit Function Parameters

ConnectHandle (C; COBOL: *phConnectHandle (nvINT4)), Input: A pointer to the connection.

AutoCommit (C; COBOL: *pbAutoCommit (nvINT4)), Input: The new auto-commit mode of the connection. If set to True, each interaction immediately commits once executed. The auto-commit mode must be turned off if multiple interactions need to be grouped into a single transaction and committed or rolled back as a unit. When auto-commit is reset and no global transaction is in progress, any interaction starts a local transaction. The client is required to use transactionCommit or transactionRollback at the appropriate time to commit or roll back the transaction. The auto-commit mode is True by default and is reset if a distributed (global) transaction is started.

Transaction Commit Function


The Transaction Commit function commits the work done under the global or local transaction.

Syntax
ACXAPI_TRANSACTION_COMMIT(
    void* ConnectHandle,
    ACX_XID *Xid)

Function name in COBOL: ACXACMIT

C function called from COBOL


nvBOOL ACXACMIT( nvINT4 *phConnectHandle)

This table describes the Transaction Commit function parameters.


Table 85-6 Transaction Commit Function Parameters

ConnectHandle (C; COBOL: *phConnectHandle (nvINT4)), Input: A pointer to the connection.

Xid (C), Input: A global transaction identifier, automatically assigned. If not given (or empty), the transaction is assumed to be local. The Xid comprises the following:
formatID: Specifies the format of the Xid.
globalTransactionID: Defines the transaction ID. The value must be less than 128.
branchQualifier: Defines the transaction branch. The value must be less than 128.

Transaction Rollback Function


The Transaction Rollback function rolls back the work done under the (global) transaction.

Syntax
ACXAPI_TRANSACTION_ROLLBACK(
    void* ConnectHandle,
    ACX_XID *Xid)

Function name in COBOL: ACXARBCK

C function called from COBOL


nvBOOL ACXARBCK( nvINT4 *phConnectHandle)

This table describes the Transaction Rollback function parameters.


Table 85-7 Transaction Rollback Function Parameters

ConnectHandle (C; COBOL: *phConnectHandle (nvINT4)), Input: A pointer to the connection.

Xid (C), Input: A global transaction identifier, automatically assigned. If not given (or empty), the transaction is assumed to be local. The Xid comprises the following:
formatID: Specifies the format of the Xid.
globalTransactionID: Defines the transaction ID. The value must be less than 128.
branchQualifier: Defines the transaction branch. The value must be less than 128.
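To illustrate how these functions combine, the following sketch groups two interactions into a single local transaction. The connection handle pCH is assumed to come from a successful ACXAPI_CONNECT, the interaction names and records are hypothetical, and a NULL Xid is passed on the assumption that an empty identifier denotes a local transaction, as described in the tables above:

/* Group two interactions into one local transaction */
ACXAPI_SET_AUTO_COMMIT(pCH, 0);    /* turn auto-commit off */

if (ACXAPI_EXECUTE(pCH, "debitAccount", &debitIn, &debitOut, sizeof(debitOut)) &&
    ACXAPI_EXECUTE(pCH, "creditAccount", &creditIn, &creditOut, sizeof(creditOut))) {
    ACXAPI_TRANSACTION_COMMIT(pCH, NULL);     /* both interactions succeeded */
} else {
    ACXAPI_TRANSACTION_ROLLBACK(pCH, NULL);   /* undo the partial work */
}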

Execution APIs
ACX defines the following execution functions:

Execute Function
Execute Batch Function
Setting Environment Parameters

Execute Function
The Execute function executes a given interaction against the application.

Syntax
ACXAPI_EXECUTE(
    void* ConnectHandle,
    char* InteractionName,
    void* BufferIn,
    void* BufferOut,
    nvINT4 BufferOutLen)

Function name in COBOL: ACXAEXEC

C function called from COBOL


nvBOOL ACXAEXEC( nvINT4 *phConnectHandle, nvINT4 *piInteractionMode, STRING_64 sInteractionName, void *pInput, void *pOutput, nvINT4* iOutputSize)

This table describes the Execute function parameters.


Table 85-8 Execute Function Parameters

ConnectHandle (C; COBOL: *phConnectHandle (nvINT4)), Input: A pointer to the connection.
InteractionMode (C; COBOL: *piInteractionMode (nvINT4)), Input: The mode of the interaction. Note: This parameter only exists in the COBOL function.
InteractionName (C; COBOL: sInteractionName (STRING_64)), Input: The name of the interaction to execute.
BufferIn (C; COBOL: *pInput), Input: A pointer to the input record.
BufferOut (C; COBOL: *pOutput), Output: A pointer to the output record.
BufferOutLen (C; COBOL: *iOutputSize (nvINT4)), Output: The length of the output record.

Execute Batch Function


This function is not available with COBOL. The ExecuteBatch function executes all the operations specified since the function was called with the START operation. The output of batch execution includes the outputs of the individual operations (those that produce an output) in XML format.

Syntax
ACXAPI_EXECUTE_BATCH(
    void* ConnectHandle,
    char* Operation,
    void* BufferOut,
    nvINT4 BufferOutLen)

This table describes the ExecuteBatch function parameters.


Table 85-9 Execute Batch Function Parameters

ConnectHandle, Input: A pointer to the connection.

Operation, Input: The operation to be performed:
START: Start batching ACXAPI_EXECUTE operations.
EXECUTE: Execute all the batched operations.
RESET: Clear the input buffer of all interaction information.

BufferOut, Output: A pointer to the output record.

BufferOutLen, Output: The length of the output record.
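A hedged usage sketch follows; it assumes the operation names are passed exactly as listed in the table and that individual output buffers are not needed while operations are being batched:

char batchOut[4096];    /* receives the combined XML output of the batch */

ACXAPI_EXECUTE_BATCH(pCH, "START", NULL, 0);          /* start batching */
ACXAPI_EXECUTE(pCH, "placeOrder", &order1, NULL, 0);  /* queued, not yet run */
ACXAPI_EXECUTE(pCH, "placeOrder", &order2, NULL, 0);  /* queued, not yet run */
ACXAPI_EXECUTE_BATCH(pCH, "EXECUTE", batchOut, sizeof(batchOut));  /* run all */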

Setting Environment Parameters


These functions let the caller make programmatic changes to the environment setting of the local process. The parameters that can be changed using these functions are the parameters that appear in the environment definition. Some of the parameters cannot be changed once set; therefore, if this function is called to change an unchangeable environment parameter after its value is set, the change is ignored.

Syntax
ACXAPI_SET_ENVIRONMENT(
    char* Environment,
    nvINT4 Flags)

Function name in COBOL: ACXASENV


C function called from COBOL


nvBOOL ACXASENV(
    STRING_256 sEnv,
    nvINT4 dwFlags)

This table describes the Set Environment function parameters.


Table 85-10 Set Environment Function Parameters

Environment (C; COBOL: sEnv (STRING_256)), Input: A string with the environment settings. The string has the following format: /group_name/parameter, .... For example, to set the language to Japanese and enable ACX tracing: /misc/language=jpn, /debug/acxTrace=true

Flags (C; COBOL: dwFlags (nvINT4)), Input: Reserved. Set to 0.
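For example, using the setting string shown in the table (a sketch; the call is assumed to return a success indicator like the other API functions):

/* Set the language to Japanese and enable ACX tracing */
if (!ACXAPI_SET_ENVIRONMENT("/misc/language=jpn, /debug/acxTrace=true", 0)) {
    /* handle the failure; note that changes to unchangeable parameters
       that are already set are silently ignored rather than failing */
}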

Get Adapter Schema Function


This function is not available with COBOL. The GetAdapterSchema function returns the schema of the application adapter that is currently connected.

Syntax
ACXAPI_GET_ADAPTER_SCHEMA(
    void* ConnectHandle,
    void** Definition)

This table describes the GetAdapterSchema function parameters.


Table 85-11 GetAdapterSchema Function Parameters

ConnectHandle, Input: A pointer to the connection.
Definition, Output: The adapter definition listing.

Get Event Function


The Get Event function determines the event to wait for and how long to wait. When an event is received, the function returns the results of performing the event.

Syntax
ACXAPI_GET_EVENT(
    void* ConnectHandle,
    char* EventName,
    long iWait,
    int Keep,
    long *iMaxEvents,
    void* BufferOut,
    long BufferOutLen,
    char** OutputEventName,
    char** EventTimestamp,
    long *iEventsAvailable)

Function name in COBOL: ACXAGTEV

C function called from COBOL


nvBOOL ACXAGTEV( nvINT4 *phConnectHandle, STRING_256 sEventNames, nvINT4 *piWait, nvINT4 *pbKeep, nvINT4 *piMaxEvents, void *pOutput, nvINT4 *piOutputSize, STRING_64 sOutputEventName, STRING_64 sEventTimestamp, nvINT4 *piEventsAvailable)

This table describes the Get Event function parameters.


Table 85-12 Get Event Function Parameters

ConnectHandle (C; COBOL: *phConnectHandle (nvINT4)), Input: A pointer to the connection.
EventName (C; COBOL: sEventNames (STRING_256)), Input: The name of the event to wait for.
Wait (C; COBOL: *piWait (nvINT4)), Input: The length of time to wait to receive the event, in seconds.
Keep (C; COBOL: *pbKeep (nvINT4)), Input: Whether the event should be stored in the repository or deleted once finished. The default is False (the event is deleted).
iMaxEvents (C: *piMaxEvents; COBOL: piMaxEvents (nvINT4)), Input: The maximum number of events to return.
BufferOut (C; COBOL: *pOutput), Output: A pointer to the output record.
BufferOutLen (C; COBOL: *piOutputSize (nvINT4)), Output: The length of the output record.
OutputEventName (C; COBOL: sOutputEventName (STRING_64)), Output: The name of the returned event.
EventTimestamp (C; COBOL: sEventTimestamp (STRING_64)), Output: The timestamp of the returned event.
iEventsAvailable (C; COBOL: *piEventsAvailable (nvINT4)), Output: The number of events that are returned.

Ping Function
The Ping function returns, in a pingResponse response, information about an active adapter.

Syntax
ACXAPI_PING(
    void* ConnectHandle,
    struct _ACX_PING_RESPONSE *pPingResponse)

Function name in COBOL: ACXAPING

C function called from COBOL


nvBOOL ACXAPING( nvINT4 *phConnectHandle, STRING_64 sName, STRING_256 sDescription, STRING_64 sVersion, STRING_64 sType, STRING_64 sOperatingSystem, STRING_64 sVendor, STRING_256 sAuxiliaryInfo)

This table describes the Ping function parameters.


Table 85-13 Ping Function Parameters

ConnectHandle (C; COBOL: *phConnectHandle (nvINT4)), Input: A pointer to the connection.
PingResponse (C), Output: The return information describing the structure of the adapter.
*pszName (C; COBOL: sName (STRING_64)), Input: The adapter name.
*pszDescription (C; COBOL: sDescription (STRING_256)), Input: A description for the adapter.
*pszVersion (C; COBOL: sVersion (STRING_64)), Input: The adapter version.
*pszType (C; COBOL: sType (STRING_64)), Input: The adapter type.
*pszOperatingSystem (C; COBOL: sOperatingSystem (STRING_64)), Input: The operating system where the adapter runs.
*pszVendor (C; COBOL: sVendor (STRING_64)), Input: The adapter vendor.
*pszAuxiliaryInfo (C; COBOL: sAuxiliaryInfo (STRING_256)), Input: Adapter-specific information.
*pszGenre (C), Input: A delimited string describing the adapter abilities.

Get Error Function


The Get Error function returns error information.

Syntax
ACXAPI_GET_ERROR(
    void* ConnectHandle,
    char** Error,
    long *Status)

Function name in COBOL: ACXAGTER

C Function called from COBOL


nvBOOL ACXAGTER(
    nvINT4 *phConnectHandle,
    nvINT4 *piErrorCode,
    STRING_256 sErrorText)


This table describes the Get Error function parameters.


Table 85-14 Get Error Function Parameters

ConnectHandle (C; COBOL: *phConnectHandle (nvINT4)), Input: A pointer to the connection.
Error (C; COBOL: *piErrorCode (nvINT4)), Output: The error message returned by the function.
Status (C; COBOL: sErrorText (nvINT4)), Output: The status of the returned error.

Using APIs to Invoke Application Adapters - Examples


The example code uses the API to place a new order and then find the order. This section contains the following examples:

C Program Example
COBOL Program Example

C Program Example
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "gap.h"
GAP_HELP_DEFINE;
#include "scm.h"

struct _SYS_ORDER_ placeIn = {
    0,
    "Julian W",
    {"Julian White", "Oxford St.", "London", "12345", "UK", "ENGLAND"},
    3,
    {
        {1, "Red book", 1, 31.2},
        {2, "Green book", 1, 31.2},
        {3, "Tarzan book", 3, 5.2},
    }
};

/* Returns the real length of the string in a buffer
   when the buffer is padded with spaces. */
int bufStrLen(char* Buffer, int Len)
{
    int i;
    for (i=Len; i>1; i--) {
        if (Buffer[i-1] != ' ') {
            return i;
        }
    }
    return 0;
}

void display(struct _SYS_ORDER_* pOrder)
{
    int i;
    printf("Order details:\n");
    printf(" Order ID = %i\n", pOrder->iORDER_ID);
    printf(" Ordered By = %.*s\n",
        bufStrLen(pOrder->sORDERED_BY, 64), pOrder->sORDERED_BY);
    printf(" Address: %.*s\n",
        bufStrLen(pOrder->ADDRESS.sADDRESSEE, 64), pOrder->ADDRESS.sADDRESSEE);
    printf(" Street: %.*s\n",
        bufStrLen(pOrder->ADDRESS.sSTREET, 64), pOrder->ADDRESS.sSTREET);
    printf(" City: %.*s\n",
        bufStrLen(pOrder->ADDRESS.sCITY, 64), pOrder->ADDRESS.sCITY);
    printf(" ZIP: %.*s\n",
        bufStrLen(pOrder->ADDRESS.sZIP, 5), pOrder->ADDRESS.sZIP);
    printf(" State: %.*s\n",
        bufStrLen(pOrder->ADDRESS.sSTATE, 2), pOrder->ADDRESS.sSTATE);
    printf(" Country: %.*s\n\n",
        bufStrLen(pOrder->ADDRESS.sCOUNTRY, 64), pOrder->ADDRESS.sCOUNTRY);
    printf(" Order Lines %i:\n", pOrder->iN_LINES);

    for(i=0; i<pOrder->iN_LINES; i++) {
        printf(" %i. Item Name = %.*s, Quantity = %i, Price = %f\n",
            pOrder->LINES[i].iLINE_NO,
            bufStrLen(pOrder->LINES[i].sITEM_NAME, 64), pOrder->LINES[i].sITEM_NAME,
            pOrder->LINES[i].iQUANTITY,
            pOrder->LINES[i].dITEM_PRICE);
    }
}

void report_error(void *pCh, char *pError)
{
    char *pAcxError;
    ACXAPI_GET_ERROR(pCh, &pAcxError, NULL);
    printf("%s, %s\n", pError, pAcxError);
}

int main(int argc, char** argv)
{
    void *pCH;
    char hostname[128];
    struct _SYS_ORDER_ order;
    struct _SYS_FIND_ORDER_ find;
    struct _SYS_PLACE_ORDER_RESPONSE placeOut;

    if (argc > 1)
        strcpy(hostname, argv[1]);
    else
        strcpy(hostname, "localhost:2551");

    if (!GAP_LOAD()) {
        printf("Failed to load the ACX API\n");
        exit(1);
    }

    printf("\nConnecting to %s...\n\n", hostname);
    if (!ACXAPI_CONNECT(hostname, "", "", "", "orders", TRUE, 0, 0x01,
                        NULL, NULL, NULL, &pCH)) {
        printf("Failed to connect\n");
        return 1;
    }

    printf("Place an order with %i items.\n\n", placeIn.iN_LINES);
    if (!ACXAPI_EXECUTE(pCH, "placeOrder", &placeIn, &placeOut, sizeof(placeOut))) {
        report_error(pCH, "ACXAPI_EXECUTE failed");
        return 1;
    }
    printf("Order was accepted, order ID = %i was returned.\n\n", placeOut.iORDER_ID);

    printf("Retrieve an order, ID = %i.\n\n", placeOut.iORDER_ID);
    find.iORDER_ID = placeOut.iORDER_ID;
    if (!ACXAPI_EXECUTE(pCH, "findOrder", &find, &order, sizeof(order))) {
        report_error(pCH, "ACXAPI_EXECUTE failed");
        return 1;
    }
    display(&order);

    printf("\nDisconnect...\n");
    ACXAPI_DISCONNECT(pCH);
    pCH = NULL;
    return 0;
}


COBOL Program Example


identification division.
program-id. ACX3GL_TEST.
data division.
working-storage section.
01 AA-ORDER.
   03 AA-ORDER-ID pic s9(8) comp.
   03 AA-ORDERED-BY pic x(64).
   03 AA-ADDRESS.
      05 AA-ADDRESSEE pic x(64).
      05 AA-STREET pic x(64).
      05 AA-CITY pic x(64).
      05 AA-ZIP pic x(5).
      05 AA-STATE pic x(2).
      05 AA-COUNTRY pic x(64).
   03 AA-N-LINES pic s9(8) comp.
   03 AA-LINES occurs 0 to 30 times depending on AA-N-LINES.
      05 AA-LINE-NO pic s9(8) comp.
      05 AA-ITEM-NAME pic x(64).
      05 AA-QUANTITY pic s9(8) comp.
      05 AA-ITEM-PRICE usage is comp-2.
01 AA-ORDER-CONFIRM.
   03 AA-NEW-ORDER-ID pic s9(8) comp.
01 AA-ORDER-CONFIRM-LEN pic s9(8) comp.
01 CONNECT-PARM.
   03 CP-SERVERS-URL   pic x(256) value is "localhost:2551".
   03 CP-USERNAME      pic x(64)  value is low-value.
   03 CP-PASSWORD      pic x(64)  value is low-value.
   03 CP-WORKSPACE     pic x(64)  value is low-value.
   03 CP-ADAPTER       pic x(64)  value is "orders".
   03 CP-PERSISTENT    pic s9(8)  COMP value is 0.
   03 CP-IDLE-TIMEOUT  pic s9(8)  COMP value is 0.
   03 CP-CONNECT-MODE  pic s9(8)  COMP value is 0.
   03 CP-SCHEMA-FILE   pic x(256) value is "".
   03 CP-ENC-KEY-NAME  pic x(64)  value is low-value.
   03 CP-ENC-KEY-VALUE pic x(256) value is low-value.
   03 CP-CON-ID        pic s9(8)  COMP.
01 GETERR-PARM.
   03 GE-STATUS-CODE pic s9(8) comp.
   03 GE-ERROR-TEXT pic x(256).
01 EXEC-PARM.
   03 EP-INTERACTION-MODE pic s9(8) comp value is 0.
   03 EP-INTERACTION-NAME pic x(64).
   03 EP-OUT-LENGTH pic s9(8) comp.
77 RET-CODE pic s9(8) comp.
procedure division.
acx3gl-main section.
main-start.
    display "Initializing ACX3GL API".
    call "ACXAINIT".
    display "Connecting to the Orders system".
    call "ACXACNCT" using CP-SERVERS-URL
                          CP-USERNAME
                          CP-PASSWORD
                          CP-WORKSPACE
                          CP-ADAPTER
                          CP-PERSISTENT
                          CP-IDLE-TIMEOUT
                          CP-CONNECT-MODE
                          CP-SCHEMA-FILE
                          CP-ENC-KEY-NAME
                          CP-ENC-KEY-VALUE
                          CP-CON-ID
                    returning RET-CODE.
    if RET-CODE = 0 perform report-error thru report-error-x.
    display "Placing an order...".
    perform fill-order thru fill-order-x.
    move "placeOrder" to EP-INTERACTION-NAME.
*-  Some COBOLs have a LENGTH function...
    compute AA-ORDER-CONFIRM-LEN = 4.
    call "ACXAEXEC" using CP-CON-ID
                          EP-INTERACTION-MODE
                          EP-INTERACTION-NAME
                          AA-ORDER
                          AA-ORDER-CONFIRM
                          AA-ORDER-CONFIRM-LEN
                    returning RET-CODE.
    if RET-CODE = 0 perform report-error thru report-error-x.
    display "New order ID is " AA-NEW-ORDER-ID with conversion.
    display "Disconnecting...".
    call "ACXADSCO" using CP-CON-ID.
    stop run.
main-end.
    exit program.
report-error.
    call "ACXAGTER" using CP-CON-ID GE-STATUS-CODE GE-ERROR-TEXT.
    display "Error: " GE-ERROR-TEXT.
    stop run.
report-error-x.
    exit.
fill-order.
    move 0 to AA-ORDER-ID.
    move "Julian W" to AA-ORDERED-BY.
    move "Julian White" to AA-ADDRESSEE.
    move "Oxford St." to AA-STREET.
    move "London" to AA-CITY.
    move "12345" to AA-ZIP.
    move "UK" to AA-STATE.
    move "ENGLAND" to AA-COUNTRY.
    move 1 to AA-LINE-NO(1).
    move "Red Book" to AA-ITEM-NAME(1).
    move 12 to AA-QUANTITY(1).
    move 19.90 to AA-ITEM-PRICE(1).
    move 2 to AA-LINE-NO(2).
    move "Gold Book" to AA-ITEM-NAME(2).
    move 5 to AA-QUANTITY(2).
    move 124.90 to AA-ITEM-PRICE(2).
    move 2 to AA-N-LINES.
fill-order-x.
    exit.

end program ACX3GL_TEST.

CICS as a Client Invoking an Application Adapter (z/OS Only)


AIS includes a CICS transaction that can be called from a C or COBOL program to invoke an application adapter. The CICS transaction is used instead of the C or COBOL APIs directly. To invoke an application adapter using a CICS transaction, perform the following tasks:

Configuring the IBM z/OS Machine
Using a CICS Transaction to Invoke an Application Adapter
Calling the Transaction

Configuring the IBM z/OS Machine


Before using the CICS transaction, you need to configure the IBM z/OS machine.

To configure the IBM z/OS machine:

1. Copy NAVROOT.LOAD(ATTCICSD) to a CICS DFHRPL library.
2. Copy NAVROOT.LOAD(TRANS3GL) to a CICS DFHRPL library. Verify that the CICS Socket Interface is enabled by issuing the following CICS command:
EZAO START CICS
If you are not sure whether the system is configured with the Socket Interface, try running the EZAC transaction. If the transaction produces a screen, you should be able to run the EZAO startup transaction. If not, check whether the transaction has been defined in a group that has not been installed, for example: CEDC V TRANS(EZAC) G(*). If it is defined in a group, install that group and try running EZAO again. If not successful, you need to configure CICS as outlined in the TCP/IP V3R2 For MVS: CICS Sockets Interface Guide.
3. Set up the CICS resource definitions for the C or COBOL program. The following JCL can be used as a template and modified according to the guidelines below:
//ATTCSD JOB Attunity,CSD,MSGLEVEL=1,NOTIFY=&SYSUID
//STEP1 EXEC PGM=DFHCSDUP,REGION=512K,
// PARM=CSD(READWRITE),PAGESIZE(60),NOCOMPAT
//STEPLIB DD DSN=<HLQ1>.SDFHLOAD,DISP=SHR
//DFHCSD DD UNIT=SYSDA,DISP=SHR,DSN=<HLQ2>.CSD


//OUTDD DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
*/******************************************************/
*/* Attunity CICS Definitions                           */
*/******************************************************/
*-------------------------------------------------------*
* Note: Install GROUP(att) - CEDA IN G(att)             *
* If you are rerunning this, uncomment the DELETE       *
* command.                                              *
*-------------------------------------------------------*
*
* Start attunity RESOURCES:
*
* DELETE GROUP(att)
DEFINE PROGRAM(ATTCICSD) GROUP(att) LANGUAGE(C)
       DATALOCATION(ANY) DE(attunity DLL)
DEFINE PROGRAM(TRANS3GL) GROUP(att) LANGUAGE(C)
       DATALOCATION(ANY) DE(attunity DLL)
DEFINE PROGRAM(<PROG>) GROUP(att) LANGUAGE(<LANG>)
       DATALOCATION(ANY) DE(attunity)
DEFINE TRANSACTION(<attTRAN>) GROUP(att) PROGRAM(<PROG>)
       TASKDATAL(ANY) DE(attunity TRAN ID)
LIST GROUP(att)
*
* End attunity RESOURCES
*
/*
//

4. Modify the JCL, as follows:


Change the JOB card to suit the site.
Change <HLQ1> to point to the CICS SDFHLOAD library.
Change <HLQ2> to point to the CICS CSD dataset.
Change <LANG> to the language: C or COBOL.
Change <PROG> to the COBOL program name.
If you are calling the C or COBOL program from a CICS transaction, change <attTRAN> to the CICS transaction name that calls the COBOL program.

5. Install the att group from CICS by issuing the following command:

CEDA IN G(att)

Using a CICS Transaction to Invoke an Application Adapter


You can use a CICS transaction to invoke an application adapter, instead of using the C or COBOL APIs directly. A buffer that contains the information needed to trigger the interaction is set in the CICS COMMAREA, and the TRANS3GL transaction is then called to send the interaction. The buffer is formatted as described in the following table.


Table 85-15 Data Buffer Parameters

Version (size 4): The version of the APIs used. The expected value is 4.

ServersUrl (size 256): The URL of the z/OS machine and the port number where the daemon runs. For example, IP1:2551, where IP1 is the URL and 2551 is the port.

Username (size 64): A valid username to access the z/OS machine.

Password (size 64): A valid password for the username.

Workspace (size 64): A daemon workspace. The default is Navigator.

AdapterName (size 64): The name of the adapter.

SchemaFileName (size 256): For future use. Leave blank.

EncKeyName (size 64): For future use. Leave blank.

EncKeyValue (size 256): For future use. Leave blank.

InteractionName (size 64): The name of the interaction in the application definition.

Flags (size 4): The following flags are available:
1. A trace of the XML is performed.
2. A trace of the communication calls is performed.
3. Both the XML and communication calls are traced.
4. The NAT firewall protocol is used, with a fixed address for the daemon.
5. A trace of the XML is performed and the NAT firewall protocol is used, with a fixed address for the daemon.
6. A trace of the communication calls is performed and the NAT firewall protocol is used, with a fixed address for the daemon.
7. Both the XML and communication calls are traced and the NAT firewall protocol is used, with a fixed address for the daemon.
For information on the NAT firewall protocol, see Firewall Support.

Input format (size 4): The following formats are available:
0: Input is provided as XML.
1: Input is provided using parameters.

Input (size varies): The size of the input depends on the value specified in the Input size parameter.
If the Input format is set to 0 (XML), the input is formatted as follows:
The first four bytes specify the size of the input XML string.
The next 64 bytes specify the name of the record used for the output (the inbound interaction).
The next bytes (to the exact length specified in the first four bytes) specify the input XML string. For example: <findorder ORDER_NO=17 />, where findorder is the inbound interaction name.
If the Input format is set to 1 (the input is provided using parameters), the input is formatted as follows:
The first four bytes specify the number of parameters.
The next 4 bytes specify the maximum size of any parameter value.
The next 64 bytes specify the name of the record used for the output (the inbound interaction).
The next 32 bytes specify the name of the parameter.
The next bytes (to the exact length specified in the first four bytes) specify the input parameter.
The following bytes repeat the last two entries until all the parameters are specified.

COBOL Data Buffer


The following template can be used to set the COBOL data buffer:
*
* COBOL COPY OF DATA BUFFER
*
01 COMM-DATA-BUFF PIC X(5000).
01 COMM-DATA-BUFF-ERROR REDEFINES COMM-DATA-BUFF.
   05 COMM-ERROR-STATUS PIC S9(8) COMP SYNC.
   05 COMM-ERROR-MSG PIC X(256).
01 COMM-DATA-BUFF-INPUT REDEFINES COMM-DATA-BUFF.
   05 INPUT-COMMAREA-3GL.
      10 INCOM-VERSION PIC S9(8) COMP SYNC.
      10 INCOM-SERVERS-URLS PIC X(256).
*        /* IP1:PORT[,IP2:PORT] [,...] */
      10 INCOM-USER-NAME PIC X(64).
      10 INCOM-PASSWORD PIC X(64).
      10 INCOM-WORKSPACE PIC X(64).
      10 INCOM-ADAPTER-NAME PIC X(64).
      10 INCOM-SCHEMA-FILE-NAME PIC X(256).
      10 INCOM-ENC-KEY-NAME PIC X(64).
      10 INCOM-ENC-KEY-VALUE PIC X(256).
      10 INCOM-INTERACTION-NAME PIC X(64).
      10 INCOM-DW-FLAGS PIC S9(8) COMP SYNC.
      10 INCOM-INP-FORMAT PIC S9(8) COMP SYNC.
      10 INCOM-EXEC-INPUT.
         15 INCOM-XML-BUFF.
            20 INCOM-XML-ILEN PIC S9(8) COMP SYNC.
            20 INCOM-XML-INTER-OUTREC-NAME PIC X(64).
*====>>> CHANGE ??? TO LEN SPECIFIED IN INCOM-XML-ILEN
            20 INCOM-XML-INPUT PIC X(???).
         15 INCOM-PARAM-BUFF REDEFINES INCOM-XML-BUFF.
            20 INCOM-PARAM-COUNT PIC S9(8) COMP SYNC.
            20 INCOM-PARAM-VALUE-LEN PIC S9(8) COMP SYNC.
            20 INCOM-PARAM-INT-OUTREC-NAME PIC X(64).
*====>>> CHANGE ?? TO COUNT SPECIFIED IN INCOM-PARAM-COUNT
            20 INCOM-PARAM-NAME-VALUE OCCURS ?? TIMES.
               25 INCOM-PARAM-NAME PIC X(32).
*====>>> CHANGE ?? TO LEN SPECIFIED IN INCOM-PARAM-VALUE-LEN
               25 INCOM-PARAM-VALUE PIC X(??).
01 COMM-DATA-BUFF-OUTPUT REDEFINES COMM-DATA-BUFF.
   05 COMM-OUT-STATUS PIC S9(8) COMP SYNC.
   05 COMM-OUT-LENGTH PIC S9(8) COMP SYNC.
   05 COMM-OUT-DATA PIC X(4992).

Calling the Transaction


The TRANS3GL transaction is called as follows:
EXEC CICS LINK PROGRAM("TRANS3GL") COMMAREA(commDataBuff) LENGTH(iCommSize);

where:

commDataBuff: The buffer with the interaction details, used in the COMMAREA.
iCommSize: The size of the buffer. This value is also used to determine the size of the output string, so make sure the value is big enough for the expected output.

After defining the COMMAREA and calling the TRANS3GL transaction in the COBOL program, compile and move the COBOL program to a CICS DFHRPL (LOAD) library.

Transaction Output
The output includes a 4-byte success flag: Zero for success, otherwise failure. The output overrides the input. If the result is failure, an error message with a length of 256 bytes is returned. If XML was specified for the input and the result is success, the output is formatted as XML, as follows:

The first four bytes specify the size of the output.
The following bytes make up the XML output.

If parameters were specified for the input and the result is success, the output is formatted as follows:

The first four bytes specify the size of the output.
The next 32 bytes specify the name of the output attribute.


The next bytes (to the exact length specified for the input string) specify the output value.
The following bytes repeat the last two entries until all the output is specified.

CICS Connection Pooling under CICS


This section describes how to use the CICS 3GL interface with connection pooling. Connection pooling is used to provide a persistent connection to the CICS adapter. The following sections describe the programs and operations necessary to enable connection pooling.

Using Connection Pooling under CICS
CICS Connection Pool Flow
ATTCALL Program Interface
ATTCNTRL Program
Setting up 3GL under CICS

Using Connection Pooling under CICS


In most cases each CICS task has to load the 3GL library and connect to the target adapter, which strongly affects performance. Attunity now supports preloaded CICS tasks that have a persistent connection with the adapter in use. An application program sends 3GL requests through one of the free tasks with the necessary connection. To provide the persistent connection, a connection pool is used. Each pool may contain multiple 3GL threads, where each thread provides its own connection to the same adapter. The CICS 3GL connection pool uses three programs:

ATTHRDPL: This program is the thread program used for providing a persistent connection to an adapter.
ATTCALL: This is a dispatcher program that:
Provides control activity for connection pools and the threads in the pools with a 3GL control protocol
Provides 3GL operations being moved to the 3GL with a Pool 3GL protocol

ATTCNTRL: This program supports most of the ATTCALL Control Protocol operations as a standalone CICS transaction.

CICS Connection Pool Flow


The following figure shows the connection pooling flow:


Figure 85-1 Connection Pool Flow

The following is the flow as shown in the above figure. It describes each operation in the flow. In this flow, the user enters the request into the COMMAREA and then activates the ATTCALL Program Interface using EXEC CICS LINK. The ATTCALL program uses the ATTHRDPL program, which is defined as a CICS transaction, to move 3GL requests to Attunity 3GL operations. The flow has two parts:

Control Operations Flow
3GL Operations Flow

Control Operations Flow


The following operations are executed in the Control Operations flow:

PURGELOG: Restarts the logging in the ATYCPLOG file. This can be executed at any time in the flow.

INITPOOL: The first operation is the initialization operation (INITPOOL). This allocates the pool memory from CICS for each thread and activates the corresponding thread transaction (defined in the COMMAREA). Each thread (ATTHRDPL program) tries to connect to the corresponding adapter using the provided connect string. If the connection cannot be established, each subsequent request initiates a reconnection.

TERMPOOL: In this operation, the ATTCALL program checks that there are no active requests on any thread and terminates the request after a delay. The delay is defined as an INITPOOL operation parameter. When there are no more active requests in any thread, the transactions are terminated, and the pool memory is freed.

RELOAD: Each 3GL request locks a thread to execute the operation. In some cases the thread is not unlocked after the operation is completed, usually because of an abnormal termination of a user program. The RELOAD operation is used to return the thread to active service. In the RELOAD operation, ATTCALL sends a signal to a thread; the thread activates another thread and then terminates itself.

3GL Operations Flow


The following operations are executed in the 3GL Operations flow:

BEGINT: In this operation, ATTCALL searches for an inactive thread, locks it, and posts the corresponding thread. This thread executes the Begin Transaction 3GL operation and ATTCALL returns the locked thread ID to the user program.

EXECT: In this operation, ATTCALL posts the thread that was locked in the BEGINT operation. The thread executes the requested 3GL interaction and moves the 3GL output data to the user program. ATTCALL returns the 3GL operation's return code to the user program.

ENDT: In this operation, ATTCALL posts the thread that was locked in the BEGINT operation. The thread executes the End Transaction 3GL operation and ATTCALL unlocks the thread.

EXECI: This is a stand-alone operation. In this operation, ATTCALL searches for an inactive thread, locks it, and posts the corresponding thread. The thread executes the requested 3GL interaction and moves the 3GL output data to the user program. ATTCALL returns the 3GL operation's return code to the user program and unlocks the thread.

ATTCALL Program Interface


This section describes the ATTCALL program interface. The interface includes the contents of the COMMAREA and the ATTCALL program operations. There are two types of operations, the Control Protocol operations and the 3GL Protocol operations. All of the operations are both input and output type.

COMMAREA
A user program and ATTCNTRL provide a COMMAREA for the ATTCALL program. The following structure is loaded into the COMMAREA.
typedef struct ATTREQ {
    char operation[8];            /* POOL operation */
    char pool_name[8];
    int operation_RC;             /* POOL return code */
    unsigned int trace_level;     /* 0/1 */
    union {
        struct {
            char txn_name[4];            /* transaction name of ATTHRDPL */
            unsigned int n_threads;      /* parallel connections */
            unsigned int delay_time;     /* wait when all threads are busy */
            unsigned int attempt_num;    /* retries after the wait */
            ATT_CONNECT *ais_connect;    /* AIS connect info */
        } initpool;
        struct {
            unsigned int thread_number;  /* the thread to be reloaded */
        } reload;
        struct {
            unsigned int thread_handle;  /* for BEGINT/EXECT/ENDT */
            char *error_message_buffer;  /* ATT 3GL error message */
            unsigned int max_error_message_size; /* 0 - no message is returned */
            int interaction_RC;          /* ATT 3GL return code */
            union {                      /* thread operations data */
                struct {
                    char interaction[64];       /* 3GL interaction */
                    void *input;                /* 3GL EXECUTE input */
                    unsigned int output_length; /* 3GL EXECUTE output length */
                    void *output;               /* 3GL EXECUTE output */
                } exec;
                struct {
                    int commit;          /* 1 - commit, 0 - rollback */
                } endt;
            } data;
        } inter;
    } op;
} ATTREQ;

The Attunity Connect structure should be defined as follows:


typedef struct ATT_CONNECT {
    char ServersUrls[256];
    char Username[64];
    char Password[64];
    char Workspace[64];
    char AdapterName[64];
    int Persistent;
    long IdleTimeout;
    int ConnectMode;
    char SchemaFileName[256];
    char EncKeyName[64];
    char EncKeyValue[256];
} ATT_CONNECT;

Control Protocol
This section describes the ATTCALL Control parameters. The operations in the Control Protocol are:

INITPOOL: Initiates a pool. It allocates pool and thread memory, and activates thread transactions (that use the ATTHRDPL program).
TERMPOOL: Terminates threads and returns the pool memory.
STATUS: Prints a list of pools or a pool status to a temporary storage queue called ATTHRDPL.
RELOAD: Interrupts a thread and reloads it.
TRACELVL: Changes the pool trace level.
PURGELOG: Purges the ATYCPLOG file.

The following tables describe the parameters for the Control operations. All of the operations share some common parameters. Some operations have additional parameters. These are also shown in the tables below. The following table describes the Control protocol common parameters:


Table 85-16 Control Protocol Common Parameters

operation (input): The name of the operation.
pool_name (input): The name of the connection pool.
trace_level (input): The ATTCALL trace level. The thread trace level can be changed using the TRACELVL operation.
operation_RC (output): The return code. The valid values are:
  General:
    0: OK (The operation is completed; the request has been moved to a thread)
    08: Invalid operation name
  INITPOOL operations:
    16: Invalid number of threads
    20: Error during start thread task
  Other operations:
    04: Pool is busy
    12: Pool not found
    16: EXECT/ENDT was not invoked by the task that had invoked BEGINT
    20: ABEND in thread

The following table describes additional parameters in the INITPOOL operation:


Table 85-17 INITPOOL Parameters

txn_name (input): The thread CICS transaction name.
n_threads (input): The number of threads.
delay_time (input): The time (in seconds) used to indicate how long a user task sleeps when all the threads are busy.
attempt_num (input): The number of attempts to make when locking a thread when executing a request.
ais_connect (input): The address of the ATT_CONNECT struct.

The following table describes the additional parameter for the RELOAD operation:
Table 85-18 RELOAD Parameter

thread_number (input): The number of the thread to be reloaded (starting from 0).

3GL Protocol
This section describes the ATTCALL 3GL operation parameters. The operations in the 3GL protocol are:

BEGINT: Locks a thread and starts a transaction in the thread.
ENDT: Commits or rolls back the transaction in the thread and frees the thread so that it can receive a new transaction.
EXECT: Executes an interaction in a transaction.
EXECI: Executes a stand-alone transaction.

The following tables describe the parameters for the 3GL operations. All of the operations share some common parameters. Some operations have additional parameters. These are also shown in the tables below. The following table describes the 3GL Protocol common parameters:
Table 85-19 3GL Protocol Common Parameters

error_message_buffer (input): A buffer address that is used to return the 3GL error message.
max_error_message_size (input): The size of the error message buffer.
interaction_RC (output): The 3GL return code.
thread_handle: An output parameter for BEGINT and an input parameter for the EXECT and ENDT operations.

The following table describes additional parameters in the EXECT and EXECI operations:
Table 85-20 EXECT and EXECI Parameters

interaction (input): A buffer containing the name of the 3GL interaction.
input (input): An input record address.
output (input): An output record address.
output_length (input): The size of the output buffer.

The following table describes the additional parameter for the ENDT operation:
Table 85-21 ENDT Parameter

commit (input): A commit/rollback switch. The values are commit (1) and rollback (0).
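To make the protocol concrete, here is a hedged C sketch of the BEGINT/EXECT/ENDT sequence, continuing the structures and the request variable from the earlier INITPOOL sketch. The interaction name, the buffer sizes, and the EXEC CICS LINK invocation are illustrative assumptions only.

/* Hypothetical transaction flow: BEGINT -> EXECT -> ENDT.
 * Field names follow the ATTREQ layout shown earlier. */
char  errmsg[256];
char  output[1024];
short req_len = sizeof(request);

memset(&request, 0, sizeof(request));
memcpy(request.operation, "BEGINT  ", 8);
memcpy(request.pool_name, "MYPOOL  ", 8);
request.op.inter.error_message_buffer   = errmsg;
request.op.inter.max_error_message_size = sizeof(errmsg);
EXEC CICS LINK PROGRAM("ATTCALL") COMMAREA(&request) LENGTH(req_len);
/* The thread_handle returned by BEGINT stays in the struct and is
 * passed back on EXECT/ENDT. */

memcpy(request.operation, "EXECT   ", 8);
strcpy(request.op.inter.data.exec.interaction, "findOrder"); /* assumed */
request.op.inter.data.exec.input         = NULL;
request.op.inter.data.exec.output        = output;
request.op.inter.data.exec.output_length = sizeof(output);
EXEC CICS LINK PROGRAM("ATTCALL") COMMAREA(&request) LENGTH(req_len);

memcpy(request.operation, "ENDT    ", 8);
request.op.inter.data.endt.commit = 1;   /* 1 = commit, 0 = rollback */
EXEC CICS LINK PROGRAM("ATTCALL") COMMAREA(&request) LENGTH(req_len);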

ATTCNTRL Program
The ATTCNTRL program receives its input from the CICS window. The input is provided in the following format:

tran name [PLOG] operation (poolname) operation parameters

The entry of the keywords is not case sensitive. You can enter a parameter's first three letters or its full name. See ATTCNTRL Parameters for a description of the parameters for this program. You can carry out the PLOG (purge log) operation before the other operations. Only one operation can be executed for each program activation. The ATTCNTRL program has the following operations:

PINIT: Initializes (init) a pool.
TPLVL: Changes the trace level in the pool threads.
RELOAD: Reloads a pool thread.
STATUS: Prints the pool status or a list of pools (if no pool name is provided).
PTERM: Terminates a pool.
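For example, assuming ATTCNTRL has been defined under a CICS transaction named ATYC (a hypothetical name; use the transaction you define in the CICS Definitions later in this section), a status request for a pool named MYPOOL could be entered as:

ATYC STA (MYPOOL)

where STA is the first three letters of the STATUS operation.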

ATTCNTRL Parameters
This section describes the parameters for each operation in the ATTCNTRL program.

PINIT Parameters
TPLVL Parameters
RELOAD Parameters
STATUS Parameters
PTERM Parameters
Table 85-22 PINIT Parameters

Connection Pool Parameters:
TRACELVL: The trace level for the pool threads. Enter 0 or 1.
THREADS: The number of threads that are initiated.
DELAY: The delay time.
RETRYNUM: The number of attempts allowed to retry the connection before getting a BUSY return code.
CTARN: The ATTHRDPL CICS connection time.

3GL Parameters:
SERVER: The URL for the server where the connection pool resides.
ADAPTER: The name of the CICS adapter being used.
USER: The user name.
PASSWORD: The user's password.
WORKSPACE: The name of the workspace where the adapter resides.

Table 85-23 TPLVL Parameters

TRACELVL: The trace level for the pool threads. Enter 0 or 1.

Table 85-24 RELOAD Parameters

TRACELVL: The trace level for the pool threads. Enter 0 or 1.
THREAD: The number of the thread to be reloaded. If reloading the first thread only, enter 0.

Table 85-25 STATUS Parameters

TRACELVL: The trace level for the pool threads. Enter 0 or 1.


Table 85-26 PTERM Parameters

TRACELVL: The trace level for the pool threads. Enter 0 or 1.

Setting up 3GL under CICS


When creating a CICS connection in your system, make sure of the following if you want to use CICS connection pooling under the 3GL interface:

Make sure the ATTCALL program is linked with the REUS parameter.
Define the LKED.SYSIN parameter in ATTHRDPL:
  INCLUDE SYSLIB (EZACIC07)
  NAME ATTHRDPL(R)
Create a Log File called ATYCPLOG.
Enter the CICS Definitions.

Create a Log File


You can create a log file using the IDCAMS utility. The log file name is ATYCPLOG. Enter the following to create the log file.
DEF CL (NAME() RECSZ(92 92) NUMBERED CYL(1 1) VOL()) DATA (NAME() CISZ(4096))

CICS Definitions
Enter the following CICS definitions:
DEFINE FILE(ATYCPLOG) GROUP(ATY) DSNAME() RECORDFORMAT(F) ADD(YES) BROWSE(YES) DELETE(YES) READ(YES) UPDATE(YES) STRINGS(5)
DEFINE PROGRAM(ATTHRDPL) GROUP(ATY) LANGUAGE(C)
DEFINE PROGRAM(ATTCALL) GROUP(ATY) LANGUAGE(C)
DEFINE PROGRAM(ATTCNTRL) GROUP(ATY) LANGUAGE(C)
DEFINE TRANSACTION() PROGRAM(ATTCNTRL) GROUP(ATY)
DEFINE TRANSACTION() PROGRAM(ATTHRDPL) GROUP(ATY)
DEFINE TRANCLASS(ATY) GROUP(ATY) MAXACTIVE()

IMS/TM as a Client Invoking an Application Adapter (z/OS Only)


AIS includes an IMS/TM transaction that can be called from a C or COBOL program to invoke an application adapter. The IMS/TM transaction is used instead of calling the C or COBOL APIs directly. In order to invoke an application adapter using an IMS/TM transaction, you need to perform the following tasks:

Setting Up the IBM z/OS Machine
Setting Up a Call to the Transaction

Setting Up the IBM z/OS Machine


Before using the IMS/TM transaction, you need to set up the IBM z/OS machine using the following procedure:

To set up the IBM z/OS machine
1. Copy NAVROOT.LOAD(BASE) to an IMS/TM program library (such as IMS.PGMLIB).
2. To use the transaction in a C program, copy NAVROOT.LOAD(ATYDC3GC) to the same IMS/TM program library (such as IMS.PGMLIB).
3. To use the transaction in a COBOL program, copy NAVROOT.LOAD(ATYDC3GL) to the same IMS/TM program library (such as IMS.PGMLIB).

Setting Up a Call to the Transaction


The C or COBOL program sets up a buffer that contains the information needed for the inbound interaction and then calls the ATYDC3GC (C) or ATYDC3GL (COBOL) program to send the interaction. The buffer is formatted as described in the following table.
Table 85-27 Data Buffer Parameters

Version (size 4): The version of the APIs used. The expected value is 1.
ServersUrl (size 256): The URL of the z/OS machine and the port number where the daemon runs. For example, IP1:2551, where IP1 is the URL and 2551 is the port.
Username (size 64): A valid username to access the z/OS machine.
Password (size 64): A valid password for the user name.
Workspace (size 64): A daemon workspace. The default is Navigator.
AdapterName (size 64): The name of the adapter.
SchemaFileName (size 256): For future use. Leave blank.
EncKeyName (size 64): For future use. Leave blank.
EncKeyValue (size 256): For future use. Leave blank.
InteractionName (size 64): The name of the interaction in the application definition.


Table 85-27 (Cont.) Data Buffer Parameters

Flags (size 4): The following flags are available:
1: A trace of the XML is performed.
2: A trace of the communication calls is performed.
3: Both the XML and communication calls are traced.
4: The NAT firewall protocol is used, with a fixed address for the daemon.
5: A trace of the XML is performed and the NAT firewall protocol is used, with a fixed address for the daemon.
6: A trace of the communication calls is performed and the NAT firewall protocol is used, with a fixed address for the daemon.
7: Both the XML and communication calls are traced and the NAT firewall protocol is used, with a fixed address for the daemon.
For information on the NAT firewall protocol, see Firewall Support.

Input format (size 4): The following formats are available:
0: Input is provided as XML.
1: Input is provided using parameters.

Input: The size of the input depends on the value specified in the Input size parameter. If the Input format is set to 0 (XML), the input is formatted as follows:

The first four bytes specify the size of the input XML string.
The next 64 bytes specify the name of the record used for the output (the inbound interaction).
The next bytes (to the exact length specified in the first four bytes) specify the input XML string. For example: <findorder ORDER_NO=17 />, where findorder is the inbound interaction name.

If the Input format is set to 1 (the input is provided using parameters), the input is formatted as follows:

The first four bytes specify the number of parameters.
The next 4 bytes specify the maximum size of any parameter value.
The next 64 bytes specify the name of the record used for the output (the inbound interaction).
The next 32 bytes specify the name of the parameter.
The next bytes (to the exact length specified as the maximum parameter value size) specify the input parameter value.
The following bytes repeat the last two entries until all the parameters are specified.
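As a hedged illustration of the XML-format layout above, the following C fragment packs the buffer field by field. The helper name, the connection values, and the interaction name are hypothetical; only the field sizes follow the table. strncpy is used for brevity; the server may expect blank-padded fields.

#include <string.h>

/* Hypothetical packing of the IMS/TM data buffer for XML input
 * (Input format 0); error handling is omitted. */
static int pack_xml_input(char *buf, const char *xml, const char *outrec)
{
    char *p = buf;
    int   version = 1;
    int   flags = 0, format = 0;        /* format 0 = XML input    */
    int   len = (int)strlen(xml);

    memcpy(p, &version, 4);      p += 4;
    strncpy(p, "IP1:2551", 256); p += 256;   /* assumed daemon URL      */
    strncpy(p, "user", 64);      p += 64;
    strncpy(p, "pass", 64);      p += 64;
    strncpy(p, "Navigator", 64); p += 64;    /* workspace               */
    strncpy(p, "orders", 64);    p += 64;    /* assumed adapter name    */
    memset(p, ' ', 256);         p += 256;   /* SchemaFileName (blank)  */
    memset(p, ' ', 64);          p += 64;    /* EncKeyName (blank)      */
    memset(p, ' ', 256);         p += 256;   /* EncKeyValue (blank)     */
    strncpy(p, "findOrder", 64); p += 64;    /* assumed interaction     */
    memcpy(p, &flags, 4);        p += 4;
    memcpy(p, &format, 4);       p += 4;
    memcpy(p, &len, 4);          p += 4;     /* size of the XML string  */
    strncpy(p, outrec, 64);      p += 64;    /* output record name      */
    memcpy(p, xml, len);         p += len;   /* the XML string itself   */
    return (int)(p - buf);
}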

Calling the Transaction


The ATYDC3GC transaction is called using C (see C Call). The ATYDC3GL transaction is called using COBOL (see COBOL Call).


C Call
The ATYDC3GC transaction is called as follows:
unsigned char commDataBuff[5000];
short comlen = 5000;

typedef void (*f_ptr)(char *, short int *);
static f_ptr fetch_ptr;

fetch_ptr = (f_ptr) fetch("ATYDC3GC");
/* ....... preparing the buffer ...... */
fetch_ptr(commDataBuff, &comlen);
/* ....... preparing the buffer ...... */
fetch_ptr(commDataBuff, &comlen);

where:

commDataBuff: The buffer with the interaction details.
comlen: The size of the buffer. This value is also used to determine the size of the output string, so make sure the value is big enough for the expected output.

After defining the buffer and calling the ATYDC3GC transaction, compile and move the C program to the IMS/TM program library (such as IMS.PGMLIB).

COBOL Call
The ATYDC3GL transaction is called using a template similar to the following:
      *
      * COBOL COPY OF DATA BUFFER
      *
       01 COMM-DATA-BUFF PIC X(5000).
       01 COMM-DATA-BUFF-ERROR REDEFINES COMM-DATA-BUFF.
          05 COMM-ERROR-STATUS PIC S9(8) COMP SYNC.
          05 COMM-ERROR-MSG PIC X(256).
       01 COMM-DATA-BUFF-INPUT REDEFINES COMM-DATA-BUFF.
          05 INPUT-COMMAREA-3GL.
             10 INCOM-VERSION PIC S9(8) COMP SYNC.
      *       IP1:PORT[,IP2:PORT] [,...]
             10 INCOM-SERVERS-URLS PIC X(256).
             10 INCOM-USER-NAME PIC X(64).
             10 INCOM-PASSWORD PIC X(64).
             10 INCOM-WORKSPACE PIC X(64).
             10 INCOM-ADAPTER-NAME PIC X(64).
             10 INCOM-SCHEMA-FILE-NAME PIC X(256).
             10 INCOM-ENC-KEY-NAME PIC X(64).
             10 INCOM-ENC-KEY-VALUE PIC X(256).
             10 INCOM-INTERACTION-NAME PIC X(64).
             10 INCOM-DW-FLAGS PIC S9(8) COMP SYNC.
             10 INCOM-INP-FORMAT PIC S9(8) COMP SYNC.
             10 INCOM-EXEC-INPUT.
                15 INCOM-XML-BUFF.
                   20 INCOM-XML-ILEN PIC S9(8) COMP SYNC.
                   20 INCOM-XML-INTER-OUTREC-NAME PIC X(64).
      *====>>> CHANGE ??? TO LEN SPECIFIED IN INCOM-XML-ILEN
                   20 INCOM-XML-INPUT PIC X(???).
                15 INCOM-PARAM-BUFF REDEFINES INCOM-XML-BUFF.
                   20 INCOM-PARAM-COUNT PIC S9(8) COMP SYNC.
                   20 INCOM-PARAM-VALUE-LEN PIC S9(8) COMP SYNC.
                   20 INCOM-PARAM-INT-OUTREC-NAME PIC X(64).
      *====>>> CHANGE ?? TO COUNT SPECIFIED IN INCOM-PARAM-COUNT
                   20 INCOM-PARAM-NAME-VALUE OCCURS ?? TIMES.
                      25 INCOM-PARAM-NAME PIC X(32).
      *====>>> CHANGE ?? TO LEN SPECIFIED IN INCOM-PARAM-VALUE-LEN
                      25 INCOM-PARAM-VALUE PIC X(??).
       01 COMM-DATA-BUFF-OUTPUT REDEFINES COMM-DATA-BUFF.
          05 COMM-OUT-STATUS PIC S9(8) COMP SYNC.
          05 COMM-OUT-LENGTH PIC S9(8) COMP SYNC.
          05 COMM-OUT-DATA PIC X(4992).

       77 COMLEN PIC S9(4) COMP SYNC VALUE +5000.
       77 API-INTERFACE PIC X(8) VALUE 'ATYDC3GL'.

           CALL API-INTERFACE USING COMM-DATA-BUFF COMLEN.

where:

COMM-DATA-BUFF: The buffer with the interaction details.
COMLEN: The size of the buffer. This value is also used to determine the size of the output string, so make sure the value is big enough for the expected output.

The first time the CALL is performed, it does a one-time fetch and a call. Thereafter, it does only a call. To release the module just before termination of the calling program, write the following line of code:

CANCEL API-INTERFACE.

After defining the buffer and calling the ATYDC3GL transaction, compile and move the COBOL program to the IMS/TM program library (such as IMS.PGMLIB).

The Transaction Output


The output includes a 4-byte success flag: zero for success, otherwise failure. The output overwrites the input. If the result is failure, an error message with a length of 256 bytes is returned. If XML was specified for the input and the result is success, the output is formatted as XML, as follows:

The first four bytes specify the size of the output.
The following bytes make up the XML output.

If parameters were specified for the input and the result is success, the output is formatted as follows:

The first four bytes specify the size of the output.
The next 32 bytes specify the name of the output attribute.
The next bytes (to the exact length specified for the input parameter values) specify the output value.
The following bytes repeat the last two entries until all the output is specified.
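The following short C sketch inspects the returned buffer; it matches the COMM-DATA-BUFF-OUTPUT layout of the COBOL template above, and the variable names follow the earlier C Call example.

/* Hypothetical inspection of the returned buffer (see layout above).
 * The output overwrites the input buffer passed on the call. */
int status;
memcpy(&status, commDataBuff, 4);        /* 4-byte success flag       */
if (status != 0) {
    char msg[257];
    memcpy(msg, commDataBuff + 4, 256);  /* 256-byte error message    */
    msg[256] = '\0';
    /* report msg */
} else {
    int out_len;
    memcpy(&out_len, commDataBuff + 4, 4); /* size of the output      */
    /* the XML or parameter output follows at commDataBuff + 8 */
}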


86
JCA Client Interface
This section includes the following topics:

Overview
Outbound Connections
JCA Client Interface
Attunity JCA Enhancements
JCA Logging Mechanism
JCA Sample Program

Overview
The J2EE Connector architecture (JCA) provides a Java technology solution to the problem of connectivity between the many application servers and today's enterprise information systems (EIS). To achieve standard system-level connectivity between application servers and EISs, the J2EE Connector architecture defines a standard set of system-level contracts between an application server and an EIS. The resource adapter implements the EIS side of these system-level contracts. A resource adapter is a system-level software driver used by an application server or an application client to connect to an EIS. By plugging into an application server, the resource adapter collaborates with the server to provide the underlying transaction, security, and connection pooling mechanisms. For information on the JCA versions supported by AIS, see Attunity Integration Suite Supported Systems and Resources.

Outbound Connections
This section includes the following topics:

Creating a Connection
Managed Connection Factory settings

Creating a Connection
Connecting to AIS from a JCA application server requires creating and configuring a ManagedConnectionFactory instance.


A ManagedConnectionFactory instance is a factory of both ManagedConnection and EIS-specific connection factory instances, providing methods for matching and creating ManagedConnection instances. A ManagedConnection instance represents a physical connection to the underlying EIS, and provides access to the XAResource and LocalTransaction interfaces. The XAResource interface is used by the transaction manager to associate and dissociate a transaction with the underlying EIS resource manager instance and to perform the two-phase commit (2PC) protocol. The LocalTransaction interface is used by the application server to manage local transactions. This is done in the following steps:
Example 86-1 Creating a connection

AttuManagedConFactory mcf = new AttuManagedConFactory();
// Set ManagedConnectionFactory properties
mcf.set();
AttuConnectionFactory cf =
    (AttuConnectionFactory)mcf.createConnectionFactory();
javax.resource.cci.Connection con = cf.getConnection();

Managed Connection Factory settings


The managed connection parameters can be one or more of the following:
Table 86-1 ConnectionFactory Parameters

setCompression(String compressionMode): The compression mode of the data passed over the network. Can be set to true or false.
setConnectOptions(String str): Reserved for internal use. It can be used to set server connection properties.
setConnectTimeout(String sTimeout): The timeout to wait until a new connection is created.
setDefaultPort(String defaultPort): The default port number where the Attunity daemon/server is running.
setEisName(String eisName): The adapter (EIS) name to be used.
setEncryptionProtocol(String encProtocol): The protocol used for encrypting network communications. The RC4 protocol is currently supported.
setEncryptionKeyValue(String keyValue): The required password for passing or accessing encrypted information over the network.
setEncryptionKeyName(String keyName): The name associated with the encryption password and which the daemon on the remote server looks up.
setFakeXa(String isFakeXa): A flag indicating work with localTransaction under XaTransaction mode. The default is set to false.
setFirewallProtocol(String name): The firewall protocol name. The fixedNat protocol is currently supported.
setKeepAlive(String keepAlive): This parameter is deprecated and should not be used. The default is set to true.
setLogger(CoreLogger logger): Sets the Attunity JCA logger. Relevant when not using the log4j.properties file.


Table 86-1 (Cont.) ConnectionFactory Parameters

setLogLevel(String logLevel): Sets the log level. It can be set to one of the following:
  1: ACX log level set to INFO, Application log level set to FATAL.
  2: ACX log level set to FATAL, Application log level set to DEBUG.
  3: ACX log level set to DEBUG, Application log level set to FATAL.
  4: ACX log level set to INFO, Application log level set to FATAL.
  5: ACX log level set to INFO, Application log level set to FATAL.
  6: ACX log level set to INFO, Application log level set to FATAL.
setLogWriter(PrintWriter out): Sets the log writer.
setNetworkXMLProtocol(String sNetworkXMLProtocol): Sets the network XML protocol. The valid values are text for the old protocol and binary for the latest. If not set, the default (binary) is used.
setPassword(String password): Sets the password for connecting to the Attunity daemon/server. Must be called if the Attunity daemon/server requests user authentication.
setPersistentConnection(String persistConn): This parameter is deprecated and should not be used. The default is set to true.
setPortNumber(String portNumber): The port number where the Attunity daemon/server is running. Must be called if the Attunity daemon/server is running on a non-default port.
setQueueAdapterName(String qAdapter): Sets the Attunity Queue adapter for inbound events (for JCA 1.0 only).
setQueueWorkspace(String qWorkspace): Sets the Attunity Queue workspace for inbound events (for JCA 1.0 only).
setServerName(String serverName): The server name or IP address of the Attunity daemon/server where the requested EIS resides.
setUseNamespace(String useNamespace): A flag indicating the addition of a namespace to the output response XML. The namespace is added in the following format: xmlns=\"noNamespace://" + adapterName + "\". The default is set to false.
setUserName(String userName): The user name for connecting to the Attunity daemon/server. Must be called if the Attunity daemon/server requests user authentication.
setWorkspace(String newWorkspace): Sets the Attunity workspace where the EIS resides. If not set, the default Navigator workspace is used.
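The following minimal Java sketch shows several of these setters in use; the host, port, and adapter values are placeholders, and the call sequence follows Example 86-1.

import javax.resource.ResourceException;
import com.attunity.adapter.AttuManagedConFactory;
import com.attunity.adapter.AttuConnectionFactory;

// A minimal sketch, assuming a daemon on myhost:2551 and an adapter
// named "orders"; all values here are placeholders.
public class FactorySetup {
    public static javax.resource.cci.Connection connect()
            throws ResourceException {
        AttuManagedConFactory mcf = new AttuManagedConFactory();
        mcf.setEisName("orders");       // adapter (EIS) name
        mcf.setServerName("myhost");    // daemon host
        mcf.setPortNumber("2551");      // daemon port
        mcf.setWorkspace("Navigator");  // default workspace
        mcf.setLogLevel("1");           // see the log level values above
        AttuConnectionFactory cf =
            (AttuConnectionFactory) mcf.createConnectionFactory();
        return cf.getConnection();
    }
}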

JCA Client Interface


The Attunity JCA Client supports the interfaces listed in the following table:


Table 86-2 Supported JCA Interfaces

AttuAdapterMd: javax.resource.cci.resourceAdapterMetadata
AttuConnection: javax.resource.cci.Connection
AttuConnectionFactory: javax.resource.cci.ConnectionFactory
AttuConnectionManager: javax.resource.spi.ConnectionManager
AttuConnectionMd: javax.resource.cci.ConnectionMetadata
AttuConnectionSpec: javax.resource.cci.ConnectionSpec, Serializable
AttuConRequestInfo: javax.resource.spi.ConnectionRequestInfo, Serializable
AttuConstants: An internal class.
AttuDom: An interface for all XML-based objects.
AttuDomImpl: Implements the AttuDom interface.
AttuDomRecord: javax.resource.cci.Record. This class extends the original JCA Record class.
AttuDomWriter: An internal class.
AttuEventConnection: An internal class.
AttuInteraction: javax.resource.cci.Interaction
AttuInteractionSpec: javax.resource.cci.InteractionSpec. This class extends the original JCA InteractionSpec class and adds some enhancements. The additional methods are:
  setInteractionVerb(String verb): Sets the verb mode of interaction with an EIS instance (sync_send, sync_send_receive, sync_receive).
  getInteractionVerb(): Returns the mode of interaction with an EIS instance (sync_send, sync_send_receive, sync_receive).
  setExecutionTimeout(String sTimeout): Sets the number of milliseconds an interaction will wait for an EIS to execute the specified function.
  getExecutionTimeout(): Returns the execution timeout.
  setAdapterName(String adapterName): Sets the adapter that will execute the specified function.
  getAdapterName(): Returns the adapter name if it is set on the interactionSpec. Otherwise, it returns NULL.
  setFunctionName(String name): Sets the function name.
  getFunctionName(): Returns the function name.
AttuLocalTransaction: javax.resource.spi.LocalTransaction and javax.resource.cci.LocalTransaction
AttuLogger: An internal class, implementing the log4j logger.
AttuManagedCon: javax.resource.spi.ManagedConnection
AttuManagedConFactory: javax.resource.spi.ManagedConnectionFactory
AttuManagedConMd: javax.resource.spi.ManagedConnectionMetadata
AttuMappedRecord: javax.resource.cci.MappedRecord
AttuMdItem: An internal class, common interface for all metadata objects.
AttuRecordFactory: javax.resource.cci.RecordFactory
AttuTrHandle: An internal class.
AttuXAResource: javax.transaction.xa.XAResource
AttuXid: javax.transaction.xa.Xid
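As a brief sketch of the AttuInteractionSpec extensions listed above (the interaction name and timeout value are illustrative only; the constructor argument follows the usage in the samples later in this section):

import javax.resource.ResourceException;
import com.attunity.adapter.AttuInteractionSpec;

// A minimal sketch of the extended InteractionSpec setters;
// all values are placeholders.
public class InteractionSpecSketch {
    public static AttuInteractionSpec build() throws ResourceException {
        AttuInteractionSpec ispec = new AttuInteractionSpec("findOrder");
        ispec.setInteractionVerb("sync_send_receive"); // one of the three verbs
        ispec.setExecutionTimeout("20000");            // milliseconds, as a string
        return ispec;
    }
}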

Attunity JCA Enhancements


This section includes the following topics:

Attunity Metadata
Attunity Record

Attunity Metadata
In addition to the classes required for the JCA implementation, Attunity JCA provides extensions for handling the metadata required by an application adapter. The following types of metadata objects are provided:
Table 86-3 Attunity Enhancement Objects

AttuConnectionMd: This class has been extended to provide metadata-related interactions, records, fields, and enumerations, which exist in an application adapter. Metadata information returned from a records metadata request includes a transitive closure of all referenced records and enumerations. Enumerations can't be retrieved separately. The instance of the class is created through the AttuConnection.getMetaData() method.

AttuEnumMd: Describes the metadata of the schema enumeration (a set of name and value pairs). The instance of the class is created through the following steps:
1. AttuConnection.getMetaData(): Returns an AttuConnectionMd object.
2. AttuConnectionMd.getRecordsMd(Element schemaEl, Vector enumsMd): Returns a vector (enumsMd) with a separate object for every "enumeration" in a schema.

AttuInteractionMd: Describes the metadata of an interaction object. The instance of the class is created through AttuConnection.getInteractionMd(String[] names, int type), where names is an array of interaction names.

AttuFiledMd: Describes the metadata of a record field. The field type can be categorized as primitive (int, boolean, and so on), XML (DOM's text or element), record, or enumeration. A record's structure and enumeration values are defined in the EIS schema and can be accessed through the AttuConnectionMd object. The instance of the class is accessed through AttuRecordMd.getFieldsMd().

AttuRecordMd: Describes the metadata of a record object. The record metadata is built according to the W3C DOM tree describing it. The instance of the class is created through the following steps:
1. AttuConnection.getMetaData(): Returns an AttuConnectionMd object.
2. AttuConnectionMd.getRecordsMd(): Returns an array of metadata objects. Each object contains metadata of records.


Table 86-3 (Cont.) Attunity Enhancement Objects

AttuResourceMd: Describes the metadata for the application adapter. The instance of the class is created through AttuConnectionMd.getResourceMd(String[] names), where names is an array of resource names. If null, the metadata of the currently connected adapter is returned (as the first element in the array). The instance of the class is created through the following steps:
1. AttuConnection.getMetaData(): Returns an AttuConnectionMd object.
2. AttuConnectionMd.getResourceMd(String[] names): Gets metadata for the specified resources. This should be used when an ACX adapter is connected. If a non-ACX adapter is connected, it is possible to get its metadata by passing null to the function.

getMetadataList(String type, String mask, int maxResultItems, String startItemName): Returns a list of resource/records/interactions/events metadata objects for an application adapter with a live connection. Where:
  type: The required metadata object type (adapter, record, interaction, or event).
  mask: A wildcard (*, %) for the names.
  maxResultItems: An optional parameter. If positive, it limits the number of names that are returned.
  startItemName: This optional parameter specifies an item name starting from which items are to be returned. If null, items following the first one are returned.

getMetadataItem(String type, String[] names): Returns the AttuDom object of the full resources/records/interactions/events metadata objects according to the type requested. Where:
  type: The required item type (adapter, interaction, record, schema, all, event, w3cSchema, wsdl, dtd).
  names: An array of String objects; a list of names for which metadata is required. For the adapter, schema, all, w3cSchema, wsdl, or dtd types, the names parameter is ignored. In these cases, metadata for the currently connected resource is returned.

Attunity Record
The following classes extend the original JCA Record classes and add some enhancements:
Table 86-4 Attunity RECORDS Enhancement Objects

AttuDomRecord: This object implements the javax.resource.cci.Record interface. The additional methods are:
  getDom(): Returns the DOM record content representation.
  setDom(Element content): Sets the record content specified by the DOM.
  setTextXml(String xml): Sets all XML as input, including the mappedRecord root's element.


Table 86-4 (Cont.) Attunity RECORDS Enhancement Objects

AttuMappedRecord: This object implements the javax.resource.cci.MappedRecord interface. The additional method is:
  getDom(): Returns the DOM record content representation.

Samples
Example 86-2 Sample code of using the getMetadataList method

AttuConnectionMd cm = (AttuConnectionMd)con.getMetaData();
String[] names = cm.getMetadataList("adapter", null, 0, null);
for (int i = 0; i < names.length; i++) {
    System.out.println("name = " + names[i]);
}

Example 86-3 Sample output for the getMetadataList method

name = query
name = DEMO
name = calc

Example 86-4 Sample code of using the getMetadataItem method

AttuConnectionMd cm = (AttuConnectionMd)con.getMetaData();
String[] names = {"WSDL (W3c)"};
AttuDom attuDom = cm.getMetadataItem("wsdl", names);
// Printing the output XML
if (attuDom.getDom() != null) {
    CoreDOMWriter domW = new CoreDOMWriter(false);
    String szXml = domW.toXMLString(attuDom.getDom());
    System.out.println(szXml);
}

Example 86-5 Sample output for each input for the getMetadataItem method

type: adapter
<getMetadataItemResponse> <adapter name='query' description='Attunity Connect Query Adapter' version='1.0' type='query' operatingSystem='INTEL-NT' vendor='Attunity Ltd.' transactionLevelSupport='2PC' authenticationMechanism='basic-password' maxActiveConnections='0' maxIdleTimeout='600' maxRequestSize='32000' connectionPoolingSize='0' poolingTimeout='0'/> </getMetadataItemResponse>

type: interaction, names=null


<getMetadataItemResponse> <interaction name='callProcedure' description='Call a stored procedure' mode='sync-send-receive' input='callProcedure' output='multipleResultset'> </interaction> <interaction name='ddl' description='Perform a DDL query' mode='sync-send-receive' input='ddl' output='status'> </interaction>


... </getMetadataItemResponse>

type: record, names={inputParameters}


<getMetadataItemResponse> <schema name='query' version='1.00'> <record name='inputParameter' align='4'> <field name='value' type='string'/> <field name='type' type='paramType'/> <field name='null' type='boolean'/> <field name='default' type='boolean'/> <field name='xmlValue' type='xml'/> <field name='bindToSql' type='int' default='0'/> </record> <enumeration name='paramType'> <item name='unspecified' value='0'/> <item name='string' value='1'/> <item name='number' value='2'/> <item name='timestamp' value='3'/> <item name='binary' value='4'/> <item name='xml' value='5'/> </enumeration> </schema> </getMetadataItemResponse>

type: event, names=null


<getMetadataItemResponse> <interaction name='eventStream' mode='async-send' input='eventStream'> </interaction> ... </getMetadataItemResponse>

type: schema, names=null


<getMetadataItemResponse> <schema name='query' version='1.00'> <enumeration name='outputFormat'> <item name='attributes' value='0'/> <item name='elements' value='1'/> <item name='msado' value='2'/> <item name='xml' value='3'/> </enumeration> <record name='ddl' align='4'> <field name='id' type='string'/> <field name='sql' type='string' array='*'/> <field name='passThrough' type='boolean'/> <field name='datasource' type='string'/> </record> ... </schema> </getMetadataItemResponse>

type: all, names=null


<getMetadataItemResponse> <adapter name='query' description='Attunity Connect Query Adapter' version='1.0' type='query' operatingSystem='INTEL-NT' vendor='Attunity Ltd.' transactionLevelSupport='2PC' authenticationMechanism='basic-password' maxActiveConnections='0' maxIdleTimeout='600' maxRequestSize='32000'


connectionPoolingSize='0' poolingTimeout='0'> <interaction name='callProcedure' description='Call a stored procedure' mode='sync-send-receive' input='callProcedure' output='multipleResultset'> </interaction> ... <schema name='query' version='1.00'> <enumeration name='outputFormat'> <item name='attributes' value='0'/> <item name='elements' value='1'/> <item name='msado' value='2'/> <item name='xml' value='3'/> </enumeration> ... </schema> </adapter> </getMetadataItemResponse>

type: w3cSchema, names=null


<getMetadataItemResponse> <xsd:schema xmlns:xsd='http://www.w3.org/2001/XMLSchema' elementFormDefault='qualified' version='1.00'> <xsd:simpleType name='outputFormat'> <xsd:restriction base='xsd:string'> <xsd:enumeration value='attributes'/> <xsd:enumeration value='elements'/> <xsd:enumeration value='msado'/> <xsd:enumeration value='xml'/> </xsd:restriction> </xsd:simpleType> ... </xsd:schema> </getMetadataItemResponse>

type: wsdl, names=null


<getMetadataItemResponse> <wsdl:definitions targetNamespace='noNamespace://localhost/query' xmlns:tns='noNamespace://localhost/query'xmlns='http://schemas.xmlsoap.org/wsdl/' xmlns:wsdl='http://schemas.xmlsoap.org/wsdl/'xmlns:xsd='http://www.w3.org/2001/XML Schema'> <wsdl:types> <xsd:schema targetNamespace='noNamespace://localhost/query'> <xsd:simpleType name='outputFormat'> <xsd:restriction base='xsd:string'> <xsd:enumeration value='attributes'/> <xsd:enumeration value='elements'/> <xsd:enumeration value='msado'/> <xsd:enumeration value='xml'/> </xsd:restriction> </xsd:simpleType> ... </xsd:schema> </wsdl:types> <wsdl:message name='callProcedureRequest'> <wsdl:part element='tns:callProcedure' name='callProcedure'/> <wsdl:message> <wsdl:message name='callProcedureResponse'> <wsdl:part element='tns:multipleResultset' name='multipleResultset'/> </wsdl:message> ...


<wsdl:portType name='queryPortType'> <wsdl:operation name='callProcedure'> <wsdl:input message='tns:callProcedureRequest' name='callProcedureRequest'/> <wsdl:output message='tns:callProcedureResponse' name='callProcedureResponse'/> </wsdl:operation> ... </wsdl:portType> </wsdl:definitions> </getMetadataItemResponse>

type: dtd, names=null


<getMetadataItemResponse> <![CDATA[ <!DOCTYPE query [ <!ELEMENT query (sql*,inputParameter*)> <!ATTLIST query id CDATA #IMPLIED outputFormat (attributes | elements | msado | xml) #IMPLIED outputRoot CDATA #IMPLIED binaryEncoding (base64 | hex) #IMPLIED metadata (true|false) #IMPLIED maxRecords CDATA #IMPLIED nullString CDATA #IMPLIED passThrough (true|false) #IMPLIED reuseCompiledQuery (true|false) "false" datasource CDATA #IMPLIED failOnNoRowsReturned (true|false)"false" batch (true|false) "false" > ... ]> ]]> </getMetadataItemResponse>

All the above objects implement the base AttuMdItem and the AttuDom interfaces.
Example 86-6 A sample code of using the Dom Record method

The following sample demonstrates using the Attunity JCA "Ordersys" adapter (supplied as part of AIS). The full sample is provided with the AIS installation. This sample executes the following steps:

A managedConnection Factory request.
A connection factory request.
A connection request.
An interaction request.
Invokes the interaction.
Extracts the interaction response information as an XML document (using the Record methods).
Places a new order using a MappedRecord object.

/*
 * This sample demonstrates the usage of the "Connect" JCA
 * legacy adapter in a non-managed (2-tier) mode.
 *
 * Prerequisites:
 * This program requires that the sample Ordersys adapter DLL is installed
 * in "Connect" with the adapter name 'orders'.
 */
import java.io.*;
import java.util.Vector;
import javax.resource.*;
import javax.xml.parsers.*;
import com.attunity.adapter.*;
import com.attunity.adapter.core.*;
import com.attunity.adapter.core.acp.*;
import org.w3c.dom.*;

public class OrdersSample {

    // Operational parameters for this sample
    static String ATTC_SERVER = "localhost";
    static String ATTC_PORT = "";
    static String ATTC_ADAPTER = "orders";

    public static void main(String[] args)
        // throws ResourceException, java.io.IOException
    {
        if (args.length > 0) {
            String machine = args[0];
            ATTC_SERVER = machine.substring(0, machine.indexOf(":"));
            int portStart = machine.indexOf(":");
            if (portStart >= 0)
                ATTC_PORT = machine.substring(portStart + 1, machine.length());
        }
        PrintWriter printWriter = new PrintWriter(System.out);

        // Step 1.
        //--------------------------------------------------------------------
        System.out.println(
            "Step 1. Acquire a Managed connection factory and configure it");
        AttuManagedConFactory mcf = new AttuManagedConFactory();
        try {
            mcf.setLogWriter(printWriter);
            mcf.setEisName(ATTC_ADAPTER);
            mcf.setServerName(ATTC_SERVER);
            mcf.setPortNumber(ATTC_PORT);
            // Username and password might be required, depending on
            // the server settings
            mcf.setUserName(null);
            mcf.setPassword(null);
        }
        catch (ResourceException re) {
            System.out.println("Error setting up the Managed connection factory:");
            System.out.println(re.getMessage());
            re.printStackTrace();
            return;
        }

        // Step 2.
        //--------------------------------------------------------------------
        System.out.println(
            "Step 2. Acquire a Connection factory and set it up");
        AttuConnectionFactory cf;
        try {
            cf = (AttuConnectionFactory)mcf.createConnectionFactory();
            cf.setLogWriter(printWriter);
            cf.setTimeout(20000);
        }
        catch (ResourceException re) {
            System.out.println("Error setting up the Connection factory:");
            System.out.println(re.getMessage());
            re.printStackTrace();
            return;
        }

        // Step 3.
        //--------------------------------------------------------------------
        System.out.println(
            "Step 3. Get an adapter connection from the connection factory");
        javax.resource.cci.Connection con;
        try {
            con = cf.getConnection();
        }
        catch (ResourceException re) {
            System.out.println("Error getting an Adapter connection:");
            System.out.println(re.getMessage());
            re.printStackTrace();
            return;
        }

        // Step 4.
        //--------------------------------------------------------------------
        System.out.println(
            "Step 4. Get an interaction from the connection");
        javax.resource.cci.Interaction interaction;
        try {
            interaction = con.createInteraction();
        }
        catch (ResourceException re) {
            System.out.println("Error creating an Adapter interaction:");
            System.out.println(re.getMessage());
            re.printStackTrace();
            return;
        }

        // Step 5.
        //
        // The input to the findOrder interaction is an XML document in the
        // following shape:
        //
        // <findOrder>
        //   <ORDER_ID>1</ORDER_ID>
        // </findOrder>
        //
        //--------------------------------------------------------------------
        System.out.println(
            "Step 5. Set up the findOrder interaction input and invoke interaction");
        javax.resource.cci.RecordFactory rf;
        javax.resource.cci.MappedRecord findOrderOut;
        try {
            AttuInteractionSpec ispec = new AttuInteractionSpec("findOrder");
            rf = cf.getRecordFactory();
            javax.resource.cci.MappedRecord findOrderIn =
                rf.createMappedRecord("findOrder");
            // Find order with ORDER_ID = 1
            findOrderIn.put("ORDER_ID", "1");
            findOrderOut = (javax.resource.cci.MappedRecord)
                interaction.execute(ispec, findOrderIn);
        }
        catch (ResourceException re) {
            System.out.println("Error preparing/calling the 'findOrder' interaction:");
            System.out.println(re.getMessage());
            re.printStackTrace();
            return;
        }

        // Step 6.
        //
        // The output of findOrder is an XML document in the following shape:
        //
        // <findOrderResponse>
        //   <ORDER ORDER_ID="1" ORDERED_BY="Jim Bo" N_LINES="2">
        //     <ADDRESS
        //       ADDRESSEE="Jim Bo & Sons, LLC."
        //       STREET="Fits Road"
        //       CITY="Oukalaka"
        //       ZIP="33212"
        //       STATE="NH"
        //       COUNTRY="USA" />
        //     <LINES LINE_NO="1" ITEM_NAME="Knife" QUANTITY="5" ITEM_PRICE="1.2" />
        //     <LINES LINE_NO="2" ITEM_NAME="Fork" QUANTITY="5" ITEM_PRICE="0.9" />
        //   </ORDER>
        // </findOrderResponse>
        //
        //--------------------------------------------------------------------
        System.out.println(
            "Step 6. Extract the response information as an XML document");
        CoreDomRecord domOrder = (CoreDomRecord) findOrderOut;
        CoreDOMWriter domWriter = new CoreDOMWriter(false);
        String xmlOrder;
        try {
            xmlOrder = domWriter.toXMLString(domOrder.getDom());
        }
        catch (ResourceException re) {
            System.out.println("Error writing response to XML:");
            System.out.println(re.getMessage());
            re.printStackTrace();
            return;
        }
        System.out.println("Response as XML is:");
        System.out.println(xmlOrder);

        // Step 7.
        //
        // record.get("@attr") - retrieves an attribute value
        // record.get("#elem") - retrieves a sub-element record
        //--------------------------------------------------------------------
        System.out.println(
            "Step 7. Extract the response information - e.g., to print it");
        javax.resource.cci.MappedRecord order =
            (javax.resource.cci.MappedRecord) findOrderOut.get("#ORDER");
        if (order == null)
            System.out.println("\tNo Order information was found");
        else {
            System.out.println("\tOrderID\t" + (String)order.get("@ORDER_ID"));
            System.out.println("\tOrderedBy\t" + (String)order.get("@ORDERED_BY"));
            javax.resource.cci.MappedRecord address =
                (javax.resource.cci.MappedRecord) order.get("#ADDRESS");
            if (address == null)
                System.out.println("\tNo address information was found");
            else {
                System.out.println("\tAddress information:");
                System.out.println("\t\tAddressee\t" + (String)address.get("@ADDRESSEE"));
                System.out.println("\t\tStreet\t" + (String)address.get("@STREET"));
                System.out.println("\t\tCity\t" + (String)address.get("@CITY"));
                System.out.println("\t\tZip\t" + (String)address.get("@ZIP"));
                System.out.println("\t\tState\t" + (String)address.get("@STATE"));
                System.out.println("\t\tCountry\t" + (String)address.get("@COUNTRY"));
            }
            Vector lines = (Vector) order.get("#LINES[]");
            if (lines == null)
                System.out.println("\tNo Order lines information was found");
            else {
                for (int i = 0; i < lines.size(); i++) {
                    javax.resource.cci.MappedRecord line =
                        (javax.resource.cci.MappedRecord) lines.get(i);
                    System.out.println("\tLineNo\t" + (String)line.get("@LINE_NO"));
                    System.out.println("\t\tItemName\t" + (String)line.get("@ITEM_NAME"));
                    System.out.println("\t\tQuantity\t" + (String)line.get("@QUANTITY"));
                    System.out.println("\t\tItemPrice\t" + (String)line.get("@ITEM_PRICE"));
                }
            }
        }

        // Step 8.
        //--------------------------------------------------------------------
        System.out.println(
            "Step 8. Place a new order using an XML string:");
        xmlOrder =
            "<placeOrder>\n" +
            "<ORDER ORDERED_BY='Winnie The Pooh'>\n" +
            " <ADDRESS\n" +
            "   ADDRESSEE='Forrest woods'\n" +
            "   STREET='Dry Lane'\n" +
            "   CITY='South Mountain'\n" +
            "   ZIP='12345'\n" +
            "   STATE='N/A'\n" +
            "   COUNTRY='Legend'/>\n" +
            " <LINES LINE_NO='1' ITEM_NAME='Honey' QUANTITY='5' ITEM_PRICE='8.2' />\n" +
            " <LINES LINE_NO='2' ITEM_NAME='Blueberries Jam' QUANTITY='' ITEM_PRICE='10.9' />\n" +
            " <LINES LINE_NO='2' ITEM_NAME='Ant Hill' QUANTITY='1' ITEM_PRICE='29.9' />\n" +
            "</ORDER>\n" +
            "</placeOrder>\n";
        System.out.println(xmlOrder);
        javax.resource.cci.MappedRecord placeOrderOut;
        try {
            AttuInteractionSpec ispec = new AttuInteractionSpec("placeOrder");
            javax.resource.cci.MappedRecord placeOrderIn =
                rf.createMappedRecord("placeOrder");
            // Set input record from the XML string
            ((CoreDomRecord)placeOrderIn).setTextXml(xmlOrder);
            placeOrderOut = (javax.resource.cci.MappedRecord)
                interaction.execute(ispec, placeOrderIn);
        }
        catch (ResourceException re) {
            System.out.println("Error preparing/calling the 'placeOrder' interaction:");
            System.out.println(re.getMessage());
            re.printStackTrace();
            return;
        }
    }
}

JCA Logging Mechanism


Attunity JCA enables runtime logging using log4j (a popular logging package for Java). The logging can be controlled by editing a configuration file, without modifying the application binary.


To activate the Attunity JCA logging
1. Extract the log4j jar file provided with the Attunity JCA product (inside the attunityResourceAdapter.zip file) under the lib directory.
2. Include the log4j jar file in your application classpath so that logging methods can find the required classes.
3. Extract the log4j.properties file from the zip file and put it in your application classpath. When you add this file to the classpath, ensure it appears before the Attunity Resource Adapter jars. You can make modifications to this file according to your specific requirements, based on the following guidelines. log4j can log messages with the following priorities:
   debug: Writes debugging messages which should not be printed when the application is in production.
   info: Writes messages similar to the "verbose" mode of many applications.
   warn: Writes warning messages to the log.
   error: Writes application error messages to the log.
   fatal: Writes critical messages to the log.
4. In the log4j.properties file, set log4j.rootCategory=<LOG_LEVEL>, which instructs log4j to ignore messages with a priority less than <LOG_LEVEL>.

The following is a sample log4j.properties file used for a JCA project:


Example 86-7 log4j.properties file

### This file has to be located in the same directory
### as QuerySample.class (the class running the "main()" method).
### Use one appender to log to a file (it is possible to use more appenders,
### to log to console, for example).
log4j.rootCategory=debug
log4j.additivity.com.attunity.connect.acx=false
log4j.category.com.attunity.connect.acx=DEBUG, logFile
log4j.category.com.attunity.connect.jca=DEBUG, logFile

### This appender writes to a file
log4j.appender.logFile=org.apache.log4j.RollingFileAppender
log4j.appender.logFile.File=D:\\QuerySample.log
## The PatternLayout defaults to %m%n which means print your-supplied message and a newline
log4j.appender.logFile.layout=org.apache.log4j.PatternLayout
### ConversionPattern: How to format each log message (What information to include).
### %p Outputs the message priority.
### %t Outputs the thread making the log request.
### %c Outputs the name of the category associated with the log request.
### %m Outputs the message itself, %n is newline.
log4j.category.com.attunity.connect.jca.appender.logFile.layout.ConversionPattern= %p %m %d{ISO8601} %n
log4j.category.com.attunity.connect.acx.appender.logFile.layout.ConversionPattern= %p %m %d{ISO8601} %n
#log4j.appender.logFile.layout.ConversionPattern= %p %m %d{ISO8601} %n

JCA Sample Program


The sample provided in this section demonstrates the usage of the Attunity JCA Query adapter (supplied as part of AIS) in a non-managed (2-tier) mode. The user must provide the following parameters before the query adapter starts the process:

The IP address of the daemon or server.
The port number where the daemon/server is running (press Enter for the default port).
The user name to connect to the daemon/server (press Enter when the daemon/server doesn't request user authentication).
The password used to connect to the daemon/server (press Enter when the daemon/server doesn't request user authentication).
An SQL query (for example: SELECT * from disam:nv_dept).

Note: When working with the query interaction, the SQL statement must be a SELECT statement.

The sample executes the following steps:

A connection request.
The query execution.
Prints the results to the console.
Closes the connection.

The sample program:


Example 86-8 Sample JCA Program

/*
 * This sample demonstrates the usage of the "Connect" JCA
 * query adapter in a non-managed (2-tier) mode.
 *
 * Prerequisites:
 * This program uses the default query adapter in "Connect".
 *
 * THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY
 * KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR
 * PURPOSE.
 */
import javax.resource.*;
import java.io.*;
import java.util.Vector;
import org.w3c.dom.*;
import com.attunity.adapter.*;
import com.attunity.adapter.core.*;

public class QuerySample {

    public QuerySample() {
    }

    public static void main(String[] args)
        throws ResourceException, java.io.IOException
    {
        AttuManagedConFactory mcf = new AttuManagedConFactory();

        // Ask user for server, user name and password information
        configureEis(mcf);
        String sSql = askUser("Enter the SELECT statement for execution: ");
        AttuConnectionFactory cf =
            (AttuConnectionFactory)mcf.createConnectionFactory();
        //-----------------------------------
        try {
            javax.resource.cci.Connection con = cf.getConnection();
            javax.resource.cci.Interaction interaction = con.createInteraction();
            // To execute the query we use the "query" interaction
            AttuInteractionSpec iSpeq = new AttuInteractionSpec("query");

            javax.resource.cci.RecordFactory rf = cf.getRecordFactory();
            javax.resource.cci.MappedRecord queryRecord =
                rf.createMappedRecord("query");
            queryRecord.put("##text", sSql);
            javax.resource.cci.Record oRec = interaction.execute(iSpeq, queryRecord);
            Element outEl = ((CoreDomRecord)oRec).getDom();
            CoreDOMWriter domW = new CoreDOMWriter(false);
            String xml = domW.toXMLString(outEl);
            System.out.println("got back " + xml);
            System.out.println("now using the get functions of the Record interface");
            /* Here we get an array of sub elements named record */
            Vector recordV = (Vector)((CoreMappedRecord)oRec).get("#record[]");
            if (recordV == null || recordV.isEmpty()) {
                System.out.println("No sub elements named record");
                return;
            }
            /* If the sql was select * from nv_dept and the output columns
             * are dept_budget && dept_id here we print them. */
            for (int objInd = 0; objInd < recordV.size(); objInd++) {
                CoreMappedRecord record = (CoreMappedRecord)recordV.get(objInd);
                String dept_budget = (String)record.get("@dept_budget");
                String dept_id = (String)record.get("@dept_id");
                System.out.println("dept_budget is " + dept_budget);
                System.out.println("dept_id is " + dept_id);
            }
            interaction.close();
            con.close();
        }
        catch (ResourceException ex) {
            System.out.print("Exception in QuerySample: " + ex.getMessage());
            ex.printStackTrace();
            throw ex;
        }
    }

    /*
     * Configure EIS we are working with.
     * Ask user for server, user name and password information
     */
    static private void configureEis(AttuManagedConFactory mcf)
        throws java.io.IOException, ResourceException
    {
        System.out.println();
        System.out.println();
        System.out.println("This sample demonstrates the work against 'query' EIS");
        System.out.println("=====================================================");
        System.out.println();
        System.out.println();
        // The sample demonstrates the work with "query" EIS
        mcf.setEisName("query");
        String sServer = askUser("Please enter the IP of the daemon or server: ");
        String sPort = askUser("Please enter the port number where daemon/server is running (ENTER for default): ");
        int iPort;
        // Use default port
        if (sPort.length() == 0)
            sPort = "";

        String sUser = askUser("Please enter the user name to connect the daemon/server: ");
        String sPassword = askUser("Please enter the password to connect the daemon/server: ");
        mcf.setServerName(sServer);
        mcf.setPortNumber(sPort);
        mcf.setUserName(sUser);
        mcf.setPassword(sPassword);
    }

    /*
     * Prints the question onto screen and returns typed answer
     */
    static private String askUser(String sQuestion) throws java.io.IOException
    {
        InputStreamReader isr = new InputStreamReader(System.in);
        char inBuf[] = new char[1000];
        String sAnswer;
        System.out.print(sQuestion);
        isr.read(inBuf, 0, 999);
        System.out.println();
        sAnswer = new String(inBuf);
        sAnswer = sAnswer.trim();
        return sAnswer;
    }
}


87
JDBC Client Interface
This section includes the following topics:

Overview
Connection
Data Types
JDBC API Conformance
JDBC Client Interface
JDBC Sample Program

Overview
JDBC is a Java programming interface allowing external access to SQL database manipulation and update commands. It enables the integration of SQL calls into a general programming environment by providing library routines that interface with the database. The JDBC driver provides support to allow Java applets, servlets, and applications, using JDK version 1.3 or higher, to access data sources. Using the JDBC driver, the user writes a Java program that interacts with a database. Using standard library routines, the user opens a connection to a database, uses JDBC to send SQL code to the database, and processes the results that are returned. When done, the user closes the connection. All supported data sources can be accessed from Java applications. This approach contrasts with the precompilation route taken with embedded SQL, which is converted to host language code (C/C++). Call level interfaces do not require precompilation and therefore avoid some of the problems of embedded SQL. The result is increased portability and a cleaner client-server relationship. The following figure shows how Attunity AIS interacts with JDBC:


Figure 87-1 The Interaction between AIS and JDBC

Connection
This section includes the following topics:

Creating a Connection
Connection String
Accessing Data Sources Directly

Creating a Connection
Before a database can be accessed, a connection must be opened between the client program and the database (server).
Note:

This connection (using a connection string) does not enable you to access data sources directly via the JDBC 2 DataSource interface functionality. The method for accessing data directly is described in Accessing Data Sources Directly.

This should be done using the following steps:

1. Loading the vendor-specific driver: This step is performed to ensure portability and code reuse. The AIS JDBC driver is loaded using the following code snippet:

   Class.forName("com.attunity.jdbc.NvDriver");

2. Making the connection: Once the driver is loaded and ready for a connection to be made, the user may create an instance of a connection object using the following code snippet:
Connection con = DriverManager.getConnection(url, username, password);

Where:

url: The URL for the AIS daemon.
username: The username needed to connect to the AIS daemon.
password: The password needed to connect to the AIS daemon.
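Putting the two steps together, here is a minimal sketch; the attconnect subprotocol, the host, port, credential, and data source values are placeholder assumptions (the URL syntax is detailed in Connection String below), and the table name follows the nv_dept example used elsewhere in this guide.

import java.sql.*;

public class ConnectSample {
    public static void main(String[] args) throws Exception {
        // Step 1: load the AIS JDBC driver.
        Class.forName("com.attunity.jdbc.NvDriver");
        // Step 2: connect; the URL values here are assumptions.
        String url = "jdbc:attconnect://myhost:2551;DefTdpName=legacy";
        Connection con = DriverManager.getConnection(url, "user", "pwd");
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT * FROM nv_dept");
        while (rs.next())
            System.out.println(rs.getString(1));
        rs.close();
        st.close();
        con.close();
    }
}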

Connection String
The connection string for the getConnection method has the following syntax:

jdbc:attconnect://[username:password@]machine[[:port][:encryption_protocol][/workspace]][;parameter=value][;parameter=value]...

Where:
username:password: The username/password needed to connect to the AIS daemon/server. If anonymous access is allowed on the remote machine, then these parameters are optional. If you don't specify a user name and password, and anonymous access is not allowed, then the "user=someUser" and "password=somePwd" tokens are used to access the computer. Note: The user ID of the computer you are accessing must be the same as the user profile ID.

machine: The IP address where the AIS daemon runs.

port: The port that the daemon listens to. If omitted, the default, 2551, is used. If you specify encryption_protocol in the connection string, then you must also specify the port number.

encryption_protocol: The protocol used for encrypting network communications. The RC4 and Des3 protocols are supported. Note: The value for the encryption protocol is case sensitive. If you specify encryption_protocol, then you must also specify a port number and the encryptionKey parameter.

workspace: The daemon's workspace. The default is Navigator.


parameter: The following parameters are available:

addDefaultSchema (1/0): Specifies that a schema shows a default owner name (public) if the data source does not natively support owners. The default is set to 1.

DefTdpName: The name of the data source which is accessed by this connection by default. Tables specified in SQL statements are assumed to be from this data source. If this argument is not specified, then SYS is the default. For tables from any other data source you must prefix each table name with the name of the data source, using the datasource:tablename format.

encryptionKey: Establishes encryption of client/server network communication from a Java thin client. Note: If you specify encryption_protocol in the connection string, then you must specify this parameter. This parameter can have one of the following attributes:
  resource: The name associated with the encryption password and which the daemon on the remote server looks up. If this resource entry is not specified, the daemon on the server machine uses the name of the user account that is accessing this remote machine.
  password: The password required in order to pass or access encrypted information over the network. A password entry surrounded by curly braces (as in {password}) is assumed to be in hexadecimal format. Thus {456ACF} is interpreted as 0x456ACF, and {TDX} is not a legal password but TDX is.

firewallProtocol: The firewall protocol used (if available). Currently, fixedNat is supported.

Log: Specifies the logging level. The following logging levels are available:
  0: No logging (the default).
  1: JDBC API logging.
  2: NAV API logging.
  3: JDBC and NAV APIs logging.

LogFile: Specifies the log file. This parameter is relevant only when the Log parameter value is other than zero. The log file name can include the following tokens, which are replaced with the appropriate values:
  %T: Specifies the date the file was created.
  %I: Specifies the sequence number of the current connection within the session.
  Notes:
  1. If neither %T nor %I are specified, a new log for this connection overwrites the existing log.
  2. If LogFile is not specified and Log is, then all output log information is delegated to DriverManager.

87-4 AIS User Guide and Reference

Parameter MaxConnections MaxStatements

Description Specifies the maximum number of simultaneous connections to this data source. The default is set to zero. Specifies the maximum number of active SQL statements that can run against this data source. The default is set to zero. Specifies that the schema shows tables only of the default data source, as if only one data source is defined. The default is set to 1, indicating single tdp mode. Specifies that commit/rollback are always processed in auto commit mode turned off. Setting this parameter to 1, indicates that when commit/rollback APIs are called, and auto commit mode is off, they are processed only if the transaction is not empty (executes were performed under this transaction). The default is set to 0. Specifies whether all SQL statements that do not return rowsets during this connection will pass directly to the native RDBMS data source parameter, without any parsing normally performed by the Query Processor. Setting this parameter to 1 enables passthru mode and causes Attunity Connect to open the connection in single data source mode. This parameter can be used only if the back-end data source is an SQL-based driver. SQL executed in passthru mode behave the same as individual passthru queries specified with the TEXT={{}} syntax; however, there is no way to override passthru mode for a particular query. Use passthru mode to issue queries that perform special processing not supported in Attunity Connect, such as ALTER TABLE and DROP INDEX. Note: It is not recommended using this option since it impacts on every DDL SQL statement, even if only some statements were intended.

OneTdpMode

OptimizeEmptyTrans

Passthru

Password

Specifies the password required in order to decrypt an encrypted user profile on the server. This parameter is used in conjunction with the User parameter. When set to 1, this parameter returns the data source name as part of the schema name for use with multi data source connections. The default is set to 0. When set to 1, this parameter returns the data source name as part of the schema name for use with multi data source connections. The default is set to 0. When set to 1, this parameter returns the data source name as part of the schema name for use with multi data source connections. The default is set to 0. When set to 1, a ResultSetMetadata object is created during the program execution. When set to 0, no metadata object is created, therefore metadata creation is disabled. This is useful when performance considerations are imported and the user program knows the resultset metadata. Then default is set to 1. Specifies the codepage to use when converting text BLOBs. This parameter controls the BLOB codepage as returned from the server. If not specified, the default codepage is used (UTF-8).

QualifyOwner

QualifySchema

QualifyTable

DescribeResultSet

BlobCodePage

JDBC Client Interface 87-5

Parameter StreamMode

Description A flag for the internal performance enhancing getRows method. It is activated when only when the resultSet parameter is set to forward-only and read-only. The default is set to 0. Specifies the name of a user profile in the repository on a server. The user profile controls access to remote machines and data sources.

User
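For example, the following snippet is a minimal sketch of a connection that uses several of these parameters; the host name, data source name, credentials, and log file name are illustrative, while the driver class and URL prefix are those shown in the sample program later in this section:

import java.sql.*;
// ...
Class.forName("com.attunity.jdbc.NvDriver");
String url = "jdbc:attconnect://prod.acme.com:2551/Navigator" +
             ";DefTdpName=legacy;Log=1;LogFile=ais_jdbc_%T.log";
Connection con = DriverManager.getConnection(url, "scott", "tiger");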

Accessing Data Sources Directly

Before attempting to access data sources directly, ensure that you have the Sun JDBC 2.0 Standard Extension packages installed. Perform the following steps to connect directly to a data source:

1. Include an import statement in the application to import the NvDataSource class, as follows:

import com.attunity.jdbc.NvDataSource;

2. Define a data source object, as follows:

NvDataSource ds = new NvDataSource();

Where ds is the name of the data source object.

3. Specify the methods for the data source object, using the following syntax:

ds.method

Where ds is the name of the data source object, as specified in the previous step. For example, the following specifies the server and port for a connection:

ds.setServerName("osf.acme.com");
ds.setPortNumber(8888);

Note: The methods available for the data source object are described in JDBC API Conformance.

4. Connect to AIS using the following syntax:

Connection con = ds.getConnection();

Where con is the connection instance. A complete sketch of these steps follows.
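Putting the steps together, the following is a minimal sketch of direct access. The server name, port, and credentials are illustrative; setUser and setPassword are the standard property setters listed under DataSource Properties below:

import java.sql.Connection;
import com.attunity.jdbc.NvDataSource;
// ...
NvDataSource ds = new NvDataSource();
ds.setServerName("osf.acme.com");  // AIS daemon host
ds.setPortNumber(8888);            // daemon port
ds.setUser("scott");               // optional, if the daemon requires authorization
ds.setPassword("tiger");
Connection con = ds.getConnection();
// ... work with the connection ...
con.close();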

Data Types
The Attunity JDBC driver supports the standard JDBC data type conversion rules between an application using Java data types and a data source using SQL types. These rules apply to JDBC methods on the ResultSet class (retrieving SELECT statement results as Java types) and on the PreparedStatement class (sending Java types as SQL statement input parameters). The data type information is divided into the following tables:

- Java and JDBC types mapped to specific SQL types.
- Conversions between Java object types and target SQL types.
- Use of ResultSet.getXXX methods to retrieve various types of data.

Note: Since the JDBC driver does not support output parameters, conversion information regarding retrieval of OUT parameters for the CallableStatement class does not apply.

JDBC and Java Type Mapping

This section provides information on the mapping between Java types, Java object types, JDBC types, and the data types that SQL supports through the CREATE TABLE statement. The actual data type used for the physical data depends on the underlying data provider that is accessed by AIS. Refer to the detailed descriptions of the AIS drivers for the mapping between data source-specific data types and the SQL data types. You must use the SQL type names in the CREATE TABLE statement when creating a new data source table using the JDBC driver.

Table 87-1 JDBC and Java Type Mapping

Java Type            | Java Object Type     | JDBC Type                        | SQL Type (1)
String               | String               | CHAR, VARCHAR, LONGVARCHAR       | CHAR, VARCHAR, TEXT (2)
java.math.BigDecimal | java.math.BigDecimal | NUMERIC, DECIMAL                 | NUMERIC
boolean              | Boolean              | BIT                              | TBD
byte                 | Integer              | TINYINT                          | TINYINT
short                | Integer              | SMALLINT                         | SMALLINT
int                  | Integer              | INTEGER                          | INTEGER
long                 | Long                 | BIGINT                           | Not supported (3)
float                | Float                | REAL                             | FLOAT
double               | Double               | DOUBLE, FLOAT                    | DOUBLE
byte[]               | byte[]               | BINARY, VARBINARY, LONGVARBINARY | BINARY, IMAGE
java.sql.Date        | java.sql.Date        | DATE                             | DATE
java.sql.Time        | java.sql.Time        | TIME                             | TIME
java.sql.Timestamp   | java.sql.Timestamp   | TIMESTAMP                        | TIMESTAMP
java.io.InputStream  | TBD                  | LONGVARCHAR, LONGVARBINARY       | TEXT, IMAGE

(1) A blank entry in this column indicates that Attunity Connect can process data corresponding to the listed Java and JDBC types from existing data sources only. You cannot use SQL to create such columns.
(2) Attunity Connect allows you to retrieve or modify TEXT (LONGVARCHAR) and IMAGE (LONGVARBINARY) fields, with certain restrictions. The restrictions are due primarily to the support for BLOBs available in the underlying data sources, and may vary from one data source to another (for further details, refer to the PreparedStatement interface).
(3) Attunity Connect does not support reading or writing to the JDBC type BIGINT. You can read other data types and convert the data to the Java type long using the getLong method; however, the data must be converted back to its stored data type using the appropriate setXXX method prior to updating the data source.

See also ADD Supported Data Types.
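For example, per note 3 above, a NUMERIC column can be read into a Java long with getLong, but must be written back with the setXXX method matching its stored type. A sketch, assuming a hypothetical table emp with a NUMERIC column salary:

long salary = rs.getLong("salary");  // read the NUMERIC value as a Java long
PreparedStatement upd = con.prepareStatement(
    "UPDATE emp SET salary = ? WHERE emp_id = ?");
upd.setBigDecimal(1, java.math.BigDecimal.valueOf(salary + 100));  // write back via the stored type
upd.setInt(2, 7);
upd.executeUpdate();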

Conversions Between Java Object Types and Target SQL Types

The following lists, for each Java object type, the target SQL types to which the setObject method can convert it, following the standard JDBC conversion rules. Some conversions may fail at runtime if the resulting value is invalid for the target type.

Table 87-2 Java Object Types and Target SQL Types Mapping

String: TINYINT, SMALLINT, INTEGER, FLOAT, DOUBLE, NUMERIC, CHAR, VARCHAR, TEXT, BINARY, IMAGE, DATE, TIME, TIMESTAMP
java.math.BigDecimal, Boolean, Integer, Long, Float, Double: TINYINT, SMALLINT, INTEGER, FLOAT, DOUBLE, NUMERIC, CHAR, VARCHAR, TEXT
byte[]: BINARY, IMAGE
java.sql.Date: CHAR, VARCHAR, TEXT, DATE, TIMESTAMP
java.sql.Time: CHAR, VARCHAR, TEXT, TIME
java.sql.Timestamp: CHAR, VARCHAR, TEXT, DATE, TIME, TIMESTAMP
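For example, setObject can send a String parameter to a NUMERIC target column; the conversion fails at runtime if the string does not parse as a number. The table and column names below are illustrative:

PreparedStatement ps = con.prepareStatement(
    "UPDATE nv_dept SET budget = ? WHERE dept_id = ?");
ps.setObject(1, "125000.50");          // String converted to the NUMERIC target type
ps.setObject(2, Integer.valueOf(10));  // Integer converted to INTEGER
ps.executeUpdate();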

Using the getXXX Methods to Retrieve Data Types

The following lists the SQL data types that can be retrieved through the JDBC getXXX methods, following the standard JDBC conversion rules. Each method gets the value of the designated data column as the Java type indicated in the method name. For best results, use the getXXX method whose Java type corresponds directly to the column's SQL type (for example, getInt for INTEGER columns).

Table 87-3 SQL Types Retrievable by the getXXX Methods

getByte, getShort, getInt, getLong, getFloat, getDouble, getBigDecimal, getBoolean: TINYINT, SMALLINT, INTEGER, FLOAT, DOUBLE, NUMERIC, CHAR, VARCHAR, TEXT
getString: all SQL types
getBytes: BINARY, IMAGE
getDate: CHAR, VARCHAR, TEXT, DATE, TIMESTAMP
getTime: CHAR, VARCHAR, TEXT, TIME, TIMESTAMP
getTimestamp: CHAR, VARCHAR, TEXT, DATE, TIME, TIMESTAMP
getAsciiStream: CHAR, VARCHAR, TEXT, BINARY, IMAGE
getUnicodeStream: CHAR, VARCHAR, TEXT, BINARY, IMAGE
getBinaryStream: BINARY, IMAGE
getObject: all SQL types
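For example, a row containing INTEGER, VARCHAR, DATE, and NUMERIC columns would typically be read with the getXXX method matching each column's type. A sketch; the query and column layout are illustrative:

ResultSet rs = ps.executeQuery();  // e.g. SELECT dept_id, dept_name, created, budget FROM nv_dept
while (rs.next()) {
    int id = rs.getInt(1);                             // INTEGER
    String name = rs.getString(2);                     // VARCHAR
    java.sql.Date created = rs.getDate(3);             // DATE
    java.math.BigDecimal budget = rs.getBigDecimal(4); // NUMERIC
}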

JDBC API Conformance

The JDBC driver supports JDK release 1.3 or higher. This section provides the following JDBC conformance information:

- Supported Interfaces
- Supported Classes
- DataSource Properties
- ConnectionPool Data Source and XADatasource Interface Properties
- Connection Pooling Properties

Supported Interfaces
The JDBC driver supports the interfaces and methods listed in the following table:

Table 87-4 Supported Interfaces and Methods

CachedRowSet
Supported methods: absolute, acceptChanges, afterLast, beforeFirst, cancelRowDelete, cancelRowInsert, cancelRowUpdates, clone, createCopy, createShared, clearWarnings, close, deleteRow, execute, findColumn, first, getArray, getAsciiStream, getBigDecimal, getBinaryStream, getBoolean, getByte, getBytes, getChapter, getCharacterStream, getConcurrency, getDate, getDouble, getFetchDirection, getFetchSize, getFloat, getInt, getLong, getMetaData, getObject, getReader, getRef, getRow, getShort, getShowDeleted, getStatement, getString, getTime, getTimestamp, getType, getUnicodeStream, getWarnings, getWriter, insertRow, isAfterLast, isBeforeFirst, isFirst, isLast, last, moveToCurrentRow, moveToInsertRow, next, previous, populate, refreshRow, relative, release, restoreOriginal, rowDeleted, rowInserted, rowUpdated, setFetchDirection, setFetchSize, setReader, setShowDeleted, setWriter, toCollection, updateAsciiStream, updateBigDecimal, updateBinaryStream, updateBoolean, updateByte, updateBytes, updateCharacterStream, updateDate, updateDouble, updateFloat, updateInt, updateLong, updateNull, updateObject, updateRow, updateShort, updateString, updateTime, updateTimestamp, wasNull
Limitations: The getBlob and getClob methods are not supported.

CallableStatement
Supported methods: none.

Connection
Supported methods: clearWarnings, close, commit, createStatement, getAutoCommit, getCatalog, getMetaData, getTransactionIsolation, getWarnings, isClosed, isReadOnly, nativeSQL, prepareCall, prepareStatement, rollback, setAutoCommit, setCatalog, setReadOnly, setTransactionIsolation (1)

ConnectionEventListener
Supported methods: connectionClosed, connectionErrorOccurred

ConnectionPoolDatasource
Supported methods: getPooledConnection, setPooledConnection, setLogWriter, getLogWriter
Limitations: The setLoginTimeout method is not supported.


DatabaseMetaData
Supported methods: allProceduresAreCallable, allTablesAreSelectable, dataDefinitionCausesTransactionCommit, dataDefinitionIgnoredInTransactions, doesMaxRowSizeIncludeBlobs, getBestRowIdentifier, getCatalogs, getCatalogSeparator, getCatalogTerm, getColumns, getCrossReference, getDatabaseProductName, getDatabaseProductVersion, getDefaultTransactionIsolation, getDriverName, getDriverVersion, getDriverMajorVersion, getDriverMinorVersion, getExportedKeys, getExtraNameCharacters, getIdentifierQuoteString, getImportedKeys, getIndexInfo, getMaxBinaryLiteralLength, getMaxCatalogNameLength, getMaxCharLiteralLength, getMaxColumnNameLength, getMaxColumnsInGroupBy, getMaxColumnsInIndex, getMaxColumnsInOrderBy, getMaxColumnsInSelect, getMaxColumnsInTable, getMaxConnections, getMaxCursorNameLength, getMaxIndexLength, getMaxProcedureNameLength, getMaxRowSize, getMaxSchemaNameLength, getMaxStatementLength, getMaxStatements, getMaxTableNameLength, getMaxTablesInSelect, getMaxUserNameLength, getNumericFunctions, getPrimaryKeys, getProcedureColumns, getProcedures, getProcedureTerm, getSchemas, getSchemaTerm, getSearchStringEscape, getSQLKeywords, getStringFunctions, getSystemFunctions, getTables, getTableTypes, getTimeDateFunctions, getTypeInfo, getURL, getUserName, isCatalogAtStart, isReadOnly, nullsAreSortedHigh, nullsAreSortedLow, nullsAreSortedAtStart, nullsAreSortedAtEnd, nullPlusNonNullIsNull, storesLowerCaseIdentifiers, storesLowerCaseQuotedIdentifiers, storesMixedCaseIdentifiers, storesMixedCaseQuotedIdentifiers, storesUpperCaseIdentifiers, storesUpperCaseQuotedIdentifiers, supportsAlterTableWithAddColumn, supportsAlterTableWithDropColumn, supportsANSI92EntryLevelSQL, supportsANSI92IntermediateSQL, supportsANSI92FullSQL, supportsCatalogsInDataManipulation, supportsCatalogsInIndexDefinitions, supportsCatalogsInPrivilegeDefinitions, supportsCatalogsInProcedureCalls, supportsCatalogsInTableDefinitions, supportsColumnAliasing, supportsConvert, supportsCoreSQLGrammar, supportsCorrelatedSubqueries, supportsDataDefinitionAndDataManipulationTransactions, supportsDataManipulationTransactionsOnly, supportsDifferentTableCorrelationNames, supportsExpressionsInOrderBy, supportsExtendedSQLGrammar, supportsFullOuterJoins, supportsGroupBy, supportsGroupByUnrelated, supportsGroupByBeyondSelect, supportsIntegrityEnhancementFacility, supportsLikeEscapeClause, supportsLimitedOuterJoins, supportsMinimumSQLGrammar, supportsMixedCaseIdentifiers, supportsMixedCaseQuotedIdentifiers, supportsTableCorrelationNames, supportsMultipleResultSets, supportsMultipleTransactions, supportsNonNullableColumns, supportsOpenCursorsAcrossCommit, supportsOpenCursorsAcrossRollback, supportsOpenStatementsAcrossCommit, supportsOpenStatementsAcrossRollback, supportsOrderByUnrelated, supportsOuterJoins, supportsPositionedDelete, supportsPositionedUpdate, supportsSchemasInDataManipulation, supportsSchemasInIndexDefinitions, supportsSchemasInPrivilegeDefinitions, supportsSchemasInProcedureCalls, supportsSchemasInTableDefinitions, supportsSelectForUpdate, supportsStoredProcedures, supportsSubqueriesInComparisons, supportsSubqueriesInExists, supportsSubqueriesInIns, supportsSubqueriesInQuantifieds, supportsTransactionIsolationLevel, supportsTransactions, supportsUnion, supportsUnionAll, usesLocalFiles, usesLocalFilePerTable
Limitations: All other methods for the DatabaseMetaData interface return information about a data source based on Attunity metadata, not the backend metadata. For example, the method getTypeInfo returns the SQL data type names (BINARY, CHAR, VARCHAR, TEXT, NUMERIC, TINYINT, SMALLINT, INTEGER, FLOAT, DOUBLE, IMAGE, DATE, TIME, and TIMESTAMP) rather than a DBMS-specific data type such as LONG (Oracle) or SCALED BYTE (Rdb).

DataSource (2)
Supported methods: getConnection, getLoginTimeout, setLogWriter, getLogWriter
The following methods are also supported:
- getActiveSize(): returns the number of connections in use.
- getDefaultMinPoolSize(): returns the default minimum pool size.
- getDefaultMaxPoolSize(): returns the default maximum pool size.
- getPoolSize(): returns the number of pooled connections in the pool. These connections are available for reuse.
Limitations: The setLoginTimeout method is not supported.

Driver
Supported methods: acceptsURL, connect, getMajorVersion, getMinorVersion, getPropertyInfo, jdbcCompliant

PooledConnection
Supported methods: getConnection, close, addConnectionEventListener, removeConnectionEventListener

PreparedStatement
Supported methods: addBatch, clearBatch, clearParameters, close, execute, executeBatch, executeQuery, executeUpdate, getMetaData, setAsciiStream, setBigDecimal, setBinaryStream, setBoolean, setByte, setBytes, setDate, setDouble, setFloat, setInt, setLong, setNull, setObject, setShort, setString, setTime, setTimestamp, setUnicodeStream
Limitations: To insert or update TEXT and IMAGE fields (JDBC data types LONGVARCHAR and LONGVARBINARY, respectively), you must pass the data as input streams to a PreparedStatement object. Use the setAsciiStream method to set the value for a TEXT field, and the setBinaryStream method to set the value for an IMAGE field; then call the executeUpdate method to perform the SQL INSERT or UPDATE. Additional restrictions on the retrieval or modification of TEXT and IMAGE fields may vary from one data source to another, depending on the support for BLOBs available in the underlying data source.

Referenceable
Supported methods: getReference
Limitations: Supported only in the DataSource object.

ResultSet
Supported methods: absolute, afterLast, beforeFirst, cancelRowUpdates, clearWarnings, close, deleteRow, findColumn, first, getAsciiStream, getBigDecimal, getBinaryStream, getBoolean, getByte, getBytes, getChapter, getCharacterStream, getConcurrency, getDate, getDouble, getFetchDirection, getFetchSize, getFloat, getInt, getLong, getMetaData, getObject, getRow, getShort, getStatement, getString, getTime, getTimestamp, getType, getUnicodeStream, getWarnings, insertRow, isAfterLast, isBeforeFirst, isFirst, isLast, last, moveToCurrentRow, moveToInsertRow, next, previous, refreshRow, relative, rowDeleted, rowInserted, rowUpdated, setFetchDirection, updateAsciiStream, updateBigDecimal, updateBinaryStream, updateBoolean, updateByte, updateBytes, updateCharacterStream, updateDate, updateDouble, updateFloat, updateInt, updateLong, updateNull, updateObject, updateRow, updateShort, updateString, updateTime, updateTimestamp, wasNull
Limitations:
- Chapter support: The getObject method returns a ResultSet if the column type is java.sql.Types.OTHER and the type name is CHAPTER.
- Scrollable resultsets: Scrollable resultsets are supported with the JDBC driver. The type must be set to TYPE_SCROLL_INSENSITIVE or an error occurs.
- The following methods are not supported: getArray, getBlob, getClob, setFetchSize.
- The following methods are supported in JDBC version 2 only: absolute, afterLast, beforeFirst, cancelRowUpdates, clearWarnings, deleteRow, first, getCharacterStream, getConcurrency, getFetchDirection, getFetchSize, getRow, getStatement, getType, insertRow, isAfterLast, isBeforeFirst, isFirst, isLast, last, moveToCurrentRow, moveToInsertRow, previous, refreshRow, relative, rowDeleted, rowInserted, rowUpdated, setFetchDirection, updateAsciiStream, updateBigDecimal, updateBinaryStream, updateBoolean, updateByte, updateBytes, updateCharacterStream, updateDate, updateDouble, updateFloat, updateInt, updateLong, updateNull, updateObject, updateRow, updateShort, updateString, updateTime, updateTimestamp

ResultSetMetadata
Supported methods: getCatalogName, getColumnCount, getColumnDisplaySize, getColumnLabel, getColumnName, getColumnType, getColumnTypeName, getPrecision, getScale, getSchemaName, getTableName, isAutoIncrement, isCaseSensitive, isChapter, isCurrency, isDefinitelyWritable, isNullable, isReadOnly, isSearchable, isSigned, isWritable

RowSet
Supported methods: absolute, afterLast, beforeFirst, cancelRowUpdates, clearWarnings, close, deleteRow, findColumn, first, getAsciiStream, getBigDecimal, getBinaryStream, getBoolean, getByte, getBytes, getChapter, getCharacterStream, getConcurrency, getDate, getDouble, getFetchDirection, getFetchSize, getFloat, getInt, getLong, getMetaData, getObject, getRow, getShort, getStatement, getString, getTime, getTimestamp, getType, getUnicodeStream, getWarnings, insertRow, isAfterLast, isBeforeFirst, isFirst, isLast, last, moveToCurrentRow, moveToInsertRow, next, previous, refreshRow, relative, rowDeleted, rowInserted, rowUpdated, setFetchDirection, setFetchSize, updateAsciiStream, updateBigDecimal, updateBinaryStream, updateBoolean, updateByte, updateBytes, updateCharacterStream, updateDate, updateDouble, updateFloat, updateInt, updateLong, updateNull, updateObject, updateRow, updateShort, updateString, updateTime, updateTimestamp, wasNull
Limitations:
- Chapter support: The getObject method returns a RowSet if the column type is java.sql.Types.OTHER and the type name is CHAPTER.
- Scrollable rowsets: The type must be set to TYPE_SCROLL_INSENSITIVE or an error occurs.
- The following methods are not supported: getArray, getBlob, getClob, getRef.

Statement
Supported methods: addBatch, clearBatch, clearWarnings, close, execute, executeBatch, executeQuery, executeUpdate, getMaxFieldSize, getMaxRows, getMoreResults, getQueryTimeout, getResultSet, getUpdateCount, getWarnings, setMaxRows
Limitations: The following methods are not supported: cancel, setCursorName, setEscapeProcessing, setMaxFieldSize, setQueryTimeout (3)

XAConnection
Supported methods: getXAResource

XADataSource
Supported methods: getXAConnection, getLoginTimeout, setLogWriter, getLogWriter
Limitations: The setLoginTimeout method is not supported.

XAResource
Supported methods: commit, end, forget, prepare, recover, rollback, start, isSameRM, getTransactionTimeout
Limitations: The setTransactionTimeout method is not supported.

(1) Isolation level dynamic changes are supported for specific data sources only. The change takes effect after a transaction is started.
(2) Also see DataSource Properties.
(3) You can use the daemon Call Timeout parameter on the server machine. However, setting this parameter will impact all client requests. Also, the server process will continue to run, even after the client has timed out.
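As noted in the PreparedStatement limitations above, TEXT and IMAGE values must be passed as input streams. The following sketch inserts a TEXT field with setAsciiStream; the table name and file are illustrative:

java.io.File textFile = new java.io.File("notes.txt");
PreparedStatement ps = con.prepareStatement(
    "INSERT INTO docs (id, body) VALUES (?, ?)");
ps.setInt(1, 1);
ps.setAsciiStream(2, new java.io.FileInputStream(textFile),
                  (int) textFile.length());  // TEXT field as an ASCII stream
ps.executeUpdate();
// For an IMAGE field, use setBinaryStream in the same way.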

Supported Classes
The JDBC driver also supports all of the fully implemented classes in the JDK 1.3 release or higher, listed in the following table:

Table 87-5 JDBC Classes

DataTruncation
Supported methods: getDataSize, getIndex, getParameter, getRead, getTransferSize

Date
Supported methods: toString, valueOf

DriverManager
Supported methods: deregisterDriver, getConnection, getDriver, getDrivers, getLoginTimeout, getLogStream, println, registerDriver, setLoginTimeout, setLogStream

DriverPropertyInfo
Supported methods: none. Note: There are no methods defined in the JDBC API for the DriverPropertyInfo class.

SQLException
Supported methods: getSQLState, getErrorCode, getNextException, setNextException

SQLWarning
Supported methods: getNextWarning, setNextWarning

Time
Supported methods: toString, valueOf

Timestamp
Supported methods: after, before, equals, getNanos, setNanos, toString, valueOf

Types
Supported methods: none. Note: There are no methods defined in the JDBC API for the Types class. See Data Types (JDBC and Java Type Mapping) for more information about data type mapping in the JDBC driver.

DataSource Properties
The JDBC driver supports standard properties for the DataSource (including the DataSource interface that provides connection pooling), ConnectionPoolDataSource, and XADatasource interfaces, as listed in the following table:

Table 87-6 DataSource Properties

databaseName (string): setDatabaseName, getDatabaseName. The name of a particular database on a server.
dataSourceName (string): setDataSourceName, getDataSourceName. The logical data source name.
description (string): setDescription, getDescription. A description of this data source.
password (string): setPassword, getPassword. The user's data source password.
portNumber (int): setPortNumber, getPortNumber. The port that the daemon listens to. The default value is 2551.
serverName (string): setServerName, getServerName. The data source server name.
user (string): setUser, getUser. The user's account name.

ConnectionPool Data Source and XADatasource Interface Properties

The JDBC driver also supports its own properties for the DataSource (including the DataSource interface that provides connection pooling), ConnectionPoolDataSource, and XADatasource interfaces, as listed in the following table:

Table 87-7 ConnectionPool, Data Source and XADatasource Properties

addDefaultSchema (boolean): setAddDefaultSchema, isAddDefaultSchema. This flag specifies that a schema shows the default owner name if the owner is empty.
connectionString (string): setConnectionString, getConnectionString. Any additional parameters for the connection string. The setConnectionString method receives a string including all the connection parameters that cannot be set by the DataSource object. All connection parameters that can be set by the DataSource object (including the username/password) must be set on that object and must not be included in the connectionString.
daemonPassword (string): setDaemonPassword, getDaemonPassword. The daemon password.
daemonUser (string): setDaemonUser, getDaemonUser. The daemon user name.
defTdpName (string): setDefTdpName, getDefTdpName. The name of the data source this connection accesses by default.
encryptionKey (string): setEncryptionKey, getEncryptionKey. The name of the encryption key.
logFile (string): setLogFile, getLogFile. The name of the log file.
logLevel (int): setLogLevel, getLogLevel. The level of logging/tracing.
maxConnections (int): setMaxConnections, getMaxConnections. The maximum number of active connections.
maxStatements (int): setMaxStatements, getMaxStatements. The maximum number of active statements.
oneTdpMode (boolean): setOneTdpMode, isOneTdpMode. Sets the data source to multi/single mode.
passThruMode (boolean): setPassThruMode, isPassThruMode. Sets the Pass Thru mode.
qualifySchema (boolean): setQualifySchema, isQualifySchema. When set to true, a schema returns the data source name as part of the schema/owner name.
workspace (string): setWorkspace, getWorkspace. The daemon's workspace (the default is Navigator).
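For example, connection parameters that have no dedicated setter can be passed through setConnectionString, while parameters that do have setters must be set on the object itself. A sketch; the host and the choice of parameters are illustrative:

NvDataSource ds = new NvDataSource();
ds.setServerName("prod.acme.com");
ds.setPortNumber(2551);
ds.setWorkspace("Navigator");
// Parameters without dedicated setters go in the connection string:
ds.setConnectionString("OptimizeEmptyTrans=1;QualifyOwner=1");
Connection con = ds.getConnection();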

Connection Pooling Properties

The JDBC driver also supports its own properties for the DataSource interface that provides connection pooling, as listed in the following table (the original table entries for these properties are corrupt in the source; the methods follow the property-accessor naming pattern used throughout this section):

Table 87-8 Connection Pooling Properties

maxPoolSize (int): setMaxPoolSize, getMaxPoolSize. The maximum number of connections held in the pool.
minPoolSize (int): setMinPoolSize, getMinPoolSize. The minimum number of connections held in the pool.
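The pooling properties are used with the NvConnectionPoolDatasource class (see the supported interfaces below). A minimal sketch, assuming the standard javax.sql pooling pattern and an illustrative host:

import java.sql.Connection;
import javax.sql.PooledConnection;
// ...
NvConnectionPoolDatasource cpds = new NvConnectionPoolDatasource();
cpds.setServerName("prod.acme.com");
cpds.setPortNumber(2551);
PooledConnection pooled = cpds.getPooledConnection(); // physical connection managed by the pool
Connection con = pooled.getConnection();              // logical handle for the application
// ... work with the connection ...
con.close();    // returns the physical connection to the pool
pooled.close(); // closes the physical connection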

JDBC Client Interface

The Attunity JDBC driver supports JDBC release 1.3 or higher, with the interfaces listed in the following table:

Table 87-9 Supported JDBC Interfaces

NvCachedRowSet: implements javax.sql.rowset.CachedRowSet
NvCachedRowSetReader: implements javax.sql.RowSetReader and java.io.Serializable
NvCachedRowSetWriter: implements javax.sql.RowSetWriter and java.io.Serializable
NvCallableStatement: implements java.sql.CallableStatement
NvConnection: implements java.sql.Connection
NvConnectionEventListener: implements javax.sql.ConnectionEventListener
NvConnectionPoolDatasource: implements javax.sql.ConnectionPoolDataSource and java.io.Serializable
NvDatabaseMetadata: implements java.sql.DatabaseMetaData
NvDataSource: implements javax.naming.Referenceable and java.io.Serializable
NvDataSourceFactory: implements javax.naming.spi.ObjectFactory
NvDataSourceInt: extends javax.sql.DataSource
NvDriver: the driver implementation class
NvDriverCoreBase: implements java.sql.Driver
NvPoolDataSource: implements javax.naming.Referenceable and java.io.Serializable; additionally, it implements the NvDataSourceInt class
NvPooledConnection: implements javax.sql.PooledConnection
NvPreparedStatement: implements java.sql.PreparedStatement
NvResultSet: implements java.sql.ResultSet
NvResultSetMetadata: implements java.sql.ResultSetMetaData
NvRow: implements java.io.Serializable and java.lang.Cloneable
NvRowSet: implements javax.sql.RowSet
NvRowSetListener: implements javax.sql.RowSetListener and java.io.Serializable
NvRowSetMetadata: implements javax.sql.RowSetMetaData and java.io.Serializable
NvScollableResultSet: implements java.sql.ResultSet
NvSerialBlob: implements java.sql.Blob, java.io.Serializable, and java.lang.Cloneable
NvSerialClob: implements java.sql.Clob, java.io.Serializable, and java.lang.Cloneable
NvSQLReconnectException: extends java.sql.SQLException
NvSQLWarning: extends java.sql.SQLWarning
NvStatement: implements java.sql.Statement
NvUpdateableResultSet: implements java.sql.ResultSet
NvXAConnection: implements javax.sql.XAConnection
NvXADataSource: implements javax.sql.XADataSource and java.io.Serializable
NvXAException: extends javax.transaction.xa.XAException
NvXAResource: implements javax.transaction.xa.XAResource
NvXid: implements javax.transaction.xa.Xid and java.io.Serializable
JDBC Sample Program

The sample provided in this section demonstrates the usage of the Attunity JDBC 2 driver. The sample executes the following steps:

- Creates a JDBC connection.
- Creates a PreparedStatement object and sets the SQL query.
- Executes the query and gets the results (if any).
- Prints the results to the console or to a file, according to the user specification.
- Closes the open objects (PreparedStatement, ResultSet, Connection).

You need to provide the following parameters before the JDBC driver starts the process:

- A user name to connect to the daemon/server. Press Enter if the Attunity daemon/server does not request authorization.
- A password to connect to the daemon/server. Press Enter if the Attunity daemon/server does not request authorization.
- An SQL query (for example, SELECT * FROM disam:nv_dept).

Note: If the data source name was not specified at the data source name request, it must be linked to the table name in the SQL query, separated with a colon (:).

The sample program:


Example 87-1 Sample JDBC Program

/**
 * Title:       SimpleJDBC
 * Description: SimpleJDBC to work with Attunity
 * @version 1.0
 */
import java.sql.*;
import java.io.*;

public class SimpleJDBC {
    public static void main(String args[]) {
        accessEns();
    }

    private static boolean accessEns() {
        Connection con = null;
        ResultSet rs = null;
        PreparedStatement ps = null;
        int rowCount = 0;
        try {
            // Initialize the JDBC driver
            Class.forName("com.attunity.jdbc.NvDriver");
        }
        catch (Exception ex) {
            System.out.println("failed to initialize the jdbc driver. The error msg is: \n"
                + ex.getMessage());
            return false;
        }
        try {
            String url = "jdbc:attconnect://localhost;DefTdpName=Disam;";
            String sUser = askUser("Please enter the user name to connect the daemon/server "
                + "or press <Enter> for anonymous access: ");
            String sPassword = askUser("Please enter the password to connect the daemon/server "
                + "or press <Enter> for anonymous access: ");
            con = DriverManager.getConnection(url, sUser, sPassword);
            String sQuery = askUser("Enter the SELECT statement for execution: ");
            ps = con.prepareStatement(sQuery);
            rs = ps.executeQuery();
            while (rs.next()) {
                rowCount++;
                String name = rs.getString(2);
                System.out.println("name = " + name);
            }
            System.out.println("The total rows read = " + rowCount);
            // Close the result set before the statement, then the connection
            rs.close();
            ps.close();
            con.close();
            System.out.println("SUCCESS - END OF THE PROGRAM!!!");
            return true;
        }
        catch (SQLException ex) {
            System.out.println("failed to execute the db query. The error msg is: \n"
                + ex.getMessage());
            return false;
        }
        catch (Exception ex) {
            System.out.println("Exception !!!!!!!!!: " + ex.getMessage());
            return false;
        }
    }

    /*
     * Prints the question onto the screen and returns the typed answer
     */
    static private String askUser(String sQuestion) throws java.io.IOException {
        InputStreamReader isr = new InputStreamReader(System.in);
        char inBuf[] = new char[1000];
        String sAnswer;
        System.out.print(sQuestion);
        isr.read(inBuf, 0, 999);
        System.out.println();
        sAnswer = new String(inBuf);
        sAnswer = sAnswer.trim();
        return sAnswer;
    }
}


88
ODBC Client Interface

This section contains the following topics:

- Connection
- ODBC Client Interface
- ODBC API Conformance
- Platform Specific Information
- Environment Variables

Connection
This section includes the following topics:

- Creating an ODBC Connection
- Defining a DSN
- Defining a File DSN
- Connection String Parameters

Creating an ODBC Connection

Connecting to AIS Server from an ODBC application requires that you set up a DSN. On Windows platforms, the DSN is created using the ODBC Data Source Administrator. On non-Windows platforms, the DSN must be the name of a data source defined in the binding configuration. After a DSN is defined, use one of the following ways to connect to AIS Server from an ODBC application:

- Passing the DSN name, and the username and password (if required), to the application.
- Using the SQLDriverConnect method and providing one of the following in the connect string:
  - Driver=Attunity Connect Driver
  - FileDSN=absolute_path_of_File_DSN
  - (On Windows platforms) DSN=name_of_DSN, where name_of_DSN is a user or system DSN. Refer to Defining a DSN for details.
  - (On non-Windows platforms) DSN=absolute_path_of_File_DSN

You can add additional connection string parameters, separated with a semicolon (;). For a description of the additional parameters, refer to Connection String Parameters.
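For example, the following connect strings could be passed to SQLDriverConnect; the DSN name, file path, data source name, and credentials are illustrative:

Driver=Attunity Connect Driver;DefTdpName=legacy;UID=scott;PWD=tiger
FileDSN=c:\OdbcThin\DSNs\myfiledsn.dsn
DSN=MyAisDsn;UID=scott;PWD=tiger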

Defining a DSN
Connecting to AIS Server from an ODBC application requires that you set up a Data Source Name (DSN). This section describes the ODBC Setup Wizard, which is used to define the properties for a DSN. The Attunity ODBC DSN setup supports 32-bit and 64-bit (x64 and 64-bit Itanium) Windows platforms. It can run on Windows XP and Windows Vista.

Note: Other properties can be specified by editing the DSN as a text file.

To open the DSN setup wizard:
1. From the Start menu, select Control Panel.
2. Open Administrative Tools.
3. Double-click Data Sources (ODBC).
4. Select the type of DSN you want to define (a User, System, or File DSN).
5. Click Add. The Create New Data Source screen is displayed.
6. Select Attunity Connect Driver from the list and click Next (whether you are defining a File DSN or a User or System DSN). The Opening Page of the Attunity Connect ODBC Setup Wizard is displayed.

The Opening Page

The following figure shows the ODBC Setup Wizard opening page. This page is used to enter the basic information for the DSN.

Figure 88-1 The ODBC Setup Wizard Opening Page

Enter the following information on this page:

- Name: Enter a name to identify the DSN.
- Description: You can enter a description for the DSN. This is optional.
- Server location: Select or enter the location of the Attunity Server you are working with. If you want to work on a specific port, enter the port number after the server name in the following format: server:port number. You can leave this field blank if you are working on a local server.
- Ping server: Click this button to ping the server that is selected in the Server location field. The ping result is displayed in a separate window.

If you do not enter a server, the Local Authentication Page is displayed. If you enter a server location, the Remote Server Authentication Page is displayed.

Local Authentication Page

The following figure shows the ODBC Setup Wizard Local Authentication page. On this page, supply the credentials for authentication to the AIS Server on a local machine.

Figure 88-2 Local Authentication Page

Enter the following information on this page:

- User profile: Select a user profile from the list. The user profile list is taken from the user profiles that you defined for the local machine. For more information on defining a user profile, see User Profiles. If you do not enter a user profile, the default NAV profile is used.
- Password: The password is entered as part of the User Profile. If you want to save the password setting, select the Save password check box. It is more secure to leave this check box cleared. Important: The password will not be encrypted. It is sent through the system as is.
- Test Login: Click this button to try to log in to the local machine with the credentials that are entered on this page.

After you enter the information on this page, click Next to go to the Local Binding Information Page.

Local Binding Information Page

The following figure shows the ODBC Setup Wizard Local Binding Information page. On this page, enter information about the binding and data source that you are using.

Figure 88-3 Local Binding Information Page

Enter the following information on this page:

- Binding: Select the binding that you want to use from the list. The binding list is taken from the bindings that you defined for the local machine. For more information on defining a binding, see Binding Configuration.
- Default data source: Select a data source from the list. The data source list is taken from the data sources that you defined for the selected binding. For more information on defining a data source, see Data Sources. You can select the following:
  - Single data source (for schema): When you select this check box, only the selected default data source is displayed in the ODBC schema. Use this if your ODBC tool cannot handle a large catalog, or to save system resources.
  - Virtual Database: This is automatically selected if the data source you selected is a virtual database.
  - Batch Update passthru: Select this to bypass the query processor and send queries directly to the backend database.
- Test data source: Click this to run a test on the data source. The DSN setup tries to access the data source and displays the first table. The test information is displayed in a separate window.

After you enter the information on this page, click Next to go to the Advanced Settings Page.

Remote Server Authentication Page

The following figure shows the ODBC Setup Wizard Remote Server Authentication page. On this page, supply the credentials for authentication to the AIS Server on a remote machine. You can also configure the DSN to use encryption.

Figure 88-4 Remote Server Authentication Page

Enter the following information on this page:

- Login ID: Enter the login ID for the user that is accessing the server on the remote machine. Leave this blank to allow anonymous login to the server.
- Password: Enter the password for the user that is accessing the server on the remote machine. Leave this blank to allow anonymous login to the server. If you want to save the password setting, select the Save password check box. It is more secure to leave this check box cleared. Important: The password will not be encrypted. It is sent through the system as is.
- User Encryption: Select this check box if you want to use encryption. If this option is selected, enter the following information:
  - Key name: Enter the name of the encryption password that the daemon on the server machine looks up.
  - Key value: Enter the value for the encryption key.
  For more information on using encryption, see Add Encryption Keys.
- Test Login: Click this button to try to log in to the remote machine with the credentials that are entered on this page.

After you enter the information on this page, click Next to go to the Remote Server Binding Page.

Remote Server Binding Page

The following figure shows the ODBC Setup Wizard Remote Server Binding page. On this page, enter information about the binding and data source that you are using.

Figure 88-5 Remote Server Binding Page

Enter the following information on this page:

- Server Workspace: From the list, select the workspace that you want to use on the server where you are working. The list is taken from the workspaces that you defined for the remote machine. For more information, see Adding and Editing Workspaces.
- Default data source: Select a data source from the list. The data source list is taken from the data sources that you defined on the remote machine. For more information on defining a data source, see Data Sources.
- Local language: Select a language code page to use with the data source. The list is determined by the available language settings of the Windows computer. You can select the following:
  - Use UTF-8 encoding for strings: Select this if you want to use UTF-8 encoding with the data source. If this is selected, you cannot select a different language; English will always be used.
  - Single data source (for schema): When you select this check box, only the selected default data source is displayed in the ODBC schema. Use this if your ODBC tool cannot handle a large catalog, or to save system resources.
  - Use remote query processor only: Select this to use an AIS query processor on a remote machine.
  - Batch Update passthru: Select this to bypass the query processor and send queries directly to the backend database.
- Test data source: Click this to run a test on the data source. The DSN setup tries to access the data source and displays the first table. The test information is displayed in a separate window.

After you enter the information on this page, click Next to go to the Advanced Settings Page.

Advanced Settings Page

The following figure shows the ODBC Setup Wizard Advanced Settings page. On this page, define the logging levels, AIS configuration settings, and threading options.

Figure 88-6 Advanced Settings Page

Enter the following information on this page:

- Log file name: The name of the log file.
- General trace: Select this to include the general trace and time trace elements in the log file.
- Produce log to debug output: Select this check box to turn on the invoke debugger. This debug option is available in the debugger output window or through other free utilities. This is helpful in troubleshooting IIS and ODBC applications.
- Use Threading: This turns on threading. The default is no threading, because using AIS threading may cause some problems in the ODBC environment.
- Setting name: Select the environment properties that you want to expose from the list. Click Add to add them to the environment. The selected properties are displayed in the table in the middle of this page. To remove a property from the environment, select it in the table and click Remove. For information on these properties, see Environment Properties.
- Setting value: Enter a value for the selected environment property.

After you enter the information on this page, click Next to go to the Final Page.

Final Page
The following figure shows the ODBC Setup Wizard Final page. This page displays the configuration file with the configurations you entered in this wizard.

Figure 88-7 Final Page

On this page, you can select the format in which to display the configuration. Select the format from the Display configuration as list. You can select one of the following:

- ODBC File DSN
- ODBC connect string
- ADO/OLEDB connect string

You can use this page to review the configurations. If you need to make changes, click Back and make the changes you need. You can also manually edit the configuration in any text editor. After you select the format you are working with, click Copy to clipboard. Open your text editor and paste the contents into a new file. Make your changes and save the file in the correct location. After you finish checking the configuration and entering information in this wizard, click Finish to close the wizard and save all of your settings.
Defining a File DSN

A File DSN is a file with the .dsn extension, which provides information for establishing an ODBC connection to a data provider. The file is formatted in the Windows INI format and should start with a section header ([ODBC]) providing the DSN name. The file DSN can contain the following:

- All the AIS connection string parameters (Binding, BindURL, Database, DefTdpName, DSNPasswords, and so on).
- Environment parameters that override the values in the binding environment on the client (thin ODBC client). This is done by specifying the full environment parameter path. For example, debug/generalTrace=true.
- Location: Optional. In the case of a thin ODBC client, specifies the thin ODBC client installation location.
- LocalQp: A boolean parameter. When set to true, indicates that the thin ODBC client works in localQp mode and not remoteQp mode, which is the default.

Notes:
- Location, LocalQp, and environment parameters are relevant ONLY for the thin ODBC client.
- The Location and LocalQp parameters can also be specified in the connection string.

Example 88-1 File DSN

The following is an example of a File DSN, followed by code using the DSN. File DSN (located in c:\OdbcThin\DSNs\myfiledsn.dsn):

[ODBC]
DRIVER=Attunity Connect Driver
BindURL=panter.attunity.co.il:2551
LocalQp=true

Sample code:

/* Allocate environment handle */
rc = SQLAllocEnv(&hEnv);
/* Allocate connection handle */
rc = SQLAllocConnect(hEnv, &hConn);
/* Connect */
strcpy(ConnString, "FileDSN=c:\\OdbcThin\\DSNs\\myfiledsn.dsn;");
rc = SQLDriverConnect(hConn, NULL, ConnString, SQL_NTS,
                      szCompleteConnStr, 355, NULL, SQL_DRIVER_NOPROMPT);

Connection String Parameters

The connect string parameters can be one or more of the following:

- Binding=name|XML_format: Specifies the data source connection information.
  - name: The name of the binding settings in the local repository. This provides access to all data sources defined in this binding configuration. See Editing Bindings.
  - XML_format: The binding settings in XML format. This version of the parameter defines specific data sources either locally or on a remote machine and eliminates the need to define local binding settings in the repository. Only the data sources specified for the binding are accessed. If you want to access the data sources in all the binding settings on a remote machine, use the BindURL parameter (see below).
  The settings include the following:
  - name: The name of a data source defined in the binding. See Editing Bindings.
  - type: The driver used to access the data source if it resides on the client machine, or the value REMOTE if the data resides on a remote machine. If the value is REMOTE, the binding on the remote machine is updated with the values of the name, connect, and Datasource and Config properties. See Editing Bindings.
  - connect: If the type value is a driver, this value is the connection information to the data source. If the type value is REMOTE, this value is the address of the remote machine where the data source resides, and the workspace on that machine (if the default workspace is not used).
  - Configuration properties: Properties specific to the data source. For details, see the specific driver.

Example 88-2 Connection string in XML format

The following sample shows a connection to a local, demo DISAM data source:

Binding="<?xml version="1.0" encoding="iso-8859-1"?>
<navobj>
  <datasources>
    <datasource name="demo" type="add-disam" readOnly="true">
      <config newFileLocation="D:\disam" audit="true"/>
    </datasource>
  </datasources>
</navobj>"

The following sample shows a connection to a remote demo data source:

Binding="<?xml version="1.0" encoding="iso-8859-1"?>
<navobj>
  <datasources>
    <datasource name="demo" connect="develop/acme.com" type="remote"/>
  </datasources>
</navobj>"

- BindURL=[attconnect://][username:password@]host[:port][/workspace][&...][|...]: Specifies an AIS Server to connect to, whose data sources, defined in the binding settings on this server, are available. This parameter eliminates the need to define a local binding with entries for data sources on a server. If you want to access only a subset of the data sources on the server, use the Binding parameter (see above).
  - attconnect://: An optional prefix to make the URL unique when the context is ambiguous.
  - username:password@: An optional user ID/password pair for accessing the AIS Server.
  - host: The TCP/IP host where the daemon resides. Both numeric form and symbolic form are accepted.
  - port: An optional daemon port number. This item is required when the daemon does not use the Sun RPC portmapper facility.
  - workspace: An optional workspace to use. If omitted, the default workspace (Navigator) is used.
  - &: Multiple BindURLs may be specified, using an ampersand (&) as separator. Spaces between the BindURLs are not allowed. If one of the machines listed is not accessible, the connect string fails.
  - |: Multiple BindURLs may be specified, using an OR symbol (|) as separator. Spaces between the BindURLs are not allowed. The connect string succeeds as long as one of the machines listed is accessible.

  Note the following:
  - A data source name may appear multiple times (for example, in the local and remote bindings). AIS resolves this ambiguity by using the first definition of any DSN and disregarding any subsequent definitions. Thus, if a DSN called SALES appears in the local binding and via the BindURL parameter, the local definition is used.
  - When using BindURL, AIS binds upon initialization to all of the DSNs defined for the binding, regardless of which DSNs are actually used (note that this may result in decreased performance).
  - For each server specified in the BindURL connect string item, AIS automatically adds a remote machine (dynamically, in memory) called BindURLn with n=1,2,..., according to the order of the elements in the BindURL value.
  - For multiple BindURLs, use the following syntax to specify a remote Query Processor to use: BindURLn=[attconnect://]..., where n is the number of the BindURL specifying the remote machine whose Query Processor you want to use.

Example 88-3 BindURL Strings

The following string shows an AIS Server running on nt.acme.com, using the Prod workspace and logging on as minny (password mouse):

BindURL=minny:mouse@nt.acme.com/prod

The following string shows an AIS Server running on nt.acme.com, using the default workspace (Navigator), port 8888, and an anonymous login:

BindURL=nt.acme.com:8888
- Database: The name of a virtual database that this connection accesses. The virtual database presents a limited view of the available data, such that only selected tables from one or more data sources are available, as if from a single data source. (1) For more information, see Using a Virtual Database. If set, only the virtual database can be accessed using this connect string. This property is equivalent to the Virtual Database option in the definition of a DSN (see Defining a DSN).

(1) Attunity Federate provides the ability to view multiple data sources as a single federated data source.

- DefTdpName=data_source: The name of the single data source you want to access as the default using this connection. Tables specified in SQL statements are assumed to be from this data source. If this parameter is not specified, SYS is the default. For tables from any other data source, you must prefix each table name with the name of the data source, using the format data_source:tablename. Attunity Connect opens the connection in single data source mode (unless explicitly overridden by setting OneTdpMode=0, see below). This property is equivalent to the Default Data Source option in the definition of a DSN (see Defining a DSN).
- DSNPasswords=data_source|machine_alias=username/password[&data_source|machine_alias=username/password[&...]]: User profile information (username and password) granting access to a data source or remote machine via this connection. As an alternative to storing usernames and passwords in the user profile, this parameter allows you to dynamically supply one or more pairs of username and password values, with each pair assigned to a particular data source or remote machine, where:
  - data_source: A data source name defined in the binding configuration.
  - machine_alias: A machine alias defined in the binding configuration.
  - username/password: A pair of user ID and password values needed in order to access the indicated data source.
- LocalQp=1|0: Specifies that the ODBC client works in local QP mode (1) and not remote QP mode (0, the default).
- OneTdpMode=1|0: Specifies whether you are working in single (1) or multiple (0) data source mode. You must explicitly set a value for OneTdpMode, as well as setting a value for DefTdpName (see above), for Attunity Connect to work in single data source mode. Otherwise, the connection is opened to allow access to multiple data sources. This property is equivalent to the Single option in the definition of a DSN (see Defining a DSN).
- Passthru=1|0: Specifies whether all SQL statements that do not return rowsets during this connection pass directly to the native RDBMS data source, without any parsing normally performed by the Query Processor. Specifying 1 enables passthru mode and causes Attunity Connect to open the connection in single data source mode. This parameter can be used only if the backend data source is SQL-based. SQL executed in passthru mode behaves the same as individual PASSTHRU queries specified with the TEXT={{}} syntax; however, there is no way to override passthru mode for a particular query. Use passthru mode to issue queries that perform special processing not supported in Attunity Connect, such as ALTER TABLE and DROP INDEX. This property is equivalent to the Passthru option in the definition of a DSN (see Defining a DSN).
  Note: Attunity does not recommend using this option, since it impacts every DDL SQL statement, even if only some statements were intended. Also refer to For all SQL During a Session.
- PWD=password: Specifies the password required in order to access the user profile. For details, see Managing a User Profile in Attunity Studio.
- QpTdpName=server_machine: Specifies the remote machine where query processing will take place. The name of this remote machine is defined in the binding configuration.
- SecFile=filespec: The name of a user profile other than the default (NAV). The SecFile entry is supported for backward compatibility only. As of Attunity Connect version 3.x, the UID parameter (described below) is used instead of SecFile.
- UID=userID: Specifies the name of a user profile in the repository. If the user profile is not specified, the default user profile (NAV) is used.
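For example, the following connect string (with illustrative DSN, data source, machine alias, and credential values) combines several of these parameters, supplying per-data-source credentials through DSNPasswords:

DSN=MyAisDsn;DefTdpName=orders;OneTdpMode=1;QpTdpName=prod_server;DSNPasswords=orders=scott/tiger&hr_machine=hradmin/secret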

ODBC Client Interface

This section includes the following topics:

- Supported Interfaces
- ODBC Schema Rowsets
- ODBC Data Types
- Supported Options

Supported Interfaces
Attunity Connect supports the following ODBC interfaces: SQLAllocEnv, SQLFreeEnv, SQLAllocConnect, SQLFreeConnect, SQLDisconnect, SQLDriverConnect, SQLAllocStmt, SQLPrepare, SQLNumParams, SQLBindParameter, SQLSetCursorName, SQLGetCursorName, SQLExecDirect, SQLExecute, SQLParamData, SQLPutData, SQLCancel, SQLTables, SQLColumns, SQLStatistics, SQLSpecialColumns, SQLProcedures, SQLProcedureColumns, SQLNumResultCols, SQLBindCol, SQLDescribeCol, SQLColAttributes, SQLFetch, SQLGetData, SQLRowCount, SQLGetInfo, SQLGetTypeInfo, SQLSetConnectOption, SQLGetConnectOption, SQLSetStmtOption, SQLGetStmtOption, SQLTransact, SQLExtendedFetch, SQLGetFunctions, SQLFreeStmt, SQLError, SQLMoreResults, SQLDataSources, SQLNativeSql, SQLParamOptions, SQLPrimaryKeys, SQLForeignKeys, SQLSetPos

ODBC Schema Rowsets

Attunity Connect supports the following ODBC schema rowsets: SQLCatalogs, SQLColumns, SQLForeignKeys, SQLGetTypeInfo, SQLPrimaryKeys, SQLProcedures, SQLProcedureColumns, SQLStatistics, SQLTables

ODBC Data Types


Attunity Connect supports the following SQL data types:

SQL_SMALLINT
SQL_TINYINT
SQL_INTEGER
SQL_DOUBLE (see note 1)
SQL_REAL
SQL_NUMERIC
SQL_LONGVARBINARY (treated as a BLOB)
SQL_VARCHAR
SQL_LONGVARCHAR (treated as a BLOB)
SQL_CHAR
SQL_DATE (see note 2)
SQL_TIME
SQL_TIMESTAMP
SQL_BINARY

Note 1: Since the double and real data types have limited accuracy, arithmetic operations involving two doubles or two reals may be incorrect.

Note 2: This type is reported as "Not Supported" to the ODBC Driver Manager. However, in existing data sources, columns of this type are processed properly.

Attunity Connect enables you to retrieve or modify SQL_LONGVARCHAR and SQL_ LONGVARBINARY fields, with certain restrictions. The restrictions are due primarily to the support for BLOBs available in the underlying data sources, and may vary from one data source to another. For specific details about the various data sources, see the specific driver. For example, data sources that support BLOBs may or may not support random positioning within a stream (as with the Seek method), and may or may not allow operations on partial "pieces" of a BLOB. This table lists a suggested mapping between ODBC data types and C and COBOL data types.
Table 88-1 Mapping Between ODBC, C, and COBOL Data Types

ODBC Data Type                            C Data Type       COBOL Data Type                                    Bytes
SQL-CHAR                                  SQL-C-CHAR        PIC X(nnn), PIC 9(nnn)                             nnn
SQL-NUMERIC (full word, no decimal point) SQL-C-LONG        PIC S9(09) COMP                                    4
SQL-NUMERIC (with decimal point)          SQL-C-DOUBLE      COMP-2                                             8
SQL-INTEGER                               SQL-C-LONG        PIC S9(09) COMP                                    4
SQL-SMALLINT                              SQL-C-SHORT       PIC S9(04) COMP                                    2
SQL-REAL                                  SQL-C-FLOAT       COMP-1                                             4
SQL-DOUBLE                                SQL-C-DOUBLE      COMP-2                                             8
SQL-DATE (YYYY-MM-DD)                     SQL-C-DATE        PIC S9(04) COMP, PIC 9(04) COMP, PIC 9(04) COMP    6
SQL-DATE                                  SQL-C-CHAR        PIC X(10)                                          10
SQL-TIME                                  SQL-C-TIME        PIC 9(04) COMP, PIC 9(04) COMP, PIC 9(04) COMP     6
SQL-TIME (HH:MM:SS)                       SQL-C-CHAR        PIC X(08)                                          8
SQL-TIMESTAMP                             SQL-C-TIMESTAMP   PIC S9(04) COMP plus six PIC 9(04) COMP fields     16
SQL-TIMESTAMP (YYYY-MM-DD HH:MM:SS)       SQL-C-CHAR        PIC X(23)                                          23
SQL-VARCHAR                               SQL-C-CHAR        PIC X(nnn)                                         nnn
SQL-LONGVARCHAR                           SQL-C-CHAR        PIC X(nnn)                                         nnn
SQL-BINARY                                SQL-C-BINARY      PIC X(nnn), PIC 9(nnn)                             nnn
SQL-VARBINARY                             N/A               N/A                                                N/A
SQL-LONGVARBINARY                         N/A               N/A                                                N/A
SQL-TINYINT                               SQL-C-SHORT       PIC S9(04) COMP                                    2

See also ADD Supported Data Types.

Supported Options
Attunity Connect supports the additional fInfoType value shown in the following table:


Table 88-2 SQLGetInfo fInfoTypes

fInfoType   Data Type         Description
1000        SQL_INFO__START   Returns the default data source type. For example, if the default data source is an ORACLE data source, SQLGetInfo(fInfoType = 1000) returns the string ORACLE.
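A minimal sketch of using this fInfoType from C, assuming hdbc is an open connection to Attunity Connect:

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

void print_default_ds_type(HDBC hdbc)
{
    SQLCHAR dstype[64];
    SQLSMALLINT len;

    /* fInfoType 1000 is the Attunity-specific SQL_INFO__START value
       described in Table 88-2. */
    if (SQLGetInfo(hdbc, 1000, dstype, sizeof(dstype), &len) == SQL_SUCCESS)
        printf("Default data source type: %s\n", dstype);
}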

Attunity Connect supports the additional fOption value shown in the following table:

Table 88-3 Additional fOption Types

fOption   Data Type   Description
1002      Long        Returns TRUE if the field is a BLOB; otherwise FALSE.

ODBC API Conformance


This section lists the ODBC 2.5 APIs that are implemented by AIS on a Windows platform. It includes the following topics:

Minimum Requirements of an ODBC Provider
Asynchronous Execution
General Information
Conformance Information
SQL Syntax Information

The following table lists the ODBC 2.5 APIs that are implemented by AIS Server on a Windows platform. Non-Windows platforms: the APIs marked as implemented only in the ODBC Driver Manager are not implemented there (except for SQLGetFunctions).
Table 88-4 ODBC APIs Implemented on Windows Platform

ODBC Function         Conformance Level
SQLAllocConnect       Core
SQLAllocEnv           Core
COBOLSQLAllocEnv      See Support for Non-C Applications on Platforms Other than Windows
SQLAllocStmt          Core
SQLBindCol            Core
SQLBindParameter      Level 2
SQLCancel             Core
SQLColAttributes      Core
SQLColumns            Level 1
SQLConnect            Core
SQLDataSources        Level 2
SQLDescribeCol        Core
SQLDisconnect         Core
SQLDriverConnect      Level 1
SQLDrivers            Level 2 (implemented only in ODBC Driver Manager)
SQLError              Core
SQLExecDirect         Core
SQLExecute            Core
SQLExtendedFetch      Level 2
SQLFetch              Core
SQLForeignKeys        Level 2
SQLFreeConnect        Core
SQLFreeEnv            Core
SQLFreeStmt           Core
SQLGetConnectOption   Level 1
SQLGetCursorName      Core
SQLGetData            Level 1
SQLGetFunctions       Level 1 (implemented only in ODBC Driver Manager)
SQLGetInfo            Level 1
SQLGetStmtOption      Level 1
SQLGetTypeInfo        Level 1
SQLMoreResults        Level 2
SQLNumParams          Level 2
SQLNumResultCols      Core
SQLParamData          Level 1
SQLPrepare            Core
SQLPrimaryKeys        Level 2
SQLProcedureColumns   Level 2
SQLProcedures         Level 2
SQLPutData            Level 1
SQLRowCount           Core
SQLSetConnectOption   Level 1
SQLSetCursorName      Core
SQLSetParam           Core
SQLSetPos             Level 2
SQLSetStmtOption      Level 1 (partial)
SQLSpecialColumns     Level 1
SQLStatistics         Level 1
SQLTables             Level 1
SQLTransact           Core

Minimum Requirements of an ODBC Provider


The ODBC provider must support all of the core SQL ODBC data types and expose the ODBC APIs listed in the following table. Some of these APIs are recommended only if they are used by the back-end data source.

Table 88-5 ODBC APIs: Minimum Requirements of an ODBC Provider

SQLAllocConnect
SQLAllocEnv
SQLAllocStmt
SQLBindCol
SQLBindParameter
SQLColumns
SQLConnect
SQLDescribeCol
SQLDisconnect
SQLError
SQLExecDirect
SQLExecute
SQLExtendedFetch
SQLFetch
SQLForeignKeys
SQLFreeConnect
SQLFreeEnv
SQLFreeStmt
SQLGetConnectOption
SQLGetData
SQLGetFunctions (see note 1)
SQLGetInfo
SQLGetTypeInfo
SQLNumParams
SQLNumResultCols
SQLParamData
SQLPrepare
SQLPrimaryKeys
SQLProcedureColumns
SQLProcedures
SQLPutData
SQLRowCount (see note 2)
SQLSetConnectOption (see note 3)
SQLSetStmtOption (see note 4)
SQLStatistics
SQLTables
SQLTransact

Notes:

1. On a Windows platform, there is no custom implementation for this function. Attunity Connect uses the ODBC Administrator implementation, which checks for the presence of an API in a driver DLL header. The ODBC driver supplies stubs that return SQL_ERROR for all non-implemented APIs, so SQLGetFunctions returns all APIs as supported. On non-Windows platforms, there is a custom implementation for this function.

2. Unavailable for SELECT statements (returns pcrow = -1). For batch update queries, pcrow = 0 indicates that there was no data to update/delete, and pcrow = n indicates that n rows were affected.

3. Attunity Connect supports the following options: SQL_ACCESS_MODE, SQL_AUTOCOMMIT, SQL_TXN_ISOLATION (the supported vparam of this option is SQL_TXN_READ_COMMITTED), and SQL_CURRENT_QUALIFIER (available only on a DSN in single data source mode).

4. Attunity Connect supports the following options:
   SQL_BIND_TYPE
   SQL_MAX_ROWS: Effective only in SELECT statements.
   SQL_CONCURRENCY: The supported vparams of this option are SQL_CONCUR_READ_ONLY (executes the query in read-only mode), SQL_CONCUR_LOCK (executes the query in pessimistic lock mode), SQL_CONCUR_VALUES (executes the query in optimistic lock mode), and SQL_CONCUR_ROWVER (unsupported; changed to SQL_CONCUR_VALUES).
   SQL_CURSOR_TYPE: The supported vparams of this option are SQL_CURSOR_FORWARD_ONLY, SQL_CURSOR_STATIC, SQL_CURSOR_KEYSET_DRIVEN (changed to SQL_CURSOR_STATIC), and SQL_CURSOR_DYNAMIC (changed to SQL_CURSOR_STATIC).
   SQL_ROWSET_SIZE
   SQL_ASYNC_ENABLE: This option is supported only on Windows platforms.

Asynchronous Execution
Attunity Connect supports asynchronous execution, enabling a query to be cancelled during execution. Note that if more than one query is active on the same server, those queries might also be cancelled along with the query for which the cancel was issued.
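A hedged sketch of the ODBC 2.x polling pattern this enables; the query text and the should_cancel() callback are invented for illustration:

#include <sql.h>
#include <sqlext.h>

extern int should_cancel(void);   /* hypothetical application check */

RETCODE run_cancellable_query(HSTMT hstmt)
{
    RETCODE rc;

    SQLSetStmtOption(hstmt, SQL_ASYNC_ENABLE, SQL_ASYNC_ENABLE_ON);
    rc = SQLExecDirect(hstmt, (SQLCHAR *)"SELECT * FROM big_table", SQL_NTS);
    while (rc == SQL_STILL_EXECUTING) {
        if (should_cancel())
            SQLCancel(hstmt);   /* may also cancel other active queries on the same server */
        /* In ODBC 2.x, re-calling the same function polls the async operation. */
        rc = SQLExecDirect(hstmt, (SQLCHAR *)"SELECT * FROM big_table", SQL_NTS);
    }
    return rc;
}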

General Information
This table displays the general information returned when accessing AIS data sources through ODBC:

Table 88-6 General Information for Accessing Attunity Data Sources via ODBC

Information Type    Returns
ODBCVER             0x0250
SQL_DRIVER_NAME     On Windows: ODNAV32.DLL; on non-Windows platforms: ODNAVSHR
SQL_DBMS_NAME       "Attunity Connect"

Conformance Information
This table displays the conformance information returned when accessing AIS data sources via ODBC:

Table 88-7 Conformance Information for Accessing Attunity Data Sources via ODBC

Information Type               Returns
SQL_ODBC_API_CONFORMANCE       0x0250
SQL_ODBC_SAG_CLI_CONFORMANCE   SQL_OSC_NOT_COMPLIANT
SQL_ODBC_SQL_CONFORMANCE       SQL_OSC_CORE

SQL Syntax Information


This table displays the SQL syntax information returned when accessing AIS data sources via ODBC:
Table 88-8 SQL Syntax Information for Accessing Attunity Data Sources via ODBC

Information Type               Returns
SQL_MAX_ID_NAME_LEN            63
SQL_IDENTIFIER_QUOTE_CHAR      "
SQL_QUALIFIER_NAME_SEPARATOR   :
SQL_SPECIAL_CHARACTERS         #$
SQL_IDENTIFIER_CASE            SQL_IC_MIXED (non-sensitive)

Platform Specific Information


This section contains the following topics:

Support for Non-C Applications on Platforms Other than Windows
ODBC Client Interface Under CICS (z/OS Only)
Sample Programs

Support for Non-C Applications on Platforms Other than Windows


Attunity Connect implements the COBOLSQLAllocEnv function, which can be used in place of the SQLAllocEnv API. By setting Attunity Connect to use strings that are space-padded, rather than null-terminated, as input or output, the COBOLSQLAllocEnv function enables applications based on languages other than C to work with ODBC. HP NonStop platforms: when using COBOL, you can only use HP NonStop NMCOBOL. COBOLSQLAllocEnv affects the way the following ODBC APIs handle strings:

SQLBrowseConnect
SQLColAttributes
SQLConnect
SQLDataSources
SQLDescribeCol
SQLDriverConnect
SQLDrivers
SQLError
SQLGetConnectOption
SQLGetCursorName
SQLGetInfo
SQLNativeSql
SQLSetConnectOption

ODBC Client Interface Under CICS (z/OS Only)


From a z/OS machine you can access data that resides on another z/OS machine, using either a COBOL or C program running under CICS. The daemon does not need to run on the local z/OS machine, as long as it is running on the machine where the data resides.

To set up the CICS ODBC interface:

1. Copy NAVROOT.LOAD(ATTCICSD) to a CICS DFHRPL library.

2. Make sure that the CICS Socket Interface is enabled. You can enable this interface by issuing the following CICS command:

   EZAO START CICS

   Note: Refer to the TCP/IP V3R2 For MVS: CICS Sockets Interface Guide. If you are not sure whether the system is configured with the Socket Interface, try running the EZAC transaction. If the transaction produces a screen, you should be able to run the EZAO startup transaction. If not, check whether the transaction has been defined in a group that has not been installed, for example: CEDC V TRANS(EZAC) G(*). If it is defined in a group, install that group and try running EZAO again. Otherwise, configure CICS as outlined in the above guide.

To use the CICS ODBC interface, follow the steps below:

1. Copy the COBOL or C program to a CICS DFHRPL library.

2. Set up CICS resource definitions for the COBOL or C program and transaction. The following JCL can be used as a template:

//ATTCSD JOB 'ATTUNITY','CSD',MSGLEVEL=1,NOTIFY=&SYSUID
//STEP1 EXEC PGM=DFHCSDUP,REGION=512K,
// PARM='CSD(READWRITE),PAGESIZE(60),NOCOMPAT'
//STEPLIB DD DSN=<HLQ1>.SDFHLOAD,DISP=SHR
//DFHCSD DD UNIT=SYSDA,DISP=SHR,DSN=<HLQ2>.CSD
//OUTDD DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSIN DD *


*/********************************************************************/
*/* ATTUNITY ODBC CICS Definitions                                   */
*/*                                                                  */
*/********************************************************************/
*---------------------------------------------------------------------*
* Note: Install GROUP(ATT) - CEDA IN G(ATT)                           *
* If you are re-running this, you can uncomment the DELETE command.   *
*---------------------------------------------------------------------*
*
* Start ATTUNITY ODBC RESOURCES:
*
* DELETE GROUP(ATT)
DEFINE PROGRAM(ATTCICSD) GROUP(ATT) LANGUAGE(C) DATALOCATION(ANY)
       DE(Attunity ODBC DLL)
DEFINE PROGRAM(<PROG>) GROUP(ATT) LANGUAGE(<LANG>) DATALOCATION(ANY)
       DE(Attunity ODBC)
DEFINE TRANSACTION(<ATTTRAN>) GROUP(ATT) PROGRAM(<PROG>) TASKDATAL(ANY)
       DE(Attunity ODBC TRAN ID)
LIST GROUP(ATT)
*
* End ATTUNITY ODBC RESOURCES
*
/*
//

3. Make the following changes before running the JCL:

   Change the JOB card to suit the site.
   Change <HLQ1> to point to the CICS SDFHLOAD library.
   Change <HLQ2> to point to the CICS CSD dataset.
   Change <LANG> to LE370 for COBOL, or C for C.
   Change <PROG> to the COBOL or C program name.
   Change <ATTTRAN> to the CICS transaction name.

4. From CICS, install the ATT group by issuing the following command:

   CEDA IN G(ATT)

5. Compile the COBOL or C program and link it to the NAVROOT.FIXLIB(ATTCICSS) member, where NAVROOT is the high-level qualifier where AIS is installed. Move the compiled and linked module to a CICS LOAD library.

6. For COBOL programs only: Write, as part of the program, include definitions for the COBOL ODBC interface to match the C definitions found in the attcicsh.h include file (supplied as part of the mainframe kit, in NAVROOT.INCLUDEH.H).

Sample Programs
AIS provides C and COBOL sample programs as templates for incorporating the CICS ODBC interface into an existing C or COBOL application. The samples are open programs that use ODBC. They accept two input parameters: SERVER and SQL.


The format of the SERVER parameter is:


SERVER=[USER:PASSWORD@]IP:PORT[/WORKSPACE]; [LOG=IRPC | ODBC | SOCKETS | ALL;] [QUEUE=CICS_QUEUE_NAME;] [ODBC_connect_string_parameters;]

Where:

USER:PASSWORD@: An optional user ID/password pair for accessing the AIS server.

IP: The TCP/IP host where the daemon resides. Both numeric form and symbolic form are accepted.

PORT: The daemon port number. This item is required when the daemon does not use the Sun RPC portmapper facility.

WORKSPACE: An optional workspace to use. If omitted, the default workspace (Navigator) is used.

LOG: The details that are logged:
  IRPC: Generates a message in the log for each RPC operation.
  ODBC: Generates a message in the log for each ODBC operation.
  SOCKETS: Generates a message in the log for each socket operation.
  ALL: Generates a message in the log for each RPC, ODBC, and socket operation.

CICS_QUEUE_NAME: The name of the queue for output, defined under CICS, that is used when tracing the output of the program. When not defined, the default CICS queue is used.

The SQL parameter is any valid SQL statement. There must be a space or a new line between the SERVER and the SQL parameters.

Examples

ATYC SERVER=194.90.22.113:2551;LOG=ALL; SQL=SELECT * FROM NAVDEMO:NATION;

ATYC is a transaction that uses the C sample.

ATCO SERVER=194.90.22.113:2551; SQL=insert into navdemo:nation values(26, 'TIMOR', 3, 'Gained independence in 2002');

ATCO is a transaction that uses the COBOL sample.

Environment Variables
The following parameters define aspects of how Attunity Connect works with ODBC applications:

enableAsyncExecuting: Enables asynchronous execution.

forceQualifyTable: The catalog and table name are reported together as a single string (as DS:table_name).

maxActiveConnections: The maximum number of connections that an ODBC or OLE DB application can make through Attunity Connect. The default is 0, indicating that no maximum is set.


Note:

The greater the number of connections possible, the faster the application can run. However, other applications will run slower and each connection is counted as a license, restricting the total number of users who can access data through Attunity Connect concurrently. This is particularly the case when using MS Access as a front-end, since MS Access allocates more than one connection whenever possible.

maxActiveStatements: The value returned for the corresponding infoType of the ODBC SQLGetInfo API. The default is 0, indicating that there is no limit on the number of active statements.

procedureReturnValueName: The name of the return value parameter of a stored procedure. The default is RETURN_VALUE.
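The exact XML placement of these ODBC properties is not shown here; by analogy with the <oledb maxHRows> and <optimizer noHashJoin="true"/> environment elements cited elsewhere in this guide, a binding environment entry might look like the following sketch (the <odbc> element name and the sample values are assumptions, not documented syntax):

<odbc maxActiveConnections="20"
      maxActiveStatements="0"
      procedureReturnValueName="RETURN_VALUE"/>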


89
OLE DB (ADO) Client Interface
This section includes the following topics:

Overview
Methods and Properties
ADO Connect String
Optimizing ADO
ADO Schema Recordsets
OLE DB Data Types
OLE DB Properties

Overview
OLE DB is Microsoft's low-level interface to data across the organization including relational and non-relational databases, e-mail and file systems. OLE DB is an open specification designed to build on the success of ODBC, by providing an open standard for accessing all sorts of data.

Methods and Properties


AIS supports the following objects, methods and properties of ADO:
Table 89-1 ADO Objects, Methods, and Properties

Command
  Methods supported: CreateParameter, Execute
  Properties supported: ActiveConnection, CommandText (see note 1), CommandType, Name, Prepared, State
  Properties not supported: CommandTimeout

Connection
  Methods supported: BeginTrans, Close, CommitTrans, Execute, Open, OpenSchema, RollbackTrans
  Properties supported: Attributes, ConnectionString, ConnectionTimeout, CursorLocation, DefaultDatabase, IsolationLevel, Mode, Provider, State, Version
  Properties not supported: CommandTimeout

Error
  Properties supported: Description, HelpContext, HelpFile, NativeError, Number, Source, SQLState

Field
  Methods supported: AppendChunk, GetChunk
  Properties supported: ActualSize, Attributes, DefinedSize, Name, NumericScale, Optimize, OriginalValue, Precision, Type, UnderlyingValue, Value

Parameter
  Methods supported: Append, AppendChunk, Delete, Item, Refresh
  Properties supported: Attributes, Count, Direction, Name, NumericScale, Precision, Size, Type, Value

Property
  Methods supported: Item, Refresh
  Properties supported: Attributes, Count, Name, Type, Value, Version

RecordSet (see note 2)
  Methods supported: AddNew, CancelUpdate, Clone, Close, CompareBookmarks, Delete, GetRows, GetString, Move, MoveFirst, MoveLast, MoveNext, MovePrevious, NextRecordset, Open, Requery, Save, Supports, Update, UpdateBatch
  Properties supported: AbsolutePosition, ActiveConnection, BOF, Bookmark, CacheSize, CursorLocation, CursorType, DataMember, DataSource, EditMode, EOF, Filter, LockType, MarshalOptions, RecordCount, Sort, Source, State, Status, StayInSync
  Properties not supported: AbsolutePage, ActualSize, MaxRecords, PageCount, PageSize

Notes:

1. You must qualify the command text with the data source name (for example, oCmd.CommandText = "sqlsrv:storedproc0").

2. The following Recordset methods are not supported: CancelBatch, Next, Resync.

Notes

The setting for the CacheSize property (indicating the number of records from an ADO Recordset object that are cached locally) should be equivalent to the value of the <oledb maxHRows> AIS server environment property; the smaller of the two is used in practice. For example, if the ADO CacheSize property is set to 100 and the <oledb maxHRows> environment property is set to 50, only 50 rows are retrieved at a time.

AIS returns cursor type adOpenStatic when adOpenKeyset or adOpenDynamic is specified for the CursorType property.

AIS returns adXactReadCommitted for the IsolationLevel property. This property is read-only.


ADO Connect String


To specify AIS Server as the provider through ADO, use the Open method of the Connection object. You can connect to AIS Server as the provider using a Microsoft UDL, as follows:
connection.Open "file name=UDL_filename"

where UDL_filename is the full path of the UDL file. Alternatively, you can use the following format:
connection.Open "provider=AttunityConnect [;parameter=value[;parameter=value]...]"

For a description of the available parameters, refer to Connect String Parameters.


Example 891 Connect Strings

The following connect string connects to AIS Server via a UDL:


connection.Open "file name=c:\provider.udl"

The following connect string specifies AIS Server as the data provider and uses all the AIS Server defaults (such as the location of binding information and the default user profile).
connection.Open "provider=AttunityConnect"

The following connect string additionally specifies both the binding and the user profile:
connection.Open "provider=AttunityConnect; binding=production; User ID=QAsmith;password=asdfaa"

Connect String Parameters


The connect string parameters can be one or more of the following:

Binding=name|XML_format: Specifies the data source connection information, where:

name: The name of the binding configuration in the local repository. This provides access to all data sources defined in this binding configuration. For more information, see Binding Configuration.

XML_format: The binding configuration in XML format. This parameter defines specific data sources and eliminates the need to define a local binding configuration in the repository. Only the data sources specified for the binding are accessed. If you want to access the data sources in all the binding configurations on a remote machine, use the BindURL parameter (see below). The settings include the following:

  name: The name of a data source. For more information, see Binding Configuration.

  type: The driver used to access the data source if it resides on the client machine, or the value REMOTE if the data resides on a remote machine. If the value is REMOTE, the binding on the remote machine is updated with the values of name, data source type, and configuration properties.

  connect: If the type value is a driver, this value is the connection information to the data source. If the type value is REMOTE, this value is the address of the remote machine where the data source resides and the workspace on that machine (if the default workspace is not used).

  Configuration properties: Properties specific to the data source driver. For details, see the specific data source driver.
Example 89-2 Sample Connections

The following shows a connection to a local demo DISAM data source, via ADO:

connection.Open "provider=AttunityConnect; Binding=<?xml version='1.0' encoding='iso-8859-1'?><navobj><datasources><datasource name='demo' type='add-disam' readOnly='true'><config newFileLocation='D:\disam' audit='true'/></datasource></datasources></navobj>"

The following shows a connection to a remote demo data source:

connection.Open "provider=AttunityConnect; Extended Properties='BindFile=NAV;DEFAULTTDP=mysql;OPERATING_MODE=0;'; Binding=<?xml version='1.0' encoding='iso-8859-1'?><navobj><datasources><datasource name='demo' connect='develop/acme.com' type='remote'/></datasources></navobj>"

BindURL=[attconnect://][username:password@]host[:port][/workspace][&...][|...]: Specifies an AIS server and which data sources, defined in a binding configuration on this server, are available. This parameter eliminates the need to define a local binding with entries for data sources on a server. If you want to access only a subset of the data sources on the server, use the Binding parameter (see above). The settings include the following:

attconnect://: An optional prefix to make the URL unique when the context is ambiguous.

username:password@: An optional user ID/password pair for accessing the AIS server.

host: The TCP/IP host where the daemon resides. Both numeric form and symbolic form are accepted.

port: An optional daemon port number. This item is required when the daemon does not use the Sun RPC portmapper facility.

workspace: An optional workspace to use. If omitted, the default workspace (Navigator) is used.

&: Multiple BindURLs may be specified, using an ampersand (&) as separator. Spaces between the BindURLs are not allowed. If one of the machines listed is not accessible, the connect string fails.

|: Multiple BindURLs may be specified, using an OR symbol (|) as separator. Spaces between the BindURLs are not allowed. The connect string succeeds as long as one of the machines listed is accessible.

Note the following:

A data source name may appear more than once. AIS resolves this ambiguity by using the first definition of any DSN and disregarding any subsequent definitions. Thus, if a DSN called SALES appears both in the local binding and via the BindURL parameter, the local definition is used.

When using BindURL, AIS binds upon initialization to all of the DSNs defined for the binding, regardless of which DSNs are actually used. (Note that this may result in decreased performance.)

For each server specified in the BindURL connect string item, AIS automatically adds a remote machine (dynamically, in memory) called BindURLn, with n=1,2,..., according to the order of the elements in the BindURL value. For multiple BindURLs, use the following syntax to specify a remote Query Processor to use:

BindURLn=[attconnect://]...

where n is the number of the BindURL specifying the remote machine whose Query Processor you want to use.

Example 89-3 Connection to Remote Machines

The following string shows an AIS server running on nt.acme.com using the default workspace (Navigator), the portmapper, and an anonymous login:

connection.Open "provider=AttunityConnect;BindURL=nt.acme.com"

The following string shows an AIS Server running on nt.acme.com using the Prod workspace and logging on as minny (password mouse).
BindURL=minny:mouse@nt.acme.com/prod

The following string shows an AIS Server running on nt.acme.com using the default workspace (Navigator), using the port 8888 and an anonymous login.
BindURL=nt.acme.com:8888

Database: The name of a virtual database that this connection accesses. The virtual database presents a limited view to the user of the available data such that only selected tables from either one or more data sources are available, as if from a single data source. For more information, see Using a Virtual Database. If set, only the virtual database can be accessed using this connect string.

defaulttdp=data source: The name of the single data source you want to access as the default using this connection. Tables specified in SQL statements are assumed to be from this data source. If this parameter is not specified, SYS is the default; for tables from any other data source, you must prefix each table name with the name of the data source, using the format data source:tablename. Specifying defaulttdp is equivalent to setting the DefaultDatabase property in ADO or the Default Data Source option in a UDL connecting to AIS Server.

DSNPasswords=data_source|machine_alias=username/password[&data_source|machine_alias=username/password[&...]]: User profile information (username and password) granting access to a data source or remote machine via this connection. As an alternative to storing usernames and passwords in the user profile, this parameter allows you to dynamically supply one or more pairs of username and password values, with each pair assigned to a particular data source or remote machine, where:

data_source: A data source name defined in the binding configuration. For more information, see Binding Configuration.

machine_alias: A machine alias defined in the binding configuration. For more information, see Binding Configuration.

username/password: A pair of user ID and password values needed in order to access the indicated data source.
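For example (the data source name, machine alias, and credentials below are invented for illustration):

connection.Open "provider=AttunityConnect; DSNPasswords=orcl=scott/tiger&legacy_sys=oper1/mouse"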

Env-prop=value[,Env-prop=value[,Env-prop=value]...]: Environment values that override the values in the binding environment on the client, where Env-prop is the name of the environment property. For information on these properties, see Environment Properties.

Example 89-4 Setting a Parameter to True

The following setting sets the value of the noHashJoin environment parameter to true, disabling the hash join mechanism during query optimization:

<optimizer noHashJoin="true"/>
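The same property can also be supplied directly in the connect string through the Env-prop mechanism described above; a hedged sketch:

connection.Open "provider=AttunityConnect;noHashJoin=true"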

Operating_Mode=1|0: Specifies whether all SQL statements that do not return rowsets during this connection are passed directly to the native RDBMS data source, without any of the parsing normally performed by the Query Processor. Specifying 1 enables passthru mode and causes AIS to open the connection in single data source mode. This parameter can be used only if the backend data source is an SQL-based driver. SQL executed in passthru mode behaves the same as individual passthru queries specified with the TEXT={{}} syntax; however, there is no way to override passthru mode for a particular query. Use passthru mode to issue queries that perform special processing not supported in AIS, such as ALTER TABLE and DROP INDEX.

Note: Attunity does not recommend using this option, since it affects every DDL SQL statement, even if only some statements were intended to pass through.

Also see For all SQL During a Session. Specifying operating_mode is equivalent to setting the passthru option in a UDL used to connect to AIS Server. Also refer to Passthru SQL.

Password=password: Specifies the password required in order to access the user profile. For more information, see Managing a User Profile in Attunity Studio.

OLE DB (ADO) Client Interface 89-7

When using ADO, to prompt the user for the correct password via a dialog box, use the ADO adPromptAlways attribute of the Properties method, as follows:

connection.Properties("Prompt") = adPromptAlways

Specify this line before the Open method. When the user tries to connect, a dialog box requesting the password is always displayed.

SecFile=filespec: The name of a user profile other than the default (NAV). The SecFile entry is supported for backward compatibility only. As of AIS version 3.x, the User ID parameter (described below) is used instead of SecFile.

User ID=userID: Specifies the name of a user profile in the repository. If the user profile is not specified, the default user profile (NAV) is used.

Optimizing ADO
By reusing the same ADO command, you save the time needed to create the ADO object, making execution faster. To reuse commands, you must use a transaction as an envelope around the commands. You can reuse commands for INSERT and SELECT statements whenever the SQL is repeated, with different parameter values for each iteration of the SQL. Reusing commands for UPDATE and DELETE statements works only with non-relational data sources. The following code shows how to reuse a command to insert rows into a table. Two methods are shown: the first method inserts rows via parameters, and the second method uses constants. The second method is translated internally to be the same as the first method and is thus a little slower; using either method is a matter of preference. Using constants works only for INSERT statements, not for SELECT statements.
Sub ReuseCommandDemo()   ' illustrative name; the original sample is unnamed

Dim conn As ADODB.Connection
Dim com As ADODB.Command
Dim dept_id(10) As String
Dim dept_budget(10) As Double
Dim i As Integer

' Init values for the new rows
dept_id(1) = "DP11"
dept_id(2) = "DP12"
dept_id(3) = "DP13"
dept_id(4) = "DP14"
dept_id(5) = "DP15"
dept_id(6) = "DP16"

dept_budget(1) = 11
dept_budget(2) = 22
dept_budget(3) = 33
dept_budget(4) = 44
dept_budget(5) = 55
dept_budget(6) = 66

On Error GoTo Error_Handler
Set conn = New ADODB.Connection
conn.Provider = "AttunityConnect"
conn.Open

' NOTE: 1. It is important to use a transaction in order to enable the reuse.
'       2. All the ADO commands to reuse must be created within this transaction.
conn.BeginTrans

' Two methods for efficient insertion of rows.
' Each method inserts 3 rows in a loop into the dept table.

' 1. Same command, reusing executed state, using parameters.
' ==========================================================
Set com = New ADODB.Command
Set com.ActiveConnection = conn
com.CommandType = adCmdText
' We set the command text once and reuse it.
com.CommandText = "insert into dept values (?,?)"
com.Parameters.Append com.CreateParameter("p0", adChar, adParamInput, 4)
com.Parameters.Append com.CreateParameter("p1", adDouble, adParamInput, 0)
' Insert 3 new rows
For i = 1 To 3
    com.Parameters(0) = dept_id(i)
    com.Parameters(1) = dept_budget(i)
    com.Execute
    ' There is no need to create a new command, or even to set the text
    ' again, in every iteration.
Next i
' Cleanup
Set com = Nothing

' 2. Same command, reusing executed state, using constants.
' =========================================================
Set com = New ADODB.Command
Set com.ActiveConnection = conn
com.CommandType = adCmdText
' We don't create a new command for every iteration, just set the new command text.
' Insert 3 additional new rows
For i = 4 To 6
    com.CommandText = "insert into dept values (" & dept_id(i) & "," & dept_budget(i) & ")"
    com.Execute
Next i
' Cleanup
Set com = Nothing

' Finish
conn.CommitTrans
conn.Close
Exit Sub

Error_Handler:
If conn.Errors.Count > 0 Then
    MsgBox conn.Errors.Item(0).Source & " : " & conn.Errors.Item(0).Description
End If
conn.RollbackTrans
conn.Close
End Sub

ADO Schema Recordsets


The following ADO schema recordsets are supported:

adSchemaCatalogs
adSchemaColumns
adSchemaForeignKeys
adSchemaIndexes
adSchemaPrimaryKeys
adSchemaProcedures
adSchemaProcedureColumns
adSchemaProviderTypes
adSchemaStatistics
adSchemaTables

OLE DB Data Types


The following OLE DB data types are exposed through the IColumnsInfo::GetColumnsInfo interface:

DBTYPE_I1
DBTYPE_UI1
DBTYPE_I2
DBTYPE_I4
DBTYPE_R4
DBTYPE_R8
DBTYPE_STR
DBTYPE_WSTR
DBTYPE_BYTES
DBTYPE_NUMERIC
DBTYPE_DECIMAL
DBTYPE_DBDATE
DBTYPE_DBTIME
DBTYPE_DBTIMESTAMP

The following table lists the mapping between AIS data types and OLE DB data types:

Table 89-2 AIS and ADO Data Type Mapping

AIS Datatype ID                AIS Datatype Name     OLE DB Datatype
DT_TYPE_B_                     int1                  DBTYPE_I1
DT_TYPE_BU_                    uint1                 DBTYPE_UI1
DT_TYPE_W_                     int2                  DBTYPE_I2
DT_TYPE_L_                     int4                  DBTYPE_I4
DT_TYPE_WU_                    uint2                 DBTYPE_I4
DT_TYPE_F_                     single                DBTYPE_R4
DT_TYPE_G_                     double                DBTYPE_R8
DT_TYPE_STRING_                string                DBTYPE_STR
DT_TYPE_CSTRING_               cstring               DBTYPE_STR
DT_TYPE_VT_                    varstring             DBTYPE_STR
DT_TYPE_FIXED_CSTRING_         fixed_cstring         DBTYPE_STR
DT_TYPE_VT4_                   varstring4            DBTYPE_STR
DT_TYPE_STR_TIME_              str_time              DBTYPE_STR
DT_TYPE_STR_DATE_              str_date              DBTYPE_STR
DT_TYPE_STR_DATETIME_          str_datetime          DBTYPE_STR
DT_TYPE_PADDED_STR_TIME_       padded_str_time       DBTYPE_STR
DT_TYPE_PADDED_STR_DATE_       padded_str_date       DBTYPE_STR
DT_TYPE_PADDED_STR_DATETIME_   padded_str_datetime   DBTYPE_STR
DT_TYPE_VAR_UNICODE1_          varunicode1           DBTYPE_WSTR
DT_TYPE_VAR_UNICODE2_          varunicode2           DBTYPE_WSTR
DT_TYPE_VAR_UNICODE4_          varunicode4           DBTYPE_WSTR
DT_TYPE_UNICODE_STRING_        unicode_string        DBTYPE_WSTR
DT_TYPE_UNICODE_CSTRING_       unicode_cstring       DBTYPE_WSTR
DT_TYPE_Z_                     unspecified           DBTYPE_BYTES
DT_TYPE_VB1_                   varbinary1            DBTYPE_BYTES
DT_TYPE_VB2_                   varbinary2            DBTYPE_BYTES
DT_TYPE_VB4_                   varbinary4            DBTYPE_BYTES
DT_TYPE_NUMERIC_CSTRING_       numeric_cstring       DBTYPE_NUMERIC
DT_TYPE_Q_                     int8                  DBTYPE_NUMERIC
DT_TYPE_OLE_NUMERIC_           ole_numeric           DBTYPE_NUMERIC
DT_TYPE_NRO_                   numstr_s              DBTYPE_NUMERIC
DT_TYPE_LU_                    uint4                 DBTYPE_NUMERIC
DT_TYPE_NLO_                   numstr_nlo            DBTYPE_NUMERIC
DT_TYPE_NL_                    numstr_nl             DBTYPE_NUMERIC
DT_TYPE_NR_                    numstr_nr             DBTYPE_NUMERIC
DT_TYPE_OLE_DECIMAL_           ole_decimal           DBTYPE_DECIMAL
DT_TYPE_ODBC_DATE_             odbcDate              DBTYPE_DBDATE
DT_TYPE_DB400_TIME_            db400_time            DBTYPE_DBTIME
DT_TYPE_ODBC_TIME_             time                  DBTYPE_DBTIME
DT_TYPE_DB400_DATETIME_        db400_datetime        DBTYPE_DBTIMESTAMP
DT_TYPE_ODBC_TIMESTAMP_        timestamp             DBTYPE_DBTIMESTAMP
DT_TYPE_TIME_                  apt_time              DBTYPE_DBTIMESTAMP
DT_TYPE_OLE_DATE_              ole_date              DBTYPE_DBTIMESTAMP
DT_TYPE_DB400_DATE_            db400_date            DBTYPE_DBTIMESTAMP

The drivers for specific data providers map the data source-specific data types to the above data types. SQL data types are supported through the CREATE TABLE statement. The data type conversion rules are documented in the OLE DB Programmer's Reference. You can retrieve or modify text and image fields, with certain restrictions. The restrictions are due primarily to the support for BLOBs available in the underlying data sources, and may vary from one data source to another. For example, data sources that support BLOBs may or may not support random positioning within a stream (as with the SEEK method), and may or may not allow operations on partial "pieces" of a BLOB. The following restrictions of the implementation of text and image fields apply to the OLE DB interface:

IStream::CopyTo and IStream::Clone are not supported.

For data providers that support IStream::Seek functionality, the STREAM_SEEK_END parameter value is not supported.

Transacted BLOBs are not supported.

See also ADD Supported Data Types.

Mapping SQL Data Types to OLE DB Data Types


SQL data types are mapped to OLE DB data types as described in OLEDB SQL Data Types.

ADO Conformance Level


Attunity OLE DB Client conforms to the OLE DB specification version 1.5. The following OLE DB components are supported:

OLE DB Interfaces and Methods
OLE DB Properties


OLE DB Interfaces and Methods


The following table lists the supported OLE DB interfaces and methods:
Table 89-3 Supported OLE DB Interfaces and Methods

Interface                Methods
IAccessor                AddRefAccessor, CreateAccessor, GetBindings, ReleaseAccessor
IChapteredRowset         AddRefChapter, ReleaseChapter
IColumnsInfo             GetColumnInfo, MapColumnIDs
ICommand                 Execute, GetDBSession
ICommandPrepare          Prepare, Unprepare
ICommandProperties       GetProperties, SetProperties
ICommandText             GetCommandText, SetCommandText
ICommandWithParameters   GetParameterInfo, MapParameterNames, SetParameterInfo
IConvertType             CanConvert
IDBCreateCommand         CreateCommand
IDBCreateSession         CreateSession
IDBInfo                  GetLiteralInfo
IDBInitialize            Initialize, Uninitialize
IDBProperties            GetProperties, GetPropertyInfo, SetProperties
IDBSchemaRowset          GetRowset, SetSchemas
IErrorLookup             GetErrorDescription, GetHelpInfo, ReleaseErrors
IGetDataSource           GetDataSource
ILockBytes (OLE)         Flush, ReadAt, SetSize, Stat, WriteAt
IMultipleResults         GetResult
IOpenRowset              OpenRowset
IPersist (OLE)           GetClassID
IRowset                  AddRefRows, GetData, GetNextRows, ReleaseRows, RestartPosition
IRowsetChange            DeleteRows, InsertRow, SetData
IRowsetIdentity          IsSameRow
IRowsetInfo              GetProperties, GetReferenceRowset, GetSpecification
IRowsetLocate            Compare, GetRowsAt, GetRowsByBookmark, Hash
IRowsetUpdate            GetOriginalData, GetPendingRows, GetRowStatus, Undo, Update
ISequentialStream        Read, Write
ISessionProperties       GetProperties, SetProperties
IStream (OLE)            Read, Seek, SetSize, Stat, Write
ISupportErrorInfo        InterfaceSupportsErrorInfo
ITransaction             Abort, Commit, GetTransactionInfo
ITransactionJoin         JoinTransaction
ITransactionLocal        StartTransaction

OLE DB Properties
This section lists the supported OLE DB properties. These properties belong to one of the following categories:

DBPROPSET_DBINIT: initialization properties
DBPROPSET_DATASOURCE: data source properties
DBPROPSET_DATASOURCEINFO: data source information properties
DBPROPSET_SESSION: session properties
DBPROPSET_ROWSET: rowset properties

Initialization Properties
The DBPROPSET_DBINIT property set contains the properties listed below. Providers can define additional initialization properties.
Table 89-4 Initialization Properties

Property Name                Permission   Default
DBPROP_AUTH_PASSWORD         Read/Write
DBPROP_AUTH_USERID           Read/Write
DBPROP_INIT_DATASOURCE       Read/Write   "SYS"
DBPROP_INIT_MODE             Read/Write   999
DBPROP_INIT_TIMEOUT          Read/Write   0
DBPROP_INIT_HWND             Read/Write   0
DBPROP_INIT_PROVIDERSTRING   Read/Write   ""
DBPROP_INIT_PROMPT           Read/Write   DBPROMPT_NOPROMPT
DBPROP_INIT_OLEDBSERVICES    Read/Write   0xffffffff
DBPROP_INIT_CATALOG          Read/Write   ""

Data Source Properties


The DBPROPSET_DATASOURCE property set contains the following property:
Table 89-5 Data Source Properties

Property Name           Permission   Default
DBPROP_CURRENTCATALOG   Read/Write   ""


Data Source Information Properties


The DBPROPSET_DATASOURCEINFO property set contains the properties listed below. Providers can define additional data source information properties. These properties are read-only.
Table 89-6 Data Source Information Properties (all read-only)

Property Name                        Default
DBPROP_ACTIVESESSIONS                0 (no limit)
DBPROP_ASYNCTXNABORT                 FALSE
DBPROP_ASYNCTXNCOMMIT                FALSE
DBPROP_BYREFACCESSORS                FALSE
DBPROP_CATALOGLOCATION               DBPROPVAL_CL_START
DBPROP_CATALOGTERM                   "DATABASE"
DBPROP_CATALOGUSAGE                  DBPROPVAL_CU_DML_STATEMENTS, DBPROPVAL_CU_TABLE_DEFINITION, DBPROPVAL_CU_INDEX_DEFINITION
DBPROP_COLUMNDEFINITION              DBPROPVAL_CD_NOTNULL
DBPROP_CONCATNULLBEHAVIOR            DBPROPVAL_CB_NULL
DBPROP_DATASOURCENAME                "OLEQP"
DBPROP_DATASOURCEREADONLY            FALSE
DBPROP_DBMSNAME                      "Attunity Connect"
DBPROP_DBMSVER                       "04.60.0000"
DBPROP_DSOTHREADMODEL                DBPROPVAL_RT_APTMTTHREAD or DBPROPVAL_RT_FREETHREAD
DBPROP_GROUPBY                       DBPROPVAL_GB_EQUALS_SELECT
DBPROP_HETEROGENEOUSTABLES           DBPROPVAL_HT_DIFFERENT_CATALOGS
DBPROP_IDENTIFIERCASE                DBPROPVAL_IC_UPPER
DBPROP_MAXINDEXSIZE                  215
DBPROP_MAXOPENCHAPTERS               1
DBPROP_MAXROWSIZE                    32000
DBPROP_MAXROWSIZEINCLUDESBLOB        FALSE
DBPROP_MAXTABLESINSELECT             10
DBPROP_MULTIPLEPARAMSETS             FALSE
DBPROP_MULTIPLERESULTS               DPPROPVAL_MR_SUPPORTED
DBPROP_MULTIPLESTORAGEOBJECTS        TRUE
DBPROP_MULTITABLEUPDATE              FALSE
DBPROP_OLEOBJECTS                    DPPROPVAL_OO_BLOB
DBPROP_ORDERBYCOLUMNSINSELECT        FALSE
DBPROP_OUTPUTPARAMETERAVAILABILITY   DBPROPVAL_OA_NOTSUPPORTED
DBPROP_PERSISTENTIDTYPE              DBPROPVAL_PT_NAME
DBPROP_PREPAREABORTBEHAVIOR          DBPROPVAL_CB_PRESERVE
DBPROP_PREPARECOMMITBEHAVIOR         DBPROPVAL_CB_PRESERVE
DBPROP_PROCEDURETERM                 "STORED_PROCEDURE"
DBPROP_PROVIDERFILENAME              "NAV32.DLL"
DBPROP_PROVIDEROLEDBVER              "2.0"
DBPROP_PROVIDERVER                   "04.60.0000"
DBPROP_QUOTEDIDENTIFIERCASE          DBPROPVAL_IC_SENSITIVE
DBPROP_ROWSETCONVERSIONONCOMMAND     TRUE
DBPROP_SCHEMATERM                    "OWNER"
DBPROP_SCHEMAUSAGE                   0
DBPROP_SQLSUPPORT                    DBPROPVAL_SQL_ANSI92_ENTRY, DBPROPVAL_SQL_ODBC_MINIMUM
DBPROP_STRUCTUREDSTORAGE             DBPROPVAL_SS_ISTREAM, DBPROPVAL_SS_ISEQUENTIALSTREAM
DBPROP_SUBQUERIES                    DBPROPVAL_SQ_CORRELATEDSUBQUERIES, DBPROPVAL_SQ_COMPARISON, DBPROPVAL_SQ_EXISTS, DBPROPVAL_SQ_IN, DBPROPVAL_SQ_QUANTIFIED
DBPROP_SUPPORTEDTXNDDL               DBPROPVAL_TC_ALL
DBPROP_SUPPORTEDTXNISOLEVELS         DBPROPVAL_TI_READCOMMITTED
DBPROP_SUPPORTEDTXNISORETAIN         DBPROPVAL_TR_DONTCARE
DBPROP_TABLETERM                     "TABLE"

Session Properties
The DBPROPSET_SESSION property set contains the following property:
Table 89-7 Session Properties

Property Name                     Permission   Default
DBPROP_SESS_AUTOCOMMITISOLEVELS   Read         DBPROPVAL_TI_READCOMMITTED

Rowset Properties
The DBPROPSET_ROWSET property set contains the properties listed below. Providers can define additional rowset properties:
Table 89-8 Rowset Properties

Property Name                        Permission   Default
DBPROP_ABORTPRESERVE                 Read         FALSE
DBPROP_BLOCKINGSTORAGEOBJECT         Read         FALSE
DBPROP_BOOKMARKS                     Read/Write   FALSE
DBPROP_BOOKMARKSKIPPED               Read         FALSE
DBPROP_BOOKMARKTYPE                  Read         DBPROPVAL_BMK_NUMERIC
DBPROP_CANFETCHBACKWARD              Read         FALSE
DBPROP_CANHOLDROWS                   Read         TRUE
DBPROP_CANSCROLLBACKWARD             Read/Write   FALSE
DBPROP_CHANGEINSERTEDROWS            Read         TRUE
DBPROP_COLUMNRESTRICT                Read         TRUE
DBPROP_COMMANDTIMEOUT                Read/Write   0
DBPROP_COMMITPRESERVE                Read         FALSE
DBPROP_DEFERRED                      Read         FALSE
DBPROP_DELAYSTORAGEOBJECTS           Read/Write   FALSE
DBPROP_IMMOBILEROWS                  Read         TRUE
DBPROP_LITERALBOOKMARKS              Read/Write   FALSE
DBPROP_LITERALIDENTITY               Read         TRUE
DBPROP_MAXOPENROWS                   Read         100
DBPROP_MAXPENDINGROWS                Read         100
DBPROP_MAXROWS                       Read         0
DBPROP_MEMORYUSAGE                   Read         0
DBPROP_ORDEREDBOOKMARKS              Read/Write   FALSE
DBPROP_MULTIPLEPARAMSETS             Read         FALSE
DBPROP_MULTIPLERESULTS               Read         DPPROPVAL_MR_SUPPORTED
DBPROP_MULTIPLESTORAGEOBJECTS        Read         TRUE
DBPROP_MULTITABLEUPDATE              Read         FALSE
DBPROP_OLEOBJECTS                    Read         DPPROPVAL_OO_BLOB
DBPROP_ORDERBYCOLUMNSINSELECT        Read         FALSE
DBPROP_OUTPUTPARAMETERAVAILABILITY   Read         DBPROPVAL_OA_NOTSUPPORTED
DBPROP_PERSISTENTIDTYPE              Read         DBPROPVAL_PT_NAME
DBPROP_PREPAREABORTBEHAVIOR          Read         DBPROPVAL_CB_PRESERVE
DBPROP_PREPARECOMMITBEHAVIOR         Read         DBPROPVAL_CB_PRESERVE
DBPROP_PROCEDURETERM                 Read         "STORED_PROCEDURE"
DBPROP_PROVIDERFILENAME              Read         "NAV32.DLL"
DBPROP_PROVIDEROLEDBVER              Read         "2.0"
DBPROP_PROVIDERVER                   Read         "04.60.0000"
DBPROP_QUOTEDIDENTIFIERCASE          Read         DBPROPVAL_IC_SENSITIVE
DBPROP_ROWSETCONVERSIONONCOMMAND     Read         TRUE
DBPROP_SCHEMATERM                    Read         "OWNER"
DBPROP_SCHEMAUSAGE                   Read         0
DBPROP_SQLSUPPORT                    Read         DBPROPVAL_SQL_ANSI92_ENTRY, DBPROPVAL_SQL_ODBC_MINIMUM
DBPROP_STRUCTUREDSTORAGE             Read         DBPROPVAL_SS_ISTREAM, DBPROPVAL_SS_ISEQUENTIALSTREAM
DBPROP_SUBQUERIES                    Read         DBPROPVAL_SQ_CORRELATEDSUBQUERIES, DBPROPVAL_SQ_COMPARISON, DBPROPVAL_SQ_EXISTS, DBPROPVAL_SQ_IN, DBPROPVAL_SQ_QUANTIFIED
DBPROP_SUPPORTEDTXNDDL               Read         DBPROPVAL_TC_ALL
DBPROP_SUPPORTEDTXNISOLEVELS         Read         DBPROPVAL_TI_READCOMMITTED
DBPROP_SUPPORTEDTXNISORETAIN         Read         DBPROPVAL_TR_DONTCARE
DBPROP_TABLETERM                     Read         "TABLE"

Specific Properties
Attunity AIS includes the following specific properties:
Table 89-9 Specific Properties

Property Name                  Permission   Default
ISGPROP_DEFTDP_TYPE (1)        Read
ISGPROP_PASSTHROUGH_MODE (2)   Read/Write   NULL string

Notes:

1. The ISGPROP_DEFTDP_TYPE property returns the default data source type. For example, if the default data source is an ORACLE data source, then this property returns the string "ORACLE".

2. The ISGPROP_PASSTHROUGH_MODE property sets the AIS Query Processor to passthru mode. For more information, see Passthru SQL.



90
XML Client Interface
This section contains the following topics:

Overview of the XML Client Interface
ACX Verbs
Setting XML Transports for AIS

Overview of the XML Client Interface


The XML client interface uses the AIS XML protocol (ACX). The ACX protocol is implemented as an exchange of XML documents representing requests and responses. An ACX request is made of one or more ACX verbs (operations). The verbs are executed in sequence, as they appear in the request. As the verbs are executed, AIS constructs the response document. Some ACX verbs do not generate a response verb, though they may generate an exception response verb within the ACX response document. To write the adapter input and output XML to the log file, set the acxTrace debugging element; for information on how to set acxTrace and other environment variables, see Environment Properties. The following sections describe the XML verbs implemented in the AIS XML protocol (ACX):

ACX Request and Response Documents
Connection Verbs
Transaction Verbs
The Execute Verb
Metadata Verbs
The Ping Verb
The Exception Verb

XML request and response documents can be passed between the application and AIS via the TCP/IP transport or, over the web, via the HTTP transport.

ACX Verbs
This section contains the following:

ACX Request and Response Documents
Connection Verbs
Transaction Verbs
The Execute Verb
Metadata Verbs
The Ping Verb
The Exception Verb

ACX Request and Response Documents


The following sections describe the general formats of the request and response XML documents passed to and from AIS.

Request Document
The general format for an ACX request document is as follows:
<?xml version="1.0" ?> <acx type="request" id="request_id"> ...acx_verbs... </acx>

The id attribute is used for matching the ACX document with its response document. The server does not use the id attribute value other than in sending it back along with the response document.

Response Document
The general format of an ACX response document is as follows:
<?xml version="1.0" ?> <acx type="response" href="request_id"> ... </acx>

The href attribute is used for matching the ACX response document with its request document (matching with the id attribute).

Connection Verbs
ACX defines XML format for the following verbs that handle the connect and connection context for an ACX request:

The Connect Verb
The setConnection Verb
The disconnect Verb
The reauthenticate Verb
The cleanConnection Verb

There are two kinds of connections:

Transient Connections: Transient connections are created for use within a single ACX request. A transient connection is disconnected when an ACX request ends, or when the connection context changes (that is, with the connect, setConnection, or disconnect verbs).


Persistent Connections: Persistent connections can persist across multiple ACX requests or connection context changes. Persistent connections are disconnected upon an explicit disconnect verb or when a connection idle timeout expires.

The Connect Verb


The connect verb establishes a new connection context. All the interactions defined in ACX take place within a connection context. A connection is associated with a single adapter resource. However, more than one connection may be used within a single request (and so more than one connection may share a given transport connection). Upon a successful connect, a connection context is established and an implicit setConnection is performed with the newly created connection ID. A failed connect verb leaves the ACX request with no connection context (that is, if a connection context was established prior to invoking the connect verb, that connection context will no longer be in effect).

Syntax
<connect adapter="adapter_name"
         idleTimeout="idle_timeout"
         persistent="false|true">
  <passwordAuthenticator username="username" password="password"/>
</connect>

where:

adapter (string): Name of adapter with which to associate the connection. You can use the following syntax to set this adapter to run on a particular workspace:
<connect adapter="workspace/adapter_name" ... />

where:

workspace: The name of the workspace where the adapter runs.

adapter_name: The name of the adapter.

idleTimeout (number): A per-connection client idle timeout setting (seconds). If the client does not use the connection for the specified amount of time, the connection will be disconnected by the server and its associated resources released. This setting is limited by the server-side maximum idle connection timeout setting. This parameter represents a common behavior within application servers, limiting the amount of time a resource can be tied up by a client.

persistent (boolean): Persistent connection indication. If true, a persistent connection is created; otherwise a transient connection is created. The default is false.

passwordAuthenticator: The authentication information required by the resource adapter of the client that created the connection. The kind of authentication information required by the resource adapter is returned as a part of the resource adapter metadata. The definition of the authentication information used for passwordAuthenticator is shown above within the syntax.


Response

The connect verb matching response is only generated for a persistent connection. It is defined as follows:
<connectResponse connectionId="connection_id" idleTimeout="idle_timeout" />

where:

connectionId (string): A connection ID value representing the newly created connection.

idleTimeout (number): The actual idle timeout (seconds) in effect for the new connection. This is determined by the server, possibly overriding the setting on the client.

Exceptions

The following exceptions may result from a connect verb:

client.noSuchResource: The requested adapter (or workspace) is not available on the server.

server.redirect: The resource adapter is available on a different server whose details are given. The same ACX request should be directed at that server. The scope of the redirection is only guaranteed for its first use. This means that if you get a redirect, you should connect to the named server and use it. Once you work with the socket and hold it, you can continue to connect to the server (assuming you used a persistent connection). Later on, connecting to the redirected server may or may not work. In many cases, once you get a physical connection, you keep it open and open a new one only if the connection is dropped (for example, by a connection idle timeout), at which time you will need to ask the daemon for a new one.

server.internalError: An error occurred on the server.

Example
<?xml version="1.0" encoding="UTF-8"?>
<acx type="request" id="try0001">
  <connect adapter="adapter_name">
    <passwordAuthenticator username="scott" password="tiger"/>
  </connect>
</acx>

The setConnection Verb


The setConnection verb reestablishes the connection context for the rest of the ACX request, or until it is changed again by a setConnection, connect, or disconnect verb. Note that a connect verb also affects the connection context by setting it to the newly created connection. The setConnection verb enables multiplexing application connections over a single physical transport connection. For example, via one transport connection, you can access multiple application adapters.

Syntax
<setConnection connectionId="connection_id"/>

90-4 AIS User Guide and Reference

where:

connectionId (string): A connection ID value, returned by a previous persistent connect verb.

The setConnection verb does not generate a response.

Exceptions

The following exceptions may result from a setConnection verb:

client.noSuchConnection: The given connection is either invalid (was not acquired via a connect verb) or represents a timed-out connection.

server.internalError: An error occurred on the server.

*: Other, adapter-specific, setConnection exceptions.

Example
<?xml version="1.0" encoding="UTF-8"?>
<acx type="request" id="try001">
  <connect adapter="adapter_name" persistent="true">
    <passwordAuthenticator username="scott" password="tiger"/>
  </connect>
</acx>

<?xml version="1.0" encoding="UTF-8"?>
<acx type="response" href="try001">
  <connectResponse connectionID="39275569"/>
</acx>

<?xml version="1.0" encoding="UTF-8"?>
<acx type="request">
  <setConnection connectionID="39275569"/>
</acx>

The disconnect Verb


The disconnect verb destroys the current connection context. All the resources associated with the current connection (persistent or transient) are released.

Syntax
<disconnect/>

The disconnect verb does not generate a response.

Exceptions

The following exception may result from a disconnect verb:

server.internalError: An error occurred on the server.
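Example

A minimal sketch of a request that selects a previously created persistent connection and then releases it; the connection ID is the one returned in the earlier connectResponse example:

<?xml version="1.0" encoding="UTF-8"?>
<acx type="request" id="try002">
  <setConnection connectionId="39275569"/>
  <disconnect/>
</acx>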

The reauthenticate Verb


The reauthenticate verb establishes a new client identity for the active connection. Adapters are not required to support reauthentication. An adapter that does not support reauthentication is required to produce an exception if required to reauthenticate with an identity different from the current one. Failure of the reauthentication prevents further activity on the connection (other than retrying the authentication). Once the reauthentication succeeds, the connection's client is authorized based upon the authenticated identity established.

Syntax
<reauthenticate> <passwordAuthenticator username="username" password="password" /> </reauthenticate>

where:

username: The username of the new client of the connection.

password: The password of the new client of the connection.

The reauthenticate verb does not generate a response.

Exceptions

The following exceptions may result from a reauthenticate verb:

client.authenticationError: The given authentication information is not valid.

server.notImplemented: The adapter does not support reauthentication.

server.internalError: An internal error has occurred.
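Example

A minimal sketch of switching the connection to a new identity; the credentials are invented for illustration:

<?xml version="1.0" encoding="UTF-8"?>
<acx type="request" id="try003">
  <setConnection connectionId="39275569"/>
  <reauthenticate>
    <passwordAuthenticator username="jane" password="doe42"/>
  </reauthenticate>
</acx>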

The cleanConnection Verb


The cleanConnection verb indicates that the client is working with connection pooling and that the connection is being "soft-closed"; that is, the connection is being placed in a connection pool. The connection should still be valid, but various resources on it can be freed (for example, objects related to local interactions).

Syntax
<cleanConnection />

Note that the adapter may forget the authentication information upon a cleanConnection verb. This behavior is reflected in the adapter metadata. The cleanConnection verb does not generate a response.

Transaction Verbs
The ACX transaction verbs are used in the following scenarios:

Non-transacted operation: The adapter works in auto-commit mode. Work is committed immediately and automatically upon execution. This is the default operation mode when no transaction verbs are used, or when the setAutoCommit verb sets auto-commit to True.
Local transaction operation: With auto-commit set to False, the first interaction starts a transaction that lasts until an explicit commit (using the transactionCommit verb) or an explicit rollback (using the transactionRollback verb) occurs. All interactions performed in between are made as part of that transaction. Note that 'local' is used here to indicate the scope of the transaction, rather than its location; using ACX, the local transaction may be running on a remote machine.


Distributed transaction operation: The ACX adapter participates in a distributed transaction by exposing the appropriate XA methods. In this scenario, the responsibility for invoking the different ACX verbs is divided between the application component (performing the interactions) and the application server (which, by means of the transaction manager, manages the transaction and performs the 2-phase-commit protocol).

ACX defines the following verbs that handle transaction operations:


The setAutoCommit Verb
The transactionStart Verb
The transactionPrepare Verb
The transactionCommit Verb
The transactionRollback Verb
The transactionRecover Verb
The transactionForget Verb
The transactionEnd Verb

The setAutoCommit Verb


The setAutoCommit verb sets the auto-commit mode of the connection.
Syntax
<setAutoCommit autoCommit="auto_commit_mode" />

where:

autoCommit (boolean): New auto-commit mode of the connection. If set to True, each interaction commits immediately once executed. The auto-commit mode must be turned off if multiple interactions need to be grouped into a single transaction and committed or rolled back as a unit. When auto-commit is reset and no global transaction is in progress, any interaction starts a local transaction. The client is required to use transactionCommit or transactionRollback at the appropriate time to commit or roll back the transaction. The auto-commit mode is True by default and is reset if a distributed (global) transaction is started.

The setAutoCommit verb does not generate a response.
Exceptions
The following exception may result from a setAutoCommit verb:
server.internalError: An error occurred on the server.
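Example
As a sketch of the local transaction scenario described above, the following request turns auto-commit off, executes two interactions, and commits them as one unit. The interaction names and input records are hypothetical, and it is assumed that transactionCommit is issued without an xid for a local transaction:

<?xml version="1.0" encoding="UTF-8"?>
<acx type="request">
  <setAutoCommit autoCommit="false"/>
  <execute interactionName="debitAccount">
    <debit account="100" amount="50"/>
  </execute>
  <execute interactionName="creditAccount">
    <credit account="200" amount="50"/>
  </execute>
  <transactionCommit/>
</acx>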

The transactionStart Verb


The transactionStart verb starts operations under the given transaction.
Syntax
<transactionStart state="state">
  <xid formatID="id" globalTransactionID="id_string" branchQualifier="branch_string" />
</transactionStart>

where:

xid: A global transaction identifier, automatically assigned. If not given (or empty), the transaction is assumed to be local. The xid comprises the following:

formatID (number): Specifies the format of the xid.
globalTransactionID (hex string): Defines the transaction ID. The value must be less than 128.
branchQualifier (hex string): Defines the transaction branch. The value must be less than 128.

state (string): The state of the given xid. May be "join", "resume" or empty. If the state is empty, it is assumed that the transaction is new.

The transactionStart verb does not generate a response.
Exceptions
The following exceptions may result from a transactionStart verb:

INTRANS: A transaction is already started.
UNKXID: An unknown xid was specified with a transaction state of "join" or "resume".
INTERR: An internal error has occurred.

The transactionPrepare Verb


The transactionPrepare verb prepares to commit the work done under the (global) transaction in a 2-phase commit protocol.
Syntax
<transactionPrepare> <xid formatID="id" globalTransactionID="id_string" branchQualifier="branch_string" /> </transactionPrepare>

where:
xid: The global transaction identifier passed by the transactionStart xid.
The transactionPrepare verb does not generate a response.
Exceptions
The following exceptions may result from a transactionPrepare verb:

NOTRANS: No matching global transaction is started.
UNKXID: An unknown xid was specified.
INTERR: An internal error has occurred.

The transactionCommit Verb


The transactionCommit verb commits the work done under the global or local transaction.


Syntax
<transactionCommit onePhase="one_phase">
  <xid formatID="id" globalTransactionID="id_string" branchQualifier="branch_string" />
</transactionCommit>

where:

onePhase (boolean): Specifies that the resource adapter should use a one-phase commit protocol to commit the work done on the client's behalf. onePhase is applicable only when the transaction is a global transaction and includes a 1-phase commit data source. If true, this option may be used by the transaction manager to optimize its distributed transaction processing.
xid: The global transaction identifier passed by the transactionStart xid.

The transactionCommit verb does not generate a response.
Exceptions
The following exceptions may result from a transactionCommit verb:

NOTRANS: No matching global transaction is started.
UNKXID: An unknown xid was specified.
INTERR: An internal error has occurred.

The transactionRollback Verb


The transactionRollback verb rolls back the work done under the (global) transaction.
Syntax
<transactionRollback> <xid formatID="id" globalTransactionID="id_string" branchQualifier="branch_string" /> </transactionRollback>

where:
xid: The global transaction identifier passed by the transactionStart xid.
The transactionRollback verb does not generate a response.
Exceptions
The following exceptions may result from a transactionRollback verb:

NOTRANS: No matching global transaction is started.
UNKXID: An unknown xid was specified.
INTERR: An internal error has occurred.

The transactionRecover Verb


The transactionRecover verb lists the prepared transaction branches.
Syntax
<transactionRecover maxResultItems="max_result_items" scanOption="scan_option" />


where:

maxResultItems (number): Indicates the maximum number of XIDs to return. If omitted or zero, all XIDs are returned. If specified, and the number of items is equal to or greater than this number, exactly this number of items is returned.
scanOption (string): Indicates the XID scanning operation to be done. It may be "start" (in which case XIDs are returned from the first one), "end" (where the scan is terminated and nothing is returned), or it may be "next" or omitted, meaning that the scan should continue from the point reached in the last recover call.

The matching response is defined as:
<transactionRecoverResponse xid="xid" />
where:
xid: The global transaction identifier passed by the transactionStart xid.
Exceptions
The following exception may result from a transactionRecover verb:
INTERR: An internal error has occurred.
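Example
For illustration, the following request starts a new recovery scan and asks for up to 100 prepared XIDs (the limit shown is arbitrary):

<acx type="request">
  <transactionRecover maxResultItems="100" scanOption="start"/>
</acx>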

The transactionForget Verb


The transactionForget verb deletes the completed transaction branch from the transaction log.
Syntax
<transactionForget> <xid formatID="id" globalTransactionID="id_string" branchQualifier="branch_string" /> </transactionForget>

where:
xid: The global transaction identifier passed by the transactionStart xid.
The transactionForget verb does not generate a response.
Exceptions
The following exceptions may result from a transactionForget verb:

UNKXID: An unknown xid was specified.
INTERR: An internal error has occurred.

The transactionEnd Verb


The transactionEnd verb completes or suspends work under the given transaction. This verb is allowed only when a global transaction has already been started.
Syntax
<transactionEnd state="success|suspend|fail"> <xid formatID="id" globalTransactionID="id_string" branchQualifier="branch_string" /> </transactionEnd>

where:


state (string): The state of the given xid. May be "success", "suspend" or "fail". The default is success. xid: A global transaction identifier (required).

The transactionEnd verb does not generate a response.
Exceptions
The following exceptions may result from a transactionEnd verb:

NOTRANS: No matching global transaction is started.
UNKXID: An unknown xid was specified.
INTERR: An internal error has occurred.
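Example
The following sketch shows how the verbs above might combine into a 2-phase commit sequence for one transaction branch. The xid values and the interaction are hypothetical; in practice the transaction manager supplies the xid and drives the prepare and commit calls:

<acx type="request">
  <transactionStart>
    <xid formatID="1" globalTransactionID="0A1B" branchQualifier="01"/>
  </transactionStart>
  <execute interactionName="updateOrder">
    <order key="1024" status="shipped"/>
  </execute>
  <transactionEnd state="success">
    <xid formatID="1" globalTransactionID="0A1B" branchQualifier="01"/>
  </transactionEnd>
  <transactionPrepare>
    <xid formatID="1" globalTransactionID="0A1B" branchQualifier="01"/>
  </transactionPrepare>
  <transactionCommit onePhase="false">
    <xid formatID="1" globalTransactionID="0A1B" branchQualifier="01"/>
  </transactionCommit>
</acx>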

The Execute Verb


The execute verb executes a given interaction against the application.
Syntax
<execute interactionName="interaction_name" interactionMode="interaction_mode">
  <input_record> ... </input_record>
</execute>

where:

interactionName (string): Name of the interaction to execute. You can omit the interactionName if it is identical to input_record (defined below).
interactionMode (string): Describes the nature and direction of the interaction.
input_record: Represents the interaction input information as an XML element (with whatever content). The type of input_record is determined by the interaction definition in the adapter schema. If the interaction_name value is not given, the interaction name is assumed to be the same as the input_record name, while its attributes and content are determined by the interaction input type.

Response
The matching response is defined as:


<executeResponse>
  <output_record> ... </output_record>
</executeResponse>

where:
output_record: Represents the interaction result as an XML element (with whatever content). The element name must match the interaction output record (and so do the attributes and content).
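Example
For illustration, assuming a hypothetical findOrder interaction whose input record is order_key and whose output record is order, an execute exchange might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<acx type="request">
  <execute interactionName="findOrder">
    <order_key key="1024"/>
  </execute>
</acx>

<?xml version="1.0" encoding="UTF-8"?>
<acx type="response">
  <executeResponse>
    <order key="1024" status="shipped"/>
  </executeResponse>
</acx>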


Metadata Verbs
ACX defines XML format for the following verbs that handle metadata operations for an ACX request:

The getMetadataItem Verb The getMetadataList Verb

The getMetadataItem Verb


The getMetadataItem verb requests information on resources available via the resource adapter. The getMetadataItem verb is typically used at design time rather than at run time, though this is not enforced.
Syntax
<getMetadataItem type="item_type" name="item_name" />

Or
<getMetadataItem type="item_type"> <name>item1_name</name> <name>item2_name</name> ... </getMetadataItem>

where:

type (string): Indicates the kind of item for which metadata is needed. Supported item types are:

adapter: Provides information about a given adapter (or the currently connected adapter).
interaction: Provides information about a given interaction.
schema: Returns the complete resource schema.
record: Returns a sub-schema containing the definition of the requested records.

name (string): Indicates the particular item(s) for which metadata is needed, or a wildcard (using *,%) for getting metadata of multiple items.
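For example, a request for the metadata of a single interaction and of all records whose names start with ORD (the names shown are hypothetical) might look like this:

<acx type="request">
  <getMetadataItem type="interaction" name="findOrder"/>
  <getMetadataItem type="record" name="ORD*"/>
</acx>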

The matching response may be in one of the following forms:


The adapter response. For information, see The Adapter Response.
The interaction response. For details, see The Interaction Response.
The schema response. For details, see The schema Response.

The Adapter Response
The <adapter> response provides information on a particular adapter.
<getMetadataItemResponse>
  <adapter name="xsd:Name"
    description="xsd:string"
    version="xsd:string"
    type="xsd:Name"
    operatingSystem="xsd:string"
    transactionLevelSupport="0|1|2"
    authenticationMechanism="basic-password"
    maxActiveConnections="xsd:integer"
    maxIdleTimeout="xsd:integer"
    maxRequestSize="xsd:integer"
    connectionPoolingSize="xsd:integer"
    poolingTimeout="xsd:integer"
    supportsReauthentication="xsd:boolean" />
</getMetadataItemResponse>

where:

name (string): Name of the adapter.
description (string): Description of the adapter.
version (string): Version of the adapter.
type (string): The type of the adapter, as specified in the adapter definition.
operatingSystem (string): Operating system on which the adapter runs.
transactionLevelSupport (number): Indicates the transaction support level:

0: no transaction support (0PC)
1: simple transaction support (1PC)
2: distributed transaction support (2PC)

authenticationMechanism (string): "basic-password" for authentication based on username/password. In a future release support will be provided for "kerbv5", for Kerberos version 5 based authentication.

maxActiveConnections (number): The maximum number of concurrent connections supported by the adapter. This number may or may not be enforced by the adapter, but it serves as an indication of the effective number of concurrent connections from the adapter's point of view. The client can use this number to optimize its configuration.
maxIdleTimeout (number): An ACX connection to an adapter is terminated after the specified idle timeout (in seconds) expires.
maxRequestSize (number): ACX requests are restricted in their size to prevent draining the server system resources. This parameter specifies the maximum request size in bytes. The default limit is 65536 bytes (64KB).
connectionPoolingSize (number): Size of the connection pool maintained at the server.
poolingTimeout (number): Time in seconds a connection is pooled before it is destroyed.
supportsReauthentication (Boolean): Indicates whether or not the adapter supports reauthentication.

The Interaction Response
The <interaction> response provides information on a particular interaction.
<getMetadataItemResponse>
  <interaction name="xsd:Name"
    description="xsd:string"
    mode="sync-send|sync-send-receive|sync-receive|async-send|async-send-receive"
    input="xsd:Name"
    output="xsd:Name" />
</getMetadataItemResponse>

where:

name (string): Name of the interaction.
description (string): Description of the interaction.
mode (string): Describes the nature and direction of the interaction.
input (string): The name of the record (from the adapter schema) used for input to the interaction.
output (string): The name of the record (from the adapter schema) produced by the interaction.

The schema Response
The <schema> response describes the input and output of the interactions.
<getMetadataItemResponse>
  <schema name="xsd:Name"
    version="xsd:integer"
    defaultNotDisplayed="xsd:boolean"
    open="xsd:boolean" >
    <record|enumeration|variant />
  </schema>
</getMetadataItemResponse>

where:

name (string): Name of the schema.
version (string): Version of the schema definition.
defaultNotDisplayed (Boolean): When True, data items having their respective default values are omitted from the XML document.
open (Boolean): When True, the XML to native transformation does not throw an exception about unknown elements or attributes; they are ignored.
record|enumeration|variant: Multiple record, variant and enumeration definitions can appear in any order.

The getMetadataList Verb


The getMetadataList verb requests information on resources available via the resource adapter. The getMetadataList verb is typically used at design time rather than at run time, though this is not enforced.
Syntax
<getMetadataList name="item_name type=item_type" maxResultItems="max_result_items" startItemName="start_item_name" />

where:

name (string): Empty or a wildcard (using *,%) for reducing the names returned on the list. Note that the entries are not necessarily returned in alphabetical order of the item names. However, there must be some sort of order, even if it is entirely internal to the adapter (for example, items might be returned in a list ordered by an internal ID value).

type (string): Indicates the kind of item for which listing is needed. Supported item types are:

adapter: Useful when interacting with a resource dispenser that provides connections or redirections to multiple resources.
interaction: List of interactions of the currently connected resource.
record: List of records of the currently connected resource.

maxResultItems (number): Indicates the maximum number of result items that are returned. If the number of items is greater than or equal to the number specified in this parameter, exactly this number of items is returned.
startItemName (string): Indicates the item name starting from which items are to be returned. If empty or not given, all items from the first one (inclusive) are returned.

Response
The matching response is described below. Note that the list returned might be affected by the authorization of the requester.
<getMetadataListResponse type="item_type" > <name>xsd:Name</name> <name>xsd:Name</name> ... </getMetadataListResponse>

where:

type (string): The type of items whose names were returned. The following are valid values:

adapter
interaction
record

name (string (array)): Names of items on the list.
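Example
For illustration, the following exchange lists the interactions of the currently connected resource; the interaction names in the response are hypothetical:

<acx type="request">
  <getMetadataList type="interaction" maxResultItems="10"/>
</acx>

<acx type="response">
  <getMetadataListResponse type="interaction">
    <name>findOrder</name>
    <name>updateOrder</name>
  </getMetadataListResponse>
</acx>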

The Ping Verb


The ping verb returns, in a pingResponse response, information about an active adapter.
Syntax
<ping/>

Response
The response provides information about an active adapter accessed via the <ping> verb.
<pingResponse name="xsd:Name" description="xsd:string" version="xsd:string" type="xsd:Name" operatingSystem="xsd:string" XML Client Interface 90-15

vendor="xsd:string"> </pingResponse>

where:

name (string): Name of the adapter the ping accessed.
description (string): Description of the adapter.
version (string): The version number of the adapter.
type (string): The type of the adapter.
operatingSystem (string): The operating system on which the adapter runs.
vendor (string): The vendor of the adapter.

Example
The following XML file is passed to nav_util xml:


<acx>
  <connect adapter="adapter_name"/>
  <ping/>
</acx>

The following response is returned:


<?xml version="1.0" encoding="ISO-8859-1"?>
<acx type="response">
  <connectResponse></connectResponse>
  <pingResponse name="adapter_name"
    description="Adapter to access data"
    type="Database"
    operatingSystem="INTEL-NT">
  </pingResponse>
</acx>

The Exception Verb


The exception verb informs the recipient of an exception that has occurred.

The Exception Element


Application adapters report errors by returning an exception response. An exception may be generated for every operation, including those that do not normally produce a response. The occurrence of an exception interrupts the execution of an ACX request and results in an immediate response. The response contains all the responses of the previous successful operations, followed by the exception verb. The exception verb has the following formats:
Syntax 1: ACX Exceptions
<exception origin="xsd:Name" name="xsd:Name"> <info>...</info> </exception>

Syntax 2: Application Exceptions


<exception origin="xsd:Name" name="xsd:Name"> <application-exception>

90-16 AIS User Guide and Reference

</exception>

where:

origin (string): The origin of the exception. The origin string has the following format: adapter.interaction[.location], where adapter is the adapter type, interaction is the interaction name, and location is an optional location within the interaction.
name (string): The exception name. The exception name has the following format: who.what, where who is client if the exception resulted from a client fault, or server if the exception resulted from a server fault. It is important to use one of the common exception names that appear in the table below so that applications can consistently identify common exceptions and handle them.
info (string (array)): Zero or more text strings containing a readable description of the exception (the texts are ordered from the general to the specific).
application-exception (record): An application-specific exception element. The element's tag identifies the schema record defining the exception element structure.

The following table lists common error names and their meanings. Specific adapters can include their own error names. Errors starting with client indicate that the client side was probably responsible for the error. Errors starting with server indicate that the server side was probably responsible for the error.

Table 90-1 Common Error Names

client.authenticationError: The authentication information provided with the request is not valid.
client.noActiveConnection: The exception occurred because an ACX XML request verb was given without an active connection context.
client.noSuchConnection: A connection has timed out or was otherwise dropped (for example, the server was stopped between requests).
client.noSuchInteraction: The requested interaction is not available on the current adapter.
client.noSuchResource: The adapter referred to in the request does not exist on the server.
client.requestError: The request sent by the client had semantic errors.
client.xmlError: The ACX XML request sent by the client had XML parsing errors.
server.internalError: An internal error on the server. Additional information is provided to explain the exception. No automatic handling is expected in this case.
server.notImplemented: A feature requested is not implemented in the current release.
server.redirect: The server to which the request was referred cannot handle the request and it should be issued to the server named in the <info> element.
server.resourceLimit: A resource limit was reached on the server. The same request may succeed later on.
server.xmlError: An XML parsing error was found in a server response.
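Example
For illustration, an exception response for a request that names an undefined interaction might look like this (the origin and message text are hypothetical):

<acx type="response">
  <exception origin="Database.findOrder" name="client.noSuchInteraction">
    <info>Interaction findOrder is not defined for this adapter</info>
  </exception>
</acx>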

Setting XML Transports for AIS


AIS accepts XML request documents from any application as long as the XML is formatted as described in ACX Request and Response Documents. After processing the XML request, AIS returns an XML response document to the requesting client application. XML request and response documents can be passed between the application and AIS over the TCP/IP or (over the web) HTTP transport.

Passing XML Documents via TCP/IP


XML request and response documents can be passed between the application and AIS through TCP/IP as follows (a client sketch follows these steps):

To send XML via the TCP/IP transport
1. Connect to the remote machine.
2. Send the XML, in which the first 4 bytes specify the length of the document and the remainder is the XML itself. The length format is defined as a signed 32-bit integer in network format (big-endian).
If the response from the remote machine includes server.redirect, it also includes the following:
<info>ip:port</info>
where:
ip: The IP address of a remote machine to which you redirect the XML.
port: The port on the remote machine through which the connection is made.
For details about server.redirect, see The Connect Verb.
3. Open a new connection and send the XML to this ip:port location.
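The following Python sketch illustrates the framing described in step 2. It is a minimal example, not part of AIS; it assumes that the response is framed with the same 4-byte big-endian length prefix as the request, and the host and port shown in the usage comment are placeholders:

import socket
import struct

def _recv_exact(sock, n):
    # Read exactly n bytes from the socket.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed before full message was read")
        data += chunk
    return data

def send_acx_request(host, port, xml_text):
    # Send one ACX XML document with a 4-byte signed big-endian length
    # prefix and return the XML response document.
    payload = xml_text.encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack(">i", len(payload)) + payload)
        # Assumption: the response uses the same length-prefixed framing.
        (resp_len,) = struct.unpack(">i", _recv_exact(sock, 4))
        return _recv_exact(sock, resp_len).decode("utf-8")

# Usage (hypothetical host and port):
# print(send_acx_request("localhost", 2551,
#     '<?xml version="1.0"?><acx type="request"><connect adapter="adapter_name"/><ping/></acx>'))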

Passing XML Documents via HTTP (Using the NSAPI Extension)


To configure NSAPI to handle the XML documents transferred to it by the daemon, complete the steps below:

To configure NSAPI to handle the XML documents
1. Ensure that nvNsapi.dll exists on the server.
2. Create the NSAPI initialization file, NSAPI.INI, as follows:


alias=URL_address:port ...

where:

alias: An alias for the remote machine.


URL_address: The address of the remote machine.
port: The port through which the connection is established.

Example
dev=osf.acme.com:2551
prod=sun.acme.com:2551

3. Make the following modifications to the OBJCNF file:

Add the following line to the Init section:


Init fn="load-modules" shlib="navroot/bin/nvNsapi.dll" funcs="xml-form,url-line-req-xml"

Add the following line to the NameTrans section:


NameTrans fn="pfx2dir" from="/nvNsapi.dll" dir="navroot/bin" name="nvNsapi.dll"

Create the object having the name defined in step 2 and, within the object, define the function xml-form, which takes as a parameter the location of the nvNsapi.ini file:
<object name="nvNsapi.dll"> Service fn="url-line-req-xml" </object>

4. In the MIME.TYPE file, add the following line:


type=magnus-internal/xmlform exts=xml



Part XIV
Appendixes
This part contains the following appendixes:

NAVDEMO - Attunity Demo Data
Attunity SQL Syntax
National Language Support (NLS)
COBOL Data Types to Attunity Data Types
Glossary

A
NAVDEMO - Attunity Demo Data
This section includes the following topics:

NAVDEMO Overview
NAVDEMO Database
NAVDEMO Tables

NAVDEMO Overview
Attunity AIS includes demo data, called NAVDEMO, on every platform where it is installed. The demo database includes the following tables:

Customer: This table lists details of customers who order parts.
Supplier: This table lists details of suppliers of parts.
Partsupp: This table lists details of parts supplied.
TPart: This table lists details of parts that can be ordered.
Torder: This table lists details of orders.
Lineitem: This table lists details of a specific part in an order.
Nation: This table lists details of the countries where customers and suppliers live.
Region: This table lists details of the regions where customers and suppliers live.

NAVDEMO Database
All installations of Attunity Server include a demo database. You can use this database for training, benchmarking and demonstrations. The demo database structure is shown in the following figure:


Figure A1 NAVDEMO Database Structure

The columns and data types of each of these tables are listed below. The annotations for primary keys and foreign references are for clarification only and do not specify any implementation requirements such as integrity constraints.
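For illustration, the following query (a minimal sketch; the navdemo data source name matches the SELECT XML example in the SQL appendix) joins customers to the nations they live in:

SELECT c_name, n_name
FROM navdemo:customer, navdemo:nation
WHERE c_nationkey = n_nationkey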

NAVDEMO Tables
This section describes each NAVDEMO table, its column names and data types.

TPART Table
The following table lists the TPART table columns and data types:
Table A-1 TPART Table Columns and Data Types

Column Name      Data Type                 Comment
P_PARTKEY        integer                   Primary Key
P_NAME           variable text, size 55
P_MFGR           fixed text, size 25
P_BRAND          fixed text, size 10
P_TYPE           variable text, size 25
P_SIZE           integer
P_CONTAINER      fixed text, size 10
P_RETAILPRICE    decimal
P_COMMENT        variable text, size 23

SUPPLIER Table
The following table lists the SUPPLIER table columns and data types:
Table A-2 SUPPLIER Table Columns and Data Types

Column Name      Data Type                 Comment
S_SUPPKEY        integer                   Primary Key
S_NAME           variable text, size 25
S_ADDRESS        variable text, size 40
S_NATIONKEY      integer                   Foreign reference to N_NATIONKEY
S_PHONE          fixed text, size 15
S_ACCTBAL        decimal
S_COMMENT        variable text, size 101

PARTSUPP Table
The following table lists the PARTSUPP table columns and data types:
Table A-3 PARTSUPP Table Columns and Data Types

Column Name      Data Type                 Comment
PS_PARTKEY       integer                   Foreign reference to P_PARTKEY
PS_SUPPKEY       integer                   Foreign reference to S_SUPPKEY
PS_AVAILQTY      integer
PS_SUPPLYCOST    decimal
PS_COMMENT       variable text, size 199

Compound Primary Key: PS_PARTKEY, PS_SUPPKEY

CUSTOMER Table
The following table lists the CUSTOMER table columns and data types:
Table A-4 CUSTOMER Table Columns and Data Types

Column Name      Data Type                 Comment
C_CUSTKEY        integer                   Primary Key
C_NAME           variable text, size 25
C_ADDRESS        variable text, size 40
C_NATIONKEY      integer                   Foreign reference to N_NATIONKEY
C_PHONE          fixed text, size 15
C_ACCTBAL        decimal
C_MKTSEGMENT     fixed text, size 10
C_COMMENT        variable text, size 117

TORDER Table
The following table lists the TORDER table columns and data types:


Table A-5 TORDER Table Columns and Data Types

Column Name        Data Type               Comment
O_ORDERKEY         integer                 Primary Key
O_CUSTKEY          integer                 Foreign reference to C_CUSTKEY
O_ORDERSTATUS      fixed text, size 1
O_TOTALPRICE       decimal
O_ORDERDATE        date
O_ORDERPRIORITY    variable text, size 15
O_CLERK            variable text, size 15
O_SHIPPRIORITY     integer
O_COMMENT          variable text, size 79

LINEITEM Table
The following table lists the LINEITEM table columns and data types:
Table A-6 LINEITEM Table Columns and Data Types

Column Name        Data Type               Comment
L_ORDERKEY         integer                 Foreign reference to O_ORDERKEY
L_PARTKEY          integer                 Foreign reference to P_PARTKEY
L_SUPPKEY          integer                 Foreign reference to S_SUPPKEY
L_LINENUMBER       integer
L_QUANTITY         integer
L_EXTENDEDPRICE    decimal
L_DISCOUNT         decimal
L_TAX              decimal
L_RETURNFLAG       fixed text, size 1
L_LINESTATUS       fixed text, size 1
L_SHIPDATE         date
L_COMMITDATE       date
L_RECEIPTDATE      date
L_SHIPINSTRUCT     variable text, size 25
L_SHIPMODE         variable text, size 10
L_COMMENT          variable text, size 44

Compound Primary Key: L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER

NATION Table
The following table lists the NATION table columns and data types:


Table A-7 NATION Table Columns and Data Types

Column Name     Data Type                Comment
N_NATIONKEY     integer                  Primary Key. 25 nations are populated
N_NAME          fixed text, size 25
N_REGIONKEY     integer                  Foreign reference to R_REGIONKEY
N_COMMENT       variable text, size 152

REGION Table
The following table lists the REGION table columns and data types:
Table A-8 REGION Table Columns and Data Types

Column Name     Data Type                Comment
R_REGIONKEY     integer                  Primary Key. 5 regions are populated
R_NAME          variable text, size 25
R_COMMENT       variable text, size 152



B
Attunity SQL Syntax
This section describes SQL statements which include enhancements specific to AIS. The syntax used to describe the SQL statements is described in Syntax Diagrams Describing SQL. The following SQL statements are specific to Attunity Connect:

SELECT Statement
FROM Clause
WHERE Clause
GROUP BY and HAVING Clause
ORDER BY Clause
Set Operators on SELECT Statements

SELECT XML Statement
Batch Update Statements
INSERT Statement
UPDATE Statement
DELETE Statement

TABLE, INDEX CREATE and DROP Statements
CREATE TABLE Statement
DROP TABLE Statement
CREATE INDEX Statement

VIEW Statements
CREATE VIEW Statement
DROP VIEW Statement

Stored Procedure Statements
CREATE PROCEDURE Statement
DROP PROCEDURE Statement
CALL Statement

Synonym Statements
CREATE SYNONYM Statement


DROP SYNONYM Statement

GRANT Statement
Transaction Statements
BEGIN Statement
COMMIT Statement
ROLLBACK Statement

Constant Formats
Expressions
Functions
Aggregate Functions
Conditional Functions
Data Type Conversion Functions
Date and Time Functions
Numeric Functions and Arithmetic Operators
String Functions

Parameters
Search Conditions and Comparison Operators
Passthru Query Statements (bypassing Query Processing)

Syntax Diagrams Describing SQL


The following diagrams describe the SQL syntax:
Table B-1 SQL Syntax Descriptors (the diagram symbols are not reproduced here)

The beginning of an SQL statement.
The continuation of an SQL statement from another line.
The syntax is continued on another line.
The end of an SQL statement.

Required items appear on the main path. Keywords are shown in uppercase and must be typed as shown. Variables are shown in lowercase. Syntax that is described in detail elsewhere is displayed in angled brackets (<>):

A choice between items appears as a stack. If an entry is shown on the main path, you must include an item in the stack:


An optional choice between items appears as a stack below the main path. Default values are shown in bold:

An option to repeat a part of the syntax appears as an arrow returning to the left. Any syntax included between the beginning and end of the returning arrow follows the normal syntax rules for that part of the syntax diagram:

Punctuation marks, parentheses, arithmetic operators and other symbols must be entered as part of the syntax.

SELECT Statement
The SELECT statement retrieves a rowset from one or more data sources.
Figure B1 SELECT Statement Syntax

Keywords and Options


This section describes the following keywords and options within the SELECT statement:


DISTINCT: The query retrieves only unique rows. Null values are considered equal for the purposes of this keyword: only one is selected no matter how many the query encounters.
table: A specific table whose columns are retrieved. All columns are retrieved in the order they appear in the tables specified in the FROM clause (for example, T1.* specifies all columns in table T1).
col: A specific column in the table is retrieved.
(<select>): A nested SELECT statement used to flatten a hierarchical rowset.
<expr>: Expressions are constants, functions, or any combination of column names, constants, and functions connected by arithmetic operators and listed in the order in which you want to see them.
{<select>}: A nested SELECT statement reflecting a parent-child relationship. The result of the nested SELECT statement is represented by a single column, which must be identified using an alias.
alias: A new name for this output column of the retrieved rowset. This new name is used to represent any resulting rowsets and can be referenced in any of the SELECT statement clauses (WHERE, GROUP BY, etc.).
Note:

An alias can be specified either as an identifier or as a constant. For example: select n_name as a ... (where the alias is an identifier), or select n_name as 'a' ... (where the alias is a constant).

LIMIT TO m ROWS: Only m rows of the result rowset are retrieved.


Note:

The LIMIT TO syntax is useful in test and prototype environments.

OPTIONS(FORCE ORDER): The query optimizer orders the tables in the same order as they appear in the FROM clause.
FOR UPDATE: Records are locked as they are retrieved (pessimistic locking mode). This option can be applied only to the main SELECT statement.
OPTIMIZE m ROWS: The optimization strategy is selected to ensure that the first m rows are returned as quickly as possible. Optimization is set to First Row optimization and overrides any value specified for the <optimizer goal> parameter in the AIS Server environment. This option can be applied only to the main SELECT statement.

Example B1

SELECT title, type, price FROM titles WHERE price > 9.99
ORDER BY title
OPTIMIZE 5 ROWS


FROM Clause
The FROM clause specifies which sources (such as tables, rowsets and stored procedures) are used in the SELECT statement.
Figure B2 FROM Clause Syntax

Keywords and Options


This section describes the following keywords and options within the FROM clause:

ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.

Example B2 DS Code Sample SELECT * FROM DS1:Table1, DS2:Table2 WHERE...

Notes:
ds can be omitted for the default data source. You can specify the default data source as part of the connect string.
For HP NonStop platforms: Fully qualified table names must be delimited by quotes (").

owner: A table owner. If you need to define different owners for the tables, define a data source for each owner and access the tables of the data source in the same way that you access multiple data sources in a single query.

Example B3 owner Code Sample

The following SQL statement retrieves information about employees from the EMPLOYEE table, on the Ora1 database. The table owner name is WHITE:
SELECT * FROM Ora1:white.employee


Notes:
You can assign a default owner with the owner property in the binding configuration. Using Attunity Studio, you specify this property in the Default table owner field in the Advanced tab, which is displayed by right-clicking the specific data source, selecting Edit data source and choosing the Advanced tab. You cannot perform schema or catalog functions for a specific owner.

table: The name of the table or view that includes the columns to retrieve. The order in which the tables or views appear is not significant (the logical query execution order may be changed by the query optimizer) except when <join> includes a left outer join (LOJ).
Notes:

The maximum length for a table name is 64.

Example B4 table Code Sample SELECT * FROM Table1, Table2 WHERE Table1.col1 = Table2.col2

table->chapter: The identifier for data that is stored in a data source such that it can be represented hierarchically (such as information stored in arrays in RMS). For more information and examples see Hierarchical Queries.

view: The name of a view on one or more data sources. For details, see CREATE VIEW Statement.
synonym: The name of a synonym (alias) for a table, view or stored procedure. For details, see CREATE SYNONYM Statement.
stored_prc: The name of an AIS procedure. The procedure must return a resultset for this option to be valid. For details, see CREATE PROCEDURE Statement.
Note:

The maximum length for a stored query is 64.

Example B5 stored_prc Code Samples select * from T1,stored_proc1 Q1 where Q1.col1=T1.col2 and Q1.col3<T1.col4

Where stored_proc1 is a stored query that contains the following query:


select * from T2,T3 where T2.col5=T3.col6 and T2.col7>T3.col8

<passthru>: See Passthru Query Statements (bypassing Query Processing).
param: Specifies values for the stored_prc or <passthru> parameters. The number of values supplied must match exactly the number of parameters defined in the stored procedure or passthru query. If there are no parameters, no values need to be supplied; however, the parentheses () following the name are still required. See CREATE PROCEDURE Statement and Passthru Query Statements (bypassing Query Processing).


alias: An alias used either for clarity or to distinguish the different roles played in a self-join or a subquery.
Note:

For stored procedures, views and passthru queries the alias is mandatory if the rowset name is to be used as a column qualifier elsewhere in the query.

Example B6 alias Code Sample SELECT pub_name, title_id FROM publishers pu, titles t WHERE t.pub_id = pu.pub_id

Note:

Once an alias is specified, the columns can be referenced only using the alias (or if the name is not ambiguous without any identifier).

<hint>: The optimization strategy you want used instead of the optimization strategy selected by the query optimizer. If you specify more than one strategy, the optimizer uses the strategy from this list that is most efficient. If you specify strategies that you don't want used, the optimizer doesn't use any of the strategies from this list. Only specify a hint in the SQL if you have checked the optimization strategy used with the Attunity Connect Query Analyzer. Attunity recommends using hints only when advised by Attunity Support.
Note: If you specify optimization strategies that cannot be satisfied, the query execution fails. The query optimizer creates a number of possible optimization strategies. Only after these strategies have been generated is the best strategy selected from the list of available strategies, based either on any hints specified or on what the optimizer evaluates as the best strategy.

Figure B3 <hint> Syntax

WITH: The designated optimization strategy should be used.
WITHOUT: The designated optimization strategy should not be used.
SCAN: Indexes are ignored and the data is scanned according to its physical order.
INDEX: The index specified by indname is used to seek specific values in the WHERE clause or, if WITHOUT is specified, the index is ignored for the specific values.
INDEXSCAN: The index specified by indname is used to seek all values or, if WITHOUT is specified, the index is ignored.


indname: The name or an ordinal of an index.
n: The number of segments in the index that should be used. Specifying a value of zero (0) is equivalent to specifying INDEXSCAN.
FIRST: The left table in the join strategy. This table will be the first table in the optimized tree (on the left side).
LAST: The table will be the last table in the optimized tree.

Example B7 <hint> Code Sample 1 SELECT * FROM T1 <ACCESS(INDEX(emp_prim, 2))> WHERE key = 323 or key = 512

Example B8 <hint> Code Sample 2 SELECT * FROM T1 <ACCESS(WITHOUT SCAN), LAST>

ON <cond>: A condition determining the results of the <join>. The condition is used as part of the FROM clause when a join keyword is used (see below). See Search Conditions and Comparison Operators.
Note: The condition can be included in the WHERE clause instead of in the FROM clause, without impacting the results.

<join>: A join between successive tables. <join> has the following format:

Figure B4 <join> Syntax

Joins can be used only with columns containing scalar values and with aggregate-free expressions. The joins are processed left-to-right, each pair of joined tables becoming in effect a single data source for the next join connector.

, (comma): A standard cross join.
Note: Search conditions cannot be used (see Search Conditions and Comparison Operators).

ONE_TO_ONE: Each row in the left rowset has one and only one matching row in the right rowset.
ONE_TO_MANY: Each row in the left rowset may have more than one matching row in the right rowset, while each row in the right rowset has exactly one matching row in the left rowset.
MANY_TO_ONE: Each row in the right rowset may have more than one matching row in the left rowset, while each row in the left rowset has exactly one matching row in the right rowset.


MANY_TO_MANY: A row in either rowset may have more than one matching row in the other rowset.
Note:

If one of the above modifiers is used in executing a join, it directly influences whether a result row is updateable. The modifier is not considered during query optimization.

INNER: Used for clarity. The following are equivalent:


T1 INNER JOIN T2 ON ...
T1 JOIN T2 ON ...

JOIN: A standard inner join.


Note: JOIN is evaluated before commas. Parentheses may be used to affect the grouping of the data sources.

In cases where an ON condition is replaced by a WHERE condition, inner joins and cross joins are equivalent, as in the following example:
FROM T1 JOIN T2 ON T1.c1 = T2.c2
FROM T1, T2 WHERE T1.c1 = T2.c2

The exception to this equivalence occurs when the join is under the right branch of an LOJ. Such cases generate two different optimization strategies, which produce different results. The following queries, for example, generate different results:
Select * from nv_dept d LOJ (nv_emp e JOIN nv_sal s ON e.emp_id = s.emp_id) ON d.dept_id=e.dept_id

Select * from nv_dept d LOJ (nv_emp e, nv_sal s) ON d.dept_id=e.dept_id where e.emp_id = s.emp_id

Example B9 JOIN Sample 1 SELECT * FROM Table1 JOIN Table2 ON Table1.col1 = Table2.col2 WHERE...

Example B10 JOIN Sample 2 SELECT * FROM Table1, Table2 INNER JOIN Table3 WHERE...

This is equivalent to:


SELECT * FROM Table1, (Table2 INNER JOIN Table3) WHERE...

Example B11 JOIN Sample 3

Use parentheses to change the order in which the joins are processed. For example:
SELECT * FROM (Table1, Table2) INNER JOIN Table3 WHERE...

Example B12 JOIN Sample 4 SELECT * FROM Table1 ONE_TO_MANY JOIN Table2 Attunity SQL Syntax B-9

ON Table1.col1 = Table2.col2 WHERE...

LOJ: The join includes all rows from the left rowset regardless of whether there is a matching row in the right rowset. These joins are called left outer joins. Every row from the left rowset is first matched (using the ON <cond>) with rows from the right rowset, or with null values if there are no matching rows in the right rowset; the predicates from the WHERE clause are then applied to filter the result.
Note:

Search conditions (ON <cond>) must be used (see Search Conditions and Comparison Operators). LOJ is evaluated before commas. Parentheses may be used to affect the grouping of the data sources. The keywords LEFT JOIN or LEFT OUTER JOIN can be used instead of LOJ to improve readability. For conformity with ODBC format you can use {OJ source LEFT OUTER JOIN source ON <cond>}.

Example B13 LOJ Samples SELECT * FROM Table1, Table2 LOJ Table3 ON Table2.col1 = Table3.col3 WHERE...

The following join, lists all authors whose last name starts with R or greater, and retrieves all the publishers (if any) in their city:
SELECT au_fname, au_lname, pub_name FROM authors LOJ publishers ON authors.city = publishers.city WHERE au_lname >= 'R'

ROJ: The join includes all rows from the right rowset regardless of whether there is a matching row in the left rowset. These joins are called right outer joins (ROJ). Every row from the right rowset is first matched (using the ON <cond>) with rows from the left rowset, or with null values if there are no matching rows in the left rowset; the predicates from the WHERE clause are then applied to filter the result.
Note:

Search conditions (ON <cond>) must be used (see Search Conditions and Comparison Operators). ROJ is evaluated before commas. Parentheses may be used to affect the grouping of the data sources. The keywords RIGHT JOIN or RIGHT OUTER JOIN can be used instead of ROJ to improve readability. For conformity with ODBC format you can use {OJ source RIGHT OUTER JOIN source ON <cond>}.

Example B14 ROJ Samples SELECT * FROM Table1, Table2 ROJ Table3 ON Table2.col1 = Table3.col3 WHERE...


The following join lists all authors whose last name starts with R or greater, and retrieves all the publishers (if any) in their city. This example produces the same result as the example for a left outer join above. Note that in this ROJ example the FROM clause lists publishers before authors. In the LOJ example, authors are listed before publishers.
SELECT au_fname, au_lname, pub_name FROM publishers ROJ authors ON authors.city = publishers.city WHERE au_lname >= 'R'

NESTEDJOIN: Forces the query optimizer to use a nested join strategy when joining the tables. For general details about forcing a specific optimization strategy, see <hint>, above.

Example B15 NESTEDJOIN Sample SELECT * FROM Table1, Table2 LOJ <NESTEDJOIN> Table3 ON Table1.col1 = Table3.col3 WHERE...

SEMIJOIN: Forces the query optimizer to use a semi-join strategy when joining the tables. For general details about forcing a specific optimization strategy, see <hint>, above.
HASHJOIN: Forces the query optimizer to use a hash join strategy when joining the tables. For general details about forcing a specific optimization strategy, see <hint>, above.

WHERE Clause
The WHERE clause sets the search conditions for filtering rows in a SELECT, UPDATE, or DELETE statement.
Figure B5 WHERE Clause Syntax

Keywords and Options


This section describes the following keywords and options within the WHERE clause:

<cond>: The search condition. The syntax for the search conditions is described on Search Conditions and Comparison Operators.
Note:

If more than one search condition is used in a single statement, the conditions need to be connected with AND or OR.

Parameters can be specified in the WHERE clause. For more information, see Parameters.
Identifiers in the WHERE clause in a subquery can come from the higher level (a level that includes the nested query). Subqueries nest a SELECT statement inside a WHERE clause or another subquery.


The following subquery returns a list of the types of books published by each publisher. The main query selects the names of those publishers who publish at least one Business title:
Example B16 WHERE Clause Sample 1 SELECT DISTINCT pub_name FROM publishers WHERE 'business' IN (SELECT type FROM titles WHERE pub_id = publishers.pub_id)

Example B17 WHERE Clause Samples WHERE advance * 2 > total_sales * price WHERE phone NOT '415%' (finds all rows in which the phone number does not begin with 415) WHERE advance < 5000 OR advance IS NULL WHERE (type = 'business' OR type = 'psychology') AND advance > 5500 WHERE total_sales BETWEEN 4095 AND 12000 WHERE state IN ('CA', 'IN', 'MD') WHERE total_sales > ? (a parameter is used) WHERE au.last_name LIKE 'R%' AND EXISTS (SELECT * FROM pubs WHERE au.city = pubs.city)

GROUP BY and HAVING Clause


The GROUP BY and HAVING clauses are used in SELECT statements to divide a table into groups and optionally, to filter the groups.
Figure B6 GROUP BY and HAVING Clauses Syntax

Keywords and Options


This section describes the following keywords and options within the GROUP BY and HAVING clauses:

<expr>: Expressions are constants, functions, or any combination of column names, constants, and functions connected by arithmetic operators.
Note:

In certain circumstances (dictated by the data source) a GROUP BY clause may return an error if it contains more than eight columns.


The GROUP BY clause cannot reference a constant expression it returns an error prior to execution. For example, the following returns a syntax error:
SELECT * FROM employee GROUP BY 3*7

Column names and expressions that do not appear in the list of columns after the SELECT keyword can be used in GROUP BY and HAVING clauses. For example:
SELECT pub_id, SUM (advance), AVG(price) FROM titles GROUP BY city HAVING SUM(advance) > 15000 AND AVG(price) < 10

This query groups the results by the cities of the various publishers.

select_num: The ordinal number of the column, from the list of columns after the SELECT keyword, by which to group the results.
alias: An alias for an output column specified in the list of columns retrieved by the query.
HAVING <cond>: Search conditions for the grouping. The search conditions must be aggregate expressions (they cannot include subqueries). For more information, see Search Conditions and Comparison Operators.

Example B18 GROUP BY and HAVING Clauses Code Samples

The following example calculates the average advance and the sum of the sales for each type of book:
SELECT type, AVG(advance), SUM(total_sales) FROM titles GROUP BY type

The following example groups the results by a combination of type and pub_id:
SELECT type, pub_id, AVG(advance), SUM(total_sales) FROM titles GROUP BY type, pub_id

The following example displays the results for groups matching the conditions in the HAVING clause:
SELECT pub_id, SUM (advance), AVG(price) FROM titles GROUP BY pub_id HAVING SUM(advance) > 15000 AND AVG(price) < 10 AND pub_id > '0700'

ORDER BY Clause
The ORDER BY clause returns query results sorted by the specified columns.


Figure B7 ORDER BY Clause Syntax

Keywords and Options


This section describes the following keywords and options within the ORDER BY clause:

<expr>: Expressions are constants, functions, or any combination of column names, constants, and functions connected by arithmetic operators.

Example B19 ORDER BY Clause Code Sample SELECT emp_id, emp_name FROM employee ORDER BY emp_name

The query results are sorted by emp_name, in ascending order. Column names and expressions that do not appear in the list of columns after the SELECT statement can be used in an ORDER BY clause. For example:
SELECT title, type, price FROM titles WHERE price > 9.99 ORDER BY author_id

This query orders the results by the authors of the books that are retrieved.

select_num: The ordinal number of the column to order the results by from the list of columns after the SELECT keyword.

Example B20 ORDER BY Clause Code Sample SELECT emp_id, emp_name FROM employee ORDER BY 2

alias: An alias for an output column specified in the list of columns retrieved by the query.
ASC: Sorts the query results in ascending order.
DESC: Sorts the query results in descending order.

Additional Information

ORDER BY can be used to display query results in a meaningful order. Without an ORDER BY clause you cannot control the order in which query results are returned.
With ORDER BY, null values may come before or after all others. Sorting is carried out inside Attunity Connect or inside a data source containing some of the data, as determined by the query optimizer.


Since different data sources use different strategies regarding the ordering of nulls, the null order is unpredictable. However, for any one query, nulls always either sort first or sort last.

ORDER BY cannot be used in subqueries and queries with child rowsets. ORDER BY can be used in the main (unnested) part of a query and in queries that use nested SELECT statements to reflect parent-child relationships.

Example B21 ORDER BY Clause Code Samples SELECT title, type, price FROM titles WHERE price > 9.99 ORDER BY title SELECT title AS BookName, type AS Mytype FROM titles ORDER BY 2 SELECT Dept, Max(Sal) FROM Sal GROUP BY Dept ORDER BY 2 SELECT Firstname, Lastname FROM Authors ORDER BY (Lastname || Firstname)

The following query orders rowsets by salary:


SELECT emp_name, (select salary FROM E -> salary) FROM employee E ORDER BY 2

The following query returns an error since ORDER BY is used in the nested query:
SELECT emp_name, (select salary FROM E -> salary ORDER BY 1) FROM employee E

Set Operators on SELECT Statements


Set operations combine the results of two SELECT statements into one result. The SELECT statement can be any SELECT statement supported by Attunity Connect (such as a SELECT statement in a CREATE VIEW statement, or in an hierarchical query, and so forth).
Figure B8 Set Operators Syntax

When a statement includes an INTERSECT operator with other set operators and parentheses are not used to enforce an order of operations, the INTERSECT operation is performed first. The other set operators are performed as they appear in the query, from left to right.


Keywords and Options


This section describes the following keywords and options within the Set operators:

<select>: A SELECT statement that is non-hierarchical. The SELECT statement may be scrollable but cannot be updateable (and the query can be run only in read-only mode).
Note:

When the ORDER BY clause is included within a SELECT statement itself, rather than applied to the result of the set operations, this clause may specify either the name or ordinal number of one of the columns.

Each SELECT statement involved in the set operation must return the same number of columns. The columns in the same position of the returned queries must be of the same data type or coercible, although they may be of different lengths. All the returned data of a particular column will have the size of the longest item in the column (shorter column values are padded, as necessary). For example, when handling character data of different lengths, all returned results are the length of the largest character data returned, with the other results padded to this length. The set operation statement takes the names and data types from the first SELECT statement. If a column is NULL, the data type for this column in the next table of the statement is used.
Note:

BLOBs or chapters cannot be used in set operations.

When you combine tables with different structures (as when some tables have missing columns and different data types), you can use dummy columns and convert functions, as in the following:
select n_name, n_regionkey from nation union select r_name, convert (null,int) from region order by 1;

UNION: Returns the rows of the <select> statements, discarding any duplicate rows.
UNION ALL: Returns all the rows returned by the <select> statements, including duplicate rows.
INTERSECT: Returns rows common to both result sets returned by the <select> statements, discarding duplicate rows.
MINUS: Returns rows that appear only in the first <select> statement, discarding duplicate rows.
<order by>: Returns the combined query results of the set operation sorted by the specified column. You can order the results only by the ordinal number of a column of the rows returned by the statement.
<limit to>: Limits the number of rows returned in the retrieved rowset.


Note:

The LIMIT TO syntax is useful in test and prototype environments. For the <order by> or <limit to> clauses to apply only to the final <select> statement, rather than to the combined results of the union, you must explicitly group the <order by> or <limit to> with the final <select>, as follows:

<select> union (<select> <order by>) <select> union (<select> <limit to>)

Example B22 Set Operators Code Samples

select n_name from nation union select r_name from region

(select c_name from customer union select s_name from supplier) intersect select n_name from nation

Note that the result of this query is different from the following:
select c_name from customer union select s_name from supplier intersect select n_name from nation

In this case, the INTERSECT operator is applied before the UNION operator.

SELECT XML Statement


The SELECT XML statement retrieves a rowset from a specified table as XML, where the XML reflects the true structure of the table, including array and variant structures. To use a SELECT XML statement you must first set the exposeXmlField property of the binding in the Misc section of the binding environment properties.
Figure B9 SELECT XML Statement Syntax

Keywords and Options


This section describes the following keywords and options within the SELECT XML statement:

XML: Indicates that the data returned is to be displayed in XML format.


Note:

The XML keyword can be changed in the binding environment properties, by specifying another name for the keyword. This is done by setting the xmlFieldName property.


ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.
Note: ds can be omitted for the default data source. You can specify the default data source as part of the connect string.

For HP NonStop platforms, fully qualified table names must be delimited by quotes (").

owner: A table owner.
table: The specific table whose columns are retrieved.
<cond>: Search conditions.
LIMIT TO m ROWS: Only m rows of the result rowset are retrieved.

Example B23 SELECT XML Code Sample

SELECT XML FROM navdemo:nation where n_nationkey=3

Batch Update Statements


Batch update statements include the following statements:

The INSERT Statement
The UPDATE Statement
The DELETE Statement

INSERT Statement
The INSERT statement adds one row of data, or the rows returned by another query, to a single base table.
Figure B10 INSERT Statement Syntax

Keywords and Options


This section describes the following keywords and options within the INSERT statement:

ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.
Note: ds can be omitted for the default data source. You can specify the default data source as part of the connect string.


For HP NonStop platforms, fully qualified table names must be delimited by quotes (").

owner: A table owner.


Note: You can assign a default owner with the owner property in the binding configuration. Using Attunity Studio, you specify this property in the Default table owner field in the Advanced tab, which is displayed by right-clicking the specific data source, selecting Edit data source and then choosing the Advanced tab.

table: The name of the table containing rows to be inserted. The maximum length for a table name is 64.
Note: If the table contains a field with data type BLOB or CLOB, you must define a unique index for the table before you can insert data into the table. For details of creating an index, see CREATE INDEX Statement.

chapter: The identifier for data that is stored hierarchically in a data source (for example, information stored in arrays in RMS). See Hierarchical Queries.
column: The name of a column in the table. Column names need to be specified only if the values do not match the columns of the table in order or in count. The columns must be in the same order and count as the values.
Note:

Columns not specified are assigned with Null values.

<select>: The data returned by a SELECT statement is copied to the table. See SELECT Statement.
Note:

Make sure that the data types of data retrieved by the SELECT statement match the data types of the columns inserted in the table.

Example B24 <select> from INSERT Code Sample

INSERT INTO ORACLE:table1 SELECT * FROM DISAM:table2

VALUES (<constant>, ...): A list of constant values to be inserted into the columns of the new row.
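For illustration, a minimal sketch of an INSERT that supplies explicit values, reusing the employee table created in Example B26 (the inserted values are hypothetical):

INSERT INTO Ora1:white.employee (emp_num, emp_name) VALUES (1001, 'Smith')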

UPDATE Statement
The UPDATE statement updates rows in one or more base tables. No result rowset object is returned to the user. The base tables affected must each be updateable, or the operation fails. A table is updateable only if, for each row in the base table, there is at most one corresponding row in the retrieved rowset, subject to the updateability rules described below. Note that within this limitation, Attunity Connect supports updateable joins.


Figure B11 UPDATE Statement Syntax

Keywords and Options


This section describes the following keywords and options within the UPDATE statement:

ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.
Note: ds can be omitted for the default data source. You can specify the default data source as part of the connect string.

For HP NonStop platforms, fully qualified table names must be delimited by quotes (").

owner: A table owner.


Note:

You can assign a default owner with the owner property in the binding configuration. Using Attunity Studio, you specify this property in the Default table owner field in the Advanced tab, which is displayed by right-clicking the specific data source, selecting Edit data source and then choosing the Advanced tab.

table: The name of the table containing rows to be updated.
chapter: The identifier for data that is stored hierarchically in a data source (for example, information stored in arrays in RMS).
<hint>: The optimization strategy you want used instead of the optimization strategy selected by the query optimizer. If you specify more than one strategy, the optimizer uses the strategy from this list that is most efficient. If you specify strategies that you don't want used, the optimizer doesn't use any of the strategies from this list.
Note:

If you specify optimization strategies that cannot be satisfied, the query execution fails. The query optimizer creates a number of possible optimization strategies. Only after these strategies have been generated is the best strategy selected from the list of available strategies, based either on any hints specified or on what the optimizer evaluates as the best strategy.

Only specify a hint in the SQL if you have checked the optimization strategy used with the Attunity Connect Query Analyzer. Attunity recommends using hints only when advised by Attunity Support.

Figure B12 <hint> Syntax

WITH: The designated optimization strategy should be used.
WITHOUT: The designated optimization strategy should not be used.
SCAN: Indexes are ignored and the data is scanned according to its physical order.
INDEX: The index specified by indname is used to seek for specific values in the WHERE clause or, if WITHOUT is specified, the index is ignored for the specific values.
INDEXSCAN: The index specified by indname is used to seek for all values or, if WITHOUT is specified, the index is ignored.
indname: The name or an ordinal of an index.
n: The number of segments in the index that should be used. Specifying a value of zero (0) is equivalent to specifying INDEXSCAN.
FIRST: The leftmost table in the join strategy. This table will be the first table in the optimized tree (on the left side).
LAST: The table will be the last table in the optimized tree.

ON <cond>: A condition determining the results of the <join> (see below). For full details see Search Conditions and Comparison Operators.
<join>: A join between successive tables. <join> has the following format:
, (comma): A standard cross join.
Note: Search conditions cannot be used (see Search Conditions and Comparison Operators).

ONE_TO_ONE: Each row in the left rowset has one and only one matching row in the right rowset.
ONE_TO_MANY: Each row in the left rowset may have more than one matching row in the right rowset, while each row in the right rowset has exactly one matching row in the left rowset.
MANY_TO_ONE: Each row in the right rowset may have more than one matching row in the left rowset, while each row in the left rowset has exactly one matching row in the right rowset.
MANY_TO_MANY: A row in either rowset may have more than one matching row in the other rowset.
Note:

If one of the above modifiers is used in executing a join, it directly influences whether a result row is updateable. The modifier is not considered during query optimization.


INNER: Used for clarity. The following are equivalent:


T1 INNER JOIN T2 ON <cond>
T1 JOIN T2 ON <cond>

JOIN: A standard inner join.


Note: JOIN is evaluated before commas. Parentheses may be used to affect the grouping of the data sources.

In cases where an ON condition is replaced by a WHERE condition, inner joins and cross joins are equivalent, as in the following example:
FROM T1 JOIN T2 ON T1.c1 = T2.c2
FROM T1, T2 WHERE T1.c1 = T2.c2

The exception to this equivalence occurs when the join is under the right branch of an LOJ. Such cases generate two different optimization strategies, which produce different results.
LOJ: The join includes all rows from the left rowset regardless of whether there is a matching row in the right rowset. These joins are called left outer joins. Every row from the left rowset is first matched (using the ON <cond>) with rows from the right rowset, or with null values if there are no matching rows in the right rowset; the predicates from the WHERE clause are then applied to filter the result.
Note:

Search conditions (ON <cond>) must be used (see Search Conditions and Comparison Operators). LOJ is evaluated before commas. Parentheses may be used to affect the grouping of the data sources. The keywords LEFT JOIN or LEFT OUTER JOIN can be used instead of LOJ to improve readability. For conformity with ODBC format you can use {OJ source LEFT OUTER JOIN source ON <cond>}.

ROJ: The join includes all rows from the right rowset regardless of whether there is a matching row in the left rowset. These joins are called right outer joins (ROJ). Every row from the right rowset is first matched (using the ON <cond>) with rows from the left rowset, or with null values if there are no matching rows in the left rowset; the predicates from the WHERE clause are then applied to filter the result.


Note:

Search conditions (ON <cond>) must be used (see Search Conditions and Comparison Operators). ROJ is evaluated before commas. Parentheses may be used to affect the grouping of the data sources. The keywords RIGHT JOIN or RIGHT OUTER JOIN can be used instead of ROJ to improve readability. For conformity with ODBC format you can use {OJ source RIGHT OUTER JOIN source ON <cond>}.

NESTEDJOIN: Forces the query optimizer to use a nested join strategy when joining the tables. For general details about forcing a specific optimization strategy, see <hint>, above.
SEMIJOIN: Forces the query optimizer to use a semi-join strategy when joining the tables. For general details about forcing a specific optimization strategy, see <hint>, above.
HASHJOIN: Forces the query optimizer to use a hash join strategy when joining the tables. For general details about forcing a specific optimization strategy, see <hint>, above.
Note:

If the join includes a relational data source, it is recommended to have an index defined, otherwise Attunity Connect generates a virtual unique index from all the columns in the table.

SET column: The name of a column in the table. A column needs to be present only if the expressions specified do not match in order or in count the columns of the table. The columns must be in the same order and count as the values.
Note: A SET clause that includes an expression col1=col2 is supported only if col1 and col2 are from the same updateable table.

value: The value to be assigned to the specified column; it may be any scalar-valued expression valid in a WHERE clause.
WHERE <cond>: See WHERE Clause.

Updateability rules (described below) determine how to include more than one table in an UPDATE statement.
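For illustration, a minimal sketch of a single-table update (the table and column names here are hypothetical):

UPDATE disam:nv_emp SET salary = salary * 1.05 WHERE dept_id = 10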

Additional Information
Expressions for SET and WHERE clauses are evaluated prior to any actual updates, and the order of the column assignments in the SET clause is not significant.

Updateability Rules
It is not semantically meaningful to specify UPDATE Table1, Table2,... since it assumes a Many-to-Many join between Table1 and Table2, making neither table updateable. If more than one table is involved, join modifiers such as One-to-One or Many-to-One must be used, and at least one table must be at the opposite end of a -To-One join and thus be updateable.


The table on the right side of a left outer join or the left side of a right outer join is not updateable. A SET clause that includes an expression col1=col2 is supported only if col1 and col2 are from the same updateable table.

DELETE Statement
The DELETE statement deletes rows in one or more base tables. No result rowset object is returned to the user. The base tables affected must each be updateable, or the operation fails.
Figure B13 DELETE Statement Syntax

Keywords and Options


This section describes the following keywords and options within the DELETE statement:

ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.
Note: ds can be omitted for the default data source. You can specify the default data source as part of the connect string.

For HP NonStop platforms, fully qualified table names must be delimited by quotes (").

owner: A table owner.


Note:

You can assign a default owner with the owner property in the binding configuration. Using Attunity Studio, you specify this property in the Default table owner field in the Advanced tab, which is displayed by right-clicking the specific data source, selecting Edit data source and then choosing the Advanced tab.

table: The name of the table containing rows to be deleted.
chapter: The identifier for data that is stored hierarchically in a data source (for example, information stored in arrays in RMS).
<hint>: The optimization strategy you want used instead of the optimization strategy selected by the query optimizer. If you specify more than one strategy, the optimizer uses the strategy from this list that is most efficient. If you specify strategies that you don't want used, the optimizer doesn't use any of the strategies from this list.


Note: If you specify optimization strategies that cannot be satisfied, the query execution fails. The query optimizer creates a number of possible optimization strategies. Only after these strategies have been generated is the best strategy selected from the list of available strategies, based either on any hints specified or on what the optimizer evaluates as the best strategy.

Only specify a hint in the SQL if you have checked the optimization strategy used with the Attunity Connect Query Analyzer. Attunity recommends using hints only when advised by Attunity Support.
Figure B14 <hint> Syntax

WITH: The designated optimization strategy should be used.
WITHOUT: The designated optimization strategy should not be used.
SCAN: Indexes are ignored and the data is scanned according to its physical order.
INDEX: The index specified by indname is used to seek for specific values in the WHERE clause or, if WITHOUT is specified, the index is ignored for the specific values.
INDEXSCAN: The index specified by indname is used to seek for all values or, if WITHOUT is specified, the index is ignored.
indname: The name or an ordinal of an index.
n: The number of segments in the index that should be used. Specifying a value of zero (0) is equivalent to specifying INDEXSCAN.
FIRST: The leftmost table in the join strategy. This table will be the first table in the optimized tree (on the left side).
LAST: The table will be the last table in the optimized tree.

Example B25 <hint> Code Samples

SELECT * FROM T1 <ACCESS(INDEX(emp_prim, 2))> WHERE key = 323 or key = 512
SELECT * FROM T1 <ACCESS(WITHOUT SCAN), LAST>

ON <cond>: A condition determining the results of the <join> (see below). For full details see Search Conditions and Comparison Operators.
<join>: A join between successive tables. <join> has the following format:


Figure B15 <join> Syntax

, (comma): A standard cross join.


Note: Search conditions cannot be used (see Search Conditions and Comparison Operators).

ONE_TO_ONE: Each row in the left rowset has one and only one matching row in the right rowset.
ONE_TO_MANY: Each row in the left rowset may have more than one matching row in the right rowset, while each row in the right rowset has exactly one matching row in the left rowset.
MANY_TO_ONE: Each row in the right rowset may have more than one matching row in the left rowset, while each row in the left rowset has exactly one matching row in the right rowset.
MANY_TO_MANY: A row in either rowset may have more than one matching row in the other rowset.
Note:

If one of the above modifiers is used in executing a join, it directly influences whether a result row is updateable. The modifier is not considered during query optimization.

INNER: Used for clarity. The following are equivalent:


T1 INNER JOIN T2 ON <cond>
T1 JOIN T2 ON <cond>

JOIN: A standard inner join.


Note: JOIN is evaluated before commas. Parentheses may be used to affect the grouping of the data sources.

In cases where an ON condition is replaced by a WHERE condition, inner joins and cross joins are equivalent, as in the following example:
FROM T1 JOIN T2 ON T1.c1 = T2.c2
FROM T1, T2 WHERE T1.c1 = T2.c2

The exception to this equivalence occurs when the join is under the right branch of an LOJ. Such cases generate two different optimization strategies, which produce different results.
LOJ: The join includes all rows from the left rowset regardless of whether there is a matching row in the right rowset. These joins are called left outer joins. Every row from the left rowset is first matched (using the ON <cond>) with rows from the right rowset, or with null values if there are no matching rows in the right rowset; the predicates from the WHERE clause are then applied to filter the result.
Note:

Search conditions (ON <cond>) must be used (see Search Conditions and Comparison Operators). LOJ is evaluated before commas. Parentheses may be used to affect the grouping of the data sources. The keywords LEFT JOIN or LEFT OUTER JOIN can be used instead of LOJ to improve readability. For conformity with ODBC format you can use {OJ source LEFT OUTER JOIN source ON <cond>}.

ROJ: The join includes all rows from the right rowset regardless of whether there is a matching row in the left rowset. These joins are called right outer joins (ROJ). Every row from the right rowset is first matched (using the ON <cond>) with rows from the left rowset, or with null values if there are no matching rows in the left rowset; the predicates from the WHERE clause are then applied to filter the result.
Note:

Search conditions (ON <cond>) must be used (see Search Conditions and Comparison Operators). ROJ is evaluated before commas. Parentheses may be used to affect the grouping of the data sources. The keywords RIGHT JOIN or RIGHT OUTER JOIN can be used instead of ROJ to improve readability. For conformity with ODBC format you can use {OJ source RIGHT OUTER JOIN source ON <cond>}.

NESTEDJOIN: Forces the query optimizer to use a nested join strategy when joining the tables. For general details about forcing a specific optimization strategy, see <hint>, above.
SEMIJOIN: Forces the query optimizer to use a semi-join strategy when joining the tables. For general details about forcing a specific optimization strategy, see <hint>, above.
HASHJOIN: Forces the query optimizer to use a hash join strategy when joining the tables. For general details about forcing a specific optimization strategy, see <hint>, above.
Note:

If the join includes a relational data source, it is recommended to have an index defined, otherwise Attunity Connect generates a virtual unique index from all the columns in the table.

WHERE <cond>: See WHERE Clause.

Updateability rules (described below) determine how to include more than one table in a DELETE statement.

Additional Information
Expressions (particularly aggregates in subqueries) are evaluated prior to any actual delete operations.

Updateability Rules
It is not semantically meaningful to specify DELETE Table1, Table2,... since it assumes a Many-to-Many join between Table1 and Table2, making neither table deletable. If more than one table is involved, join modifiers such as One-to-One or Many-to-One must be used, and at least one table must be at the opposite end of a -To-One join and thus be deletable.
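For illustration, a minimal sketch of a single-table delete (the table and column names here are hypothetical):

DELETE FROM disam:nv_emp WHERE dept_id = 10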

TABLE, INDEX CREATE and DROP Statements


Table and Index create and drop statements include the following statements:

The CREATE TABLE Statement
The DROP TABLE Statement
The CREATE INDEX Statement

CREATE TABLE Statement


The CREATE TABLE statement creates a new table in the specified data source. CREATE TABLE statements are translated by the Query Processor into the native syntax required by each specific data source.
Note:

This statement is valid for all supported data sources except ADABAS, DBMS, and VSAM under CICS.

Figure B16 CREATE TABLE Statement Syntax

Keywords and Options


This section describes the following keywords and options within the CREATE TABLE statement:

ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.
Note: ds can be omitted for the default data source. You can specify the default data source as part of the connect string.

For HP NonStop platforms, fully qualified table names must be delimited by quotes (").

owner: A table owner.


Note: You can assign a default owner with the owner property in the binding configuration. Using Attunity Studio, you specify this property in the Default table owner field in the Advanced tab, which is displayed by right-clicking the specific data source, selecting Edit data source and then choosing the Advanced tab.

table: The name of the table to be created. The maximum allowed length for a table name is 64.
column: A column in the table.
data type: The data type for the column, which can be one of the following:
Char[(m)]. The default value for m is 1.
[Char] varchar(m)
Tinyint
Smallint
Integer
Numeric[(p[,s])]. The default value of p (precision) is 10 and s (scale) is 0.
Float
Double
Date
Time
Timestamp (consisting of date and time components for use as a timestamp)
Text (a text large object)
Image (a binary large object)

The following table lists the data types which are mapped to the data source data types by the relevant driver:
Table B2 Data Types Mapping

Relational: DB2, Informix, Ingres II (Open Ingres), Oracle, Rdb, SQL/MP (HP NonStop only), SQL Server, Sybase
Non-relational: ADABAS, CISAM, DBMS (OpenVMS only), DISAM, Enscribe, IMS/DB (z/OS only), RMS (OpenVMS only), VSAM under CICS and VSAM (z/OS only)
Generic: Flat Files, ODBC, OLE DB-FS (Flat File System), OLE DB, SQL (Relational), Text-Delimited File

NULL: Null values are allowed for the column (this is the default).
NOT NULL: Null values are not allowed for the column.

Example B26 CREATE TABLE Statement Code Sample

The following SQL statement creates a table named EMPLOYEE, on the Ora1 data source. The name of the table owner is WHITE.
CREATE TABLE Ora1:white.employee (emp_num integer NOT NULL, emp_name varchar(20))

For HP NonStop platforms, an explicit CATALOG clause must be added to the CREATE statement, otherwise it is assumed that the current subvolume is the catalog.
Example B27 CREATE TABLE Statement for HP NonStop Platforms

To execute the following statement, issue it directly to the data source using a passthru query, with the text={{query}} syntax.
create table $d0117.nssdata.ET_md0(PLANT_CODE SMALLINT not null, DEPARTMENT_CODE SMALLINT not null, FACILITY_NO INTEGER not null, PRODUCTION_DATE DATE not null,DELAYTIMESUMALLCMT FLOAT, DELAYEVENTSSUMALLC INTEGER, primary key(PLANT_CODE, DEPARTMENT_CODE, FACILITY_NO, PRODUCTION_DATE)) CATALOG $d0117.nssdata

DROP TABLE Statement


The DROP TABLE statement deletes an existing table at the specified data source. DROP TABLE statements are translated by the Query Processor into the native syntax required by each specific data source.
Note:

This statement is valid for all supported data sources except ADABAS and DBMS.

Figure B17 DROP TABLE Statement Syntax


Keywords and Options


This section describes the following keywords and options within the DROP TABLE statement:

ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.
Note: ds can be omitted for the default data source. You can specify the default data source as part of the connect string.

For HP NonStop platforms, fully qualified table names must be delimited by quotes (").

owner: A table owner.


Note: You can assign a default owner with the owner property in the binding configuration. Using Attunity Studio, you specify this property in the Default table owner field in the Advanced tab, which is displayed by right-clicking the specific data source, selecting Edit data source and then choosing the Advanced tab.

table: The name of the table to be deleted. If this name is the name of a synonym, then the table which the synonym refers to is dropped.

Example B28 DROP TABLE Statement Code Sample

The following SQL statement deletes the EMPLOYEE table from the Ora1 data source. The table owner name is WHITE.
DROP TABLE Ora1:white.employee

CREATE INDEX Statement


The CREATE INDEX statement creates a new index on the specified table at the specified data source. This statement is valid for all data sources that support indexes. CREATE INDEX statements are translated by the Query Processor into the native syntax required by each specific data source.
Figure B18 CREATE INDEX Statement Syntax

Keywords and Options


This section describes the following keywords and options within the CREATE INDEX statement:


UNIQUE: The entries in the index will be unique.
index_name: The name of the index to be created. The maximum length for an index name is 64.
ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.
Note: ds can be omitted for the default data source. You can specify the default data source as part of the connect string.

For HP NonStop platforms, fully qualified table names must be delimited by quotes (").

owner: A table owner.


Note:

You can assign a default owner with the owner property in the binding configuration. Using Attunity Studio, you specify this property in the Default table owner field in the Advanced tab, which is displayed by right-clicking the specific data source, selecting Edit data source and then choosing the Advanced tab.

table: The name of the table the index is created for.
column: The columns (segments) constituting the index.

Example B29 CREATE INDEX Statement Code Sample

CREATE INDEX empname ON employee(last_name, first_name)

VIEW Statements
This section describes the following SQL VIEW statements:
Note:

In Attunity Studio, views can only be defined using the Virtual data source and the Virtual driver.

The CREATE VIEW Statement
The DROP VIEW Statement

CREATE VIEW Statement


The CREATE VIEW statement creates a read-only view that can be used later in the FROM clause of an SQL query, or wherever a subquery can be specified. A view is stored by default in the SYS data source. The view is stored in the repository and not in the back-end data source.
Notes:

A view cannot accept parameters.
You cannot create a view in a single session after dropping a view with the same name.


Figure B19 CREATE VIEW Statement Syntax

Keywords and Options


This section describes the following keywords and options within the CREATE VIEW statement:

ds: The name of the data source of type Virtual. This enables you to specify a location for the view other than the default data source. If you do not want to use the default SYS data source, define a data source of type Virtual and specify the name of this data source as the ds value when you create the view. For information about defining a Virtual data source, see Virtual Data Source.
Notes:

To create a view on the back-end database, issue the CREATE VIEW statement directly to the data source using a passthru query, with the text={{query}} syntax.
You may want to store views in a location other than the SYS data source for organizational reasons and because storing a large number of views in SYS can degrade performance.

view_name: The name used to identify the view. The maximum length for a view name is 64.
<select>: A SELECT statement. For the syntax, see SELECT Statement.

Note:

A view cannot include parameters.

Duplicate column names are not allowed when creating a view. You must use an alias for a duplicate column name. For example, the following view uses the alias COL1 to avoid duplicate dept_id columns:

create view v1 as select d.dept_id COL1, e.dept_id from disam:nv_dept d, disam:nv_emp e where e.dept_id = d.dept_id

All columns in a view are returned unqualified. For example, the columns returned from the previous view are: COL1 and dept_id.
Example B30 CREATE VIEW Statement Code Sample

create view view1 as select * from T2,T3 where T2.col5=T3.col6 and T2.col7>T3.col8


This view is stored in the current default data source or in the SYS data source if no default data source was set. To use this view if no default data source is set, issue the following statement:
select * from T1, sys:view1 where view1.col1=T1.col2 and view1.col3<T1.col4

Note:

You do not need to supply an alias if the rowset name is used as a column qualifier elsewhere in the query (as in this example).

DROP VIEW Statement


The DROP VIEW statement deletes the specified view.
Figure B20 DROP VIEW Statement Syntax

Keywords and Options


This section describes the following keywords and options within the DROP VIEW statement:

ds: The name of the data source other than the default data source, where the view is located.
view_name: The name of the view to be deleted. The view is dropped from the repository and not from the back-end data source.
Note:

To drop a view of the back-end database, issue the DROP VIEW statement using a passthru query with the text={{query}} syntax.
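For illustration, the view created in Example B30 can be removed as follows, assuming it is stored in the SYS data source:

DROP VIEW sys:view1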

Stored Procedure Statements


This section describes the following SQL stored procedure statements:

The CREATE PROCEDURE Statement
The DROP PROCEDURE Statement
The CALL Statement

CREATE PROCEDURE Statement


A query can be stored for reuse by creating a stored query, using a CREATE PROCEDURE statement. The stored procedure can then be used in SELECT statements. The following types of stored procedures, which can be executed using a CALL statement or referenced within a SELECT statement, are supported:

Stored procedures native to a particular relational data source.


Support for stored procedures native to particular data sources is described per data source.

Attunity Connect procedures. An Attunity Connect procedure is a user-written DLL that returns a rowset. The returned rowset is handled in the same way that data from any data source is handled. For details about Attunity Connect procedures, see Procedure Data Sources.
Note: When you access a stored procedure in a SELECT statement, you must use parentheses after the stored procedure, even when you do not supply the parameters.

A stored query is a SELECT statement that accesses a data source or another stored procedure. You create a stored query with the CREATE PROCEDURE statement. This statement creates a query that can be used later in the FROM clause of an SQL query, or wherever a subquery can be specified. Stored procedures can have parameters (specified in the WHERE clause of the SELECT statement), and can be used only for data retrieval operations.
Figure B21 CREATE PROCEDURE Statement Syntax

Keywords and Options


This section describes the following keywords and options within the CREATE PROCEDURE statement:

ds: The name of the data source of type Virtual. This enables you to specify a location for the stored procedure other than the default data source. If you do not want to use the default SYS data source, define a data source of type Virtual and specify the name of this data source as the ds value when you create the stored procedure. For information about defining a Virtual data source, see Virtual Data Source.
Note:

To create a stored query on the back-end data source, issue the CREATE PROCEDURE statement directly to the data source using a passthru query with the text={{query}} syntax.
You may want to store a procedure in a location other than the SYS data source for organizational reasons. Also, storing a large number of stored procedures in SYS can degrade performance.

stored_proc: The name used to identify the stored procedure. The maximum length for a procedure name is 64.
<select>: A SELECT statement. For the syntax, see SELECT Statement.


Duplicate column names are not allowed when creating a stored procedure. You must use an alias for a duplicate column name. For example, the following stored procedure returns an error:
create procedure sp1 as select d.dept_id, e.dept_id from disam:nv_dept d, disam:nv_emp e where e.dept_id = d.dept_id

Use an alias to correctly create the stored procedure, as follows:


create procedure sp1 as select d.dept_id COL1, e.dept_id from disam:nv_dept d, disam:nv_emp e where e.dept_id = d.dept_id

All columns in a stored procedure are returned unqualified. For example, the columns returned from the previous stored procedure are: COL1 and dept_id.
Example B31 CREATE PROCEDURE Code Sample 1

P1 is a stored procedure defined as follows:


CREATE PROCEDURE P1 AS SELECT * FROM T1 WHERE COL1 > ?

This stored procedure is stored in the SYS data source if no default data source is set. To use this stored procedure, you include SYS as the ds specification in the query accessing the stored procedure. You can execute the procedure with a value for the parameter as follows:
CALL sys:P1(20)

Example B32 CREATE PROCEDURE Code Sample 2

SP_salaries is a stored procedure defined as follows:


CREATE PROCEDURE sp_salaries AS select * from sal where emp_id =?

This stored procedure is stored in the SYS data source if no default data source is set. To use this stored procedure, you include SYS as the ds specification in the query accessing the stored procedure. The following query joins the EMPLOYEE table with sp_salaries, retrieving information on employee salaries:
select * from emp,sys:sp_salaries(emp.emp_id)

Example B33 CREATE PROCEDURE Code Sample 3

stored_proc1 is a stored procedure defined as follows:


CREATE PROCEDURE stored_proc1 as select * from T2,T3 where T2.col5=T3.col6 and T2.col7>T3.col8

The following query joins the T1 table with stored_proc1, as follows:


select * from T1,sys:stored_proc1() Q1 where Q1.col1=T1.col2 and Q1.col3<T1.col4

Note:

You must supply an alias if the rowset name is used as a column qualifier elsewhere in the query (as in this example).


DROP PROCEDURE Statement


The DROP PROCEDURE statement deletes the specified stored procedure.
Figure B22 DROP PROCEDURE Statement Syntax

Keywords and Options


This section describes the following keywords and options within the DROP PROCEDURE statement:

ds: The name of the data source other than the default data source, where the stored query is located.
stored_proc: The name of the stored procedure to be deleted. The stored procedure is dropped from the repository and not from the back-end data source.
Note:

To drop a stored query of a back-end database, issue the DROP PROCEDURE statement directly to the data source using a passthru query with the text={{query}} syntax.
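For illustration, the stored query created in Example B32 can be removed as follows, assuming it is stored in the SYS data source:

DROP PROCEDURE sys:sp_salaries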

CALL Statement
The CALL statement executes a stored procedure and returns a rowset. A stored procedure can be:

An Attunity Connect procedure.
A native data source stored procedure.


Note:

Stored queries, created with a CREATE PROCEDURE statement, cannot be called using a CALL statement.

The maximum number of nested CALL statements is 20.


Figure B23 CALL Statement Syntax

Keywords and Options


This section describes the following keywords and options within the CALL statement:

?=: Retrieves the return value from the stored procedure.
ds: The name of the data source where the stored procedure is created. The ds entry is determined by the type of stored procedure you are calling:


For a stored procedure native to a specific data source, specify the data source name as it is defined in the binding configuration.
For an Attunity Connect procedure, specify the data source name defined for the Attunity Connect procedure in the binding configuration.

owner: A table owner.


Note:

You can assign a default owner with the owner property in the binding configuration. Using Attunity Studio, you specify this property in the Default table owner field in the Advanced tab, which is displayed by right-clicking the specific data source, selecting Edit data source and then choosing the Advanced tab.

stored_proc: The name of the stored procedure being called.
param: A list of parameters.

The following rules apply also to passing parameters to Attunity Connect procedures (procedures are described in Procedure Data Source (Application Connector)):

The number of values supplied must match exactly the number of parameters defined in the stored procedure.
If the stored procedure has no parameters, no values need to be supplied; however, the parentheses () following the name of the stored procedure are still required.
If several values are specified, they must be separated by commas.
You can specify values for parameters positionally or by name. When specified positionally, you list the values in the order expected by the data source or Attunity Connect procedure. This format is always valid. Specifying the values by name is valid only if the stored procedure defined named parameters; in this case, the values may be specified in the form parameter_name = value, separated by commas. The order in which the values are specified is not significant. These two formats for specifying values cannot be mixed in any single invocation of a stored procedure.

A value may be a literal or an expression. As an extension of standard ANSI 92 SQL, you can specify, as values, expressions involving columns of tables that appeared previously in the FROM list. For example, the following is valid where V is a parameterized stored procedure and col1 is a column of table A:
SELECT * FROM A, V(A.col1)

This creates an implicit join between A and V: for each row of A, the current value of col1 is used to compute a new row based on the stored procedure V. For this join to be valid, A must appear before V in the FROM list.
Example B34 CALL Statement Code Sample

You can execute the procedure P1 with a single parameter, as follows:


CALL P1(20)


Synonym Statements
This section describes the following SQL synonym statements:

The CREATE SYNONYM Statement
The DROP SYNONYM Statement

CREATE SYNONYM Statement


The CREATE SYNONYM statement creates a synonym (alias) for a table or view. You can use the synonym name instead of the name of the table or view that it replaces.
Note:

You cannot create a synonym in a single session after dropping a synonym with the same name.

A synonym can be used to implement a virtual data source. A virtual data source presents a view to the user such that only selected tables from one or more data sources are available, as if from a single data source. You populate a virtual data source by defining synonyms for the tables, views, and stored procedures you want the virtual data source to make available. For details about virtual databases see Using a Virtual Database.
Figure B24 CREATE SYNONYM Statement Syntax

Keywords and Options


This section describes the following keywords and options within the CREATE SYNONYM statement:

ds: The data source name as specified in the binding configuration. Specify for this ds entry the name of the virtual data source (if it is not the default).
Note:

You can specify the default data source as part of the connect string.

synonym_name: The name of the synonym to be created. The name must not be the name of an existing table or synonym.
ds: The data source name as specified in the binding configuration. This ds entry is the data source where the table referenced by the virtual data source resides.


owner: A table owner.


Note:

You can assign a default owner with the owner property in the binding configuration. Using Attunity Studio, you specify this property in the Default table owner field in the Advanced tab, which is displayed by right-clicking the specific data source, selecting Edit data source and then choosing the Advanced tab.

table: The name of a table for which the synonym is created. The name cannot be the name of an existing synonym. The synonym can be used as if it is the table.
Note:

If the table does not exist, the synonym will automatically refer to the table when it is created.

view: The name of the existing view for which the synonym is created.
synonym: The name of an existing synonym, which this new synonym references. You can only specify this option if creating a virtual data source entry.

Example B35 CREATE SYNONYM Statement Code Sample

create synonym vdb1:emp for sybase:employees

This stores the emp synonym referencing the Sybase employees table with the virtual data source vdb1.

DROP SYNONYM Statement


The DROP SYNONYM statement deletes a synonym (the table the synonym refers to is not dropped).
Note:

You cannot create a synonym in a single session after dropping a synonym with the same name.

Figure B25 DROP SYNONYM Statement Syntax

Keywords and Options


This section describes the following keywords and options within the DROP SYNONYM statement:

ds: The name of the data source other than the default internal SYS data source, where the synonym is located.
synonym_name: The name of the synonym to be deleted. The synonym is dropped from the repository and not from the back-end data source.
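For illustration, the synonym created in Example B35 can be removed as follows:

DROP SYNONYM vdb1:emp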


GRANT Statement
The GRANT statement grants all the permissions for a table to a specified user.
Note: The data source being accessed must support the GRANT statement.
Figure B26 GRANT Statement Syntax

Keywords and Options


This section describes the following keywords and options within the GRANT statement:

user: The user who subsequently has all permissions on the specified table.
ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.
Note:

ds can be omitted for the default data source. You can specify the default data source as part of the connect string.

For HP NonStop platforms, fully qualified table names must be delimited by quotes (").

table: The name of the table for which permissions are granted.
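As an illustrative sketch only (the exact keyword order follows the syntax in Figure B26; the user name scott is hypothetical, and the table is the one from Example B26):

GRANT ALL ON Ora1:white.employee TO scott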

Transaction Statements
This section describes the following SQL transaction statements:

The BEGIN Statement
The COMMIT Statement
The ROLLBACK Statement

BEGIN Statement
The BEGIN statement starts a transaction on all data sources, which continues until a ROLLBACK or COMMIT statement is executed.
Note: For an XML data source, the BEGIN statement causes the table data to be stored in memory.


Figure B27 BEGIN Statement Syntax

COMMIT Statement
The COMMIT statement commits a transaction.
Note:

For an XML data source, the COMMIT statement releases the table data from the memory.

Figure B28 COMMIT Statement Syntax

ROLLBACK Statement
The ROLLBACK statement aborts a transaction.
Figure B29 ROLLBACK Statement Syntax
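For illustration, a minimal sketch of a transaction, assuming the three statements are issued in sequence on the same connection (the table and column names are hypothetical):

BEGIN
UPDATE disam:nv_emp SET dept_id = 20 WHERE dept_id = 10
COMMIT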

Constant Formats
The following table lists the supported constant formats:
Table B3 Constant Formats

nnnnnn (long constant): 124234
123.34 (double constant): 123.34
12E-3 (scientific constant): 12E-3
'abcde' (string constant): 'The weather is sunny'
NULL (NULL constant): NULL
Hexadecimal: 0x1AB5

Expressions
An expression is a combination of one or more constants, literals, column names, parameters, subqueries and functions, connected by operators, that returns a single value. The components of an expression may mix data types, within the constraints of the supported coercions. You can use the arithmetic operators in expressions, as listed in the following table:


Table B4 Arithmetic Operators

+: Addition or string concatenation
-: Subtraction
*: Multiplication
/: Division

These arithmetic operators can be used on all Attunity data types where the value contains only numeric values.

Operator Precedence
Operators have the following precedence levels, where 1 is the highest level and 5 is the lowest:
1. unary (single argument) -, NOT
2. *, /, =
3. binary (two arguments) +, -
4. AND
5. OR

When all operators in an expression are of the same level, the order of execution is from left to right. You can change the order of operation with parentheses; the most deeply nested expression is then handled first.
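For example, a sketch using the titles table from the aggregate function examples later in this section (the column names are illustrative):

-- AND (level 4) binds more tightly than OR (level 5), and 2 * 3 is evaluated before the addition, so
SELECT * FROM titles WHERE price > 10 + 2 * 3 AND type = 'history' OR type = 'biography'
-- is read as: (price > 16 AND type = 'history') OR type = 'biography'.
-- Parentheses force the other grouping:
SELECT * FROM titles WHERE price > 16 AND (type = 'history' OR type = 'biography')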

Single Quotation Marks in String Expressions


To include single quotation marks in string expressions, you need to place an additional single quotation mark before the quotation mark you want to include. For example:
WHERE col_3 = 'He said ''Let''s Go'''

Functions
Functions are typically used in expressions. Functions take one or more arguments and return a single value. All functions can be used with the ODBC format {fn function(value)}. This section describes the following SQL functions:

Aggregate Functions
Conditional Functions
Data Type Conversion Functions
Date and Time Functions
Numeric Functions and Arithmetic Operators
String Functions


Aggregate Functions
The aggregate functions generate summary values that appear as new columns in the query results. Aggregate functions can be used in the select list of a query or subquery or in a HAVING clause. Aggregate functions often appear in a statement that includes a GROUP BY clause. The aggregate functions, their individual syntax, and the results they produce are described in the following table. The expression is computed once for every qualifying input row.
Table B5 Aggregate Functions

AVG([ALL|DISTINCT] expr): Returns the average (over the input rows) of the values in expr.
ALL: Applies the function to all values (the default).
DISTINCT: Eliminates duplicate values before applying AVG.

COUNT([ALL|DISTINCT] expr): Returns the number of non-null values in expr.
ALL: Applies the function to all values (the default).
DISTINCT: Returns the number of unique non-null values.
COUNT: Can be used with all data types except text and image. Null values are ignored.
COUNT(column_name): Returns a value of 0 on empty tables, on columns that contain only null values, and on groups that contain only null values.

COUNT(*): Returns the number of qualifying input rows in this group. All rows are counted, regardless of any null values. The number of rows returned must be less than 2,147,483,647 or the wrong value is returned.

MAX(expr): Returns the highest value (among the input rows) of the values in expr. With character columns, MAX finds the highest value in the sort sequence. Null values are ignored.

MIN(expr): Returns the lowest value (among the input rows) of the values in expr. With character columns, MIN finds the lowest value in the sort sequence. Null values are ignored.

SUM([ALL|DISTINCT] expr): Returns the total (over the input rows) of the values in expr. SUM can be used only on numeric (integer or floating point) data types. Null values are ignored.
ALL: Applies the function to all values (the default).
DISTINCT: Eliminates duplicate values before applying SUM.

Additional Information

Aggregate functions can be used in:
The list of values to be retrieved as specified after the SELECT keyword.
A HAVING clause of a SELECT statement.

When a list of values to be retrieved, as specified after the SELECT keyword, includes an aggregate, all the columns must either have aggregate functions applied to them or be in the GROUP BY list. Aggregate functions can be applied to all the rows in a table, in which case they produce a single value, a scalar aggregate. Aggregate functions can also be applied to all the rows that have the same value in a specified column or expression (using the GROUP BY and, optionally, the HAVING clause), in which case they produce a value for each group, a vector aggregate. The results of the aggregate functions are shown as new columns.


Note:

Aggregate functions cannot be used in WHERE clauses.

Example B36 Aggregate Functions Code Samples

The following example calculates the average advance and the sum of the total sales for all history books. Both functions produce a single summary value for all the retrieved rows. The aggregates are computed over the entire titles table, and are not broken into groups.
SELECT AVG(advance), SUM(total_sales) FROM titles WHERE type = 'history'

Used with a GROUP BY clause, both aggregate functions produce single values for each group, rather than for the entire table. The following example produces summary values for each type of book:
SELECT type, AVG(advance), SUM(total_sales) FROM titles GROUP BY type

The following example finds the number of different cities in which authors live:
SELECT COUNT(distinct city) FROM authors

The following example lists the book types in the titles table but eliminates the types that include one or no books:
SELECT type FROM titles GROUP BY type HAVING count (*) > 1

The following example groups the titles table by publishers and includes only those groups of publishers who have paid more than $25,000 in total advances and whose average book price is more than $15:
SELECT pub_id, SUM(advance), AVG(price) FROM titles GROUP BY pub_id HAVING SUM(advance) > 25000 AND AVG(price) > 15

Conditional Functions
The supported conditional functions are listed and described in the following table:
Table B6 Conditional Functions

CASE expr WHEN search1 THEN result1 WHEN search2 THEN result2 ... WHEN searchN THEN resultN [ELSE default] END: Compares expr to each search value, one at a time. If expr is equal to search, the function returns the corresponding result. If no match is found, the function returns default, or NULL if the ELSE statement is omitted. The data types of search1...searchN must be comparable to the data type of expr and are converted to this data type if necessary. The data types of result1...resultN must be the same (including the same precision and scale for non-atomic data types except for strings). For strings, every result is padded to have the same length as the longest result. The data type returned by this function is that of result.

CASE WHEN condition1 THEN result1 WHEN condition2 THEN result2 ... WHEN conditionN THEN resultN [ELSE default] END: Evaluates the conditions in the order in which they appear. Returns the corresponding result for the first condition that evaluates to TRUE. The data types of result1...resultN must be the same (including the same precision and scale for non-atomic data types except for strings). For strings, every result is padded to have the same length as the longest result. The data type returned by this function is that of result.


Table B6 (Cont.) Conditional Functions

IFNULL(expr1,expr2), NVL(expr1,expr2): Returns expr2 if expr1 evaluates to NULL; otherwise, returns expr1. expr2 is mapped to the data type of expr1 and the result data type is the data type of expr1.
Note: Use the CONVERT function (see Data Type Conversion Functions) to ensure that the data types match. For example, if the expression involves multiplying an integer by a numeric, the result can be a double, causing an error. In this type of situation, convert all values to double before applying the CASE statement.
Example B37 Conditional Function Code Samples

CASE with Search:


SELECT emp_id, CASE emp_id WHEN 11 then 'eleven' WHEN 21 then 'twenty one' ELSE 'other' END FROM emp

CASE with Condition:


SELECT emp_id, CASE WHEN emp_id>10 then 'twelve' WHEN emp_id>20 then 'twenty three' ELSE 'eight' END FROM emp

Data Type Conversion Functions


The supported data type conversion functions are listed and described in the following table:
Table B7 Data Type Conversion Functions

CONVERT(expression, datatype): Returns the expression converted to the specified target data type. The valid data types are:
Char[(m)]. The default value of m is 1.
[Char] varchar(m)
Tinyint
Smallint
Integer
Numeric[(p[,s])]. The default value of p (precision) is 10 and s (scale) is 0.
Float
Double
Date
Time
Timestamp
Notes: expression cannot be a BLOB. The converted expression is delegated to the data source for processing whenever possible. Note that data sources handle strings differently and therefore the result of converting an expression to a string can be different per data source.

NAV_CONVERT(expression, datatype): Returns the expression converted to the specified target data type and does not issue the query to the data source for processing. The valid data types are the same as for the CONVERT function, listed above.


Table B7 (Cont.) Data Type Conversion Functions

TO_GRAPHIC(string): Returns the string converted to a graphic string (a double byte).
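For illustration, a minimal sketch that uses CONVERT to return a numeric column as a string, reusing the employee table created in Example B26:

SELECT emp_name, CONVERT(emp_num, char(10)) FROM Ora1:white.employee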

Date and Time Functions


Date and time functions use string representations of dates and time with the following forms:

Date Format
Any of the following forms are valid:

dd-mm-yy[yy]
dd-mmm-yy[yy]
yyyy-mm-dd
yyyy-mmm-dd

You can use a hyphen (-) or a slash (/) as a separator (for example, dd/mm/yy[yy]). If you are using characters (JAN, FEB, MAR, etc.) to denote the month, use the format dd-mmm-yy[yy]. When using the format dd-mm-yy or dd-mmm-yy, the year2000Policy parameter in the environment settings determines the century (1900 or 2000). For details, refer to the YEAR2000_POLICY parameter in the Miscellaneous Category for the Environment Properties.

Time Format
hh:mm:ss[.f]: The placeholder f represents billionths of a second, and can be up to nine characters long.

Timestamp Format
Timestamp functions use string representations of timestamps, which combine the date and time formats described above. The supported date functions are listed in the following table:
Table B8 Date Functions

{d 'date string'}: Accepts a string designation of the date, prefixed by d. For example: select * from ORD where datefield = {d '01-01-03'}. Returns the present value of SQL_DATE.

{t 'time string'}: Accepts a string designation of the time, prefixed by t. This is equivalent to the TO_TIME function. Returns the current value of SQL_TIME.

{ts 'timestamp string'}: Accepts a string designation of the date and time, prefixed by ts. This is equivalent to the TO_DATE function. Returns the present value of SQL_TIMESTAMP.

ADD_MONTHS(date, n): Accepts a date and integer (n) and returns the date plus n months. If date is the last day of the month or if the resulting month has fewer days than the day component of date, the result is the last day of the resulting month. Otherwise, the result has the same day component as the date.

Table B8 (Cont.) Date Functions

CURRENT_DATE(), TODAY(): Returns the current date as SQL_DATE data type.

CURRENT_TIME(): Returns the current time as SQL_TIME data type.

CURRENT_TIMESTAMP(), NOW(): Returns the current day and time as SQL_TIMESTAMP data type.

DATEADD(datepart, number, date): In addition to a number and date, accepts one of the following for datepart: year, quarter, month, day, week, hour, minute, second. Returns a date value equal to the date plus the number of dateparts.

DATEDIFF(datepart, date1, date2): Accepts two dates and one of the following for datepart: year, quarter, month, day, week, hour, minute, second. Returns the number of datepart boundaries crossed between the two dates (date1 and date2).

DAY(expression): Accepts a date or timestamp expression and returns the day part of the expression as SQL_INTEGER data type.
DAYNAME(date): Accepts a date or timestamp expression and returns the name of the day for the input date.
DAYOFWEEK(date): Accepts a date or timestamp expression and returns the number of the day in the week for the input date (Sunday is 1, Monday is 2, ...).
DAYOFYEAR(date): Accepts a date or timestamp expression and returns the number of days since the beginning of the year.
LAST_DAY(date): Accepts a date and returns the date of the last day of the month that contains date.
LEAPYEAR(date): Accepts a date or timestamp expression and returns 1 if the year is a leap year and 0 if it isn't.
HOUR(expression): Accepts a time or timestamp expression and returns the hour part of the expression as SQL_INTEGER data type.
MINUTE(expression): Accepts a time or timestamp expression and returns the minute part of the expression as SQL_INTEGER data type.
MONTH(expression): Accepts a date or timestamp expression and returns the month part of the expression as SQL_INTEGER data type.
MONTHNAME(date): Accepts a date or timestamp expression and returns the name of the month.
B-48 AIS User Guide and Reference

Table B8 (Cont.) Date Functions


Date Function MONTHS_BETWEEN(date1, date2) Result Accepts two dates and returns the number of months between them. If date1 is later than date2, the result is positive; if date1 is earlier than date2, the result is negative. If date1 and date2 are either the same days of the month or both last days of months, the result is an integer; otherwise, the fractional portion of the result is calculated based on a 31-day month and the difference in the time components of the dates is considered. Accepts a date and a weekday named by char that is later than the date date. char must be a day of the week in the sessions date language either the full name or the abbreviation. The minimum number of letters required is the number of letters in the abbreviated version; any characters immediately following the valid abbreviation are ignored. The return value has the same hours, minutes, and seconds component as the argument date. Accepts a date or timestamp expression and returns the quarter for the input date (January to March is 1, April to June is 2, ...). Accepts a time or timestamp expression and returns the seconds part of the expression as SQL_INTEGER data type. Accepts a string designating the date and returns the date as SQL_ timestamp. For example: select * from ORD where datefield = TO_DATE(01-01-03) Note: Use the CONVERT function to return the data as an SQL_DATE. Returns the present value of SQL_DATE. TO_TIME(time, literal) Accepts a string designating the time and returns the time as SQL_TIME. For example: select * from APPOINT where datefield = TO_DATE(01-01-03) and timefield = TO_TIME(14:00:00) Note: Times are represented in terms of a 24-hour clock. Returns the present value of SQL_TIME. WEEK(date) YEAR(expression) Accepts expressions with either a DATE or TIMESTAMP data type and returns the number of the week in the year for the input date. Accepts an expression and returns the year as SQL_INTEGER data type.

NEXT_DAY(date, char)

QUARTER(date) SECOND(expression) TO_DATE(date, literal)
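The following queries illustrate a few of these functions working together, reusing the ORD table and its datefield column from the examples above:

select ADD_MONTHS(datefield, 3), DAYNAME(datefield), QUARTER(datefield) from ORD
select * from ORD where DATEDIFF(day, datefield, CURRENT_DATE()) > 30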

Date Comparison Semantics


With date comparison operations, you can use the {d date_literal} function to compensate for the lack of uniformity among date data types and literals on different platforms. When comparing a column of a date data type with a date literal value, prefix the date literal, as follows:
date_col comparison_operator {d date_literal}

where the comparison operator can be one of =, <, >, <=, >=, <>. Comparisons are done to the level of seconds (milliseconds are ignored).
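For example, to select the rows of the ORD table (from the examples above) whose datefield falls within a given range:

select * from ORD where datefield >= {d '01-01-03'} and datefield <= {d '31-12-03'}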

Numeric Functions and Arithmetic Operators


The supported numeric functions are listed in the following table:


Table B9 Numeric Functions

ABS(expression): Returns the absolute value of expression.

CEIL(expression): Returns the smallest integer value bigger than or equal to expression.

EXP(expression): Returns e to the power of expression.

FLOOR(expression): Returns the biggest integer value less than or equal to expression.

LOG10(expression): Returns the base 10 logarithm of expression.

LN(expression): Returns the natural logarithm of expression.

MOD(a, b): Returns the remainder of a/b. This function converts the given expression to a long data type and returns a long data type.

PI(): Returns Pi to 14 decimal places.

POWER(expressionx, expressiony): Returns expressionx to the power of expressiony.

ROUND(expressionx, expressiony): Returns expressionx rounded to expressiony places to the right of the decimal point. If expressiony is negative, returns expressionx rounded to the nearest ten to the power of expressiony. For example: ROUND(153.193, 1) = 153.2 and ROUND(153.193, -2) = 200. The ROUND function is similar to the TRUNC function (described below). expressiony is converted to a long data type.

SIGN(expression): Returns -1 if expression < 0, 0 if expression = 0, and 1 if expression > 0.

SQRT(expression): Returns the square root of expression.

trig(expression), where trig is one of SIN, ASIN, SINH, COS, ACOS, COSH, TAN, ATAN, TANH: Returns the result of the trigonometric function.

TRUNC(expressionx, expressiony): Returns expressionx truncated to expressiony places to the right of the decimal point. If expressiony is negative, returns expressionx truncated to the nearest ten to the power of expressiony. For example: TRUNC(153.193, 1) = 153.1 and TRUNC(153.193, -2) = 100. The TRUNC function is similar to the ROUND function, but always rounds down. TRUNC(n, 0) is equivalent to FLOOR(n).

All functions (except for the MOD function) convert the given expression to a double data type and return a double data type.
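For instance, using literal arguments (any numeric column expression behaves the same way; the query reuses the ORD table from the earlier examples):

select ROUND(153.193, -2), TRUNC(153.193, 1), MOD(10, 3), SIGN(-42) from ORD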

String Functions
The supported string functions are listed in the following table:


Table B10 String Functions

ASCII(expression): Returns the ASCII value of the first character. On EBCDIC computers, returns the EBCDIC value. expression is converted to a string data type. This function applies to single-byte character sets.

CHR(expression): Returns the character with the ASCII value of expression. On EBCDIC computers, returns the EBCDIC value. expression is converted to a string data type.

(expr1 || expr2), CONCAT(expr1, expr2): Returns a string containing expr1 with expr2 appended to it.

LCASE(string), LOWER(string): Returns string in lower case.

LENGTH(string): Returns the number of bytes (as SQL_INTEGER) of string.

LPAD(expr1, expr2): Returns a cstring with length equal to expr2, padded by spaces to the left. expr2 must evaluate to a non-negative long. If expr2 is less than the length of expr1, the first characters are returned. If expr2 is a long constant, the maximum size of the returned cstring equals the size of expr1; otherwise, the size is 2000. expression is converted to a string data type. For example: LPAD('abcd', 5) = ' abcd' and LPAD('abcd', 3) = 'abc'. Compare with RPAD(), below.

LTRIM(expression): Returns a cstring without leading spaces. The length of this cstring is equal to the length of expression.

MBLENGTH(graphic_string): Returns the number of characters (as SQL_INTEGER) of graphic_string.

MBPOSITION(substring IN graphic_string): Searches for the presence of substring in graphic_string. If substring exists, a numeric value is returned indicating the position (in characters) of substring in graphic_string. If substring is not found, 0 is returned.

MBSUBSTR(graphic_string, i [, j]): Returns a substring of graphic_string, starting i characters from the beginning of graphic_string and ending j characters later, or at the end of graphic_string if j is not specified.

POSITION(substring IN string) (see footnote 1): Searches for substring in string. If substring exists, a value indicating the position (in bytes) of substring in string is returned. 0 indicates that substring was not found.

RPAD(expr1, expr2): Returns a cstring of length equal to expr2, padded by spaces to the right. expr2 must evaluate to a non-negative long. If expr2 is less than the length of expr1, the first characters are returned. If expr2 is a long constant, the returned cstring has a maximum size equal to the size of expr1; otherwise, the size is 2000. expression is converted to a string data type. For example: RPAD('abcd', 5) = 'abcd ' and RPAD('abcd', 3) = 'abc'. Compare with LPAD(), above.

RTRIM(expression): Returns a cstring without trailing spaces. The length of this cstring is equal to the length of expression. expression is converted to a string data type. Compare with LTRIM(), above.

SUBSTRING(string, i [, j]), SUBSTR(string, i [, j]): Returns a substring of string, offset i bytes from the beginning of string and ending j bytes later, or at the end of the string if j is not specified.

UCASE(string), UPPER(string): Returns string in upper case.

Footnote 1: When using the POSITION function against Ingres or Ingres II, the POSITION function is translated into the Ingres and Ingres II LOCATE function. This function returns the first position of the specified string. If the string is not found, the maximum size of the field plus one is returned. Thus, when the POSITION function is used with the MAX function and the specified string is not found in all the rows, or when it is used with the MIN function and the specified string is not found in any of the rows, the result is the size of the field plus one.

Note: The maximum length allowed for a string literal is 350 characters.
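For instance, reusing the publishers table that appears in the examples later in this section:

select UPPER(pub_name), SUBSTR(pub_name, 1, 3), LENGTH(pub_name) from publishers where POSITION('Math' IN pub_name) > 0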

Parameters
A parameter is used like a constant and may be used anywhere a literal value is valid, except in a list of retrieved data after a SELECT keyword. A value must be assigned to the parameter before the query is executed. A parameter is specified either by a colon followed by the parameter name (for example: WHERE col1 = :param1) or by a question mark (for example: WHERE col1 = ?). A parameter prefixed by a colon can be assigned values by position or by name. A parameter prefixed by a question mark can only be assigned values by position. Although the ODBC API does not support keywords for passing parameter values to a statement prior to execution, you can still use passthru queries calling Attunity Connect procedures, specifying parameter values by keywords. See Passthru Query Statements (bypassing Query Processing) and CREATE PROCEDURE Statement.
Note: Colons and question marks cannot be used together in the same query.
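For example, the following two statements parameterize the same query against the publishers table used elsewhere in this section; the first accepts a value by position or by the name param1, the second by position only:

SELECT * FROM publishers WHERE pub_name = :param1
SELECT * FROM publishers WHERE pub_name = ?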

Search Conditions and Comparison Operators


Search conditions are predicates (or a combination of predicates) used in WHERE and HAVING clauses to select a subset of rows from a table. Search conditions often employ comparison operators, logical operators, and other common SQL keywords and options.


Figure B30 Search Condition Syntax

Keywords and Options


This section describes the following keywords and options within the SEARCH condition:

NOT: Negates any logical expression or keywords such as LIKE, NULL, BETWEEN, and IN.

<expr>: An expression is a combination of one or more constants, literals, column names, parameters, subqueries and functions, connected by operators, that returns a single value. The components of an expression may mix data types, within the constraints of the supported coercions. A comparison between a string and a numeric value is performed according to the numeric order, and not the string order. To perform a string comparison, make sure that both sides of the comparison are string values, or place single quotes around the numeric value.

Example B38 Search Condition Code Sample

Comparing 100 with 5 produces a different result when comparing numbers or strings: 100 is greater than 5, but '100' is less than '5'. To convert a numeric value to a string value, use the CONVERT function. See Data Type Conversion Functions. Comparison operators are:
=, <, >, <=, >=, <>
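To illustrate, assuming a hypothetical table T1 with a column col1:

WHERE col1 > 5     /* numeric comparison: a value of 100 is greater than 5 */
WHERE col1 > '5'   /* string comparison: the value '100' is less than '5' */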


Notes:

When comparing data of type SQL_CHAR or SQL_VARCHAR, the operator < means closer to the beginning of the alphabet and the operator > means closer to the end of the alphabet. For the purposes of comparison, trailing blanks are ignored for data of type SQL_CHAR but are significant for SQL_VARCHAR. When the data types are mixed, the coercion is always to SQL_CHAR, and trailing blanks are ignored. Literals are also considered to be of type SQL_VARCHAR. For example, SQL_VARCHAR with a value 'Dirk' is not the same as a value 'Dirk '. When comparing dates, < means earlier and > means later.

Single quotes must be placed around character and date data used with a comparison operator. For example:
= 'Bennett'
> '94609'

AND: Joins two conditions and returns results when both conditions are true.

OR: Joins two conditions and returns results when either condition is true.
Note: When both AND and OR are used in a statement, OR is evaluated after AND. Use parentheses to change the order of execution.
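For example, the following two conditions differ; in the first, the AND is applied before the OR:

WHERE type = 'business' OR type = 'psychology' AND advance > 5500
WHERE (type = 'business' OR type = 'psychology') AND advance > 5500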

NULL: Use to search for null values (or for all values except null values). For example:
WHERE advance < 5000 OR advance IS NULL

An expression with an arithmetic operator evaluates to NULL if any of the operands are null. Null values in tables or views being joined never match each other.

BETWEEN: Denotes the beginning of an inclusive range of values. The keyword AND denotes the end of a range of values. For example:
WHERE val BETWEEN x AND y

Note: If the first value specified is greater than the second value, no rows are returned.

IN: Specifies whether a given value matches any one of a list of expressions (all of which must be of the same data type) or is included among the values retrieved by the subquery (<select>). IN <select>: A SELECT statement (subquery). For the syntax, see SELECT Statement. The maximum number of nested SELECT statements is 10. The SELECT statement can return only one column (multiple rows can be returned).
Note: The IN <select> syntax can be used only in a WHERE clause.


string: A string of characters or an expression that evaluates to a string of characters (for example, CONCAT(firstname, lastname)).

LIKE: The character string is a matching pattern for columns of data type SQL_CHAR or SQL_VARCHAR. For example:
WHERE FAMILY_NAME LIKE 'JOHNSON'

<expr>: An expression. When the expression is a string of characters and wildcards, it must be enclosed in quotes. <expr> can include the following wildcard characters: % (percent symbol): A string of zero or more characters. _ (underscore): A single character.
Note: <expr> must include at least one character and fewer than 256 characters.

char: An escape character used to search for literal occurrences of wildcard characters. The following example defines and uses the pound sign (#) as an escape character:
WHERE '10%DIS' LIKE '10#%D%' ESCAPE '#'

EXISTS <select>: Tests for the existence of at least one row in the rowset from the nested SELECT statement (subquery).
Note: The EXISTS <select> syntax can be used only in a WHERE clause.

Example B39 EXISTS <select> Code Sample

The following example retrieves the names of publishers who have published mathematics books:
SELECT DISTINCT pub_name FROM publishers WHERE EXISTS (SELECT * FROM titles WHERE pub_id = publishers.pub_id AND type = 'mathematics')

ALL <select>: True when the comparison holds for every row retrieved by the nested SELECT statement (subquery). The SELECT statement can return only one column.

Notes: The ALL <select> syntax can be used only in a WHERE clause. If neither ALL nor ANY is specified, the subquery can return only a single row.

Example B40 ALL <select> Code Sample

The following example retrieves the books that commanded an advance greater than the largest advance paid by CompMath Publishers:


SELECT A.title FROM titles A WHERE advance > ALL (SELECT advance FROM titles B, publishers WHERE B.pub_id = publishers.pub_id AND pub_name = 'CompMath Publishers')

ANY <select>: True when any value retrieved by the nested SELECT statement (subquery) matches the expression specified as the first operand. The SELECT statement can return only one column.

Notes: The ANY <select> syntax can be used only in a WHERE clause. If neither ANY nor ALL is specified, the subquery can return only a single row.

Example B41 ANY <select> Code Sample

The following example retrieves the authors who live in the same city as some publisher:
SELECT au_lname, au_fname FROM authors WHERE city = ANY (SELECT city FROM publishers)

Example B42 Search Condition Code Samples

WHERE advance * 2 > total_sales * price
WHERE phone NOT LIKE '415%' /* finds all rows in which the phone number does not begin with 415 */
WHERE advance < 5000 OR advance IS NULL
WHERE (type = 'business' OR type = 'psychology') AND advance > 5500
WHERE total_sales BETWEEN 4095 AND 12000
WHERE state IN ('CA', 'IN', 'MD')
WHERE au.last_name LIKE 'R%' AND EXISTS (SELECT * FROM pubs WHERE au.city = pubs.city)

Passthru Query Statements (bypassing Query Processing)


DDL statements and SELECT statements can be issued directly to a relational data source, instead of being processed by the Query Processor. You can specify a query as a passthru query either as part of the SQL syntax for the query (described below) or by setting a parameter to issue all SQL or a specific set of SQL to the backend data source (see Passthru SQL).
Figure B31 non-returning Passthru Statement Syntax

The SQL syntax for a retrieval passthru query is part of the FROM clause:


Figure B32 select Passthru Statement Syntax

Keywords and Options


This section describes the following keywords and options within the PASSTHRU statement:

ds: The data source name for a table as specified in the binding configuration, when this is not the default data source.
Note: ds can be omitted for the default data source. You can specify the default data source as part of the connect string.

<query>: The passthru query.


Notes:

For Rdb, prefix all table names in the query with the data source name specified in the binding configuration.

For HP NonStop platforms, append the words BROWSE ACCESS at the end of the query when specifying a query to an HP NonStop SQL/MP data source (if the query is not within a transaction).

n: The number of parameters included in the passthru query. For example:


TEXT={{insert into t1 values (?, ?)}} (2)

param: Specifies values for the parameters required by the passthru query. The number of values supplied must match exactly the number of parameters defined in the passthru query. If the passthru query has no parameters, no values need to be supplied; however, the parentheses () following the name are still required. If several values are specified, they must be separated by commas. If the parameter value is supplied externally to the query (for example, with the APPEND method on the Parameter object in ADO or using a setXXX method in JDBC), specify a question mark (?) in the parameter value list. You specify parameters positionally, in the order expected by the passthru query. A value may be a literal or an expression. As an extension of standard ANSI 92 SQL, you can specify, as values, expressions involving columns of tables that appeared previously in the FROM list. For example, the following is valid where col1 is a column of table A:
SELECT * FROM A, TEXT={{parameterized passthru query}}(A.col1)

This creates an implicit join between A and the passthru query: for each row of A, the current value of col1 is used to compute a new row based on the passthru query. For this join to be valid, A must appear before the passthru query in the FROM list.

alias: You must supply an alias if the rowset name is to be used as a column qualifier elsewhere in the query.

Note: An alias is supported only for retrieval passthru queries (as part of a SELECT statement).

Example B43 Passthru Statement Code Samples

A non-returning result:

oracle:TEXT={{CREATE TABLE employee (emp_num number(5) NOT NULL, emp_name varchar2(32))}}

As part of a SELECT statement:

SELECT * FROM disam:nation, rdbms:TEXT={{SELECT * FROM customer WHERE c_nationkey = ? AND c_custkey = ?}}(7,100)

Where disam and rdbms are data sources specified in the binding configuration. The SQL to the rdbms data source is issued directly to this data source, bypassing the Query Processor.

Reserved Keywords
The reserved keywords are listed in the following table:
Table B11 Reserved Keywords

ACCESS, ALL, AND, ANY, AS, ASC, AVG, BEGIN, BETWEEN, BY, CALL, CASE,
COMMIT, CONVERT, COUNT, CREATE, CURRENT, DATEADD, DATEDIFF, DELETE,
DESC, DISTINCT, DISTINCTROW, DROP, ELSE, END, ESCAPE, EXEC, EXISTS,
FIRST, FOR, FORCE, FROM, GRANT, GROUP, HASJOIN, HAVING, IN, INDEX,
INDEXSCAN, INNER, INSERT, INTERSECT, INTERVAL, INTO, IS, JOIN, LAST,
LEFT, LIKE, LIMIT, LOJ, MANY_TO_MANY, MANY_TO_ONE, MAX, MIN, MINUS,
MYWHERE, NAV_COVERT, NESTEDJOIN, NOT, NULL, OF, ON, ONE_TO_MANY,
ONE_TO_ONE, OPTIMIZE, OPTIONS, OR, ORDER, OUTER, OUTPUT, PROC,
PROCEDURE, QUERY, RIGHT, ROJ, ROLLBACK, ROWS, SELECT, SEMIJOIN, SET,
SOME, STRING, SUM, SYNONYM, TABLE, TEXT, THEN, TO, TRANSACTION, UNION,
UNIQUE, UPDATE, VALUES, VIEW, WHEN, WHERE, WITH, WITHOUT


C
National Language Support (NLS)
The main aspect of the National Language Support in AIS is the recognition of the different characters associated with a language and the way they are encoded in various operating systems and data sources. For each supported language, a special definition file, called a codepage file, is supplied in which all the language-related information is stored. For complex languages such as Chinese, Japanese and Korean, a special library is also provided where specific conversion rules are implemented. As a distributed product that accesses heterogeneous data sources on varied platforms, AIS offers seamless conversion of text between the different character encodings used on the different platforms. Examples of such automatic conversion include:

Conversion between ASCII-based encoding on open systems and EBCDIC-based encoding on IBM mainframes and AS/400 machines
Conversions to and from Unicode for Java, COM and .NET APIs
Conversions to and from Unicode for databases that store data in Unicode
Conversions between different encodings of the same language used on different platforms
Conversions of legacy data stored using old character encodings (e.g., 7-bit encoding) into the current platform encoding standard

Getting this kind of seamless NLS support requires the proper setting of the codepage definitions according to the kind of encoding in use in the various data sources and platforms. This section discusses the different encoding schemes in use, the codepage definitions required and other NLS related aspects, and contains information on the following topics:

Codepage Terminology
Basic NLS Settings
Globally Setting Language at the System Level
Working with Multiple Languages (UTF Codepage)
NLS and XML and Java Encoding
NLS Support at the Field Level
Special Daemon Language Considerations
Support for 7-Bit Codepages
SQL Functions For Use With Graphic Strings


Codepage Terminology
The following terminology is used to describe codepages.

Single-byte codepages
In a single-byte codepage, each character is represented by a single byte value, that is, a number between 1 and 255, inclusive. Single-byte codepages are typical of Western languages. For example, in the ISO-8859-1 (Latin) codepage, the character 'A' is represented by the single byte value of 65, whereas in the US-EBCDIC codepage (or in the IBM-037 codepage), the same character is represented by the single-byte value of 193.

Multi-byte codepages
In a multi-byte codepage, some or all of the characters are represented by more than one byte value. Multi-byte codepages are typical in complex languages such as Chinese, Japanese and Korean.

Unicode codepages
Unicode is a universal numbering of all known characters, with each character identified by a unique number - its codepoint. Unicode has several encoding schemes, of which AIS supports UTF-8 and, to a lesser extent, UCS-2. Since the product uses 8-bit characters, the only Unicode encoding that qualifies as a 'codepage' is the UTF-8 encoding. The product supports UCS-2 in its Unicode-based APIs (Java and .NET) and in its data sources (via special Unicode data types).

Customized codepages
The NLS support of AIS can be customized to add new languages and codepages not currently supported as well as to introduce special conversion cases. The customization involves editing special codepage source files and building .cp files from them using the NAV_UTIL program. Exact details of how to customize codepages can be found in an e-Resolve knowledge base article on this subject.

Basic NLS Settings


The minimal national language support configuration amounts to telling the product what national language is in use.

To set the language in Studio
1. In the Design perspective, open the machine for which you want to set the language.
2. Open the Bindings list and right-click on the NAV binding.
3. Select Edit Binding.
4. Open the Misc category and fill in the language parameter with the desired language code from NLS Language Codes.
5. Save the change. New servers will use the language selected.

When a language is selected, a default codepage is automatically used based on the language and the platform. The following table summarizes the languages, their codes and their codepages.


Table C1 NLS Language Codes

English US (ENUS): Windows default Windows-1252; ASCII platforms default ISO-8859-15; EBCDIC platforms default IBM1140; alternative codepages (EBCDIC based unless noted otherwise): IBM037, IBM500, IBM1148, IBM1047, ISO-8859-1 (ASCII based).

English UK (ENUK): Windows default Windows-1252; ASCII default ISO-8859-15; EBCDIC default IBM1146; alternatives: IBM285, IBM037, IBM500, IBM1140, IBM1148, IBM1047, ISO-8859-1 (ASCII based).

French (FRE): Windows default Windows-1252; ASCII default ISO-8859-15; EBCDIC default IBM1147; alternatives: IBM297, IBM037, IBM500, IBM1140, IBM1148, IBM1047, ISO-8859-1 (ASCII based).

Latin International (LAT): Windows default Windows-1252; ASCII default ISO-8859-15; EBCDIC default IBM1148; alternatives: IBM500, IBM037, IBM1140, IBM1047, ISO-8859-1 (ASCII based).

Spanish (SPA): Windows default Windows-1252; ASCII default ISO-8859-15; EBCDIC default IBM1145; alternatives: IBM284, IBM037, IBM500, IBM1140, IBM1148, IBM1047, ISO-8859-1 (ASCII based).

German (GER): Windows default Windows-1252; ASCII default ISO-8859-15; EBCDIC default IBM1141; alternatives: IBM273, IBM037, IBM500, IBM1140, IBM1148, IBM1047, ISO-8859-1 (ASCII based).

Portuguese (POR): Windows default Windows-1252; ASCII default ISO-8859-15; EBCDIC default IBM1140; alternatives: IBM037, IBM500, IBM1148, IBM1047, ISO-8859-1 (ASCII based).

Italian (ITL): Windows default Windows-1252; ASCII default ISO-8859-15; EBCDIC default IBM1144; alternatives: IBM280, IBM037, IBM500, IBM1140, IBM1148, IBM1047, ISO-8859-1 (ASCII based).

Greek (GRK): Windows default Windows-1253; ASCII default ISO-8859-7; EBCDIC default IBM875.

Russian (RUS, see footnote 1): Windows default Windows-1251; ASCII default ISO-8859-5; EBCDIC default IBM1154; alternative: IBM1025.

Turkish (TUR, see footnote 2): Windows default Windows-1254; ASCII default ISO-8859-9; EBCDIC default IBM1155; alternative: IBM1026.

Hebrew (HEB): Windows default Windows-1255; ASCII default ISO-8859-8; EBCDIC default IBM424.

Arabic (ARA): Windows default Windows-1256; ASCII default ISO-8859-6; EBCDIC default IBM420.

Japanese (JPN): Windows default SJIS; ASCII defaults: VMS machines - VMS-JP, Sun machines - EUC-JP (Solaris), all non-VMS/Sun machines - SJIS; EBCDIC default IBM1399; alternative: IBM939.

Chinese Simplified (SCHI): Windows default GBK2312; ASCII default GBK2312; EBCDIC default IBM1388.

Chinese Traditional (TCHI): Windows default BIG5; ASCII default BIG5; EBCDIC default IBM937.

Korean (KOR): Windows default MS949; ASCII default EUC-KR; EBCDIC default IBM933.

Footnote 1: Russian users who use ANSI 1251 Cyrillic as their Windows codepage must edit the RUS.TXT file and compile it to RUS.CP using NAV_UTIL CODEPAGE. A RUS.TXT example is shown below.

Footnote 2: To work with solutions in Attunity Studio when using Turkish, add the -nl en switch to the Target path in the Attunity Studio shortcut properties. For example: "C:\Program Files\Attunity\Studio1\studio.exe -nl en"


RUS.TXT Example
The following is an example of the RUS.TXT file that must be included to use the ANSI 1251 Cyrillic code page.

; The setting of MS_WIN_CP depends on the Windows' Russian codepage
; set in the Regional Settings Control Panel screen:
;
; For "1251 - (ANSI - Cyrillic)" use:
;MS_WIN_CP = 1251
;
; For "8595 - (ISO 8859-5 Cyrillic)" use:
MS_WIN_CP = 28595

Globally Setting Language at the System Level


AIS supports an alternative mechanism for setting the default language for an installation. Such a setting may be useful in the following cases:
1. When there are many different binding definitions, this method can save the need to define the language for each one.
2. When upgrading from old versions (before V4.1). In the past, server definitions could be saved in a local language. Currently, server definitions are always saved in UTF-8 encoding. Without this setting, there may be a problem loading an old environment definition, since the server may not recognize its codepage before the loading of the environment is complete.

Note: This option overrides the language and codepage specified in the environment definition, making it impossible to change language settings from Studio. This may interfere with the proper working of the system in some scenarios (e.g., when a particular server workspace needs to be set to use UTF-8 encoding).

To set the language at the system level
Define an environment variable named ACLANG with a value based on the following pattern:
<language>:<codepage>

For example:
FRE:F8EBCDIC297

The environment variable should be defined in an appropriate place, in accordance with the operating system in use.
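For example, to select French with the codepage shown above, the variable might be set as follows (these are standard operating system commands, not AIS syntax; the exact form depends on your shell or system):

Windows: SET ACLANG=FRE:F8EBCDIC297
UNIX (sh/bash): export ACLANG=FRE:F8EBCDIC297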

Working with Multiple Languages (UTF Codepage)


To support multiple languages concurrently, you can choose UTF-8 as your codepage (for details on how, see the procedure To set the language at the system level). Data is then saved as Unicode in UTF-8 encoding.

Note: This option is not supported on EBCDIC platforms. It is also important to remember that if you choose UTF-8 as your codepage for concurrent multiple language support, all string-type data in your database will be stored in UTF-8 encoding.


There may be cases in which some of your existing data is encoded differently, for example with 7-bit encoding. AIS allows you to specify which fields are encoded differently by using the NLS_STRING data type. Data fields of this type are converted from the alternate encoding to the main encoding that you set for your environment (in the procedure To set the language at the system level) when read from the database, and converted back to the alternate encoding when written to the database. In the default configuration, AIS fully supports working with tables whose names and metadata are in the machine codepage, but this configuration is NOT multilingual; that is, only one codepage should be used in all bindings. To work with more than one language (supported on ASCII-based machines only), the NAV environment codepage should be set to UTF-8. In this case, objects (Tables, Procedures, Stored Procedures, Views, and Synonyms) that have names with non-Latin characters are not supported. To change the NAV environment codepage to UTF-8, use nav_util edit env acadmin and set codepage=UTF-8 under Misc.

To define the nlsString codepage
1. In Studio, navigate to the Design perspective.
2. In the Configuration view, right-click the machine with the binding configuration you want to set.
3. Under Bindings, select your binding configuration. Note: NAV is the default binding.
4. Right-click the binding configuration and choose Edit Binding. The environment properties are listed in the Properties tab. nlsString is set under the misc group.
5. In the Value field for the nlsString property, specify the codepage to be used for non-UTF-8 encoding.

NLS and XML and Java Encoding


XML documents declare their encoding in the XML prolog line (the first line of the XML document). For example, the following XML document:
<?xml version="1.0" encoding="ISO-8859-1"?>

declares its encoding to be ISO-8859-1 (ISO Latin). AIS generally accepts its codepage names as valid XML encoding names. However, only codepages of the current language (as configured) are recognized. In addition, common XML alias names are accepted for most codepages. For Java, there is also a known set of encoding names recognized by Java components. The list of common encodings as of version 1.4.2 of the Java 2 Standard Edition can be found at: http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html

NLS Support at the Field Level


Normally, AIS assumes that all string fields are encoded in the system codepages. However, there are cases where some (or even all) the string fields are encoded in a different codepage than the system codepage (e.g., a 7-bit encoding).

To handle such cases, AIS defines a special string type called nls_String. Strings in the database that are defined as nls_Strings are converted automatically to the system codepage when they are read from the database, and are converted back from the system codepage when they are written to the database. To use this data type, change the tables to use this data type rather than STRING, and then tell the product which encoding to use for NLS strings. This involves two steps:
1. Defining the data type of a field as nlsString.
2. Specifying the nlsString parameter of the environment setting.

To define a field's data type as nlsString
Note: A field is defined with a data type of nlsString using Studio, in the Design perspective.
1. In the Configuration view, right-click the data source and select Edit Metadata. The Metadata tab opens with the selected data source displayed in the Configuration view.
2. Select the table that contains the field, right-click and choose Edit.
3. In the Columns tab, select the field and specify nlsString as the data type.

To define the nlsString environment properties
Note: The language and codepage are set using Studio, in the Design perspective.
1. In the Configuration view, click the machine with the binding configuration you want to set.
2. Under Bindings, select the relevant binding configuration.
3. Right-click the binding configuration and select Edit Binding. The environment properties are listed in the Properties tab. The nlsString property is set under the misc group.
4. In the Value field for the nlsString property, specify the name of the codepage and, optionally, a comma and whether the character set reads from right to left (as in Middle Eastern character sets). The default is false (read from left to right).

Examples
Specifying the following in the Value field defines a Japanese EUC 16 bit code page:
JA16EUC

Specifying the following in the Value field defines an Israeli standard 960 7-bit Latin/Hebrew (ASCII 7-bit), where the character set reads from right to left:
IW7IS960,true

Special Daemon Language Considerations


The daemon, whose main task is to allocate servers for requesting clients, does not deal directly with user data. Therefore, the daemon is less sensitive to NLS settings.


A special case exists where the daemon must be set to work in a specific language. This happens when XML-based clients (e.g., JCA, NETACX, COMACX) send XML documents encoded in codepages other than UTF-8, ISO-8859-1 or US-EBCDIC. In such a case, the daemon must be instructed to load the language definition that contains the codepage that appears in the requests. To define the daemon language
Note: The language is set using Studio, in the Design perspective.
1. In the Configuration view, click the machine with the daemon you want to set.
2. Under Daemons, select the daemon configuration.
3. Right-click the daemon configuration and select Edit Daemon.
4. In the Default language field in the Daemon Control tab, choose the language.

Support for 7-Bit Codepages


Certain legacy systems keep data generated decades ago using a special type of encoding called 7-bit encoding. Such encoding uses only 7-bit values (numbers in the range of 0..127) rather than the entire 8 bits of a normal computer byte. The common characteristic of 7-bit codepages is that language-specific characters replace the lower-case Latin characters (for example, in a 7-bit Hebrew encoding, the Hebrew letter Bet uses the same byte value as the Latin letter A). A similar effect exists with some old Northeast Asian codepages (e.g., the Japanese JA16EBCDIC930 codepage) where lower-case Latin characters are replaced with certain national language characters. Codepages where the lower-case Latin characters are folded over cannot be used as the primary codepage for AIS. AIS supports working with data using this encoding by a special data type called nls_String (see NLS Support at the Field Level), which is converted to a regular string in an unfolded codepage.

To define the 7-bit codepage environment setting
Note: The codepage is set using Studio, in the Design perspective.
1. In the Design perspective Configuration view, click the computer with the binding configuration you want to set.
2. Under Bindings, select the relevant binding configuration.
3. Right-click the binding configuration and select Edit binding. The environment properties are listed in the Properties tab. The language and codepage properties are already set. The property to set here is the nlsString property under the misc group.
4. In the Value field for the nlsString property, fill in the 7-bit codepage name. For 7-bit Hebrew, specify IW7IS960 followed by a comma and true, since the character set reads from right to left: IW7IS960,true. For the Japanese JA16EBCDIC930 codepage, specify JA16EBCDIC.


SQL Functions For Use With Graphic Strings


The following string functions are available when using graphic strings:

MBLENGTH
MBPOSITION
MBSUBSTR

These functions use characters instead of bytes when executing the function. In addition, the TO_GRAPHIC data type conversion function is available to convert a single byte string to a double byte graphic string.
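For instance, a sketch using a hypothetical table PARTS_JP with a graphic-string column g_desc:

select MBLENGTH(g_desc), MBSUBSTR(g_desc, 1, 5) from PARTS_JP where MBPOSITION('ABC' IN g_desc) > 0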



D
COBOL Data Types to Attunity Data Types
The following table shows the mapping between COBOL data types and Attunity data types.
Table D1 COBOL to Attunity Data Type Conversion

BINARY (MICROFOCUS flavor, NOIBMCOMP storage mode): with a fractional part, scaled_int or string, according to the number of digits; without a fractional part, int or string, according to the number of digits. See footnote 1 for all int data types.
BINARY (MICROFOCUS flavor, IBMCOMP storage mode): with a fractional part, scaled_int; without, int. (See footnote 1.)
BINARY (other flavors): with a fractional part, scaled_int; without, int. (See footnote 1.)
BINARY-CHAR, BINARY-SHORT, BINARY-LONG, BINARY-DOUBLE: int. (See footnote 1.)
COMP (AS/400 flavor): decimal.
COMP (other flavors): with a fractional part, scaled_int; without, int. (See footnote 1.)
COMP-1: single.
COMP-2: dfloat (double).
COMP-3: decimal. (See footnote 2.)
COMP-4: with a fractional part, scaled_int; without, int. (See footnote 1.)
COMP-5 (MICROFOCUS flavor, NOIBMCOMP storage mode): with a fractional part, scaled_int or string according to the number of digits; without, int or string according to the number of digits. (See footnote 1.)
COMP-5 (MICROFOCUS flavor, IBMCOMP storage mode): with a fractional part, scaled_int; without, int. (See footnote 1.)
COMP-5 (other flavors): with a fractional part, scaled_int; without, int. (See footnote 1.)
COMP-6 (MICROFOCUS flavor, COMP-6 switch set to 1, NOIBMCOMP): with a fractional part, scaled_int or string according to the number of digits; without, int or string according to the number of digits. (See footnote 1.)
COMP-6 (MICROFOCUS flavor, COMP-6 switch set to 1, IBMCOMP): with a fractional part, scaled_int; without, int. (See footnote 1.)
COMP-6 (COMP-6 switch set to 2, other flavors): decimal. (See footnote 3.)
COMP-X (MICROFOCUS flavor): with a fractional part, scaled_int or string according to the number of digits; without, int or string according to the number of digits. (See footnote 1.)
COMP-X (other flavors): with a fractional part, scaled_int; without, int. (See footnote 1.)
FLOAT-SHORT: single.
FLOAT-LONG: dfloat.
FLOAT-EXTENDED: dfloat.
INDEX: int. (See footnote 1.)
SIGN [IS] LEADING: see footnote 3.
SIGN [IS] LEADING SEPARATE [CHARACTER]: see footnote 4.
NATIVE-2 (HP NonStop flavor): int. (See footnote 1.)
NATIVE-4: int. (See footnote 1.)
NATIVE-8: int. (See footnote 1.)
PACKED-DECIMAL: decimal.
POINTER: int. (See footnote 1.)
POINTER-64: int. (See footnote 1.)

Footnote 1: The actual data type can be a signed or unsigned integer, and its size depends on the size that describes the COBOL data type (for example, int1, int3, uint1, uint4).
Footnote 2: A PIC clause that does not contain the format character (S), (+), or (-) maps to unsigned_decimal when the COBOL flavor is z/OS; otherwise, it maps to decimal.
Footnote 3: A PIC clause that contains the format character (.) maps to string. A PIC clause that does not contain the format character (.) maps to numstr_lse when the COBOL flavor is HP NonStop; all other COBOL flavors map to numstr_nlo.
Footnote 4: A PIC clause that contains the format character (.) maps to numstr_bdn. A PIC clause that does not contain the format character (.) maps to numstr_nl.

When mapping COBOL data types not listed in the above table, Attunity Connect maps PIC clauses containing the format characters X, B, A, or N to string. To determine the size of the Attunity string, allow one character for each COBOL X, B, or A format character and two characters for each COBOL N format character. All other data types that define only a PIC clause are mapped according to the following rules:

A PIC clause containing a format character (+), (-), or (S):
If the PIC clause contains the format character (.), it maps to string.
If it does not contain the format character (.), it maps to numstr_tse when the COBOL flavor is HP NonStop; other COBOL flavors map to numstr_s.

A PIC clause not containing a format character (+), (-), or (S):
If the PIC clause contains the format character (.), it maps to string.
If it does not contain the format character (.), it maps to numstr_u.
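To illustrate these rules with a few representative (hypothetical) declarations: PIC X(10) maps to a string of 10 characters; PIC N(5) maps to a string of 10 characters (two per N format character); a packed field such as PIC S9(5)V99 COMP-3 maps to decimal; and an unsigned zoned field such as PIC 9(4), which contains none of the format characters (S), (+), (-), or (.), maps to numstr_u.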



E
Editing XML Files in Attunity Studio
In many cases you must manually edit the metadata to configure parts of a solution or composition. Metadata is created in XML format. You define aspects of a solution by changing the values of the elements and attributes of the XML files that belong to the solution. Attunity Studio provides a graphical interface where you can define the various aspects of a solution. This interface lets you make changes easily without having to manually edit the XML file.

Preparing to Edit XML Files in Attunity Studio


You can edit XML files for the following items in Attunity Studio:

Machines, for more information see Setting up Machines.
Bindings, for more information see Setting up Bindings in Attunity Studio.
Daemons, for more information see Defining Daemons at Design Time.
Users, for more information see User Profiles and Managing a User Profile in Attunity Studio.

When you open an XML file, a graphical representation of the file is opened in the editor. The editor displays the elements and attributes in the file in the first column and their corresponding values in the second column. Each entry has an icon that indicates whether the entry is an element or an attribute. Click the Source tab to view the file in its native format. The following figure is an example of the editor's view of an XML file.
Figure 901 XML Graphical Display


To edit an XML file in Attunity Studio
1. In the Design perspective, open the Navigator view.
2. In the Navigator view, find the item with the XML file that you want to edit. This can be a machine, binding, daemon, or user.
3. Right-click the item and select Open as XML. A graphical list of the file's elements and attributes opens in the editor.
4. Find the element or attribute (property) that you want to change.
5. Click in the right column next to the property you are changing and edit or add the value.
6. Save the file, then select it again in the Project Explorer and press F5 to refresh. The XML file is updated automatically.

Making Changes to the XML File


You can also make the following changes to XML files in Attunity Studio:

Remove Objects
Add DTD Information
Edit Namespaces
Add Elements and Attributes
Replace an Element

Remove Objects
You can delete an element, attribute, or other object from the XML file.

To remove an object
1. Right-click an object from the list in the editor.
2. Select Remove.

Add DTD Information


You can add DTD information to an element or attribute.

To add DTD information
1. Right-click an element or attribute and select Add DTD Information. The Add DTD Information dialog box opens.


Figure E1 Add DTD Information Dialog Box

2. Enter the information requested in the dialog box. The following table describes the Add DTD Information dialog box.

Table E1 Add DTD Information

Root element name: The name of the XML root element.

Public ID: The value in this field is the Public Identifier. It is used to associate the XML file (using an XML catalog entry) with a DTD file by providing a hint to the XML processor. Click Browse to select an XML catalog entry from a list. An XML catalog entry contains two parts: a Key (which represents a DTD or XML schema) and a URI (which contains information about a DTD or XML schema's location). Select the catalog entry you want to associate with your XML file.

System ID: The value in this field is the DTD the XML file is associated with. You can change the DTD the file is associated with by editing this field. The XML processor will try to use the Public ID to locate the DTD, and if this fails, it will use the System ID to find it. Click Browse to select a system ID. You can do this in two ways: select the file from the workbench (in this case, update the field with the import dialog box), or select an XML catalog entry.

3. Save the file, then select it again in the Project Explorer and press F5 to refresh. The XML file is updated automatically.

Edit Namespaces
You can make changes to the namespaces associated with an element or attribute.

To edit namespaces
1. Right-click an element or attribute and select Edit namespaces. The Edit Schema Information dialog box opens.


Figure E2 Edit Schema Information

2. Click one of the buttons to make any changes to this information.

To add a new namespace
1. From the Schema Information dialog box, click Add. The Add Namespace Definitions dialog box opens.
2. Select one of the following:

Select from registered namespaces: This selection is available when the dialog box opens. Select from the list of registered namespaces and then click OK. If no registered namespaces are available, the list is empty.

Specify new namespace: Enter the information described in the following table.

Table E2 New Namespace

Prefix: The prefix is added to all qualified elements and attributes in the XML file.

Namespace Name: The namespace of the XML file.

Location Hint: The location of the XML schema of the XML file. An XML Catalog ID or a URI can be entered in this field. Click Browse to search for the schema you want. You can do this in two ways: select the schema from the workbench (in this case, update the field with the import dialog box), or select an XML catalog entry. The Namespace Name and Prefix fields are filled with the appropriate values from the schema (you must leave the fields blank for this to occur).

Note: If you are creating an XML file from an XML schema, you cannot change the Namespace Name or Location Hint values.

To edit a namespace
1. From the Schema Information dialog box, click Edit.
2. Enter the information in the fields.


Add Elements and Attributes


You can add additional elements and attributes to the XML file.

To add elements and attributes
1. Right-click an element.
2. Select one of the following:
Add Attribute, to add an attribute under the selected element.
Add Child, to add another element under the selected element.
Add Before, to add another element above the selected element.
Add After, to add another element below the selected element.
Note: The InFocus Design Studio XML editor is context-sensitive to InFocus schemas. This means that when adding elements and attributes to an XML file with an InFocus schema, you can select an element or attribute from a list of the possible values (depending on the schema definition). This list is available as a submenu.

3. Provide a name for the element or attribute if required. You may also be able to select the element from a submenu. The element or attribute is added to the file.
4. Save the file, then select it again in the Project Explorer and press F5 to refresh. The XML file is updated automatically.

Replace an Element
You can replace an element with another legal element.

To replace an element
1. Right-click an element from the list in the editor.
2. Select Replace with.
3. Select an element from the submenu. Only legal elements are available. The original element is replaced with the selected element.


Glossary
ACX
Attunity XML protocol. ACX is used as the network protocol between the application connectivity client (e.g. .NET or JCA program) and the AIS server. The verbs and the concept of the ACX protocol are the foundation of Attunity's application connectivity.

Adapter Binding
An instance of an Application Adapter in the Binding. The relationship between the application adapter and the adapter binding is the same as the one between a data source Driver and the Data Source definition in the binding. In Attunity terminology, often used interchangeably with Application Adapter or simply adapter.

Adapter Definition
The Metadata object that corresponds to an Application Adapter instance.

ADD
Attunity Data Dictionary, which includes Metadata for Data Sources and Applications.

Agent
A special type of Application Adapter that provides the changed information for CDC. For example, a DB2 CDC agent is the software component that reads changes from the DB2 log and provides them to a CDC consumer.

Application
Legacy software within an enterprise to which connectivity is required for various purposes.

Application Adapter
A software component that provides connectivity to an application. Adapters are a part of the AIS application connectivity framework, which includes the ACX protocol and interfaces like JCA, .NET, COM and XML. They are analogous to Data Source Drivers in the data access sphere.

ATMI
Application to Transaction Monitor Interface


ATMI supports a programming interface that offers procedural library-based programming using a set of C or COBOL procedures. ATMI also provides an interface for communication, transaction, and data buffer management. The ATMI interface and BEA Tuxedo system implement the X/Open distributed transaction processing (DTP) model for transaction processing.

Backend Database
The Data Source to which you connect via Attunity Connect. In a client-server data access scenario, the server database being accessed is commonly referred to as the backend database.

Binding
An assemblage of Application Adapters and Data Sources. Every Workspace has a definition as to which binding it works with. Bindings include data sources, application adapters and environments/Events, with the first two being the most important.

BLOB
A BLOB (Binary Large Object) is a large file, typically an image or sound file, that must be handled in a special way because of its size.

Captured Data Source
The data source being monitored for changes.

CDC
Change Data Capture. Enables updates to tables to be captured for additional processing. The change capture mechanism polls the database using a specified query and, when a change is encountered that meets initial criteria, the relevant data is written to an Event Queue, where it can then be further processed.

Change Data Source
A data source where change records from the CDC agent are stored.

Change Router
A server component that reads events from a CDC agent on a back-end and fills them up in a Staging Area.

Change Table (Change File)
The tables in a change data source. There is one change table per captured table in the captured data source.

Chapter
A chapter is an OLE DB term, denoting a group of rows within a hierarchical rowset. The chapter constitutes a collection of children of some row and column in a parent rowset (and is meaningful only in the context of the parent rowset). The column in the parent rowset is called a chapter column and contains a chapter identifier. The column's name is also the name identifying the child rowset, which is meaningful only in the context of the parent rowset. An ADO Recordset is equivalent to an OLE DB chapter. In ADO you refer to a child Recordset. In ODBC and JDBC, Attunity Connect provides functionality to enable support for chapters.


ChapView
A sample program provided with AIS. It is installed on Windows. Intended to demonstrate the use of Chapters within ADO programs, but also useful to test SQL statements against AIS Data Sources.

Client Interface
A set of functions, classes, interfaces and rules through which an Application communicates with a software component. AIS supports standard client interfaces for applications, namely ODBC, OLEDB/ADO, JDBC, JCA, and ACX.

Client Machine
A client machine is a machine running an application that calls Attunity Connect via one of its standard Client Interfaces such as ODBC, OLEDB/ADO, JDBC, JCA or ACX. A Machine that can supply data to other machines is called a server, and a machine that requests data from a server is a client. Note that a Server Machine can also be a client of other servers.

CLOB
A CLOB (character large object) value can be up to two giga-characters long. A CLOB is used to store Unicode character-based data, such as large documents in any character set. The length is given in number of characters (Unicode), unless one of the suffixes K, M, or G is given, relating to the multiples of 1024, 1024*1024, and 1024*1024*1024 respectively.

Connect String
A string containing connection properties that are passed to a Client Interface such as ODBC, OLEDB/ADO, JDBC, JCA, and ACX when creating a data or application access connection. There are various types of connection strings, depending on the client interface you are trying to use and the target system. Attunity Connect defines connection strings for the client interfaces that it supports.

Connection Pooling
A cache of connections maintained in memory so that the connections can be reused when future requests are received. Connection pooling is handled by the Daemon, by setting a number of parameters, including the server mode (making the Server Process reusable) and the number of available servers.

Daemon
The daemon's main function is to house the assemblage of Workspaces. A series of daemons can be made available. The default is called IRPCD. Workspaces are underneath daemons in the Attunity Studio tree.

Data Source
Data access within AIS is divided into data sources. A data source can be of type Oracle, VSAM, etc. The division into data sources is sometimes defined by the Backend Database; for example, connecting to a specific Oracle database. In other cases, the division into data sources is more a design decision, for example in the grouping of logically related VSAM files.


Data Type
A data type in a programming language is a set of data with values having predefined characteristics. Examples of data types are: integer, floating point unit number, character, string, and pointer. Usually, a limited number of such data types come built into a language. The language usually specifies the range of values for a given data type, how the values are processed by the computer, and how they are stored.

Database Adapter
A special Application Adapter in which the underlying Application is the AIS Query Processor.

Driver
A software component that connects to a particular data source interface. For example, an Oracle driver is the software component that translates internal AIS calls to the OCI interface. In Attunity terminology, often used interchangeably with Data Source.

ETL
Extract, Transform, Load. Three database functions that are combined into one tool to pull data out of one Data Source and place it in another.

Event
An interaction of type async-send. It can be used in Event Queues and in CDC.

Event Queue
A software component that provides persistent storage of Events to allow delivery to consuming applications in a decoupled manner.

Extended ADD
An ADD repository used in conjunction with a non-ADD-based data source. As such, it does not store the Metadata itself, but rather augments information. For example, Extended ADD is commonly used for storing cardinality information for NonStop SQL and Adabas-Predict. Another example is virtual tables or virtual views for arrays within Adabas-Predict.

File Pool
Caching of file handles in a pool to reduce the amount of open/close operations on files. File pools are used in conjunction with File-system Data Sources, such as RMS, Enscribe, DISAM, etc.

File-system Data Source
In AIS terms, any data source that does not accept SQL commands is considered a file-system data source. Some examples include Adabas, DISAM, VSAM, DBMS, etc. See also Relational Data Source.

Interaction
The most basic unit of activity against an Application. An Adapter Definition is comprised of a set of interactions and the Schemas that describe their input and output.


IRPCD
The default Daemon.

Isolation Level
Describes the degree to which the data being updated is visible to other transactions.

Lock
A lock is a mechanism for controlling access to something. In databases, locks are often used so that multiple programs, or threads of a program, can share a resource (for example, access to a file for updating it) on a one-at-a-time basis. Typically, a lock is of temporary duration; when the resource is no longer required, it is freed for locking and use by the next sharer in a queue.

LOJ
Left Outer Join. An SQL operation that requests data from two tables, returning the columns that are common to both tables together with all the columns from the first (left) table. In this manual, any outer join is described by the term LOJ. In a right outer join, the common columns and those in the second table are returned. An inner join operation returns only the common columns. (For an SQL example, see the sketch following the NAV.SYN entry below.)

Machine
A computer system being used as either the client or server in a data access, application access, or CDC scenario.

Metadata
The term metadata is used in conjunction with both data access and application access scenarios. In data access, metadata comprises table definitions, including columns, datatypes, indexes, cardinality, file locations, and so on. In application access, metadata comprises interactions, input records, output records, and their respective columns and datatypes. Metadata can either be generated and stored within AIS (see ADD), or be dynamically retrieved from any backend system that provides metadata. An Oracle database is an example of a backend system that holds its own metadata; a VSAM data source is an example of a backend system that uses the AIS data dictionary (ADD) to store metadata.

Native File System
Some AIS-supported platforms include a file system that is native to the machine. Examples of such native file systems are VSAM for MVS, RMS for OpenVMS, and Enscribe for HP NonStop. AIS uses the native file system of the machine whenever possible, for example, for physical NOS storage and NAVDEMO files. On machines that do not have a native file system, AIS uses DISAM files.

NAV
The name of the default Binding.

NAV.SYN
The external AIS syntax file. When accessing Relational Data Sources, the Attunity Query Processor generates SQL in the syntax of the backend Relational Data Source. To do so, AIS maintains a set of internal syntax definitions for all supported relational data sources. The NAV.SYN syntax file provides a means of creating new syntax definitions. You can control the way the SQL is generated in the following circumstances:


- When the features supported by the version of the Backend Database differ from the support provided by Attunity Connect for that database.
- When the backend data source is accessed using either the ODBC or OLESQL generic Drivers. Because the backend database behind a generic driver may support any flavor of SQL, the set of SQL features sent by default to the backend data source is minimal: only those features that are normally supported by all relational data sources.
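To illustrate the LOJ entry above, the following generic SQL sketch (the emp and dept tables and their columns are hypothetical) contrasts a left outer join with an inner join. The first query returns every emp row, with NULLs in the dept columns where no matching department exists; the second returns only the matching rows:

select e.name, d.dept_name
from emp e left outer join dept d on e.dept_id = d.dept_id

select e.name, d.dept_name
from emp e inner join dept d on e.dept_id = d.dept_id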

NAV_UTIL
The AIS command line interface, with its own collection of commands, including troubleshooting utilities and Metadata utilities. All of the commands run from the NAV_UTIL Command Line Utility (or NAVCMD on IBM z/OS systems).

NAVDEMO
Every AIS installation includes a NAVDEMO data source, which can be used for installation verification or for demonstration purposes. The tables in the NAVDEMO data source are based on the TPC-D standard tables and are implemented using the machine's native file system.

Navigator
The name of the default Workspace.

NAVROOT
The root directory where AIS is installed. An environment variable named NAVROOT points to this directory.

NOS
Native Object Store. The physical persistent storage for all AIS definitions, including Bindings, Daemons, ADD Metadata, Adapter Definitions, and so on. These are all stored in NOS files. Every AIS installation has at least one NOS repository, called SYS. The SYS NOS stores all definitions except ADD metadata, which is stored in separate NOS repositories. NOS is implemented over the Native File System of each platform.

Ownership
Ownership in relational databases relates to the three-part standard table naming: catalog.owner.table. The owner usually represents the database user who created the table. For example, if the user scott creates a table emp, the table is really called scott.emp (scott can refer to this table as just emp, but other users must prefix the table name with the owner).

Passthru
Passthrough (or Passthru) is not a relational database term. It is an AIS term that usually means that the SQL statement provided by the user is passed as-is to the underlying database, without being parsed by Attunity Connect. Passthru is used when you want to use a special feature or syntax of the underlying database that Attunity Connect does not support. Passthru mode can be associated with a connection or can be used with a special SQL syntax:
select * from datasource:text{{ some database specific query }}
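For example, based on the syntax above, an Oracle-specific optimizer hint could be passed through unparsed as follows (the data source name ora1, the emp table, and the hint are hypothetical):

select * from ora1:text{{ select /*+ FULL(emp) */ * from emp }}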

Predict
The data dictionary that is used in conjunction with Adabas.


Prestarted Server
A prestarted server is a process that is started when the Daemon starts and is kept in a pool. Prestarted servers are immediately available for use by new client processes, saving initialization time. Instead of starting a new Server Process each time one is requested by a client, the client receives a process from the pool of available processes. When the client finishes processing, this server process either dies or, if reusable servers have been set, is returned to the pool of available servers.

Process
In Attunity terminology, a process is an execution context in which program code runs. You can have more than one process running the same program. In MVS terminology, a process is equivalent to a task.

Query Processor
An AIS software component at the core of every data access scenario. The query processor accepts SQL requests from client applications and tools and works in conjunction with the Query Optimizer to devise and implement an access strategy for executing the SQL request. The query processor is distributed by design, so that an SQL request may be implemented by a group of query processors on several machines working in tandem.

Query Optimizer
An AIS software component that is used to choose an efficient access strategy for a given client SQL request. The AIS Query Optimizer is a cost-based optimizer: it attaches a cost to every potential access strategy according to indexes, cardinality, and so on, and then chooses the access strategy with the lowest cost.

Referential Integrity
Referential integrity is a feature provided by relational database management systems (RDBMS) that prevents users or applications from entering inconsistent data. Most RDBMS have various referential integrity rules that you can apply when you create a relationship between tables.

Relational Data Source
In Attunity terminology, relational data sources are any Backend Database that supports SQL. The most common such data sources are Oracle, SQL Server, and DB2.

Repository
AIS holds information internally in the repository. There are two types of repository: a general repository and a repository per Data Source.

- General repository: Also referred to as the SYS repository. Stores all AIS definitions, such as Bindings, Daemons, Adapter Definitions, User Profiles, remote machines, and so on.
- Data source repository: A data source repository can exist for every Data Source. For ADD-based data sources, this repository includes full metadata describing the tables, columns, indexes, and so on. For non-ADD-based data sources (for example, relational data sources), the repository can include extended information, such as cardinality. Such a repository is referred to as Extended ADD.

Resource Manager
A software component that is responsible for updating and working with resources such as databases and remote systems.


Reusable Server
Once the client processing finishes, the Server Process does not die and can be used by another client, reducing startup times and application startup overhead. This mode does not have the high overhead of Single Client mode, since the servers are already initialized. However, it may use a lot of server resources, since it requires as many server processes as there are concurrent clients.

Schema
Part of an adapter definition. The schema is a collection of record definitions referenced by interactions within the Adapter Definition.

Server Machine
A Machine that can supply data to other machines is called a server. The Query Processor and the inter-machine communications components are generally present on every machine in an AIS network. The Attunity Connect interface programs to the specific Data Sources and Applications generally reside on the same machine as the respective data sources and applications.

Single Client
Each client receives a dedicated Server Process. The account in which a server process runs is determined either by the client login information or by the specific server Workspace. This mode enables servers to run under a particular user account and isolates clients from each other (since each receives its own process). However, this server mode incurs a high overhead due to process startup times and may use a lot of server resources, since it requires as many server processes as there are concurrent clients.

SQL
SQL (Structured Query Language) is a standard interactive and programming language for getting information from and updating a database. Although SQL is both an ANSI and an ISO standard, many database products support SQL with proprietary extensions to the standard language. Queries take the form of a command language that lets you select, insert, update, find the location of data, and so forth. There is also a programming interface.

Staging Area
An AIS software component that temporarily stores change events that are read from a journal, making them available for consumption by SQL and XML clients. The staging area is always located on a UNIX or Windows platform machine.

Stored Procedure
A precompiled collection of Transact-SQL statements, stored under a name and processed as a unit, that you can call from within another Transact-SQL statement or from a client application. (A sketch follows the Stream Context entry below.)

Stream
A sequence of change events that are read from a given Data Source. See also CDC.

Stream Context
See Stream Position.
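As an illustration of the Stored Procedure entry above, a minimal Transact-SQL sketch (the procedure, table, and column names are all hypothetical):

create procedure get_orders @cust_id int
as
    select order_id, order_date
    from orders
    where customer_id = @cust_id

A client application, or another Transact-SQL statement, can then call the procedure as a unit:

exec get_orders @cust_id = 42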


Stream Position
A reference to a specific change event in a Stream. See also CDC.

Sync Point
A Stream Position of the last change event in a committed Transaction. A sync point is considered a safe point at which to pause in the processing of a change stream. It is typically used to synchronize the reading of change events of different tables; changes from all tables are read up to the selected sync point.

Transaction
A set of operations that, as a group, either succeed or fail.

Transaction Manager
A software component that coordinates the committing of Transactions across multiple Resource Managers.

Trigger
A set of Structured Query Language (SQL) statements that automatically "fires off" an action when a specific operation, such as changing data in a table, occurs. A trigger consists of an event (an INSERT, DELETE, or UPDATE statement issued against an associated table) and an action (the related procedure). Triggers are used to preserve data integrity by checking or changing data in a consistent manner.

Two-phase Commit
Also known as 2PC. A protocol for reliably committing changes across multiple systems, even in the case of network failures or node failures. In the protocol, a Transaction Manager calls each participating Resource Manager, first to prepare to commit the changes, and then to actually commit them. These two phases allow the transaction manager to obtain a commitment from all participating resource managers that they will be able to actually commit the changes. Once all participating resource managers agree to commit, the transaction manager asks each one to commit.

User
The person or program acting as a client.

User Profile
The user profile enables you to provide user name/password pairs for accessing Applications, Data Sources, and Server Machines from the Client Machine at runtime, without being prompted to supply this information. Upon installation, a default user profile called NAV is created. This user profile is used whenever a client connects without providing a user name.

Virtual Table
One of the ways that Attunity Connect exposes an array for SQL access. Using virtual tables, you can view arrays within a table as if they were separate tables. Virtual tables have full SQL functionality. (An example follows this entry.)
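To illustrate the Virtual Table entry above, suppose a hypothetical table customer contains a phone-number array that is exposed as a virtual table named customer_phones; the array rows can then be queried with ordinary SQL:

select c.name, p.phone_number
from customer c, customer_phones p
where c.cust_id = p.cust_id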


Virtual View
One of the ways that Attunity Connect exposes an array for SQL access. Virtual views enhance performance on joins between a table and an array within it by implementing the join at retrieval time rather than at query processing time.

Workspace
A logical pool of servers. A server is the smallest service unit managed by the Daemon, and every server belongs to a specific workspace. What a workspace does depends on its Binding. A workspace defines the server processes and environment that are used for the communication between the client and the Server Machine for the duration of the client request. A workspace definition includes the Data Sources and Application Adapters that can be accessed, as well as various environment variables.


Index

A
ACX, Glossary Def, F-1
ACX Protocol, 15-5
Adapter
  Database, 69-1
  Query, 70-1
  Tuxedo, 68-1
Adapter Binding, Glossary Def, F-1
Adapter Definition, Glossary Def, F-1
ADD, Glossary Def, F-1
ADD Syntax, 5-23
adding a daemon, 4-2
adding workspaces, 4-12
Agent, Glossary Def, F-1
aggregate functions, HAVING clause, B-44
AIS, backing up, 29-1
AIS Server data, backing up and restoring, 29-3
Application, Glossary Def, F-1
Application Access, 15-2
  Flow, 15-4
Application Access Flow, 15-4
Application Access Solution, 15-1
Application Adapter, 15-4
  CICS, 63-1
  COM, 64-1
  Definition, 15-4
  HP NonStop Pathway, 67-1
  IMS/TM, 65-1
  Legacy Plug, 66-1
  Metadata Example, 15-4
  Tuxedo, 68-1
application adapter, defaulttdp, 70-16
  Glossary Def, F-1
  See also CDC
Application Adapter, Glossary Def, F-1
Application Adapter Definition
  Adapter Element, 17-2
  Enumeration Element, 17-5
  Field Element, 17-7
  Interaction Element, 17-3
  Overview, 17-1
  Record Element, 17-5
  Schema Element, 17-4
  Variant Record Element, 17-6
Array Handling, Overview, 7-1
Arrays, 7-1
ATMI, Glossary Def, F-1
Attunity Data Dictionary, Glossary Def, F-1
Attunity Studio, backing up and restoring metadata, 29-3
Attunity XML Protocol, Glossary Def, F-1
Axis servlet, 9-3

B
Backend Database, Glossary Def, F-2
Backing up, 29-1
  AIS Server data, 29-3
  AIS Server installation, 29-1
  AIS Server metadata, 29-2
  Attunity Studio metadata, 29-3
  server scripts, 29-3
Basic Import Utility, Using the BASIC import utility, 38-1
Binding, 3-1
  Glossary Def, F-2
  Client
  Configuration
  Properties
  Sample
  Server
  Syntax
binding configuration, selecting the, 4-24
Binding Configuration, 3-1
Binding Syntax, 3-6
BLOB, Glossary Def, F-2

C
Captured Data Source, Glossary Def, F-2
CDC, Glossary Def, F-2
  Agent components, 20-2
Change Data Capture, Glossary Def, F-2
Change Data Source, Glossary Def, F-2
Change File, Glossary Def, F-2
Change Router, Glossary Def, F-2
Change Table, Glossary Def, F-2
Chapter, Glossary Def, F-2
ChapView, Glossary Def, F-3
CICS Application Adapter, 63-1
  Configuration Properties, 63-3
  Configuring, 63-4
  Defining, 63-3
  Installing, 63-3
  Overview, 63-1
  Setting up metadata, 63-5
  Transaction Support, 63-2
CICS Procedure, Data Types, 62-6
CICS Procedure Data Source, 62-1
  Configuration Properties, 62-2
  Configuring, 62-7
  Defining, 62-6
  Installing, 62-6
  Security, 62-5
  Setting up metadata, 62-8
  Transaction Support, 62-4
CISAM, Data Types, 41-2
CISAM Data Source, 41-1
  Configuration Properties, 41-1
  Configuring, 41-4
  Defining, 41-3
  Installing, 41-3
  Setting up metadata, 41-5
Client Binding, 3-2
Client Interface, Glossary Def, F-3
Client Machine, Glossary Def, F-3
CLOB, Glossary Def, F-3
COM Application Adapter, 64-1, 64-5
  Data Types, 64-1
  Defining, 64-3
  Defining Interactions, 64-3
  Overview, 64-1
  Registering the application, 64-3
Configuration, Binding
Configuration Parameters
  CICS Procedure Data Source, 62-2
  Database Adapter, 69-2
  HP NonStop Pathway Application Adapter, 67-2
  IMS/TM Application Adapter, 65-2
  Legacy Plug Application Adapter, 66-1
  Procedure Data Source (Application Connector), 61-2
Configuration Properties
  CICS Application Adapter, 63-3
  OLEDB-SQL Data Source driver, 50-4
  Tuxedo Application Adapter, 68-2
Configuring
  CICS Application Adapter, 63-4
  CICS Procedure Data Source, 62-7
  CISAM Data Source, 41-4
  Database Adapter, 69-5
  DB2 Data Source, 40-10
  DBMS Data Source, 42-22
  DISAM Data Source, 41-4
  Enscribe Data Source, 43-5
  HP NonStop Pathway Application Adapter, 67-3
  IMS/DB DBCTL Data Source, 45-14
  IMS/DB DBDC Data Source, 45-16
  IMS/DB DLI Data Source, 45-12
  IMS/TM Application Adapter, 65-3
  Ingres Data Source, 47-7
  Legacy Plug Application Adapter, 66-2
  ODBC Data Source, 48-8
  Oracle RDB Data Source, 52-11
  Procedure Data Source (Application Connector), 61-10
  RDBSQL Data Source, 52-11
  RMS Data Source, 53-4
  Sybase Data Source, 56-6
  Text Delimited File Data Source, 57-2
  Virtual Data Source, 58-4
Connect String, Glossary Def, F-3
Connecting the Axis servlet, 9-3
Connection Pooling, Glossary Def, F-3
creating procedure metadata, 6-37
Creating SQL Queries for the Database Adapter, 69-21

D
Daemon, Glossary Def, F-3
daemon configuration, 4-10, 4-11
daemon status, 4-10
daemons, 4-1
  adding, 4-2
  configuration, 4-11
  definition, 4-1
  editing, 4-3
  shutting down, 4-11
  status, 4-10
Data Source, Glossary Def, F-3
  Installing, 52-10
Data Source Metadata, 5-1
  ADD Supported Data Types, 5-15
  ADD Syntax, 5-23
data source metadata, 6-1
Data Sources
  CICS Procedure, 62-1
  CISAM, 41-1
  DB2, 40-1
  DBMS, 42-1
  DISAM, 41-1
  Enscribe, 43-1
  IMS, 45-1
  Ingres, 47-1
  ODBC, 48-1
  OLEDB-FS, 49-1
  OLEDB-SQL, 50-1
  Oracle RDB, 52-1
  Procedure (Application Connector), 61-1
  RDBSQL, 52-1
  RMS, 53-1
  Sybase, 56-1
  Text Delimited File, 57-1
  Virtual, 58-1
Data Type, Glossary Def, F-4
Data Types
  CICS Procedure, 62-6
  CISAM, 41-2
  COM Application Adapter, 64-1
  DB2 Data Source, 40-7
  DBMS, 42-3
  DISAM, 41-2
  Ingres, 47-5
  ODBC, 48-4
  OLEDB-FS, 49-3
  OLEDB-SQL, 50-3
  Oracle RDB, 52-9
  Procedure Data Source (Application Connector), 61-7
  RDBSQL, 52-9
  RMS, 53-3
  Sybase, 56-3
  Tuxedo Application Adapter, 68-3
Database Adapter, 69-1, Glossary Def, F-4
  Configuration Properties, 69-2
  Configuring, 69-5
  Configuring Interactions, 69-5
  Creating SQL Queries, 69-21
  Defining, 69-4
  Installing, 69-4
  Interaction Parameters, 69-4
  Metadata, 69-2
  Overview, 69-1
  Transaction Support, 69-4
Databases, Virtual, 23-1
DB2 Data Source
  Configuration Properties, 40-2
  Configuring, 40-10
  Data Types, 40-7
  Defining, 40-8
  Functionality, 40-1
  Security, 40-7
  Setting up metadata, 40-4
  Transaction Support, 40-4
DBMS, Data Types, 42-3
DBMS Data Source, 42-1
  Configuration Properties, 42-3
  Configuring, 42-22
  Defining, 42-21
  Error Codes, 42-15
  Installing, 42-21
  Setting up metadata, 42-23
  Virtual Columns, 42-5
Defining
  CICS Application Adapter, 63-3
  CISAM Data Source, 41-3
  COM Application Adapter, 64-3
  Database Adapter, 69-4
  DB2 Data Source, 40-8
  DBMS Data Source, 42-21
  DISAM Data Source, 41-3
  Enscribe Data Source, 43-4
  HP NonStop Pathway Application Adapter, 67-2
  IMS/DB DBCTL Data Source, 45-13
  IMS/DB DBDC Data Source, 45-16
  IMS/DB DLI Data Source, 45-11
  IMS/TM Application Adapter, 65-3
  Ingres Data Source, 47-6
  Legacy Plug Application Adapter, 66-2
  ODBC Data Source, 48-6
  OLEDB-FS Data Source, 49-4
  OLEDB-SQL Data Source, 50-4
  Procedure Data Source, 61-10
  RDBSQL Data Source, 52-10
  RMS Data Source, 53-3
  Sybase Data Source, 56-5
  Text Delimited File Data Source, 57-2
  Virtual Data Source, 58-3
Defining a Virtual Database
Defining an adapter as a Web service, 9-3
Defining Data Types, 64-5
design time, 4-1
disabling a workspace, 4-25
DISAM, Data Types, 41-2
DISAM Data Source, 41-1
  Configuration Properties, 41-1
  Configuring, 41-4
  Defining, 41-3
  Installing, 41-3
  Setting up metadata, 41-5
Driver, Glossary Def, F-4
Drivers
  CICS Procedure, 62-1
  CISAM, 41-1
  DB2, 40-1
  DBMS, 42-1
  DISAM, 41-1
  Enscribe, 43-1
  IMS, 45-1
  Ingres, 47-1
  ODBC, 48-1
  OLEDB-FS, 49-1
  OLEDB-SQL, 50-1
  Oracle RDB, 52-1
  Procedure (Application Connector), 61-1
  RDBSQL, 52-1
  RMS, 53-1
  Sybase, 56-1
  Text Delimited File, 57-1
  Virtual, 58-1

E
editing a daemon, 4-3
  Daemon Control tab, 4-3
  Daemon Logging tab, 4-5
  Daemon Security tab, 4-8
editing workspaces, 4-15
Enscribe Data Source, 43-1
  Configuration Properties, 43-2
  Configuring, 43-5
  Defining, 43-4
  Installing, 43-4
  Setting up metadata, 43-6
Environment Properties, 3-12
Error Codes, DBMS Data Source, 42-15
ETL, Glossary Def, F-4
Event, Glossary Def, F-4
Event Queue, Glossary Def, F-4
Extended ADD, Glossary Def, F-4
Extended Metadata, 5-4
Extract, Transform, Load, Glossary Def, F-4

F
File Pool, Glossary Def, F-4
Data Source (File-system), Glossary Def, F-4
File-system Data Source, Glossary Def, F-4

G
Glossary, F-1

H
Handling Arrays, 7-1
HAVING clause, aggregate functions, B-44
HP NonStop Pathway Application Adapter, 67-1
  Configuration Parameters, 67-2
  Configuring, 67-3
  Defining, 67-2
  Installing, 67-2
  Overview, 67-1
  Setting up metadata, 67-4
  Transaction Support, 67-1

I
import, COBOL, 6-14
Importing Metadata
  Using a Standalone Utility, 5-3
  Using Attunity Studio, 5-2
importing metadata, 6-14, 43-7
Importing Procedure Metadata, 5-5
importing procedure metadata, 6-37
IMS Data Source, 45-1
  Configuration Properties, 45-7
  Setting up metadata, 45-18
IMS/DB DBCTL Data Source
  Configuring, 45-14
  Defining, 45-13
  Installing, 45-13
IMS/DB DBDC Data Source
  Configuring, 45-16
  Defining, 45-16
  Installing, 45-16
IMS/DB DLI Data Source
  Configuring, 45-12
  Defining, 45-11
  Installing, 45-11
IMS/TM Application Adapter, 65-1
  Configuration Parameters, 65-2
  Configuring, 65-3
  Defining, 65-3
  Installing, 65-3
  Overview, 65-1
  Setting up metadata, 65-4
  Transaction Support, 65-1
Ingres, Data Types, 47-5
Ingres Data Source, 47-1
  Configuration Properties, 47-3
  Configuring, 47-7
  Defining, 47-6
  Installing, 47-6
Installing
  CICS Application Adapter, 63-3
  CICS Procedure Data Source, 62-6
  CISAM Data Source, 41-3
  Database Adapter, 69-4
  DBMS Data Source, 42-21
  DISAM Data Source, 41-3
  Enscribe Data Source, 43-4
  HP NonStop Pathway Application Adapter, 67-2
  IMS/DB DBCTL Data Source, 45-13
  IMS/DB DBDC Data Source, 45-16
  IMS/DB DLI Data Source, 45-11
  IMS/TM Application Adapter, 65-3
  Ingres Data Source, 47-6
  Legacy Plug Application Adapter, 66-2
  ODBC Data Source, 48-6
  Procedure Data Source (Application Connector), 61-10
  RDBSQL Data Source, 52-10
  RMS Data Source, 53-3
  Sybase Data Source, 56-5
  Text Delimited File Data Source, 57-2
  Virtual Data Source, 58-3
Interaction, Glossary Def, F-4
Interaction Parameters, Database Adapter, 69-4
Interactions as Web services, 9-7
IRPCD, Glossary Def, F-5
IRPCDCMD REXX script, 37-21
Isolation Level, Glossary Def, F-5

J
JCA
  Connecting to AIS Server, 86-1
  ConnectionFactory parameters, 86-2
  Logging mechanism, 86-15
  Metadata enhancement objects, 86-5
  Overview, 86-1
  RECORD enhancement classes, 86-6
  Supported interfaces, 86-3

L
Languages, 3-25
Legacy Plug Application Adapter, 66-1
  Configuration Parameters, 66-1
  Configuring, 66-2
  Defining, 66-2
  Installing, 66-2
  Overview, 66-1
  Setting up metadata, 66-3
Lock, Glossary Def, F-5
logging level, 21-20
LOJ, Glossary Def, F-5

M
Machine, Glossary Def, F-5
Managing Metadata, 5-3
managing procedure metadata, 6-37
Mapfiles, using, 38-1
Master password, 37-22
Metadata, 5-1, 5-5, Glossary Def, F-5
  AIS Server, backing up, 29-2
  backing up, 29-2
  Caching Native Metadata, 5-4
  Extending, 5-4
  for Data Sources, 5-1
  For Virtual Databases
  Importing, 5-2
  Importing Procedure
  Managing, 5-3
  Procedure
metadata, 6-1
  data source metadata, 6-1
  general table information, 6-2
  import wizard, 6-14
  importing, 6-14, 43-7
  procedure metadata, 6-36
  table columns configuration, 6-4
  table index, 6-8
  table statistics, 6-9
  tabs, 6-2
Metadata Definition, Tuxedo Application Adapter, 68-5
metadata import wizard, 6-14
metadata tabs, 6-2
Metadata, Definition
  Defining for CICS Application Adapter, 63-5
  Defining for CICS Procedure Data Source, 62-8
  Defining for CISAM Data Source, 41-5
  Defining for DB2 Data Source, 40-4
  Defining for DBMS Data Source, 42-23
  Defining for DISAM Data Source, 41-5
  Defining for Enscribe Data Source, 43-6
  Defining for HP NonStop Pathway Application Adapter, 67-4
  Defining for IMS Data Source, 45-18
  Defining for IMS/TM Application Adapter, 65-4
  Defining for Legacy Plug Application Adapter, 66-3
  Defining for Procedure Data Source (Application Connector), 61-11
  Defining for RMS Data Source, 53-5
  Defining for Text Delimited File Data Source, 57-3
  Tuxedo Application Adapter, 68-2

N
Native File System, Glossary Def, F-5
Native Metadata, Caching, 5-4
Native Object Store (NOS), Glossary Def, F-6
NAV, Glossary Def, F-5
NAV_UTIL, Glossary Def, F-6
  using, 37-2
NAV_UTIL Command Line Utility, 37-1
NAV_UTIL Options, 37-2
NAVDEMO, Glossary Def, F-6
Navigator, Glossary Def, F-6
NAVROOT, Glossary Def, F-6
NAV.SYN, Glossary Def, F-5
NOS, Glossary Def, F-6

O
ODBC, Data Types, 48-4
ODBC Data Source, 48-1
  Configuration Properties, 48-3
  Configuring, 48-8
  Defining, 48-6
  Installing, 48-6
OLE DB
  IMAGE fields, 89-12
  TEXT fields, 89-12
OLEDB-FS
  Data provider requirements, 49-1
  Functionality, 49-2
OLEDB-FS Data Source, 49-1
  Configuration Properties, 49-3
  Defining, 49-4
OLEDB-SQL
  Data provider requirements, 50-1
  Functionality, 50-2
Oracle RDB, Data Types, 52-9
Oracle RDB Data Source, 52-1
  Configuration Properties, 52-6
  Configuring, 52-11
  SQL Capabilities, 48-2, 51-6, 52-4
Ownership, Glossary Def, F-6

P
Passthrough, Glossary Def, F-6
Passthru, Glossary Def, F-6
Predefined interactions, query adapter, 70-2
Predict, Glossary Def, F-6
Prestarted Server, Glossary Def, F-7
Procedure Data Source, 61-1
  Configuration Properties, 61-2
  Configuring, 61-10
  Data Types, 61-7
  Defining, 61-10
  Installing, 61-10
  Security, 61-7
  Setting up metadata, 61-11
  Transaction Support, 61-7
procedure metadata, 6-36
  create manually, 6-37
  importing, 6-37
  managing, 6-37
Procedure Metadata Statements, 5-6
Process, Glossary Def, F-7
Properties, 3-12

Q
Queries, Managing over large tables, 71-1
Query Adapter, 70-1
  Metadata, 70-1
  Overview, 70-1
  Transaction Support, 70-2
Query adapter
  interactions, 70-2
  usage, 70-16
Query Analyzer, 36-1
  icons, 36-4
  toolbar, 36-3
Query analyzer
  generating a plan for SQL statements, 36-2
  optimization plan, 36-5
  SQL statement plan, 36-2
  viewing the execution plan, 36-2
query application adapter, defaulttdp, 70-16
Query Governor, 71-1
Query Optimizer, Glossary Def, F-7
Query Processor, Glossary Def, F-7

R
RDBSQL, Data Types, 52-9
RDBSQL Data Source, 52-1
  Configuration Properties, 52-6
  Configuring, 52-11
  Defining, 52-10
  Installing, 52-10
  SQL Capabilities, 48-2, 51-6, 52-4
Referential Integrity, Glossary Def, F-7
Data Source (Relational), Glossary Def, F-7
Relational Data Source, Glossary Def, F-7
Repository, Glossary Def, F-7
Resource Manager, Glossary Def, F-7
Restoring, server scripts, 29-3
Reusable Server, Glossary Def, F-8
RMS, Data Types, 53-3
RMS Data Source, 53-1
  Configuration Properties, 53-2
  Configuring, 53-4
  Defining, 53-3
  Installing, 53-3
  Setting up metadata, 53-5

S
Schema, Glossary Def, F-8
Security
  CICS Procedure Data Source, 62-5
  Procedure Data Source (Application Connector), 61-7
selecting a binding configuration, 4-24
Server
  backing up, 29-1
  restore installation, 29-1
Server Binding, 3-1
Server Machine, Glossary Def, F-8
Server scripts, backing up, 29-3
Setting up the system for Web services, 9-1
Single Client, Glossary Def, F-8
SQL, Glossary Def, F-8
SQL utility
  connecting to AIS Server, 32-2
  Modifying data in a recordset, 32-4
  specifying and executing queries, 32-2
  using the, 32-1
  working with chapters, 32-4
Staging Area, Glossary Def, F-8
Statements, Procedure Metadata
Stored Procedure, Glossary Def, F-8
Stored Procedures, In Virtual Databases
Stream, Glossary Def, F-8
Stream Context, Glossary Def, F-8
Stream Position, Glossary Def, F-9
Supported Languages, 3-25
Sybase, Data Types, 56-3
Sybase Data Source, 56-1
  Configuration Properties, 56-2
  Configuring, 56-6
  Defining, 56-5
  Environment variables, 56-7
  Installing, 56-5
Sync Point, Glossary Def, F-9
Synonyms, Creating in Virtual Databases

T
Text Delimited File Data Source, 57-1
  Configuration Properties, 57-1
  Configuring, 57-2
  Defining, 57-2
  Setting up metadata, 57-3
Transaction, Glossary Def, F-9
Transaction Manager, Glossary Def, F-9
Transaction Support
  CICS Application Adapter, 63-2
  CICS Procedure Data Source, Two-phase commit, 62-4
  Data Source Capabilities, 30-3
  Data Sources That Do Not Support Transactions, 30-3
  Data Sources with One-Phase Commit Capability, 30-3
  Data Sources with Two-Phase Commit Capability, 30-4
  Database Adapter, 69-4
  DB2 Data Source, 40-4
  HP NonStop Pathway Application Adapter, 67-1
  IMS/TM Application Adapter, 65-1
  in application adapters, 15-6
  in applications, 15-6
  OLEDB-FS, 49-3
  OLEDB-SQL, 50-3
  Overview, 30-1
  Procedure Data Source (Application Connector), 61-7
  Query Adapter, 70-2
  Recovery, 30-7
  Stand-alone Transaction Coordinator, 30-2
  Tuxedo Application Adapter, 68-2
Trigger, Glossary Def, F-9
Tuxedo Application Adapter, 68-1
  Checking environment variables, 68-3
  Configuration Properties, 68-2
  data types, 68-3
  Defining, 68-3
  Metadata, 68-2
  Overview, 68-1
  Setting up metadata, 68-5
  Transaction Support, 68-2
Two-phase Commit, Glossary Def, F-9
2PC, Glossary Def, F-9

U
Undeploying Web services, 9-9
User, Glossary Def, F-9
User Profile, Glossary Def, F-9
Using
  SQL utility, 32-1
  the Query analyzer, 36-1
Using a Virtual Database, 23-8
Using Arrays, 7-1

V
Viewing deployed Web services, 9-9
Views, In Virtual Databases
Virtual Data Source, 58-1
  Configuration Properties, 58-1
  Configuring, 58-4
  Defining, 58-3
  Installing, 58-3
Virtual Database, 23-1
  Creating Synonyms for
  Creating Views in
  Defining
  Defining Stored Procedures for
  Defining Tables in
Virtual Table, Glossary Def, F-9
Virtual View, Glossary Def, F-10

W
Web Services, 9-1
  prerequisites, 9-1
Web services
  advanced connection settings
    general, 9-5
    map, 9-6
    pooling, 9-5
  connecting the Axis servlet, 9-3
  defining an adapter as a Web service, 9-3
  logs, 9-10
  preparing the system, 9-1
  selecting interactions, 9-7
  undeploying, 9-9
  viewing deployed Web services, 9-9
  Web services wizard, 9-2
Working with parameterized queries, 32-3
Workspace, Glossary Def, F-10
Workspace server mode, 4-20
workspaces
  adding, 4-12
  disabling, 4-25
  editing, 4-15
  general tab, 4-16
  Server Mode tab, 4-19
  WS Security tab, 4-22

X
XML Protocol, 15-5