
MIMIX

Version 7.0
MIMIX Administrator Reference
Conceptual, Configuration, and Reference Information
Published: September 2010 level 7.0.01.00
Copyrights, Trademarks, and Notices
Contents
Who this book is for................................................................................................... 16
What is in this book............................................................................................. 16
The MIMIX documentation set ............................................................................ 16
SNA and OptiConnect Support Discontinued............................................................ 17
Sources for additional information............................................................................. 18
How to contact us...................................................................................................... 19
Chapter 1 MIMIX overview 20
MIMIX concepts......................................................................................................... 22
System roles and relationships ........................................................................... 22
Data groups: the unit of replication...................................................................... 23
Changing directions: switchable data groups...................................................... 23
Additional switching capability....................................................................... 24
Journaling and object auditing introduction......................................... 24
Log spaces.......................................................................................................... 25
Multi-part naming convention.............................................................................. 26
The MIMIX environment............................................................................................ 28
The product library .............................................................................................. 28
IFS directories ............................................................................................... 28
Job descriptions and job classes......................................................... 29
User profiles .................................................................................................. 31
The system manager........................................................................................... 31
The journal manager ........................................................................................... 32
The MIMIXQGPL library...................................................................................... 33
MIMIXSBS subsystem................................................................................... 33
Data libraries ....................................................................................................... 33
Named definitions................................................................................................ 34
Data group entries............................................................................................... 34
Journal receiver management................................................................... 36
Interaction with other products that manage receivers........................................ 37
Processing from an earlier journal receiver......................................................... 38
Considerations when journaling on target........................................................... 38
Operational overview................................................................................................. 40
Support for starting and ending replication.......................................................... 40
Support for checking installation status............................................................... 40
Support for automatically detecting and resolving problems............................... 40
Support for working with data groups.................................................................. 41
Support for resolving problems ........................................................................... 41
Support for switching a data group...................................................................... 43
Support for working with messages .................................................................... 43
Chapter 2 Replication process overview 45
Replication job and supporting job names ................................................................ 46
Cooperative processing introduction......................................................................... 48
MIMIX Dynamic Apply......................................................................................... 48
Legacy cooperative processing........................................................................... 49
Advanced journaling............................................................................................ 49
System journal replication......................................................................................... 50
Processing self-contained activity entries ........................................................... 51
Processing data-retrieval activity entries............................................................. 52
Processes with multiple jobs ............................................................................... 54
Tracking object replication................................................................................... 54
Managing object auditing.................................................................................... 54
User journal replication.............................................................................................. 57
What is remote journaling?.................................................................................. 57
Benefits of using remote journaling with MIMIX .................................................. 57
Restrictions of MIMIX Remote Journal support................................... 58
Overview of IBM processing of remote journals.................................................. 59
Synchronous delivery.................................................................................... 59
Asynchronous delivery.................................................................................. 61
User journal replication processes ...................................................................... 62
The RJ link .......................................................................................................... 62
Sharing RJ links among data groups............................................................. 62
RJ links within and independently of data groups ......................................... 63
Differences between ENDDG and ENDRJLNK commands .......................... 63
RJ link monitors................................................................................................... 64
RJ link monitors - operation........................................................................... 64
RJ link monitors in complex configurations ................................................... 64
Support for unconfirmed entries during a switch................................................. 66
RJ link considerations when switching................................................................ 66
User journal replication of IFS objects, data areas, data queues.............................. 68
Benefits of advanced journaling.......................................................................... 68
Replication processes used by advanced journaling .......................................... 69
Tracking entries................................................................................................... 70
IFS object file identifiers (FIDs) ........................................................................... 71
Lesser-used processes for user journal replication................................................... 72
User journal replication with source-send processing......................................... 72
The data area polling process............................................................................. 73
Chapter 3 Preparing for MIMIX 75
Checklist: pre-configuration....................................................................................... 76
Data that should not be replicated............................................................................. 77
Planning for journaled IFS objects, data areas, and data queues............................. 78
Is user journal replication appropriate for your environment? ............................. 78
Serialized transactions with database files.......................................................... 78
Converting existing data groups.......................................................................... 78
Conversion examples.................................................................................... 79
Database apply session balancing...................................................................... 80
User exit program considerations........................................................................ 80
Starting the MIMIXSBS subsystem........................................................................... 82
Accessing the MIMIX Main Menu.............................................................................. 83
Chapter 4 Planning choices and details by object class 85
Replication choices by object type............................................................................ 87
Configured object auditing value for data group entries............................................ 89
Identifying library-based objects for replication......................................................... 91
How MIMIX uses object entries to evaluate journal entries for replication.......... 92
Identifying spooled files for replication ................................................................ 93
Additional choices for spooled file replication................................................ 94
Replicating user profiles and associated message queues ................................ 95
Identifying logical and physical files for replication.................................................... 96
Considerations for LF and PF files...................................................................... 96
Files with LOBs.............................................................................................. 98
Configuration requirements for LF and PF files................................................... 99
Requirements and limitations of MIMIX Dynamic Apply.................................... 101
Requirements and limitations of legacy cooperative processing....................... 102
Identifying data areas and data queues for replication............................................ 103
Configuration requirements - data areas and data queues............................... 103
Restrictions - user journal replication of data areas and data queues .............. 104
Identifying IFS objects for replication...................................................................... 106
Supported IFS file systems and object types .................................................... 106
Considerations when identifying IFS objects..................................................... 107
MIMIX processing order for data group IFS entries..................................... 107
Long IFS path names .................................................................................. 107
Upper and lower case IFS object names..................................................... 107
Configured object auditing value for IFS objects......................................... 108
Configuration requirements - IFS objects.......................................................... 108
Restrictions - user journal replication of IFS objects ......................................... 109
Identifying DLOs for replication............................................................................... 111
How MIMIX uses DLO entries to evaluate journal entries for replication.......... 111
Sequence and priority order for documents ................................................ 111
Sequence and priority order for folders ....................................................... 112
Processing of newly created files and objects......................................................... 114
Newly created files ............................................................................................ 114
New file processing - MIMIX Dynamic Apply............................................... 114
New file processing - legacy cooperative processing.................................. 115
Newly created IFS objects, data areas, and data queues................................. 115
Determining how an activity entry for a create operation was replicated.... 116
Processing variations for common operations ........................................................ 117
Move/rename operations - system journal replication....................................... 117
Move/rename operations - user journaled data areas, data queues, IFS objects... 118
Delete operations - files configured for legacy cooperative processing............ 121
Delete operations - user journaled data areas, data queues, IFS objects ........ 121
Restore operations - user journaled data areas, data queues, IFS objects ...... 121
Chapter 5 Configuration checklists 123
Checklist: New remote journal (preferred) configuration......................................... 125
Checklist: New MIMIX source-send configuration................................................... 129
Checklist: converting to application groups............................................................. 132
Checklist: Converting to remote journaling.............................................................. 133
Converting to MIMIX Dynamic Apply....................................................................... 135
Converting using the Convert Data Group command ....................................... 135
Checklist: manually converting to MIMIX Dynamic Apply.................................. 136
Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling.................... 138
Checklist: Converting to legacy cooperative processing......................................... 141
Chapter 6 System-level communications 143
Configuring for native TCP/IP.................................................................................. 143
Port aliases - simple example ............................................................. 144
Port aliases - complex example .......................................................... 145
Creating port aliases ......................................................................................... 146
Configuring APPC/SNA........................................................................................... 147
Configuring OptiConnect......................................................................................... 148
Chapter 7 Configuring system definitions 149
Tips for system definition parameters ..................................................................... 150
Creating system definitions ..................................................................................... 153
Changing a system definition.................................................................................. 154
Multiple network system considerations.................................................................. 155
Chapter 8 Configuring transfer definitions 157
Tips for transfer definition parameters..................................................................... 159
Using contextual (*ANY) transfer definitions ........................................................... 163
Search and selection process ........................................................................... 163
Considerations for remote journaling ................................................................ 164
Considerations for MIMIX source-send configurations...................................... 164
Naming conventions for contextual transfer definitions..................................... 165
Additional usage considerations for contextual transfer definitions................... 165
Creating a transfer definition................................................................................... 166
Changing a transfer definition................................................................................. 167
Changing a transfer definition to support remote journaling.............................. 167
Finding the system database name for RDB directory entries................................ 169
Using IBM i commands to work with RDB directory entries .............................. 169
Starting the TCP/IP server ...................................................................................... 170
Using autostart job entries to start the TCP server ................................................. 171
Identifying the current autostart job entry information....................................... 171
Changing an autostart job entry and its related job description ........................ 172
Using a different job description for an autostart job entry.......................... 172
Updating host information for a user-managed autostart job entry............. 173
Updating port information for a user-managed autostart job entry.............. 173
Verifying a communications link for system definitions ........................................... 175
Verifying the communications link for a data group................................................. 176
Verifying all communications links..................................................................... 176
Chapter 9 Configuring journal definitions 177
Journal definitions created by other processes....................................... 179
Tips for journal definition parameters...................................................................... 180
Journal definition considerations............................................................. 184
Naming convention for remote journaling environments with 2 systems........... 185
Example journal definitions for a switchable data group............................. 185
Naming convention for multimanagement environments .................................. 187
Example journal definitions for three management nodes .......................... 188
Journal receiver size for replicating large object data............................. 191
Verifying journal receiver size options............................................................... 191
Changing journal receiver size options ............................................................. 191
Creating a journal definition..................................................................................... 192
Changing a journal definition................................................................................... 194
Building the journaling environment........................................................................ 195
Changing the journaling environment to use *MAXOPT3....................................... 196
Changing the remote journal environment.............................................................. 200
Adding a remote journal link.................................................................................... 202
Changing a remote journal link................................................................................ 203
Temporarily changing from RJ to MIMIX processing.............................................. 204
Changing from remote journaling to MIMIX processing.......................................... 205
Removing a remote journaling environment............................................................ 206
Chapter 10 Configuring data group definitions 208
Tips for data group parameters............................................................................... 209
Additional considerations for data groups ......................................................... 219
Creating a data group definition.............................................................................. 221
Changing a data group definition............................................................................ 225
Fine-tuning backlog warning thresholds for a data group....................................... 225
Chapter 11 Additional options: working with definitions 229
Copying a definition................................................................................................. 229
Deleting a definition................................................................................................. 230
Displaying a definition............................................................................................. 231
Printing a definition.................................................................................................. 232
Renaming definitions............................................................................................... 232
Renaming a system definition........................................................................... 232
Renaming a transfer definition .......................................................................... 235
Renaming a journal definition with considerations for RJ link........................... 236
Renaming a data group definition ..................................................................... 237
Swapping system definition names......................................................................... 238
Chapter 12 Configuring data group entries 241
Creating data group object entries .......................................................................... 242
Loading data group object entries..................................................................... 242
Adding or changing a data group object entry................................................... 243
Creating data group file entries ............................................................................... 246
Loading file entries ............................................................................................ 246
Loading file entries from a data group's object entries................ 247
Loading file entries from a library................................................................ 249
Loading file entries from a journal definition................................................ 250
Loading file entries from another data group's file entries........... 251
Adding a data group file entry ........................................................................... 252
Changing a data group file entry....................................................................... 253
Creating data group IFS entries .............................................................................. 255
Adding or changing a data group IFS entry....................................................... 255
Loading tracking entries .......................................................................................... 257
Loading IFS tracking entries.............................................................................. 257
Loading object tracking entries.......................................................................... 258
Creating data group DLO entries ............................................................................ 259
Loading DLO entries from a folder .................................................................... 259
Adding or changing a data group DLO entry..................................................... 260
Creating data group data area entries..................................................................... 261
Loading data area entries for a library............................................................... 261
Adding or changing a data group data area entry............................................. 262
Additional options: working with DG entries............................................................ 263
Copying a data group entry............................................................................... 263
Removing a data group entry............................................................................ 264
Displaying a data group entry............................................................................ 265
Printing a data group entry................................................................................ 265
Chapter 13 Additional supporting tasks for configuration 266
Accessing the Configuration Menu.......................................................................... 268
Starting the system and journal managers.............................................................. 269
Setting data group auditing values manually........................................................... 270
Examples of changing an IFS object's auditing value................... 271
Checking file entry configuration manually.............................................................. 276
Changes to startup programs.................................................................................. 278
Starting the DDM TCP/IP server............................................................................. 279
Verifying that the DDM TCP/IP server is running.............................................. 279
Checking DDM password validation level in use..................................................... 280
Option 1. Enable MIMIXOWN user profile for DDM environment...................... 280
Option 2. Allow user profiles without passwords............................................... 281
Starting data groups for the first time...................................................................... 282
Identifying data groups that use an RJ link............................................................. 283
Using file identifiers (FIDs) for IFS objects.............................................................. 284
Configuring restart times for MIMIX jobs................................................................. 285
Configurable job restart time operation............................................................. 285
Considerations for using *NONE................................................................. 287
Examples: job restart time................................................................................. 287
Restart time examples: system definitions .................................................. 288
Restart time examples: system and data group definition combinations..... 288
Configuring the restart time in a system definition ............................................ 291
Configuring the restart time in a data group definition....................................... 291
Setting the system time zone and time................................................................... 293
Creating an application group definition.................................................................. 294
Loading data resource groups into an application group........................................ 295
Specifying the primary node for the application group............................................ 296
Starting, ending, or switching an application group................................................. 297
Starting an application group............................................................................. 298
Ending an application group.............................................................................. 299
Switching an application group.......................................................................... 299
Chapter 14 Starting, ending, and verifying journaling 301
What objects need to be journaled.......................................................................... 302
Authority requirements for starting journaling.................................................... 303
MIMIX commands for starting journaling................................................................. 304
Journaling for physical files ..................................................................... 305
Displaying journaling status for physical files.................................................... 305
Starting journaling for physical files................................................................... 305
Ending journaling for physical files.................................................................... 306
Verifying journaling for physical files ................................................................. 307
Journaling for IFS objects........................................................................ 308
Displaying journaling status for IFS objects ...................................................... 308
Starting journaling for IFS objects ..................................................................... 308
Ending journaling for IFS objects ...................................................................... 309
Verifying journaling for IFS objects.................................................................... 310
Journaling for data areas and data queues............................................. 311
Displaying journaling status for data areas and data queues............................ 311
Starting journaling for data areas and data queues .......................................... 311
Ending journaling for data areas and data queues............................................ 312
Verifying journaling for data areas and data queues......................................... 313
Chapter 15 Configuring for improved performance 314
Configuring parallel access path maintenance........................................................ 315
Underlying Technology...................................................................................... 315
Parallel Access Path Maintenance usage of MAINT......................................... 315
Minimized journal entry data ................................................................................... 318
Restrictions of minimized journal entry data...................................................... 318
Configuring for minimized journal entry data..................................................... 319
Configuring database apply caching....................................................................... 320
Configuring for high availability journal performance enhancements...................... 321
Journal standby state........................................................................ 321
Minimizing potential performance impacts of standby state........................ 322
Journal caching................................................................................. 322
MIMIX processing of high availability journal performance enhancements....... 322
Requirements of high availability journal performance enhancements............. 323
Restrictions of high availability journal performance enhancements................. 323
Caching extended attributes of *FILE objects ......................................................... 325
Increasing data returned in journal entry blocks by delaying RCVJRNE calls ........ 326
Understanding the data area format.................................................................. 326
Determining if the data area should be changed............................................... 327
Configuring the RCVJRNE call delay and block values .................... 327
Configuring high volume objects for better performance......................................... 329
Improving performance of the #MBRRCDCNT audit .............................................. 330
Chapter 16 Configuring advanced replication techniques 332
Keyed replication..................................................................................................... 334
Keyed vs positional replication.......................................................................... 334
Requirements for keyed replication................................................................... 334
Restrictions of keyed replication........................................................................ 335
Implementing keyed replication......................................................................... 335
Changing a data group configuration to use keyed replication.................... 335
Changing a data group file entry to use keyed replication........................... 336
Verifying key attributes...................................................................................... 338
Data distribution and data management scenarios................................................. 339
Configuring for bi-directional flow...................................................................... 339
Bi-directional requirements: system journal replication............................... 339
Bi-directional requirements: user journal replication.................................... 340
Configuring for file routing and file combining................................................... 341
Configuring for cascading distributions ............................................................. 343
Trigger support........................................................................................................ 346
How MIMIX handles triggers ............................................................................. 346
Considerations when using triggers .................................................................. 346
Enabling trigger support.................................................................................... 347
Synchronizing files with triggers........................................................................ 347
Constraint support................................................................................................... 348
Referential constraints with delete rules............................................................ 348
Replication of constraint-induced modifications .......................................... 349
Handling SQL identity columns ............................................................................... 350
The identity column problem explained............................................................. 350
When the SETIDCOLA command is useful....................................................... 351
SETIDCOLA command limitations .................................................................... 351
Alternative solutions .......................................................................................... 352
SETIDCOLA command details.......................................................................... 353
Usage notes ................................................................................................ 354
Examples of choosing a value for INCREMENTS....................................... 354
Checking for replication of tables with identity columns.................................... 355
Setting the identity column attribute for replicated files..................................... 355
Collision resolution.................................................................................................. 357
Additional methods available with CR classes.................................................. 357
Requirements for using collision resolution....................................................... 358
Working with collision resolution classes .......................................................... 359
Creating a collision resolution class ............................................................ 359
Changing a collision resolution class........................................................... 360
Deleting a collision resolution class............................................................. 360
Displaying a collision resolution class ......................................................... 360
Printing a collision resolution class.............................................................. 361
Omitting T-ZC content from system journal replication........................................... 362
Configuration requirements and considerations for omitting T-ZC content....... 363
Omit content (OMTDTA) and cooperative processing................................. 364
Omit content (OMTDTA) and comparison commands ................................ 364
Selecting an object retrieval delay........................................................................... 366
Object retrieval delay considerations and examples......................................... 366
Configuring to replicate SQL stored procedures and user-defined functions.......... 368
Requirements for replicating SQL stored procedure operations ....................... 368
To replicate SQL stored procedure operations ................................................. 369
Using Save-While-Active in MIMIX.......................................................................... 370
Considerations for save-while-active................................................................. 370
Types of save-while-active options ................................................................... 371
Example configurations..................................................................................... 371
Chapter 17 Object selection for Compare and Synchronize commands 372
Object selection process ......................................................................................... 372
Order precedence ............................................................................................. 374
Parameters for specifying object selectors.............................................................. 375
Object selection examples ...................................................................................... 380
Processing example with a data group and an object selection parameter ...... 380
Example subtree ............................................................................................... 383
Example Name pattern...................................................................................... 387
Example subtree for IFS objects ....................................................................... 388
Report types and output formats............................................................................. 390
Spooled files...................................................................................................... 390
Outfiles .............................................................................................................. 391
Chapter 18 Comparing attributes 392
About the Compare Attributes commands .............................................................. 392
Choices for selecting objects to compare.......................................................... 393
Unique parameters...................................................................................... 393
Choices for selecting attributes to compare...................................................... 394
CMPFILA supported object attributes for *FILE objects.............................. 395
CMPOBJA supported object attributes for *FILE objects ............ 395
Comparing file and member attributes .................................................................... 396
Comparing object attributes .................................................................................... 399
Comparing IFS object attributes.............................................................................. 402
Comparing DLO attributes....................................................................................... 405
Chapter 19 Comparing file record counts and file member data 408
Comparing file record counts .................................................................................. 408
To compare file record counts........................................................................... 409
Significant features for comparing file member data............................................... 411
Repairing data................................................................................................... 411
Active and non-active processing...................................................................... 411
Processing members held due to error ............................................................. 412
Additional features............................................................................................. 412
Considerations for using the CMPFILDTA command............................................. 412
Recommendations and restrictions................................................................... 412
Using the CMPFILDTA command with firewalls................................................ 413
Security considerations ..................................................................................... 413
Comparing allocated records to records not yet allocated................................ 413
Comparing files with unique keys, triggers, and constraints ............................. 414
Avoiding issues with triggers ....................................................................... 415
Referential integrity considerations ............................................................. 415
Job priority.................................................................................... 415
CMPFILDTA and network inactivity................................................................... 416
Specifying CMPFILDTA parameter values.............................................................. 416
Specifying file members to compare................................................................. 416
Tips for specifying values for unique parameters.............................................. 417
Specifying the report type, output, and type of processing ............................... 420
System to receive output............................................................................. 420
Interactive and batch processing................................................................. 420
Using the additional parameters........................................................................ 421
Advanced subset options for CMPFILDTA.............................................................. 422
Ending CMPFILDTA requests................................................................................. 426
Comparing file member data - basic procedure (non-active) .................................. 427
Comparing and repairing file member data - basic procedure................................ 430
Comparing and repairing file member data - members on hold (*HLDERR) .......... 433
Comparing file member data using active processing technology.......................... 436
Comparing file member data using subsetting options ........................................... 439
Chapter 20 Synchronizing data between systems 443
Considerations for synchronizing using MIMIX commands..................................... 445
Limiting the maximum sending size .................................................................. 445
Synchronizing user profiles ............................................................................... 445
Synchronizing user profiles with SYNCnnn commands .............................. 446
Synchronizing user profiles with the SNDNETOBJ command.................... 446
Missing system distribution directory entries automatically added.............. 447
Synchronizing large files and objects................................................................ 447
Status changes caused by synchronizing......................................................... 447
Synchronizing objects in an independent ASP.................................................. 448
About MIMIX commands for synchronizing objects, IFS objects, and DLOs .......... 449
About synchronizing data group activity entries (SYNCDGACTE).......................... 450
About synchronizing file entries (SYNCDGFE command) ...................................... 451
About synchronizing tracking entries....................................................................... 453
Performing the initial synchronization...................................................................... 454
Establish a synchronization point...................................................................... 454
Resources for synchronizing............................................................................. 455
Using SYNCDG to perform the initial synchronization............................................ 456
To perform the initial synchronization using the SYNCDG command defaults . 457
Verifying the initial synchronization......................................................................... 458
Synchronizing database files................................................................................... 460
Synchronizing objects ............................................................................................. 462
To synchronize library-based objects associated with a data group................. 462
To synchronize library-based objects without a data group.............................. 463
Synchronizing IFS objects....................................................................................... 466
To synchronize IFS objects associated with a data group................................ 466
To synchronize IFS objects without a data group ............................................. 467
Synchronizing DLOs................................................................................................ 470
To synchronize DLOs associated with a data group......................................... 470
To synchronize DLOs without a data group...................................................... 471
Synchronizing data group activity entries................................................................ 473
Synchronizing tracking entries ................................................................................ 475
To synchronize an IFS tracking entry................................................................ 475
To synchronize an object tracking entry............................................................ 475
Sending library-based objects................................................................................. 476
Sending IFS objects ................................................................................................ 478
Sending DLO objects .............................................................................................. 479
Chapter 21 Introduction to programming 480
Support for customizing........................................................................................... 481
User exit points.................................................................................................. 481
Collision resolution............................................................................................ 481
Completion and escape messages for comparison commands ............................. 483
CMPFILA messages ......................................................................................... 483
CMPOBJA messages........................................................................ 484
CMPIFSA messages......................................................................................... 484
CMPDLOA messages ....................................................................................... 485
CMPRCDCNT messages.................................................................................. 485
CMPFILDTA messages..................................................................................... 486
Adding messages to the MIMIX message log......................................................... 490
Output and batch guidelines.................................................................................... 491
General output considerations .......................................................................... 491
Output parameter ........................................................................................ 491
Display output.............................................................................................. 492
Print output.................................................................................................. 492
File output.................................................................................................... 494
General batch considerations............................................................................ 495
Batch (BATCH) parameter .......................................................................... 495
Job description (JOBD) parameter.............................................. 495
Job name (JOB) parameter......................................................... 495
Displaying a list of commands in a library............................................................... 496
Running commands on a remote system................................................................ 497
Benefits - RUNCMD and RUNCMDS commands ............................................. 497
Procedures for running commands RUNCMD, RUNCMDS.................................... 498
Running commands using a specific protocol ................................................... 498
Running commands using a MIMIX configuration element............................... 500
Using lists of retrieve commands ............................................................................ 504
Changing command defaults................................................................................... 505
Chapter 22 Customizing procedures 506
Procedure components and concepts..................................................................... 506
Procedure types ................................................................................................ 507
Procedure job processing.................................................................................. 507
Attributes of a step............................................................................................ 508
Operational control ............................................................................................ 509
Current status and run history........................................................................... 510
Customizing user application handling for switching............................................... 510
Customize the step programs for user applications .......................................... 511
Working with procedures......................................................................................... 512
Accessing the Work with Procedures display.................................................... 512
Displaying the procedures for an application group.................................... 513
Displaying all procedures ............................................................................ 513
Creating a procedure......................................................................................... 514
Deleting a procedure......................................................................................... 514
Working with the steps of a procedure.................................................................... 515
Displaying the steps within a procedure............................................................ 515
Displaying step status for the last started run of a procedure........................... 515
Adding a step to a procedure............................................................................ 516
Changing attributes of a step............................................................................ 516
Enabling or disabling a step.............................................................................. 517
Removing a step from a procedure................................................................... 517
Working with step programs.................................................................................... 517
Accessing step programs.................................................................................. 518
Creating a custom step program....................................................................... 518
Changing a step program.................................................................................. 518
Step program format STEP0100....................................................................... 519
Working with step messages................................................................................... 520
Accessing the Work with Step Messages display............................. 521
Adding or changing a step message................................................................. 521
Removing a step message................................................................................ 521
Additional programming support for procedures and steps..................................... 522
Chapter 23 Customizing with exit point programs 523
Summary of exit points............................................................................................ 523
MIMIX user exit points....................................................................................... 523
MIMIX Monitor user exit points.......................................................................... 523
MIMIX Promoter user exit points....................................................................... 524
Requesting customized user exit programs ...................................................... 525
Working with journal receiver management user exit points................................... 526
Journal receiver management exit points.......................................... 526
Change management exit points................................................................. 526
Delete management exit points................................................................... 527
Requirements for journal receiver management exit programs................... 527
Journal receiver management exit program example ................. 530
Appendix A Supported object types for system journal replication 533
Appendix B Copying configurations 536
Supported scenarios ............................................................................................... 536
Checklist: copy configuration................................................................................... 537
Copying configuration procedure............................................................................ 541
Appendix C Configuring Intra communications 542
Manually configuring Intra using SNA..................................................................... 543
Manually configuring Intra using TCP ..................................................................... 544
Appendix D MIMIX support for independent ASPs 546
Benefits of independent ASPs................................................................................. 547
Auxiliary storage pool concepts at a glance............................................................ 547
Requirements for replicating from independent ASPs ............................................ 550
Limitations and restrictions for independent ASP support....................................... 550
Configuration planning tips for independent ASPs.................................................. 551
Journal and journal receiver considerations for independent ASPs.................. 552
Configuring IFS objects when using independent ASPs................................... 552
Configuring library-based objects when using independent ASPs.................... 552
Avoiding unexpected changes to the library list................................................ 553
Detecting independent ASP overflow conditions..................................................... 555
What are rules and how they are used by auditing................................................. 556
Appendix E Creating user-defined rules and notifications 557
Requirements for using audits and rules................................................................. 558
Guidelines and recommendations for auditing........................................................ 558
Considerations and recommendations for rules................................................ 559
Replacement variables................................................................................ 560
Rule-generated messages and notifications ............................................... 560
Creating user-defined rules..................................................................................... 562
Example of a user-defined rule ......................................................................... 562
Creating user-generated notifications ..................................................................... 563
Example of a user-generated notification.......................................................... 564
Running user rules and rule groups programmatically............................................ 566
Example of creating a monitor to run a user rule .............................................. 566
MIMIX rule groups................................................................................................... 567
Appendix F Interpreting audit results 568
Resolving audit problems........................................................................................ 569
Checking the job log of an audit.............................................................................. 571
Interpreting results for configuration data - #DGFE audit........................................ 572
Interpreting results of audits for record counts and file data ................................... 574
What differences were detected by #FILDTA.................................................... 574
What differences were detected by #MBRRCDCNT......................................... 576
Interpreting results of audits that compare attributes .............................................. 577
What attribute differences were detected.......................................................... 577
Where was the difference detected................................................................... 579
What attributes were compared ........................................................................ 580
Attributes compared and expected results - #FILATR, #FILATRMBR audits.... 581
Attributes compared and expected results - #OBJATR audit............................ 586
Attributes compared and expected results - #IFSATR audit............................. 594
Attributes compared and expected results - #DLOATR audit ........................... 596
Comparison results for journal status and other journal attributes.................... 598
How configured journaling settings are determined.................................... 601
Comparison results for auxiliary storage pool ID (*ASP)................................... 602
Comparison results for user profile status (*USRPRFSTS) .............................. 605
How configured user profile status is determined........................................ 606
Comparison results for user profile password (*PRFPWDIND)......................... 608
Appendix G Journal Codes and Error Codes 610
Journal entry codes for user journal transactions.................................... 610
Journal entry codes for files .............................................................. 610
Error codes for files in error............................................................... 612
Journal codes and entry types for journaled IFS objects .................................. 615
Journal codes and entry types for journaled data areas and data queues........ 615
J ournal entry codes for system journal transactions ............................................... 617
Appendix H Outfile formats 620
Work panels with outfile support ............................................................................. 621
MCAG outfile (WRKAG command) ......................................................................... 622
MCDTACRGE outfile (WRKDTARGE command)................................................... 625
MCNODE outfile (WRKNODE command)............................................................... 628
MXCDGFE outfile (CHKDGFE command).............................................................. 630
MXCMPDLOA outfile (CMPDLOA command)......................................................... 632
MXCMPFILA outfile (CMPFILA command)............................................................. 634
MXCMPFILD outfile (CMPFILDTA command)........................................................ 636
MXCMPFILR outfile (CMPFILDTA command, RRN report).................................... 639
MXCMPRCDC outfile (CMPRCDCNT command)................................................... 640
MXCMPIFSA outfile (CMPIFSA command) ............................................................ 642
MXCMPOBJA outfile (CMPOBJA command) ......................................... 644
MXAUDHST outfile (WRKAUDHST command) ...................................................... 646
MXAUDOBJ outfile (WRKAUDOBJ, WRKAUDOBJH commands) ......................... 648
MXDGACT outfile (WRKDGACT command)........................................................... 651
MXDGACTE outfile (WRKDGACTE command)...................................................... 653
MXDGDAE outfile (WRKDGDAE command) .......................................................... 660
MXDGDFN outfile (WRKDGDFN command) .......................................................... 661
MXDGDLOE outfile (WRKDGDLOE command) ..................................................... 669
MXDGFE outfile (WRKDGFE command)................................................................ 671
MXDGIFSE outfile (WRKDGIFSE command)......................................................... 675
MXDGSTS outfile (WRKDG command).................................................................. 677
WRKDG outfile SELECT statement examples.................................................. 699
WRKDG outfile example 1........................................................................... 699
WRKDG outfile example 2........................................................................... 699
WRKDG outfile example 3........................................................................... 700
WRKDG outfile example 4........................................................................... 700
MXDGOBJE outfile (WRKDGOBJE command) ...................................... 701
MXDGTSP outfile (WRKDGTSP command)........................................................... 704
MXJRNDFN outfile (WRKJRNDFN command)....................................... 707
MXRJLNK outfile (WRKRJLNK command)............................................. 711
MXSYSDFN outfile (WRKSYSDFN command)....................................................... 714
MXTFRDFN outfile (WRKTFRDFN command)....................................................... 718
MZPRCDFN outfile (WRKPRCDFN command)...................................................... 720
MZPRCE outfile (WRKPRCE command)................................................................ 721
MXDGIFSTE outfile (WRKDGIFSTE command)..................................................... 724
MXDGOBJTE outfile (WRKDGOBJTE command).................................. 726
Index 729
Who this book is for
The MIMIX Administrator Reference book is a tool for MIMIX administrators who
configure and maintain a MIMIX ha1 or MIMIX ha Lite replication environment.
What is in this book
The MIMIX Administrator Reference book provides these distinct types of information:
• Descriptions of MIMIX concepts and replication processes
• Configuration planning information, including details about replication choices for classes of objects
• Checklists and supporting procedures for implementing common configurations
• Detailed information for customizing configurations for improved performance and to support advanced replication techniques
• Detailed information about comparison commands and their results, as well as synchronization commands. Compare commands are the basis of all MIMIX audits, and synchronize commands are the basis for automatic recoveries.
• Descriptions of available support for customizing through the use of exit programs
• Reference material such as lists of supported object types and possible journal codes and error codes, values that can be returned in output files (outfiles), and attributes that can be compared.
The MIMIX documentation set
The following documents about MIMIX products are available:
Using License Manager
This book describes software requirements, system security, and other planning
considerations for installing MIMIX software and software fixes. The preferred way
to obtain license keys and install software is by using AutoValidate and the MIMIX
Installation Wizard. However, if you cannot use them, this book provides
instructions for obtaining licenses and installing software from a 5250 emulator.
This book also describes how to use the additional security functions from Vision
Solutions which are available for MIMIX products and commands through License
Manager. Also, to support compatible previous releases, this book includes
requirements and troubleshooting information for MIMIX Availability Manager.
MIMIX Administrator Reference
This book provides detailed conceptual, configuration, and programming
information for MIMIX Enterprise and MIMIX Professional. It includes checklists
for setting up several common configurations, information for planning what to
replicate, and detailed advanced configuration topics for custom needs. It also
identifies what information can be returned in outfiles if used in automation.
MIMIX Global Operations
This book provides high level concepts and operational procedures for MIMIX
Global users in an IBM i cluster environment. This book focuses on addressing
problems reported in status and basic operational procedures such as starting,
ending, and switching.
MIMIX Operations - 5250
This book provides high level concepts and operational procedures for managing
your high availability environment using MIMIX Enterprise or MIMIX Professional
from a 5250 emulator. This book focuses on tasks typically performed by an
operator, such as checking status, starting or stopping replication, performing
audits, and basic problem resolution.
Using MIMIX Monitor
This book describes how to use the MIMIX Monitor user and programming
interfaces available with MIMIX Enterprise or MIMIX Professional. This book also
includes programming information about MIMIX Model Switch Framework and
support for hardware switching.
Using MIMIX Promoter
This book describes how to use MIMIX commands for copying and reorganizing
active files. MIMIX Promoter is available with MIMIX Enterprise only.
MIMIX for IBM WebSphere MQ
This book identifies requirements for the MIMIX for MQ feature which supports
replication in IBM WebSphere MQ environments. This book describes how to
configure MIMIX for this environment and how to perform the initial
synchronization and initial startup. Once configured and started, all other
operations are performed as described in the MIMIX Operations - 5250 book.
SNA and OptiConnect Support Discontinued
With the release of MIMIX V7.0, MIMIX no longer supports configurations using SNA
or OptiConnect for communications. The parameters are still available within MIMIX
V7.0; however, this functionality is not tested for MIMIX V7.0. Vision Solutions will
only assist customers in determining possible workarounds for issues arising from the
use of SNA or OptiConnect for communication in MIMIX V7.0.
Sources for additional information
This book refers to other published information. The following information, plus
additional technical information, can be located in the IBM System i and i5/OS
Information Center.
From the Information center you can access these IBM Power™ Systems topics,
books, and redbooks:
• Backup and Recovery
• Journal management
• DB2 Universal Database for IBM Power™ Systems Database Programming
• Integrated File System Introduction
• Independent disk pools
• OptiConnect for OS/400
• TCP/IP Setup
• IBM redbook Striving for Optimal Journal Performance on DB2 Universal Database for iSeries, SG24-6286
• IBM redbook AS/400 Remote Journal Function for High Availability and Data Replication, SG24-5189
• IBM redbook Power™ Systems iASPs: A Guide to Moving Applications to Independent ASPs, SG24-6802
The following information may also be helpful if you use advanced journaling:
• DB2 UDB for iSeries SQL Programming Concepts
• DB2 Universal Database for iSeries SQL Reference
• IBM redbook AS/400 Remote Journal Function for High Availability and Data Replication, SG24-5189
How to contact us
For contact information, visit our Contact CustomerCare web page.
If you are current on maintenance, support for MIMIX products is also available when
you log in to Support Central.
It is important to include product and version information whenever you report
problems.
CHAPTER 1 MIMIX overview
This book provides concepts, configuration procedures, and reference information for
MIMIX Enterprise and MIMIX Professional. For simplicity, this book uses the term
MIMIX to refer to the functionality provided by either product unless a more specific
name is necessary.
MIMIX version 7 provides high availability for your critical data in a production
environment on IBM Power™ Systems through real-time replication of changes.
MIMIX continuously captures changes to critical database files and objects on a
production system, sends the changes to a backup system, and applies the changes
to the appropriate database file or object on the backup system. The backup system
stores exact duplicates of the critical database files and objects from the production
system.
MIMIX uses two replication paths to address different pieces of your replication
needs. These paths operate with configurable levels of cooperation or can operate
independently.
• The user journal replication path captures changes to critical files and objects configured for replication through a user journal. When configuring this path, shipped defaults use the remote journaling function of the operating system to simplify sending data to the remote system. In previous versions, MIMIX DB2 Replicator provided this function.
• The system journal replication path handles replication of critical system objects (such as user profiles, program objects, or spooled files), integrated file system (IFS) objects, and document library objects (DLOs) using the system journal. In previous versions, MIMIX Object Replicator provided this function.
Configuration choices determine the degree of cooperative processing used between
the system journal and user journal replication paths when replicating database files,
IFS objects, data areas, and data queues.
One common use of MIMIX is to support a hot backup system to which operations
can be switched in the event of a planned or unplanned outage. If a production
system becomes unavailable, its backup is already prepared for users. In the event of
an outage, you can quickly switch users to the backup system where they can
continue using their applications. MIMIX captures changes on the backup system for
later synchronization with the original production system. When the original
production system is brought back online, MIMIX assists you with analysis and
synchronization of the database files and other objects.
You can view the replicated data on the backup system at any time without affecting
productivity. This allows you to generate reports, submit (read-only) batch jobs, or
perform backups to tape from the backup system. In addition to real-time backup
capability, replicated databases and objects can be used for distributed processing,
allowing you to off-load applications to a backup system.
Typically MIMIX is used among systems in a network. Simple environments have one
production system and one backup system. More complex environments have
multiple production systems or backup systems. MIMIX can also be used on a single
system.
MIMIX automatically monitors your replication environment to detect and correct
potential problems that could be detrimental to maintaining high availability.
MIMIX also provides a means of verifying that the files and objects being replicated
are what is defined to your configuration. This can help ensure the integrity of your
MIMIX configuration.
The topics in this chapter include:
• MIMIX concepts on page 22 describes concepts and terminology that you need to know about MIMIX.
• The MIMIX environment on page 28 describes components of the MIMIX operating environment.
• Journal receiver management on page 36 describes how MIMIX performs change management and delete management for replication processes.
• Operational overview on page 40 provides information about day to day MIMIX operations.
MIMIX concepts
This topic identifies concepts and terminology that are fundamental to how MIMIX
performs replication. You should be familiar with the relationships between systems,
the concepts of data groups and switching, and the role of the IBM i journaling function in
replication.
System roles and relationships
Usually, replication occurs between two or more systems. The most common
scenario for replication is a two-system environment in which one system is used for
production activities and the other system is used as a backup system.
The terms production system and backup system are used to describe the role of a
system relative to the way applications are used on that system. In an availability
management context, a production system is the system currently running the
production workload for the applications. In normal operations, the production system
is the system on which the principal copy of the data and objects associated with the
application exist. A backup system is the system that is not currently running the
production workload for the applications. In normal operations, the backup system is
the system on which you maintain a copy of the data and objects associated with the
application. These roles are not always associated with a specific system. For
example, if you switch application processing to the backup system, the backup
system temporarily becomes the production system.
Typically, for normal operations in a basic two-system environment, replicated data
flows from the system running the production workload to the backup system. In a
more complex environment, the terms production system and backup system may not
be sufficient to clearly identify a specific system or its current role in the replication
process. For example, if a payroll application on system CHICAGO is backed up on
system LONDON and another application on system LONDON is backed up to the
CHICAGO system, both systems are acting as production systems and as backup
systems at the same time.
The terms source system and target system identify the direction in which an
activity occurs between two participating systems. A source system is the system
from which MIMIX replication activity between two systems originates. In replication,
the source system contains the journal entries used for replication. Information from
the journal entries is either replicated to the target system or used to identify objects
to be replicated to the target system. A target system is the system on which MIMIX
replication activity between two systems completes.
Because multiple instances of MIMIX can be installed on any system, it is important to
correctly identify the instance to which you are referring. It is helpful to consider each
installation of MIMIX on a system as being part of a separate network that is referred
to as a MIMIX installation. A MIMIX installation is a network of systems that transfer
data and objects among each other using functions of a common MIMIX product. A
MIMIX installation is defined by the way in which you configure the MIMIX product for
each of the participating systems. A system can participate in multiple independent
MIMIX installations.
The terms management system and network system define the role of a system
relative to how the products interact within a MIMIX installation. These roles remain
associated with the system within the MIMIX installation to which they are defined.
Typically one system in the MIMIX installation is designated as the management
system and the remaining one or more systems are designated as network systems.
A management system is the system in a MIMIX installation that is designated as the
control point for all installations of the product within the MIMIX installation. The
management system is the location from which work to be performed by the product
is defined and maintained. Often the system defined as the management system also
serves as the backup system during normal operations. A network system is any
system in a MIMIX installation that is not designated as the management system
(control point) of that MIMIX installation. Work definitions are automatically distributed
from the management system to a network system. Often a system defined as a
network system also serves as the production system during normal operations.
Data groups: the unit of replication
The concept of a data group is used to control replication activities. A data group is a
logical grouping of database files, library-based objects, IFS objects, DLOs, or a
combination thereof that defines a unit of work by which MIMIX replication activity is
controlled. A data group may represent an application, a set of one or more libraries,
or all of the critical data on a given system. Application environments may define a
data group as a specific set of files and objects. For example, the R/3 environment
defines a data group as a set of SQL tables that all use the same journal and which
are all replicated to the same system. Users can start and stop replication activity by
data group, switch the direction of replication for a data group, and display replication
status by data group.
By default, data groups support replication from both the system journal and the user
journal. Optionally, you can limit a data group to replicate using only one replication
path. The parameters in the data group definition identify the direction in which data is
allowed to flow between systems and whether to allow the flow to switch directions.
You also define the data to be replicated and many other characteristics the
replication process uses on the defined data. The replication process is started and
ended by operations on a data group.
A data group entry identifies a source of information that can be replicated. Once a
data group definition is created, you can define data group entries. MIMIX uses the
data group entries that you create during configuration to determine whether a journal
entry should be replicated. If you are using both user journal and system journal
replication, a data group can have any combination of entries for files, IFS objects,
library-based objects, and DLOs.
Changing directions: switchable data groups
When you configure a data group definition, you specify which of the two systems in
the data group is the source for replicated data. In normal operation, data flows
between two systems in the direction defined within the data group. When you need
to switch the direction of replication, for example, when a production system is
removed from the network for planned downtime, default values in the data group
definition allow the same data group to be used for replication from either direction.
MIMIX provides support for switching due to planned and unplanned events. At the
data group level, the Switch Data Group (SWTDG) command will switch the direction
in which replication occurs between systems.
Note: A switchable data group is different from bi-directional data flow. Bi-directional
data flow is a data sharing technique described in Configuring advanced
replication techniques on page 332.
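For example, a planned switch of a single data group could be requested with the SWTDG command. This is a minimal sketch: the data group name is hypothetical and DGDFN is assumed to be the data group definition parameter; in practice, switching is typically performed through MIMIX Switch Assistant, described below.

   /* Switch the direction of replication for one data group.    */
   /* INVENTORY CHICAGO HONGKONG is a hypothetical data group.   */
   SWTDG DGDFN(INVENTORY CHICAGO HONGKONG)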
Additional switching capability
Typically, switching is performed by using the MIMIX Switch Assistant. MIMIX Switch
Assistant provides a user interface that prompts you through the switch process.
MIMIX Switch Assistant calls your default MIMIX Model Switch Framework to control
the switching process.
MIMIX Enterprise and MIMIX Professional include MIMIX Monitor, which provides
support for the MIMIX Model Switch Framework. Through this support, you can
customize monitoring and switching programs. Switching support in MIMIX Monitor
includes logical and physical switching. When you perform switching in this manner,
the exit programs called by your implementation of MIMIX Model Switch Framework
must include the SWTDG command. For more information, see the Using MIMIX
Monitor book. Your authorized MIMIX representative can assist you in implementing
advanced switching scenarios.
Journaling and object auditing introduction
MIMIX relies on data recorded by the IBM i functions of journaling, remote journaling,
and object auditing. Each of these functions record information in a journal.
Variations in the replication process are optimized according to characteristics of the
information provided by each of these functions.
Journaling is the process of recording information about changes to user-identified
objects, including those made by a system or user function, for a limited number of
object types. Events are logged in a user journal. Optionally, events logged in a user
journal can also exist on a remote system using remote journaling, whereby the journal and
journal receiver exist on a remote system or on a different logical partition.
Object auditing is the process by which the system creates audit records for
specified types of access to objects. Object auditing logs events in a specialized
system journal (the security audit journal, QAUDJRN).
When an event occurs to an object or database file for which journaling is enabled, or
when a security-relevant event occurs, the system logs identifying information about
the event as a journal entry, a record in a journal receiver. The journal receiver is
associated with a journal and contains the log of all activity for objects defined to the
journal or all objects for which an audit trail is kept.
Journaling must be active before MIMIX can perform replication. MIMIX uses the
recorded journal entries to replicate activity to a designated system. Data group
entries and other data group configuration settings determine whether MIMIX
replicates activity for objects and whether replication is performed based on entries
logged to the system journal or to a user journal. For some configurations, MIMIX
uses entries from both journals.
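As an illustration of the underlying IBM i functions, the following commands start journaling for a database file and set object auditing so that changes are logged to the security audit journal. These are standard IBM i commands; the object names are hypothetical, and MIMIX configuration processes normally perform the equivalent steps for objects identified by data group entries.

   /* Journal a physical file to a user journal, capturing both  */
   /* before-images and after-images of changed records.         */
   STRJRNPF FILE(APPLIB/ORDERS) JRN(APPLIB/APPJRN) IMAGES(*BOTH)

   /* Audit changes to an object so events are logged in QAUDJRN. */
   CHGOBJAUD OBJ(APPLIB/ORDERS) OBJTYPE(*FILE) OBJAUD(*CHANGE)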
Journal entries deposited into the system journal (on behalf of an audited object)
contain only an indication of a change to an object. Some of these types of entries
contain enough information for MIMIX to apply the change directly to the
replicated object on the target system; however, many types of these entries require
MIMIX to gather additional information about the object from the source system in
order to apply the change directly to the replicated object on the target system.
Journal entries deposited into a user journal (on behalf of a journaled file, data area,
data queue, or IFS object) contain images of the data which was changed. This
information is needed by MIMIX in order to apply the change directly to the replicated
object on the target system.
When replication is started, the start request (STRDG command) identifies a
sequence number within a journal receiver at which MIMIX processing begins. In data
groups configured with remote journaling, the specified sequence number and
receiver name are the starting point for MIMIX processing from the remote journal. The
IBM i remote journal function controls where it starts sending entries from the source
journal receiver to the remote journal receiver.
IBM i requires that journaled objects reside in the same auxiliary storage pool (ASP)
as the user journal. The journal receivers can be in a different ASP. If the journal is in
a primary independent ASP, the journal receivers must reside in the same primary
independent ASP or a secondary independent ASP within the same ASP group.
IBM i (V5R4 and higher releases) allows journaling a maximum of 10,000,000 objects
to one user journal. MIMIX can use existing journals configured with this maximum. Journals
created by MIMIX have a maximum of 250,000 objects. User journaling will not start if the
number of objects associated with the journal exceeds the journal maximum. The
maximum includes:
• Objects for which changes are currently being journaled
• Objects for which journaling was ended while the current receiver is attached
• Journal receivers that are, or were, associated with the journal while the current journal receiver is attached.
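To see how close a journal is to these maximums, you can review the journal attributes with the standard IBM i command shown here (the journal name is hypothetical); the resulting display includes the number of objects currently associated with the journal.

   /* Display attributes of a user journal, including the number */
   /* of journaled objects and the attached receiver.            */
   WRKJRNA JRN(APPLIB/APPJRN)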
Remote journaling requires unique considerations for journaling and journal receiver
management. For additional information, see Journal receiver management on
page 36.
Log spaces
Based on user space objects (*USRSPC), a log space is a MIMIX object that
provides an efficient storage and manipulation mechanism for replicated data that is
temporarily stored on the target system during the receive and apply processes. All
internal structures and objects that make up a log space are created and manipulated
by MIMIX.
Multi-part naming convention
MIMIX uses named definitions to identify related user-defined configuration
information. A multi-part, qualified naming convention uniquely describes certain
types of definitions. This includes a two-part name for journal definitions and a three-
part name for transfer definitions and data group definitions. Newly created data
groups use remote journaling as the default configuration, which has unique
requirements for naming data group definitions. For more information, see Naming
convention for remote journaling environments with 2 systems on page 185.
The multi-part name consists of a name followed by one or two participating system
names (actually, names of system definitions). Together the elements of the multi-part
name define the entire environment for that definition. As a whole unit, a fully-qualified
two-part or three-part name must be unique. The first element, the name, does not
need to be unique. In a three-part name, the order of the system names is also
important, since two valid definitions may share the same three elements but with the
system names in different orders.
For example, MIMIX automatically creates a journal definition for the security audit
journal when you create a system definition. Each of these journal definitions is
named QAUDJRN, so the name alone is not unique. The name must be qualified with
the name of the system to which the journal definition applies, such as QAUDJRN
CHICAGO or QAUDJRN NEWYORK. Similarly, the data group definitions
INVENTORY CHICAGO HONGKONG and INVENTORY HONGKONG CHICAGO
are unique because of the order of the system names.
When using command interfaces which require a data group definition, MIMIX can
derive the fully-qualified name of a data group definition if a partial name provided is
sufficient to determine the unique name. If the first part of the name is unique, it can
be used by itself to designate the data group definition. For example, if the data group
definition INVENTORY CHICAGO HONGKONG is the only data group with the name
INVENTORY, then specifying INVENTORY on any command requiring a data group
name is sufficient. However, if a second data group named INVENTORY NEWYORK
LONDON is created, the name INVENTORY by itself no longer describes a unique
data group. INVENTORY CHICAGO would be the minimum parts of the name of the
first data group definition necessary to determine its uniqueness. If a third data group named
INVENTORY CHICAGO LONDON was added, then the fully qualified name would be
required to uniquely identify the data group. The order in which the systems are
identified is also important. The system HONGKONG appears in only one of the data
group definitions. However, specifying INVENTORY HONGKONG will generate a
"not found" error because HONGKONG is not the first system in any of the data group
definitions. This applies to all external interfaces that reference multi-part definition
names.
MIMIX can also derive a fully qualified name for a transfer definition. Data group
definitions and system definitions include parameters that identify associated transfer
definitions. When a subsequent operation requires the transfer definition, MIMIX uses
the context of the operation to determine the fully qualified name. For example, when
starting a data group, MIMIX uses information in the data group definition, the
systems specified in the data group name, and the specified transfer definition name
to derive the fully qualified transfer definition name. If MIMIX cannot find the transfer
definition, it reverses the order of the system names and checks again, avoiding the
need for redundant transfer definitions.
You can also use contextual system support (*ANY) to configure transfer definitions.
When you specify *ANY in a transfer definition, MIMIX uses information from the
context in which the transfer definition is called to resolve to the correct system.
Unlike the conventional configuration case, a specific search order is used if MIMIX is
still unable to find an appropriate transfer definition. For more information, see Using
contextual (*ANY) transfer definitions on page 163.
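For example, assuming STRDG accepts the data group definition on a DGDFN parameter (the names below are the hypothetical ones used in this topic), either of the following requests starts the INVENTORY CHICAGO HONGKONG data group as long as no other data group is named INVENTORY:

   /* Partial name; sufficient only while INVENTORY is unique.   */
   STRDG DGDFN(INVENTORY)

   /* Fully qualified three-part name; always unambiguous.       */
   STRDG DGDFN(INVENTORY CHICAGO HONGKONG)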
The MIMIX environment
A variety of product-defined operating elements and user-defined configuration
elements collectively form an operational environment on each system. A MIMIX
environment can be comprised of one or more MIMIX installations. Each system that
participates in the same MIMIX environment must have the same operational
environment. This topic describes each of the components of the MIMIX operating
environment.
The product library
The name of the product library into which MIMIX is installed defines the connection
among systems in the same MIMIX installation. The default name of the product
installation library is MIMIX.
Several items are shipped as part of the product library. The IFS directory structure is
associated with the product library for the MIMIX installation and is created during the
installation process for License Manager and MIMIX. Each MIMIX installation also
contains several default job descriptions and job classes within its library.
Note: Do not replicate the library in which MIMIX is installed or any other libraries
created by MIMIX. Also do not place user created objects in this library. For
additional information, see Data that should not be replicated on page 77.
IFS directories
A default IFS directory structure is used in conjunction with the library-based objects
of the MIMIX family of products. The IFS directory structure is associated with the
product library for the MIMIX installation and is created during the installation process
for License Manager and MIMIX. Over time, the installation processes for products
and fixes will restore objects to the IFS directory structure as well as to the QSYS
library.
The directories created when License Manager is installed or upgraded follow these
guidelines:
• /LakeviewTech  This is the root directory for all IFS-based objects.
• /LakeviewTech/system-based-area  This directory structure contains system-based objects that need to exist only once on a system. The system-based-area represents a unique directory for each set of objects. Two structures that you should be aware of are:
  - /LakeviewTech/Service/MIMIX/VvRrMm/ is the recommended location for users to place fixes downloaded from the website. The VvRrMm value is the same as the release of License Manager on the system. Multiple VvRrMm directories will exist as the release of License Manager changes.
  - /LakeviewTech/Upgrades/ is where the MIMIX Installation Wizard places software packages that it uploads to the system.
• /LakeviewTech/UserData/ is available to users to store product-related data.
• /LakeviewTech/ISC/ contains artifacts which enable the Vision Solutions plug-in to appear in IBM Systems Director Navigator for IBM i5/OS under the category of i5/OS Management.
The directories created when MIMIX is installed or upgraded follow these guidelines.
The requirements of your MIMIX environment determine the structure of these
directories:
• /LakeviewTech/MIMIX/product-installation-library  There is a unique directory structure for each installation of MIMIX.
• /LakeviewTech/MIMIX/product-installation-library/product-area  There is a unique directory structure for each installation of MIMIX. The structure is determined by the set of objects needed by an area of the product and the product installation library.
Job descriptions and job classes
MIMIX uses a customized set of job descriptions and job classes. Customized job
descriptions optimize characteristics for a category of jobs, including the user profile,
job queue, message logging level, and routing data for the job. Customized job
classes optimize runtime characteristics such as the job priority and CPU time slice
for a category of jobs. All of the shipped job descriptions and job classes are
configured with recommended default values.
Job descriptions control batch processing. MIMIX features use a set of default job
descriptions: MXAUDIT, MXSYNC, and MXDFT. When MIMIX is installed, these job
descriptions are automatically restored in the product library. These job descriptions
exist in the product library of each MIMIX installation. Jobs and related output are
associated with the user profile submitting the request. Commands such as Compare
File Attributes (CMPFILA), Compare File Data (CMPFILDTA), and Synchronize Object
(SYNCOBJ), as well as numerous others, support this standard.
Older commands that provide job description support for batch processing use
different job descriptions that are located in the MIMIXQGPL library. The MIMIXQGPL
library, along with these job descriptions, is automatically restored on the system
when a MIMIX product is installed. Installing additional MIMIX installations on the
same system does not create additional copies of these job descriptions.
Table 1 shows a combined list of MIMIX job descriptions.
Table 1. Job descriptions used by MIMIX

Name           Shipped in             Description
MXAUDIT        Installation library   MIMIX Auditing. Used for MIMIX compare commands, such as
                                      those called by MIMIX audits, as the default value on the
                                      Job description (JOBD) parameter.
MXDFT          Installation library   MIMIX Default. Used for MIMIX load commands and by other
                                      commands that do not have a specific job description as
                                      the default value on the JOBD parameter.
MXSYNC         Installation library   MIMIX Synchronization. Used for MIMIX synchronization
                                      commands, such as those called by MIMIX audits, as the
                                      default value on the JOBD parameter.
MIMIXAPY       MIMIXQGPL library      MIMIX Apply. Used for MIMIX apply process jobs.
MIMIXCLU       MIMIXQGPL library      MIMIX Cluster Manager. Used by application groups which
                                      support IBM i clustering to route jobs to the QCTL
                                      subsystem.
MIMIXCMN       MIMIXQGPL library      MIMIX Communications. Used for all target communication
                                      jobs.
MIMIXDFT       MIMIXQGPL library      MIMIX Default. Used for all MIMIX jobs that do not have a
                                      specific job description.
MIMIXMGR       MIMIXQGPL library      MIMIX Manager. Used for MIMIX system manager and journal
                                      manager jobs.
MIMIXMON       MIMIXQGPL library      MIMIX Monitor. Used for most jobs submitted by the MIMIX
                                      Monitor product.
MIMIXPRM       MIMIXQGPL library      MIMIX Promoter. Used for jobs submitted by the MIMIX
                                      Promoter product.
MIMIXRGZ       MIMIXQGPL library      MIMIX Reorganize File. Used for file reorganization jobs
                                      submitted by the database apply job.
MIMIXSND       MIMIXQGPL library      MIMIX Send. Used for database send, object send, object
                                      retrieve, container send, and status send jobs in MIMIX.
MIMIXSYNC      MIMIXQGPL library      MIMIX Synchronization. Used for MIMIX file
                                      synchronization. This is valid for synchronize commands
                                      that do not have a JOBD parameter on the display.
MIMIXUPS       MIMIXQGPL library      MIMIX UPS Monitor. Used for the uninterruptible power
                                      source (UPS) monitor managed by the MIMIX Monitor product.
MIMIXVFY       MIMIXQGPL library      MIMIX Verify. Used for MIMIX verify and compare command
                                      processes. This is valid for verify and compare commands
                                      that do not have a JOBD parameter on the display.
PORTnnnnn      Installation library   MIMIX TCP Server, where nnnnn identifies the server port
or alias name  (see note 1)           number or alias. A job description exists for each
                                      transfer definition which uses TCP protocol and enables
                                      MIMIX to create and manage autostart job entries.
                                      Characters nnnnn in the name identify the server port.

1. The job descriptions are created in the installation library when transfer definitions which specify PROTOCOL(*TCP) and MNGAJE(*YES) are created or changed. The associated autostart job entries are added to the subsystem description for the MIMIXSBS subsystem in library MIMIXQGPL.
User profiles
All of the MIMIX job descriptions are configured to run jobs using the MIMIXOWN user
profile. This profile owns all MIMIX objects, including the objects in the MIMIX product
libraries and in the MIMIXQGPL library. The profile is created with sufficient authority
to run all MIMIX products and perform all the functions provided by the MIMIX
products. The authority of this user profile can be reduced, if business practices
require, but this is not recommended. Reducing the authority of the MIMIXOWN profile
requires significant effort by the user to ensure that the products continue to function
properly and to avoid adversely affecting the performance of MIMIX products. See the
Using License Manager book for additional security information for the MIMIXOWN
user profile.
Note: Do not replicate the MIMIXOWN or LAKEVIEW user profiles. For additional
information, see Data that should not be replicated on page 77.
The system manager
The system manager consists of a pair of system management communication jobs
between a management system and a network system. Each pair has a send side
system manager job and a receiver side system manager job. These jobs must be
active to enable replication.
Once started, the system manager monitors for configuration changes and
automatically moves any configuration changes to the network system. Dynamic
status changes are also collected and returned to the management system. The
system manager also gathers messages and timestamp information from the network
system and places them in a message log and timestamp file on the management
system. In addition, the system manager performs periodic maintenance tasks,
including cleanup of the system and data group history files.
Figure 1 shows a MIMIX installation with a management system and two network
systems. In this installation, there are four pairs of system manager jobs; two between
the first network system and the management system and two between the second
network system and the management system. Each arrow represents a pair of
system manager jobs. Since each pair has a send side system manager job and a
receiver side system manager job, there are eight total system manager jobs in this
installation.
Figure 1. System manager jobs in a MIMIX installation with one management system and
two network systems.
The System manager delay parameter in the system definition determines how
frequently the system manager looks for work. Other parameters in the system
definition control other aspects of system manager operation.
System manager jobs are included in a group of jobs that MIMIX automatically
restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to
restart these MIMIX jobs at midnight (12:00 a.m.). MIMIX determines when to restart
the system managers based on the value of the Job restart time parameter in the
system definitions for the network and management systems. For more information,
see the section Configuring restart times for MIMIX jobs on page 285.
The journal manager
The journal manager is the process by which MIMIX maintains journal receivers on a
system. A journal manager job runs on each system in a MIMIX installation. If you
have a MIMIX installation with a management system and two network systems, you
have three journal manager jobs, one on each system. For more information, see
Journal definition considerations on page 184.
By default, MIMIX performs both change management and delete management for
journal receivers used by the replication process. Parameters in a journal definition
allow you to customize details of how the change and delete operations are
performed. The Journal manager delay parameter in the system definition determines
how frequently the journal manager looks for work. For more information, see Journal
receiver management on page 36.
Journal manager jobs are included in a group of jobs that MIMIX automatically
restarts daily to maintain the MIMIX environment. The default operation of MIMIX is to
restart these MIMIX jobs at midnight (12:00 a.m.). The Job restart time parameter in
the system definition determines when the journal manager for that system restarts.
For more information, see the section Configuring restart times for MIMIX jobs on
page 285.
The MIMIXQGPL library
When a MIMIX product is installed, a library named MIMIXQGPL is restored on the
system. The MIMIXQGPL library includes work management objects used by all
MIMIX products. Many of these objects are customized and shipped with default
settings designed to streamline operations for the products which use them. These
objects include the MIMIXSBS subsystem and a variety of job descriptions and job
classes.
Note: If you have previous releases of MIMIX products on a system, you may find
additional objects in the MIMIXQGPL library; however, you should not place
objects in this library. If you place objects in this library, they may be
deleted during the next installation process. Also, do not replicate the
MIMIXQGPL library. For additional information, see Data that should not be
replicated on page 77.
MIMIXSBS subsystem
The MIMIXSBS subsystem is the default subsystem used by nearly all MIMIX-related
processing. This subsystem is shipped with the proper job queue entries and routing
entries for correct operation of the MIMIX jobs.
Data libraries
MIMIX uses the concept of data libraries. Currently there are two series of data
libraries:
• MIMIX uses data libraries for storing the contents of the object cache. MIMIX creates the first data library when needed and may create additional data libraries. The names of these data libraries are of the form product-library_n (where n is a number starting at 1).
• For system journal replication, MIMIX creates libraries named product-library_x, where x is derived from the ASP. For example, A for ASP 1, B for ASP 2. These ASP-specific data libraries are created when needed and are not deleted until the product is uninstalled.
Named definitions
MIMIX uses named definitions to identify related user-defined configuration
information. You can create named definitions for system information, communication
(transfer) information, journal information, and replication (data group) information.
Any definitions you create can be used by both user journal and system journal
replication processes.
One or more of each of the following definitions are required to perform replication:
• A system definition identifies to MIMIX the characteristics of a system that participates in a MIMIX installation.
• A transfer definition identifies to MIMIX the communications path and protocol to be used between two systems. MIMIX supports Systems Network Architecture (SNA), OptiConnect, and Transmission Control Protocol/Internet Protocol (TCP/IP) protocols.
• A journal definition identifies to MIMIX a journal environment on a particular system. MIMIX uses the journal definition to manage the journal receiver environment used by the replication process.
• A data group definition identifies to MIMIX the characteristics of how replication occurs between two systems. A data group definition determines the direction in which replication occurs between the systems, whether that direction can be switched, and the default processing characteristics to use when processing the database and object information associated with the data group.
• A remote journal link (RJ link) is a MIMIX configuration element that identifies an IBM i remote journaling environment. Newly created data groups use remote journaling as the default configuration. An RJ link identifies journal definitions that define the source and target journals, primary and secondary transfer definitions for the communications path used by MIMIX, and whether the IBM i remote journal function sends journal entries asynchronously or synchronously. When a data group is added, the ADDRJLNK command is run automatically, using the transfer definition defined in the data group.
The naming conventions used within definitions are described in Multi-part naming
convention on page 26.
Data group entries
Data group entries are part of the MIMIX environment that must exist on each system
in a MIMIX installation. MIMIX uses the data group entries that you create during
configuration to determine whether or not a journal entry should be replicated.
• Data group file entry: This type of data group entry identifies the location of a database file to be replicated and what its name and location will be on the target system. Within a file entry, you can override the default file entry options defined for the data group. MIMIX only replicates transactions for physical files because a physical file contains the actual data stored in members. MIMIX supports both positional and keyed access paths for accessing records stored in a physical file.
• Data group object entries: This type of entry allows you to identify library-based objects for replication. Examples of library-based objects include programs, user profiles, message queues, and non-journaled database files. To select these types of objects for replication, you select individual objects or groups of objects by generic or specific object and library name, and object type. Optionally, for files, you can specify an extended object attribute such as PF-DTA or DSPF.
• Data group IFS entries: This type of entry allows you to identify integrated file system (IFS) objects for replication. IFS objects include directories, stream files, and symbolic links. They reside in directories, similar to DOS or UNIX files. You can select IFS objects for replication by specific or generic path name.
• Data group DLO entries: This type of entry allows you to identify document library objects (DLOs) for replication. DLOs are documents and folders. They are contained in folders (except for first-level folders). To select DLOs for replication you select individual DLOs by specific or generic folder and DLO name, and owner.
• Data group data area entries: This type of entry allows you to define a data area for replication by the data area polling process. However, the preferred way to replicate data areas is to use advanced journaling.
A single data group can contain any combination of these types of data group entries.
If your license is for only one of the MIMIX products rather than for MIMIX Enterprise
or MIMIX Professional, only the entries associated with the product to which you are
licensed will be processed for replication.
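As a sketch of how entries select objects for replication, a data group object entry might be added with a command like the following. The command and parameter names shown (ADDDGOBJE, DGDFN, LIB1, OBJ1) are assumptions based on typical MIMIX command naming, and the object names are hypothetical; see the configuration chapters for the exact interfaces.

   /* Hypothetical example: select all object types in library   */
   /* APPLIB for replication by the INVENTORY data group.        */
   ADDDGOBJE DGDFN(INVENTORY CHICAGO HONGKONG) LIB1(APPLIB)
             OBJ1(*ALL) OBJTYPE(*ALL)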
Journal receiver management
Parameters in journal definition commands determine how change management and
delete management are performed on the journal receivers used by the replication
process. Shipped default values allow MIMIX to perform change management and
delete management.
Change management - The Receiver change management (CHGMGT) parameter
controls how the journal receivers are changed. The shipped default value
*TIMESIZE results in MIMIX changing the journal receiver by both threshold size and
time of day.
Additional parameters in the journal definition control the size at which to change
(THRESHOLD), the time of day to change (TIME), and when to reset the receiver
sequence number (RESETTHLD2 or RESETTHLD). The conditions specified in these
parameters must be met before change management can occur. For additional
information, see Tips for journal definition parameters on page 180.
If you do not use the default value *TIMESIZE for CHGMGT, consider the following:
• When you specify *TIMESYS, the system manages the receiver by size and during IPLs, and MIMIX manages changing the receiver at a specified time.
  Note: The value *TIME can be specified with *SIZE or *SYSTEM to achieve the same results as *TIMESIZE or *TIMESYS, respectively.
• When you specify *NONE, MIMIX does not handle changing the journal receivers. You must ensure that the system or another application performs change management to prevent the journal receivers from overflowing.
• When you allow the system to perform change management (*SYSTEM) and the attached journal receiver reaches its threshold, the system detaches the journal receiver and creates and attaches a new journal receiver. During an initial program load (IPL) or the vary on of an independent ASP, the system performs a CHGJRN command to create and attach a new journal receiver and to reset the journal sequence number of journals that are not needed for commitment control recovery for that IPL or vary on, unless the receiver size option (RCVSIZOPT) is *MAXOPT3. When the RCVSIZOPT is *MAXOPT3, the sequence number will not be reset and a new journal receiver will not be attached unless the sequence number exceeds the sequence number threshold.
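As an illustration, change management is specified on the journal definition. The following sketch assumes a Change Journal Definition (CHGJRNDFN) command and uses hypothetical names and values; the CHGMGT and TIME parameters correspond to those described above.

   /* Have MIMIX change the receiver by both threshold size and  */
   /* time of day, changing daily at 2:00 a.m. (values are       */
   /* illustrative only).                                        */
   CHGJRNDFN JRNDFN(APPJRN CHICAGO) CHGMGT(*TIMESIZE) TIME(020000)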
In a remote journaling configuration, MIMIX recognizes remote journals and ignores
change management for the remote journals. The remote journal receiver is changed
automatically by the IBM i remote journal function when the receiver on the source
system is changed. You can specify in the source journal definition whether to have
receiver change management performed by the system or by MIMIX. Any change
management values you specify for the target journal definition are ignored.
You can also customize how MIMIX performs journal receiver change management
through the use of exit programs. For more information, see Working with journal
receiver management user exit points on page 526.
Delete management - The Receiver delete management (DLTMGT) parameter
controls how the journal receivers used for replication are deleted. It is strongly
recommended that you use the value *YES to allow MIMIX to perform delete
management.
When MIMIX performs delete management, the journal receivers are only deleted
after MIMIX is finished with them and all other criteria specified on the journal
definition are met. The criteria includes how long to retain unsaved journal receivers
(KEEPUNSAV), how many detached journal receivers to keep (KEEPRCVCNT), and
how long to keep detached journal receivers (KEEPJ RNRCV).
Note: If more than one MIMIX installation uses the same journal, the journal
manager for each installation can delete the journal receivers regardless of whether
the other installations are finished with them. If you have this scenario, you need to
use the journal receiver delete management exit points to control deleting the
journal receivers. For more information, see Working with journal receiver
management user exit points on page 526.
Delete management of the source and target receivers occur independently from
each other. It is highly recommended that you configure the journal definitions to have
MIMIX perform journal delete management. The IBM i remote journal function does
not allow a receiver to be deleted until it is replicated from the local journal (source) to
the remote journal (target). When MIMIX manages deletion, a target journal receiver
cannot be deleted until it is processed by the database reader (DBRDR) process and
it meets the other criteria defined in the journal definition.
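For example, to let MIMIX perform delete management while keeping a safety margin of detached receivers, the parameters described above might be set as in this sketch. The command name and values are illustrative only; DLTMGT and KEEPRCVCNT correspond to the parameters discussed in this topic.

   /* MIMIX deletes receivers only after replication is finished */
   /* with them; also keep the two most recent detached          */
   /* receivers as a safety margin.                              */
   CHGJRNDFN JRNDFN(APPJRN CHICAGO) DLTMGT(*YES) KEEPRCVCNT(2)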
If you choose to manage journal receivers yourself, you need to ensure that journals
are not removed before MIMIX has finished processing them. MIMIX operations can
be affected if you allow the system to handle delete management. For example, the
system may delete a journal receiver before MIMIX has completed its use.
Interaction with other products that manage receivers
If you run other products that use the same journals on the same system as MIMIX,
such as Vision Replicate1 or Vision Director, there may be considerations for
journal receiver management.
Vision Replicate1: Although both Vision Replicate1 and MIMIX support receiver
change management, you need to choose only one product to perform change
management activities for a specific journal. If you choose Vision Replicate1, your
MIMIX journal definition should specify CHGMGT(*NONE). If you choose MIMIX, see
change management for available options that can be specified in the journal
definition, including system managed receivers.
If both products scrape from the same journal, perform delete management only from
Vision Replicate1. This will prevent MIMIX from deleting receivers before Vision Replicate1
is finished with them. The journal definition within MIMIX should specify
DLTMGT(*NO).
Vision Director: Both Vision Director and MIMIX read journal receiver entries from
the system (QAUDJRN) journal. Shipped default settings in journal definitions allow
MIMIX to perform receiver delete management. When both products are used, it is
recommended that you change the journal definition for QAUDJRN to specify a higher
number for the Keep journal receiver count (KEEPRCVCNT) parameter. If the journal
definition for QAUDJRN is set to prevent MIMIX from performing change or delete
management, you must ensure that journal receivers are retained long enough for
both products to complete their use.
Processing from an earlier journal receiver
It is possible to have a situation where the operating system attempts to retransmit
journal receivers that already exist on the target system. When this situation occurs,
the remote journal function ends with an error and transmission of entries to the target
system stops. This can occur in the following scenarios:
•  When performing a clear pending start of the data group while also specifying a
   sequence number that is earlier in the journal stream than the last processed
   sequence number.
•  When starting a data group while specifying a database journal receiver that is
   earlier in the receiver chain than the last processed receiver.
For example, refer to Figure 2. Replication ended while processing journal entries in
target receiver 2. Target journal receiver 1 is deleted through the configured delete
management options. If the data group is started (STRDG) with a starting journal
sequence number for an entry that is in journal receiver 1, the remote journal function
attempts to retransmit source journal receivers 1 through 4, beginning with receiver 1.
However, receiver 2 already exists on the target system. When the operating system
encounters receiver 2, an error occurs and the transmission to the target system
ends.
You can prevent this situation before starting that data group if you delete any target
journal receivers following the receiver that will be used as the starting point. If you
encounter the problem, recovery is simply to remove the target journal receivers and
let remote journaling resend them. In this example, deleting target receiver 2 would
prevent or resolve the problem.
Figure 2. Example of processing from an earlier journal receiver.
Considerations when journaling on target
The default behavior for MIMIX is to have journaling enabled on the target systems for
the target files. After a transaction is applied to the target system, MIMIX writes the
journal entry to a separate journal on the target system. This journaling on the target
system makes it easier and faster to start replication from the backup system
following a switch. As part of the switch processing, the journal receiver is changed
before the data group is started.
In a remote journaling environment, these additional journal receivers can become
stranded on the backup system following a switch. When starting a data group after a
switch, the IBM i remote journal function begins transmitting journal entries from the
just changed journal receiver. Because the backup system is now temporarily acting
as the source system, the remote journal function interprets any earlier receivers as
unprocessed source journal receivers and prevents them from being deleted.
To remove these stranded journal receivers, you need to use the IBM command
DLTJRNRCV with *IGNTGTRCV specified as the value of the DLTOPT parameter.
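For example (the receiver library and name shown are placeholders):

    DLTJRNRCV JRNRCV(JRNLIB/TGTRCV0001) DLTOPT(*IGNTGTRCV)

This deletes a receiver that the remote journal function would otherwise protect as an unprocessed source receiver.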
Operational overview
Before replication can begin, the following requirements must be met through the
installation and configuration processes:
•  MIMIX software must be installed on each system in the MIMIX installation.
•  At least one communications link must be in place for each pair of systems
   between which replication will occur.
•  The MIMIX operating environment must be configured and be available on each
   system.
•  Journaling must be active for the database files and objects configured for user
   journal replication.
•  For objects to be replicated from the system journal, the object auditing
   environment must be set up.
•  The files and objects must be initially synchronized between the systems
   participating in replication.
Once MIMIX is configured and files and objects are synchronized, day-to-day
operations for MIMIX can be performed.
Support for starting and ending replication
The Start MIMIX (STRMMX) and End MIMIX (ENDMMX) commands provide the
ability to start and end all elements of a MIMIX environment. These commands
include MIMIX services and manager jobs, all replication jobs for all data groups, as
well as the master monitor and jobs that are associated with it. While other commands
are available to perform these functions individually, the STRMMX and ENDMMX
commands are preferred because they ensure that processes are started or ended in
the appropriate order.
The Start Data Group (STRDG) and End Data Group (ENDDG) commands operate at
the data group level to control replication processes. These commands provide the
flexibility to start or end selected processes and apply sessions associated with a data
group, which can be helpful for balancing workload or resolving problems.
For more information about both sets of commands, see the MIMIX Operations book.
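For example, assuming a three-part data group name of INVENTORY SYSA SYSB (the names, and the DGDFN parameter format itself, are illustrative assumptions), the environment could be started as a whole and replication later ended for a single data group:

    STRMMX                             /* Start all MIMIX processes          */
    ENDDG DGDFN(INVENTORY SYSA SYSB)   /* End replication for one data group */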
Support for checking installation status
The MIMIX Availability Status display reports the prioritized status of a single
installation. Status from the installation is reported in three areas: Replication, Audits
and Notification, and Services. Color and informational messages identify the most
severe problem present in an area and identify the action to take to start problem
isolation.
Support for automatically detecting and resolving problems
The functions provided by MIMIX AutoGuard are fully integrated into MIMIX user
interfaces.
Operational overview
41
Audits: MIMIX ships with a set of audits and associated audit monitors that are
automatically scheduled to run daily. These audits check for common problems and
automatically correct any detected problems within a data group. Audits can also be
invoked manually and automatic recovery can be optionally disabled. The Work with
Audits display (WRKAUD) provides a summary view for audit status and a
compliance view for adherence to auditing best practices.
Error recovery during replication: MIMIX AutoGuard also provides the ability to
have MIMIX check for and correct common problems during user journal and system
journal replication that would otherwise cause a replication error. Automatic recovery
can be optionally disabled. Problems that cannot be resolved are reported like any
other replication error.
For detailed information about MIMIX AutoGuard, refer to the MIMIX Operations
book.
Support for working with data groups
Data groups are central to performing day-to-day operations. The Work with Data
Groups (WRKDG) display provides the status of replication jobs and indicates any
replication errors for the data groups within an installation. Highlighted text indicates
whether problems exist. Many options are available for taking action at the data group
level and for drilling into detailed status information.
Detailed status: The command DSPDGSTS (option 8 from the Work with Data
Groups display) accesses the Data Group Status display. The initial merged view
summarizes replication errors and the status of user journal (database) and system
journal (object) processes for both source and target systems. By using function keys,
you can display additional detailed views of only database or only object status.
Database views - These views provide information about replication performed by
user journal replication processes, including journaled files, IFS objects, data
areas, and data queues. They also include information about the replication of
user journal transactions, including journal progress, performance, and recent
activity.
Object views - These views provide information about replication performed by
system journal replication processes, including journal progress, performance,
and recent activity.
When a data group is experiencing replication problems, you can use these options
from the Work with Data Groups display to view problems grouped by type of activity:
12=Files not active, 13=Objects in error, 51=IFS trk entries not active, and 53=Obj trk
entries not active.
Support for resolving problems
MIMIX includes functions that can assist you in resolving a variety of problems.
Depending on the type of problem, some problem resolution tasks may need to be
performed from the system where the problem occurs, such as on the source system
where the journal resides or on the target system if the problem is related to the apply
process. MIMIX will direct you to the correct system when this is required.
42
Object activity: The Work with Data Group Activity (WRKDGACT) command allows
you to track system journal replication activity associated with a data group. You can
see the object, DLO, IFS, and spooled file activity, which can help you determine the
cause of an error. You can also see an error view that identifies the reason why the
object is in error. Options on the Work with Data Group Activity display allow you to
see messages associated with an entry, synchronize the entry between systems, and
remove a failed entry with or without related entries.
Failed requests: During normal processing, system journal replication processes
may encounter object requests that cannot be processed due to an error. Often the
error is due to a transient condition, such as when an object is in use by another
process at the time the object retrieve process attempts to gather the object data.
Although MIMIX will attempt some automatic retries, requests may still result in a
Failed status. In many cases, failed entries can be resubmitted and they will succeed.
Some errors may require user intervention, such as a never-ending process that
holds a lock on the object.
When the Automatic object recovery policy is enabled, MIMIX will attempt a third retry
cycle using the settings from the Number of third delay/retries (OBJRTY) and Third
retry interval (min.) (OBJRTYITV) policies. These policies can be set for the
installation or adjusted for a specific data group.
You can manually request that MIMIX retry processing for a data group activity entry
that has a status of *FAILED. These entries can be viewed using the Work with Data
Group Activity (WRKDGACT) command. From the Work with Data Group Activity or
Work with Data Group Activity Entries displays, you can use the retry option to
resubmit individual failed entries or all of the entries for an object. This option calls the
Retry Data Group Activity Entries (RTYDGACTE) command. From the Work with
Data Group Activity display, you can also specify a time at which to start the request,
thereby delaying the retry attempt until a time when it is more likely to succeed.
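For example, failed entries might be resubmitted with a command similar to the following (a sketch; the RTYDGACTE command is named above, but the parameter shown and the three-part data group name are illustrative assumptions, and entry selection parameters are omitted):

    RTYDGACTE DGDFN(INVENTORY SYSA SYSB)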
Files on hold: When the database apply process detects a data synchronization
problem, it places the file (individual member) on error hold and logs an error. File
entries are in held status when an error is preventing them from being applied to the
target system. You need to analyze the cause of the problem in order to determine
how to correct and release the file and ensure that the problem does not occur again.
An option on the Work with Data Groups display provides quick access to the subset
of file entries that are in error for a data group. From the Work with DG File Entries
display, you can see the status of an entry and use a number of options to assist in
resolving the error. An alternative view shows the database error code and journal
code. Available options include access to the Work with DG Files on Hold
(WRKDGFEHLD) command. The WRKDGFEHLD command allows you to work with
file entries that are in a held status. When this option is selected from the target
system, you can view and work with the entry for which the error was detected and
work with all other entries following the entry in error.
Journal analysis: With user journal replication, when the system that is the source of
replicated data fails, it is possible that some of the generated journal entries may not
have been transmitted to or received by the target system. However, it is not always
possible to determine this until the failed system has been recovered. Even if the
failed system is recovered, damage to a disk unit or to the journal itself may prevent
an accurate analysis of any missed data. Once the source system is available again,
Operational overview
43
if there is no damage to the disk unit or journal and its associated journal receivers,
you can use the journal analysis function to help determine what journal entries may
have been missed and to which files the data belongs. You can only perform journal
analysis on the system where a journal resides.
Missed transactions for IFS objects, data areas and data queues that are replicated
through the user journal will not be detected by journal analysis.
Support for switching a data group
Typically, you perform a switch using the MIMIX Switch Assistant or by using
commands to call a customized implementation of MIMIX Model Switch Framework.
In either case, the Switch Data Group (SWTDG) command is called programmatically
to change the direction in which replication occurs between systems defined to a data
group. The SWTDG command supports both planned and unplanned switches.
In a planned switch, you are purposely changing the direction of replication for any of
a variety of reasons. You may need to take the system offline to perform
maintenance on its hardware or software, or you may be testing your disaster
recovery plan. In a planned switch, the production system (the source of replication) is
available. When you perform a planned switch, data group processing is ended on
both the source and target systems. The next time you start the data group, it will be
set to replicate in the opposite direction.
In an unplanned switch, you are changing the direction of replication as a response to
a problem. Most likely the production system is no longer available. When you
perform an unplanned switch, you must run the SWTDG command from the target
system. Data group processing is ended on the target system. The next time you
start the data group, it will be set to replicate in the opposite direction.
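For example, a planned switch might be requested with a command similar to the following (a sketch; the SWTDG command is named above, but the parameter name and value shown for selecting a planned switch, and the three-part data group name, are illustrative assumptions):

    SWTDG DGDFN(INVENTORY SYSA SYSB) TYPE(*PLANNED)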
To enable a switchable data group to function properly for default user journal
replication processes, four journal definitions (two RJ links) are required. Journal
definition considerations on page 184 contains examples of how to set up these
journal definitions.
You can specify whether to end the RJ link during a switch. Default behavior for a
planned switch is to leave the RJ link running. Default behavior during an unplanned
switch is to end the RJ link. Once you have a properly configured data group that
supports switching, you should be aware of how MIMIX supports unconfirmed entries
and the state of the RJ link following a switch. For more information, see Support for
unconfirmed entries during a switch on page 66 and RJ link considerations when
switching on page 66.
For additional information about switching, see the MIMIX Operations book. For
additional information about MIMIX Model Switch Framework, see the Using MIMIX
Monitor book.
Support for working with messages
MIMIX sends a variety of system messages based on the status of MIMIX jobs and
processes. You can view messages generated by MIMIX from either the Message
Log window or from the Work with Message Log (WRKMSGLOG) display.
These messages are sent to both the primary and secondary message queues that
are specified for the system definition.
In addition to these message queues, message entries are recorded in a MIMIX
message log file. The MIMIX message log provides a powerful tool for problem
determination. Maintaining a message log file allows you to keep a record of
messages issued by MIMIX as an audit trail. In addition, the message log provides
robust subset and filter capabilities, the ability to locate and display related job logs,
and a powerful debug tool. When messages are issued, they are initially sent to the
specified primary and secondary message queues. In the event that these message
queues are erased, placing messages into the message log file secures a second
level of information concerning MIMIX operations.
The message log on the management system contains messages from the
management system and each network system defined within the installation. The
system manager is responsible for collecting messages from all network systems. On
a network system, the message log contains only those messages generated by
MIMIX activity on that system.
MIMIX automatically performs cleanup of the message log on a regular basis. The
system manager deletes entries from the message log file based on the value of the
Keep system history parameter in the system definition. However, if you process an
unusually high volume of replicated data, you may want to also periodically delete
unnecessary message log entries since the file grows in size depending on the
number of messages issued in a day.
CHAPTER 2 Replication process overview
In general terms, a replication path is a series of processes that, together, represent
the critical path on which data to be replicated moves from its origin to its destination.
MIMIX uses two replication paths to accommodate differences in how replication
occurs for databases and objects. These paths operate with configurable levels of
cooperation or can operate independently.
•  The user journal replication path captures changes to critical files and objects
   configured for replication through the user journal using the IBM i remote
   journaling function. In previous versions, MIMIX DB2 Replicator provided this
   function.
•  The system journal replication path handles replication of critical system objects
   (such as user profiles or spooled files), integrated file system (IFS) objects, and
   document library objects (DLOs) using the IBM i system journal. In previous
   versions, MIMIX Object Replicator provided this function.
Configuration choices determine the degree of cooperative processing used between
the system journal and user journal replication paths when replicating files, IFS
objects, data areas, and data queues.
Within each replication path, MIMIX uses a series of processes. This chapter
describes the replication paths and the processes used in each.
The topics in this chapter include:
•  Replication job and supporting job names on page 46 describes the replication
   paths for database and object information. Included is a table which identifies the
   replication job names for each of the processes that make up the replication path.
•  Cooperative processing introduction on page 48 describes three variations
   available for performing replication activities using a coordinated effort between
   user journal processing and system journal processing.
•  System journal replication on page 50 describes the system journal replication
   path which is designed to handle the object-related availability needs of your
   system through system journal processing.
•  User journal replication on page 57 describes remote journaling and the benefits
   of using remote journaling with MIMIX.
•  User journal replication of IFS objects, data areas, data queues on page 68
   describes a technique which allows replication of changed data for certain object
   types through the user journal.
•  Lesser-used processes for user journal replication on page 72 describes two
   lesser-used replication processes, MIMIX source-send processing for database
   replication and the data area poller process.
Replication job and supporting job names
The replication path for database information includes the IBM i remote journal
function, the MIMIX database reader process, and one or more database apply
processes. If MIMIX source-send processes are used instead of remote journaling,
then the processes include the database send process, the database receive
process, and one or more database apply processes.
The replication path for object information includes the object send process, the
object receive process, and the object apply process. When a data retrieval request is
replicated, the replication path also includes the object retrieve, container send, and
container receive processes. A data retrieval request is an operation that creates or
changes the content of an object. A self-contained request is an operation that
deletes, moves, or renames an object, or that changes the authority or ownership of
an object.
Table 2 identifies the job names for each of the processes that make up the
replication path. Except as noted, MIMIX automatically restarts the jobs in Table 2 to
maintain the MIMIX environment. The default is to restart these MIMIX jobs daily at
midnight (12:00 a.m.). If this time conflicts with scheduled workloads, you can
configure a different time to restart the jobs. For more information, see Configuring
restart times for MIMIX jobs on page 285.
Table 2. MIMIX processes and their corresponding job names

Abbreviation   Description                        Runs on            Job name     Notes
CNRRCV         Container receive process          Target             sdn_CNRRCV   1, 3
CNRSND         Container send process             Source             sdn_CNRSND   1, 3
DAPOLL         Data area polling                  Source             sdn_DAPOLL   3
DBAPY          Database apply process             Target             sdn_DBAPYs   3, 4
DBRCV          Database receive process           Target             sdn_DBRCV    1, 3
DBRDR          Database reader                    Target             sdn_DBRDR    3
DBSND          Database send process              Source             sdn_DBSND    1, 3
JRNMGR         Journal manager                    System             JRNMGR       --
MXCOMMD        MIMIX Communications Daemon        System             MXCOMMD      --
MXOBJSELPR     Object selection process           System             MXOBJSELPR   --
OBJAPY         Object apply process               Target             sdn_OBJAPY   3
OBJRTV         Object retrieve process            Source             sdn_OBJRTV   1, 3
OBJSND         Object send process                Source             sdn_OBJSND   1, 3
OBJRCV         Object receive process             Target             sdn_OBJRCV   1, 3
STSSND         Status send                        Target             sdn_STSSND   1, 3
SYSMGR         System manager                     System             SM********   1, 2
SYSMGRRCV      System manager receive process     Network            SR********   1, 2
STSRCV         Status receive                     Source             sdn_STSRCV   1, 3
TEUPD          Tracking entry update process      Source or Target   sdn_TEUPD    3, 5

Note:
1. Send and receive processes depend on communication. The job name varies, depending on the transfer
   protocol. OptiConnect job names start with APIA* in the QSOC subsystem. The SNA job name is derived
   from the remote location name. TCP/IP uses a port number or alias as the job name. The alias
   is defined on the service table entry.
2. The system manager runs on both source and target systems. The ******** in the job name format
   indicates the name of the system definition.
3. The characters sdn in a job name indicate the short data group name.
4. The character s is the apply session letter.
5. The job is used only for replication with advanced journaling and is started only when needed.
Cooperative processing introduction
Cooperative processing is when the MIMIX user journal processes and system
journal processes work in a coordinated effort to perform replication activities for
certain object types.
When configured, cooperative processing enables MIMIX to perform replication in the
most efficient way by evaluating the object type and the MIMIX configuration to
determine whether to use the system journal replication processes, user journal
replication processes, or a combination of both. Cooperative processing also provides
a greater level of data protection, data management efficiency, and high availability by
ensuring the complete replication of newly created or redefined files and objects.
Object types that can be journaled to a user journal are eligible to be processed
cooperatively when properly configured to MIMIX. MIMIX supports the following
variations of cooperative processing for these object types:
•  MIMIX Dynamic Apply (files)
•  Legacy cooperative processing (files)
•  Advanced journaling (IFS objects, data areas, and data queues)
When a data group definition meets the requirements for MIMIX Dynamic Apply, any
logical files and physical (source and data) files properly identified for cooperative
processing will be processed via MIMIX Dynamic Apply unless a known restriction
prevents it.
When a data group definition does not meet the requirements for MIMIX Dynamic
Apply but still meets legacy cooperative processing requirements, any PF-DTA or
PF38-DTA files properly configured for cooperative processing will be replicated using
legacy cooperative processing. All other types of files are processed using system
journal replication.
IFS objects, data areas, or data queues that can be journaled are not automatically
configured for advanced journaling. These object types must be manually
configured to use advanced journaling.
In all variations of cooperative processing, the system journal is used to replicate the
following operations:
•  The creation of new objects that do not deposit an entry in a user journal when
   they are created.
•  Restores of objects on the source system.
•  Move and rename operations from a non-replicated library or path into a library or
   path that is configured for replication.
MIMIX Dynamic Apply
Most environments can take advantage of cooperative processing for *FILE objects
that are journaled primarily through a user (database) journal.
MIMIX Dynamic Apply is the most efficient way to perform cooperative processing of
logical and physical files. MIMIX Dynamic Apply intelligently handles files with
relationships by assigning them to the same or appropriate apply sessions. It is also
much better at maintaining data integrity of replicated objects which previously
needed legacy cooperative processing in order to replicate some operations such as
creates, deletes, moves, and renames. Another benefit of MIMIX Dynamic Apply is
more efficient hold log processing by enabling multiple files to be processed through a
hold log instead of just one file at a time.
New data groups created with the shipped default configuration values are configured
to use MIMIX Dynamic Apply. This configuration requires data group object entries
and data group file entries.
For more information, see Identifying logical and physical files for replication on
page 96 and Requirements and limitations of MIMIX Dynamic Apply on page 101.
Legacy cooperative processing
In legacy cooperative processing, record and member operations of *FILE objects are
replicated through user journal processes, while all other transactions are replicated
through system journal processes. Legacy cooperative processing supports only data
files (PF-DTA and PF38-DTA).
Data groups that existed prior to upgrading to MIMIX version 5 are typically configured
with legacy cooperative processing which requires data group object entries and data
group file entries.
It is recommended to use MIMIX Dynamic Apply for cooperative processing. Existing
data groups configured to use legacy cooperative processing can be converted to use
MIMIX Dynamic Apply. For more information, see Requirements and limitations of
legacy cooperative processing on page 102.
Advanced journaling
The term advanced journaling refers to journaled IFS objects, data areas, or data
queues that are configured for cooperative processing. When these objects are
configured for cooperative processing, replication of changed bytes of the journaled
object's data occurs through the user journal. This is more efficient than replicating an
entire object through the system journal each time changes occur.
Such a configuration also allows for the serialization of updates to IFS objects, data
areas, and data queues with database journal entries. In addition, processing time for
these object types may be reduced, even for equal amounts of data, as user journal
replication eliminates the separate save, send, and restore processes necessary for
system journal replication.
Frequently you will see the phrase "user journal replication of IFS objects, data areas,
and data queues" used interchangeably with the term advanced journaling. These
terms are the same.
For more information, see User journal replication of IFS objects, data areas, data
queues on page 68 and Planning for journaled IFS objects, data areas, and data
queues on page 78.
System journal replication
The system journal replication path is designed to handle the object-related
availability needs of your system. You identify the critical system objects that you want
to replicate, such as user profiles, programs, and DLOs. MIMIX uses the journal
entries generated by the operating system's object auditing function to identify the
changes to objects on production systems and replicates the changes to backup
systems.
If you are not already using the system's security audit journal (QAUDJRN, or
system journal), when you use MIMIX commands to build the journaling environment,
MIMIX creates the journal and correctly sets system values related to auditing. MIMIX
checks the settings of the following system values, making changes as necessary:
•  QAUDLVL (Security auditing level) system value. MIMIX sets the values
   *CREATE, *DELETE, *OBJMGT, and *SAVRST. MIMIX checks for values
   *SECURITY, *SECCFG, *SECRUN, and *SECVLDL and will set them only if the
   value *SECURITY is not already set. If any data group is configured to replicate
   spooled files, MIMIX also sets *SPLFDTA and *PRTDTA.
•  QAUDCTL (Auditing control) system value. MIMIX sets the values *OBJAUD and
   *AUDLVL.
These system value settings, along with the object audit value of each object, control
what journal entries are created in the system journal (QAUDJRN) for an object.
If an operation on an object is not represented by an entry in the system journal,
MIMIX is not aware of the operation and cannot replicate it.
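You can verify the resulting auditing configuration with standard IBM i commands, for example:

    DSPSYSVAL SYSVAL(QAUDLVL)
    DSPSYSVAL SYSVAL(QAUDCTL)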
The system objects you want to replicate are defined to a data group through data
group object entries, data group DLO entries, and data group IFS entries. The term
name space refers to this collection of objects that are identified for replication by
MIMIX using the system journal replication processes.
An object is replicated when it is created, restored, moved, or renamed into the MIMIX
name space. While in the MIMIX name space, changes to the object or to the
authority settings of the object are also replicated.
Replication through the system journal is event-driven. When a data group is started,
each process used in the replication path waits for its predetermined event to occur
then begins its activity. The processes are interdependent and run concurrently. The
system journal replication path in MIMIX uses the following processes:
•  Object send process: alternates between identifying objects to be replicated and
   transmitting control information about objects ready for replication to the target
   system.
•  Object receive process: receives control information and waits for notification that
   additional source system processing, if any, is complete before passing the
   control information to the object apply process.
•  Object retrieve process: if any additional information is needed for replication,
   obtains it and places it in a holding area. This process is also used when
   additional processing is required on the source system prior to transmission to the
   target system.
•  Container send process: transmits any additional information from a holding area
   to the target system and notifies the control process of that action.
•  Container receive process: receives any additional information and places it into a
   holding area on the target system.
•  Object apply process: replicates objects according to the control information and
   any required additional information that is retrieved from the holding area.
•  Status send process: notifies the source system of the status of the replication.
•  Status receive process: updates the status on the source system and, if
   necessary, passes control information back to the object send process.
MIMIX uses a collection of structures and customized functions for controlling these
structures during replication. Collectively the customized functions and structures are
referred to as the work log. The structures in the work log consist of log spaces, work
lists (implemented as user queues), and a distribution status file.
When a data group is started, MIMIX uses the security audit journal to monitor for
activity on objects within the name space. When activity occurs on an object, such as
when it is accessed or changed, a corresponding journal entry is created in the
security audit journal. As journal entries are added to the journal receiver on the
source system, the object send process reads journal entries and determines if they
represent operations to objects that are within the name space. For each journal entry
for an object within the name space, the object send process creates an activity
entry in the work log. Creation of an activity entry includes adding the entry to the log
space and adding a record to the distribution status file. An activity entry includes a
copy of the journal entry and any related information associated with a replication
operation for an object, including the status of the entry. User interaction with activity
entries is through the Work with Data Group Activity display and the Work with DG
Activity Entries display.
There are two categories of activity entries: those that are self-contained and those
that require the retrieval of additional information. Processing self-contained activity
entries on page 51 describes the simplest object replication scenario. Processing
data-retrieval activity entries on page 52 describes the object replication scenario in
which additional data must be retrieved from the source system and sent to the target
system.
Processing self-contained activity entries
For a self-contained activity entry, the copied journal entry contains all of the
information required to replicate the object. Examples of journal entries include
Change Authority (T-CA), Object Move or Rename (T-OM), and Object Delete (T-DO).
After the object send process determines that an entry is to be replicated, it performs
the following actions:
•  Sets the status of the entry to PA (pending apply)
•  Adds the sent date and time to the activity entry
•  Writes the activity entry to the log space and adds a record to the distribution
   status file
•  Transmits the activity entry to a corresponding object receive process job on the
   target system.
The object receive process adds the received date and time to the activity entry,
writes the activity entry to the log space, adds a record to the distribution status file,
and places the activity entry on the object apply work list. Now each system has a
copy of the activity entry.
The next available object apply process job for the data group retrieves the activity
entry from the object apply work list and replicates the operation represented by the
entry. The object apply process adds the applied date and time to the activity entry,
changes the status of the entry to CP (completed processing), and adds the entry to
the status send work list.
The status send process retrieves the activity entry from the status send work list
and transmits the updated entry to a corresponding status receive process on the
source system. The status receive process updates the activity entry in the work log
and the distribution status file.
Processing data-retrieval activity entries
In a data retrieval activity entry, additional data must be gathered from the object on
the source system in order to replicate the operation. The copied journal entry
indicates that changes to an object affect the attributes or data of the object. The
actual content of the change is not recorded in the journal entry. To properly replicate
the object, its content, attributes, or both, must be retrieved and transmitted to the
target system. MIMIX may retrieve this data by using APIs or by using the appropriate
save command for the object type. APIs store the data in one or more user spaces
(*USRSPC) in a data library associated with the MIMIX installation. Save commands
store the object data in a save file (*SAVF) in the data library. Collectively, these
objects in the data library are known as containers.
After the object send process determines that an entry is to be replicated and that
additional processing or information on the source system is required, it performs the
following actions:
•  Sets the status of the entry to PR (pending retrieve)
•  Adds the sent date and time to the activity entry
•  Writes the activity entry to the log space and adds a record to the distribution
   status file
•  Transmits the activity entry to a corresponding object receive process on the
   target system.
•  Adds the entry to the object retrieve work list on the source system.
The object receive process adds the received date and time to the activity entry,
writes the activity entry to the log space, and adds a record to the distribution status
file. Now each system has a copy of the activity entry. The object receive process
waits until the source system processing is complete before it adds the activity entry
to the object apply work list.
Concurrently, the object send process reads the object send work list. When the
object send process finds an activity entry in the object send work list, the object send
process performs one or more of the following additional steps on the entry:
•  If an object retrieve job packaged the object, the activity entry is routed to the
   container send work list.
•  The activity entry is transmitted to the target system, its status is updated, and a
   retrieved date and time is added to the activity entry.
On the source system the next available object retrieve process for the data group
retrieves the activity entry from the object retrieve work list and processes the
referenced object. In addition to retrieving additional information for the activity entry,
additional processing may be required on the source system. The object retrieve
process may perform some or all of the following steps:
•  Retrieve the extended attribute of the object. This may be one step in retrieving
   the object or it may be the primary function required of the retrieve process.
•  If necessary, cooperative processing activities, such as adding or removing a data
   group file entry, are performed.
•  The object identified by the activity entry is packaged into a container in the data
   library. The object retrieve process adds the retrieved date and time to the
   activity entry and changes the status of the entry to pending send.
•  The activity entry is added to the object send work list. From there the object send
   job takes the appropriate action for the activity, which may be to send the entry to
   the target system, add the entry to the container send work list, or both.
The container send and receive processes are only used when an activity entry
requires information in addition to what is contained within the journal entry. The next
available job for the container send process for the data group retrieves the activity
entry from the container send work list and retrieves the container for the packaged
object from the data library. The container send job transmits the container to a
corresponding job of the container receive process on the target system. The
container receive process places the container in a data library on the target system.
The container send process waits for confirmation from the container receive job, then
adds the container sent date and time to the activity entry, changes the status of the
activity entry to PA (pending apply), and adds the entry to the object send work list.
The next available object apply process job for the data group retrieves the activity
entry from the object apply work list, locates the container for the object in the data
library, and replicates the operation represented by the entry. The object apply
process adds the applied date and time to the activity entry, changes the status of
the entry to CP (completed processing), and adds the entry to the status send work
list.
The status send process retrieves the activity entry from the status send work list
and transmits the updated entry to a corresponding job for status receive process
on the source system. The status receive process updates the activity entry in the log
space and the distribution status file. If the activity entry requires further processing,
such as if an updated container is needed on the target system, the status receive job
adds the entry to the object send work list.
Processes with multiple jobs
The object retrieve, container send and receive, and object apply processes all
consist of one or more asynchronous jobs. You can specify the minimum and
maximum number of asynchronous jobs you want to allow MIMIX to run for each
process and a threshold for activating additional jobs. The minimum number indicates
how many permanent jobs should be started for the process. These jobs stay active
as long as the data group is active.
During periods of peak activity, if more requests are backlogged than are specified in
the threshold, additional temporary jobs, up to the maximum number, may also be
started. This load leveling feature allows system journal replication processes to react
automatically to periodic heavy workloads. By doing this, the replication process stays
current with production system activity. When system activity returns to a reduced
level, the temporary jobs end after a period of inactivity elapses.
Tracking object replication
After you start a data group, you need to monitor the status of the replication
processes and respond to any error conditions. Regular monitoring and timely
responses to error conditions significantly reduce the amount of time and effort
required in the event that you need to switch a data group.
MIMIX provides an indication of high level status of the processes used in object
replication and error conditions. You can access detailed status information through
the Data Group Status window.
When an operation cannot complete on either the source or target system (such as
when the object is in use by another process and cannot be accessed), the activity
entry may go to a failed state. MIMIX attempts to rectify many failures automatically,
but some failures require manual intervention. Objects with at least one failed entry
outstanding are considered to be in error. You should periodically review the objects
in error, and the associated failed entries, and determine the appropriate action. You
may retry or delete one or all of the failed entries for an object. You can check the
progress of activity entries and take corrective action through the Work with Data
Group Activity display and the Work with DG Activity Entries display. You can also
subset directly to the activity entries in error from the Work with Data Groups display.
If you have new objects to replicate that are not within the MIMIX name space, you
need to add data group entries for them. Before any new data group entries can be
replicated, you must end and restart the system journal replication processes in order
for the changes to take effect.
The system manager removes old activity entries from the work log on each system
after the time specified in the system definition passes. The Keep data group history
(days) parameter (KEEPDGHST) indicates how long the activity entries remain on the
system. You can also manually delete activity entries. Containers in the data libraries
are deleted after the time specified in the Keep MIMIX data (days) parameter
(KEEPMMXDTA).
Managing object auditing
The system journal replication path within MIMIX relies on entries placed in the
system journal by IBM i object auditing functions. To ensure that objects configured
for this replication path retain an object auditing value that supports replication, MIMIX
evaluates and changes the object's auditing value when necessary.
To do this, MIMIX employs a configuration value that is specified on the Object
auditing value (OBJAUD) parameter of data group entries (object, IFS, DLO)
configured for the system journal replication path. When MIMIX determines that an
object's auditing value is lower than the configured value, it changes the object to
have the higher configured value specified in the data group entry that is the closest
match to the object. The OBJAUD parameter supports object audit values of *ALL,
*CHANGE, or *NONE.
MIMIX evaluates and may change an object's auditing value when specific conditions
exist during object replication or during processing of a Start Data Group (STRDG)
request. This evaluation process can also be invoked manually for all objects
identified for replication by a data group.
During replication - MIMIX may change the auditing value during replication when
an object is replicated because it was created, restored, moved, or renamed into the
MIMIX name space (the group of objects defined to MIMIX).
While starting a data group - MIMIX may change the auditing value while
processing a STRDG request if the request specified processes that cause object
send (OBJSND) jobs to start and the request occurred after a data group switch or
after a configuration change to one or more data group entries (object, IFS, or DLO).
Shipped command defaults for the STRDG command allow MIMIX to set object
auditing if necessary. If you would rather set the auditing level for replicated objects
yourself, you can specify *NO for the Set object auditing level (SETAUD) parameter
when you start data groups.
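For example (the three-part data group name shown is a placeholder; SETAUD is the parameter named above):

    STRDG DGDFN(INVENTORY SYSA SYSB) SETAUD(*NO)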
Invoking manually - The Set Data Group Auditing (SETDGAUD) command provides
the ability to manually set the object auditing level of existing objects identified for
replication by a data group. When the command is invoked, MIMIX checks the audit
value of existing objects identified for system journal replication. Shipped default
values on the command cause MIMIX to change the object auditing value of objects
to match the configured value when an object's actual value is lower than the
configured value.
The SETDGAUD command is used during initial configuration of a data group.
Otherwise, it is not necessary for normal operations and should only be used under
the direction of a trained MIMIX support representative.
The SETDGAUD command also supports optionally forcing a change to a configured
value that is lower than the existing value through its Force audit value (FORCE)
parameter.
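For example, auditing values for all objects identified by a data group could be re-evaluated, forcing even a lower configured value to be applied (a sketch; the three-part data group name is a placeholder, and as noted above this command should normally be used only under the direction of a trained MIMIX support representative):

    SETDGAUD DGDFN(INVENTORY SYSA SYSB) FORCE(*YES)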
Evaluation processing - Regardless of how the object auditing evaluation is
invoked, MIMIX may find that an object is identified by more than one data group
entry within the same class of object (IFS, DLO, or library-based). It is important to
understand the order of precedence for processing data group entries.
Data group entries are processed in order from most generic to most specific. IFS
entries are processed using the Unicode character set; object entries and DLO entries
are processed using the EBCDIC character set. The first entry (more generic) found
that matches the object is used until a more specific match is found.
The entry that most specifically matches the object is used to process the object. If
the object has a lower audit value, it is set to the configured auditing value specified in
the data group entry that most specifically matches the object.
When MIMIX processes a data group IFS entry and changes the auditing level of
objects which match the entry, all of the directories in the object's directory path are
checked and, if necessary, changed to the new auditing value. In the case of an IFS
entry with a generic name, all descendants of the IFS object may also have their
auditing value changed.
When you change a data group entry, MIMIX updates all objects identified by the
same type of data group entry in order to ensure that auditing is set properly for
objects identified by multiple entries with different configured auditing values. For
example, if a new DLO entry is added to a data group, MIMIX sets object auditing for
all objects identified by the data group's DLO entries, but not for its object entries or
IFS entries.
For more information and examples of setting auditing values with the SETDGAUD
command, see Setting data group auditing values manually on page 270.
User journal replication
MIMIX Remote Journal support enables MIMIX to take advantage of the cross-journal
communications capabilities provided by the IBM i remote journal function instead of
using internal communications. Newly created data groups use remote journaling as
the default configuration.
What is remote journaling?
Remote journaling is a function of the IBM i that allows you to establish journals and
journal receivers on a target system and associate them with specific journals and
journal receivers on a source system. After the journals and journal receivers are
established on both systems, the remote journal function can replicate journal entries
from the source system to the journals and journal receivers located on the target
system.
The remote journal function supports both synchronous and asynchronous modes of
operation. More information about the benefits and implications of each mode can be
found in topic Overview of IBM processing of remote journals on page 59.
You should become familiar with the terminology used by the IBM i remote journal
function. The Backup and Recovery and Journal management books are good
sources for terminology and for information about considerations you should be
aware of when you use remote journaling. The IBM redbooks AS/400 Remote Journal
Function for High Availability and Data Replication (SG24-5189) and Striving for
Optimal Journal Performance on DB2 Universal Database for iSeries (SG24-6286)
provide an excellent overview of remote journaling in a high availability environment.
You can find these books online at the IBM eServer iSeries Information Center.
Benefits of using remote journaling with MIMIX
MIMIX has internal send and receive processing as part of its architecture. The MIMIX
Remote Journal support allows MIMIX to take advantage of the cross-journal
communications functions provided by the IBM i remote journal function instead of
using the internal communications provided by MIMIX. As stated in the AS/400
Remote Journal Function for High Availability and Data Replication redbook,
The benefits of remote journal function include:
•  It lowers the CPU consumption on the source machine by shifting
   the processing required to receive the journal entries from the
   source system to the target system. This is true when
   asynchronous delivery is selected.
•  It eliminates the need to buffer journal entries to a temporary area
   before transmitting them from the source machine to the target
   machine. This translates into less disk writes and greater DASD
   efficiency on the source system.
•  Since it is implemented in microcode, it significantly improves the
   replication performance of journal entries and allows database
   images to be sent to the target system in realtime. This realtime
   operation is called the synchronous delivery mode. If the
   synchronous delivery mode is used, the journal entries are
   guaranteed to be in main storage on the target system prior to
   control being returned to the application on the source machine.
•  It allows the journal receiver save and restore operations to be
   moved to the target system. This way, the resource utilization on
   the source machine can be reduced.
Restrictions of MIMIX Remote Journal support
The IBM i remote journal function does not allow writing journal entries directly to the
target journal receiver. This restriction severely limits the usefulness of cascading
remote journals in a managed availability environment.
MIMIX user journal replication does not support a cascading environment in which
remote journal receivers on the target system are also source journal receivers for a
third system.
Users who require this type of environment may use multiple installations of MIMIX,
implementing apply side journaling in one installation and using remote journaling to
replicate the applied transactions to a third system.
Overview of IBM processing of remote journals
Several key concepts within the IBM i remote journal function are important to
understanding its impact on MIMIX replication.
A local-remote journal pair refers to the relationship between a configured source
journal and target journal. The key point about a local-remote journal pair is that data
flows only in one direction within the pair, from source to target.
When the remote journal function is activated and all journal entries from the source
are requested, existing journal entries for the specified journal receiver on the source
system which have not already been replicated are replicated as quickly as possible.
This is known as catchup mode. Once the existing journal entries are delivered to
the target system, the system begins sending new entries in continuous mode
according to the delivery mode specified when the remote journal function was
started. New journal entries can be delivered either synchronously or asynchronously.
Synchronous delivery
In synchronous delivery mode the target system is updated in real time with journal
entries as they are generated by the source applications. The source applications do
not continue processing until the journal entries are sent to the target journal.
Each journal entry is first replicated to the target journal receiver in main memory on
the target system (1 in Figure 3). When the source system receives notification of the
delivery to the target journal receiver, the journal entry is placed in the source journal
receiver (2) and the source database is updated (3).
With synchronous delivery, journal entries that have been written to memory on the
target system are considered unconfirmed entries until they have been written to
auxiliary storage on the source system and confirmation of this is received on the
target system (4).
Figure 3. Synchronous mode sequence of activity in the IBM remote journal feature.
Unconfirmed journal entries are entries replicated to a target system but the state of
the I/O to auxiliary storage for the same journal entries on the source system is not
known. Unconfirmed entries only pertain to remote journals that are maintained
synchronously. They are held in the data portion of the target journal receiver. These
entries are not processed with other journal entries unless specifically requested or
until confirmation of the I/O for the same entries is received from the source system.
Confirmation typically is not immediately sent to the target system for performance
reasons.
Once the confirmation is received, the entries are considered confirmed journal
entries. Confirmed journal entries are entries that have been replicated to the target
system and the I/O to auxiliary storage for the same journal entries on the source
system is known to have completed.
With synchronous delivery, the most recent copy of the data is on the target system. If
the source system becomes unavailable, you can recover using data from the target
system.
Since delivery is synchronous to the application layer, there are application
performance and communications bandwidth considerations. There is some
performance impact to the application when it is moved from asynchronous mode to
synchronous mode for high availability purposes. This impact can be minimized by
ensuring efficient data movement. In general, a minimum of a dedicated 100
megabit Ethernet connection is recommended for synchronous remote journaling.
[Figure 3 diagram: applications on the source system update the production database;
journal entries flow from the source journal receiver (local) to the target journal
receiver (remote), with journal message queues on both the source and target systems.
The numbered steps 1 through 4 are described in the text above.]
MIMIX includes special switch processing for unconfirmed entries to ensure that the
most recent transactions are preserved in the event of a source system failure. For
more information, see Support for unconfirmed entries during a switch on page 66.
Asynchronous delivery
In asynchronous delivery mode, the journal entries are placed in the source journal
first (A in Figure 4) and then applied to the source database (B). An independent job
sends the journal entries from a buffer (C) to the target system journal receiver (D) at
some time after control is returned to the source applications that generated the
journal entries.
Because the journal entries on the target system may lag behind the source system's
database, entries may become trapped on the source system in the event of a source
system failure.
Figure 4. Asynchronous mode sequence of activity in the IBM remote journal feature.
With asynchronous delivery, the most recent copy of the data is on the source system.
Performance-critical applications frequently use asynchronous delivery.
Default values used in configuring MIMIX for remote journaling use asynchronous
delivery. This delivery mode is most similar to the MIMIX database send and receive
processes.
[Figure 4 diagram: applications on the source system update the production database
through the source journal receiver (local); an independent job sends entries from a
buffer to the target journal receiver (remote), with journal message queues on both
the source and target systems. The lettered steps A through D are described in the
text above.]
User journal replication processes
Data groups created using default values are configured to use remote journaling
support for user journal replication.
The replication path for database information includes the IBM i remote journal
function, the MIMIX database reader process, and one or more database apply
processes.
The IBM i remote journal function transfers journal entries to the target system.
The database reader (DBRDR) process reads journal entries from the
target journal receiver of a remote journal configuration and places those journal
entries that match replication criteria for the data group into a log space.
Remote journaling does not allow entries to be filtered before they are sent to the remote
system. All entries deposited into the source journal will be transmitted to the target
system. The database reader process performs the filtering that is identified in the
data group definition parameters and file and tracking entry options.
The database apply process applies the changes stored in the target log space to the
target system's database. MIMIX uses multiple apply processes in parallel for
maximum efficiency. Transactions are applied in real time to generate a duplicate
image of the journaled objects being replicated from the source system.
The RJ link
To simplify tasks associated with remote journaling, MIMIX implements the concept of
a remote journal link. A remote journal link (RJ link) is a configuration element that
identifies an IBM i remote journaling environment. An RJ link identifies:
A source journal definition that identifies the system and journal which are the
source of journal entries being replicated from the source system.
A target journal definition that defines a remote journal.
Primary and secondary transfer definitions for the communications path for use by
MIMIX.
Whether the IBM i remote journal function sends journal entries asynchronously or
synchronously.
Once an RJ link is defined and other configuration elements are properly set, user
journal replication processes will use the IBM i remote journaling environment within
its replication path.
The concept of an RJ link is integrated into existing commands. The Work with RJ
Links display makes it easy to identify the state of the IBM i remote journaling
environment defined by the RJ link.
Sharing RJ links among data groups
It is possible to configure multiple data groups to use the same RJ link. However, data
groups should only share an RJ link if they are intended to be switched together or if
they are non-switchable data groups. Otherwise, there is additional communications
overhead from data groups replicating in opposite directions and the potential for
journal entries for database operations to be routed back to their originating system.
See Support for unconfirmed entries during a switch on page 66 and RJ link
considerations when switching on page 66 for more details.
RJ links within and independently of data groups
The RJ link is integrated into commands for starting and ending data group replication
(STRDG and ENDDG). The STRDG and ENDDG commands automatically determine
whether the data group uses remote journaling and select the appropriate replication
path processes, including the RJ link, as needed.
Two MIMIX commands, Start Remote Journal Link (STRRJLNK) and End Remote
Journal Link (ENDRJLNK), provide the ability to use an RJ link without performing
data replication.
Differences between ENDDG and ENDRJLNK commands
You should be aware of differences between ending data group replication (ENDDG
command) and ending only the remote journal link (ENDRJLNK command). You will
primarily use the End Data Group (ENDDG) command to end replication processes
and to optionally end the RJ link when necessary. The End Remote Journal Link
(ENDRJLNK) command ends only the RJ link.
Both commands include an end option (ENDOPT parameter) to specify whether to
end immediately or in a controlled manner. These options on the ENDRJLNK
command do not have the same meaning as on the ENDDG command. For
ENDRJLNK, the ENDOPT parameter has the following values:

Table 3. End option values on the End Remote Journal Link (ENDRJLNK) command.

*IMMED    The target journal is deactivated immediately. Journal entries that
          are already queued for transmission are not sent before the target
          journal is deactivated. The next time the remote journal function is
          started, the journal entries that were queued but not sent are
          prepared again for transmission to the target journal.

*CNTRLD   Any journal entries that are queued for transmission to the target
          journal are transmitted before the IBM i remote journal function is
          ended. At any time, the remote journal function may have one or more
          journal entries prepared for transmission to the target journal. If
          an asynchronous delivery mode is used over a slow communications
          line, it may take a significant amount of time to transmit the
          queued entries before actually ending the target journal.

The ENDRJLNK command's ENDOPT parameter is ignored and an immediate end is
performed when either of the following conditions is true:

The remote journal function is running in synchronous mode
(DELIVERY(*SYNC)).

The remote journal function is performing catch-up processing.
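For illustration, a minimal sketch of both end options is shown below. The parameters
that identify which RJ link to end are omitted (shown as ...) because they vary by
installation; prompt the command with F4 to see and complete them:

ENDRJLNK ... ENDOPT(*CNTRLD)   /* Transmit queued entries, then end the link */
ENDRJLNK ... ENDOPT(*IMMED)    /* Deactivate the target journal immediately  */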
RJ link monitors
User journal replication processes monitor the journal message queues of the
journals identified by the RJ link. Two RJ link monitors are created automatically, one
on the source system and one on the target system. These monitors provide added
value by allowing MIMIX to automatically monitor the state of the remote journal link,
to notify the user of problems, and to automatically recover the link when possible.
RJ link monitors - operation
The RJ link monitors are automatically started when the master monitor is started. If
for some reason the monitors are not already started, they will be started when you
start a remote journal link. The monitors are created if they do not already exist. The
source RJ link monitor is named after the source journal definition and the target RJ
link monitor is named after the target journal definition.
The RJ link monitors are MIMIX message queue monitors. They monitor messages
put on the message queues associated with the source and target journals. The
operating system issues messages to these journal message queues when a failure
is detected in IBM i remote journal processing. Each RJ link monitor uses information
provided in the messages to determine which remote journal link is affected and to try
to automatically recover that remote journal link. (The state of a remote journal link
can be seen by using the Work with RJ Links (WRKRJ LNK) command.) There is a
limit on the number of times that a link will be recovered in a particular time period; a
continually failing link will eventually be marked failed and recovery will end. Typically
this occurs when there are communications problems. Once the problem is resolved,
you can start the RJ link monitors again by using the Work with Monitors (WRKMON)
command and selecting the Start option.
The RJ link monitor for the source does not end once it is started, since more than
one remote journal link can use a source monitor. Users can end the monitors by
using the Work with Monitors (WRKMON) command and selecting the End option.
MIMIX Monitor commands can be used to see the status of your RJ link monitors. The
WRKMON command lists all monitors for a MIMIX installation and displays whether
the monitor is active or inactive. You can also view the status of your RJ link monitors
on the DSPDGSTS status display (option 8 from the Work with Data Groups display).
Both the source and target RJ link monitor processes appear on this display. The
display shows whether or not the monitor processes are active. If MIMIX Monitor is
not installed as recommended, the RJ link monitor status appears as unknown on the
Display Data Group Status display.
RJ link monitors in complex configurations
In a broadcast scenario, a single source journal definition can link to multiple target
journal definitions, each over its own remote journal link. One source RJ link monitor
handles this broadcast, since there is one source RJ monitor per source journal
definition communicating via a remote journal link.
Alternately, in a cascade scenario an intermediate system can have both a source RJ
link monitor and a target RJ link monitor running on it for the same journal definition.
This intermediate system has the target journal definition for the system that
originated the replication and holds the source journal definition for the next system in
the cascade.
For more information about configuring for these environments, see Data distribution
and data management scenarios on page 339.
Support for unconfirmed entries during a switch
The MIMIX remote journal support implements synchronous mode processing in a
way that reduces data latency in the movement of journal entries from the source to
the target system. This reduces the potential for and the degree of manual
intervention when an unplanned outage occurs.
Whenever an RJ link failure is detected, MIMIX saves any unconfirmed entries on the
target system so they can be applied to the backup database if an unplanned switch
is required. The unconfirmed entries are the most recent changes to the data.
Maintaining this data on the target system is critical to your managed availability
solution.
In the event of an unplanned switch, the unconfirmed entries are routed to the MIMIX
database apply process to be applied to the backup database. As a result, you will
see the database apply process jobs run longer than they would under standard
switch processing. If the apply process is ended by a user before the switch, MIMIX
will restart the apply jobs to preserve these entries.
As part of the unplanned switch processing, MIMIX checks whether the apply jobs are
caught up. Then, unconfirmed entries are applied to the target database and added to
a journal that will be transferred to the source system when that system is brought
back up. When the backup system is brought online as the temporary source
system, the unconfirmed entries are processed before any new journal entries
generated by the application are processed. Furthermore, to ensure full data integrity,
once the original source system is operational these unconfirmed entries are the first
entries replicated back to that system.
RJ link considerations when switching
By default, when a data group is ended or a planned switch occurs, the RJ link
remains active. You need to consider whether to keep the original RJ link active after
a planned switch of a data group. If the RJ link is used by another application or data
group, the RJ link must remain active. Sharing an RJ link among multiple data groups
is only recommended for the conditions identified in Sharing RJ links among data
groups on page 62.
If the RJ link is not used by any other application or data group, the link should be
ended to prevent communications and processing overhead. When you are
temporarily running production applications on the backup system after a planned
switch, journal entries generated on the backup system are transmitted to the remote
journal receiver (which is on the production system). MIMIX applies the entries to the
original production database. If journaling is still active on the original production
database, new journal entries are created for the entries that were just applied. These
new journal entries are essentially a repeat of the same operation just performed
against the database. Remote journaling causes the entries to be transmitted back to
the backup system. MIMIX prevents these repeat entries from being reapplied;
however, the repeated entries still consume additional resources within MIMIX
and in communications.
MIMIX Model Switch Framework considerations - When remote journaling is used
in an environment in which MIMIX Model Switch Framework is implemented, you
need to consider the implications of sharing an RJ link. In addition, default values
used during a planned switch cause the RJ link to remain active. You may need to
end the RJ link after a planned switch.
User journal replication of IFS objects, data areas, data
queues
IBM provides journaling support for IFS objects as well as for data areas and data
queues. This capability allows transactions to be journaled in the user journal
(database journal), much like transactions are recorded for database record changes.
Each time an IFS object, data area, or data queue changes, only changed bytes are
recorded in the journal entry.
MIMIX enables you to take advantage of this IBM i capability when replicating
these journaled objects. This support within MIMIX is often referred to as advanced
journaling and is enabled by explicitly configuring data group object entries for data
areas and data queues and data group IFS entries for IFS objects. In addition to data
group object entries and IFS entries, MIMIX uses tracking entries to uniquely identify
each object that is configured for advanced journaling.
A data group that replicates some or all configured IFS objects, data areas, or data
queues through a user journal may also replicate files from the same journal as well
as replicate objects from the system journal. For example, a data group could be
configured to support MIMIX Dynamic Apply for *FILE objects, advanced journaling
for IFS objects and data areas, and system journal processes for data queues and
other library-based objects. For more information, see Replication choices by object
type on page 87.
You may need to consider how much data is replicated through the same apply
session for user journal replication processes and whether any transactions need to
be serialized with database files. For more information, see Planning for journaled
IFS objects, data areas, and data queues on page 78.
Benefits of advanced journaling
One of the most significant benefits of using advanced journaling is that IFS objects,
data areas, and data queues are processed by replicating only changed bytes.
For example, when IFS objects, data areas, or data queues are replicated through the
system journal, the entire object is shipped across the communications link. While this
may be sufficient for many applications, those using large files or making frequent
small byte-level changes can be negatively impacted by the additional data
transmission. When these objects are configured to allow user journal replication,
MIMIX replicates only changed bytes of the data for IFS objects, data areas, and data
queues.
Another significant benefit of using advanced journaling for IFS objects, data areas,
and data queues is that transactions can be applied in lock-step with a database file.
This requires that the objects and database are configured to the same data group
and the same database apply session.
For example, assume that a hotel uses a database application to reserve rooms.
Within the application, a data area contains a counter to indicate the number of rooms
reserved for a particular day and a database file contains detailed information about
reservations. Each time a room is reserved, both the counter and the database file are
updated. If these updates do not occur in the same order on the target system, the
hotel risks reserving too many or too few rooms. Without advanced journaling,
serialization of these transactions cannot be guaranteed on the target system due
to inherent differences in MIMIX processing from the user journal (database file) and
the system journal (default for objects). With advanced journaling, MIMIX serializes
these transactions on the target system by updating both the file and the data area
through user journal processing. Thus, as long as the database file and data area are
configured to be processed by the same apply session, updates occur on the target
system in the same order they were originally made on the source system.
Additional benefits of replicating IFS objects, data areas, and data queues from the
user journal include:
Replication is less intrusive. In traditional object replication, the save/restore
process places locks on the replicated object on the source system. Database
replication touches the user journal only, leaving the source object alone.
Changes to objects replicated from the user journal may be replicated to the target
system in a more timely manner. In traditional object replication, system journal
replication processes must contend with potential locks placed on the objects by
user applications.
Processing time may be reduced, even for equal amounts of data. Database
replication eliminates the separate save, send, and restore processes necessary
for object replication.
The objects replicated from the user journal can reduce burden on object
replication processes when there is a lot of activity being replicated through the
system journal.
Commitment control is supported for B journal entry types for IFS objects
journaled to a user journal.
Advanced journaling can be used in configurations that use either remote
journaling or MIMIX source-send processes for user journal replication.
Restrictions and configuration requirements vary for IFS objects and data area or data
queue objects. If one or more of the configuration requirements are not met, the
system journal replication path is used. For detailed information, including supported
journal entry types, see Identifying data areas and data queues for replication on
page 103 and Identifying IFS objects for replication on page 106.
Replication processes used by advanced journaling
When IFS objects, data areas, and data queues are properly configured, replication
occurs through the user journal replication path. Processing occurs through the IBM i
remote journal function, the MIMIX database reader process (1), and one database
apply process (session A).
1. Data groups can also be configured for MIMIX source-send processing instead of MIMIX RJ support.
Tracking entries
A unique tracking entry is associated with each IFS object, data area, and data queue
that is replicated using advanced journaling.
The collection of data group IFS entries for a data group determines the subset of
existing IFS objects on the source system that are eligible for replication using
advanced journaling techniques. Similarly, the collection of data group object entries
determines the subset of existing data areas and data queues on the source system
that are eligible for replication using advanced journaling techniques. MIMIX requires
a tracking entry for each of the eligible objects to identify how it is defined for
replication and to assist with tracking status when it is replicated. IFS tracking entries
identify IFS stream files, including the source and target file ID (FID), while object
tracking entries identify data areas or data queues.
When you initially configure a data group you must load tracking entries, start
journaling for the objects which they identify, and synchronize the objects with the
target system. The same is true when you add new or change existing data group IFS
entries or object entries.
It is also possible for tracking entries to be automatically created. After creating or
changing data group IFS entries or object entries that are configured for advanced
journaling, tracking entries are created the next time the data group is started.
However, this method has disadvantanges.This can significantly increase the amount
of time needed to start a data group. If the objects you intend to replicate with
advanced journaling are not journaled before the start request is made, MIMIX places
the tracking entries in *HLDERR state. Error messages indicate that journaling must
be started and the objects must be synchronized between systems.
Once a tracking entry exists, it remains until one of the following occurs:
The object identified by the tracking entry is deleted from the source system and
replication of the delete action completes on the target system.
The data group configuration changes so that an object is no longer identified for
replication using advanced journaling.
Figure 5 shows an IFS user directory structure, the include and exclude processing
selected for objects within that structure, and the resultant list of tracking entries
created by MIMIX.
Figure 5. IFS tracking entries produced by MIMIX
The status of tracking entries is included with other data group status. You also can
see what objects they identify, whether the objects are journaled, and their replication
status. You can also perform operations on tracking entries, such as holding and
releasing, to address replication problems.
IFS object file identifiers (FIDs)
Normally, when dealing with objects and database files, you can see the name of the
object (file name, library name, and member name) in the journal entries. For IFS
objects, it is impractical to put the name of the IFS object in the header of the journal
entry because path names can be very long.
Each IFS object on a system has a unique 16-byte file ID (FID). The FID is used to
identify IFS objects in journal entries. The FID is machine-specific, meaning that IFS
objects with the same path name may have different FIDs on different systems.
MIMIX tracks the FIDs for all IFS objects configured for replication with advanced
journaling via IFS tracking entries. When the data group is switched, the source and
target FID associations are reversed, allowing MIMIX to successfully replicate
transactions to IFS objects.
Lesser-used processes for user journal replication
This topic describes two lesser-used replication processes: MIMIX source-send
processing for database replication and the data area poller process.
User journal replication with source-send processing
This topic describes user journal replication when data groups are configured to use
MIMIX source-send processes.
Note: New data groups are created to use remote journaling support for user journal
replication when shipped default values on commands are used. Using remote
journaling support offers many benefits over using MIMIX source-send
processes.
MIMIX uses journaling to identify changes to database files and other journaled
objects to be replicated. As journal entries are added to the journal receiver, the
database send process collects data from journal entries on the source system and
compares them to the data group file entries defined for the data group.
Journal entries for which a match is found for the file and library are then transported
to the target system for replication according to the DB journal entry processing
(DBJRNPRC) parameter filtering specified in the data group definition. The file entry
options (FEOPT) parameter, specified either at the data group level or on individual
data group file entries, also indicates whether to send only the after-image of the
change or both before-images and after-images.
Alternatively, if all journal entries are sent to the target system, the journal entries are
filtered there by the apply process. The matching for the apply process is at the file,
library, and member level.
Note: If an application program adds or removes members and all members within
the file are to be processed by MIMIX, it is better to use *ALL as the member
name in that data group file entry. If individual members are specified, only
those members you identify are processed.
On the target system, the database receive process transfers the data received over
the communications line from the source system into a log space on the target
system.
The database apply process applies replicated database transactions from the log
space to the appropriate database physical file member or data area on the target
system. For database files, transactions are applied at record level (puts, updates,
deletes) or file level (clears, reorganizations, member deletes). MIMIX uses multiple
apply processes in parallel for maximum efficiency. Transactions are applied in real
time to generate a duplicate image of the files and data areas replicated from the
source system.
Throughout this process, MIMIX manages the journal receiver unless you have
specified otherwise. The journal definition default operation specifies that MIMIX
automatically create the next journal receiver when the journal receiver reaches the
threshold size you specified in the journal definition. After MIMIX finishes reading the
entries from the current journal receiver, it deletes this receiver (if configured to do so)
and begins reading entries from the next journal receiver. This eliminates excessive
use of disk storage and allows valuable system resources to be available for other
processing.
Besides indicating the mapping between source and target file names, data group file
entries identify additional information used by database processes. The data group
file entry can also specify a particular apply session to use for processing on the
target system.
A status code in the data group file entry also stores the status of the file or member in
the MIMIX process. If a replication problem is detected, MIMIX puts the member in
hold error (*HLDERR) status so that no further transactions are applied. Files can
also be put on hold (*HLD) manually.
Putting a file on hold causes MIMIX to retain all journal entries for the file in log
spaces on the target system. If you expect to synchronize files at a later time, it is
better to put the file in an ignored state. By setting files to an ignored state, journal
entries for the file in the log spaces are deleted and additional entries received from
the source system are discarded. This keeps the log spaces to a minimal size and
improves efficiency for the apply process.
The file entry option Lock member during apply indicates whether or not to allow only
restricted access (read-only) to the file on the backup system. This file entry option
can be specified on the data group definition or on individual data group entries.
The data area polling process
Note: The preferred way to replicate data areas is through the user journal. Data
areas can alternatively be replicated through system journal replication
processes or with the data area poller.
When a data group is configured to use the data area polling process, polling
programs capture changes to data areas defined to the data group at specified
intervals. MIMIX creates a journal entry when there is a change to a data area.
MIMIX supports the following data area types:

Table 4. Data area types supported by the data area polling process.

*CHAR   character, up to 2000 bytes
*DEC    decimal, up to 24 digits in length and 9 decimal positions
*LGL    logical, equal to 1 byte

You define a data group data area entry for each data area that you want MIMIX to
manage. The data group definition determines how frequently the polling programs
check for changes to data areas.

The data area polling process runs on the source system. This process retrieves each
data area defined to a data group at the interval you specify and determines whether
or not the data area has changed. MIMIX checks for changes to the data area type and
length as well as to the contents of the data area. If a data area has changed, the data
area polling process retrieves the data area and converts it into a journal entry. This
journal entry is sent through the normal user journal replication processing and is
used to update the data area on the target system.
For example, if a data area that is defined to MIMIX is deleted and recreated with new
attributes, the data area polling process will capture the new attributes and recreate
the data area on the target system.
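A sketch of adding such an entry is shown below. The command name ADDDGDAE
and the DTAARA parameter are assumptions based on the naming pattern of the other
data group entry commands in this book; confirm the actual syntax by prompting in
your installation before use:

ADDDGDAE DGDFN(MYDG SYS1 SYS2) DTAARA(PRODLIB/COUNTER)  /* hypothetical names */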
CHAPTER 3 Preparing for MIMIX
This chapter outlines what you need to do to prepare for using MIMIX.
Preparing for the installation and use of MIMIX is a very important step towards
meeting your availability management requirements. Because of their shared
functions and their interaction with other MIMIX products, it is best to determine IBM
System i requirements for user journal and system journal processing in the context of
your total MIMIX environment.
Give special attention to planning and implementing security for MIMIX. General
security considerations for all MIMIX products can be found in the Using License
Manager book. In addition, you can make your systems more secure with MIMIX
product-level and command-level security. Each product has its own product-level
security, but you must also consider the security implications of common functions
used by each product. Information about setting security for common functions is also
found in the Using License Manager book.
The topics in this chapter include:
Checklist: pre-configuration on page 76 provides a procedure to follow to
prepare to configure MIMIX on each system that participates in a MIMIX
installation.
Data that should not be replicated on page 77 describes how to consider what
data should not be replicated.
Planning for journaled IFS objects, data areas, and data queues on page 78
describes considerations when planning to use advanced journaling for IFS
objects, data areas, or data queues.
Starting the MIMIXSBS subsystem on page 82 describes how to start the
MIMIXSBS subsystem, in which all MIMIX products run.
Accessing the MIMIX Main Menu on page 83 describes the MIMIX Main Menu
and its two assistance levels, basic and intermediate, which provide options to
help simplify daily interactions with MIMIX.
Checklist: pre-configuration
You need to configure MIMIX on each system that participates in a MIMIX installation.
Do the following:
1. By now, you should have completed the following tasks:
The checklist for installing MIMIX software in the Using License Manager book
You should have also turned on product-level security and granted authority to
user profiles to control access to the MIMIX products.
2. At this time, you should review the information in Data that should not be
replicated on page 77.
3. Decide what replication choices are appropriate for your environment. For
detailed information see the chapter Planning choices and details by object class
on page 85.
4. If it is not already active, start the MIMIXSBS subsystem using topic Starting the
MIMIXSBS subsystem on page 82.
5. Configure each system in the MIMIX installation, beginning with the management
system. The chapter Configuration checklists on page 123 identifies the primary
options you have for configuring MIMIX.
6. Once you complete the configuration process you choose, you may also need to
do one or more of the following:
If you plan to use MIMIX Monitor in conjunction with MIMIX, you may need to
write exit programs for monitoring activity and you may want to ensure that
your monitor definitions are replicated. See the MIMIX Operations book for
more information.
Verify the configuration.
Verify any exit programs that are called by MIMIX.
Update any automation programs you use with MIMIX and verify their
operation.
If you plan to use switching support, you or your Certified MIMIX Consultant
may need to take additional action to set up and test switching. In order to use
MIMIX Switch Assistant, a default model switch framework must be configured
and identified in MIMIX policies. For more information about MIMIX Model
Switch Framework, see the Using MIMIX Monitor book. For more information
about switching and policies, see the MIMIX Operations book.
Data that should not be replicated
There are some considerations to keep in mind when defining data for replication. Not
only do you need to determine what is critical to replicate, but you also need to
consider data that should not be replicated.
User environment - As you identify your critical data, consider the following:
You may not need to replicate temporary files, work files, and temporary objects,
including DLOs and stream files. Evaluate how your applications use such files to
determine if they need to be replicated.
MIMIX environment - Consider the following:
Do not replicate libraries LAKEVIEW, MIMIXQGPL, VSI001LIB, any MIMIX
installation libraries, any MIMIX data libraries, libraries in which Vision Director
is installed, or the IFS location /visionsolutions/http/vsisvr.
Note: MIMIX is the default name for the MIMIX installation library -- the library in
which MIMIX Enterprise or MIMIX Professional is installed. MIMIX data
libraries are associated with a MIMIX installation library and have names in
the format installation-library-name_x, where x is a letter or number.
Do not place user created objects or programs in the LAKEVIEW, MIMIXQGPL, or
VSI001LIB libraries or in the IFS location /visionsolutions/http/vsisvr. This includes
any programs created as part of your MIMIX Model Switch Framework. If you
place such objects or programs in these libraries, they may be deleted during the
installation process. Objects that are in these libraries must be placed in a
different library before installing software. The one exception is that job
descriptions, such as the MIMIX Port job, can continue to be placed into the
MIMIXQGPL library.
Only user created objects or programs that are related to a MIMIX installation
should be placed within a MIMIX installation library or a MIMIX data library.
Examples of related objects include user created step programs, user exit
programs, and programs created as part of a MIMIX Model Switch Framework
implementation.
Do not replicate the LAKEVIEW or MIMIXOWN user profiles.
System environment - Consider the following:
Do not replicate system user profiles from one system to another. For example,
QSYSOPR and QSECOFR should not be replicated.
Do not replicate IBM System i objects from one system to another. IBM-supplied
libraries, files, and other objects for System i typically begin with the prefix letter
Q.
Planning for journaled IFS objects, data areas, and data
queues
You can choose to use the cooperative processing support within MIMIX to replicate
any combination of journaled IFS objects, data area objects, or data queue objects
using user journal replication processes.
In addition to configuration and journaling requirements and the restrictions that apply,
you need to address several other considerations when planning to replicate
journaled IFS objects, data areas, or data queues. These considerations affect
whether journals should be shared, whether objects should be replicated in a data
group shared with database files, whether configuration changes are needed to
change apply sessions for database files, and whether exit programs need to be
updated.
Is user journal replication appropriate for your environment?
While user journal replication has significant advantages, it may not be appropriate for
your environment. Or, it may be appropriate for only some of the supported object
types. Consider the following:
Do the objects remain relatively static? Static objects typically persist after they
are created, while their data may change. Examples of more dynamic objects
include temporary objects, which are created, renamed, and deleted frequently.
Objects for some applications, like those which heavily use *DTAQs, may be
better suited to replication from the system journal.
What release of IBM i is in use? On some operating system releases, the types of
operations that can be replicated from a user journal are limited. The IBM i release
in use may influence whether objects are considered static or dynamic for
replication purposes.
The benefits of user journal replication are described in Benefits of advanced
journaling on page 68. For restrictions and limitations, see Identifying data areas
and data queues for replication on page 103 and Identifying IFS objects for
replication on page 106.
Serialized transactions with database files
Transactions completed for database files and objects (IFS objects, data areas, or
data queues) can be serialized with one another when they are applied to objects on
the target system. If you require serialization, these objects and database files must
share the same data group as well as the same database apply session, session A.
Since MIMIX uses apply session A for all objects configured for advanced journaling,
serialization may require that you change the configuration for database files to
ensure that they use the same apply session. Load balancing may also become a
concern. See Database apply session balancing on page 80.
Converting existing data groups
When converting an existing data group consider the following:
You may have previously used data groups with a Data group type (TYPE) value
of *OBJ to separate replication of IFS, data area, or data queue objects from other
activity. Converting these data groups to use advanced journaling will not cause
problems with the data group. The data group definition and existing data group
entries must be changed to the values required for advanced journaling.
When converting an existing data group to use advanced journaling, all objects in
the IFS path or the library specified that match the selection criteria are selected.
You may need to create additional data group IFS or object entries in order to
achieve the desired results. This may include creating entries that exclude objects
from replication.
Adding IFS, data area, or data queue objects configured for advanced journaling
to an existing database replication environment may increase replication activity
and affect performance. If a large amount of data is to be replicated, consider the
overall replication performance and throughput requirements when choosing a
configuration.
Changing the replication mechanism of IFS objects, data areas, or data queues
from system journal replication to user journal replication generally reduces
bandwidth consumption, improves replication latency, and eliminates the locking
contention associated with the save and restore process. However, if these
objects have never been replicated, the addition of IFS byte stream files, data
areas, or data queues to the replication environment will increase bandwidth
consumption and processing workload.
Conversion examples
To illustrate a simple conversion, assume that the systems defined to data group
KEYAPP are running on IBM i V5R4. You use this data group for system journal
replication of the objects in library PRODLIB. The data group has one data group
object entry which has the following values:
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*FILE)
Example 1 - You decide to use advanced journaling for all *DTAARA and *DTAQ
objects replicated with data group KEYAPP. You have confirmed that the data group
definition specifies TYPE(*ALL) and does not need to change. After performing a
controlled end of the data group, you change the data group object entry to have the
following values:
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*DFT)
Note: COOPTYPE(*DFT) is equivalent to specifying COOPTYPE(*FILE *DTAARA
*DTAQ).
When the data group is started, object tracking entries are loaded and journaling is
started for the data area and data queue objects in PRODLIB. Those objects will now
be replicated from a user journal. Any other object types in PRODLIB continue to be
replicated from the system journal.
Example 2 - You want to use advanced journaling for data group KEYAPP but one
data area, XYZ, must remain replicated from the system journal. You will need the
data group object entry described in Example 1.
LIB1(PRODLIB) OBJ1(*ALL) OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*DFT)
You will also need a new data group object entry that specifies the following so that
data area XYZ can be replicated from the system journal:
LIB1(PRODLIB) OBJ1(XYZ) OBJTYPE(*DTAARA) PRCTYPE(*INCLD) COOPDB(*NO)
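The entry values in these examples are typically set with the data group object entry
commands. The following sketch assumes the command names CHGDGOBJE and
ADDDGOBJE and a three-part DGDFN value (the data group name and two system
definitions), which follow the conventions of MIMIX entry commands but should be
confirmed by prompting with F4 in your installation:

/* Example 1: change the existing entry so data areas and data queues */
/* are cooperatively processed (advanced journaling).                 */
CHGDGOBJE DGDFN(KEYAPP SYS1 SYS2) LIB1(PRODLIB) OBJ1(*ALL) +
          OBJTYPE(*ALL) PRCTYPE(*INCLD) COOPDB(*YES) COOPTYPE(*DFT)

/* Example 2: add an entry that keeps data area XYZ in the system journal. */
ADDDGOBJE DGDFN(KEYAPP SYS1 SYS2) LIB1(PRODLIB) OBJ1(XYZ) +
          OBJTYPE(*DTAARA) PRCTYPE(*INCLD) COOPDB(*NO)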
Database apply session balancing
In each data group, one database apply session, session A, is used for all IFS
objects, data areas, and data queues replicated from a user journal. If you also
replicate database files in the same data group, the way in which files are configured
for replication can also affect how much data is processed by apply session A. In
some cases, you may need to adjust the configured apply session in data group object
and file entries to either ensure that files that should be serialized remain in the same
apply session or to move files to another apply session to manually balance loads.
Consider the following:
In MIMIX Dynamic Apply configurations, newly created database files are
distributed evenly across database apply sessions by default. This ensures that
the files are distributed in a way that will not overload any one apply session.
In configurations using legacy cooperative processing, newly created database
files are distributed to apply session A by default. In data groups that also
replicate IFS objects, data areas or data queues through the user journal, it may
be necessary to change the apply session to which cooperatively processed files
are directed when the database files are created to prevent apply session A from
becoming overloaded. The apply session can be changed in the file entry options
(FEOPT) on the data group object and file entries.
Logical files and physical files with referential constraints also have apply session
requirements to consider. For more information see Considerations for LF and PF
files on page 96.
User exit program considerations
When new or different journaled object types are added to an existing data group,
user exit programs may be affected. Be aware of the following exit program
considerations when changing an existing configuration to include IFS objects, data
areas, or data queues configured for replication processing from a user journal.
When IFS objects, data areas, or data queues are journaled to a user journal, new
journal entry codes are provided to the user exit program. If the user exit program
interprets the journal code, changes may be required.
The path name for IFS objects cannot be interpreted in the same way as it can for
database files. MIMIX uses the file ID (FID) to identify the IFS object being
replicated. User exit programs that rely on the library and file names in the journal
entry may need to be changed to either ignore IFS journal entries or process them
by resolving the FID to a path name using an IBM-supplied API such as
Qp0lGetPathFromFileID().
Journaled IFS objects and data queues can have incomplete journal entries. For
incomplete journal entries, MIMIX provides two or more journal entries with
duplicate journal entry sequence numbers and journal codes and types to the user
exit program when the data for the incomplete entry is retrieved. Programs need
to correctly handle these duplicate entries representing the single, original journal
entry.
Journal entries for journaled IFS objects, data areas, and data queues will be
routed to the user exit program. This may be a performance consideration relative
to user exit program design.
Contact your Certified MIMIX Consultant for assistance with user exit programs.
Starting the MIMIXSBS subsystem
By default, all MIMIX products run in the MIMIXSBS subsystem that is created when
you install the product. This subsystem must be active before you can use the MIMIX
products.
If the MIMIXSBS is not already active, start the subsystem by typing the command
STRSBS SBSD(MIMIXQGPL/MIMIXSBS) and pressing Enter.
Any autostart job entries listed in the MIMIXSBS subsystem will start when the
subsystem is started.
Note: You can ensure that the MIMIX subsystem is started after each IPL by adding
this command to the end of the startup program for your system. Due to the
unique requirements and complexities of each MIMIX implementation, it is
strongly recommended that you contact your Certified MIMIX Consultant to
determine the best way in which to design and implement this change.
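As a minimal sketch, the following lines could be added to a CL startup program (the
program named in the QSTRUPPGM system value); the MONMSG shown tolerates the case
where the subsystem is already active:

STRSBS SBSD(MIMIXQGPL/MIMIXSBS)  /* Start the MIMIX subsystem          */
MONMSG MSGID(CPF1010)            /* Ignore 'Subsystem already active'  */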
Accessing the MIMIX Main Menu
The MIMIX command accesses the main menu for a MIMIX installation. The MIMIX
Main Menu has two assistance levels, basic and intermediate. The command defaults
to the basic assistance level, shown in Figure 6, with its options designed to simplify
day-to-day interaction with MIMIX. Figure 7 shows the intermediate assistance level.
The options on the menu vary with the assistance level. In either assistance level, the
available options also depend on the MIMIX products installed in the installation
library and their licensing. The products installed and the licensing also affect
subsequent menus and displays.
Accessing the menu - If you know the name of the MIMIX installation you want, you
can use the name to library-qualify the command, as follows:
Type the command library-name/MIMIX and press Enter. The default name of
the installation library is MIMIX.
If you do not know the name of the library, do the following:
1. Type the command LAKEVIEW/WRKPRD and press Enter.
2. Type a 9 (Display product menu) next to the product in the library you want on the
Lakeview Technology Installed Products display and press Enter.
Changing the assistance level - The F21 key (Assistance level) on the main menu
toggles between basic and intermediate levels of the menu. You can also specify
the Assistance Level (ASTLVL) parameter on the MIMIX command.
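For example, assuming the default installation library name and that ASTLVL accepts
a *BASIC value (prompt the command with F4 to confirm the values available in your
installation):

MIMIX/MIMIX ASTLVL(*BASIC)   /* Open the main menu at the basic assistance level */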
Figure 6. MIMIX Basic Main Menu

 MIMIX Basic Main Menu
                                                         System: SYSTEM1
 MIMIX

 Select one of the following:

      1. Work with application groups           WRKAG
      2. Start MIMIX
      3. End MIMIX
      4. Switch all application groups
      5. Start or complete switch using Switch Asst.
      6. Work with data groups                  WRKDG

     10. Availability status                    WRKMMXSTS
     11. Configuration menu
     12. Work with monitors                     WRKMON
     13. Work with messages                     WRKMSGLOG
     14. Cluster menu
                                                                 More...
 Selection or command
 ===>

 F3=Exit   F4=Prompt   F9=Retrieve   F21=Assistance level   F12=Cancel
 (C) Copyright Vision Solutions, Inc., 1990, 2010.
Note: On the MIMIX Basic Main Menu, options 5 (Start or complete switch using
Switch Asst.) and 10 (Availability Status) are not recommended for
installations that use application groups.
Figure 7. MIMIX Intermediate Main Menu

 MIMIX Intermediate Main Menu
                                                         System: SYSTEM1
 MIMIX

 Select one of the following:

      1. Work with data groups                  WRKDG
      2. Work with systems                      WRKSYS
      3. Work with messages                     WRKMSGLOG
      4. Work with monitors                     WRKMON
      5. Work with application groups           WRKAG
      6. Work with audits                       WRKAUD
      7. Work with procedures                   WRKPROC

     11. Configuration menu
     12. Compare, verify, and synchronize menu
     13. Utilities menu
     14. Cluster menu
                                                                 More...
 Selection or command
 ===>

 F3=Exit   F4=Prompt   F9=Retrieve   F21=Assistance level   F12=Cancel
 (C) Copyright Vision Solutions, Inc., 1990, 2010.
CHAPTER 4 Planning choices and details by
object class
This chapter describes the replication choices available for objects and identifies
critical requirements, limitations, and configuration considerations for those choices.
Many MIMIX processes are customized to provide optimal handling for certain
classes of related object types and differentiate between database files, library-based
objects, integrated file system (IFS) objects, and document library objects (DLOs).
Each class of information is identified for replication by a corresponding class of data
group entries. A data group can have any combination of data group entry classes.
Some classes even support multiple choices for replication.
In each class, a data group entry identifies a source of information that can be
replicated by a specific data group. When you configure MIMIX, each data group
entry you create identifies one or more objects to be considered for replication or to
be explicitly excluded from replication. When determining whether to replicate a
journaled transaction, MIMIX evaluates all of the data group entries for the class to
which the object belongs. If the object is within the name space determined by the
existing data group entries, the transaction is replicated.
The topics in this chapter include:
Replication choices by object type on page 87 identifies the available replication
choices for each object class.
Configured object auditing value for data group entries on page 89 describes
how MIMIX uses a configured object auditing value that is identified in data group
entries and when MIMIX will change an objects auditing value to match this
configuration value.
Identifying library-based objects for replication on page 91 includes information
that is common to all library-based objects, such as how MIMIX interprets the data
group object entries defined for a data group. This topic also provides examples
and additional detail about configuring entries to replicate spooled files and user
profiles.
Identifying logical and physical files for replication on page 96 identifies the
replication choices and considerations for *FILE objects with logical or physical file
extended attributes. This topic identifies the requirements, limitations, and
configuration requirements of MIMIX Dynamic Apply and legacy cooperative
processing.
Identifying data areas and data queues for replication on page 103 identifies the
replication choices and configuration requirements for library-based objects of
type *DTAARA and *DTAQ. This topic also identifies restrictions for replication of
these object types when user journal processes (advanced journaling) is used.
Identifying IFS objects for replication on page 106 identifies supported and
unsupported file systems, replication choices, and considerations such as long
path names and case sensitivity for IFS objects. This topic also identifies
restrictions and configuration requirements for replication of these object types
when user journal processes (advanced journaling) is used.
Identifying DLOs for replication on page 111 describes how MIMIX interprets the
data group DLO entries defined for a data group and includes examples for
documents and folders.
Processing of newly created files and objects on page 114 describes how new
IFS objects, data areas, data queues, and files that have journaling implicitly
started are replicated from the user journal.
Processing variations for common operations on page 117 describes
configuration-related variations in how MIMIX replicates move/rename, delete,
and restore operations.
Replication choices by object type
A new configuration of MIMIX that uses shipped defaults for all configuration choices
will use remote journaling support for replication from user journals. Default
configuration choices will result in physical files (data and source) as well as logical
files, data areas, and data queues being processed through user journal replication.
All other supported object types and classes will be replicated using system journal
replication. You can optionally use other replication processes as described in Table
5.
Table 5. Replication choices by object class

Objects of type *FILE with extended attributes PF (data, source) and LF
   Default: user journal with MIMIX Dynamic Apply (1). Requires data group
   object entries and file entries.
   Other: for PF data files, legacy cooperative processing (2); for PF source
   files and LF files, system journal. Requires data group object entries and
   file entries.
   More information: Identifying logical and physical files for replication
   on page 96.

Objects of type *FILE with other extended attributes
   Default: system journal. Requires data group object entries.
   More information: Identifying library-based objects for replication on
   page 91.

Objects of type *DTAARA
   Default: advanced journaling (2). Requires data group object entries and
   object tracking entries.
   Other: system journal. Requires data group object entries.
   Other: data area polling process associated with a user journal (2).
   Requires data area entries.
   More information: Identifying data areas and data queues for replication
   on page 103.

Objects of type *DTAQ
   Default: advanced journaling (2). Requires data group object entries and
   object tracking entries.
   Other: system journal. Requires data group object entries.
   More information: Identifying data areas and data queues for replication
   on page 103.

Other library-based objects
   Default: system journal. Requires data group object entries.
   More information: Identifying library-based objects for replication on
   page 91.

IFS objects
   Default: system journal. Requires data group IFS entries.
   Other: advanced journaling (2). Requires data group IFS entries and IFS
   tracking entries.
   More information: Identifying IFS objects for replication on page 106.

DLOs
   Default: system journal. Requires data group DLO entries.
   More information: Identifying DLOs for replication on page 111.

Notes:
1. New data groups are created to use remote journaling and to cooperatively
   process files using MIMIX Dynamic Apply. Existing data groups can be
   converted to this method of cooperative processing.
2. User journal replication can be configured for either remote journaling or
   MIMIX source-send processes.
Configured object auditing value for data group entries
When you create data group entries for library-based objects, IFS objects, or DLOs,
you can specify an object auditing value within the configuration. This configured
object auditing value affects how MIMIX handles changes to attributes of objects. It is
particularly important for, but not limited to, objects configured for system journal
replication.
The Object auditing value (OBJAUD) parameter defines a configured object auditing
level for use by MIMIX. This configured value is associated with all objects identified
for processing by the data group entry. An object's actual auditing level determines
the extent to which changes to the object are recorded in the system journal and
replicated by MIMIX. The configured value is used during initial configuration and
during processing of requests to compare objects that are identified by configuration
data.
In specific scenarios, MIMIX evaluates whether an object's auditing value matches
the configured value of the data group entry that most closely matches the object
being processed. If the actual value is lower than the configured value, MIMIX sets
the object to the configured value so that future changes to the object will be recorded
as expected in the system journal and therefore can be replicated.
Note: MIMIX only considers changing an object's auditing value when the data
group object entry is configured for system journal replication. MIMIX does not
change the object's value for files that are configured for MIMIX Dynamic
Apply or legacy cooperative processing, or for data areas and data queues that
are configured for user journal replication.
The configured value specified in data group entries can affect replication of some
journal entries generated when an object attribute changes. Specifically, the
configured value can affect replication of T-ZC journal entries for files and IFS objects
and T-YC entries for DLOs. Changes that generate other types of journal entries are
not affected by this parameter.
When MIMIX changes the audit level, the possible values have the following results:
The default value, *CHANGE, ensures that all changes to the object by all users
are recorded in the system journal.
The value *ALL ensures that all changes or read accesses to the object by all
users are recorded in the system journal. The journal entries generated by read
accesses to objects are not used for replication and their presence can adversely
affect replication performance.
The value *NONE results in no entries recorded in the system journal when the
object is accessed or changed.
The values *CHANGE and *ALL result in replication of T-ZC and T-YC journal entries.
The value *NONE prevents replication of attribute and data changes for the identified
object or DLO because T-ZC and T-YC entries are not recorded in the system journal.
For files configured for MIMIX Dynamic Apply and any IFS objects, data areas, or
data queues configured for user journal replication, the value *NONE can improve
MIMIX performance by preventing unneeded entries from being written to the system
journal.
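For illustration only, the following IBM i commands show the kind of auditing change MIMIX performs automatically when an object's actual auditing value is lower than the configured value. The object and library names are examples.

   /* Display the object description, which includes the current auditing value */
   DSPOBJD OBJ(FINANCE/ACCOUNTG) OBJTYPE(*FILE) DETAIL(*FULL)

   /* Set the auditing value so that changes are recorded in the system journal */
   CHGOBJAUD OBJ(FINANCE/ACCOUNTG) OBJTYPE(*FILE) OBJAUD(*CHANGE)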
When a compare request includes an object with a configured object auditing value of
*NONE, any differences found for attributes that could generate T-ZC or T-YC journal
entries are reported as *EC (equal configuration).
You may also want to read the following:
For more information about when MIMIX sets an object's auditing value, see
Managing object auditing on page 54.
For more information about manually setting values and examples, see Setting
data group auditing values manually on page 270.
To see what attributes can be compared and replicated, see the following topics:
Attributes compared and expected results - #FILATR, #FILATRMBR audits
on page 581
Attributes compared and expected results - #OBJATR audit on page 586
Attributes compared and expected results - #IFSATR audit on page 594
Attributes compared and expected results - #DLOATR audit on page 596
Identifying library-based objects for replication
MIMIX uses data group object entries to identify whether to process transactions for
library-based objects. Collectively, the object entries identify which library-based
objects can be replicated by a particular data group.
Each data group object entry identifies one or more library-based objects. An object
entry can specify either a specific or a generic name for the library and object. In
addition, each object entry also identifies the object types and extended object
attributes (for *FILE and *DEVD objects) to be selected, defines a configured object
auditing level for the identified objects, and indicates whether the identified objects
are to be included in or excluded from replication.
For most supported object types which can be identified by data group object entries,
only the system journal replication path is available. For a list of object types, see
Supported object types for system journal replication on page 533. This list includes
information about what can be specified for the extended attributes of *FILE objects.
A limited number of object types which use the system journal replication path have
unique configuration requirements. These are described in Identifying spooled files
for replication on page 93 and Replicating user profiles and associated message
queues on page 95.
For detailed procedures, see Configuring data group entries on page 241.
Replication options for object types journaled to a user journal - For objects of
type *FILE, *DTAARA, and *DTAQ, MIMIX supports multiple replication methods. For
these object types, additional configuration data is evaluated when determining what
replication path to use for the identified objects.
For *FILE objects, the extended attribute and other configuration data are considered
when MIMIX determines what replication path to use for identified objects.
For logical and physical files, MIMIX supports several methods of replication.
Each method varies in its efficiency, in its supported extended attributes, and in
additional configuration requirements. See Identifying logical and physical files
for replication on page 96 for additional details.
For other extended attribute types, MIMIX supports only system journal
replication. Only data group object entries are required to identify these files for
replication.
For *FILE objects configured for replication through the system journal, MIMIX caches
extended file attribute information for a fixed set of *FILE objects. Also, the Omit
content (OMTDTA) parameter provides the ability to omit a subset of data-changing
operations from replication. For more information, see Caching extended attributes of
*FILE objects on page 325 and Omitting T-ZC content from system journal
replication on page 362.
For *DTAARA and *DTAQ object types, MIMIX supports replication using either
system journal or user journal replication processes. A configuration that uses the
user journal is also called an advanced journaling configuration. Additional
information, including configuration requirements are described in Identifying data
areas and data queues for replication on page 103.
How MIMIX uses object entries to evaluate journal entries for replication
The following information and example can help you determine whether the objects
you specify in data group object entries will be selected for replication. MIMIX
determines which replication process will be used only after it determines whether the
library-based object will be replicated.
When determining whether to process a journal entry for a library-based object,
MIMIX looks for a match between the object information in the journal entry and one
of the data group object entries. The data group object entries are checked from the
most specific to the least specific. The library name is the first search element,
followed by the object type, attribute (for files and device descriptions), and the object
name. The most significant match found (if any) is checked to determine whether to
include or exclude the journal entry in replication.
Table 6 shows how MIMIX checks a journal entry for a match with a data group object
entry. The columns are arranged to show the priority of the elements within the object
entry, with the most significant (library name) at left and the least significant (object
name) at right.
Table 6. Matching order for library-based object names
(1) The extended object attribute is only checked for objects of type *FILE and *DEVD.

Search Order   Library Name   Object Type   Attribute (1)   Object Name
1              Exact          Exact         Exact           Exact
2              Exact          Exact         Exact           Generic*
3              Exact          Exact         Exact           *ALL
4              Exact          Exact         *ALL            Exact
5              Exact          Exact         *ALL            Generic*
6              Exact          Exact         *ALL            *ALL
7              Exact          *ALL          Exact           Exact
8              Exact          *ALL          Exact           Generic*
9              Exact          *ALL          Exact           *ALL
10             Exact          *ALL          *ALL            Exact
11             Exact          *ALL          *ALL            Generic*
12             Exact          *ALL          *ALL            *ALL
13             Generic*       Exact         Exact           Exact
14             Generic*       Exact         Exact           Generic*
15             Generic*       Exact         Exact           *ALL
16             Generic*       Exact         *ALL            Exact
17             Generic*       Exact         *ALL            Generic*
18             Generic*       Exact         *ALL            *ALL
19             Generic*       *ALL          Exact           Exact
20             Generic*       *ALL          Exact           Generic*
21             Generic*       *ALL          Exact           *ALL
22             Generic*       *ALL          *ALL            Exact
23             Generic*       *ALL          *ALL            Generic*
24             Generic*       *ALL          *ALL            *ALL
When configuring data group object entries, the flexibility of the generic support
allows a variety of include and exclude combinations for a given library or set of
libraries. But, generic name support can also cause unexpected results if it is not well
planned. Consider the search order shown in Table 6 when configuring data group
object entries to ensure that objects are not unexpectedly included or excluded in
replication.
Example - For example, say that you have a data group configured with data
group object entries like those shown in Table 8. The journal entries MIMIX is
evaluating for replication are shown in Table 7.
A transaction is received from the system journal for program BOOKKEEP in library
FINANCE. MIMIX will replicate this object since it fits the criteria of the first data group
object entry shown in Table 8.
A transaction for file ACCOUNTG in library FINANCE would also be replicated since it
fits the third entry.
A transaction for data area BALANCE in library FINANCE would not be replicated
since it fits the second entry, an Exclude entry.
Likewise, a transaction for data area ACCOUNT1 in library FINANCE would not be
replicated. Although the transaction fits both the second and third entries shown in
Table 8, the second entry determines whether to replicate because it provides a more
significant match in the second criterion checked (object type). The second entry
provides an exact match for the library name, an exact match for object type, and an
object name match of *ALL.
In order for MIMIX to process the data area ACCOUNT1, an additional data group
object entry with process type *INCLD could be added for object type of *DTAARA
with an exact name of ACCOUNT1 or a generic name ACC*.
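A minimal sketch of such an entry follows. It assumes the Add Data Group Object Entry (ADDDGOBJE) command with a three-part data group name; the command name, data group name, and parameter keywords shown are illustrative assumptions, not confirmed syntax.

   /* Assumed syntax: include data area ACCOUNT1 in library FINANCE */
   ADDDGOBJE DGDFN(MYDG SYSA SYSB) LIB1(FINANCE) OBJ1(ACCOUNT1) +
             OBJTYPE(*DTAARA) PRCTYPE(*INCLD)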
Table 7. Sample journal transactions for objects in the system journal

Object Type   Library   Object
*PGM          FINANCE   BOOKKEEP
*FILE         FINANCE   ACCOUNTG
*DTAARA       FINANCE   BALANCE
*DTAARA       FINANCE   ACCOUNT1

Table 8. Sample of data group object entries, arranged in order from most to least specific

Entry   Source Library   Object Type   Object Name   Attribute   Process Type
1       Finance          *PGM          *ALL          *ALL        *INCLD
2       Finance          *DTAARA       *ALL          *ALL        *EXCLD
3       Finance          *ALL          acc*          *ALL        *INCLD

Identifying spooled files for replication
MIMIX supports spooled file replication on an output queue basis. When an output
queue (*OUTQ) is identified for replication by a data group object entry, its spooled
files are not automatically replicated when default values are used. Table 9 identifies
the values required for spooled file replication. When MIMIX processes an output
queue that is identified by an object entry with the appropriate settings, all spooled
files for the output queue (*OUTQ) are replicated by system journal replication
processes.

Table 9. Data group object entry parameter values for spooled file replication

Parameter                           Value
Object type (OBJTYPE)               *ALL or *OUTQ
Replicate spooled files (REPSPLF)   *YES
It is important to consider which spooled files must be replicated and which should
not. Some output queues contain a large number of non-critical spooled files and
probably should not be replicated. Most likely, you want to limit the spooled files that
you replicate to mission-critical information. It may be useful to direct important
spooled files that should be replicated to specific output queues instead of defining a
large number of output queues for replication.
When an output queue is selected for replication and the data group object entry
specifies *YES for Replicate spooled files, MIMIX ensures that the values *SPLFDTA
and *PRTDTA are included in the system value for the security auditing level
(QAUDLVL). This causes the system to generate spooled file (T-SF) entries in the
system journal. When a spooled file is created, moved, deleted, or its attributes are
changed, the resulting entries in the system journal are processed by a MIMIX object
send job and are replicated.
Additional choices for spooled file replication
MIMIX provides additional options to customize your choices for spooled file
replication.
Keeping deleted spooled files: You can also specify to keep spooled files on the
target system after they have been deleted from the source system by using the Keep
deleted spooled files parameter on the data group definition. The parameter is also
available on commands to add and change data group object entries.
Options for spooled file status: You can specify additional options for processing
spooled files. The Spooled file options (SPLFOPT) parameter is only available on
commands to add and change data group object entries. The following values support
choosing how status of replicated spooled files is handled on the target system:
*NONE This is the shipped default value. Spooled files on the target system will
have the same status as on the source system.
*HLD All replicated spooled files are put on hold on the target system regardless
of their status on the source system.
*HLDONSAV All replicated spooled files that have a saved status on the source
system will be put on hold on the target system. Spooled files on the source
system which have other status values will have the same status on the target
system.
This parameter can be helpful if your environment includes programs which
automatically process spooled files on the target system. For example, if you have a
program that automatically prints spooled files, you may want to use one of these
values to control what is printed after replication when printer writers are active.
If you move a spooled file between output queues which have different configured
values for the SPLFOPT parameter, consider the following:
Spooled files moved from an output queue configured with SPLFOPT(*NONE) to
an output queue configured with SPLFOPT(*HLD) are placed in a held state on
the target system.
Spooled files moved from an output queue configured with SPLFOPT(*HLD) to an
output queue configured with SPLFOPT(*NONE) or SPLFOPT(*HLDONSAV)
remain in a held state on the target system until you take action to release them.
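As a sketch, an output queue could be identified for spooled file replication with values like the following; the ADDDGOBJE command name and data group name are assumptions, while REPSPLF and SPLFOPT are the parameters described above.

   /* Assumed syntax: replicate spooled files from PRODOUTQ and hold */
   /* saved spooled files on the target system                       */
   ADDDGOBJE DGDFN(MYDG SYSA SYSB) LIB1(QUSRSYS) OBJ1(PRODOUTQ) +
             OBJTYPE(*OUTQ) PRCTYPE(*INCLD) REPSPLF(*YES) +
             SPLFOPT(*HLDONSAV)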
Replicating user profiles and associated message queues
When user profile objects (*USRPRF) are identified by a data group object entry
which specifies *ALL or *USRPRF for the Object type parameter, MIMIX replicates the
objects using system journal replication processes.
When MIMIX replicates user profiles, the message queue (*MSGQ) objects
associated with the *USRPRF objects may also be created automatically on the target
system as a result of replication. If the *MSGQ objects are not also configured for
replication, the private authorities for the *MSGQ objects may not be the same
between the source and target systems. If it is necessary for the private authorities for
the *MSGQ objects to be identical between the source and target systems, it is
recommended that *MSGQ objects associated with *USRPRF objects be configured
for replication.
For example, Table 10 shows the data group object entries required to replicate user
profiles beginning with the letter A and maintain identical private authorities on
associated message queues. In this example, the user profile ABC and its associated
message queue are excluded from replication.
Table 10. Sample data group object entries for maintaining private authorities of
message queues associated with user profiles

Entry   Source Library   Object Type   Object Name   Process Type
1       QSYS             *USRPRF       A*            *INCLD
2       QUSRSYS          *MSGQ         A*            *INCLD
3       QSYS             *USRPRF       ABC           *EXCLD
4       QUSRSYS          *MSGQ         ABC           *EXCLD
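Expressed as commands, the entries in Table 10 might be created as follows; the ADDDGOBJE command name and data group name are illustrative assumptions.

   /* Assumed syntax: include user profiles A* and their message queues, */
   /* excluding user profile ABC and its message queue                   */
   ADDDGOBJE DGDFN(MYDG SYSA SYSB) LIB1(QSYS) OBJ1(A*) OBJTYPE(*USRPRF) PRCTYPE(*INCLD)
   ADDDGOBJE DGDFN(MYDG SYSA SYSB) LIB1(QUSRSYS) OBJ1(A*) OBJTYPE(*MSGQ) PRCTYPE(*INCLD)
   ADDDGOBJE DGDFN(MYDG SYSA SYSB) LIB1(QSYS) OBJ1(ABC) OBJTYPE(*USRPRF) PRCTYPE(*EXCLD)
   ADDDGOBJE DGDFN(MYDG SYSA SYSB) LIB1(QUSRSYS) OBJ1(ABC) OBJTYPE(*MSGQ) PRCTYPE(*EXCLD)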
Identifying logical and physical files for replication
MIMIX supports multiple ways of replicating *FILE objects with extended attributes of
LF, PF-DTA, PF38-DTA, PF-SRC, PF38-SRC. MIMIX configuration data determines
the replication method used for these logical and physical files. The following
configurations are possible:
MIMIX Dynamic Apply - MIMIX Dynamic Apply is strongly recommended. In this
configuration, logical files and physical files (source and data) are replicated
primarily through the user (database) journal. This configuration is the most
efficient way to replicate LF, PF-DTA, PF38-DTA, PF-SRC, and PF38-SRC files. In
this configuration, files are identified by data group object entries and file entries.
Legacy cooperative processing - Legacy cooperative processing supports only
data files (PF-DTA and PF38-DTA). It does not support source physical files or
logical files. In legacy cooperative processing, record data and member data
operations are replicated through user journal processes, while all other file
transactions such as creates, moves, renames, and deletes are replicated
through system journal processes. The database processes can use either
remote journaling or MIMIX source-send processes, making legacy cooperative
processing the recommended choice for physical data files when the remote
journaling environment required by MIMIX Dynamic Apply is not possible. In this
configuration, files are identified by data group object entries and file entries.
User journal (database) only configurations - Environments that do not meet
MIMIX Dynamic Apply requirements but which have data group definitions that
specify TYPE(*DB) can only replicate data changes to physical files. These
configurations may not be able to replicate other operations such as creates,
restores, moves, renames, and some copy operations. In this configuration, files
are identified by data group file entries.
System journal (object) only configurations - Data group definitions which
specify TYPE(*OBJ) are less efficient at processing logical and physical files. The
entire member is updated with each replicated transaction. Members must be
closed in order for replication to occur. In this configuration, files are identified by
data group object entries.
You should be aware of common characteristics of replicating library-based objects,
such as when the configured object auditing value is used and how MIMIX interprets
data group entries to identify objects eligible for replication. For this information, see
Configured object auditing value for data group entries on page 89 and How MIMIX
uses object entries to evaluate journal entries for replication on page 92.
Some advanced techniques may require specific configurations. See Configuring
advanced replication techniques on page 332 for additional information.
For detailed procedures, see Creating data group object entries on page 242.
Considerations for LF and PF files
Newly created data groups are automatically configured to use MIMIX Dynamic Apply
when its requirements and restrictions are met and shipped command defaults are
used. With this configuration, logical and physical files are processed primarily from
the user journal.
Cooperative journal - The value specified for the Cooperative journal (COOPJRN)
parameter in the data group definition is critical to determining how files are
cooperatively processed. When creating a new data group, you can explicitly specify
a value or you can allow MIMIX to automatically change the default value (*DFT) to
either *USRJRN or *SYSJRN based on whether operating system and configuration
requirements for MIMIX Dynamic Apply are met. When requirements are met, MIMIX
changes the value *DFT to *USRJRN. When the MIMIX Dynamic Apply requirements
are not met, MIMIX changes *DFT to *SYSJRN.
Note: Data groups created prior to upgrading to version 5 continue to use their
existing configuration. The installation process sets the value of COOPJRN to
*SYSJRN and this value remains in effect until you take action as described in
Converting to MIMIX Dynamic Apply on page 135.
When a data group definition meets the requirements for MIMIX Dynamic Apply, any
logical files and physical (source and data) files properly identified for cooperative
processing will be processed via MIMIX Dynamic Apply unless a known restriction
prevents it.
When a data group definition does not meet the requirements for MIMIX Dynamic
Apply but still meets legacy cooperative processing requirements, any PF-DTA or
PF38-DTA files properly configured for cooperative processing will be replicated using
legacy cooperative processing. All other types of files are processed using system
journal replication.
Logical file considerations - Consider the following for logical files.
Logical files are replicated through the user journal when MIMIX Dynamic Apply
requirements are met. Otherwise, they are replicated through the system journal.
It is strongly recommended that logical files reside in the same data group as all of
their associated physical files.
Physical file considerations - Consider the following for physical files:
Physical files (source and data) are replicated through the user journal when
MIMIX Dynamic Apply requirements are met. Otherwise, data files are replicated
using legacy cooperative processing if those requirements are met, and source
files are replicated through the system journal.
If a data group definition specifies TYPE(*DB) and the configuration meets other
MIMIX Dynamic Apply requirements, source files need to be identified by both
data group object entries and data group file entries.
If a data group is configured for only user journal replication (TYPE is *DB) and
does not meet other configuration requirements for MIMIX Dynamic Apply, source
files should be identified by only data group file entries.
If a data group is configured for only system replication (TYPE is *OBJ ), any
source files should be identified by only data group object entries. Any data group
object entries configured for cooperative processing will be replicated through the
system journal and should not have any corresponding data group file entries.
Physical files with referential constraints require a field in another physical file to
be valid. All physical files in a referential constraint structure must be in the same
database apply session. See Requirements and limitations of MIMIX Dynamic
Apply on page 101 and Requirements and limitations of legacy cooperative
processing on page 102 for additional information. For more information about
load balancing apply sessions, see Database apply session balancing on
page 80.
Commitment control - This database technique allows multiple updates to one or
more files to be considered a single transaction. When used, commitment control
maintains database integrity by not exposing a part of a database transaction until the
whole transaction completes. This ensures that there are no partial updates when the
process is interrupted prior to the completion of the transaction. This technique is also
useful in the event that a partially updated transaction must be removed, or rolled
back, from the files or when updates identified as erroneous need to be removed.
MIMIX fully simulates commitment control on the target system. When commitment
control is used on a source system in a MIMIX environment, MIMIX maintains the
integrity of the database on the target system by preventing partial transactions from
being applied until the whole transaction completes. If the source system becomes
unavailable, MIMIX will not have applied incomplete transactions on the target
system. In the event of an incomplete (or uncommitted) commitment cycle, the
integrity of the database is maintained.
If your application dynamically creates database files that are subsequently used in a
commitment control environment, use MIMIX Dynamic Apply for replication.
Without MIMIX Dynamic Apply, replication of the create operation may fail if a commit
cycle is open when MIMIX tries to save the file. The save operation will be delayed
and may fail if the file being saved has uncommitted transactions.
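For reference, the following sketch shows the basic IBM i commitment control pattern on the source system whose transaction boundaries MIMIX honors when applying changes on the target; the program name is an example only.

   /* Begin commitment control; *CHG locks changed records until commit */
   STRCMTCTL LCKLVL(*CHG)
   /* The application performs multiple related database updates */
   CALL PGM(FINANCE/POSTBATCH)
   /* Complete the transaction; MIMIX applies it on the target as a unit */
   COMMIT
   ENDCMTCTL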
Files with LOBs
Large objects (LOBs) in files that are configured for either MIMIX Dynamic Apply or
legacy cooperative processing are automatically replicated.
LOBs can greatly increase the amount of data being replicated. As a result, you may
see some degradation in your replication activity. The amount of degradation you see
is proportionate to the amount of journal entries with LOBs that are applied per hour.
This is also true during switch processing if you are using remote journaling and have
unconfirmed entries with LOB data.
Since the volume of data to be replicated can be very large, you should consider
using the minimized journal entry data function along with LOB replication. IBM
support for minimized journal entry data can be extremely helpful when database
records contain static, very large objects. If minimized journal entry data is enabled,
journal entries for database files containing unchanged LOB data may be complete
and therefore processed like any other complete journal entry. This can significantly
improve performance, throughput, and storage requirements. If minimized journal
entry data is used with files containing LOBs, keyed replication is not supported. For more
information, see Minimized journal entry data on page 318.
User exit programs may be affected when journaled LOB data is added to an existing
data group. Non-minimized LOB data produces incomplete entries. For incomplete
journal entries, two or more entries with duplicate journal sequence numbers and
journal codes and types will be provided to the user exit program when the data for
the incomplete entry is retrieved and segmented. Programs need to correctly handle
these duplicate entries representing the single, original journal entry.
You should also be aware of the following restrictions:
Copy Active File (CPYACTF) and Reorganize Active File (RGZACTF) do not work
against database files with LOB fields.
There is no collision detection for LOB data. Most collision detection classes
compare the journal entries with the content of the record on the target system.
Although you can compare the actual content of the record, you cannot compare
the content of the LOBs.
Journaled changes cannot be removed for files with LOBs that are replicated by a
data group that does not use remote journaling (RJLNK(*NO)). In this scenario,
the F-RC entry generated by the IBM command Remove Journaled Changes
(RMVJRNCHG) cannot be applied on the target system.
Configuration requirements for LF and PF files
MIMIX Dynamic Apply and legacy cooperative processing have unique requirements
for data group definitions as well as many common requirements for data group object
entries and file entries, as indicated in Table 11. In both configurations, you must
have:
A data group definition which specifies the required values.
One or more data group object entries that specify the required values. These
entries identify the items within the name space for replication. You may need to
create additional entries to achieve the desired results, including entries which
specify a Process type of *EXCLD.
The identified existing objects must be journaled to the journal defined for the data
group.
Data group file entries for the items identified by data group object entries.
Processing cannot occur without these corresponding data group file entries.
Corresponding data group file entries - Both MIMIX Dynamic Apply and legacy
cooperative processing require that existing files identified by a data group object
entry which specifies *YES for the Cooperate with DB (COOPDB) parameter must
also be identified by data group file entries.
When a file is identified by both a data group object entry and a data group file entry,
the following are also required:
The object entry must enable the cooperative processing of files by specifying
COOPDB(*YES) and COOPTYPE(*FILE).
If name mapping is used between systems, the data group object entry and file
entry must have the same name mapping defined.
If the data group object entry and file entry specify different values for the File and
tracking ent. opts (FEOPT) parameter, the values specified in the data group file
entry take precedence.
Files defined by data group file entries must have journaling started and must be
synchronized. If journaling is not started, MIMIX cannot replicate activity for the
file.

Table 11. Key configuration values required for MIMIX Dynamic Apply and legacy
cooperative processing

Data Group Definition
   Data group type (TYPE)
      MIMIX Dynamic Apply: *ALL or *DB
      Legacy cooperative processing: *ALL
      Notes: see Requirements and limitations of MIMIX Dynamic Apply on page 101.
   Use remote journal link (RJLNK)
      MIMIX Dynamic Apply: *YES
      Legacy cooperative processing: any value
   Cooperative journal (COOPJRN)
      MIMIX Dynamic Apply: *DFT or *USRJRN
      Legacy cooperative processing: *DFT or *SYSJRN
      Notes: see the discussion of the cooperative journal default in
      Considerations for LF and PF files.
   File and tracking ent. opts (FEOPT), Replication type
      MIMIX Dynamic Apply: *POSITION
      Legacy cooperative processing: any value
      Notes: see Requirements and limitations of MIMIX Dynamic Apply on page 101.

Data Group Object Entries
   Object type (OBJTYPE)
      MIMIX Dynamic Apply: *ALL or *FILE
      Legacy cooperative processing: *ALL or *FILE
   Attribute (OBJATR)
      MIMIX Dynamic Apply: *ALL or one of the following: LF, LF38, PF-DTA,
      PF-SRC, PF38-DTA, PF38-SRC
      Legacy cooperative processing: *ALL, PF-DTA, or PF38-DTA
   Cooperate with database (COOPDB)
      MIMIX Dynamic Apply: *YES
      Legacy cooperative processing: *YES
      Notes: see Corresponding data group file entries.
   Cooperating object types (COOPTYPE)
      MIMIX Dynamic Apply: *FILE
      Legacy cooperative processing: *FILE
   File and tracking ent. opts (FEOPT), Replication type
      MIMIX Dynamic Apply: *POSITION
      Legacy cooperative processing: any value
      Notes: see Requirements and limitations of MIMIX Dynamic Apply on page 101.
Typically, data group object entries are created during initial configuration and are
then used as the source for loading the data group file entries. The #DGFE audit can
be used to determine whether corresponding data group file entries exist for the files
identified by data group object entries.
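A sketch of that loading step follows, assuming a Load Data Group File Entries (LODDGFE) command that takes the object entries as its configuration source; both the command name and its parameters are assumptions rather than confirmed syntax.

   /* Assumed syntax: build file entries from the data group's object entries */
   LODDGFE DGDFN(MYDG SYSA SYSB) CFGSRC(*DGOBJE)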
Requirements and limitations of MIMIX Dynamic Apply
MIMIX Dynamic Apply requires that user journal replication be configured to use
remote journaling. Specific data group definition and data group entry requirements
are listed in Table 11.
MIMIX Dynamic Apply configurations have the following limitations.
Files in library - It is recommended that files within a single library be replicated
using the same user journal.
Data group file entries for members - Data group file entries (DGFE) for specific
member names are not supported unless they are created by MIMIX. MIMIX may
create these for error hold processing.
Name mapping - MIMIX Dynamic Apply configurations support name mapping at the
library level only. Entries with object name mapping are not supported. For example,
MYLIB/MYOBJ mapped to MYLIB/OTHEROBJ is not supported. If you require object
name mapping, it is supported in legacy cooperative processing configurations.
TYPE(*DB) data groups - MIMIX Dynamic Apply configurations that specify
TYPE(*DB) in the data group definition will not be able to replicate the following
actions:
Files created using CPYF CRTFILE(*YES) on OS V5R3 into a library configured
for replication
Files restored into a source library configured for replication
Files moved or renamed from a non-replicated library into a replicated library
Files created which are not otherwise journaled upon creation into a library
configured for replication
Files created by these actions can be added to the MIMIX configuration by running
the #DGFE audit. The audit recovery will synchronize the file as part of adding the file
entry to the configuration. In data groups that specify TYPE(*ALL), the above actions
are fully supported.
Referential constraints - The following restrictions apply:
If using referential constraints with *CASCADE or *SETNULL actions, you must
specify *YES for the Journal on target (JRNTGT) parameter in the data group
definition.
Physical files with referential constraints require a field in another physical file to
be valid. All physical files in a referential constraint structure must be in the same
database apply session. If a particular preferred apply session has been specified
in file entry options (FEOPT), MIMIX may ignore the specification in order to
satisfy this restriction.
Requirements and limitations of legacy cooperative processing
Legacy cooperative processing requires that data groups be configured for both
database (user journal) and object (system journal) replication. While remote
journaling is recommended, MIMIX source-send processing for database replication
is also supported. Specific data group definition and data group entry requirements
are listed in Table 11.
Legacy cooperative processing configurations have the following limitations.
Supported extended attributes - Legacy cooperative processing supports only data
files (PF-DTA and PF38-DTA).
When a *FILE object is configured for legacy cooperative processing, only file and
member attribute changes identified by T-ZC journal entries with a subclass of
7=Change are logged and replicated through system journal replication processes. All
member and data changes are logged and replicated through user journal replication
processes.
File entry options - If a file is moved or renamed and both names are defined by a
data group file entry, the file entry options must be the same in both data group file
entries.
Referential constraints - Physical files with referential constraints require a field in
another physical file to be valid. All physical files in a referential constraint structure
must be in the same apply session. If this is not possible, contact CustomerCare.
Identifying data areas and data queues for replication
MIMIX uses data group object entries to determine whether to process transactions
for data area (*DTAARA) and data queue (*DTAQ) object types. Object entries can be
configured so that these object types can be replicated from journal entries recorded
in a user journal (default) or in the system journal (optional).
While user journal replication, also called advanced journaling, has significant
advantages, you must decide whether it is appropriate for your environment. For more
information, see Planning for journaled IFS objects, data areas, and data queues on
page 78.
For detailed procedures, see Configuring data group entries on page 241.
Data areas can also be replicated by the data area poller process associated with the
user journal. However, this type of replication is the least preferred and requires data
group data area entries. See Creating data group data area entries on page 261.
Configuration requirements - data areas and data queues
For any data group object entries you create for data areas or data queues, consider
the following:
You must have at least one data group object entry which specifies a Process
type of *INCLD. You may need to create additional entries to achieve the desired
results. This may include entries which specify a Process type of *EXCLD.
When specifying objects in data group object entries, specify only the objects that
need to be replicated. Specifying *ALL or a generic name for the System 1 object
(OBJ1) parameter will select multiple objects within the library specified for
When you create data group object entries, you can specify an object auditing
value within the configuration. The configured object auditing value affects how
MIMIX handles changes to attributes of library-based objects. It is particularly
important for, but not limited to, objects configured for system journal replication.
For objects configured for user journal replication, the configured value can affect
MIMIX performance. For detailed information, see Configured object auditing
value for data group entries on page 89.
Additional requirements for user journal replication - The following additional
requirements must be met before data areas or data queues identified by data group
object entries can be replicated with user journal processes.
The data group definition and data group object entries must specify the values
indicated in Table 12 for critical parameters.
Object tracking entries must exist for the objects identified by properly configured
object entries. Typically these are created automatically when the data group is
started.
Journaling must be started on both the source and target systems for the objects
identified by object tracking entries.

Table 12. Critical configuration parameters for replicating *DTAARA and *DTAQ
objects from a user journal

Data Group Definition
   Data group type (TYPE)                Required value: *ALL
Data Group Object Entry
   Cooperate with database (COOPDB)      Required value: *YES
   Cooperating object types (COOPTYPE)   Required value: *DFT, or *DTAARA and *DTAQ
      (The value *DFT includes *FILE, *DTAARA, and *DTAQ.)
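For example, journaling for a pre-existing data area could be started on each system with IBM's Start Journal Object (STRJRNOBJ) command; the object, library, and journal names are examples only, and in many configurations MIMIX starts journaling for you.

   /* Start journaling the data area so user journal replication can capture changes */
   STRJRNOBJ OBJ(FINANCE/BALANCE) OBJTYPE(*DTAARA) JRN(JRNLIB/DGJRN)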
Additionally, if any of the following apply, see Planning for journaled IFS objects, data
areas, and data queues on page 78 for additional details:
Converting existing configurations - When converting an existing data group to
use or add advanced journaling, you must consider whether journals should be
shared and whether data area or data queue objects should be replicated in a
data group that also replicates database files.
Serialized transactions - If you need to serialize transactions for database files
and data area or data queue objects replicated from a user journal, you may need
to adjust the configuration for the replicated files.
Apply session load balancing - One database apply session, session A, is used
for all data area and data queue objects that are replicated from a user journal. Other
replication activity can use this apply session, and may cause it to become
overloaded. You may need to adjust the configuration accordingly.
User exit programs - If you use user exit programs that process user journal
entries, you may need to modify your programs.
Restrictions - user journal replication of data areas and data queues
For operating systems V5R4 and above, changes to data area and data queue
content, as well as changes to structure (such as moves and renames) and number
(such as creates and deletes), are recognized and supported through user journal
replication.
Be aware of the following restrictions when replicating data areas and data queues
using MIMIX user journal replication processes:
MIMIX does not support before-images for data updates to data areas, and
cannot perform data integrity checks on the target system to ensure that data
being replaced on the target system is an exact match to the data replaced on the
source system. Furthermore, MIMIX does not provide a mechanism to prevent
users or applications from updating replicated data areas on the target system
accidentally. To guarantee the data integrity of replicated data areas between the
source and target systems, you should run MIMIX AutoGuard on a regular basis.
The apply of data area and data queue objects is restricted to a single database
apply job (DBAPYA). If a data group has too much replication activity, this job may
fall behind in the processing of journal entries. If this occurs, you should load-level
the apply sessions by moving some or all of the database files to another
database apply job.
Pre-existing data areas and data queues to be selected for replication must have
journaling started on both the source and target systems before the data group is
started.
The ability to replicate Distributed Data Management (DDM) data areas and data
queues is not supported. If you need to replicate DDM data areas and data
queues, use standard system journal replication methods.
The subset of E and Q journal code entry types supported for user journal
replication are listed in Journal codes and entry types for journaled data areas
and data queues on page 615.
Identifying IFS objects for replication
MIMIX uses data group IFS entries to determine whether to process transactions for
objects in the integrated file system (IFS), and what replication path is used. IFS
entries can be configured so that the identified objects can be replicated from journal
entries recorded in the system journal (default) or in a user journal (optional).
One of the most important decisions in planning for MIMIX is determining which IFS
objects you need to replicate. Most likely, you want to limit the IFS objects you
replicate to mission-critical objects.
User journal replication, also called advanced journaling, is well suited to the dynamic
environments of IFS objects. While user journal replication has significant
advantages, you must decide whether it is appropriate for your environment. For more
information, see Planning for journaled IFS objects, data areas, and data queues on
page 78.
For detailed procedures, see Creating data group IFS entries on page 255.
Objects configured for user journal replication may have create, restore, delete,
move, and rename operations. Differences in implementation details are described in
Processing variations for common operations on page 117.
Supported IFS file systems and object types
The IFS objects to be replicated must be in the Root (/) or QOpenSys file systems.
The following object types are supported:
Directories (*DIR)
Stream Files (*STMF)
Symbolic Links (*SYMLNK)
Table 13 identifies the IFS file systems that are not supported by MIMIX and cannot
be specified for either the System 1 object prompt or the System 2 object prompt in
the Add Data Group IFS Entry (ADDDGIFSE) command.
Table 13. IFS file systems that are not supported by MIMIX

/QDLS           /QLANSrv    /QOPT
/QFileSvr.400   /QNetWare   /QSYS.LIB
/QFPNWSSTG     /QNTC       /QSR

Journaling is not supported for files in network server storage spaces (NWSS), which
are used as virtual disks by IXS and IXA technology. Therefore, IFS objects
configured to be replicated from a user journal must be in the Root (/) or QOpenSys
file systems.

Refer to the IBM book OS/400 Integrated File System Introduction for more
information about IFS.
Considerations when identifying IFS objects
The following considerations for IFS objects apply regardless of whether replication
occurs through the system journal or user journal.
MIMIX processing order for data group IFS entries
Data group IFS entries are processed in order from most generic to most specific. IFS
entries are processed using the unicode character set. The first entry (more generic)
found that matches the object is used until a more specific match is found.
Long IFS path names
MIMIX currently replicates IFS path names of up to 512 characters. However, any
MIMIX command that takes an IFS path name as input may be susceptible to a 506
character limit. This character limit may be reduced even further if the IFS path name
contains embedded apostrophes ('). In this case, the supported IFS path name length
is reduced by four characters for every apostrophe the path name contains. For
example, a path name containing two embedded apostrophes is limited to 498
characters.
For information about IFS path name naming conventions, refer to the IBM book,
Integrated File System Introduction V5R4.
Upper and lower case IFS object names
When you create data group IFS entries, be aware of the following information about
character case sensitivity for specifying IFS object names.
The root file system on the System i is generally not case sensitive. Character
case is preserved when creating objects, but otherwise character case is ignored.
For example, you can create /AbCd or /ABCD, but not both. You can refer to the
object by any mix of character case, such as /AbCd, /abcd, or /ABCD.
The QOpenSys file system on the System i is generally case sensitive. Except for
"QOpenSys" in a path name, all characters in a path name are case sensitive. For
example, you can create both /QOpenSys/AbCd and /QOpenSys/ABCD. You
must specify the correct character case when referring to an object.
During replication, MIMIX preserves the character case of IFS object names. For
example, the creation of /AbCd on the source system will be replicated as /AbCd on
the target system.
Replication will not alter the character case of objects that already exist on the target
system (unless the object is deleted and recreated). In the root file system, /AbCd and
/ABCD are equivalent names. If /ABCD exists as such on the target system, changes
to /AbCd will be replicated to /ABCD, but the object name will not be changed to
/AbCd on the target system.
When character case is not a concern (root file system), MIMIX may present path
names as all upper case or all lower case. For example, the WRKDGACTE display
shows all lower case, while the WRKDGIFSE display shows all upper case. Names
can be entered in either case. For example, subsetting WRKDGACTE by /AbCd and
/ABCD will produce the same result.
When character case does matter (QOpenSys file system), MIMIX presents path
names in the appropriate case. For example, the WRKDGACTE display and the
WRKDGIFSE display would show /QOpenSys/AbCd, if that is the actual object path.
Names must be entered in the appropriate character case. For example, subsetting
the WRKDGACTE display by /QOpenSys/ABCD will not find /QOpenSys/AbCd.
Configured object auditing value for IFS objects
When you create data group IFS entries, you can specify an object auditing value
within the configuration. The configured object auditing value affects how MIMIX
handles changes to attributes of IFS objects. It is particularly important for, but not
limited to, objects configured for system journal replication. For IFS objects configured
for user journal replication, the configured value can affect MIMIX performance. For
detailed information, see Configured object auditing value for data group entries on
page 89.
Configuration requirements - IFS objects
For any data group IFS entry you create, consider the following:
You must have at least one data group IFS entry which specifies a Process type
of *INCLD. You may need to create additional entries to achieve the desired
results. This may include entries which specify a Process type of *EXCLD.
When specifying IFS objects in data group IFS entries, specify only the IFS
objects that need to be replicated. The System 1 object (OBJ1) parameter selects
all IFS objects within the path specified.
You can specify an object auditing value within the configuration. For details, see
Configured object auditing value for data group entries on page 89.
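As a sketch, an include entry for a directory tree might be added with the Add Data Group IFS Entry (ADDDGIFSE) command mentioned earlier in this topic; the data group name and the process type keyword are illustrative assumptions.

   /* Assumed syntax: include all IFS objects in the /payroll path */
   ADDDGIFSE DGDFN(MYDG SYSA SYSB) OBJ1('/payroll') PRCTYPE(*INCLD)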
Additional requirements for user journal replication - The following additional
requirements must be met before IFS objects identified by data group IFS entries can
be replicated with user journal processes.
The data group definition and data group IFS entries must specify the values
indicated in Table 14 for critical parameters.
IFS tracking entries must exist for the objects identified by properly configured IFS
entries. Typically these are created automatically when the data group is started.
Journaling must be started on both the source and target systems for the objects
identified by IFS tracking entries.
Table 14. Critical configuration parameters for replicating IFS objects from a user journal

Data Group Definition
   Data group type (TYPE)             Required value: *ALL
Data Group IFS Entry
   Cooperate with database (COOPDB)   Required value: *YES
      (The default, *NO, results in system journal replication.)
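For example, journaling for a pre-existing IFS object can be started with IBM's Start Journal (STRJRN) command; the path and journal names are examples only, and tracking entries are typically created automatically when the data group is started.

   /* Start journaling an IFS stream file to the journal used by the data group */
   STRJRN OBJ(('/payroll/current.dat')) JRN('/QSYS.LIB/JRNLIB.LIB/DGJRN.JRN')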
Additionally, see Planning for journaled IFS objects, data areas, and data queues on
page 78 for additional details if any of the following apply:
Converting existing configurations - When converting an existing data group to
use or add advanced journaling, you must consider whether journals should be
shared and whether IFS objects should be replicated in a data group that also
replicates database files.
Serialized transactions - If you need to serialize transactions for database files
and IFS objects replicated from a user journal, you may need to adjust the
configuration for the replicated files.
Apply session load balancing - One database apply session, session A, is used
for all IFS objects that are replicated from a user journal. Other replication activity
can use this apply session, and may cause it to become overloaded. You may
need to adjust the configuration accordingly.
User exit programs - If you use user exit programs that process user journal
entries, you may need to modify your programs.
Restrictions - user journal replication of IFS objects
When considering replicating IFS objects using MIMIX user journal replication
processes, be aware of the following restrictions:
The operating system does not support before-images for data updates to IFS
objects. As such, MIMIX cannot perform data integrity checks on the target
system to ensure that data being replaced on the target system is an exact match
to the data replaced on the source system. MIMIX will check the integrity of the
IFS data through the use of regularly scheduled audits, specifically the #IFSATR
audit.
The apply of IFS objects is restricted to a single database apply job (DBAPYA). If
a data group has too much replication activity, this job may fall behind in the
processing of journal entries. If this occurs, you should load-level the apply
sessions by moving some or all of the database files to another database apply
job.
Pre-existing IFS objects to be selected for replication must have journaling started
on both the source and target systems before the data group is started.
A physical object, such as an IFS object, is identified by a hard link. Typically, an
unlimited number of hard links can be created as identifiers for one object. For
journaled IFS objects, MIMIX does not support the replication of additional hard
links because doing so causes the same FID to be used for multiple names for the
same IFS object.
The ability to lock IFS objects on apply in order to prevent unauthorized updates
from occurring on the target system is not supported when advanced journaling is
configured.
The ability to use the Remove Journaled Changes (RMVJRNCHG) command for
removing journaled changes for IFS tracking entries is not supported.
It is recommended that option 14 (Remove related) on the Work with Data Group
Activity (WRKDGACT) display not be used for failed activity entries representing
actions against cooperatively processed IFS objects. Because this option does
not remove the associated tracking entries, orphan tracking entries can
accumulate on the system.
The subset of B journal code entry types supported for user journal replication are
listed in Journal codes and entry types for journaled IFS objects on page 615.
Identifying DLOs for replication
MIMIX uses data group DLO entries to determine whether to process system journal
transactions for document library objects (DLOs). Each DLO entry for a data group
includes a folder path, document name, owner, an object auditing level, and an
include or exclude indicator. In addition to specific names, MIMIX supports generic
names for DLOs. In a data group DLO entry, the folder path and document can be
generic or *ALL.
When you create data group DLO entries, you can specify an object auditing value
within the configuration. The configured object auditing value affects how MIMIX
handles changes to attributes of DLOs. For detailed information, see Configured
object auditing value for data group entries on page 89.
For detailed procedures, see Creating data group DLO entries on page 259.
How MIMIX uses DLO entries to evaluate journal entries for replication
How items are specified within a DLO determines whether MIMIX selects or omits
them from processing. This information can help you understand what is included or
omitted.
When determining whether to process a journal entry for a DLO, MIMIX looks for a
match between the DLO information in the journal entry and one of the data group
DLO entries. The data group DLO entries are checked from the most specific to the
least specific. The folder path is the most significant search element, followed by the
document name, then the owner. The most significant match found (if any) is checked
to determine whether to process the entry.
An exact or generic folder path name in a data group DLO entry applies to folder
paths that match the entry as well as to any unnamed child folders of that path which
are not covered by a more explicit entry. For example, a data group DLO entry with a
folder path of ACCOUNT would also apply to a transaction for a document in folder
path ACCOUNT/JANUARY. If a second data group DLO entry with a folder path of
ACCOUNT/J* were added, it would take precedence because it is more specific.
For a folder path with multiple elements (for example, A/B/C/D), the exact checks and
generic checks against data group DLO entries are performed on the path. If no
match is found, the lowest path element is removed and the process is repeated. For
example, A/B/C/D is reduced to A/B/C and is rechecked. This process continues until
a match is found or until all elements of the path have been removed. If there is still no
match, then checks for folder path *ALL are performed.
Sequence and priority order for documents
Table 15 illustrates the sequence in which MIMIX checks DLO entries for a match.
Table 15. Matching order for document names

Search Order   Folder Path   Document Name   Owner
1              Exact         Exact           Exact
2              Exact         Exact           *ALL
3              Exact         Generic*        Exact
4              Exact         Generic*        *ALL
5              Exact         *ALL            Exact
6              Exact         *ALL            *ALL
7              Generic*      Exact           Exact
8              Generic*      Exact           *ALL
9              Generic*      Generic*        Exact
10             Generic*      Generic*        *ALL
11             Generic*      *ALL            Exact
12             Generic*      *ALL            *ALL
13             *ALL          Exact           Exact
14             *ALL          Exact           *ALL
15             *ALL          Generic*        Exact
16             *ALL          Generic*        *ALL
17             *ALL          *ALL            Exact
18             *ALL          *ALL            *ALL
Document example - Table 16 illustrates some sample data group DLO entries. For
example, a transaction for any document in a folder named FINANCE would be
blocked from replication because it matches entry 6. A transaction for document
ACCOUNTS in FINANCE1 owned by JONESB would be replicated because it
matches entry 4. If SMITHA owned ACCOUNTS in FINANCE1, the transaction would
be blocked by entry 3. Likewise, documents LEDGER.JUL and LEDGER.AUG in
FINANCE1 would be blocked by entry 2 and document PAYROLL in FINANCE1
would be blocked by entry 1. A transaction for any document in FINANCE2 would be
blocked by entry 6. However, transactions for documents in FINANCE2/Q1, or in a
child folder of that path, such as FINANCE2/Q1/FEB, would be replicated because of
entry 5.

Table 16. Sample data group DLO entries, arranged in order from most to least specific

Entry   Folder Path   Document   Owner    Process Type
1       FINANCE1      PAYROLL    *ALL     *EXCLD
2       FINANCE1      LEDGER*    *ALL     *EXCLD
3       FINANCE1      *ALL       SMITHA   *EXCLD
4       FINANCE1      *ALL       *ALL     *INCLD
5       FINANCE2/Q1   *ALL       *ALL     *INCLD
6       FIN*          *ALL       *ALL     *EXCLD
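Expressed as commands, entries 1 and 4 of Table 16 might be created as follows, assuming an Add Data Group DLO Entry (ADDDGDLOE) command with folder, document, owner, and process type parameters; every keyword shown is an illustrative assumption.

   /* Assumed syntax: exclude document PAYROLL in folder FINANCE1 */
   ADDDGDLOE DGDFN(MYDG SYSA SYSB) FLR1(FINANCE1) DOC1(PAYROLL) OWNER(*ALL) PRCTYPE(*EXCLD)
   /* Assumed syntax: include all other documents in folder FINANCE1 */
   ADDDGDLOE DGDFN(MYDG SYSA SYSB) FLR1(FINANCE1) DOC1(*ALL) OWNER(*ALL) PRCTYPE(*INCLD)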
Sequence and priority order for folders
Folders are treated somewhat differently than documents. Folders are replicated
based on whether there are any data group DLO entries with a process type of
*INCLD that would require the folder to exist on the target system. If a folder needs to
exist to satisfy the folder path of an include entry, the folder will be replicated even if a
different exclude entry prevents replication of the contents of the folder.
There is one exception to the requirement of replicating folders to satisfy the folder
path for an include entry. A folder will not be replicated when the only include entry
that would cause its replication specifies *ALL for its folder path and the folder
matches an exclude entry with an exact or a generic folder path name, a document
value of *ALL and an owner of *ALL.
Table 16 and Table 17 illustrate the differences in matching folders to be replicated.
In Table 16, above, a transaction for a folder named FINANCE would be blocked from
replication because it matches entry 6. This would also affect all folders within
FINANCE. A transaction for folder FINANCE1 would be replicated because of entry 4.
Likewise, a transaction for folder FINANCE2 would be replicated because of entry 5.
Note that any transactions for documents in FINANCE2 or any child folders other than
those in the path that includes Q1 would be blocked by entry 6; only FINANCE2 itself
must exist to satisfy entry 5.
In Table 17, although entry 5 is an include entry, a transaction for folder ACCOUNT
would be blocked from replication because it matches entry 2. This is because of the
exception described above. ACCOUNT matches an exclude entry with an exact folder
path, document value of *ALL, and an owner of *ALL, and the only include entry that
would cause it to be replicated specifies folder path *ALL. The exception also affects
all child folders in the ACCOUNT folder path. Note that the exception holds true even
if ACCOUNT is owned by user profile JONESB (entry 4) because the more specific
folder name match takes precedence.
A transaction for folder ACCOUNT2 would be replicated even though it is an exact
path name match for exclude entry 1. The exception does not apply because entry 1
does not specify document *ALL. Entry 5 requires that ACCOUNT2 exist on the target
system to satisfy the folder path requirements for document names other than
LEDGER* and for child folders of ACCOUNT2.
Table 17. Sample data group DLO entries, folder example
Entry Folder Path Document Owner Process Type
1 ACCOUNT2 LEDGER* *ALL *EXCLD
2 ACCOUNT *ALL *ALL *EXCLD
3 *ALL ABC* *ALL *INCLD
4 *ALL *ALL J ONESB *INCLD
5 *ALL *ALL *ALL *INCLD
Processing of newly created files and objects
Your production environment is dynamic. New objects continue to be created after
MIMIX is configured and running. When properly configured, MIMIX automatically
recognizes entries in the user journal that identify new create operations and
replicates any that are eligible for replication. Optionally, MIMIX can also notify you of
newly created objects not eligible for replication so that you can choose whether to
add them to the configuration.
Configurations that replicate files, data areas, data queues, or IFS objects from user
journal entries require journaling to be started on the objects before replication can
occur. When a configuration enables journaling to be implicitly started on new objects,
a newly created object is already journaled. When the journaled object falls within the
group of objects identified for replication by a data group, MIMIX replicates the create
operation. Processing variations exist based on how the data group and the data
group entry with the most specific match to the object are configured. These
variations are described in the following subtopics.
The MMNFYNEWE monitor is a shipped journal monitor that watches the security
audit journal (QAUDJRN) for newly created libraries, folders, or directories that are
not already included or excluded for replication by a data group and sends warning
notifications when its conditions are met. This monitor is shipped disabled. User
action is required to enable this monitor on the source system within your MIMIX
environment. Once enabled, the monitor will automatically start with the master
monitor. For more information about the conditions that are checked, see topic
Notifications for newly created objects in the MIMIX Operations book.
For more information about requirements and restrictions for implicit starting of
journaling as well as examples of how MIMIX determines whether to replicate a new
object, see What objects need to be journaled on page 302.
Newly created files
When newly created *FILE objects are implicitly journaled and are eligible for
replication, the replication processes used depend on how the data group definition is
configured and how the data group entry with the most specific match to the file is
configured.
New file processing - MIMIX Dynamic Apply
When a data group definition meets configuration requirements for MIMIX Dynamic
Apply and data group object and file entries are properly configured, new files created
on the source system that are eligible for replication will be re-created on the target
system by MIMIX. The following briefly describes the events that occur for newly
created files on the source system which are configured for MIMIX Dynamic Apply:
System journal replication processes ignore the creation entry, knowing that user
journal replication processes will get a create entry as well.
User journal replication processes dynamically add a file entry for a file when a file
create is seen in the user journal. The file entry is added with a status of *ACTIVE.
User journal replication processes create the file on the target system. Replication
proceeds normally after the file has been created.
All subsequent file changes, including moves or renames, member operations
(adds, changes, and removes), member data updates, authority changes, and file
deletes, are replicated through the user journal.
For MIMIX Dynamic Apply configurations, MIMIX always attempts to place files
that are related due to referential constraints into the same apply session. This
eliminates the possibility of constraint violations that would otherwise occur if
apply sessions processed the files independently. However, there are some
situations where constraints are added dynamically between two files already
assigned to different apply sessions. In this case, the constraint may need to be
disabled to avoid the constraint violations. In the case of cascading constraints,
where a modification to one file cascades operations to related files, MIMIX will
always attempt to apply the cascading entries, whether the constraint is enabled
or disabled, to ensure that the modification is done.
New file processing - legacy cooperative processing
When a data group definition meets configuration requirements for legacy cooperative
processing and data group object and file entries are properly configured, files
created on the source system will be saved and restored to the target system by
system journal replication processes. The following briefly describes the events that
occur when files are created that have been defined for legacy cooperative
processing:
System journal replication processes communicate with user journal replication
processes to add a data group file entry for the file (ADDDGFE command). The
file entry is added with the status of *HLD.
A user journal transaction is created on the source system and is transferred to
the target system to dynamically add the file to active user journal processes.
Journaling on the file is started if it is not already active.
System journal replication processes save the created file, restore it on the target
system, then communicate with user journal replication processes to issue a
release wait request against the file. The status of the file entry changes to
*RLSWAIT.
The database apply process waits for the save point in the journal, and then
makes the file active. The status of the file entry changes to *ACTIVE.
Newly created IFS objects, data areas, and data queues
When journaling is implicitly started for IFS objects, data areas, and data queues,
newly created objects that are eligible for replication are automatically replicated.
Configuration values specified in the data group IFS entry or object entry that most
specifically matches the new object determine which replication processes are used.
Note: Non-journaled objects are replicated through the system journal.
For data areas and data queues, MIMIX configurations can be enabled to
automatically start journaling for newly created *DTAARA and *DTAQ objects in
libraries journaled to a user journal. New MIMIX installations that are configured for
MIMIX Dynamic Apply of files automatically have this behavior.
For requirements for implicitly starting journaling on new objects, see What objects
need to be journaled on page 302.
If the object is journaled to the user journal, MIMIX user journal replication processes
can fully replicate the create operation. The user journal entries contain all the
information necessary for replication without needing to retrieve information from the
object on the source system. MIMIX creates a tracking entry for the newly created
object and an activity entry representing the T-CO (create) journal entry.
If the object is not journaled to the user journal, then the create of the object is
processed with system journal processing.
If the values specified in the data group entry that identified the object as eligible for
replication do not allow the object type to be cooperatively processed, the create of
the object and subsequent operations are replicated through system journal
processes.
When MIMIX replicates a create operation through the user journal, the create
timestamp (*CRTTSP) attribute may differ between the source and target systems.
Determining how an activity entry for a create operation was replicated
To determine whether a create operation of a given object is being replicated through
user journal processes or through system journal processes, do the following:
1. On the Data Group Activity Entries (WRKDGACTE) display, locate the entry for a
create operation that you want to check. Create operations have a value of T-CO
in the Code column.
2. Use option 5 (Display) next to an activity entry for a create operation.
3. On the resulting details display, check the value of the Requires container send
field.
If *YES appears for an activity entry representing a create operation, the create
operation is being replicated through the system journal.
If *NO appears in the field, the create operation is being replicated through the
user journal.
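For example, assuming the command accepts being run without parameters, as the display-based steps above suggest, you would enter the following from a command line and then use option 5 next to an entry with a code of T-CO:
WRKDGACTE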
Processing variations for common operations
Some variation exists in how MIMIX performs common operations such as moves,
renames, deletes, and restores. The variations are based on the configuration of the
data group entry used for replication.
Configurations specify whether these operations are processed through the system
journal, user journal, or a combination of both journals. Advanced journaling (user
journal replication of data areas, data queues and IFS objects), legacy cooperative
processing, and MIMIX Dynamic Apply use both journals; however, MIMIX Dynamic
Apply primarily processes through the user journal.
For IFS objects, user journal replication offers full support of create, restore, delete,
and move and rename operations. In environments using V5R4 and higher
operating systems, user journal replication also offers full support of these
operations for data area and data queue objects.
Move/rename operations - system journal replication
Table 18 describes how MIMIX processes a move or rename journal entry from the
system journal. MIMIX uses system journal replication processes for DLOs and for IFS
objects and library-based objects that are not explicitly identified for user journal
replication. The Original Source Object and New Name or Location columns indicate
whether the object is identified within the name space for replication. The Action
column indicates the operation that MIMIX will attempt on the target system.
Table 18. Current object move actions

Original source object: Excluded from or not identified for replication.
New name or location: Within name space of objects to be replicated.
MIMIX action on target system: Create Object (see note 1).

Original source object: Identified for replication.
New name or location: Excluded from or not identified for replication.
MIMIX action on target system: Delete Object (see note 2).

Original source object: Identified for replication.
New name or location: Within name space of objects to be replicated.
MIMIX action on target system: Move Object.

Original source object: Excluded from or not identified for replication.
New name or location: Excluded from or not identified for replication.
MIMIX action on target system: None.

Notes:
1. If the source system object is not defined to MIMIX or if it is defined by an Exclude entry,
it is not guaranteed that an object with the same name exists on the backup system or
that it is really the same object as on the source system. To ensure the integrity of the
target (backup) system, a copy of the source object must be brought over from the
source system.
2. If the target object is not defined to MIMIX or if it is defined by an Exclude entry, there is
no guarantee that the target library exists on the target system. Further, the customer is
assumed not to care whether the target object is replicated, since it is not defined with an
Include entry, so deleting the object is the most straightforward approach.
Move/rename operations - user journaled data areas, data queues, IFS
objects
IFS, data area, and data queue objects replicated by user journal replication
processes can be moved or renamed while maintaining the integrity of the data. If the
new location or new name on the source system remains within the set of objects
identified as eligible for replication, MIMIX will perform the move or rename operation
on the object on the target system.
When a move or rename operation starts with or results in an object that is not within
the name space for user journal replication, MIMIX may need to perform additional
operations in order to replicate the operation. MIMIX may use a create or delete
operation and may need to add or remove tracking entries.
Each row in Table 19 summarizes a move/rename scenario and identifies the action
taken by MIMIX.
Table 19. MIMIX actions when processing moves or renames of objects when user journal replication processes are involved

Source object: Identified for replication with user journal processing.
New name or location: Within name space of objects to be replicated with user journal processing.
MIMIX action: Moves or renames the object on the target system and renames the associated tracking entry. See example 1.

Source object: Not identified for replication.
New name or location: Not identified for replication.
MIMIX action: None. See example 2.

Source object: Identified for replication with user journal processing.
New name or location: Not identified for replication.
MIMIX action: Deletes the target object and deletes the associated tracking entry. The object will no longer be replicated. See example 3.

Source object: Identified for replication with user journal processing.
New name or location: Within name space of objects to be replicated with system journal processing.
MIMIX action: Moves or renames the object using system journal processes and removes the associated tracking entry. See example 4.

Source object: Identified for replication with system journal processing.
New name or location: Within name space of objects to be replicated with user journal processing.
MIMIX action: Creates a tracking entry for the object using the new name or location and moves or renames the object using user journal processes. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication and synchronizes those objects. See example 5.

Source object: Not identified for replication.
New name or location: Within name space of objects to be replicated with user journal processing.
MIMIX action: Creates a tracking entry for the object using the new name or location. If the object is a library or directory, MIMIX creates tracking entries for those objects within the library or directory that are also within the name space for user journal replication. Synchronizes all of the objects identified by these new tracking entries. See example 6.
The following examples use IFS objects and directories to illustrate the MIMIX
operations in move/rename scenarios that involve user journal replication (advanced
journaling). The MIMIX behavior described is the same as that for data areas and
data queues that are within the configured name space for advanced journaling.
Table 20 identifies the initial set of source system objects, data group IFS entries, and
IFS tracking entries before the move/rename operation occurs.
Table 20. Initial data group IFS entries, IFS tracking entries, and source IFS objects for examples

Configuration Supports       Data Group IFS Entries   Source System IFS Objects   Associated Data Group
                                                      in Name Space               IFS Tracking Entries
advanced journaling          /TEST/STMF*              /TEST/stmf1                 /TEST/stmf1
advanced journaling          /TEST/DIR*               /TEST/dir1/doc1             /TEST/dir1
                                                                                  /TEST/dir1/doc1
system journal replication   /TEST/NOTAJ*             /TEST/notajstmf1            (none)
                                                      /TEST/notajdir1/doc1

Example 1, moves/renames within advanced journaling name space: The most
common move and rename operations occur within advanced journaling name space.
For example, MIMIX encounters user journal entries indicating that the source system
IFS directory /TEST/dir1 was renamed to /TEST/dir2, and that the IFS stream file
/TEST/stmf1 was renamed to /TEST/stmf2. In both cases, the old and new names fall
within advanced journaling name space, as indicated in Table 19. The rename
operations are replicated and names are changed on the target system objects. The
tracking entries for these objects are also renamed. The resulting changes on the
target system objects and MIMIX configuration are shown in Table 21.

Table 21. Results of move/rename operations within name space for advanced journaling

Resulting Target IFS objects   Resulting data group IFS tracking entries
/TEST/stmf2                    /TEST/stmf2
/TEST/dir2/doc1                /TEST/dir2
                               /TEST/dir2/doc1

Example 2, moves/renames outside name space: When MIMIX encounters a
journal entry for a source system object outside of the name space that has been
renamed or moved to another location also outside of the name space, MIMIX ignores
the transaction. The object is not eligible for replication.

Example 3, moves/renames from advanced journaling name space to outside
name space: In this example, MIMIX encounters user journal entries indicating that
the source system IFS directory /TEST/dir1 was renamed to /TEST/xdir1 and IFS
stream file /TEST/stmf1 was renamed to /TEST/xstmf1. MIMIX is aware of only the
original names, as indicated in Table 19. Thus, the old name is eligible for replication,
but the new name is not. MIMIX treats this as a delete operation during replication
processing. MIMIX deletes the IFS directory and IFS stream file from the target
system. MIMIX also deletes the associated IFS tracking entries.
Example 4, moves/renames from advanced journaling to system journal name
space: In this example, MIMIX encounters user journal entries indicating that the
source system IFS directory /TEST/dir1 was renamed to /TEST/notajdir1 and that IFS
stream file /TEST/stmf1 was renamed to /TEST/notajstmf1. MIMIX is aware that both
the old names and new names are eligible for replication as indicated in Table 19.
However, the new names fall within the name space for replication through the
system journal. As a result, MIMIX removes the tracking entries associated with the
original names and performs the rename operation for the objects on the target
system. Table 22 shows these results.
Table 22. Results of move/rename operations from advanced journaling to system journal name space

Resulting target IFS objects   Resulting data group IFS tracking entries
/TEST/notajstmf1               (removed)
/TEST/notajdir1/doc1           (removed)

Example 5, moves/renames from system journal to advanced journaling name
space: In this example, MIMIX encounters journal entries indicating that the source
system IFS directory /TEST/notajdir1 was renamed to /TEST/dir1 and that IFS
stream file /TEST/notajstmf1 was renamed to /TEST/stmf1. MIMIX is aware that the
old names are within the system journal name space and that the new names are
within the advanced journaling name space. MIMIX creates tracking entries for the
new names and then performs the rename operation on the target system using
advanced journaling.

MIMIX also creates tracking entries for any objects that reside within the moved or
renamed IFS directory (or library in the case of data areas or data queues). The
objects identified by these tracking entries are individually synchronized from the
source to the target system. Table 23 illustrates the results on the target system.

Table 23. Results of move/rename operations from system journal to advanced journaling name space

Resulting target IFS objects   Resulting data group IFS tracking entries
/TEST/stmf1                    /TEST/stmf1
/TEST/dir1/doc1                /TEST/dir1
                               /TEST/dir1/doc1

Example 6, moves/renames from outside to within advanced journaling name
space: In this example MIMIX encounters journal entries indicating that the source
system IFS directory /TEST/xdir1 was renamed to /TEST/dir1 and that IFS stream file
/TEST/xstmf1 was renamed to /TEST/stmf1. The original names are outside of the
name space and are not eligible for replication. However, the new names are within
the name space for advanced journaling as indicated in Table 19. Because the
objects were not previously replicated, MIMIX processes the operations as creates
during replication. See Newly created files on page 114.
MIMIX also creates tracking entries for any objects that reside within the moved or
renamed IFS directory (or library in the case of data areas or data queues). The
objects identified by these tracking entries are individually synchronized from the
source to the target system. Table 24 illustrates the results.

Table 24. Results of move/rename operations from outside to within advanced journaling name space

Resulting target IFS objects   Resulting data group IFS tracking entries
/TEST/stmf1                    /TEST/stmf1
/TEST/dir1/doc1                /TEST/dir1
                               /TEST/dir1/doc1
Delete operations - files configured for legacy cooperative processing
The following briefly describes the events that occur in MIMIX when a file that is
defined for legacy cooperative processing is deleted:
System journal replication processes communicate to user journal replication
processes that a file has been deleted on the source system and indicate that the
file should be deleted from the target system.
A journal transaction which identifies the deleted file is created on the source
system. The transaction is transferred dynamically.
If the data group file entry is set to use the option to dynamically update active
replication processes, the file and associated file entry will be dynamically
removed from the replication processes. If the dynamic update option is not used,
the data group changes are not recognized until all data group processes are
ended and restarted.
MIMIX system journal replication processes delete the file on the target system.
Delete operations - user journaled data areas, data queues, IFS objects
When a T-DO (delete) journal entry for an IFS, data area, or data queue object is
encountered in the system journal, MIMIX system journal replication processes
generate an activity entry representing the delete operation and handle the delete of
the object from the target system. The user journal replication processes remove the
corresponding tracking entry.
Restore operations - user journaled data areas, data queues, IFS objects
When an IFS, data area, or data queue object is restored, the pre-existing object is
replaced by a backup copy on the source system. With user journal replication,
restores of IFS, data area, and data queue objects on the source system are
supported through cooperative processing between MIMIX system journal and user
journal replication processes.
Provided the object was journaled when it was saved, a restored IFS, data area, or
data queue object is also journaled.
During cooperative processing, system journal replication processes generate an
activity entry representing the T-OR (restore) journal entry from the system journal
and perform a save and restore operation on the IFS, data area, or data queue object.
Meanwhile, user journal replication processes handle the management of the
corresponding IFS or object tracking entry. MIMIX may also start journaling, or end
and restart journaling on the object so that the journaling characteristics of the IFS,
data area, or data queue object match the data group definition.
CHAPTER 5 Configuration checklists
MIMIX can be configured in a variety of ways to support your replication needs. Each
configuration requires a combination of definitions and data group entries. Definitions
identify systems, journals, communications, and data groups that make up the
replication environment. Data group entries identify what to replicate and the
replication option to be used. For available options, see Replication choices by object
type on page 87. Also, advanced techniques, such as keyed replication, have
additional configuration requirements. For additional information see Configuring
advanced replication techniques on page 332.
New installations: Before you start configuring MIMIX, system-level configuration
for communications (lines, controllers, IP interfaces) must already exist between the
systems that you plan to include in the MIMIX installation. Choose one of the following
checklists to configure a new installation of MIMIX.
Checklist: New remote journal (preferred) configuration on page 125 uses
shipped default values to create a new installation. Unless you explicitly configure
them otherwise, new data groups will use the IBM i remote journal function as part
of user journal replication processes.
Checklist: New MIMIX source-send configuration on page 129 configures a new
installation and is appropriate when your environment cannot use remote
journaling. New data groups will use MIMIX source-send processes in user journal
replication.
To configure a new installation that is to use the integrated MIMIX support for IBM
WebSphere MQ (MIMIX for MQ), refer to the MIMIX for IBM WebSphere MQ
book.
Upgrades and conversions: You can use any of the following topics, as appropriate,
to change a configuration:
Checklist: converting to application groups on page 132 provides the instructions
needed to change your environment to implement application groups. Application
groups are best practice and provide the ability to group and control multiple data
groups as one entity.
Checklist: Converting to remote journaling on page 133 changes an existing
data group to use remote journaling within user journal replication processes.
Converting to MIMIX Dynamic Apply on page 135 provides checklists for two
methods of changing the configuration of an existing data group to use MIMIX
Dynamic Apply for logical and physical file replication. Data groups that existed
prior to installing version 5 must use this information in order to use MIMIX
Dynamic Apply.
Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling on
page 138 changes the configuration of an existing data group to use user journal
replication processes for these objects.
To add integrated MIMIX support for IBM WebSphere MQ (MIMIX for MQ) to an
existing installation, use topic Choosing the correct checklist for MIMIX for MQ in
the MIMIX for IBM WebSphere MQ book.
Checklist: Converting to legacy cooperative processing on page 141 changes
the configuration of an existing data group so that logical and physical source files
are processed from the system journal and physical data files use legacy
cooperative processing.
Other checklists: The following configuration checklist employs less frequently used
configuration tools and is not included in this chapter.
Use Checklist: copy configuration on page 537 if you need to copy configuration
data from an existing product library into another MIMIX installation.
Checklist: New remote journal (preferred) configuration
Use this checklist to configure a new installation of MIMIX. This checklist creates the
preferred configuration that uses IBM i remote journaling and uses MIMIX Dynamic
Apply to cooperatively process logical and physical files.
To configure your system manually, perform the following steps on the system that
you want to designate as the management system of the MIMIX installation:
1. Communications between the systems must be configured and operational
before you start configuring MIMIX.
a. If communications is not configured, refer to System-level communications
on page 143 for more information.
b. If you have TCP configured and plan to use it for your transfer protocol, verify
that it is operational using the PING command.
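For example, you can enter a command similar to the following, where NEWYORK is a placeholder for the remote system's host name:
PING RMTSYS('NEWYORK') /* NEWYORK is a placeholder host name */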
2. Create system definitions for the management system and each of the network
systems for the MIMIX installation. Use topic Creating system definitions on
page 153.
3. Create transfer definitions to define the communications protocol used between
pairs of systems. A pair of systems consists of a management system and a
network system. Use topic Creating a transfer definition on page 166.
4. If you are using the TCP protocol, ensure that the Lakeview TCP server is running
on each system defined in the transfer definition. You can use the Work with
Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS
subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not
active on a system, use topic Starting the TCP/IP server on page 170.
Note: Default values for transfer definitions enable MIMIX to create and manage
autostart job entries for the server. If your transfer definitions prevent this,
you can create and manage your own autostart job entries. For more
information see Using autostart job entries to start the TCP server on
page 171.
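For example, the following command limits the active jobs display to the MIMIXSBS subsystem so that you can look for a job with the function PGM-LVSERVER:
WRKACTJOB SBS(MIMIXSBS)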
5. Start the MIMIX managers using topic Starting the system and journal managers
on page 269. When the system manager is running, configuration information for
data groups will be automatically replicated to the other system as you create it.
6. Verify that the communications link defined in each transfer definition is
operational using topic Verifying a communications link for system definitions on
page 175.
7. If you are using the TCP protocol, ensure that the DDM TCP server is running
using topic Starting the DDM TCP/IP server on page 279.
8. If you have implemented DDM password validation, verify that your environment
will allow MIMIX RJ support to work properly. Use topic Checking DDM password
validation level in use on page 280.
9. Create the data group definitions that you need using topic Creating a data group
definition on page 221. The referenced topic creates a data group definition with
appropriate values to support MIMIX Dynamic Apply.
10. Confirm that the journal definitions which have been automatically created have
the values you require. For information, see Journal definitions created by other
processes on page 179, Tips for journal definition parameters on page 180, and
Journal definition considerations on page 184.
11. Build the necessary journaling environments for the RJ links using Building the
journaling environment on page 195. If the data group is switchable, be sure to
build the journaling environments for both directions--source system A to target
system B (target journal @R) and for source system B to target system A (target
journal @R).
Note: The use of application groups is considered best practice. Step 12 through
Step 14 create the additional configuration needed for application groups. If
you are not using application groups, skip to Step 15.
12. Create the application groups to which you will associate the data groups using
topic Creating an application group definition on page 294.
13. Load the data resource group entries and nodes that define the association
between application groups and data groups. Use Loading data resource groups into
an application group on page 295.
14. Identify what node (system) will be the primary node for each application group,
using Specifying the primary node for the application group on page 296.
15. Use Table 25 to create data group entries for this configuration. This configuration
requires object entries and file entries for LF and PF files. For other object types or
classes, any replication options identified in planning topic Replication choices by
object type on page 87 are supported.
Table 25. How to configure data group entries for the remote journal (preferred) configuration.

Class: Library-based objects
Do the following:
1. Create object entries using Creating data group object entries on page 242.
2. After creating object entries, load file entries for LF and PF (source and data) *FILE objects using Loading file entries from a data group's object entries on page 247.
   Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files, you should still create file entries for PF data files to ensure that legacy cooperative processing can be used.
3. After creating object entries, load object tracking entries for any *DTAARA and *DTAQ objects to be replicated from a user journal. Use Loading object tracking entries on page 258.
Planning and requirements information: Identifying library-based objects for replication on page 91; Identifying logical and physical files for replication on page 96; Identifying data areas and data queues for replication on page 103.

Class: IFS objects
Do the following:
1. Create IFS entries using Creating data group IFS entries on page 255.
2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use Loading IFS tracking entries on page 257.
Planning and requirements information: Identifying IFS objects for replication on page 106.

Class: DLOs
Do the following: Create DLO entries using Creating data group DLO entries on page 259.
Planning and requirements information: Identifying DLOs for replication on page 111.
16. Do the following to confirm and automatically correct any problems found in file
entries associated with data group object entries:
a. From the management system, temporarily change the Action for running
audits policy using the following command: SETMMXPCY DGDFN(name
system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)
b. From the source system, type WRKAUD RULE(#DGFE) and press Enter.
c. Next to the data group you want to confirm, type 9 (Run rule) and press F4
(Prompt).
d. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on
system policy prompt. Then press Enter.
e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit
results in any other status, resolve the problem. For additional information, see
Resolving audit problems on page 569 and Interpreting results for
configuration data - #DGFE audit on page 572.
f. From the management system, set the Action for running audits policy to its
previous value. (The default value is *INST.) Use the command: SETMMXPCY
DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
17. Ensure that object auditing values are set for the objects identified by the
configuration before synchronizing data between systems. Use the procedure
Setting data group auditing values manually on page 270. Doing this now
ensures that objects to be replicated have the object auditing values necessary for
replication and that any transactions which occur between configuration and
starting replication processes can be replicated.
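For example, assuming the referenced procedure uses the Set Data Group Auditing (SETDGAUD) command shown elsewhere in this book, a minimal invocation that accepts the shipped defaults might look like the following, where the data group name values are placeholders:
SETDGAUD DGDFN(name system1 system2) /* placeholder data group name */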
18. Start journaling using the following procedures as needed for your configuration.
Note: If the objects do not yet exist on the target system, be sure to specify *SRC
for the Start journaling on system (JRNSYS) parameter in the commands
to start journaling.
For user journal replication, use Journaling for physical files on page 305 to
start journaling on both source and target systems.
For IFS objects configured for user journal replication, use Journaling for IFS
objects on page 308.
For data areas or data queues configured for user journal replication, use
Journaling for data areas and data queues on page 311.
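The referenced topics use MIMIX procedures to start journaling. Conceptually, the result for a single physical file is equivalent to the IBM i Start Journal Physical File (STRJRNPF) command; in this sketch the library, file, and journal names are placeholders only:
STRJRNPF FILE(APPLIB/ORDERS) JRN(APPLIB/APPJRN) IMAGES(*BOTH) /* placeholder names */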
19. Synchronize the database files and objects on the systems between which
replication occurs. Topic Performing the initial synchronization on page 454
identifies options available for synchronizing and identifies how to establish a
synchronization point that identifies the journal location that will be used later to
initially start replication.
20. Confirm that the systems are synchronized by checking that the libraries, folders,
and directories contain the expected objects on both systems.
21. Start the data group using Starting data groups for the first time on page 282.
22. For configurations that use application groups, after you have started data groups
as described in Step 21, start the application groups using Starting an application
group on page 298.
23. Verify the configuration. Topic Verifying the initial synchronization on page 458
identifies the additional aspects of your configuration that are necessary for
successful replication.
Checklist: New MIMIX source-send configuration
Best practices for MIMIX are to use MIMIX Remote J ournal support for database
replication. However, in cases where you cannot use remote journaling, this checklist
will configure a new installation that uses MIMIX source-send processes for database
replication. System journal replication is also configured.
To configure a source-send environment, perform the following steps on the system
that you want to designate as the management system of the MIMIX installation:
1. Communications between the systems must be configured and operational
before you start configuring MIMIX.
a. If communications is not configured, refer to System-level communications
on page 143 for more information.
b. If you have TCP configured and plan to use it for your transfer protocol, verify
that it is operational using the PING command.
2. Create system definitions for the management system and each of the network
systems for the MIMIX installation. Use topic Creating system definitions on
page 153.
3. Create transfer definitions to define the communications protocol used between
pairs of systems. A pair of systems consists of a management system and a
network system. Use topic Creating a transfer definition on page 166.
4. If you are using the TCP protocol, ensure that the Lakeview TCP server is running
on each system defined in the transfer definition. You can use the Work with
Active Jobs (WRKACTJOB) command to look for a job under the MIMIXSBS
subsystem with a function of PGM-LVSERVER. If the Lakeview TCP server is not
active on a system, use topic Starting the TCP/IP server on page 170.
Note: Default values for transfer definitions enable MIMIX to create and manage
autostart job entries for the server. If your transfer definitions prevent this,
you can create and manage your own autostart job entries. For more
information see Using autostart job entries to start the TCP server on
page 171.
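If you do manage your own autostart job entries, the underlying mechanism is the IBM i Add Autostart Job Entry (ADDAJE) command. In the following sketch, MIMIXQGPL/MIMIXSBS is the shipped subsystem description; the job name and job description shown are hypothetical placeholders:
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(TCPSVR) JOBD(MIMIXQGPL/TCPSVR) /* job name and JOBD are placeholders */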
5. Start the MIMIX managers using topic Starting the system and journal managers
on page 269. When the system manager is running, configuration information for
data groups will be automatically replicated to the other system as you create it.
6. Verify that the communications link defined in each transfer definition is
operational using topic Verifying a communications link for system definitions on
page 175.
7. Create the data group definitions that you need using topic Creating a data group
definition on page 221. Be sure to specify *NO for the Use remote journal link
prompt.
8. Confirm that the journal definitions which have been automatically created have
the values you require. For information, see Journal definitions created by other
processes on page 179, Tips for journal definition parameters on page 180, and
Journal definition considerations on page 184.
9. If the journaling environment does not exist, use topic Building the journaling
environment on page 195 to create the journaling environment.
Note: The use of application groups is considered best practice. Step 10 through
Step 12 create the additional configuration needed for application groups. If
you are not using application groups, skip to Step 13.
10. Create the application groups to which you will associate the data groups using
topic Creating an application group definition on page 294.
11. Load the data resource group entries and nodes that define the association
between application groups and data groups. Use Loading data resource groups into
an application group on page 295.
12. Identify what node (system) will be the primary node for each application group,
using Specifying the primary node for the application group on page 296.
13. Use Table 26 to create data group entries for this configuration. This configuration
requires object entries and file entries for legacy cooperative processing of PF
data files. For other object types or classes, any replication options identified in
planning topic Replication choices by object type on page 87 are supported.
Table 26. How to configure data group entries for a new MIMIX source-send configuration.

Class: Library-based objects
Do the following:
1. Create object entries using Creating data group object entries on page 242.
2. After creating object entries, load file entries for PF (data) *FILE objects using Loading file entries from a data group's object entries on page 247.
3. After creating object entries, load object tracking entries for *DTAARA and *DTAQ objects to be replicated from a user journal. Use Loading object tracking entries on page 258.
Planning and requirements information: Identifying library-based objects for replication on page 91; Identifying logical and physical files for replication on page 96; Identifying data areas and data queues for replication on page 103.

Class: IFS objects
Do the following:
1. Create IFS entries using Creating data group IFS entries on page 255.
2. After creating IFS entries, load IFS tracking entries for IFS objects to be replicated from a user journal. Use Loading IFS tracking entries on page 257.
Planning and requirements information: Identifying IFS objects for replication on page 106.

Class: DLOs
Do the following: Create DLO entries using Creating data group DLO entries on page 259.
Planning and requirements information: Identifying DLOs for replication on page 111.

14. Do the following to confirm and automatically correct any problems found in file
entries associated with data group object entries:
a. From the management system, temporarily change the Action for running
audits policy using the following command: SETMMXPCY DGDFN(name
system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)
b. From the source system, type WRKAUD RULE(#DGFE) and press Enter.
c. Next to the data group you want to confirm, type 9 (Run rule) and press F4
(Prompt).
d. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on
system policy prompt. Then press Enter.
e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit
results in any other status, resolve the problem. For additional information, see
Resolving audit problems on page 569 and Interpreting results for
configuration data - #DGFE audit on page 572.
f. From the management system, set the Action for running audits policy to its
previous value. (The default value is *INST.) Use the command: SETMMXPCY
DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
15. Ensure that object auditing values are set for the objects identified by the
configuration before synchronizing data between systems. Use the procedure
Setting data group auditing values manually on page 270. Doing this now
ensures that objects to be replicated have the object auditing values necessary for
replication and that any transactions which occur between configuration and
starting replication processes can be replicated.
16. Start journaling using the following procedures as needed for your configuration.
Note: If the objects do not yet exist on the target system, be sure to specify *SRC
for the Start journaling on system (JRNSYS) parameter in the commands
to start journaling.
For user journal replication, use Journaling for physical files on page 305 to
start journaling on both source and target systems.
For IFS objects configured for user journal replication, use Journaling for IFS
objects on page 308.
For data areas or data queues configured for user journal replication, use
Journaling for data areas and data queues on page 311.
17. Synchronize the database files and objects on the systems between which
replication occurs. Topic Performing the initial synchronization on page 454
identifies options available for synchronizing and identifies how to establish a
synchronization point that identifies the journal location that will be used later to
initially start replication.
18. Confirm that the systems are synchronized by checking that the libraries, folders,
and directories contain the expected objects on both systems.
19. Start the data group using Starting data groups for the first time on page 282.
20. For configurations that use application groups, after you have started data groups
as described in Step 19, start the application groups using Starting an application
group on page 298.
21. Verify your configuration. Topic Verifying the initial synchronization on page 458
identifies the additional aspects of your configuration that are necessary for
successful replication.
Checklist: converting to application groups
Use this checklist to change an existing configuration so that data groups will be
associated with one or more application groups. Operational control of the data
groups will be performed by procedures for the application group in which they
participate. The use of application groups is considered best practice.
To convert an existing environment so that one or more data groups will be controlled
by application groups, do the following:
1. Create the application groups to which you will associate the data groups using
Creating an application group definition on page 294.
2. Load the data resource group entries and nodes that define the association
between application groups and data groups using Loading data resource groups
into an application group on page 295.
3. Identify what node (system) will be the primary node for each application group,
using Specifying the primary node for the application group on page 296.
4. If you have automation programs, evaluate them for any needed changes.
5. Once you have completed the preceding steps, start the application groups
using Starting an application group on page 298.
Checklist: Converting to remote journaling
Use this checklist to convert an existing data group from using MIMIX source-send
processes to using MIMIX Remote Journal support for user journal replication.
Note: This checklist does not change values specified in data group entries that
affect how files are cooperatively processed or how data areas, data queues,
and IFS objects are processed. For example, files configured for legacy
processing prior to this conversion will continue to be replicated with legacy
cooperative processing.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. If you use a startup program, make the modifications to the program described in
Changes to startup programs on page 278.
2. If you have implemented DDM password validation, you need to verify that your
environment will allow MIMIX RJ support to work properly. Use topic Checking
DDM password validation level in use on page 280.
3. Do the following to ensure that you have a functional transfer definition:
a. Modify the transfer definition to identify the RDB directory entry. Use topic
Changing a transfer definition to support remote journaling on page 167.
b. Verify the communications link using Verifying the communications link for a
data group on page 176.
4. If you are using the TCP protocol, ensure that the DDM TCP server is running
using topic Starting the DDM TCP/IP server on page 279.
5. Connect the journal definitions for the local and remote journals using Adding a
remote journal link on page 202. This procedure also creates the target journal
definition.
6. Build the journaling environment on each system defined by the RJ pair using
Building the journaling environment on page 195.
7. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *YES for the Use remote journal link prompt.
d. When you are ready to accept the changes, press Enter.
8. To make the configuration changes effective, you need to end the data group you
are converting to remote journaling and start it again as follows:
a. Perform a controlled end of the data group (ENDDG command), specifying
*ALL for Process and *CNTRLD for End process. Refer to topic Ending all
replication in a controlled manner in the MIMIX Operations book.
b. Start data group replication using the procedure Starting selected data group
processes in the MIMIX Operations book. Be sure to specify *ALL for the Start
processes prompt (PRC parameter) and *LASTPROC as the value for the
Database journal receiver and Database large sequence number prompts.
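As a sketch, the end and restart sequence described in this step might look like the following for a placeholder data group name; the PRC and ENDOPT keywords follow the usage shown elsewhere in this book, and the receiver and sequence number values can be supplied by prompting STRDG with F4:
ENDDG DGDFN(name system1 system2) PRC(*ALL) ENDOPT(*CNTRLD) /* controlled end of all processes */
STRDG DGDFN(name system1 system2) PRC(*ALL) /* restart; specify *LASTPROC on the receiver and sequence prompts */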
Converting to MIMIX Dynamic Apply
Use either procedure in this topic to change a data group configuration to use MIMIX
Dynamic Apply. In a MIMIX Dynamic Apply configuration, objects of type *FILE (LF,
PF source and data) are replicated using primarily user journal replication processes.
This configuration is the most efficient way to process these files.
Converting using the Convert Data Group command on page 135 automatically
converts a data group configuration.
Checklist: manually converting to MIMIX Dynamic Apply on page 136 enables
you to perform the conversion yourself.
It is recommended that you contact your Certified MIMIX Consultant for assistance
before performing this procedure.
Requirements: Before starting, consider the following:
Any data group that existed prior to installing version 5 must use one of these
procedures in order to use MIMIX Dynamic Apply. As of version 5, newly created
data groups are automatically configured to use MIMIX Dynamic Apply when its
requirements and restrictions are met and shipped command defaults are used.
Any data group to be converted must already be configured to use remote
journaling.
Any data group to be converted must have *SYSJRN specified as the value of
Cooperative journal (COOPJRN).
A minimum level of IBM i PTFs are required on both systems. For a complete list
of required and recommended IBM PTFs, log in to Support Central and refer to
the Technical Documents page.
The conversion must be performed from the management system. The data group
must be active when starting the conversion.
For additional information about configuration requirements and limitations of MIMIX
Dynamic Apply, see Identifying logical and physical files for replication on page 96.
Converting using the Convert Data Group command
The Convert Data Group (CVTDG) command will automatically convert the configuration of
specified data groups to enable MIMIX Dynamic Apply. This command will
automatically attempt to perform the steps described in the manual procedure and will
issue diagnostic messages if a step cannot be performed.
Perform the following steps from the management system on an active data group:
1. From a command line enter the command:
CVTDG DGDFN(name system1 system2)
2. Watch for diagnostic messages in the job log and take any recovery action
indicated.
The conversion is complete when you see message LVI321A.
Checklist: manually converting to MIMIX Dynamic Apply
Perform the following steps from the management system to enable an existing data
group to use MIMIX Dynamic Apply:
1. Verify the environment meets the requirements and restrictions. See
Requirements and limitations of MIMIX Dynamic Apply on page 101.
2. Apply any IBM PTFs (or their supersedes) associated with IBM i releases as they
pertain to your environment. Log in to Support Central and refer to the Technical
Documents page for a list of required and recommended IBM PTFs.
3. Verify that the System Manager jobs are active. See Starting the system and
journal managers on page 269.
4. Verify that data group is synchronized by running the MIMIX audits. See Verifying
the initial synchronization on page 458.
5. Use the Work with Data Groups display to ensure that there are no files on hold
and no failed or delayed activity entries. Refer to topic Preparing for a controlled
end of a data group in the MIMIX Operations book.
Note: Topic Ending a data group in a controlled manner in the MIMIX
Operations book includes subtask Preparing for a controlled end of a data
group and the other subtasks needed for Step 6 and Step 7.
6. Perform a controlled end of the data group you are converting. Follow the
procedure for Performing the controlled end in the MIMIX Operations book.
7. Ensure that there are no open commit cycles for the database apply process.
Follow the steps for Confirming the end request completed without problems in
the MIMIX Operations book.
8. From the management system, change the data group definition so that the
Cooperative journal (COOPJRN) parameter specifies *USRJRN. Use the
command:
CHGDGDFN DGDFN(name system1 system2) COOPJRN(*USRJRN)
9. Ensure that you have one or more data group object entries that specify the
required values. These entries identify the items within the name space for
replication. You may need to create additional entries to achieve desired results.
For more information, see Identifying logical and physical files for replication on
page 96.
10. To ensure that new files created while the data group is inactive are automatically
journaled, the QDFTJRN data areas must be created in the libraries configured for
replication of cooperatively processed files. This can be done by running the
following command from the source system:
SETDGAUD DGDFN(name system1 system2) OBJTYPE(*AUTOJRN)
Note: The QDFTJRN data area is created in libraries identified by data group
object entries which are configured for cooperative processing of files,
data areas, or data queues, subject to some limitations. For a list of
restricted libraries and other details of requirements for implicitly starting
journaling, see What objects need to be journaled on page 302.
11. From the management system, use the following command to load the data group
file entries from the target system. Ensure that the value you specify (*SYS1 or
*SYS2) for the LODSYS parameter identifies the target system.
LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE)
UPDOPT(*ADD) LODSYS(value) SELECT(*NO)
For additional information about loading file entries, see Loading file entries from
a data group's object entries on page 247.
12. Start journaling for all files not previously journaled. See Starting journaling for
physical files on page 305.
13. Start the data group specifying the command as follows:
STRDG DGDFN(name system1 system2) CLRPND(*YES)
14. Verify that data groups are synchronized by running the MIMIX audits. See
Verifying the initial synchronization on page 458.
Checklist: Change *DTAARA, *DTAQ, IFS objects to user
journaling
Use this checklist to change the configuration of an existing data group so that IFS
objects, *DTAARA and *DTAQ objects can be replicated from entries in a user journal.
(This environment is also called advanced journaling.) The procedure in this checklist
assumes that the data group already includes user journal replication for files.
Topic User journal replication of IFS objects, data areas, data queues on page 68
describes the benefits and restrictions of replicating these objects from user journal
entries. It also identifies the MIMIX processes used for replication and the purpose of
tracking entries.
To convert existing data groups to use advanced journaling, do the following:
1. Determine if IFS objects, data areas, and data queues should be replicated in a
data group shared with other objects undergoing database replication, or if these
objects should be in a separate data group. Topic Planning for journaled IFS
objects, data areas, and data queues on page 78 provides guidelines for the
following planning considerations:
Serializing transactions with database files
Converting existing data groups, including examples
Database apply session balancing
User exit program considerations
2. Perform a controlled end of the data groups that will include objects to be
replicated using advanced journaling. See the MIMIX Operations book for how to
end a data group in a controlled manner (ENDOPT(*CNTRLD)).
3. Ensure that all pending activity for objects and IFS objects has completed. Use
the command WRKDGACTE STATUS(*ACTIVE) to display any pending activity
entries. Any activities that are still in progress will be listed.
4. The data group definitions used for user journal replication of IFS objects, data
areas, and data queues must specify *ALL as the value for Data group type
(TYPE). Verify the value in the data group definition is correct. If necessary,
change the value.
Note: If you have to change the Data group type, the journal definitions and
journaling environment for user journal replication may not exist. If
necessary, create the journal definitions (Creating a journal definition on
page 192) and build the journaling environment (Building the journaling
environment on page 195).
5. Add or change data group IFS entries for the IFS objects you want to replicate. Be
sure to specify *YES for the Cooperate with database prompt in procedure
Adding or changing a data group IFS entry on page 255. For additional
information, see Restrictions - user journal replication of IFS objects on
page 109.
6. Add or change data group object entries for the data areas and data queues you
want to replicate using the procedure Adding or changing a data group object
entry on page 243. For additional information, see Restrictions - user journal
replication of data areas and data queues on page 104.
Note: New data group object entries created in MIMIX version 7 or higher
automatically default to values that result in user journal replication of
*DTAARA and *DTAQ objects.
7. Load the tracking entries associated with the data group IFS entries and data
group object entries you configured. Use the procedures in Loading tracking
entries on page 257.
8. Start journaling using the following procedures as needed for your configuration. If
you ever plan to switch the data groups, you must start journaling on both the
source system and on the target system.
For IFS objects, use Starting journaling for IFS objects on page 308
For data areas or data queues, use Starting journaling for data areas and data
queues on page 311
9. Verify that journaling is started correctly. This step is important to ensure the IFS
objects, data areas and data queues are actually replicated. For IFS objects, see
Verifying journaling for IFS objects on page 310. For data areas and data
queues, see Verifying journaling for data areas and data queues on page 313.
10. If you anticipate a delay between configuring data group IFS, object, or file entries
and starting the data group, use the SETDGAUD command before synchronizing
data between systems. Doing so will ensure that replicated objects are properly
audited and that any transactions for the objects that occur between configuration
and starting the data group are replicated. Use the procedure Setting data group
auditing values manually on page 270.
11. Synchronize the IFS objects, data areas and data queues between the source
and target systems. For IFS objects, follow the Synchronize IFS Object
(SYNCIFS) procedures. For data areas and data queues, follow the Synchronize
Object (SYNCOBJ) procedures. Refer to chapter Synchronizing data between
systems on page 443 for additional information.
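As an illustration only, both synchronize commands follow the data group naming
convention used elsewhere in this book. The minimal forms below are a sketch;
prompt each command (F4) for the full parameter list, which includes object
selection parameters not shown here:
SYNCIFS DGDFN(name system1 system2)
SYNCOBJ DGDFN(name system1 system2)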
12. If you are replicating large amounts of data, you should specify IBM i journal
receiver size options that provide large journal receivers and large journal entries.
Journals created by MIMIX are configured to allow maximum amounts of data.
Journals that already exist may need to be changed.
a. After IFS objects are configured, perform the steps in Verifying journal
receiver size options on page 191 to ensure journaling is configured
appropriately.
b. Change any journal receiver size options necessary using Changing journal
receiver size options on page 191.
13. If you have database replication user exit programs, changes may need to be
made. See User exit program considerations on page 80.
14. Once you have completed the preceding steps, start the data groups. For more
information about starting data groups, see the MIMIX Operations book.
Checklist: Converting to legacy cooperative processing
If you find that you cannot use MIMIX Dynamic Apply for logical and physical files, use
this checklist to change the configuration of an existing data group so that user journal
replication (MIMIX Dynamic Apply) is no longer used. This checklist changes the
configuration so that physical data files can be processed using legacy cooperative
processing. Logical files and physical source files will be processed using the system
journal. For more information, see Requirements and limitations of legacy
cooperative processing on page 102.
Important! Before you use this checklist, consider the following:
As of version 5, newly created data groups are configured for MIMIX Dynamic
Apply when default values are taken and configuration requirements are met.
This checklist does not convert user journal replication processes from using
remote journaling to MIMIX source-send processing.
This checklist only affects the configuration of *FILE objects. The configuration of
any other *DTAARA, *DTAQ, or IFS objects that are replicated through the user
journal is not affected.
Perform the following steps to enable legacy cooperative processing and system
journal replication:
1. Verify that the data group is synchronized by running the MIMIX audits. See Verifying
the initial synchronization on page 458.
2. Use the Work with Data Groups display to ensure that there are no files on hold
and no failed or delayed activity entries. Refer to topic Preparing for a controlled
end of a data group in the MIMIX Operations book.
Note: Topic Ending a data group in a controlled manner in the MIMIX
Operations book includes subtask Preparing for a controlled end of a data
group and the subtask needed for Step 3.
3. End the data group you are converting by performing a controlled end. Follow the
procedure for Performing the controlled end in the MIMIX Operations book.
4. From the management system, change the data group definition so that the
Cooperative journal (COOPJRN) parameter specifies *SYSJRN. Use the
command:
CHGDGDFN DGDFN(name system1 system2) COOPJRN(*SYSJRN)
5. Save the data group file entries to an outfile. Use the command:
WRKDGFE DGDFN(name system1 system2) OUTPUT(*OUTFILE)
6. From the management system, use the following command to load the data group
file entries from the target system. Ensure that the value you specify (*SYS1 or
*SYS2) for the LODSYS parameter identifies the target system.
LODDGFE DGDFN(name system1 system2) CFGSRC(*DGOBJE)
UPDOPT(*REPLACE) LODSYS(value) SELECT(*NO)
For additional information about loading file entries, see Loading file entries from
a data group's object entries on page 247.
7. Compare the data group file entries with those saved in the outfile created in
Step 5. Any differences need to be manually updated.
8. Optional step: Delete the QDFTJRN data areas. These data areas automatically
start journaling for newly created files. This may not be desired because the
journal image (JRNIMG) value for these files may be different than the value
specified in the MIMIX configuration. Such a difference will be detected by the file
attributes (#FILATR) audit. To delete these data areas, run the following command
from each system:
DLTDTAARA DTAARA(library/QDFTJRN)
9. Start the data group specifying the command as follows:
STRDG DGDFN(name system1 system2) CLRPND(*YES)
CHAPTER 6 System-level communications
This information is provided to assist you with configuring the IBM Power™ Systems
communications that are necessary before you can configure MIMIX. MIMIX supports
the following communications protocols:
Transmission Control Protocol/Internet Protocol (TCP/IP)
Systems Network Architecture (SNA)
OptiConnect
MIMIX should have a dedicated communications line that is not shared with other
applications, jobs, or users on the production system. A dedicated path will make it
easier to fine-tune your MIMIX environment and to determine the cause of problems.
For TCP/IP, it is recommended that the TCP/IP host name or interface used be in its
own subnet. For SNA, it is recommended that MIMIX have its own communication line
instead of sharing an existing SNA device.
Your Certified MIMIX Consultant can assist you in determining your communications
requirements and ensuring that communications can efficiently handle peak volumes
of journal transactions.
If you plan to use system journal replication processes, you need to consider
additional aspects that may affect the communications speed. These aspects include
the type of objects being transferred and the size of data queues, user spaces, and
files defined to cooperate with user journal replication processes.
MIMIX IntelliStart can help you determine your communications requirements.
The topics in this chapter include:
Configuring for native TCP/IP on page 143 describes using native TCP/IP
communications and provides steps to prepare and configure your system for it.
Configuring APPC/SNA on page 147 describes basic requirements for SNA
communications.
Configuring OptiConnect on page 148 describes basic requirements for
OptiConnect communications and identifies MIMIX limitations when this
communications protocol is used.
Configuring for native TCP/IP
MIMIX has the ability to use native TCP/IP communications over sockets. This allows
users with TCP communications on their networks to use MIMIX without requiring the
use of IBM ANYNET through SNA.
Using TCP/IP communications may or may not improve your CPU usage, but if your
primary communications protocol is TCP/IP, this can simplify your network
configuration.
Native TCP/IP communications allows MIMIX users greater flexibility and provides
another option in the communications available for use on their Power™ Systems.
MIMIX users can also continue to use IBM ANYNET support to run SNA protocols
over TCP networks.
Preparing your system to use TCP/IP communications with MIMIX requires the
following:
1. Configure both systems to use TCP/IP. The procedure for configuring a system to
use TCP/IP is documented in the information included with the IBM i software.
Refer to the IBM TCP/IP Fastpath Setup book, SC41-5430, and follow the
instructions to configure the system to use TCP/IP communications.
2. If you need to use port aliases, do the following:
a. Refer to the examples Port aliases-simple example on page 144 and Port
aliases-complex example on page 145.
b. Create the port aliases for each system using the procedure in topic Creating
port aliases on page 146.
3. Once the system-level communication is configured, you can begin the MIMIX
configuration process.
Port aliases-simple example
Before using the MIMIX TCP/IP support, you must first configure the system to
recognize the feature. This involves identifying the ports that will be used by MIMIX to
communicate with other systems. The port identifiers used depend on the
configuration of the MIMIX installations. MIMIX installations vary according to the
needs of each enterprise. At a minimum, a MIMIX installation consists of one
management system and one network system. A more complex MIMIX installation
may consist of one management system and multiple network systems. A large
enterprise may even have multiple MIMIX installations that are interconnected.
Figure 8 shows a simple MIMIX installation in which the management system
(LONDON) and a network system (HONGKONG) use the TCP communications
protocol through the port number 50410. Figure 9 shows a MIMIX installation with two
network systems.
Figure 8. Creating Ports. In this example, the MIMIX installation consists of two systems.
Figure 9. Creating Ports. In this example, the MIMIX installation consists of three systems,
two of which are network systems.
In both Figure 8 and Figure 9, if you need to use port aliases for port 50410, you need
to have a service table entry on each system that equates the port number to the port
alias. For example, you might have a service table entry on system LONDON that
defines an alias of MXMGT for port number 50410. Similarly, you might have service
table entries on systems HONGKONG and CHICAGO that define an alias of MXNET
for port 50410. You would use these aliases in the PORT1 and PORT2 parameters in
the transfer definition.
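For example, the LONDON entry might be added with a command of the following
general form; the description text is illustrative, and the alias is specified in
uppercase as the procedure later in this chapter suggests:
ADDSRVTBLE SERVICE('MXMGT') PORT(50410) PROTOCOL('TCP')
           TEXT('MIMIX native TCP port alias')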
Port aliases-complex example
If a network system communicates with more than one management system (it
participates with multiple MIMIX installations), it must have a different port for each
management system with which it communicates. Figure 10 shows an example of
such an environment with two MIMIX installations. In the LIBA cluster, the port 50410
is used to communicate between LONDON (the management system) and
HONGKONG and CHICAGO (network systems). In the LIBB cluster, the port 50411 is
used to communicate between CHICAGO (the management system for this cluster)
and MEXICITY and CAIRO. The CHICAGO system has two port numbers defined,
one for each MIMIX installation in which it participates.
Figure 10. Creating Port Aliases. In this example, the system CHICAGO participates in two
MIMIX installations and uses a separate port for each MIMIX installation.
If you need to use port aliases in an environment such as Figure 10, you need to have
a service table entry on each system that equates the port number to the port alias. In
this example, CHICAGO would require two port aliases and two service table entries.
For example, you might use a port alias of LIBAMGT for port 50410 on LONDON and
an alias of LIBANET for port 50410 on both HONGKONG and CHICAGO. You might
use an alias of LIBBMGT for port 50411 on CHICAGO and an alias of LIBBNET for
port 50411 on both CAIRO and MEXICITY. You would use these port aliases in the
PORT1 and PORT2 parameters on the transfer definitions.
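For instance, the two service table entries on CHICAGO might be added as follows;
the description text is illustrative:
ADDSRVTBLE SERVICE('LIBANET') PORT(50410) PROTOCOL('TCP')
           TEXT('MIMIX LIBA network port')
ADDSRVTBLE SERVICE('LIBBMGT') PORT(50411) PROTOCOL('TCP')
           TEXT('MIMIX LIBB management port')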
Creating port aliases
The following procedure describes the steps for creating port aliases which allow
MIMIX installations to communicate through TCP/IP.
Notes:
Perform this procedure on each system in the MIMIX installation that will use
the TCP protocol.
To allow communications in both directions between a pair of systems, such as
between a management system and a network system, you need to add port
aliases for both systems in the pair on each system.
If you are using more than one MIMIX installation, define a different set of
aliases for each MIMIX installation.
Do the following to create a port alias on a system:
1. From a command line, type the command CFGTCP and press Enter.
2. The Configure TCP/IP menu appears. Select option 21 (Configure related tables)
and press Enter.
3. The Configure Related Tables display appears. Select option 1 (Work with
service table entries) and press Enter.
4. The Work with Service Table Entries display appears. Do the following:
a. Type a 1 in the Opt column next to the blank lines at the top of the list.
b. In the blank at the top of the Service column, use uppercase characters to
specify the alias that the System i will use to identify this port as a MIMIX native
TCP port.
Note: Port alias names are case sensitive and must be unique to the system
on which they are defined. For environments that have only one MIMIX
installation, Vision Solutions recommends that you use the same port
number or same port alias on each system in the MIMIX installation.
c. In the blank at the top of the Port column, specify the number of an unused port
ID to be associated with the alias. The port ID can be any number greater than
1024 and less than 55534 that is not being used by another application. You
can page down through the list to ensure that the number is not being used by
the system.
d. In the blank at the top of the Protocol column, type TCP to identify this entry as
using TCP/IP communications.
e. Press Enter.
5. The Add Service Table Entry (ADDSRVTBLE) display appears. Verify that the
information shown for the alias and port is what you want. At the Text 'description'
prompt, type a description of the port alias, enclosed in apostrophes, and then
press Enter.
Configuring APPC/SNA
Before you create a transfer definition that uses the SNA protocol, a functioning SNA
(APPN or APPC) line, controller, and device must exist between the systems that will
be identified by the transfer definition. If a line, controller, and device do not exist,
consult your network administrator before continuing.
Note: MIMIX no longer fully supports the SNA protocol. Vision Solutions will only
assist customers to determine possible workarounds if communication related
issues arise when using SNA. If you create transfer definitions that specify
*SNA for protocol, be certain that your business environment can accept this
limitation.
Attention: MIMIX requires that you restrict the length of port
aliases to 14 or fewer characters and suggests that you specify the
alias in uppercase characters.
System-level communications
148
Configuring OptiConnect
If you plan to use the OptiConnect protocol, a functioning OptiConnect line must exist
between the two systems that you identify in the transfer definition.
Note: MIMIX no longer fully supports the OptiConnect/400 protocol. Vision Solutions
will only assist customers to determine possible workarounds if
communication-related issues arise when using OptiConnect. If you create transfer
definitions that specify *OPTI for protocol, be certain that your business
environment can accept this limitation.
You can use the OptiConnect product from IBM for all communication for most¹
MIMIX processes. Use the IBM book OptiConnect for OS/400 to install and verify
OptiConnect communications. Then you can do the following:
Ensure that the QSOC library is in the system portion of the library list. Use the
command DSPSYSVAL SYSVAL(QSYSLIBL) to verify whether the QSOC library
is in the system portion of the library list. If it is not, use the CHGSYSVAL command
to add this library to the system library list (see the sketch after this list).
When you create the transfer definition, specify *OPTI for the transfer protocol.
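The following is a minimal sketch of that check and change. Note that the VALUE
string on CHGSYSVAL must repeat every library that should remain in the system
library list; the list shown assumes the IBM-supplied defaults and is illustrative only:
DSPSYSVAL SYSVAL(QSYSLIBL)
CHGSYSVAL SYSVAL(QSYSLIBL) VALUE('QSYS QSYS2 QHLPSYS QUSRSYS QSOC')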
1. The #FILDTA audit and the Compare File Data (CMPFILDTA) command require TCP/IP
communications.
CHAPTER 7 Configuring system definitions
By creating a system definition, you identify to MIMIX characteristics of IBM Power™
Systems that participate in a MIMIX installation.
When you create a system definition, MIMIX automatically creates a journal definition
for the security audit journal (QAUDJRN) for the associated system. This journal
definition is used by MIMIX system journal replication processes. It is recommended
that you avoid naming system definitions based on their roles. System roles such as
source, target, production, and backup change upon switching.
The topics in this chapter include:
Tips for system definition parameters on page 150 provides tips for using the
more common options for system definitions.
Creating system definitions on page 153 provides the steps to follow for creating
system definitions.
Changing a system definition on page 154 provides the steps to follow for
changing a system definition.
Multiple network system considerations on page 155 describes
recommendations when configuring an environment that has multiple network
systems.
Tips for system definition parameters
This topic provides tips for using the more common options for system definitions.
Context-sensitive help is available online for all options on the system definition
commands.
System definition (SYSDFN) This parameter is a single-part name that represents a
system within a MIMIX installation. This name is a logical representation and does not
need to match the system name that it represents.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_).
System type (TYPE) This parameter indicates the role of this system within the
MIMIX installation. A system can be a management (*MGT) system or a network
(*NET) system. Only one system in the MIMIX installation can be a management
system.
Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the
primary and secondary transfer definitions used for communicating with the system.
The communications path and protocol are defined in the transfer definitions. For
MIMIX to be operational, the transfer definition names you specify must exist. MIMIX
does not automatically create transfer definitions. If you accept the default value
primary for the Primary transfer definition, create a transfer definition by that name.
If you specify a Secondary transfer definition, it will be used by MIMIX if the
communications path specified by the primary transfer definition is not available.
Cluster member (CLUMBR) You can specify if you want this system definition to be
a member of a cluster. The system (node) will not be added to the cluster until the
system manager is started the first time.
Cluster transfer definition (CLUTFRDFN) You can specify the transfer definition
that cluster resource services will use to communicate to the node and for the node to
communicate with other nodes in the cluster. You must specify *TCP as the transfer
protocol.
Message handling (PRIMSGQ, SECMSGQ) MIMIX uses the centralized message
log facility which is common to all MIMIX products. These parameters provide
additional flexibility by allowing you to identify the message queues associated with
the system definition and define the message filtering criteria for each message
queue. By default, the primary message queue, MIMIX, is located in the MIMIXQGPL
library. You can specify a different message queue or optionally specify a secondary
message queue. You can also control the severity and type of messages that are sent
to each message queue.
Manager delay times (JRNMGRDLY, SYSMGRDLY) Two parameters define the
delay times used for all journal management and system management jobs. The
value of the journal manager delay parameter determines how often the journal
manager process checks for work to perform. The value of the system manager delay
parameter determines how often the system manager process checks for work to
perform.
Output queue values (OUTQ, HOLD, SAVE) These parameters identify an output
queue used by this system definition and define characteristics of how the queue is
handled. Any MIMIX functions that generate reports use this output queue. You can
hold spooled files on the queue and save spooled files after they are printed.
Keep history (KEEPSYSHST, KEEPDGHST) Two parameters specify the number of
days to retain MIMIX system history and data group history. MIMIX system history
includes the system message log. Data group history includes time stamps and
distribution history. You can keep both types of history information on the system for
up to a year.
Keep notifications (KEEPNEWNFY, KEEPACKNFY) Two parameters specify the
number of days to retain new and acknowledged notifications. The Keep new
notifications (days) parameter specifies the number of days to retain new notifications
in the MIMIX data library. The Keep acknowledged notifications (days) parameter
specifies the number of days to retain acknowledged notifications in the MIMIX data
library.
MIMIX data library, storage limit (KEEPMMXDTA, DTALIBASP, DSKSTGLMT)
Three parameters define information about MIMIX data libraries on the system. The
Keep MIMIX data (days) parameter specifies the number of days to retain objects in
the MIMIX data library, including the container cache used by system journal
replication processes. The MIMIX data library ASP parameter identifies the auxiliary
storage pool (ASP) from which the system allocates storage for the MIMIX data
library. For libraries created in a user ASP, all objects in the library must be in the
same ASP as the library. The Disk storage limit (GB) parameter specifies the
maximum amount of disk storage that may be used for the MIMIX data libraries.
User profile and job descriptions (SBMUSR, MGRJOBD, DFTJOBD) MIMIX runs
under the MIMIXOWN user profile and uses several job descriptions to optimize
MIMIX processes. The default job descriptions are stored in the MIMIXQGPL library.
Job restart time (RSTARTTIME) System-level MIMIX jobs, including the system
manager and journal manager, restart daily to maintain the MIMIX environment. You
can change the time at which these jobs restart. The management or network role of
the system affects the results of the time you specify on a system definition. Changing
the job restart time is considered an advanced technique.
Printing (CPI, LPI, FORMLEN, OVRFLW, COPIES) These parameters control
characteristics of printed output.
Product library (PRDLIB) This parameter is used for installing MIMIX into a
switchable independent ASP, and allows you to specify a MIMIX installation library
that does not match the library name of the other system definitions. The only time
this parameter should be used is in the case of an INTRA system (which is handled by
the default value) or in replication environments where it is necessary to have extra
MIMIX system definitions that will switch locations along with the switchable
independent ASP. Due to its complexity, changing the product library is considered an
advanced technique and should not be attempted without the assistance of a Certified
MIMIX Consultant.
ASP group (ASPGRP) This parameter is used for installing MIMIX into a switchable
independent ASP, and defines the ASP group (independent ASP) in which the
product library exists. Again, this parameter should only be used in replication
environments involving a switchable independent ASP. Due to its complexity,
changing the ASP group is considered an advanced technique and should not be
attempted without the assistance of a Certified MIMIX Consultant.
Creating system definitions
To create a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
2. The Work with System Definitions display appears. Type a 1 (Create) next to the
blank line at the top of the list area and press Enter.
3. The Create System Definition (CRTSYSDFN) display appears. Specify a name at
the System definition prompt. Once created, the name can only be changed by
using the Rename System Definition command.
4. Specify the appropriate value for the system you are defining at the System type
prompt.
5. Specify the names of the transfer definitions you want at the Primary transfer
definition and, if desired, the Secondary transfer definition prompts.
6. If the system definition is for a cluster environment, do the following:
a. Specify *YES at the Cluster member prompt.
b. Verify that the value of the Cluster transfer definition is what you want. If
necessary, change the value.
7. If you want to use a secondary message queue, at the prompts for Secondary
message handling specify the name and library of the message queue and values
indicating the severity and the Information type of messages to be sent to the
queue.
8. At the Description prompt, type a brief description of the system definition.
9. If you want to verify or change values for additional parameters, press F10
(Additional parameters).
10. To create the system definition, press Enter.
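As an illustration only, the equivalent command form for a management system
might look like the following; the names and description are examples, and you
should prompt the CRTSYSDFN command (F4) to verify the full parameter list:
CRTSYSDFN SYSDFN(LONDON) TYPE(*MGT) PRITFRDFN(PRIMARY)
          TEXT('Management system in London')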
Changing a system definition
To change a system definition, do the following:
1. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
2. The Work with System Definitions display appears. Type a 2 (Change) next to the
system definition you want and press Enter.
3. The Change System Definition (CHGSYSDFN) display appears. Press F10
(Additional parameters).
4. Locate the prompt for the parameter you need to change and specify the value
you want. Press F1 (Help) for more information about the values for each
parameter.
5. To save the changes press Enter.
Multiple network system considerations
When configuring an environment that has multiple network systems, it is
recommended that each system definition in the environment specify the same name
for the Primary transfer definition prompt. This configuration is necessary for the
MIMIX system managers to communicate between the management system and all
systems in the network. Data groups can use the same transfer definitions that the
system managers use, or they can use differently named transfer definitions.
Similarly, if you use secondary transfer definitions, it is recommended that each
system definition in the multiple network environment specifies the same name for the
Secondary transfer definition prompt. (The value of the Secondary transfer definition
should be different from the value of the Primary transfer definition.)
Figure 11 shows system definitions in a multiple network system environment. The
management system (LONDON) specifies the value PRIMARY for the primary
transfer definition in its system definition. The management system can communicate
with the other systems using any transfer definition named PRIMARY that has a value
for System 1 or System 2 that resolves to its system name (LONDON). Figure 12
shows the recommended transfer definition configuration which uses the value *ANY
for both systems identified by the transfer definition.
The management system LONDON could also use any transfer definition that
specified the name LONDON as the value for either System 1 or System 2.
The default value for the name of a transfer definition is PRIMARY. If you use a
different name, you need to specify that name as the value for the Primary transfer
definition prompt in all system definitions in the environment.
Figure 11. Example of system definition values in a multiple network system environment.

 Work with System Definitions
                                                          System:   LONDON
 Type options, press Enter.
   1=Create  2=Change  3=Copy  4=Delete  5=Display  6=Print  7=Rename
   11=Verify communications link  12=Journal definitions
   13=Data group definitions  14=Transfer definitions

                       -Transfer Definitions-   Cluster
 Opt  System    Type    Primary     Secondary   Member
 __   _______
 __   CHICAGO   *NET    PRIMARY     *NONE       *NO
 __   NEWYORK   *NET    PRIMARY     *NONE       *NO
 __   LONDON    *MGT    PRIMARY     *NONE       *NO

Figure 12. Example of a contextual (*ANY) transfer definition in use for a multiple network
system environment.

 Work with Transfer Definitions
                                                          System:   LONDON
 Type options, press Enter.
   1=Create  2=Change  3=Copy  4=Delete  5=Display  6=Print  7=Rename
   11=Verify communications link

           ---------Definition---------             Threshold
 Opt  Name        System1   System2   Protocol      (MB)
 __   __________  _______   ________
      PRIMARY     *ANY      *ANY      *TCP          *NOMAX
CHAPTER 8 Configuring transfer definitions
By creating a transfer definition, you identify to MIMIX the communications path and
protocol to be used between two systems. You need at least one transfer definition for
each pair of systems between which you want to perform replication. A pair of
systems consists of a management system and a network system. If you want to be
able to use different transfer protocols between a pair of systems, create a transfer
definition for each protocol.
System-level communication must be configured and operational before you can use
a transfer definition.
You can also define an additional communications path in a secondary transfer
definition. If configured, MIMIX can automatically use a secondary transfer definition if
the path defined in your primary transfer definition is not available.
In an Intra environment, a transfer definition defines a communications path and
protocol to be used between the two product libraries used by Intra. For detailed
information about configuring an Intra environment, refer to Configuring Intra
communications on page 542.
Once transfer definitions exist for MIMIX, they can be used for other functions, such
as the Run Command (RUNCMD), or by other MIMIX products for their operations.
The topics in this chapter include:
Tips for transfer definition parameters on page 159 provides tips for using the
more common options for transfer definitions.
Using contextual (*ANY) transfer definitions on page 163 describes using the
value (*ANY) when configuring transfer definitions.
Creating a transfer definition on page 166 provides the steps to follow for
creating a transfer definition.
Changing a transfer definition on page 167 provides the steps to follow for
changing a transfer definition. This topic also includes a sub-task for changing a
transfer definition when converting to a remote journaling environment.
Finding the system database name for RDB directory entries on page 169
provides the steps to follow for finding the system database name for RDB
directory entries.
Starting the TCP/IP server on page 170 provides the steps to follow if you need
to start the Lakeview TCP/IP server.
Using autostart job entries to start the TCP server on page 171 provides the
steps to configure the Lakeview TCP server to start automatically every time the
MIMIX subsystem is started.
Verifying a communications link for system definitions on page 175 provides the
steps to verify that the communications link defined for each system definition is
operational.
Verifying the communications link for a data group on page 176 provides a
procedure to verify the primary transfer definition used by the data group.
Tips for transfer definition parameters
This topic provides tips for using the more common options for transfer definitions.
Context-sensitive help is available online for all options on the transfer definition
commands.
Transfer definition (TFRDFN) This parameter is a three-part name that identifies a
communications path between two systems. The first part of the name identifies the
transfer definition. The second and third parts of the name identify two different
system definitions which represent the systems between which communication is
being defined. It is recommended that you use PRIMARY as the name of one transfer
definition. To support replication, a transfer definition must identify the two systems
that will be used by the data group. You can explicitly specify the two systems, or you
can allow MIMIX to resolve the names of the systems. For more information about
allowing MIMIX to resolve the system names, see Using contextual (*ANY) transfer
definitions on page 163.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_).
For more information, see Naming convention for remote journaling environments
with 2 systems on page 185.
Short transfer definition name (TFRSHORTN) This parameter specifies the short
name of the transfer definition to be used in generating a relational database (RDB)
directory name. The short transfer definition name must be a unique, four-character
name if you specify to have MIMIX manage your RDB directory entries. It is
recommended that you use the default value *GEN to generate the name. The
generated name is a concatenation of the first character of the transfer definition
name, the last character of the system 1 name, the last character of the system 2
name, and a fourth character that is either a blank, a letter (A - Z), or a single digit
(0 - 9).
Transfer protocol (PROTOCOL) This parameter specifies the communications
protocol to be used. Each protocol has a set of related parameters. If you change the
protocol specified after you have created the transfer definition, MIMIX saves
information about both protocols.
For the *TCP protocol the following parameters apply:
System x host name or address (HOST1, HOST2) These two parameters
specify the host name or address of system 1 and system 2, respectively. The
name is a mixed-case host alias name or a TCP address (nnn.nnn.nnn.nnn) and
can be up to 256 characters in length. For the HOST1 parameter, the special
value *SYS1 indicates that the host name is the same as the name specified for
System 1 in the Transfer definition parameter. Similarly, for the HOST2 parameter,
the special value *SYS2 indicates that the host name is the same as the name
specified for System 2 in the Transfer definition parameter.
Note: The specified value is also used when starting the Lakeview TCP Server
(STRSVR command). The HOST parameter on the STRSVR command is
limited to 80 or fewer characters.
System x port number or alias (PORT1, PORT2) These two parameters specify
the port number or port alias of system 1 and system 2, respectively. The value of
each parameter can be a 14-character mixed-case TCP port number or port alias
with a range from 1000 through 55534. To avoid potential conflicts with
designations made by the operating system, it is recommended that you use
values between 40000 and 55500. By default, the PORT1 parameter uses the
port 50410. For the PORT2 parameter, the default special value *PORT1
indicates that the value specified on the System 1 port number or alias (PORT1)
parameter is used. If you configured TCP using port aliases in the service table,
specify the alias name instead of the port number.
The Relational database (RDB) parameter also applies to *TCP protocol.
For the *SNA protocol the following parameters apply:
System x location name (LOCNAME1, LOCNAME2) These two parameters
specify the location name or address of system 1 and system 2, respectively. The
value of each parameter is the unique location name that identifies the system to
remote devices. For the LOCNAME1 parameter, the special value *SYS1
indicates that the location name is the same as the name specified for System 1
on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2
parameter, the special value *SYS2 indicates that the location name is the same
as the name specified for System 2 on the Transfer definition (TFRDFN)
parameter.
System x network identifier (NETID1, NETID2) These two parameters specify
the name of the network for system 1 and system 2, respectively. The default value
*LOC indicates that the network identifier for the location name associated with
the system is used. The special value *NETATR indicates that the value specified
in the system network attributes is used. The special value *NONE indicates that
the network has no name. For the NETID2 parameter, the special value *NETID1
indicates that the network identifier specified on the System 1 network identifier
(NETID1) parameter is used.
SNA mode (MODE) This parameter specifies the name of mode description used
for communication. The default name is MIMIX. The special value *NETATR
indicates that the value specified in the system network attributes is used.
The following parameters apply for the *OPTI protocol:
System x location name (LOCNAME1, LOCNAME2) These two parameters
specify the location name or address of system 1 and system 2, respectively. The
value of each parameter is the unique location name that identifies the system to
remote devices. For the LOCNAME1 parameter, the special value *SYS1
indicates that the location name is the same as the name specified for System 1
on the Transfer definition (TFRDFN) parameter. Similarly, for the LOCNAME2
parameter, the special value *SYS2 indicates that the location name is the same
as the name specified for System 2 on the Transfer definition (TFRDFN)
parameter.
Threshold size (THLDSIZE) This parameter is accessible when you press F10
(Additional parameters). It specifies the maximum size of files and objects that are
sent; a file or object that exceeds the threshold is not sent. Valid values range from
1 through 9999999. The special value *NOMAX indicates that no maximum value is
set. Transmitting large files and objects can consume excessive communications
bandwidth and negatively impact communications performance, especially for slow
communication lines.
Manage autostart job entries (MNGAJE) This parameter is accessible when you
press F10 (Additional parameters). This determines whether MIMIX will use this
transfer definition to manage an autostart job entry for starting the TCP server for the
MIMIXQGPL/MIMIXSBS subsystem description. The shipped default is *YES,
whereby MIMIX will add, change, or remove an autostart job entry based on changes
to this transfer definition. This parameter only affects transfer definitions for TCP
protocol which have host names of 80 or fewer characters. For a given port number or
alias, only one autostart job entry will be created regardless of how many transfer
definitions use that port number or alias. An autostart job entry is created on each
system related to the transfer definition.
When configuring a new installation, transfer definitions and MIMIX-added autostart
job entries do not exist on other systems until after the first time the MIMIX managers
are started. Therefore, during initial configuration you may need to manually start the
TCP server on the other systems using the STRSVR command.
Relational database (RDB) This parameter is accessible when you press F10
(Additional parameters) and is valid when default remote journaling configuration is
used. The parameter consists of a four relational database values, which identify the
communications path used by the IBM i remote journal function to transport journal
entries: a relational database directory entry name, two system database names, and
a management indicator for directory entries. This parameter creates two RDB
directory entries, one on each system identified in the transfer definition. Each entry
identifies the other system's relational database.
Note: If you use the value *ANY for both system 1 and system 2 on the transfer
definition, *NONE is used for the directory entry name, and no directory entry
is generated.
If MIMIX is managing your RDB directory entries, a directory entry is
generated if you use the value *ANY for only one of the systems on the
transfer definition. This directory entry is generated for the system that is
specified as something other than *ANY. For more information about the use
of the value *ANY on transfer definitions, see Using contextual (*ANY)
transfer definitions on page 163.
The four elements of the relational database parameter are:
Directory entry This element specifies the name of the relational database entry.
The default value *GEN causes MIMIX to create an RDB entry and add it to the
relational database. The generated name is in the format MX_nnnnnnnnnn_ssss,
where nnnnnnnnnn is the 10-character installation name, and ssss is the transfer
definition short name. If you specify a value for the RDB parameter, it is
recommended that you limit its length to 18 characters. When you specify the
special value *NONE, the directory entry is not added or changed by MIMIX.
System 1 relational database This element specifies the name of the relational
database for System 1. The default value *SYSDB specifies that MIMIX will
determine the relational database name. If you are managing the RDB directory
entries and you need to determine the system database name, refer to Finding
the system database name for RDB directory entries on page 169.
Note: For remote journaling that uses an independent ASP, specify the database
name for the independent ASP.
System 2 relational database This element specifies the name of the relational
database for System 2. The default value *SYSDB specifies that MIMIX will
determine the relational database name. If you are managing the RDB directory
entries and you need to determine the system database name, refer to Finding
the system database name for RDB directory entries on page 169.
Note: For remote journaling that uses an independent ASP, specify the database
name for the independent ASP.
Manage directory entries This element specifies that MIMIX will manage the
relational database directory entries associated with the transfer definition
whether the directory entry name is specified or whether the directory entry name
is generated by MIMIX. Management of the relational database directory entries
consists of adding, changing, and deleting the directory entries on both systems,
as needed, when the transfer definition is created, changed, or deleted. The
special value *DFT indicates that MIMIX manages the relational database
directory entries only when the name is generated using the special value *GEN
on the Directory entry element of this parameter. The special value *YES indicates
that the directory entries on each system are managed by MIMIX. If the relational
database directory entries do not exist, MIMIX adds them and sets any needed
system values. If they do exist, MIMIX changes them to match the values
specified by the Relational database (RDB) parameter. When any of the transfer
definition relational database values change, the directory entry is also changed.
When the transfer definition is deleted, the directory entries are also deleted.
Using contextual (*ANY) transfer definitions
When the three-part name of a transfer definition specifies the value *ANY for System 1
or System 2 instead of system names, MIMIX uses information from the context in which
the transfer definition is called to resolve to the correct system. Such a transfer
definition is called a contextual transfer definition.
For remote journaling environments, best practice is to use transfer definitions that
identify specific system definitions in the three-part transfer definition name. Although
you can use contextual transfer definitions with remote journaling, they are not
recommended. For more information, see Considerations for remote journaling on
page 164.
In MIMIX source-send configurations, a contextual transfer definition may be an aid in
configuration. For example, you might create a transfer definition named PRIMARY SYSA
*ANY. This definition can be used to provide the necessary parameters for
establishing communications between SYSA and any other system.
The *ANY value represents several transfer definitions, one for each system
definition. For example, a transfer definition PRIMARY SYSA *ANY in an installation
that has three system definitions (SYSA, SYSB, INTRA) represents three transfer
definitions:
PRIMARY SYSA SYSA
PRIMARY SYSA SYSB
PRIMARY SYSA INTRA
Search and selection process
Data group definitions and system definitions include parameters that identify
associated transfer definitions. When an operation requires a transfer definition,
MIMIX uses the context of the operation to determine the fully qualified name. For
example, when starting a data group, MIMIX uses information in the data group
definition, the systems specified in the data group name and the specified transfer
definition name, to derive the fully qualified transfer definition name. If MIMIX is still
unable to find an appropriate transfer definition, the following search order is used:
1. PRIMARY SYSA SYSB
2. PRIMARY *ANY SYSB
3. PRIMARY SYSA *ANY
4. PRIMARY SYSB SYSA
5. PRIMARY *ANY SYSA
6. PRIMARY SYSB *ANY
7. PRIMARY *ANY *ANY
When you specify *ANY in the three-part name of a transfer definition, and you have
specified *TFRDFN for the Protocol parameter on such commands as RUNCMD or
VFYCMNLNK, MIMIX searches your system and selects those systems with a
transfer definition that matches the transfer definition that you specified, for example,
(PRIMARY SYSA SYSB).
Considerations for remote journaling
Best practice for a remote journaling environment is to use a transfer definition that
identifies specific system definitions in the three-part transfer definition name. By
specifying both systems, the transfer definition can be used for replication from either
direction.
If you do use a contextual transfer definition in a remote journaling environment, the
value *ANY can be used for the system where the local journal (source) resides. This
value can be either the second or third parts of the three-part name. For example, a
transfer definition of PRIMARY name *ANY is valid in a remote journaling
environment, where name identifies the system definition for the system where the
remote journal (target) resides. A transfer definition of PRIMARY *ANY name is also
valid. The command would look like this:
CRTTFRDFN TFRDFN(PRIMARY name *ANY) TEXT('description')
MIMIX Remote Journal support requires that each transfer definition that will be used
has a relational database (RDB) directory entry to properly identify the remote
system. An RDB directory entry cannot be added to a transfer definition using the
value *ANY for the remote system.
To support a switchable data group when using contextual transfer definitions, each
system in the remote journaling environment must be defined by a contextual transfer
definition. For example, in an environment with systems NEWYORK and CHICAGO,
you would need a transfer definition named PRIMARY NEWYORK *ANY as well as a
transfer definition named PRIMARY CHICAGO *ANY.
Considerations for MIMIX source-send configurations
When creating a transfer definition for a MIMIX source-send configuration that uses
contextual system capability (*ANY) and the TCP protocol, take the default values for
other parameters on the CRTTFRDFN command. For example, using the naming
conventions for contextual systems, the command would look like this:
CRTTFRDFN TFRDFN(PRIMARY *ANY *ANY) TEXT('Recommended configuration')
Note: Ensure that you consult with your site TCP administrator before making these
changes.
For an Intra environment, an additional transfer definition is needed. If there is an
Intra system definition defined, the transfer definition must specify a unique port
number to communicate with Intra. The following is an example of an additional
transfer definition that uses port number 42345 to establish communications with the
Intra system:
CRTTFRDFN TFRDFN(PRIMARY *ANY INTRA) PORT2(42345)
TEXT('Recommended configuration')
Naming conventions for contextual transfer definitions
The following suggested naming conventions make the contextual (*ANY) transfer
definitions more useful in your environment.
*TCP protocol: The MIMIX system definition names should correspond to DNS or
host table entries that tie the names to a specific TCP address.
*SNA protocol: The MIMIX system definition names must match the SNA environment
(controller names) for the respective systems. The MIMIX system definitions should
match the net attribute system name (DSPNETA). For example, with two MIMIX
systems called SYSA and SYSB, on the SYSA system there would have to be a
controller called SYSB that is used for SYSA to SYSB communications. Conversely,
on SYSB, a SYSA controller would be necessary.
*OPTI protocol: The MIMIX system definition names must match the OptiConnect
names for the systems (DSPOPCLNK).
Additional usage considerations for contextual transfer definitions
The Run Command (RUNCMD) and the Verify Communications Link (VFYCMNLNK)
commands require specific system names to verify communications between
systems. These commands do not handle transfer definitions that specify *ANY in the
three-part name.
When the VFYCMNLNK command is called from option 11 on the Work with System
Definitions display or option 11 on the Work with Data Groups display, MIMIX
determines the specific system names. However, when the command is called from
option 11 on the Work with Transfer Definitions display, entered from a command line,
or included in automation programs, you will receive an error message if the transfer
definition has the value *ANY for either system 1 or system 2.
Creating a transfer definition
System-level communication must be configured and operational before you can use
a transfer definition.
To create a transfer definition, do the following:
1. Access the Work with Transfer Definitions display by doing one of the following:
From the MIMIX Configuration Menu, select option 2 (Work with transfer
definitions) and press Enter.
From the MIMIX Cluster Menu, select option 21 (Work with transfer definitions)
and press Enter.
2. The Work with Transfer Definitions display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
3. The Create Transfer Definition display appears. Do the following:
a. At the Transfer definition prompts, specify a name and the two system
definitions between which communications will occur.
b. At the Short transfer definition name prompt, accept the default value *GEN to
generate a short transfer definition name. This short transfer definition name is
used in generating relational database directory entry names if you specify to
have MIMIX manage your RDB directory entries.
c. At the Transfer protocol prompt, specify the communications protocol you
want, then press Enter. The value *TCP is strongly recommended for all
environments and is required for MIMIX Global.
4. Additional parameters for the protocol you selected appear on the display. Verify
that the values shown are what you want. Make any necessary changes.
5. At the Description prompt, type a text description of the transfer definition,
enclosed in apostrophes.
6. Optional step: If you need to set a maximum size for files and objects to be
transferred, press F10 (Additional parameters). At the Threshold size (MB)
prompt, specify a valid value.
7. Optional step: If you need to change the relational database information, press
F10 (Additional parameters). See Tips for transfer definition parameters on
page 159 for details about the Relational database (RDB) parameter. If MIMIX is
not managing the RDB directory entries, it may be necessary to change the RDB
values.
8. To create the transfer definition, press Enter.
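For example, a TCP transfer definition between two named systems might be
created with a command of this general form; the port number and description are
illustrative, and the host and port values shown are the documented defaults:
CRTTFRDFN TFRDFN(PRIMARY LONDON HONGKONG) PROTOCOL(*TCP)
          HOST1(*SYS1) HOST2(*SYS2) PORT1(50410) PORT2(*PORT1)
          TEXT('Primary TCP path between LONDON and HONGKONG')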
Changing a transfer definition
To change a transfer definition, do the following:
1. Access the Work with Transfer Definitions display: from the MIMIX Configuration
Menu, select option 2 (Work with transfer definitions) and press Enter.
2. The Work with Transfer Definitions display appears. Type 2 (Change) next to the
definition you want and press Enter.
3. The Change Transfer Definition (CHGTFRDFN) display appears. If you want to
change which protocol is used between the specified systems, specify the value
you want for the Transfer protocol prompt.
4. Press Enter to display the parameters for the specified transfer protocol. Locate
the prompt for the parameter you need to change and specify the value you want.
Press F1 (Help) for more information about the values for each parameter.
5. If you need to set a maximum size for files and objects to be transferred, press
F10 (Additional parameters). At the Threshold size (MB) prompt, specify a valid
value.
6. If you need to create or remove an autostart job entry for the TCP server, press
F10 (Additional parameters). At the Manage autostart job entries prompt, specify
the value you want. When *YES is specified, MIMIX will add, change, or remove
the autostart entry based on changes to the transfer definition. For a given port
number or alias, only one autostart job entry will be created regardless of how
many transfer definitions use that port number or alias. An autostart job entry is
created on each system related to the transfer definition.
7. If you need to change your relational database information, press F10 (Additional
parameters). At the Relational database (RDB) prompt, specify the desired values
for each of the four elements and press Enter. For special considerations when
changing your transfer definitions that are configured to use RDB directory entries
see Tips for transfer definition parameters on page 159.
8. To save changes to the transfer definition, press Enter.
Changing a transfer definition to support remote journaling
If the value *ANY is specified for either system in the transfer definition, before you
complete this procedure refer to Using contextual (*ANY) transfer definitions on
page 163. Contextual transfer definitions are not recommended in a remote journaling
environment.
To support remote journaling, modify the transfer definition you plan to use as follows:
1. From the MIMIX Configuration menu, select option 2 (Work with transfer
definitions) and press Enter.
2. The Work with Transfer Definitions display appears. Type a 2 (Change) next to the
definition you want and press Enter.
3. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10
(Additional parameters), then press Page Down.
4. At the Relational database (RDB) prompt, specify the desired values for each of
the four elements and press Enter.
Note: See Tips for transfer definition parameters on page 159 for detailed
information about the Relational database (RDB) parameter. Also see
Finding the system database name for RDB directory entries on
page 169 for special considerations when changing your transfer
definitions that are configured to use RDB directory entries.
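As an illustration, a change that lets MIMIX generate and manage the RDB directory
entries might look like the following; the system names are examples, and the four
RDB elements shown are the documented defaults:
CHGTFRDFN TFRDFN(PRIMARY LONDON HONGKONG) RDB(*GEN *SYSDB *SYSDB *DFT)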
Finding the system database name for RDB directory
entries
To find the system database name, do the following:
1. Login to the system that was specified for System 1 in the transfer definition.
2. From the command line type DSPRDBDIRE and press Enter. Look for the
relational database directory entry that has a corresponding remote location name
of *LOCAL.
3. Repeat steps 1 and 2 to find the system database name for System 2.
Using IBM i commands to work with RDB directory entries
The Manage directory entries element of the Relational Database (RDB) parameter in
the transfer definition determines whether MIMIX manages RDB directory entries. If
you did not accept default values of *GEN for the Directory entry element and *DFT
for the Manage directory entries element of the RDB parameter when you created
your transfer definition, or if you specified *NO for the Manage directory entries
element, you can use IBM i commands to add and change RDB directory entries. The
Add RDB Directory Entry (ADDRDBDIRE) command will add an entry. The Change
RDB Directory Entry (CHGRDBDIRE) command will change an existing RDB
directory entry. You can also use these options from the Work with Relational
Database Directory entries display (WRKRDBDIRE command): 1=Add, 2=Change,
and 5=Display details.
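For example, assuming a remote system database named NEWYORK reachable at host newyork.example.com (both illustrative values), the commands might look like this:
ADDRDBDIRE RDB(NEWYORK) RMTLOCNAME('newyork.example.com' *IP)
CHGRDBDIRE RDB(NEWYORK) RMTLOCNAME('192.0.2.10' *IP)
The second element of the RMTLOCNAME parameter (*IP) indicates that the remote location is identified by a TCP/IP host name or address.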
Starting the TCP/IP server
Use this procedure if you need to manually start the TCP/IP server.
Once the TCP communication connections have been defined in a transfer definition,
the TCP server must be started on each of the systems identified by the transfer
definition.
You can also start the TCP/IP server automatically through an autostart job entry.
Either you can change the transfer definition to allow MIMIX to create and manage
the autostart job entry for the TCP/IP server, or you can add your own autostart job
entry. MIMIX only manages entries for the server when they are created by transfer
definitions.
When configuring a new installation, transfer definitions and MIMIX-added autostart
job entries do not exist on other systems until after the first time the MIMIX managers
are started. Therefore, during initial configuration you may need to manually start the
TCP server on the other systems using the STRSVR command.
Note: Use the host name and port number (or port alias) defined in the transfer
definition for the system on which you are running this command.
Do the following on the system on which you want to start the TCP server:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and
press Enter.
2. The Utilities Menu appears. Select option 51 (Start TCP server) and press Enter.
3. The Start Lakeview TCP Server display appears. At the Host name or address
prompt, specify the host name or address for the local system as defined in the
transfer definition.
4. At the Port number or alias prompt, specify the port number or alias as defined in
the transfer definition for the local system.
Note: If you specify an alias, you must have an entry in the service table on this
system that equates the alias to the port number.
5. Press Enter.
6. Verify that the server job is running under the MIMIX subsystem on that system.
You can use the Work with Active Jobs (WRKACTJOB) command to look for a job
under the MIMIXSBS subsystem with a function of PGM-LVSERVER.
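For example, suppose the transfer definition specifies host name CHICAGO and port alias MXSVR, equated to port 50410 in the service table (all values illustrative). The steps above correspond to the following commands, including the service table entry mentioned in the note for step 4:
ADDSRVTBLE SERVICE('MXSVR') PORT(50410) PROTOCOL('tcp')
installation_library/STRSVR HOST('CHICAGO') PORT(MXSVR)
WRKACTJOB SBS(MIMIXSBS)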
Using autostart job entries to start the TCP server
To use TCP/IP communications, the MIMIX TCP/IP server must be started each time
the MIMIX subsystem (MIMIXSBS) is started. Because this can become a time-
consuming task that is easily forgotten, MIMIX supports automatically
creating and managing autostart job entries for the TCP server with the MIMIXSBS
subsystem. MIMIX does this when transfer definitions for TCP protocol specify *YES
for the Manage autostart job entries (MNGAJE) parameter.
The autostart job entry uses a job description that contains the STRSVR command
which will automatically start the Lakeview TCP server when the MIMIXSBS
subsystem is started. The STRSVR command is defined in the Request data or
command (RQSDTA) parameter of the job description.
When configuring a new installation, transfer definitions and MIMIX-added autostart
job entries do not exist on other systems until after the first time the MIMIX managers
are started. Therefore, during initial configuration you may need to manually start the
TCP server on the other systems using the STRSVR command.
If you prefer, you can create and manage autostart job entries yourself. The transfer
definition must specify MNGAJE(*NO) and you must have an autostart job entry on
each system that can use the transfer definition.
Identifying the current autostart job entry information
This procedure enables you to identify the autostart job entry for the STRSVR
command in the MIMIXSBS subsystem and display the current information within the
job description associated with the entry.
To display the autostart job entry information, do the following:
1. Type the command DSPSBSD MIMIXQGPL/MIMIXSBS and press Enter. The
Display Subsystem Description display appears.
2. Type 3 (Autostart job entries) and press Enter. The Display Autostart Job Entries
display appears.
3. The columns Job, Job Description, and Library identify autostart job names and
their job description information. Locate the name and library of the job description
for the autostart job entry for the STRSVR command. Typically, this job description
name is either the port alias name or PORTnnnnn, where nnnnn is the port number,
and the library name is the name of the MIMIX installation library. Press Enter.
4. To display the STRSVR details specified in the job description, do the following:
a. Using the job description information identified in Step 3, type the command
DSPJOBD library/job_description and press Enter.
b. The Display Job Description display appears. Page down to view the
Request data field. The information in this field shows the current values of the
STRSVR command used by the autostart job entry.
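For example, if the server uses port 50410 and the MIMIX installation library is named MIMIX (both illustrative), the job description identified in Step 3 is likely MIMIX/PORT50410 and the command in Step 4 would be:
DSPJOBD MIMIX/PORT50410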
Changing an autostart job entry and its related job description
When the host or port information for a system identified in a transfer definition
changes, those changes must also be reflected in autostart job entries for the
STRSVR command and in their associated job descriptions. MIMIX automatically
updates this information for MIMIX-managed autostart job entries when the transfer
definition is updated.
However, if the transfer definition specifies MNGAJE(*NO) and you are managing the
autostart job entries for the STRSVR command and their associated job descriptions
yourself, you must update them when the host or port information for a system in the
MIMIX environment changes. Specifically, the following changes to a transfer
definition require changing a user-managed autostart job entry or its associated job
description on the local system:
A change to the port number or alias identified in the PORT1 or PORT2
parameters requires replacing the job description and autostart job entry.
A change to the host name or address identified in the HOST1 or HOST2
parameters requires changing the job description.
If the transfer definition was renamed or copied so that the value of
HOST1(*SYS1) or HOST2(*SYS2) no longer resolves to the same system
definition, the job description must be changed.
Using a different job description for an autostart job entry
When MIMIX manages autostart job entries for the STRSVR command, the default
job description used to submit the job is named MIMIXCMN in library MIMIXQGPL. If
you want the STRSVR request to run using a different job description, you can do the
following:
1. Identify the job description and library for the autostart job entry using the
procedure in Identifying the current autostart job entry information on page 171.
2. Type CHGJOBD and press F4 (Prompt). The Change Job Description display
appears. Do the following:
a. For the Job description and Library prompts, specify the job description and
library names from Step 1.
b. Press F10 (Additional parameters), then Page Down.
c. The Request data or command prompt shows the current values of the
STRSVR command. Change the JOBD parameter shown to specify the library
and job description you want.
Important! Change only the JOBD information for the STRSVR command
specified within the RQSDTA parameter. Do not change the HOST or PORT
values when the autostart job entry is managed by MIMIX.
d. Press Enter.
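For example, to run the server under a job description MYJOBD in library MYLIB (illustrative names), the changed Request data or command value might read as follows; note that the HOST and PORT values are left exactly as they were:
'MIMIX/STRSVR HOST(''CHICAGO'') PORT(50410) JOBD(MYLIB/MYJOBD)'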
Updating host information for a user-managed autostart job entry
Use this procedure to update a user-managed autostart job entry which starts the
STRSVR command with the MIMIXSBS subsystem so that the request is submitted
with the correct host information. Autostart job entries for the server are user-
managed when the transfer definition specifies MNGAJE(*NO).
Important! Do not use this procedure for MIMIX-managed autostart job entries.
Perform this procedure from the local system, which is the system for which
information changed within the transfer definition. Do the following:
1. Identify the job description and library for the autostart job entry using the
procedure in Identifying the current autostart job entry information on page 171.
This information is needed in the following step.
2. Type CHGJOBD and press F4 (Prompt). The Change Job Description display
appears. Do the following:
a. For the Job description and Library prompts, specify the job description and
library names from Step 1.
b. Press F10 (Additional parameters), then Page Down to locate Request data or
command (RQSDTA).
c. The Request data or command prompt shows the current values of the
STRSVR command in the following format. Change the value specified for
HOST so that local_host_name is the host name or address specified
for the local system in the transfer definition.
'installation_library/STRSVR HOST(''local_host_name'')
PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)'
d. Press Enter.
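For example, if the local system entry in the transfer definition changed to the address 192.0.2.10 (an illustrative value), the updated request data would read:
'MIMIX/STRSVR HOST(''192.0.2.10'') PORT(50410) JOBD(MIMIXQGPL/MIMIXCMN)'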
Updating port information for a user-managed autostart job entry
This procedure identifies how to update the port information for a user-managed
autostart job entry that starts the Lakeview TCP server with the MIMIXSBS
subsystem. Autostart job entries for the server are user-managed when the transfer
definition specifies MNGAJE(*NO).
Important! Do not use this procedure for MIMIX-managed autostart job entries.
Perform this procedure from the local system, which is the system for which
information changed within the transfer definition. Do the following:
1. Identify the job name, job description, and library for the autostart job entry using
the procedure in Identifying the current autostart job entry information on
page 171. This information is needed in the following steps.
2. Remove the old autostart job entry by specifying the job name from Step 1 for
job_name in the following command:
RMVAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(job_name)
3. Remove the old job description by specifying the job description name and library
from Step 1 in the following command:
DLTJOBD JOBD(library/job_description)
4. Create a new job description for the autostart job entry using the following
command:
CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD)
TOLIB(installation_library) NEWOBJ(job_description_name)
where installation_library is the name of the library for the MIMIX
installation and where job_description_name follows the recommendation to
identify the port for the local system by specifying the port number in the format
PORTnnnnn or the port alias.
5. Type CHGJOBD and press F4 (Prompt). The Change Job Description display
appears. Do the following:
a. For the Job description and Library prompts, specify the job description and
library you created in Step 4.
b. Press F10 (Additional parameters).
c. Page Down to locate Request data or command (RQSDTA).
d. At the Request data or command prompt, specify the STRSVR command in
the following format:
'installation_library/STRSVR HOST(''local_host_name'')
PORT(nnnnn) JOBD(MIMIXQGPL/MIMIXCMN)'
Where the values to specify are:
installation_library is the name of the library for the MIMIX
installation
local_host_name is the host name or address from the transfer definition
for the local system
nnnnn is the new port information from the transfer definition for the local
system, specified as either the port number or the port alias.
e. Press Enter. The job description is changed.
6. Create a new autostart job entry using the following command:
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(autostart_job_name)
JOBD(installation_library/job_description_name)
Where installation_library/job_description_name specifies the job
description from Step 4 and autostart_job_name specifies the same port
information and format as specified for the job description name.
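As a worked example, assume the port changes from 50410 to 50411, the installation library is MIMIX, and the host name is CHICAGO (all values illustrative). The sequence in Step 2 through Step 6 would then be:
RMVAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT50410)
DLTJOBD JOBD(MIMIX/PORT50410)
CRTDUPOBJ OBJ(MIMIXCMN) FROMLIB(MIMIXQGPL) OBJTYPE(*JOBD)
TOLIB(MIMIX) NEWOBJ(PORT50411)
CHGJOBD JOBD(MIMIX/PORT50411)
RQSDTA('MIMIX/STRSVR HOST(''CHICAGO'') PORT(50411)
JOBD(MIMIXQGPL/MIMIXCMN)')
ADDAJE SBSD(MIMIXQGPL/MIMIXSBS) JOB(PORT50411)
JOBD(MIMIX/PORT50411)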
Verifying a communications link for system definitions
Do the following to verify that the communications link defined for each system
definition is operational:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, type a 1 (Work with system definitions) and
press Enter.
3. From the Work with System Definitions display, type an 11 (Verify
communications link) next to the system definition you want and press Enter. You
should see a message indicating the link has been verified.
Note: If the system manager is not active, this process only verifies that
communications to the remote system are successful. You will also see a
message in the job log indicating that the communications link failed after 1
request. This indicates that the remote system could not return
communications to the local system.
4. Repeat this procedure for all system definitions. If the communications link
defined for a system definition uses SNA protocol, do not check the link from the
local system.
Note: If your transfer definition uses the *TCP communications protocol, then
MIMIX uses the Verify Communications Link command to validate the
information that has been specified for the Relational database (RDB)
parameter. MIMIX also uses VFYCMNLNK to verify that the System 1 and
System 2 relational database names exist and are available on each
system.
Verifying the communications link for a data group
Before you synchronize data between systems, ensure that the communications link
for the data group is active. This procedure verifies the primary transfer definition
used by the data group. If your configuration requires multiple data groups, be sure to
check communications for each data group definition.
Do the following:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, type a 4 (Work with data group definitions)
and press Enter.
3. From the Work with Data Group Definitions display, type an 11 (Verify
communications link) next to the data group you want and press F4.
4. The Verify Communications Link display appears. Ensure that the values shown
for the prompts are what you want.
5. To start the check, press Enter.
6. You should see a message "VFYCMNLNK command completed successfully."
If your data group definition specifies a secondary transfer definition, use the following
procedure to check all communications links.
Verifying all communications links
The Verify Communications Link (VFYCMNLNK) command requires specific system
names to verify communications between systems. When the command is called from
option 11 on the Work with System Definitions display or option 11 on the Work with
Data Groups display, MIMIX identifies the specific system names.
For transfer definitions using TCP protocol: MIMIX uses the Verify
Communications Link (VFYCMNLNK) command to validate the values specified for
the Relational database (RDB) parameter. MIMIX also uses VFYCMNLNK to verify
that the System 1 and System 2 relational database names exist and are available on
each system.
When the command is called from option 11 on the Work with Transfer Definitions
display or when entered from a command line, you will receive an error message if
the transfer definition specifies the value *ANY for either system 1 or system 2.
1. From the Work with Transfer Definitions display, type an 11 (Verify
communications link) next to all transfer definitions and press Enter.
2. The Verify Communications Link display appears. If you are checking a Transfer
definition with the value of *ALL, you need to specify a value for the System 1 or
System 2 prompt. Ensure that the values shown for the prompts are what you
want and then press Enter.
You will see the Verify Communications Link display for each transfer definition
you selected.
3. You should see a message "VFYCMNLNK command completed successfully."
CHAPTER 9 Configuring journal definitions
By creating a journal definition you identify to MIMIX a journal environment that can
be used in the replication process. MIMIX uses the journal definition to manage the
journaling environment, including journal receiver management.
A journal definition does not automatically build the underlying journal environment
that it defines. If the journal environment does not exist, it must be built. This can be
done after the journal definition is created. Configuration checklists indicate when to
build the journal environment.
The topics in this chapter include:
Journal definitions created by other processes on page 179 describes the
security audit journal (QAUDJRN) and other journal definitions that are
automatically created by MIMIX.
Tips for journal definition parameters on page 180 provides tips for using the
more common options for journal definitions.
Journal definition considerations on page 184 provides things to consider when
creating journal definitions for remote journaling.
Journal receiver size for replicating large object data on page 191 provides
procedures to verify that a journal receiver is large enough to accommodate large
IFS stream files and files containing LOB data, and if necessary, to change the
receiver size options.
Creating a journal definition on page 192 provides the steps to follow for creating
a journal definition.
Changing a journal definition on page 194 provides the steps to follow for
changing a journal definition.
Building the journaling environment on page 195 describes the journaling
environment and provides the steps to follow for building it.
Changing the journaling environment to use *MAXOPT3 on page 196 describes
considerations and provides procedures for changing the journaling environment
to use the *MAXOPT3 receiver size option.
Changing the remote journal environment on page 200 provides steps to follow
when changing an existing remote journal configuration. The procedure is
appropriate for changing a journal receiver library for the target journal in a remote
journaling environment or for any other changes that affect the target journal.
Adding a remote journal link on page 202 describes how to create a MIMIX RJ
link, which will in turn create a target journal definition with appropriate values to
support remote journaling. In most configurations, the RJ link is automatically
created for you when you follow the steps of the configuration checklists.
Changing a remote journal link on page 203 describes how to change an
existing RJ link.
Temporarily changing from RJ to MIMIX processing on page 204 describes how
to change a data group configured for remote journaling to temporarily use MIMIX
send processing.
Changing from remote journaling to MIMIX processing on page 205 describes
how to change a data group that uses remote journaling so that it uses MIMIX
send processing. Remote journaling is preferred.
Removing a remote journaling environment on page 206 describes how to
remove a remote journaling environment that you no longer need.
Journal definitions created by other processes
When you create system definitions, MIMIX automatically creates a journal definition
for the security audit journal (QAUDJRN) on that system. The QAUDJRN journal definition is used only
by MIMIX system journal replication processes. If you do not already have a
journaling environment for the security audit journal, it will be created when the first
data group that replicates from the system journal is started.
When you create a data group definition, MIMIX automatically creates a journal
definition if one does not already exist. Any journal definitions that are created in this
manner will be named with the value specified in the data group definition.
In an environment that uses MIMIX Remote Journal support, the process of creating a
data group definition creates a remote journal link which in turn creates the journal
definition for the target journal. The target journal definition is created using values
appropriate for remote journaling.
Any journal definitions created by another process can be changed if necessary.
Tips for journal definition parameters
This topic provides tips for using the more common options for journal definitions.
Context-sensitive help is available online for all options on the journal definition
commands.
Journal definition (JRNDFN) This parameter is a two-part name that identifies a
journaling environment on a system. The first part of the name identifies the journal
definition. When a journal definition for the security audit journal (system journal) is
automatically created as a result of creating a system definition, the first part of the
name is QAUDJRN. The second part of the name identifies a system definition which
represents the system on which you want the journal to reside.
Note: In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_). Journal definition names cannot be UPSMON
or begin with the characters MM. If the target journal definition is configured by
MIMIX for use with MIMIX RJ support, its name is the first eight characters
from the name of the source journal definition followed by the characters @R.
If a journal definition name is already in use, the name may include @S, @T,
@U, @V, or @W. There are additional specific naming conventions for journal
definitions that are used with remote journaling.
MIMIX uses the first six characters of the journal definition name to generate
the journal receiver prefix. MIMIX restricts the last character of the prefix from
being numeric. If the last character of a prefix resulting from the journal
definition name is numeric, it can become part of the receiver number and no
longer match the journal name.
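For example, under these rules a journal definition named ACCOUNTS yields the receiver prefix ACCOUN (the first six characters), while a definition named INV001 yields the prefix INV because the trailing numeric characters are removed. Both names are illustrative.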
Journal (JRN) This parameter specifies the qualified name of a journal to which
changes to files or objects to be replicated are journaled. For the journal name, the
default value *JRNDFN uses the name of the journal definition for the name of the
journal.
For the journal library, the default value *DFT allows MIMIX to determine the library
name based on the ASP in which the journal library is allocated, as specified in the
Journal library ASP parameter. If that parameter specifies *ASPDEV, MIMIX uses
#MXJRNIASP for the default journal library name; otherwise, the default library name
is #MXJRN.
Journal library ASP (JRNLIBASP) This parameter specifies the auxiliary storage
pool (ASP) from which the system allocates storage for the journal library. You can
use the default value *CRTDFT or you can specify the number of an ASP in the range
1 through 32.
The value *CRTDFT indicates that the command default value for the IBM i Create
Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP)
from which the system allocates storage for the library.
For libraries that are created in a user ASP, all objects in the library must be in the
same ASP as the library.
Journal receiver prefix (JRNRCVPFX) This parameter specifies the prefix to be
used in the name of journal receivers associated with the journal used in the
replication process and the library in which the journal receivers are located.
The prefix must be unique to the journal definition and cannot end in a numeric
character. The default value *GEN for the name prefix indicates that MIMIX will
generate a unique prefix, which usually is the first six characters of the journal
definition name with any trailing numeric characters removed. If that prefix is already
used in another journal definition, a unique six character prefix name is derived from
the definition name. If the journal definition will be used in a configuration which
broadcasts data to multiple systems, there are additional considerations. See Journal
definition considerations on page 184.
The value *DFT for the journal receiver library allows MIMIX to determine the library
name based on the ASP in which the journal receiver is allocated, as specified in the
Journal receiver library ASP parameter. If that parameter specifies *ASPDEV, MIMIX
uses #MXJRNIASP for the default journal receiver library name. Otherwise, the
default library name is #MXJRN. You can specify a different name or specify the value
*JRNLIB to use the same library that is used for the associated journal.
Journal receiver library ASP (RCVLIBASP) This parameter specifies the auxiliary
storage pool (ASP) from which the system allocates storage for the journal receiver
library. You can use the default value *CRTDFT or you can specify the number of an
ASP in the range 1 through 32.
The value *CRTDFT indicates that the command default value for the IBM i Create
Library (CRTLIB) command is used to determine the auxiliary storage pool (ASP)
from which the system allocates storage for the library.
For libraries that are created in a user ASP, all objects in the library must be in the
same ASP as the library.
Target journal state (TGTSTATE) This parameter specifies the requested status of
the target journal, and can be used with active journaling support or journal standby
state. Use the default value *ACTIVE to set the target journal state to active when the
data group associated with the journal definition is journaling on the target system
(JRNTGT(*YES)). Use the value *STANDBY to journal objects on the target system
while preventing most journal entries from being deposited into the target journal.
Note: Journal standby state and journal caching require that the IBM feature for High
Availability Journal Performance be installed. For more information, see
Configuring for high availability journal performance enhancements on
page 321.
Journal caching (JRNCACHE) This parameter specifies whether the system should
cache journal entries in main storage before writing them to disk. Use the
recommended default value *BOTH to perform journal caching on both the source
and the target systems. You can also specify values *SRC, *TGT, or *NONE.
Receiver change management (CHGMGT, THRESHOLD, TIME, RESETTHLD2 or
RESETTHLD) Several parameters control how journal receivers associated with the
replication process are changed.
The Receiver change management (CHGMGT) parameter controls whether MIMIX
performs change management operations for the journal receivers used in the
replication process. The shipped default value of *TIMESIZE results in MIMIX
changing journal receivers by both threshold size and time of day.
The following parameters specify conditions that must be met before change
management can occur.
Receiver threshold size (MB) (THRESHOLD) You can specify the size, in
megabytes, of the journal receiver at which it is changed. The default value is
6600 MB. This value is used when MIMIX or the system changes the receivers.
If you decrease the Receiver threshold size, you must manually change your
journal receiver to reflect this change.
If you change the journal receiver threshold size in the journal definition, the
change is effective with the next receiver change.
Time of day to change receiver (TIME) You can specify the time of day at which
MIMIX changes the journal receiver. The time is based on a 24 hour clock and
must be specified in HHMMSS format.
Reset large sequence threshold (RESETTHLD2) You can specify the sequence
number (in millions) at which to reset the receiver sequence number. When the
threshold is reached, the next receiver change resets the sequence number to 1.
Note: RESETTHLD2 accepts larger sequence number values than
RESETTHLD. You can specify a value for only one of these parameters.
RESETTHLD2 is recommended.
For information about how change management occurs in a remote journal
environment and about using other change management choices, see Journal
receiver management on page 36.
Receiver delete management (DLTMGT, KEEPUNSAV, KEEPRCVCNT,
KEEPJRNRCV) Four parameters control how MIMIX handles deleting the journal
receivers associated with the replication process.
The Receiver delete management (DLTMGT) parameter specifies whether or not
MIMIX performs delete management for the journal receivers. By default, MIMIX
performs the delete management operations. MIMIX operations can be adversely
affected if you allow the system or another process to handle delete management.
For example, if another process deletes a journal receiver before MIMIX is finished
with it, replication can be adversely affected.
All of the requirements that you specify in the following parameters must be met
before MIMIX deletes a journal receiver:
Keep unsaved journal receivers (KEEPUNSAV) You can specify whether or not to
have MIMIX retain any unsaved journal receivers. Retaining unsaved receivers
allows you to back out (rollback) changes in the event that you need to recover
from a disaster. The default value *YES causes MIMIX to keep unsaved journal
receivers until they are saved.
Keep journal receiver count (KEEPRCVCNT) You can specify the number of
detached journal receivers to retain. For example, if you specify 2 and there are
10 journal receivers including the attached receiver (which is number 10), MIMIX
retains two detached receivers (8 and 9) and deletes receivers 1 through 7.
Keep journal receivers (days) (KEEPJRNRCV) You can specify the number of
days to retain detached journal receivers. For example, if you specify to keep the
journal receiver for 7 days and the journal receiver is eligible for deletion, it will be
deleted after 7 days have passed from the time of its creation. The exact time of
the deletion may vary. For example, the deletion may occur within a few hours
after the 7 days have passed.
For more information, see Journal receiver management on page 36.
Journal receiver ASP (JRNRCVASP) This parameter specifies the auxiliary storage
pool (ASP) from which the system allocates storage for the journal receivers. The
default value *LIBASP indicates that the storage space for the journal receivers is
allocated from the same ASP that is used for the journal receiver library.
Threshold message queue (MSGQ) This parameter specifies the qualified name of
the threshold message queue to which the system sends journal-related messages
such as threshold messages. The default value *JRNDFN for the queue name
indicates that the message queue uses the same name as the journal definition. The
value *JRNLIB for the library name indicates that the message queue uses the library
for the associated journal.
Exit program (EXITPGM) This parameter allows you to specify the qualified name of
an exit program to use when journal receiver management is performed by MIMIX.
The exit program will be called when a journal receiver is changed or deleted by the
MIMIX journal manager. For example, you might want to use an exit program to save
journal receivers as soon as MIMIX finishes with them so that they can be removed
from the system immediately.
Receiver size option (RCVSIZOPT) This parameter specifies what option to use for
determining the maximum size of sequence numbers in journal entries written to the
attached journal receiver. Changing this value requires that you change to a new
journal receiver. In order for a change to take effect the journaling environment must
be built. When the value *MAXOPT3 is used, the journal receivers cannot be saved
and restored to systems with operating system releases earlier than V5R3M0.
To support a switchable data group, a change to this parameter requires more than
one journal definition to be changed. For additional information, see Changing the
journaling environment to use *MAXOPT3 on page 196.
Minimize entry specific data (MINENTDTA) This parameter specifies which object
types allow journal entries to have minimized entry-specific data. For additional
information about improving journaling performance with this capability, see
Minimized journal entry data on page 318.
Reset sequence threshold (RESETTHLD) You can specify the sequence number
(in millions) at which to reset the receiver sequence number. When the threshold is
reached, the next receiver change resets the sequence number to 1. You can specify
a value for this parameter or for the RESETTHLD2 parameter, but not both.
RESETTHLD2 is recommended.
Journal definition considerations
Consider the following as you create journal definitions for remote journaling:
The source journal definition identifies the local journal and the system on
which the local journal exists. Similarly, the target journal definition identifies
the remote journal and the system on which the remote journal exists.
Therefore, the source journal definition identifies the source system of the
remote journal process and the target journal definition identifies the target
system of the remote journal process.
You can use an existing journal definition as the source journal definition to
identify the local journal. However, using an existing journal definition for the
target journal definition is not recommended. The existing definition is likely to
be used for journaling and therefore is not appropriate as the target journal
definition for a remote journal link.
MIMIX recognizes the receiver change management parameters (CHGMGT,
THRESHOLD, TIME, RESETTHLD2 or RESETTHLD) specified in the source
journal definition and ignores those specified in the target journal definition.
When a new receiver is attached to the local journal, a new receiver with the
same name is automatically attached to the remote journal. The receiver prefix
specified in the target journal definition is ignored.
Each remote journal link defines a local-remote journal pair that functions in
only one direction. J ournal entries flow from the local journal to the remote
journal. The direction of a defined pair of journals cannot be switched. If you
want to use the RJ process in both directions for a switchable data group, you
need to create journal definitions for two remote journal links (four journal
definitions). For more information, see Example journal definitions for a
switchable data group on page 185.
After the journal environment is built for a target journal definition, MIMIX
cannot change the value of the target journal definition's Journal receiver prefix
(JRNRCVPFX) or Threshold message queue (MSGQ), and several other
values. To change these values, see the procedure in the IBM topic Library
Redirection with Remote Journals in the IBM eServer iSeries Information
Center.
If you are configuring MIMIX for a scenario in which you have one or more
target systems, there are additional considerations for the names of journal
receivers. Each source journal definition must specify a unique value for the
Journal receiver prefix (JRNRCVPFX) parameter. MIMIX ensures that the
same prefix is not used more than once on the same system but cannot
determine if the prefix is used on a target journal while it is being configured. If
the prefix defined by the source journal definition is reused by target journals
that reside in the same library and ASP, attempts to start the remote journals
will fail with message CPF699A (Unexpected journal receiver found).
When you create a target journal definition instead of having it generated using
the Add Remote Journal Link (ADDRJLNK) command, use the default value
*GEN for the prefix name for the JRNRCVPFX on a target journal definition.
The receiver name for source and target journals will be the same on the
systems but will not be the same in the journal definitions. In the target journal,
the prefix will be the same as that specified in the source journal definition.
Naming convention for remote journaling environments with 2 systems
If you allow MIMIX to generate the target journal definition when you create a remote
journal link, MIMIX implements the following naming conventions for the target journal
definition and for the objects in its associated journaling environment. If you specify
your own target journal definition, follow these same naming conventions to reduce
the potential for confusion and errors.
The two-part name of the target journal definition is generated as follows:
The Name is the first eight characters from the name of the source journal
definition followed by the characters @R when the journal definition is created for
MIMIX RJ support. If a journal definition name is already in use, the name may
instead include @S, @T, @U, @V, or @W.
Note: Journal definition names cannot be UPSMON or begin with the characters
MM.
The System is the value entered in the target journal definition system field.
For example, if the source journal definition name is MYJRN and you specified
TGTJRNDFN(*GEN CHICAGO), the target journal definition will be named
MYJRN@R CHICAGO.
The target journal definition will have the following characteristics and associated new
objects:
The Journal name will have the same name as the source journal.
The Journal library will use the first eight characters of the name of the source
journal library followed by the characters @R.
The Journal library ASP will be copied from the source journal definition.
The Journal receiver prefix will be copied from the source journal definition.
The Journal receiver library will use the first eight characters of the name of the
source journal receiver library followed by the characters @R.
The Message queue library will use the first eight characters of the name of the
source message queue library followed by the characters @R.
The value for the Receiver change management (CHGMGT) parameter will be
*NONE.
Example journal definitions for a switchable data group
To support a switchable data group in a remote journaling environment, you need to
have four journal definitions configured: two for the RJ link used for normal
production-to-backup operations, and two for the RJ link used for replication in the
opposite direction.
In this example, a switchable data group named PAYABLES is created between
systems CHICAGO and NEWYORK. System 1 (CHICAGO) is the data source. The
data group definition specifies *YES to Use remote journal link. Command defaults
create the data group using a generated short data group name and using the data
group name for the system 1 and system 2 journal definitions.
To create the RJ link and associated journal definitions for normal operations, option
10 (Add RJ link) on the Work with J ournal Definitions display is used on an existing
journal definition named PAYABLES CHICAGO (the first entry listed in Figure 13).
This is the source journal definition for normal operations. The process of adding the
link creates the target journal definition PAYABLES@R NEWYORK (the last entry
listed in Figure 13).
To create the RJ link and associated definitions for replication in the opposite
direction, a new source journal definition, PAYABLES NEWYORK, is created (the
second entry listed in Figure 13). Then that definition is used to create the second RJ
link, which in turn generates the target journal definition PAYABLES@R CHICAGO
(the third entry listed in Figure 13).
Figure 13. Example journal definitions for a switchable data group.

                        Work with Journal Definitions
                                                                 CHICAGO
 Type options, press Enter.
   1=Create  2=Change  3=Copy  4=Delete  5=Display  6=Print  7=Rename
   10=Add RJ link  12=Work with RJ links  14=Build
   17=Work with jrn attributes  24=Delete jrn environment

       ----Definition----    -------Journal-------    -Management-   RJ
 Opt   Name        System    Name       Library       Change  Delete Link
       PAYABLES    CHICAGO   PAYABLES   MIMIXJRN      *SYSTEM  *YES  *SRC
       PAYABLES    NEWYORK   PAYABLES   MIMIXJRN      *SYSTEM  *YES  *SRC
       PAYABLES@R  CHICAGO   PAYABLES   MIMIXJRN@R    *NONE    *YES  *TGT
       PAYABLES@R  NEWYORK   PAYABLES   MIMIXJRN@R    *NONE    *YES  *TGT

                                                                  Bottom
 F3=Exit  F4=Prompt  F5=Refresh  F6=Create
 F12=Cancel  F18=Subset  F21=Print list  F22=Work with RJ links
Identifying the correct journal definition on the Work with Journal Definitions display
can be confusing. Fortunately, the Work with RJ Links display (Figure 14) shows the
association between journal definitions much more clearly.
Figure 14. Example of RJ links for a switchable data group.

                             Work with RJ Links
                                                         System: CHICAGO
 Type options, press Enter.
   1=Add  2=Change  4=Remove  5=Display  6=Print  9=Start  10=End
   14=Build  15=Remove RJ connection  17=Work with jrn attributes
   24=Delete target jrn environment

       ---Source Jrn Def---    ---Target Jrn Def---
 Opt   Name      System        Name        System    Priority  Dlvry   State
       PAYABLES  CHICAGO       PAYABLES@R  NEWYORK   *SYSDFT   *ASYNC  *INACTIVE
       PAYABLES  NEWYORK       PAYABLES@R  CHICAGO   *SYSDFT   *ASYNC  *INACTIVE

                                                                  Bottom
 Parameters or command
 ===>
 F3=Exit  F4=Prompt  F5=Refresh  F6=Add  F9=Retrieve  F11=View 2
 F12=Cancel  F13=Repeat  F16=Jrn Definitions  F18=Subset  F21=Print list
Naming convention for multimanagement environments
The IBM i remote journal function requires unique names for the local journal receiver
and the remote receiver. In a MIMIX environment that uses multimanagement
functions (a MIMIX Global license key is required for these functions), more than one
system serves as the management system for MIMIX operations. In a
multimanagement environment, it is possible that each node that is a management
system is also both a source and a target for replication activity. The following
manually implemented naming convention ensures that journal receivers have unique
names.
Library name-mapping - In target journal definitions, specify journal library and
receiver library names that include a two-character identifier, nn, to represent the
node of the associated source (local journal). Place this identifier before the remote
journal indicator @R at the end of the name, like this: nn@R. Also include this
identifier at the end of the target journal definition name. This convention allows for
the use of the same local journal name for all data groups and places all journals and
receivers from the same source in the same library.
To ensure that journal receivers in a multimanagement environment have unique
names, the following is strongly recommended:
Limit the data group name to six characters. This will simplify keeping an
association between the data group name and the names of associated journal
definitions by allowing space for the source node identifier within those names.
Manually create journal definitions (CRTJRNDFN command) using the library
name-mapping convention. Journal definitions created when a data group is
created may not have unique names and will not create all the necessary target
journal definitions.
Once the appropriately named journal definitions are created for source and target
systems, manually create the remote journal links between them (ADDRJLNK
command).
Example journal definitions for three management nodes
The following figures illustrate the library-mapping naming convention for journal
definitions in a multimanagement environment with three nodes. In this example, all
three nodes are designated as management systems. The data group name is ABC.
When implementing the naming convention, it is helpful to consider one source node
at a time and create all the journal definitions necessary for replication from that
source. This technique is illustrated in the example.
Library-mapping example: In Figure 15, a three node environment is shown in three
separate graphics. Each graphic identifies one node as a replication source, with
arrows pointing to the possible target nodes and lists the journal definitions needed to
replicate from that source.
In each graphic, library name-mapping is evident in the names shown for the target
journal definitions and their journal and receiver libraries. For example, when SYS01
is the source, journal definition ABC SYS01 identifies the local journal on SYS01. The
source identifier 01 appears in the target journal definitions ABC01@R SYS02 and
ABC01@R SYS03 and in the library names defined within each.
Figure 15 also includes a list of all the journal definitions associated with all nodes
from this example as they would appear on the Work with Journal Definitions display.
Figure 15. Library-mapped journal definitions - three node environment. All nodes are management systems.
Figure 16 shows the RJ links needed for this example.
Figure 16. Library-mapped names as shown within the RJ links for a three node environment.
Journal receiver size for replicating large object data
For potentially large IFS stream files and files containing LOB data, it is important that
your journal receiver is large enough to accommodate the data. You may need to
change your journal receiver size options to accommodate the data.
For data groups that can be switched, the journal receivers on both the source and
target systems must be large enough to accommodate the data.
Verifying journal receiver size options
To display the current journal receiver size options for journals used by MIMIX, do the
following from the system where the source journal definition is located:
1. Enter the command installation-library/WRKJRNDFN.
2. Next to the journal definition for the system you are on, type a 17 (Work with
journal attributes).
3. View the Receiver size options field to see how the journal is configured. The
value should indicate support for large journal entries. The values *MAXOPT2 and
*MAXOPT3 support journal entries up to 4 GB.
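As a command-line alternative, the IBM i Work with Journal Attributes (WRKJRNA) command also shows the attributes of a journal, including its receiver size options (the journal and library names below are illustrative):
WRKJRNA JRN(#MXJRN/PAYABLES)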
Changing journal receiver size options
To change the journal receiver size, do the following:
1. From a command line, type CHGJRN (Change Journal) and press F4 to prompt.
2. At the Journal prompt, enter the journal and library names for the journal you wish
to change.
3. At the Receiver size option prompt, specify a value that indicates support for large
journal entries, such as *MAXOPT2 or *MAXOPT3.
Note: Make sure the journal receiver size options on the other systems in your environment are compatible.
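For example, the equivalent command form, using illustrative journal and library names, would be:
CHGJRN JRN(#MXJRN/PAYABLES) JRNRCV(*GEN) RCVSIZOPT(*MAXOPT3)
Specifying JRNRCV(*GEN) attaches a new journal receiver, which is required because a receiver size option takes effect only when a new receiver is attached.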
Creating a journal definition
Do the following to create a journal definition:
1. From the MIMIX Basic Main Menu, type an 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu select option 3 (Work with journal definitions)
and press Enter.
3. The Work with Journal Definitions display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
4. The Create Journal Definition display appears. At the Journal definition prompts,
specify a two-part name.
Note: Journal definition names cannot be UPSMON or begin with the characters
MM.
5. Verify that the following prompts contain the values that you want. If you have not
journaled before, the default values are appropriate. If you need to identify an
existing journaling environment to MIMIX, specify the information you need.
Journal
Library
Journal library ASP
Journal receiver prefix
Library
Journal receiver library ASP
Important! The IBM feature for High Availability Journal Performance is required for
journal standby state in Step 6 and journal caching in Step 7. For more information,
see Configuring for high availability journal performance enhancements on
page 321.
6. At the Target journal state prompt, specify the requested status of the target
journal. The default value is *ACTIVE. This value can be used with active
journaling support or journal standby state.
7. At the Journal caching prompt, specify whether the system should cache journal
entries in main storage before writing them to disk. The recommended default
value is *BOTH.
8. Set the values you need to manage changing journal receivers, as follows:
a. At the Receiver change management prompt, specify the value you want. The
default values are recommended. For more information about valid
combinations of values, press F1 (Help).
b. Press Enter.
c. One or more additional prompts related to receiver change management
appear on the display. Verify that the values shown are what you want and, if
necessary, change the values.
Receiver threshold size (MB)
Time of day to change receiver
Reset large sequence threshold
d. Press Enter.
9. Set the values you need to manage deleting journal receivers, as follows:
a. It is recommended that you accept the default value *YES for the Receiver
delete management prompt to allow MIMIX to perform delete management.
b. Press Enter.
c. One or more additional prompts related to receiver delete management appear
on the display. If necessary, change the values.
Keep unsaved journal receivers
Keep journal receiver count
Keep journal receivers (days)
10. At the Description prompt, type a brief text description of the journal definition.
11. This step is optional. If you want to access additional parameters that are
considered advanced functions, press F10 (Additional parameters). Make any
changes you need to the additional prompts that appear on the display.
12. To create the journal definition, press Enter.
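If you prefer the command line, a minimal sketch of an equivalent request follows. The definition name, system, and description are illustrative; the parameter keywords are those documented in Tips for journal definition parameters on page 180, the TEXT keyword for the description is an assumption, and default values are assumed for anything not shown:
CRTJRNDFN JRNDFN(PAYABLES CHICAGO) CHGMGT(*TIMESIZE)
THRESHOLD(6600) DLTMGT(*YES) TEXT('Journal definition for PAYABLES')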
Changing a journal definition
To change a journal definition, do the following:
1. Access the Work with Journal Definitions display according to your configuration
needs:
In a clustering environment, from the MIMIX Cluster Menu select option 20
(Work with system definitions) and press Enter. When the Work with System
Definitions display appears, type 12 (Journal Definitions) next to the system
name you want and press Enter.
In a standard MIMIX environment, from the MIMIX Configuration Menu select
option 3 (Work with journal definitions) and press Enter.
2. The Work with Journal Definitions display appears. Type 2 (Change) next to the
definition you want and press Enter.
3. The Change Journal Definition (CHGJRNDFN) display appears. Press Enter twice
to see all prompts for the display.
4. Make any changes you need to the prompts. Press F1 (Help) for more information
about the values for each parameter.
5. If you need to access advanced functions, press F10 (Additional parameters).
When the additional parameters appear on the display, make the changes you
need.
6. To accept the changes, press Enter.
Note: Changes to the Receiver threshold size (MB) (THRESHOLD) are effective
with the next receiver change. Before a change to any other parameter is
effective, you must rebuild the journal environment. Rebuilding the journal
environment ensures that it matches the journal definition and prevents
problems starting the data group.
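For example, to lower the receiver threshold on an existing definition (the names and size are illustrative), the command form would be:
CHGJRNDFN JRNDFN(PAYABLES CHICAGO) THRESHOLD(3300)
As the note above indicates, this particular change takes effect with the next receiver change; changes to other parameters require rebuilding the journal environment.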
Building the journaling environment
Before replication for a data group can occur, the journal environment for all journal
definitions used by that data group must be created on each system. A journaling
environment includes the following objects: library, journal, journal receiver, and
threshold message queue on the system specified in the journal definition. The Build
Journal Environment (BLDJRNENV) command is used to build the journal
environment objects for a journal definition. When the BLDJRNENV command is run,
if the objects do not exist, they are created based on what is specified in the journal
definition. If the journal exists, the Source for values (JRNVAL) parameter of the
BLDJRNENV command is used to determine the source for the values of these
objects. The journal receiver prefix and library, message queue and library, and
threshold parameters are updated from the source specified in the JRNVAL
parameter.
Specifying *JRNENV for the JRNVAL parameter changes the values of the objects in
the journal definition to match the values in the existing journal environment objects.
Specifying *JRNDFN for the JRNVAL parameter changes the values of the journal
environment objects to match the values of the objects in the journal definition. In a
remote journal environment, the values specified in the journal definition (*JRNDFN)
are only applicable to the source journal.
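For example, to make the journal environment objects match a definition you have just changed, the request might look like the following; the definition name is illustrative and the JRNDFN keyword is an assumption:
BLDJRNENV JRNDFN(PAYABLES CHICAGO) JRNVAL(*JRNDFN)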
If the data group definition specifies to journal on the target system, the journal
environment must be built on each system that will be a target system for replication
of that data group. If you do not build either source or target journal environments, the
first time the data group starts MIMIX will automatically build the journal environments
for you.
Note: When building a journal environment, ensure that the journal receiver prefix in
the specified library is not already in use. If it is, you must change the prefix to
an unused value.
For switchable data groups not specified to journal on the target system, it is
recommended to build the source journaling environments for both directions of
replication so the environments exist for data group replication after switching.
All previous steps in your configuration checklist must be complete before you use
this procedure.
To build the journaling environment, do the following:
Note: If you are journaling on the target system, perform this procedure for both
the source and target systems.
1. From the MIMIX Main Menu, select 11 (Configuration menu) and press Enter.
2. From the MIMIX Configuration Menu, select one of the following and press Enter:
Select 8 (Work with remote journal links) to build the journaling environments
for remote journaling.
Select 3 (Work with journal definitions) to build all other journaling
environments.
3. From the Work with display, type 14 (Build) next to the journal definition you want
to build and press Enter.
Option 14 calls the Build Journal Environment (BLDJRNENV) command. For
environments using remote journaling, the command is called twice (first for the
source journal definition and then for the target journal definition). A status
message is issued indicating that the journal environment was created for each
system.
4. If you plan to journal access paths, you need to change the value of the receiver
size options. To do this, do the following:
a. Type the command CHGJRN and press F4 (Prompt).
b. For the JRN parameter, specify the name of the journal from the journal
definition.
c. Specify *GEN for the JRNRCV parameter.
d. Specify *NONE for the RCVSIZOPT parameter.
e. Press Enter.
5. To verify that the source journals have been created for a data group, do the
following from each system in the data group:
a. Enter the command WRKDGDFN.
b. From the Work with DG Definitions display, type 12 (Journal definitions) next
to the data group and press Enter.
c. The Work with Journal Definitions display is subsetted to the journal definitions
for the data group. Type 17 (Work with jrn attributes) next to the definition that
is the source for the local system.
Changing the journaling environment to use *MAXOPT3
This procedure changes journal definitions and builds the journaling environments
necessary in order to use a journal with a receiver size option of *MAXOPT3.
Before you use this procedure, consider the following:
Determine which journal definitions must be changed. Table 27 identifies
requirements according to the data group configuration.
Switchable data groups require that journal definitions be changed for both source
and target journals.
A journal definition that is changed to use *MAXOPT3 support affects all data
groups which use the journal definition.
When a journal definition for the system journal (QAUDJRN) is changed to use
*MAXOPT3 support, any additional MIMIX installations on the same system must
also use *MAXOPT3 support for the system journal. Doing so prevents sequence
numbers from being reset unexpectedly. The additional MIMIX installations must
be running version 6 or higher software and must have their journal definitions for
the system journal changed to use *MAXOPT3 support.
The default value for the journal sequence reset threshold changes when using
*MAXOPT3. If your sequence numbers will exceed 10 digits, updates must be
made to use the MIMIX command and outfile fields that support sequence
numbers with more than 10 digits. Updates should be made to any automation
that uses journal sequence numbers with MIMIX and any journal receiver
management exit programs or monitors with an event class (EVTCLS) of *J RN.
When the value *MAXOPT3 is used, the journal receivers cannot be saved and
restored to systems with operating system releases earlier than V5R3M0.
Do the following:
1. For data groups which use the journal definitions that will be changed, do the
following:
a. If commitment control is used, ensure that there are no open commit cycles.
b. End replication in a controlled manner using topic Ending a data group in a
controlled manner in the MIMIX Operations book. Procedures within this topic
will direct how to:
Prepare for a controlled end of a data group
Perform the controlled end - When ending, specify *ALL for the Process
prompt and *CNTRLD for the End process prompt.
Confirm the end request completed without problems - This includes how to
check for and resolve any open commits.
Note: Resolve any open commits before continuing.
Table 27. Journal definitions to change when converting to *MAXOPT3
User journal with remote journaling (switchable):
  Journal definition for normal source system (local)
  Journal definition for normal target system (remote, @R)
  Journal definition for switched source system (local)
  Journal definition for switched target system (remote, @R)
User journal with remote journaling (not switchable):
  Journal definition for source system (local)
  Journal definition for target system (remote, @R)
User journal with MIMIX source-send processing (switchable):
  Journal definition for source system
  Journal definition for target system
User journal with MIMIX source-send processing (not switchable):
  Journal definition for source system
System journal (QAUDJRN) (switchable):
  QAUDJRN journal definition for source system
  QAUDJRN journal definition for target system
System journal (QAUDJRN) (not switchable):
  QAUDJRN journal definition for source system
2. From the management system, select option 11 (Configuration menu) on the
MIMIX Main Menu. Then select option 3 (Work with journal definitions) to access
the Work with Journal Definitions display.
3. From the Work with Journal Definitions display, do the following to a journal
definition:
a. Type option 2 (Change) next to a journal definition and press Enter.
b. Optionally, specify a value for the Reset large sequence threshold prompt. If no
new value is specified, MIMIX will automatically use the default value
associated with the value you specify for the receiver size option in Step 3d.
c. Press F10 (Additional parameters).
d. At the Receiver size option prompt, specify *MAXOPT3.
e. Press Enter.
f. Repeat Step 3 for each of the journal definitions you need to change, as
indicated in Table 27. After all the necessary journal definitions are changed,
continue with the next step.
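Assuming option 2 prompts the Change Journal Definition (CHGJRNDFN) command, the same change might be made from a command line as in this sketch, where the journal definition name APPJRN and system SYSTEMA are illustrative and the reset threshold is left to take its default:

    CHGJRNDFN JRNDFN(APPJRN SYSTEMA) RCVSIZOPT(*MAXOPT3)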
4. From the Work with Journal Definitions display, type a 14 (Build) next to the
journal definitions you changed and press Enter.
Note: For remote journaling environments, only perform this step for a source
journal definition. Building the environment for the source journal will
automatically result in the building of the environment for the associated
target journal definition.
5. Verify that the changed journal definitions have appropriate values. Do the
following:
a. From the Work with Journal Definitions display, type a 5 (Display) next to each
changed journal definition and press Enter.
b. Verify that *MAXOPT3 is specified for the Receiver size option.
c. Verify that the Reset large sequence threshold prompt contains the value you
specified for Step 3b. If you did not specify a value, the value should be
between 9901 and 18446640000000.
6. Verify that the journals have been changed and now have appropriate values. Do
the following:
a. From the appropriate system (source or target), access the Work with Journal
Definitions display. Then do the following:
From the source system, type 17 (Work with jrn attributes) next to a changed
source journal definition and press Enter.
From the target system, type 17 (Work with jrn attributes) next to a changed
target journal definition and press Enter.
b. Verify that *MAXOPT3 is specified as one of the values for the Receiver size
options field.
7. Update any automation programs. Any programs that include journal sequence
numbers must be changed to use the Reset large sequence threshold
(RESETTHLD2) and the Receiver size option (RCVSIZOPT) parameters.
8. Start the data groups using default values. Refer to topic Starting selected data
group processes in the MIMIX Operations book.
Changing the remote journal environment
Use the following checklist to guide you through the process of changing an existing
remote journal configuration. For example, this procedure is appropriate for changing
a journal receiver library for the target journal in a remote journaling (RJ) environment
or for any other changes that affect the target journal. These steps can be used for
synchronous or asynchronous remote journals.
Important! Changing the RJ environment must be done in the correct sequence.
Failure to follow the proper sequence can introduce errors in replication and journal
management.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. Verify that no other data groups use the RJ link using topic Identifying data
groups that use an RJ link on page 283.
2. Use topic Ending a data group in a controlled manner in the MIMIX Operations
book to prepare for and perform a controlled end of the data group and end the RJ
link. Specify the following on the ENDDG command:
*ALL for the Process prompt
*CNTRLD for the End process prompt
*YES for the End remote journaling prompt.
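On a command line, this end request might resemble the following sketch for an illustrative data group named SUPERAPP MEXICITY CHICAGO. DGDFN is documented; the remaining keyword names are assumptions inferred from the prompt text:

    ENDDG DGDFN(SUPERAPP MEXICITY CHICAGO) PRC(*ALL) ENDOPT(*CNTRLD) ENDRJLNK(*YES)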
3. Verify that the remote journal link is not in use on both systems. Use topic
Displaying status of a remote journal link in the MIMIX Operations book. The
remote journal link should have a state value of *INACTIVE before you continue.
4. Remove the connection to the remote journal as follows:
a. Access the journal definitions for the data group whose environment you want
to change. From the Work with Data Groups display, type a 45 (Journal
definitions) next to the data group that you want and press Enter.
b. Type a 12 (Work with RJ links) next to either journal definition you want and
press Enter. You can select either the source or target journal definition.
Note: The target journal definition will end with @R.
c. From the Work with RJ Links display, choose the link based on the name in the
Target Jrn Def column. Type a 15 (Remove RJ connection) next to the link with
the target journal definition you want and press Enter.
d. A confirmation display appears. To continue removing the connections for the
selected links, press Enter.
5. From the Work with RJ Links display, do the following to delete the target system
objects associated with the RJ link:
Note: The target journal definition will end with @R.
a. Type a 24 (Delete target jrn environment) next to the link that you want and
press Enter.
b. A confirmation display appears. To continue deleting the journal, its associated
message queue, and the journal receiver, press Enter.
6. Make the changes you need for the target journal.
For example, to change the target (remote) journal definition to a new receiver
library, do the following:
a. Press F12 to return to the Work with Journal Definitions display.
b. Type option 2 (Change) next to the journal definition for the target system you
want and press Enter.
7. From the Work with Journal Definitions display, type a 14 (Build) next to the target
journal definition and press Enter.
Note: The target journal definition will end with @R.
8. Return to the Work with Data Groups display. Then do the following:
a. Type an 8 (Display status) next to the data group you want and press Enter.
b. Locate the name of the receiver in the Last Read field for the Database
process.
9. Do the following to start the RJ link:
a. From the Work with Data Groups display, type a 44 (RJ links) next to the data
group you want and press Enter.
b. Locate the link you want based on the name in the Target Jrn Def column.
Type a 9 (Start) next to the link with the target journal definition and press F4
(Prompt).
c. The Start Remote Journal Link (STRRJLNK) display appears. Specify the
receiver name from Step 8b as the value for the Starting journal receiver (STRRCV)
and press Enter.
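For example, if the receiver noted in Step 8b were RCV000123 (an illustrative name), the unprompted start request might resemble the following sketch. STRRCV is documented above; the keyword used here to identify the link is an assumption:

    STRRJLNK JRNDFN(APPJRN SYSTEMA) STRRCV(RCV000123)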
10. Start the data group using default values. Refer to topic Starting selected data
group processes in the MIMIX Operations book.
Adding a remote journal link
This procedure requires that a source journal definition exists. The process of creating
an RJ link will create the target journal definition with appropriate values for remote
journaling.
Before you create the RJ link you should be familiar with the Journal definition
considerations on page 184.
To create a link between journal definitions, do the following:
1. From the MIMIX Configuration menu, select option 3 (Work with journal
definitions) and press Enter.
2. The Work with Journal Definitions display appears. Type a 10 (Add RJ link) next
to the journal definition you want and press Enter.
3. The Add Remote Journal Link (ADDRJLNK) display appears. The journal
definition you selected in the previous step appears in the prompts for the Source
journal definition. Verify that this is the definition you want as the source for RJ
processing.
4. At the Target journal definition prompts, specify *GEN as the Name and specify
the value you want for System.
Note: If you specify the name of a journal definition, the definition must exist and
you are responsible for ensuring that its values comply with the
recommended values. Refer to the related topic on considerations for
creating journal definitions for remote journaling for more information.
5. Verify that the values for the following prompts are what you want. If necessary,
change the values.
Delivery
Sending task priority
Primary transfer definition
Secondary transfer definition
If you are using an independent ASP in this configuration you also need to
identify the auxiliary storage pools (ASPs) from which the journal and journal
receiver used by the remote journal are allocated. Verify and change the
values for Journal library ASP, Journal library ASP device, Journal receiver
library ASP, and Journal receiver lib ASP dev as needed.
6. At the Description prompt, type a text description of the link, enclosed in
apostrophes.
7. To create the link between journal definitions, press Enter.
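As a sketch only, an unprompted ADDRJLNK request might resemble the following. The definition and system names are illustrative, PRIMARY is the shipped default transfer definition name, and the keyword names are assumptions inferred from the prompts on the display:

    ADDRJLNK SRCJRNDFN(APPJRN MEXICITY) TGTJRNDFN(*GEN CHICAGO) PRITFRDFN(PRIMARY) TEXT('RJ link for APPJRN')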
Changing a remote journal link
Changes to the delivery and sending task priority take effect only after the remote
journal link has been ended and restarted.
To change characteristics of the link between source and target journal definitions, do
the following:
1. Before you change a remote journal link, end activity for the link. The MIMIX
Operations book describes how to end only the RJ link.
Note: If you plan to change the primary transfer definition or secondary transfer
definition to a definition that uses a different RDB directory entry, you also
need to remove the existing connection between objects. Use topic
Removing a remote journaling environment on page 206 before
changing the remote journal link.
2. From the Work with RJ Links display, type a 2 (Change) next to the entry you want
and press Enter.
3. The Change Remote Journal Link (CHGRJLNK) display appears. Specify the
values you want for the following prompts:
Delivery
Sending task priority
Primary transfer definition
Secondary transfer definition
Description
4. When you are ready to accept the changes, press Enter.
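A command-line sketch of such a change follows, assuming the link is identified by its journal definition. The names are illustrative, and the keyword names, other than the CHGRJLNK command itself, are assumptions inferred from the prompts:

    CHGRJLNK JRNDFN(APPJRN MEXICITY) SECTFRDFN(BACKUP)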
5. To make the changes effective, do the following:
a. If you removed the RJ connection in Step 1, you need to use topic Building the
journaling environment on page 195.
b. Start the data group which uses the RJ link.
Temporarily changing from RJ to MIMIX processing
This procedure is appropriate for when you plan to continue using remote journaling
as your primary means of transporting data to the target system but, for some reason,
temporarily need to revert to MIMIX send processing.
Important! If the data group is configured for MIMIX Dynamic Apply, you must
complete the procedure in Checklist: Converting to legacy cooperative
processing on page 141 before you remove remote journaling.
For the data group you want to change, do the following:
1. Use the procedure Ending a data group in a controlled manner in the MIMIX
Operations book to prepare for and perform a controlled end of the data group
and end the RJ link. Specify the following on the ENDDG command:
*ALL for the Process prompt
*CNTRLD for the End process prompt
*YES for the End remote journaling prompt.
2. Verify that the process is ended. On the Work with Data Groups display, the data
group should change to show a red L in the Source DB column.
3. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *NO for the Use remote journal link prompt.
d. To accept the change, press Enter.
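From a command line, the same change might be made as follows for an illustrative data group; RJLNK is the keyword documented for the Use remote journal link prompt:

    CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) RJLNK(*NO)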
4. Use the procedure Starting selected data group processes in the MIMIX
Operations book, specifying *ALL for the Start Process prompt.
Changing from remote journaling to MIMIX processing
Use this procedure when you no longer want to use remote journaling for a data
group and want to permanently change the data group to use MIMIX send
processing.
Important! If the data group is configured for MIMIX Dynamic Apply, you must
complete the procedure in Checklist: Converting to legacy cooperative
processing on page 141 before you remove remote journaling.
Perform these tasks from the MIMIX management system unless these instructions
indicate otherwise.
1. Perform a controlled end for the data group that you want to change using topic
Ending a data group in a controlled manner in the MIMIX Operations book. On
the ENDDG command, specify the following:
*ALL for the Process prompt
*CNTRLD for the End process prompt
Note: Do not end the RJ link at this time. Step 2 verifies that the RJ link is not
in use by any other processes or data groups before ending and
removing the RJ environment.
2. Perform the procedure in topic Removing a remote journaling environment on
page 206.
3. Modify the data group definition as follows:
a. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
b. The Change Data Group Definition (CHGDGDFN) display appears. Press
Enter to see additional prompts.
c. Specify *NO for the Use remote journal link prompt.
d. To accept the change, press Enter.
4. Start data group replication using the procedure Starting selected data group
processes in the MIMIX Operations book and specify *ALL for the Start
processes prompt (PRC parameter).
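Assuming the procedure runs the Start Data Group (STRDG) command, the start request might resemble this sketch for an illustrative data group:

    STRDG DGDFN(SUPERAPP MEXICITY CHICAGO) PRC(*ALL)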
Removing a remote journaling environment
Use this procedure when you want to remove a remote journaling environment that
you no longer need. This procedure removes configuration elements and system
objects necessary for data group replication with remote journaling.
1. Verify that the remote journal link is not used by any data group. Use Identifying
data groups that use an RJ link on page 283.
If you identify a data group that uses the remote journal link, check with your
MIMIX administrator and determine how to proceed. Possible courses of action
are:
If the data group is being converted to use MIMIX send processing or if the
data group will no longer be used, perform a controlled end of the data group.
When the data group is ended, continue with Step 2 of this procedure.
If the data group needs to remain operable using remote journaling, do not
continue with this procedure.
2. End the remote journal link and verify that it has a state value of *INACTIVE
before you continue. Refer to topics Ending a remote journal link independently
and Checking status of a remote journal link in the MIMIX Operations book.
3. From the management system, do the following to remove the connection to the
remote journal:
a. Access the journal definitions for the data group whose environment you want
to change. From the Work with Data Groups display, type a 45 (Journal
definitions) next to the data group that you want and press Enter.
b. Type a 12 (Work with RJ links) next to either journal definition you want and
press Enter. You can select either the source or target journal definition.
c. From the Work with RJ Links display, type a 15 (Remove RJ connection) next
to the link that you want and press Enter.
Note: If more than one RJ link is available for the data group, ensure that you
choose the link you want.
d. A confirmation display appears. To continue removing the connections for the
selected links, press Enter.
4. From the Work with RJ Links display, do the following to delete the target system
objects associated with the RJ link:
Attention: Do not continue with this procedure if you identified a data group
that uses the remote journal link and the data group must continue to be
operational. This procedure removes configuration elements and system
objects necessary for replication with remote journaling.
a. Type a 24 (Delete target jrn environment) next to the link that you want and
press Enter.
b. A confirmation display appears. To continue deleting the journal, its associated
message queue, the journal receiver, and to remove the connection to the
source journal receiver, press Enter.
5. Delete the target journal definition using topic Deleting a Definition in the MIMIX
Operations book. When you delete the target journal definition, its link to the
source journal definition is removed.
6. Use option 4 (Delete) on the Work with Monitors display to delete the RJLNK
monitors which have the same name as the RJ link.
CHAPTER 10 Configuring data group definitions
By creating a data group definition, you identify to MIMIX the characteristics of how
replication occurs between two systems. You must have at least one data group
definition in order to perform replication.
In an Intra environment, a data group definition defines how replication occurs
between the two product libraries used by INTRA.
Once data group definitions exist for MIMIX, they can also be used by the MIMIX
Promoter product.
The topics in this chapter include:
Tips for data group parameters on page 209 provides tips for using the more
common options for data group definitions.
Creating a data group definition on page 221 provides the steps to follow for
creating a data group definition.
Changing a data group definition on page 225 provides the steps to follow for
changing a data group definition.
Fine-tuning backlog warning thresholds for a data group on page 225 describes
what to consider when adjusting the values at which the backlog warning
thresholds are triggered.
Tips for data group parameters
This topic provides tips for using the more common options for data group definitions.
Context-sensitive help is available online for all options on the data group definition
commands. Refer to Additional considerations for data groups on page 219 for more
information.
Shipped default values for the Create Data Group Definition (CRTDGDFN) command
result in data groups configured for MIMIX Dynamic Apply. For additional information
see Table 11 in Considerations for LF and PF files on page 96.
Data group names (DGDFN, DGSHORTNAM) These parameters identify the data
group.
The Data group definition (DGDFN) is a three-part name that uniquely identifies
a data group. The three-part name must be unique to a MIMIX installation. The
first part of the name identifies the data group. The second and third parts of the
name (System 1 and System 2) specify system definitions representing the
systems between which the files and objects associated with the data group are
replicated.
Notes:
In the first part of the name, the first character must be either A - Z, $, #, or @.
The remaining characters can be alphanumeric and can contain a $, #, @, a
period (.), or an underscore (_). Data group names cannot be UPSMON or
begin with the characters MM.
For Clustering environments only, MIMIX recommends using the value
*RCYDMN in System 1 and System 2 fields for Peer CRGs.
One of the system definitions specified must represent a management system.
Although you can specify the system definitions in any order, you may find it
helpful if you specify them in the order in which replication occurs during normal
operations. For many users normal replication occurs from a production system to
a backup system, where the backup system is defined as the management
system for MIMIX. For example, if you normally replicate data for an application
from a production system (MEXICITY) to a backup system (CHICAGO) and the
backup system is the management system for the MIMIX cluster, you might name
your data group SUPERAPP MEXICITY CHICAGO, as shown in the sketch that
follows these naming parameters.
The Short data group name (DGSHORTNAM) parameter indicates an
abbreviated name used as a prefix to identify jobs associated with a data group.
MIMIX will generate this prefix for you when the default *GEN is used. The short
name must be unique to the MIMIX cluster and cannot be changed after the data
group is created.
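For example, the SUPERAPP data group described above could be created with shipped default values using a request like the following sketch; the three-part name is illustrative and DGDFN is the keyword documented for it:

    CRTDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO)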
Data resource group entry (DTARSCGRP) This parameter identifies the data
resource group entry in which you want the data group to participate. The data
resource group entry provides the association to an application group. When the
specified value is a name or resolves to a name, operations to start, end, or switch are
typically performed at the level of the application group instead of the data group. The
default value, *DFT, will check for the existence of application groups in the
installation library to determine behavior. If there are application groups, the first part
of the three-part data group name is used for the name of the data resource group
entry. When application groups exist, the data resource group entry that is specified,
or to which *DFT resolves, must exist. If application groups do not exist, *DFT is the
same as *NONE and the data group will not be associated with a data resource group
entry. You can also specify the name of an existing data resource group entry.
Data source (DTASRC) This parameter indicates which of the systems in the data
group definition is used as the source of data for replication.
Allow to be switched (ALWSWT) This parameter determines whether the direction
in which data is replicated between systems can be switched. If you plan to use the
data group for high availability purposes, use the default value *YES. This allows you
to use one data group for replicating data in either direction between the two systems.
If you do not allow switching directions, you need to have a second data group with
similar attributes in which the roles of source and target are reversed in order to
support high availability.
Data group type (TYPE) The default value *ALL indicates that the data group can be
used by both user journal and system journal replication processes. This enables you
to use the same data group for all of the replicated data for an application. The value
*ALL is required for user journal replication of IFS objects, data areas, and data
queues. MIMIX Dynamic Apply also supports the value *DB. For additional
information, see Requirements and limitations of MIMIX Dynamic Apply on
page 101.
Note: In Clustering environments only, the data group value of *PEER is available.
This provides you with support for system values and other system attributes
that MIMIX currently does not support.
Transfer definitions (PRITFRDFN, SECTFRDFN) These parameters identify the
transfer definitions used to communicate between the systems defined by the data
group. The name you specify in these parameters must match the first part of a
transfer definition name. By default, MIMIX uses the name PRIMARY for a value of
the primary transfer definition (PRITFRDFN) parameter and for the first part of the
name of a transfer definition.
If you specify a secondary transfer definition (SECTFRDFN), it is used if the
communications path specified in the primary transfer definition is not available.
Once MIMIX starts using the secondary transfer definition, it continues to use it even
after the primary communication path becomes available again.
Reader wait time (seconds) (RDRWAIT) You can specify the maximum number of
seconds that the send process waits when there are no entries available to process.
Jobs go into a delay state when there are no entries to process. Jobs wait for the time
you specify even when new entries arrive in the journal. A value of 0 uses more
system resources.
Common database parameters (JRNTGT, JRNDFN1, JRNDFN2, ASPGRP1,
ASPGRP2, RJLNK, COOPJRN, NBRDBAPY, DBJRNPRC) These parameters apply
to data groups that can include database files or tracking entries. Data group types of
*ALL or *DB include database files. Data group types of *ALL may also include
tracking entries.
Journal on target (JRNTGT) The default value *YES enables journaling on the
target system, which allows you to switch the direction of a data group more
quickly. Replication of files with some types of referential constraint actions may
require a value of *YES. For more information, see Considerations for LF and PF
files on page 96.
If you specify *NO, you must ensure that, in the event of a switch to the direction
of replication, you manually start journaling on the target system before allowing
users to access the files. Otherwise, activity against those files may not be
properly recorded for replication.
System 1 journal definition (JRNDFN1) and System 2 journal definition
(JRNDFN2) parameters identify the user journal definitions associated with the
systems defined as System 1 and System 2, respectively, of the data group. The
value *DGDFN indicates that the journal definition has the same name as the data
group definition.
The DTASRC, ALWSWT, JRNTGT, JRNDFN1, and JRNDFN2 parameters interact
to automatically create as much of the journaling environment as possible. The
DTASRC parameter determines whether system 1 or system 2 is the source
system for the data group. When you create the data group definition, if the
journal definition for the source system does not exist, a journal definition is
created. If you specify to journal on the target system and the journal definition for
the target system does not exist, that journal definition is also created. The
names of journal definitions created in this way are taken from the values of the
JRNDFN1 and JRNDFN2 parameters according to which system is considered
the source system at the time they are created. You may need to build the
journaling environment for these journal definitions.
System 1 ASP group (ASPGRP1) and System 2 ASP group (ASPGRP2)
parameters identify the name of the primary auxiliary storage pool (ASP) device
within an ASP group on each system. The value *NONE allows replication from
libraries in the system ASP and basic user ASPs 2-32. Specify a value when you
want to replicate IFS objects from a user journal or when you want to replicate
objects from ASPs 33 or higher. For more information see Benefits of
independent ASPs on page 547.
Use remote journal link (RJLNK) This parameter identifies how journal entries
are moved to the target system. The default value, *YES, uses remote journaling
to transfer data to the target system. This value results in the automatic creation of
the journal definitions (CRTJRNDFN command) and the RJ link (ADDRJLNK
command), if needed. The RJ link defines the source and target journal definitions
and the connection between them. When ADDRJLNK is run during the creation of
a data group, the data group transfer definition names are used for the
ADDRJLNK transfer definition parameters.
MIMIX Dynamic Apply requires the value *YES. The value *NO is appropriate
when MIMIX source-send processes must be used.
Cooperative journal (COOPJRN) This parameter determines whether
cooperatively processed operations for journaled objects are performed primarily
by user (database) journal replication processes or system (audit) journal
replication processes. Cooperative processing through the user journal is
recommended and is called MIMIX Dynamic Apply. For data groups created on
version 5, the shipped default value *DFT resolves to *USRJRN (user journal)
when configuration requirements for MIMIX Dynamic Apply are met. If those
requirements are not met, *DFT resolves to *SYSJRN and cooperative processing
is performed through system journal replication processes.
Number of DB apply sessions (NBRDBAPY) You can specify the number of
apply sessions allowed to process the data for the data group.
DB journal entry processing (DBJRNPRC) This parameter allows you to
specify several criteria that MIMIX will use to filter user journal entries before they
reach the database apply (DBAPY) process. Each element of the parameter
identifies a criteria that can be set to either *SEND or *IGNORE.
The value *SEND causes the journal entries meeting the criteria to be processed
and sent to the database apply process. For data groups configured to use MIMIX
source-send processes, *SEND can minimize the amount of data that is sent over
a communications path. The value *IGNORE prevents the entries from being sent
to the database apply process. Certain database techniques, such as keyed
replication, may require that an element be set to a specific value.
The following available elements describe how journal entries are handled by the
database reader (DBRDR) or the database send (DBSND) processes.
Before images This criterion determines whether before-image journal entries
are filtered out before reaching the database apply process. If you use keyed
replication, the before-images are often required and you should specify
*SEND. *SEND is also required for the IBM RMVJRNCHG (Remove Journal
Change) command. See Additional considerations for data groups on
page 219 for more information.
For files not in data group This criterion determines whether journal entries for
files not defined to the data group are filtered out.
Generated by MIMIX activity This criterion determines whether journal entries
resulting from the MIMIX database apply process are filtered out.
Not used by MIMIX This criterion determines whether journal entries not used by
MIMIX are filtered out.
Additional parameters: Use F10 (Additional parameters) to access the following
parameters. These parameters are considered advanced configuration topics.
Remote journaling threshold (RJLNKTHLD) This parameter specifies the backlog
threshold criteria for the remote journal function. When the backlog reaches any of the
specified criteria, the threshold exceeded condition is indicated in the status of the
RJ link. The threshold can be specified as a time difference, a number of journal
entries, or both. When a time difference is specified, the value is the amount of time,
in minutes, between the timestamp of the last source journal entry and the timestamp of
the last remote journal entry. When a number of journal entries is specified, the value
is the number of journal entries that have not been sent from the local journal to the
remote journal. If *NONE is specified for a criterion, that criterion is not considered
when determining whether the backlog has reached the threshold.
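As an illustrative sketch, a threshold of 10 minutes or 100000 unsent journal entries might be set as follows; the data group name is illustrative and the order of the elements within RJLNKTHLD is an assumption:

    CHGDGDFN DGDFN(SUPERAPP MEXICITY CHICAGO) RJLNKTHLD(10 100000)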
Synchronization check interval (SYNCCHKITV) This parameter, which is only valid
for database processing, allows you to specify how many before-image entries to
process between synchronization checks. For MIMIX to use this feature, the journal
image file entry option (FEOPT parameter) must allow before-image journaling
(*BOTH). When you specify a value for the interval, a synchronization check entry is
sent to the apply process on the target system. The apply process compares the
before-image to the image in the file (the entire record, byte for byte). If there is a
synchronization problem, MIMIX puts the data group file entry on hold and stops
applying journal entries. The synchronization check transactions still occur even if
you specify to ignore before-images in the DB journal entry processing (DBJRNPRC)
parameter.
Time stamp interval (TSPITV) This parameter, which is only valid for database
processing, allows you to specify the number of entries to process before MIMIX
creates a time stamp entry. Time stamps are used to evaluate performance.
Note: The TSPITV parameter does not apply for remote journaling (RJ) data groups.
Verify interval (VFYITV) This parameter allows you to specify the number of journal
transactions (entries) to process before MIMIX performs additional processing.
When the value specified is reached, MIMIX verifies that the communications path
between the source system and the target system is still active and that the send and
receive processes are successfully processing transactions. A higher value uses less
system resources. A lower value provides more timely reaction to error conditions.
Larger, high-volume systems should have higher values. This value also affects how
often the status is updated with the "Last read" entries. A lower value results in more
accurate status information.
Data area polling interval (DTAARAITV) This parameter specifies the number of
seconds that the data area poller waits between checks for changes to data areas.
The poller process is only used when configured data group data area entries exist.
The preferred methods of replicating data areas require that data group object entries
be used to identify data areas. When object entries identify data areas, the value
specified in them for cooperative processing (COOPDB) determines whether the data
areas are processed through the user journal with advanced journaling, or through
the system journal.
Journal at creation (JRNATCRT) This parameter specifies whether to start
journaling on new objects of type *FILE, *DTAARA, and *DTAQ when they are
created. The decision to start journaling for a new object is based on whether the data
group is configured to cooperatively process any object of that type in a library. All
new objects of the same type are journaled, including those not replicated by the data
group.
If multiple data groups include the same library in their configurations, only allow one
data group to use journal at object creation (*YES or *DFT). The default for this
parameter is *DFT which allows MIMIX to determine the objects to journal at creation.
Note: There are some IBM library restrictions identified within the requirements for
implicit starting of journaling described in What objects need to be journaled
on page 302. For additional information, see Processing of newly created
files and objects on page 114.
Parameters for automatic retry processing: MIMIX may use delay retry cycles
when performing system journal replication to automatically retry processing an object
that failed due to a locking condition or an in-use condition. It is normal for some
pending activity entries to undergo delay retry processing, for example, when a
conflict occurs between replicated objects in MIMIX and another job on the system.
The following parameters define the scope of two retry cycles:
Number of times to retry (RTYNBR) This parameter specifies the number of
attempts to make during a delay retry cycle.
First retry delay interval (RTYDLYITV1) This parameter specifies the amount of
time, in seconds, to wait before retrying a process in the first (short) delay retry
cycle.
Second retry delay interval (RTYDLYITV2) specifies the amount of time, in
seconds, to wait before retrying a process in the second (long) delay retry cycle.
This is only used after all the retries for the RTYDLYITV1 parameter have been
attempted.
After the initial failed save attempt, MIMIX delays for the number of seconds specified
for the First retry delay interval (RTYDLYITV1) before retrying the save operation.
This is repeated for the specified number of times (RTYNBR).
If the object cannot be saved after all attempts in the first cycle, MIMIX enters the
second retry cycle. In the second retry cycle, MIMIX uses the number of seconds
specified in the Second retry delay interval (RTYDLYITV2) parameter and repeats the
save attempt for the specified number of times (RTYNBR).
If the object identified by the entry is in use (*INUSE) after the first and second retry
cycle attempts have been exhausted, a third retry cycle is attempted if the Automatic
object recovery policy is enabled. The values in effect for the Number of third
delay/retries policy and the Third retry interval (min.) policy determine the scope of the
third retry cycle. After all attempts have been performed, if the object still cannot be
processed because of contention with other jobs, the status of the entry will be
changed to *FAILED.
File and tracking entry options (FEOPT) This parameter specifies default options
that determine how MIMIX handles file entries and tracking entries for the data group.
All database file entries, object tracking entries, and IFS tracking entries defined to
the data group use these options unless they are explicitly overridden by values
specified in data group file or object entries. File entry options in data group object
entries enable you to set values for files and tracking entries that are cooperatively
processed.
The options are as follows:
Journal image This option allows you to control the kinds of record images that
are written to the journal when data updates are made to database file records,
IFS stream files, data areas or data queues. The default value *AFTER causes
only after-images to be written to the journal. The value *BOTH causes both
before-images and after-images to be written to the journal. Some database
techniques, such as keyed replication, may require the use of both before-image
and after-images. *BOTH is also required for the IBM RMVJRNCHG (Remove
Journal Change) command. See Additional considerations for data groups on
page 219 for more information.
Omit open/close entries This option allows you to specify whether open and close
entries are omitted from the journal. The default value *YES indicates that open
and close operations on file members or IFS tracking entries defined to the data
group do not create open and close journal entries and are therefore omitted from
the journal. If you specify *NO, journal entries are created for open and close
operations and are placed in the journal.
Replication type This option allows you to specify the type of replication to use for
database files defined to the data group. The default value *POSITION indicates
that each file is replicated based on the position of the record within the file.
Positional replication uses the values of the relative record number (RRN) found
in the journal entry header to locate a database record that is being updated or
deleted. MIMIX Dynamic Apply requires the value *POSITION.
The value *KEYED indicates that each file is replicated based on the value of the
primary key defined to the database file. The value of the key is used to locate a
database record that is being deleted or updated. MIMIX strongly recommends
that any file configured for keyed replication also be enabled for both before-
image and after-image journaling. Files defined using keyed replication must have
at least one unique access path defined. For additional information, see Keyed
replication on page 334.
Lock member during apply This option allows you to choose whether you want the
database apply process to lock file members when they are being updated during
the apply process. This prevents inadvertent updates on the target system that
can cause synchronization errors. Members are locked only when the apply
process is active.
Apply session With this option, you can assign a specific apply session for
processing files defined to the data group. The default value *ANY indicates that
MIMIX determines which apply session to use and performs load balancing.
Notes:
Any changes made to the apply session option are not effective until the data
group is started with *YES specified for the clear pending and clear error
parameters.
For IFS and object tracking entries, only apply session A is valid. For additional
information see Database apply session balancing on page 80.
Collision resolution This option determines how data collisions are resolved. The
default value *HLDERR indicates that a file is put on hold if a collision is detected.
The value *AUTOSYNC indicates that MIMIX will attempt to automatically
synchronize the source and target file. You can also specify the name of the
collision resolution class (CRCLS) to use. A collision resolution class allows you to
specify how to handle a variety of collision types, including calling exit programs to
handle them. See the online help for the Create Collision Resolution Class
(CRTCRCLS) command for more information.
Note: The *AUTOSYNC value should not be used if the Automatic database
recovery policy is enabled.
Disable triggers during apply This option determines if MIMIX should disable any
triggers on physical files during the database apply process. The default value
*YES indicates that triggers should be disabled by the database apply process
while the file is opened.
Process trigger entries This option determines if MIMIX should process any
journal entries that are generated by triggers. The default value *YES indicates
that journal entries generated by triggers should be processed.
Database reader/send threshold (DBRDRTHLD) This parameter specifies the
backlog threshold criteria for the database reader (DBRDR) process. When the
backlog reaches any of the specified criteria, the threshold exceeded condition is
indicated in the status of the DBRDR process. If the data group is configured for
MIMIX source-send processing instead of remote journaling, this threshold applies to
the database send (DBSND) process. The threshold can be specified as time, journal
entries, or both. When time is specified, the value is the amount of time, in minutes,
between the timestamp of the last journal entry read by the process and the
timestamp of the last journal entry in the journal. When a journal entry quantity is
specified, the value is the number of journal entries that have not been read from the
journal. If *NONE is specified for a criterion, that criterion is not considered when
determining whether the backlog has reached the threshold.
Database apply processing (DBAPYPRC) This parameter allows you to specify
defaults for operations associated with the database apply processes. Each
configured apply session uses the values specified in this parameter. The areas for
which you can specify defaults are as follows:
Force data interval You can specify the number of records that are processed
before MIMIX forces the apply process information to disk from cache memory. A
lower value provides easier recovery for major system failures. A higher value
provides for more efficient processing.
Maximum open members You can specify the maximum number of members
(with journal transactions to be applied) that the apply process can have open at
one time. Once the limit specified is reached, the apply process selectively closes
one file before opening a new file. A lower value reduces disk usage by the apply
process. A higher value provides more efficient processing because MIMIX does
not open and close files as often.
Threshold warning You can specify the number of entries the apply process can
have waiting to be applied before a warning message is sent. When the threshold
is reached, the threshold exceeded condition is indicated in the status of the
database apply process and a message is sent to the primary and secondary
message queues.
Apply history log spaces You can specify the maximum number of history log
spaces that are kept after the journal entries are applied. Any value other than
zero (0) affects performance of the apply processes.
Keep journal log user spaces You can specify the maximum number of journal log
spaces to retain after the journal entries are applied. Log user spaces are
automatically deleted by MIMIX. Only the number of user spaces you specify are
kept.
Size of log user spaces (MB) You can specify the size of each log space (in
megabytes) in the log space chain. Log spaces are used as a staging area for
journal entries before they are applied. Larger log spaces provide better
performance.
Object processing (OBJPRC) This parameter allows you to specify defaults for
object replication. The areas for which you can specify defaults are as follows:
Object default owner You can specify the name of the default owner for objects
whose owning user profile does not exist on the target system. The product
default uses QDFTOWN for the owner user profile.
DLO transmission method You can specify the method used to transmit the DLO
content and attributes to the target system. The value *OPTIMIZED uses IBM i
APIs and does not support doclists. The *SAVRST uses IBM i save and restore
commands.
IFS transmission method You can specify the method used to transmit IFS object
content and attributes to the target system. The default value *OPTIMIZED uses
IBM i APIs for better performance. The value *SAVRST uses IBM i save and
restore commands.
Note: It is recommended that you use the *OPTIMIZED method of IFS
transmission only in environments in which the high volume of IFS activity
results in persistent replication backlogs. The IBM i save and restore
method guarantees that all attributes of an IFS object are replicated.
User profile status You can specify the user profile Status value for user profiles
when they are replicated. This allows you to replicate user profiles with the same
status as the source system in either an enabled or disabled status for normal
operations. If operations are switched to the backup system, user profiles can
then be enabled or disabled as needed as part of the switching process.
Keep deleted spooled files You can specify whether to retain replicated spooled
files on the target system after they have been deleted from the source system.
When you specify *YES, the replicated spooled files are retained on the target
system after they are deleted from the source system. MIMIX does not perform
any clean-up of these spooled files. You must delete them manually when they
are no longer needed. If you specify *NO, the replicated spooled files are deleted
from the target system when they are deleted from the source system.
Keep DLO system object name You can specify whether the DLO on the target
system is created with the same system object name as the DLO on the source
system. The system object name is only preserved if the DLO is not being
redirected during the replication process. If the DLO from the source system is
being directed to a different name or folder on the target system, then the system
object name will not be preserved.
Object retrieval delay You can specify the amount of time, in seconds, to wait after
an object is created or updated before MIMIX packages the object. This delay
provides time for your applications to complete their access of the object before
MIMIX begins packaging the object.
Object send threshold (OBJSNDTHLD) This parameter specifies the backlog
threshold criteria for the object send (OBJSND) process. When the backlog reaches
any of the specified criteria, the threshold exceeded condition is indicated in the
status of the OBJSND process. The threshold can be specified as time, journal
entries, or both. When time is specified, the value is the amount of time, in minutes,
between the timestamp of the last journal entry read by the process and the
timestamp of the last journal entry in the journal. When a journal entry quantity is
specified, the value is the number of journal entries that have not been read from the
journal. If *NONE is specified for a criterion, that criterion is not considered when
determining whether the backlog has reached the threshold.
Object retrieve processing (OBJRTVPRC) This parameter allows you to specify the
minimum and maximum number of jobs allowed to handle object retrieve requests
and the threshold at which the number of pending requests queued for processing
causes additional temporary jobs to be started. The specified minimum number of
jobs will be started when the data group is started. During periods of peak activity, if
the number of pending requests exceeds the backlog jobs threshold, additional jobs,
up to the maximum, are started to handle the extra work. When the backlog is
handled and activity returns to normal, the extra jobs will automatically end. If the
backlog reaches the warning message threshold, the threshold exceeded condition is
indicated in the status of the object retrieve (OBJRTV) process. If *NONE is specified
for the warning message threshold, the process status will not indicate that a backlog
exists.
Container send processing (CNRSNDPRC) This parameter allows you to specify
the minimum and maximum number of jobs allowed to handle container send
requests and the threshold at which the number of pending requests queued for
processing causes additional temporary jobs to be started. The specified minimum
number of jobs will be started when the data group is started. During periods of peak
activity, if the number of pending requests exceeds the backlog jobs threshold,
additional jobs, up to the maximum, are started to handle the extra work. When the
backlog is handled and activity returns to normal, the extra jobs will automatically end.
If the backlog reaches the warning message threshold, the threshold exceeded
condition is indicated in the status of the container send (CNRSND) process. If
*NONE is specified for the warning message threshold, the process status will not
indicate that a backlog exists.
Object apply processing (OBJAPYPRC) This parameter allows you to specify the
minimum and maximum number of jobs allowed to handle object apply requests and
the threshold at which the number of pending requests queued for processing triggers
additional temporary jobs to be started. The specified minimum number of jobs will be
started when the data group is started. During periods of peak activity, if the number
of pending requests exceeds the backlog threshold, additional jobs, up to the
maximum, are started to handle the extra work. When the backlog is handled and
activity returns to normal, the extra jobs will automatically terminate. You can also
specify a warning message threshold that indicates the number of pending
requests waiting in the queue for processing before a warning message is sent. When
the threshold is reached, the threshold exceeded condition is indicated in the status of
the object apply process and a message is sent to the primary and secondary
message queues.
User profile for submit job (SBMUSR) This parameter allows you to specify the
name of the user profile used to submit jobs. The default value *JOBD indicates that
the user profile named in the specified job description is used for the job being
submitted. The value *CURRENT indicates that the same user profile used by the job
that is currently running is used for the submitted job.
Send job description (SNDJOBD) This parameter allows you to specify the name
and library of the job description used to submit send jobs. The product default uses
MIMIXSND in library MIMIXQGPL for the send job description.
Apply job description (APYJOBD) This parameter allows you to specify the name
and library of the job description used to submit apply requests. The product default
uses MIMIXAPY in library MIMIXQGPL for the apply job description.
Reorganize job description (RGZJOBD) This parameter, used by database
processing, allows you to specify the name and library of the job description used to
submit reorganize jobs. The product default uses MIMIXRGZ in library MIMIXQGPL
for the reorganize job description.
Synchronize job description (SYNCJOBD) This parameter, used by database
processing, allows you to specify the name and library of the job description used to
submit synchronize jobs. The product default uses MIMIXSYNC in library MIMIXQGPL
for the synchronization job description. This is valid for any synchronize command that
does not have a JOBD parameter on the display.
Job restart time (RSTARTTIME) MIMIX data group jobs restart daily to maintain the
MIMIX environment. You can change the time at which these jobs restart. The source
or target role of the system affects the results of the time you specify on a data group
definition. Results may also be affected if you specify a value that uses the job restart
time in a system definition defined to the data group. Changing the job restart time is
considered an advanced technique.
Recovery window (RCYWIN) Configuring a recovery window (see note 1) for a data group
specifies the minimum amount of time, in minutes, that a recovery window is available
and identifies the replication processes that permit a recovery window. A recovery
window introduces a delay in the specified processes to create a minimum time
during which you can set a recovery point. Once a recovery point is set, you can react
to anticipated problems and take action to prevent a corrupted object from reaching
the target system. When the processes reach the recovery point, they are suspended
so that any corruption in the transactions after that point will not automatically be
processed.
By its nature, a recovery window can affect the data group's recovery time objective
(RTO). Consider the effect of the duration you specify on the data group's ability to
meet your required RTO. You should also disable auditing for any data group that has
a configured recovery window. For more information, see Preventing audits from
running in the MIMIX Operations book.
Additional considerations for data groups
If unwanted changes are recorded to a journal but not realized until a later time, you
can backtrack to a time prior to when the changes were made by using the Remove
J ournal Changes (RMVJ RNCHG) command provided by IBM. In order to use this
command, your configuration must meet certain criteria including specific values for
some of the data group definition parameters. For more information, see Removing
journaled changes in the MIMIX Operations book.
1. Recovery windows and recovery points are supported with the MIMIX CDP feature, which
requires an additional access code.
Creating a data group definition
Shipped default values for the Create Data Group Definition (CRTDGDFN) command
result in data groups configured for MIMIX Dynamic Apply. These data groups use
remote journaling as an integral part of the user journal replication processes. For
additional information see Table 11 in Considerations for LF and PF files on
page 96. For information about command parameters, see Tips for data group
parameters on page 209.
To create a data group, do the following:
1. To access the appropriate command, do the following:
a. From the MIMIX Basic Main Menu, type 11 (Configuration menu) and
press Enter.
b. From the MIMIX Configuration Menu, select option 4 (Work with data group
definitions) and press Enter.
c. From the Work with Data Group Definitions display, type a 1 (Create) next to
the blank line at the top of the list area and press Enter.
2. The Create Data Group Definition (CRTDGDFN) display appears. Specify a valid
three-part name at the Data group definition prompts.
Note: Data group names cannot be UPSMON or begin with the characters MM.
3. For the remaining prompts on the display, verify the values shown are what you
want. If necessary, change the values.
a. If you want a specific prefix to be used for jobs associated with the data group,
specify a value at the Short data group name prompt. Otherwise, MIMIX will
generate a prefix.
b. The default value for the Data resource group entry prompt will use the data
group name to create an association, through a data resource group entry,
between the data group and an application group when application groups are
configured within the installation. To have the data group associated with a
different data resource group entry, specify a name. When application groups
exist but you want to prevent the data group from participating in them, specify
*NONE.
c. Ensure that the value of the Data source prompt represents the system that
you want to use as the source of data to be replicated.
d. Verify that the value of the Allow to be switched prompt is what you want.
e. Verify that the value of the Data group type prompt is what you need. MIMIX
Dynamic Apply requires either *ALL or *DB. Legacy cooperative processing
and user journal replication of IFS objects, data areas, and data queues
require *ALL.
f. Verify that the value of the Primary transfer definition prompt is what you want.
g. If you want MIMIX to have access to an alternative communications path,
specify a value for the Secondary transfer definition prompt.
h. Verify that the value of the Reader wait time (seconds) prompt is what you
want.
i. Press Enter.
4. If you specified *OBJ for the Data group type, skip to Step 9.
5. The Journal on target prompt appears on the display. Verify that the value shown
is what you want and press Enter.
Note: If you specify *YES and you require that the status of journaling on the
target system is accurate, you should perform a save and restore
operation on the target system prior to loading the data group file entries. If
you are performing your initial configuration, however, it is not necessary
to perform a save and restore operation. You will synchronize as part of
the configuration checklist.
6. More prompts appear on the display that identify journaling information for the
data group. You may need to use the Page Down key to see the prompts. Do the
following:
a. Ensure that the values of System 1 journal definition and System 2 journal
definition identify the journal definitions you need.
Notes:
If you have not journaled before, the value *DGDFN is appropriate. If you
have an existing journaling environment that you have identified to MIMIX in
a journal definition, specify the name of the journal definition.
If you only see one of the journal definition prompts, you have specified *NO
for both the Allow to be switched prompt and the Journal on target prompt.
The journal definition prompt that appears is for the source system as
specified in the Data source prompt.
b. If any objects to replicate are located in an auxiliary storage pool (ASP) group
on either system, specify values for System1 ASP group and System 2 ASP
group as needed. The ASP group name is the name of the primary ASP device
within the ASP group.
c. The default for the Use remote journal link prompt is *YES, which is required
for MIMIX Dynamic Apply and preferred for other configurations. MIMIX creates a
transfer definition and an RJ link, if needed. To create a data group definition
for a source-send configuration, change the value to *NO.
d. At the Cooperative journal (COOPJRN) prompt, specify the journal for
cooperative operations. For new data groups, the value *DFT automatically
resolves to *USRJRN when Data group type is *ALL or *DB and Remote
journal link is *YES. The value *USRJRN processes through the user
(database) journal while the value *SYSJRN processes through the system
(audit) journal.
7. At the Number of DB apply sessions prompt, specify the number of apply sessions
you want to use.
8. Verify that the values shown for the DB journal entry processing prompts are what
you want.
Note: *SEND is required for the IBM RMVJRNCHG (Remove Journal Changes)
command. See Additional considerations for data groups on page 219
for more information.
9. At the Description prompt, type a text description of the data group definition,
enclosed in apostrophes.
10. Do one of the following:
To accept the basic data group configuration, press Enter. Most users can
accept the default values for the remaining parameters. The data group is
created when you press Enter.
To access prompts for advanced configuration, press F10 (Additional
Parameters) and continue with the next step.
Advanced Data Group Options: The remaining steps of this procedure are only
necessary if you need to access options for advanced configuration topics. The
prompts are listed in the order they appear on the display. Because IBM i does not
allow additional parameters to be prompt-controlled, you will see all parameters
regardless of the value specified for the Data group type prompt.
11. Specify the values you need for the following prompts associated with user journal
replication:
Remote journaling threshold
Synchronization check interval
Time stamp interval
Verify interval
Data area polling interval
Journal at creation
12. Specify the values you need for the following prompts associated with system
journal replication:
Number of times to retry
First retry delay interval
Second retry delay interval
13. Specify the values you need for each of the prompts on the File and tracking ent.
opts (FEOPT) parameter.
Notes:
Replication type must be *POSITION for MIMIX Dynamic Apply.
Apply session A is used for IFS objects, data areas, and data queues that are
configured for user journal replication. For more information see Database
apply session balancing on page 80.
The journal image value *BOTH is required for the IBM RMVJRNCHG
(Remove Journal Changes) command. See Additional considerations for data
groups on page 219 for more information.
14. Specify the values you need for each element of the following parameters:
Database reader/send threshold
Database apply processing
Object processing
Object send threshold
Object retrieve processing
Container send processing
Object apply processing
15. If necessary, change the values for the following prompts:
User profile for submit job
Send job description and its Library
Apply job description and its Library
Reorganize job description and its Library
Synchronize job description and its Library
Job restart time
16. When you are sure that you have defined all of the values that you need, press
Enter to create the data group definition.
Changing a data group definition
For information about command parameters, see Tips for data group parameters on
page 209.
To change a data group definition, do the following:
1. From the Work with DG Definitions display, type a 2 (Change) next to the data
group you want and press Enter.
2. The Change Data Group Definition (CHGDGDFN) display appears. Press Enter to
see additional prompts.
3. Make any changes you need for the values of the prompts. Page Down to see
more of the prompts.
Note: If you change the Number of DB apply sessions prompt (NBRDBAPY),
you need to start the data group specifying *YES for the Clear pending
prompt (CLRPND).
4. If you need to access advanced functions, press F10 (Additional parameters).
Make any changes you need for the values of the prompts.
5. When you are ready to accept the changes, press Enter.
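For example, the note in Step 3 implies a sequence like the following sketch (the three-part data group name is a placeholder):
CHGDGDFN DGDFN(ACCOUNTS SYSTEMA SYSTEMB) NBRDBAPY(4)
STRDG DGDFN(ACCOUNTS SYSTEMA SYSTEMB) CLRPND(*YES) /* placeholder data group name */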
Fine-tuning backlog warning thresholds for a data group
MIMIX supports the ability to set a backlog threshold on each of the replication jobs
used by a data group. When a job has a backlog that reaches or exceeds the
specified threshold, the threshold condition is indicated in the job status and reflected
in user interfaces.
Threshold settings are meant to inform you that, while normal replication processes
are active, a condition exists that could become a problem. What is an acceptable risk
for some data groups may not be acceptable for other data groups or in some
environments. For example, a threshold condition which occurs after starting a
process that was temporarily ended or while processing an unusually large object
which rarely changes may be an acceptable risk. However, a process that is
continuously in a threshold condition or having multiple processes frequently in
threshold conditions may indicate a more serious exposure that requires attention.
Ultimately, each threshold setting must be a balance between allowing normal
fluctuations to occur while ensuring that a job status is highlighted when a backlog
approaches an unacceptable level of risk to your recovery time objectives (RTO) or
risk of data loss.
Important! When evaluating whether threshold settings are compatible with your
RTO, you must consider all of the processes in the replication paths for which the
data group is configured and their thresholds. Each threshold represents only one
process in either the user journal replication path or the system journal replication
path. If the threshold for one process is set higher than its shipped value, a
backlog for that process may not result in a threshold condition while being
sufficiently large to cause subsequent processes to have backlogs which exceed
their thresholds. Consider the cumulative effect that having multiple processes in
threshold conditions would have on RTO and your tolerance for data loss in the
event of a failure.
Table 28 lists the shipped values for thresholds available in a data group definition,
identifies the risk associated with a backlog for each replication process, and
identifies available options to address a persistent threshold condition. For each data
group, you may need to use multiple options or adjust one or more threshold values
multiple times before finding an appropriate setting.
Table 28. Shipped threshold values for replication processes and the risk associated with a backlog

Remote journaling threshold - shipped default: 10 minutes
Risk associated with a backlog: All journal entries in the backlog for the remote
journaling function exist only in the source system journal and are waiting to be
transmitted to the remote journal. These entries cannot be processed by MIMIX user
journal replication processes and are at risk of being lost if the source system fails.
After the source system becomes available again, journal analysis may be required.
Options for resolving persistent threshold conditions: Option 3, Option 4

Database reader/send threshold - shipped default: 10 minutes
Risk associated with a backlog: For data groups that use remote journaling, all
journal entries in the database reader backlog are physically located on the target
system but MIMIX has not started to replicate them. If the source system fails, these
entries need to be read and applied before switching. For data groups that use MIMIX
source-send processing, all journal entries in the database send backlog are waiting
to be read and to be transmitted to the target system. The backlogged journal entries
exist only in the source system and are at risk of being lost if the source system fails.
After the source system becomes available again, journal analysis may be required.
Options for resolving persistent threshold conditions: Option 2, Option 3, Option 4

Database apply warning message threshold - shipped default: 100,000 entries
Risk associated with a backlog: All of the entries in the database apply backlog are
waiting to be applied to the target system. If the source system fails, these entries
need to be applied before switching. A large backlog can also affect performance.
Options for resolving persistent threshold conditions: Option 2, Option 3, Option 4

Object send threshold - shipped default: 10 minutes
Risk associated with a backlog: All of the journal entries in the object send backlog
exist only in the system journal on the source system and are at risk of being lost if
the source system fails. MIMIX may not have determined all of the information
necessary to replicate the objects associated with the journal entries. As this backlog
clears, subsequent processes may have backlogs as replication progresses.
Options for resolving persistent threshold conditions: Option 2, Option 3, Option 4

Object retrieve warning message threshold - shipped default: 100 entries
Risk associated with a backlog: All of the objects associated with journal entries in
the object retrieve backlog are waiting to be packaged so they can be sent to the
target system. The latest changes to these objects exist only in the source system
and are at risk of being lost if the source system fails. As this backlog clears,
subsequent processes may have backlogs as replication progresses.
Options for resolving persistent threshold conditions: Option 1, Option 2, Option 3, Option 4

Container send warning message threshold - shipped default: 100 entries
Risk associated with a backlog: All of the packaged objects associated with journal
entries in the container send backlog are waiting to be sent to the target system. The
latest changes to these objects exist only in the source system and are at risk of
being lost if the source system fails. As this backlog clears, subsequent processes
may have backlogs as replication progresses.
Options for resolving persistent threshold conditions: Option 1, Option 2, Option 3, Option 4

Object apply warning message threshold - shipped default: 100 requests
Risk associated with a backlog: All of the entries in the object apply backlog are
waiting to be applied to the target system. If the source system fails, these entries
need to be applied before switching. Any related objects for which an automatic
recovery action was collecting data may be lost.
Options for resolving persistent threshold conditions: Option 1, Option 2, Option 3, Option 4

The following options are available, listed in order of preference. Some options are
not available for all thresholds.
Option 1 - Adjust the number of available jobs. This option is available only for the
object retrieve, container send, and object apply processes. Each of these processes
has a configurable minimum and maximum number of jobs, a threshold at which
more jobs are started, and a warning message threshold. If the number of entries in a
backlog divided by the number of active jobs exceeds the job threshold, extra jobs are
automatically started in an attempt to address the backlog. If the backlog reaches the
higher value specified in the warning message threshold, the process status reflects
the threshold condition. If the process frequently shows a threshold status, the
maximum number of jobs may be too low or the job threshold value may be too high.
Adjusting either value in the data group configuration can result in more throughput.
Option 2 - Temporarily increase job performance. This option is available for all
processes except the RJ link. Use work management functions to increase the
resources available to a job by increasing its run priority or its timeslice (CHGJOB
command). These changes are effective only for the current instance of the job. The
changes do not persist if the job is ended manually or by nightly cleanup operations
resulting from the configured job restart time (RESTARTTIME) on the data group
definition.
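For example, the following sketch raises the resources for a hypothetical apply job; the qualified job name is a placeholder for the job you identify with WRKACTJOB:
CHGJOB JOB(123456/MIMIXOWN/APYJOB) RUNPTY(20) TIMESLICE(2000) /* placeholder job name */
A lower RUNPTY value gives the job a higher run priority; TIMESLICE is specified in milliseconds.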
Option 3 - Change threshold values or add criterion. All processes support
changing the threshold value. In addition, if the quantity of entries is more of a
concern than time, some processes support specifying additional threshold criteria
not used by shipped default settings. For the remote journal, database reader (or
database send), and object send processes, you can adjust the threshold so that a
number of journal entries is used as criteria instead of, or in conjunction with, a time
value. If both time and entries are specified, the first criterion reached will trigger the
threshold condition. Changes to threshold values are effective the next time the
process status is requested.
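As a sketch only, a change that adds an entry-count criterion to a time criterion might look like the following; the DBRDRTHLD keyword and its element layout are hypothetical, so prompt the CHGDGDFN command to find the actual threshold parameters:
CHGDGDFN DGDFN(ACCOUNTS SYSTEMA SYSTEMB) DBRDRTHLD(15 250000) /* hypothetical keyword and elements */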
Option 4 - Get assistance. If you tried the other options and threshold conditions
persist, contact your Certified MIMIX Consultant for assistance. It may be necessary
to change configurations to adjust what is defined to each data group or to make
permanent work management changes for specific jobs.
CHAPTER 11 Additional options: working with
definitions
The procedures for performing common functions, such as copying, displaying, and
renaming, are very similar for all types of definitions used by MIMIX. The generic
procedures in this topic can be used for copying, deleting, displaying, and printing
definitions. Specific procedures are included for renaming each type of definition and
for swapping system definition names.
The topics in this chapter include:
Copying a definition on page 229 provides a procedure for copying a system
definition, transfer definition, journal definition, or a data group definition.
Deleting a definition on page 230 provides a procedure for deleting a system
definition, transfer definition, journal definition, or a data group definition.
Displaying a definition on page 231 provides a procedure for displaying a system
definition, transfer definition, journal definition, or a data group definition.
Printing a definition on page 232 provides a procedure for creating a spooled file
which you can print that identifies a system definition, transfer definition, journal
definition, or a data group definition.
Renaming definitions on page 232 provides procedures for renaming definitions,
such as renaming a system definition, which is typically done as a result of a
change in hardware.
Swapping system definition names on page 238 provides a procedure to swap
system definition names.
Copying a definition
Use this procedure on a management system to copy a system definition, transfer
definition, journal definition, or a data group definition.
Notes for data group definitions:
The data group entries associated with a data group definition are not copied.
Before you copy a data group definition, ensure that activity is ended for the
definition to which you are copying.
Notes for journal definitions:
The journal definition identified in the From journal definition prompt must exist
before it can be copied. The journal definition identified in the To journal definition
prompt cannot exist when you specify *NO for the Replace definition prompt.
If you specify *YES for the Replace definition prompt, the journal definition
identified in the To journal definition prompt must exist. It is possible to introduce
conflicts in your configuration when replacing an existing journal definition. These
conflicts are automatically resolved
or an error message is sent when the journal environment for the definition is built.
To copy a definition, do the following:
Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 83 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
3. The "Work with" display for the definition type appears. Type a 3 (Copy) next to
definition you want and press Enter.
4. The Copy display for the definition type you selected appears. At the To definition
prompt, specify a name for the definition to which you are copying information.
5. If you are copying a journal definition or a data group definition, the display has
additional prompts. Verify that the values of prompts are what you want.
6. The value *NO for the Replace definition prompt prevents you from replacing an
existing definition. If you want to replace an existing definition, specify *YES.
7. To copy the definition, press Enter.
Deleting a definition
Use this procedure on a management system to delete a system definition, transfer
definition, journal definition, or a data group definition.
To delete a definition, do the following:
Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 83 for information about using these.
Attention: When you delete a system or data group definition,
information associated with the definition is also deleted. Ensure
that the definition you delete is not being used for replication and be
aware of the following:
If you delete a system definition, all other configuration
elements associated with that definition are deleted. This
includes journal definitions, transfer definitions, and data group
definitions with all associated data group entries.
If you delete a data group definition, all of its associated data
group entries are also deleted.
The delete function does not clean up any records for files in
the error/hold file.
When you delete a journal definition, only the definition is deleted.
The files being journaled, the journal, and the journal receivers are
not deleted.
1. Ensure that the definition you want to delete is not being used for replication. Do
the following:
a. From the MIMIX Main Menu, select option 2 (Work with systems) and press
Enter.
b. Type an 8 (Work with data groups) next to the system you want and press
Enter.
c. The result is a list of data groups for the system you selected. Type a 17 (File
entries) next to the data group you want and press Enter.
d. On the Work with DG File Entries display, verify that the status of the file
entries is *INACTIVE. If necessary, use option 10 (End journaling).
e. On the Work with Data Groups display, use option 10 (End data group).
f. Before deleting a system definition, on the Work with Systems display, use
option 10 (End managers).
2. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
3. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
4. The "Work with" display for the definition type appears. Type a 4 (Delete) next to
definition you want and press Enter.
5. When deleting system definitions, transfer definitions, or journal definitions, a
confirmation display appears with a list of definitions to be deleted. To delete the
definitions, press F16.
Displaying a definition
Use this procedure to display a system definition, transfer definition, journal definition,
or a data group definition.
To display a definition, do the following:
Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 83 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
3. The "Work with" display for the definition type appears. Type a 5 (Display) next to
definition you want and press Enter.
4. The definition display appears. Page Down to see all of the values.
Additional options: working with definitions
232
Printing a definition
Use this procedure to create a spooled file which you can print that identifies a system
definition, transfer definition, journal definition, or a data group definition.
To print a definition, do the following:
Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 83 for information about using these.
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, select the option for the type of definition
you want and press Enter.
3. The "Work with" display for the definition type appears. Type a 6 (Print) next to
definition you want and press Enter.
4. A spooled file is created with a name of MX***DFN, where *** indicates the type of
definition. You can print the spooled file according to your standard print
procedures.
Renaming definitions
The procedures for renaming a system definition, transfer definition, journal definition,
or data group definition must be run from a management system.
This section includes the following procedures:
Renaming a system definition on page 232
Renaming a transfer definition on page 235
Renaming a journal definition with considerations for RJ link on page 236
Renaming a data group definition on page 237
Renaming a system definition
System definitions are typically renamed as a result of a change in hardware. When
you rename a system definition, all other configuration information that references the
system definition is automatically modified to include the updated system name. This
includes journal definitions, transfer definitions, data group definitions, and associated
data group entries.
Attention: Before you rename any definition, ensure that all other
configuration elements related to it are not active.
Attention: Before you rename a system definition, ensure that
MIMIX activity is ended by using the End Data Group (ENDDG) and
End MIMIX Manager (ENDMMXMGR) commands.
To rename system definitions, do the following for each system whose definition you
are renaming. Perform these steps from the management system unless noted otherwise:
Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 83 for information about using these.
1. Perform a controlled end of the MIMIX installation. See the MIMIX Operations
book for procedures for ending MIMIX.
2. End the MIMIXSBS subsystem on all systems. See the MIMIX Operations book
for procedures for ending the MIMIXSBS subsystem.
3. From the MIMIX Intermediate Main Menu, select option 2 (Work with systems)
and press Enter.
4. From the Work with Systems display, select option 8 (Work with data groups) on
the system whose definition you are renaming, and press Enter.
5. For each data group listed, do the following:
a. From the Work with Data Groups display, select option 8 (Display status) and
press Enter.
b. Record the Last Read Receiver name and Sequence # for both database and
object.
6. If changing the host name or IP address, do the following steps. Otherwise,
continue with Step 7.
a. From the MIMIX Intermediate Main Menu, select option 11 (Configuration
menu) and press Enter.
b. From the MIMIX Configuration Menu, select option 2 (Work with transfer
definitions) and press Enter.
c. The Work with Transfer Definitions display appears. Select option 2 (Change)
and press Enter.
d. The Change Transfer Definition (CHGTFRDFN) display appears. Press F10 to
access additional parameters.
e. Specify the System 1 host name or address and System 2 host name or
address as the actual host names or IP addresses of the systems and press
Enter.
Note: Many installations will have an autostart entry for the STRSVR command.
Autostart entries must be reviewed for possible updates of a new system
name or IP address. For more information, see Identifying the current
autostart job entry information on page 171 and Changing an autostart
job entry and its related job description on page 172.
7. Start the MIMIXSBS subsystem and the port jobs on all systems using the host
names or IP addresses. If you changed these, use the host name or IP address
specified in Step 6.
8. For all systems, ensure communications before continuing. Follow the steps in
topic Verifying all communications links on page 176.
9. From the Work with System Definitions (WRKSYSDFN) display, type a 7
(Rename) next to the system whose definition is being renamed and press Enter.
10. The Rename System Definitions (RNMSYSDFN) display appears. At the To
system definition prompt, specify the new name for the system whose definition is
being renamed and press Enter.
11. Once this is complete, press F12.
12. Press F12 again to return to the MIMIX Intermediate Main Menu.
13. Select option 2 (Work with systems) and press Enter.
14. The Work with Systems display appears. Type a 9 (Start) next to the management
system you want and press Enter.
15. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:
a. At the Manager prompt, specify *ALL.
b. Press F10 to access additional parameters.
c. In the Reset configuration prompt, specify *YES.
d. Press Enter.
16. The Work with Systems display appears. For each network system, do the
following:
a. Type a 9 (Start) next to each network system you want and press Enter.
b. The Start MIMIX Managers (STRMMXMGR) display appears. Press Enter.
17. From the Work with Systems display, select option 8 (Work with data groups) on
the system whose definition you have renamed and press Enter.
18. For each data group listed, do the following:
a. From the Work with Data Groups display, select option 9 (Start DG) and press
Enter.
b. The Start Data Group (STRDG) display appears. Press F10 to display
additional parameters.
c. Type the Receiver names and Sequence #s, adding 1 to the sequence #s, that
were recorded in Step 5b for both database and object. Press Enter.
19. From the Work with Systems display, select option 8 (Work with data groups) on
the system whose definition you have renamed and ensure all data groups are
active. Refer to the MIMIX Operations book for more information.
20. Press F3 to return to the Work with Systems display.
21. From the Work with Systems display, select option 8 (Work with data groups) on
the management system and press Enter.
22. From the Work with Data Groups display, select option 9 (Start DG) for data
groups (highlighted red) that are not active and press Enter.
23. The Start Data Group (STRDG) display appears. Press Enter. Additional
parameters are displayed. Press Enter again to start the data groups.
24. The Work with data groups display reappears. Ensure all data groups are active.
Press F5 to refresh data. Refer to the MIMIX Operations book for more
information.
Renaming a transfer definition
When you rename a transfer definition, other configuration information which
references it is not updated with the new name. You must manually update other
information which references the transfer definition. The following procedure renames
the transfer definition and includes steps to update the other configuration information
that references the transfer definition including the system definition, data group
definition, and remote journal link. All of the steps must be completed.
To rename a transfer definition, do the following from the management system:
Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 83 for information about using these.
1. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu)
and press Enter.
2. From the MIMIX Configuration Menu, select option 2 (Work with transfer
definitions) and press Enter.
3. From the Work with Transfer Definitions menu, type a 7 (Rename) next to the
definition you want to rename and press Enter.
4. The Rename Transfer Definition display for the definition type you selected
appears. At the To transfer definition prompt, specify the values you want for the
new name and press Enter.
5. Press F12 to return to the MIMIX Configuration Menu.
6. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
7. From the Work with System Definitions menu, type a 2 (Change) next to the
system name whose transfer definition needs to be changed and press Enter.
8. From the Change System Definition display, specify the new name for the
transfer definition and press Enter.
9. Press F12 to return to the MIMIX Configuration Menu.
10. From the MIMIX Configuration Menu, select option 4 (Work with data group
definitions) and press Enter.
11. From the Work with DG Definitions menu, type a 2 (Change) next to the data
group name whose transfer definition needs to be changed and press Enter.
12. From the Change Data Group Definition display, specify the new name for the
transfer definition and press Enter until the Work with DG Definitions display
appears.
13. Press F12 to return to the MIMIX Configuration Menu.
14. From the MIMIX Configuration Menu, select option 8 (Work with remote journal
links) and press Enter.
15. From the Work with RJ Links menu, press F11 to display the transfer definitions.
16. Type a 2 (Change) next to the RJ link where you changed the transfer definition
and press Enter.
17. From the Change Remote Journal Link display, specify the new name for the
transfer definition and press Enter.
Renaming a journal definition with considerations for RJ link
When you rename a journal definition, other configuration information which
references it is not updated with the new name. This procedure includes steps for
renaming the journal definition in the data group definition, including considerations
when an RJ link is used.
If you rename a journal definition, the journal name will also be renamed if you used
the default value of *JRNDFN when configuring the journal definition. If you do not
want the journal name to be renamed, you must specify the journal name rather than
the default of *JRNDFN for the journal (JRN) parameter.
To rename a journal definition, do the following from the management system:
Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 83 for information about using these.
1. Perform a controlled end for the data group in your remote journaling
environment. Use topic Ending all replication in a controlled manner in the
MIMIX Operations book.
2. If using remote journaling, do the following. Otherwise, continue with Step 3:
a. End the remote journal link in a controlled manner. Use topic Ending a remote
journal link independently in the MIMIX Operations book.
b. Verify that the remote journal link is not in use on both systems. Use topic
Displaying status of a remote journal link in the MIMIX Operations book. The
remote journal link should have a state value of *INACTIVE before you
continue.
c. From the MIMIX Intermediate Main Menu, select option 11 (Configuration
menu) and press Enter.
d. From the MIMIX Configuration Menu, select option 8 (Work with remote
journal links) and press Enter.
e. Remove the remote journal connection (the RJ link). From the Work with RJ
Links display, type a 15 (Remove RJ connection) next to the link that you want
and press Enter. A confirmation display appears. To continue removing the
connections for the selected links, press Enter.
f. Press F12 to return to the MIMIX Configuration Menu.
3. From the MIMIX Configuration Menu, select option 3 (Work with journal
definitions) and press Enter.
4. From the Work with Journal Definitions menu, type a 7 (Rename) next to the
journal definition names you want to rename and press Enter.
5. The Rename Journal Definition display for the definition you selected appears. At
the To journal definition prompts, specify the values you want for the new name.
a. If the journal name is *JRNDFN, ensure that there are no journal receivers in
the specified library whose names start with the journal receiver prefix. See
Building the journaling environment on page 195 for more information.
information.
6. Press Enter. The Work with Journal Definitions display appears.
7. If using remote journaling, do the following to change the corresponding definition
for the remote journal. Otherwise, continue with Step 8:
a. Type a 2 (Change) next to the corresponding remote journal definition name
you changed and press Enter.
b. Specify the values entered in Step 5 and press Enter.
8. From the Work with Journal Definitions menu, type a 14 (Build) next to the journal
definition names you changed and press F4.
9. The Build Journaling Environment display appears. At the Source for values
prompt, specify *JRNDFN.
10. Press Enter. You should see a message that indicates the journal environment
was created.
11. Press F12 to return to the MIMIX Configuration Menu. From the MIMIX
Configuration Menu, select option 4 (Work with data group definitions) and press
Enter.
12. From the Work with DG Definitions menu, type a 2 (Change) next to the data
group name that uses the journal definition you changed and press Enter.
13. Press F10 to access additional parameters.
14. From the Change Data Group Definition display, specify the new name for the
System 1 journal definition and System 2 journal definition and press Enter twice.
Renaming a data group definition
Attention: Before you rename a data group definition, ensure
that the data group has a status of *INACTIVE.
Do the following to rename a data group definition:
Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 83 for information about using these.
1. Ensure that the data group is ended. If the data group is active, end it using the
procedure Ending a data group in a controlled manner in the MIMIX Operations
book.
2. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu)
and press Enter.
3. From the MIMIX Configuration Menu, select option 4 (Work with data group
definitions) and press Enter.
4. From the Work with DG Definitions menu, type a 7 (Rename) next to the data
group name you want to rename and press Enter.
5. From the Rename Data Group Definition display, specify the new name for the
data group definition and press Enter.
Swapping system definition names
Use the procedure in this section to swap system definition names. Refer to the
following requirements before beginning this procedure:
Requirements for swapping system definition names
This procedure must be run from the management system.
Port jobs must be running on both systems.
Use either the IP addresses or the actual host names in the transfer definition.
Ensure each step is successful before proceeding to the next step.
Record system definition names, including temporary names used for this
procedure.
Attention: Before you swap system definition names, ensure
that MIMIX activity is ended by using the End Data Group (ENDDG)
and End MIMIX Manager (ENDMMXMGR) commands.
The following procedure uses SYSTEMA for the network system definition and
SYSTEMB for the management system definition. To swap system definition names,
do the following:
Note: The following procedure includes using MIMIX menus. See Accessing the
MIMIX Main Menu on page 83 for information about using these.
1. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu)
and press Enter.
2. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
3. The Work with System Definitions (WRKSYSDFN) display appears. Type a 7
(Rename) next to the network system definition (SYSTEMA) and press Enter.
4. The Rename System Definitions (RNMSYSDFN) display appears. Enter a
temporary name for the network system (SYSTEMA) in the To system definition
prompt. Press Enter.
5. Press F12.
6. Press F12 again to return to the MIMIX Intermediate Main Menu.
7. Select option 2 (Work with systems) and press Enter.
8. The Work with Systems display appears. Type a 9 (Start) next to the temporarily
named system and press Enter.
9. The Start MIMIX Managers (STRMMXMGR) display appears. Press F10 to
display additional parameters.
10. Enter *YES for Reset configuration and press Enter.
11. Select option 10 (End) for both systems and press Enter. Ensure the systems are
ended before proceeding.
12. Press F12 to return to the MIMIX Intermediate Main Menu.
13. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu)
and press Enter.
14. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
15. The Work with System Definitions (WRKSYSDFN) display appears. Type a 7
(Rename) next to the management system definition (SYSTEMB) and press
Enter.
16. The Rename System Definitions (RNMSYSDFN) display appears. Enter the old
network system definition name (SYSTEMA) in the To system definition prompt.
Press Enter.
17. Press F12.
18. Press F12 to return to the MIMIX Intermediate Main Menu.
19. Select option 2 (Work with systems) and press Enter.
20. The Work with Systems display appears. On both systems, select option 9 (Start)
and press Enter.
21. The Start MIMIX Managers (STRMMXMGR) display appears. Press F10 to
display Additional parameters.
22. Enter *YES for Reset configuration and press Enter for both systems.
23. From the Work with Systems display select option 10 (End) for both systems and
press Enter.
24. Press F12 to return to the MIMIX Intermediate Main Menu.
25. From the MIMIX Intermediate Main Menu, select option 11 (Configuration menu)
and press Enter.
26. From the MIMIX Configuration Menu, select option 1 (Work with system
definitions) and press Enter.
27. The Work with System Definitions (WRKSYSDFN) display appears. Type a 7
(Rename) next to the temporary network system definition and press Enter.
28. The Rename System Definitions (RNMSYSDFN) display appears. Enter the old
management system definition name (SYSTEMB) in the To system definition
prompt. Press Enter.
29. Press F12.
30. Press F12 again to return to the MIMIX Intermediate Main Menu.
31. Select option 2 (Work with systems) and press Enter.
32. The Work with Systems display appears. On both systems, select option 9 (Start)
and press Enter.
33. The Start MIMIX Managers (STRMMXMGR) display appears. Press F10 to
display Additional parameters.
34. Enter *YES for Reset configuration and press Enter.
CHAPTER 12 Configuring data group entries
Data group entries can identify one or many objects to be replicated or excluded from
replication. You can add individual data group entries, load entries from an existing
source, and change entries as needed.
The topics in this chapter include:
Creating data group object entries on page 242 describes data group object
entries which are used to identify library-based objects for replication. Procedures
for creating these are included.
Creating data group file entries on page 246 describes data group file entries
which are required for user journal replication of *FILE objects. Procedures for
creating these are included.
Creating data group IFS entries on page 255 describes data group IFS entries
which identify IFS objects for replication. Procedures for creating these are
included.
Loading tracking entries on page 257 describes how to manually load tracking
entries for IFS objects, data areas, and data queues that are configured for user
journal replication.
Creating data group DLO entries on page 259 describes data group DLO entries
which identify document library objects (DLOs) for replication by MIMIX system
journal replication processes. Procedures for creating these are included.
Creating data group data area entries on page 261 describes data group data
area entries which identify data areas to be replicated by the data area poller
process. Procedures for creating these are included.
Additional options: working with DG entries on page 263 provides procedures for
performing data group entry common functions, such as copying, removing, and
displaying.
The appendix Supported object types for system journal replication on page 533
lists IBM i object types and indicates whether each object type is replicated by MIMIX.
Creating data group object entries
Data group object entries are used to identify library-based objects for replication.
How replication is performed for the objects identified depends on the object type and
configuration settings. For object types that cannot be journaled to a user journal,
system journal replication processes are used. For object types that can be journaled
(*FILE, *DTAARA, and *DTAQ), values specified in the object entry and other
configuration information determine how replication is performed. For these object
types, default values in the object entry are appropriate for user journal replication;
however, user journal replication of these object types also requires file entries (for
*FILE) and object tracking entries (for *DTAARA and *DTAQ).
For detailed concepts and requirements for supported configurations, see the
following topics:
Identifying library-based objects for replication on page 91
Identifying logical and physical files for replication on page 96
Identifying data areas and data queues for replication on page 103
When you configure MIMIX, you can create data group object entries by adding
individual object entries or by using the custom load function for library-based objects.
The custom load function can simplify creating data group entries. This function
generates a list of objects that match your specified criteria, from which you can
selectively create data group object entries. For example, if you want to replicate all
but a few of the data areas in a specific library, you could use the Add Data Group
Object Entry (ADDDGOBJE) command to create a single data group object entry that
includes all data areas in the library. Then, using the same object selection criteria
with the custom load function, you can select from a list of data areas in the library to
create exclude entries for the objects you do not want replicated.
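A sketch of that data area example follows; the data group and library names are placeholders, and the keywords are assumed from the prompt names (System 1 library, System 1 object, Object type, Process type):
ADDDGOBJE DGDFN(ACCOUNTS SYSTEMA SYSTEMB) LIB1(APPLIB) OBJ1(*ALL) OBJTYPE(*DTAARA) PRCTYPE(*INCLD) /* keywords assumed */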
Once you have created data group object entries, you can tailor them to meet your
requirements. You can also use the #DGFE audit or the Check Data Group File
Entries (CHKDGFE) command to ensure that the correct file entries exist for the
object entries configured for the specified data group.
Loading data group object entries
In this procedure, you specify selection criteria that results in a list of objects with
similar characteristics. From the list, you can select multiple objects for which MIMIX
will create appropriate data group object entries. You can customize individual entries
later, if necessary.
From the management system, do the following to create a custom load of object
entries:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 20 (Object entries) next to the
data group you want and press Enter.
3. The Work with DG Object Entries display appears. Press F19 (Load).
4. The Load DG Object Entries (LODDGOBJE) display appears. Do the following to
specify the selection criteria:
a. Identify the library and objects to be considered. Specify values for the System
1 library and System 1 object prompts.
b. If necessary, specify values for the Object type, Attribute, System 2 library, and
System 2 object prompts.
c. At the Process type prompt, specify whether resulting data group object entries
should include or exclude the identified objects.
d. Specify appropriate values for the Cooperate with database and Cooperating
object types prompts. These prompts determine how *FILE, *DTAARA, and
*DTAQ objects are replicated. Change the values if you want to explicitly
replicate from the system journal or if you want to limit which object types are
cooperatively processed with the user journal.
e. Ensure that the remaining prompts contain the values you want for the data
group object entries that will be created. Press Page Down to see all of the
prompts.
5. To specify file entry options that will override those set in the data group definition,
do the following:
a. Press F9 (All parameters).
b. Press Page Down until you locate the File entry options prompt.
c. Specify the values you need on the elements of the File entry options prompt.
6. To generate the list of objects, press Enter.
Note: If you skipped Step 5, you may need to press Enter multiple times.
7. The Load DG Object Entries display appears with the list of objects that matched
your selection criteria. Either type a 1 (Select) next to the objects you want or
press F21 (Select all). Then press Enter.
8. If necessary, you can use Adding or changing a data group object entry on
page 243 to customize values for any of the data group object entries.
Synchronize the objects identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
Adding or changing a data group object entry
Note: If you are converting a data group to use user journal replication for data areas
or data queues, use this procedure when directed by Checklist: Change
*DTAARA, *DTAQ, IFS objects to user journaling on page 138.
From the management system, do the following to add a new data group object entry
or change an existing entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 20 (Object entries) next to the
data group you want and press Enter.
3. The Work with DG Object Entries display appears. Do one of the following:
To add a new entry, type a 1 (Add) next to the blank line at the top of the list
and press Enter.
To change an existing entry, type a 2 (Change) next to the entry you want and
press Enter.
4. The appropriate Data Group Object Entry display appears. When adding an entry,
you must specify values for the System 1 library and System 1 object prompts.
Note: When changing an existing object entry to enable replication of data areas
or data queues from a user journal (COOPDB(*YES)), make sure that you
specify only the objects you want to enable for the System 1 object
prompt. Otherwise, all objects in the library specified for System 1 library
will be enabled.
5. If necessary, specify a value for the Object type prompt.
6. Press F9 (All parameters).
7. If necessary, specify values for the Attribute, System 2 library, System 2 object,
and Object auditing value prompts.
8. At the Process type prompt, specify whether resulting data group object entries
should include (*INCLD) or exclude (*EXCLD) the identified objects.
9. Specify appropriate values for the Cooperate with database and Cooperating
object types prompts.
Note: These prompts determine how *FILE, *DTAARA, and *DTAQ objects are
replicated. Change the values if you want to explicitly replicate from the
system journal or if you want to limit which object types are cooperatively
processed with the user journal.
10. Ensure that the remaining prompts contain the values you want for the data group
object entries that will be created. Press Page Down to see more prompts.
11. To specify file entry options that will override those set in the data group definition,
do the following:
a. If necessary, press Page Down to locate the File entry options prompt.
b. Specify the values you need on the elements of the File entry options prompt.
12. Press Enter.
13. For object entries configured for user journal replication of data areas or data
queues, return to Step 7 in procedure Checklist: Change *DTAARA, *DTAQ, IFS
objects to user journaling on page 138 to complete additional steps necessary to
complete the conversion.
Synchronize the objects identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
Creating data group file entries
Data group file entries are required for user journal replication of *FILE objects.
When you configure MIMIX, you can create data group file entry information by
creating data group file entries individually or by loading entries from another source.
Once you have created the file entries, you can tailor them to meet your requirements.
Note: If you plan to use either MIMIX Dynamic Apply or legacy cooperative
processing, files must be defined by both data group object entries and data
group file entries. It is strongly recommended that you create data group
object entries first. Then, load the data group file entries from the object entry
information defined for the files. You can use the #DGFE audit or the Check
Data Group File Entries (CHKDGFE) command to ensure that the correct file
entries exist for the object entries configured for the specified data group.
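For example, a minimal check might be run as follows (the three-part data group name is a placeholder):
CHKDGFE DGDFN(ACCOUNTS SYSTEMA SYSTEMB)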
For detailed concepts and requirements for supported configurations, see the
following topics:
Identifying library-based objects for replication on page 91
Identifying logical and physical files for replication on page 96
Loading file entries
If you need to create data group file entries for many files, you can have MIMIX create
the entries for you using the Load Data Group File Entries (LODDGFE) command.
The Configuration source (CFGSRC) parameter supports loading from a variety of
sources, listed below in the order most commonly used:
*DGOBJE - File entry information is loaded from the information in data group
object entries configured for the data group. If you are configuring to use MIMIX
Dynamic Apply or legacy cooperative processing, this value is recommended.
*NONE - File entry information is loaded from a library on either the source or
target system, as determined by the values specified for the System 1 library
(LIB1), System 2 library (LIB2), and Load from system (LODSYS) parameters.
*JRNDFN - File entry information is loaded from a journal specified in the journal
definition associated with the specified data group. File entries will be created for
all files currently journaled to the journal specified in the journal definition.
*DGFE - File entry information is loaded from data group file entries defined to
another data group. This option supports loading from data groups at the previous
release or the current release on the same system. This value is typically used
when loading file entries from a data group in a different installation of MIMIX.
When loading from a data group, you can also specify the source from which file entry
options are loaded, and override elements if needed. The Default FE options source
(FEOPTSRC) parameter determines whether file entry options are loaded from the
specified configuration source (*CFGSRC) or from the data group definition
(*DGDFT). Any file entry option with a value of *DFT is loaded from the specified
source. Any values specified on elements of the File entry options (FEOPT)
parameter override the values loaded from the FEOPTSRC parameter for all data
group file entries created by a load request.
Regardless of where the configuration source and file entry option source are located,
the Load Data Group File Entries (LODDGFE) command must be used from a system
designated as a management system.
Note: The Load Data Group File Entries (LODDGFE) command performs a journal
verification check on the file entries using the Verify Journal File Entries
(VFYJRNFE) command. In order to accurately determine whether files are
being journaled to the target system, you should first perform a save and
restore operation to synchronize the files to the target system before loading
the data group file entries.
Loading file entries from a data group's object entries
This topic contains examples and a procedure. The examples illustrate the flexibility
available for loading file entry options.
Example - Load from the same data group This example illustrates how to create
file entries when converting a data group to use MIMIX Dynamic Apply. In this
example, data group DGDFN1 is being converted. The data group definition specifies
*SYS1 as its data source (DTASRC). However, in this example, file entries will be
loaded from the target system to take advantage of a known synchronization point at
which replication will later be started.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) UPDOPT(*ADD) LODSYS(*SYS2) SELECT(*NO)
Since no value was specified for FROMDGDFN, its default value *DGDFN causes the
file entries to load from existing object entries for DGDFN1. The value *SYS2 for
LODSYS causes this example configuration to load from its target system. Entries are
added (UPDOPT(*ADD)) to the existing configuration. Since all files identified by
object entries are wanted, SELECT(*NO) bypasses the selection list. The data group
file entries created for DGDFN1 have file entry options which match those found in the
object entries because no values were specified for FEOPTSRC or FEOPT
parameters.
Example - Load from another data group with mixed sources for file entry
options The file entries for data group DGDFN1 are created by loading from the
object entries for data group DGDFN2, with file entry options loaded from multiple
sources.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGOBJE) FROMDGDFN(DGDFN2) FEOPT(*CFGSRC *DGDFT *CFGSRC *DGDFT)
The data group file entries created for DGDFN1 are loaded from the configuration
information in the object entries for DGDFN2, with file entry options coming from
multiple sources. Because the command specified the first element (Journal image)
and third element (Replication type) of the file entry options (FEOPT) as *CFGSRC,
the resulting file entries have the same values for those elements as the data group
object entries for DGDFN2. Because the command specified the second element
(Omit open/close entries) and the fourth element (Lock member during apply) as
*DGDFT, these elements are loaded from the data group definition. The rest of the file
entry options are loaded from the configuration source (object entries for DGDFN2).
Procedure: Use this procedure to create data group file entries from the object
entries defined to a data group.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears. The name of the
data group for which you are creating file entries and the Configuration source
value of *DGOBJE are pre-selected. Press Enter.
5. The following prompts appear on the display. Specify appropriate values.
a. From data group definition - To load from entries defined to a different data
group, specify the three-part name of the data group.
b. Load from system - Ensure that the value specified is appropriate. For most
environments, files should be loaded from the source system of the data group
you are loading. (This value should be the same as the value specified for Data
source in the data group definition.)
c. Update option - If necessary, specify the value you want.
d. Default FE options source - Specify the source for loading values for default file
entry options. Each element in the file entry options is loaded from the
specified location unless you explicitly specify a different value for an element
in Step 6.
6. Optionally, you can specify a file entry option value to override those loaded from
the configuration source. Do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
7. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
8. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
9. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use Changing a data group file entry on page 253 to customize
values for any of the data group file entries.
Loading file entries from a library
Example: The data group file entries are created by loading from a library named
TESTLIB on the source system. This example assumes the configuration is set up so
that system 1 in the data group definition is the source for replication.
LODDGFE DGDFN(DGDFN1) CFGSRC(*NONE) LIB1(TESTLIB)
Since the FEOPT parameter was not specified, the resulting data group file entries
are created with a value of *DFT for all of the file entry options. Because there is no
MIMIX configuration source specified, the value *DFT results in the file entry options
specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from a library on
either the source system or the target system.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of
the data group for which you are creating file entries. At the Configuration source
prompt, specify *NONE and press Enter.
5. Identify the location of the files to be used for loading. For common configurations,
you can accomplish this by specifying a library name at the System 1 library
prompt and accepting the default values for the System 2 library, Load from
system, and File prompts.
If you are using system 2 as the data source for replication or if you want the
library name to be different on each system, then you need to modify these values
to appropriately reflect your data group defaults.
6. If necessary, specify the values you want for the following:
Update option prompt
Add entry for each member prompt
7. The value of the Default FE options source prompt is ignored when loading from a
library. To optionally specify file entry options, do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. If necessary, you can use Changing a data group file entry on
page 253 to customize values for any of the data group file entries.
Loading file entries from a journal definition
Example: The data group file entries are created by loading from the journal
associated with system 1 of the data group. This example assumes the configuration
is set up so that system 1 in the data group definition is the source for replication.
Journal definition 1, specified in the data group definition, identifies the journal.
LODDGFE DGDFN(DGDFN1) CFGSRC(*JRNDFN) LODSYS(*SYS1)
Since the FEOPT parameter was not specified, the resulting data group file entries
are created with a value of *DFT for all of the file entry options. Because there is no
MIMIX configuration source specified, the value *DFT results in the file entry options
specified in the data group definition being used.
Procedure: Use this procedure to create data group file entries from the journal
associated with a journal definition specified for the data group.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of
the data group for which you are creating file entries. At the Configuration source
prompt, specify *JRNDFN and press Enter.
File and library names on the source and target systems are set to the same
names for the load operation.
5. At the Load from system prompt, ensure that the value specified represents the
appropriate system. The journal definition associated with the specified system is
used for loading. For common configurations, the value that corresponds to the
source system of the data group you are loading should be used. (This value
should match the value specified for Data source in the data group definition.)
6. If necessary, specify the value you want for the Update option prompt.
7. The value of the Default FE options source prompt is ignored when loading from a
journal definition. To optionally specify file entry options, do the following:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
9. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
10. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use Changing a data group file entry on page 253 to customize
values for any of the data group file entries.
Loading file entries from another data group's file entries
Example 1: The data group file entries are created by loading from file entries for
another data group, DGDFN2.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) FROMDGDFN(DGDFN2)
Since the FEOPT parameter was not specified, the resulting data group file entries for
DGDFN1 are created with a value of *DFT for all of the file entry options. Because the
configuration source is another data group, the value *DFT results in file entry options
which match those specified in DGDFN2.
Example 2: The data group file entries are created by loading from file entries for
another data group, DGDFN2, in another installation, MXTEST.
LODDGFE DGDFN(DGDFN1) CFGSRC(*DGFE) PRDLIB(MXTEST) FROMDGDFN(DGDFN2)
Since the FEOPT parameter was not specified, the resulting data group file entries for
DGDFN1 are created with a value of *DFT for all of the file entry options. Because the
configuration source is another data group in another installation, the value *DFT
results in file entry options which match those specified in DGDFN2 in installation
MXTEST.
Procedure: Use this procedure to create data group file entries from the file entries
defined to another data group.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading file entries are not effective until the data group
is restarted.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. Press F19 (Load).
4. The Load Data Group File Entries (LODDGFE) display appears with the name of
the data group for which you are creating file entries. At the Configuration source
prompt, specify *DGFE and press Enter.
5. At the Production library prompt, either accept *CURRENT or specify the name of
the installation library in which the data group from which you are loading is located.
6. At the From data group definition prompts, specify the three-part name of the data
group from which you are loading.
7. If necessary, specify the value you want for the Update option prompt.
8. Specify the source for loading values for default file entry options at the Default FE
options source prompt. Each element in the file entry options is loaded from the
specified location unless you explicitly specify a different value for an element in
Step 9.
9. If necessary, do the following to specify a file entry option value to override those
loaded from the configuration source:
a. Press F10 (Additional parameters).
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
10. Press Enter. The LODDGFE Entry Selection List display appears with a list of the
files identified by the specified configuration source.
11. Either type a 1 (Load) next to the files that you want or press F21 (Select all).
12. To create the file entries, press Enter.
All selected files identified from the configuration source are represented in the
resulting file entries. Each generated file entry includes all members of the file. If
necessary, you can use Changing a data group file entry on page 253 to customize
values for any of the data group file entries.
Adding a data group file entry
When you add a single data group file entry to a data group definition, the
configuration is dynamically updated and MIMIX automatically starts journaling of the
file on the source system if the file exists and is not already journaled. Special entries
are inserted into the journal data stream to enable the dynamic update. The added
data group file entry is recognized by MIMIX as soon as each active process receives
the special entries. For each MIMIX process, there may be a delay before the addition
is recognized. This is true especially for very active data groups.
Use this procedure to add a data group file entry to a data group.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. From the Work with DG File Entries display, type a 1 (Add) next to the blank line at
the top of the list and press Enter.
4. The Add Data Group File Entry (ADDDGFE) display appears. At the System 1 File
and Library prompts, specify the file that you want to replicate.
5. By default, all members in the file are replicated. If you want to replicate only a
specific member, specify its name at the Member prompt.
Note: All replicated members of a file must be in the same database apply
session. For data groups configured for multiple apply sessions, specify
the apply session on the File entry options prompt. See Step 7.
6. Verify that the values of the remaining prompts on the display are what you want.
If necessary, change the values as needed.
Notes:
If you change the value of the Dynamically update prompt to *NO, you need to
end and restart the data group before the addition is recognized.
If you change the value of the Start journaling of file prompt to *NO and the file
is not already journaled, MIMIX will not be able to replicate changes until you
start journaling the file.
7. Optionally, you can specify file entry options that will override those defined for the
data group. Do the following:
a. Press F10 (Additional parameters), then press Page Down.
b. Specify values as needed for the elements of the File entry options prompts.
Any values you specify will be used for all of the file entries created with this
procedure.
8. Press Enter to create the data group file entry.
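For reference, an add request similar to the following could be run from a command
line to add a file entry for file MYFILE in library MYLIB and request a dynamic update.
This is a sketch only: the DGDFN keyword matches the load examples earlier in this
chapter, but the FILE1, MBR1, and DYNUPD keywords are assumptions inferred from
the prompt names on the display. Prompt the command with F4 to confirm the
keywords and values for your environment.
ADDDGFE DGDFN(DGDFN1) FILE1(MYLIB/MYFILE) MBR1(*ALL) DYNUPD(*YES)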
Changing a data group file entry
Use this procedure to change an existing data group file entry.
From the management system, do the following:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 17 (File entries) next to the data
group you want and press Enter.
3. Locate the file entry you want on the Work with DG File Entries display. Type a 2
(Change) next to the entry you want and press Enter.
4. The Change Data Group File Entry (CHGDGFE) display appears. Press F10
(Additional parameters) to see all available prompts. You can change any of the
values shown on the display.
Notes:
If the file is currently being journaled and transactions are being applied, do not
change the values specified for To system 1 file (TOFILE1) and To member
(TOMBR1).
All replicated members of a file must be in the same database apply session.
For data groups configured for multiple apply sessions, specify the apply
session on the File entry options prompt.
5. To accept your changes, press Enter.
The replication processes do not recognize the change until the data group has been
ended and restarted.
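For reference, the change can also be requested from a command line by prompting
the command, for example:
CHGDGFE DGDFN(DGDFN1) FILE1(MYLIB/MYFILE)
This is a sketch only; the FILE1 keyword is an assumption inferred from the System 1
file prompt, and pressing F4 on the command shows the actual keywords and current
values.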
Creating data group IFS entries
Data group IFS entries identify IFS objects for replication. The identified objects are
replicated through the system journal unless the data group IFS entries are explicitly
configured to allow the objects to be replicated through the user journal.
Topic Identifying IFS objects for replication on page 106 provides detailed concepts
and identifies requirements for configuration variations for IFS objects. Supported file
systems are included, as well as examples of the effect that multiple data group IFS
entries have on object auditing values.
Adding or changing a data group IFS entry
Note: If you are converting a data group to use user journal replication for IFS
objects, use this procedure when directed by Checklist: Change *DTAARA,
*DTAQ, IFS objects to user journaling on page 138.
Changes become effective after one of the following occurs:
The data group is ended and restarted
Nightly maintenance routines end and restart MIMIX jobs
A MIMIX audit that uses IFS entries to select objects to audit is started.
From the management system, do the following to add a new data group IFS entry or
change an existing IFS entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 22 (IFS entries) next to the data
group you want and press Enter.
3. The Work with Data Group IFS Entries display appears. Do one of the following:
To add a new entry, type a 1 (Add) next to the blank line at the top of the
display and press Enter.
To change an existing entry, type a 2 (Change) next to the entry you want and
press Enter.
4. The appropriate Data Group IFS Entry display appears. When adding an entry,
you must specify a value for the System 1 object prompt.
Notes:
The object name must begin with the '/' character and can be up to 512
characters in total length. The object name can be a simple name, a name that
is qualified with the name of the directory in which the object is located, or a
generic name that contains one or more characters followed by an asterisk (*),
such as /ABC*. Any component of the object name contained between two '/'
characters cannot exceed 255 characters in length.
All objects in the specified path are selected. When changing an existing IFS
entry to enable replication from a user journal (COOPDB(*YES)), make sure
that you specify only the IFS objects you want to enable.
5. If necessary, specify values for the System 2 object and Object auditing value
prompts.
6. At the Process type prompt, specify whether resulting data group object entries
should include (*INCLD) or exclude (*EXCLD) the identified objects.
7. Specify the appropriate value for the Cooperate with database prompt. To ensure
that journaled IFS objects can be replicated from the user journal, specify *YES.
To replicate from the system journal, specify *NO.
8. If necessary, specify a value for the Object retrieval delay prompt.
9. Ensure that the remaining prompts contain the values you want for the data group
object entries that will be created. Press Page Down to see more prompts.
10. Press Enter to create the IFS entry.
11. For IFS entries configured for user journal replication, return to Step 7 in
procedure Checklist: Change *DTAARA, *DTAQ, IFS objects to user journaling
on page 138 to complete additional steps necessary to complete the conversion.
Synchronize the objects identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
Loading tracking entries
Tracking entries are associated with the replication of IFS objects, data areas, and
data queues with advanced journaling techniques. A tracking entry must exist for
each existing IFS object, data area, or data queue identified for replication.
IFS tracking entries identify existing IFS stream files on the source system that have
been identified as eligible for replication with advanced journaling by the collection of
data group IFS entries defined to a data group. Similarly, object tracking entries
identify existing data areas and data queues on the source system that have been
identified as eligible for replication using advanced journaling by the collection of data
group object entries defined to a data group.
When you initially configure a data group, you must load tracking entries and start
journaling for the objects which they identify. Similarly, if you add new or change
existing data group IFS entries or object entries, tracking entries for any additional IFS
objects, data areas, or data queues must be loaded and journaling must be started on
the objects which they identify.
Loading IFS tracking entries
After you have configured the data group IFS entries for advanced journaling, use this
procedure to load IFS tracking entries which match existing IFS objects. This
procedure uses the Load DG IFS Tracking Entries (LODDGIFSTE) command. Default
values for the command will load IFS tracking entries from objects on the system
identified as the source for replication without duplicating existing IFS tracking entries.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading tracking entries are not effective until the data
group is restarted.
From the management system, do the following:
1. Ensure that the data group is ended. If the data group is active, end it using the
procedure Ending a data group in a controlled manner in the MIMIX Operations
book.
2. On a command line, type LODDGIFSTE and press F4 (Prompt). The Load DG IFS
Tracking Entries (LODDGIFSTE) command appears.
3. At the prompts for Data group definition, specify the three-part name of the data
group for which you want to load IFS tracking entries.
4. Verify that the value specified for the Load from system prompt is appropriate for
your environment. If necessary, specify a different value.
5. Verify that the value specified for the Update option prompt is appropriate for your
environment. If necessary, specify a different value.
6. At the Submit to batch prompt, specify the value you want.
7. Press Enter.
8. If you specified *NO for batch processing, the request is processed. If you
specified *YES, you will see additional prompts for Job description and Job name.
If necessary, specify different values and press Enter.
9. You should receive message LVI3E2B indicating the number of tracking entries
loaded for the data group.
Note: The command used in this procedure does not start journaling on the tracking
entries. Start journaling for the tracking entries when indicated by your
configuration checklist.
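For reference, a load request similar to the following sketch relies on the command
defaults described above to load IFS tracking entries for data group DGDFN1 from
the source system:
LODDGIFSTE DGDFN(DGDFN1)
Any keywords beyond DGDFN, such as those behind the Load from system and
Update option prompts, should be confirmed by prompting the command with F4
before use.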
Loading object tracking entries
After you have configured the data group object entries for advanced journaling, use
this procedure to load object tracking entries which match existing data areas and
data queues. This procedure uses the Load DG Obj Tracking Entries
(LODDGOBJTE) command. Default values for the command will load object tracking
entries from objects on the system identified as the source for replication without
duplicating existing object tracking entries.
Note: The data group must be ended before using this procedure. Configuration
changes resulting from loading tracking entries are not effective until the data
group is restarted.
From the management system, do the following:
1. Ensure that the data group is ended. If the data group is active, end it using the
procedure Ending a data group in a controlled manner in the MIMIX Operations
book.
2. On a command line, type LODDGOBJTE and press F4 (Prompt). The Load DG Obj
Tracking Entries (LODDGOBJTE) command appears.
3. At the prompts for Data group definition, specify the three-part name of the data
group for which you want to load object tracking entries.
4. Verify that the value specified for the Load from system prompt is appropriate for
your environment. If necessary, specify a different value.
5. Verify that the value specified for the Update option prompt is appropriate for your
environment. If necessary, specify a different value.
6. At the Submit to batch prompt, specify the value you want.
7. Press Enter.
8. If you specified *NO for batch processing, the request is processed. If you
specified *YES, you will see additional prompts for Job description and Job name.
If necessary, specify different values and press Enter.
9. You should receive message LVI3E2B indicating the number of tracking entries
loaded for the data group.
Note: The command used in this procedure does not start journaling on the tracking
entries. Start journaling for the tracking entries when indicated by your
configuration checklist.
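For reference, a load request similar to the following sketch relies on the command
defaults described above to load object tracking entries for data group DGDFN1 from
the source system:
LODDGOBJTE DGDFN(DGDFN1)
As with the IFS tracking entry load, confirm any additional keywords by prompting the
command with F4 before use.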
Creating data group DLO entries
Data group DLO entries identify document library objects (DLOs) for replication by
MIMIX system journal replication processes.
When you configure MIMIX, you can create data group DLO entries by loading from a
generic entry and selecting from documents in the list, or by creating individual DLO
entries. Once you have created the DLO entries, you can tailor them to meet your
requirements.
For detailed concepts and requirements, see Identifying DLOs for replication on
page 111.
Loading DLO entries from a folder
If you need to create data group DLO entries for a group of documents within a folder,
you can specify information so that MIMIX will create the data group DLO entries for
you. (You can customize individual entries later, if necessary.)
The user profile you use to perform this task must be enrolled in the system
distribution directory on the management system.
Note: The MIMIXOWN user profile is automatically added to the system directory
when MIMIX is installed. This entry is required for DLO replication and should
not be removed.
From the management system, do the following to create DLO entries by loading from
a list.
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data
group you want and press Enter.
3. The Work with DG DLO Entries display appears. Press F19 (Load).
4. The Load DG DLO Entries (LODDGDLOE) display appears. Do the following to
specify the selection criteria:
a. Identify the documents to be considered. Specify values for the System 1 folder
and System 1 document prompts.
b. If necessary, specify values for the Owner, System 2 folder, System 2 object,
and Object auditing value prompts.
c. At the Process type prompt, specify whether resulting data group DLO entries
should include or exclude the identified documents.
d. If necessary, specify a value for the Object retrieval delay prompt.
e. Press Enter.
5. Additional prompts appear to optionally use batch processing and to load entries
without selecting entries from a list. Press Enter.
6. The Load DG DLO Entries display appears with the list of documents that matched
your selection criteria. Either type a 1 (Select) next to the documents you want or
press F21 (Select all). Then press Enter.
7. If necessary, you can use Adding or changing a data group DLO entry on
page 260 to customize values for any of the data group DLO entries.
Synchronize the DLOs identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
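For reference, a load request similar to the following sketch could load DLO entries
for all documents in folder ACCTG and include them for replication. The FLR1 and
DOC1 keywords are assumptions inferred from the System 1 folder and System 1
document prompts; PRCTYPE matches the process type values shown elsewhere in
this book. Prompt the command with F4 to confirm the keywords for your
environment.
LODDGDLOE DGDFN(DGDFN1) FLR1(ACCTG) DOC1(*ALL) PRCTYPE(*INCLD)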
Adding or changing a data group DLO entry
The data group must be ended and restarted before any changes can become
effective.
From the management system, do the following to add or change a DLO entry:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 21 (DLO entries) next to the data
group you want and press Enter.
3. The Work with DG DLO Entries display appears. Do one of the following:
To add a new entry, type a 1 (Add) next to the blank line at the top of the list and
press Enter.
To change an existing entry, type a 2 (Change) next to the entry you want and
press Enter. Then skip to Step 5.
4. If you are adding a new DLO entry, the Add Data Group DLO Entry display
appears. Identify the library and objects to be considered. Specify values for the
System 1 folder and System 1 document prompts.
5. Do the following:
a. If necessary, specify values for the Owner, System 2 folder, System 2 object,
and Object auditing value prompts.
b. At the Process type prompt, specify whether resulting data group DLO entries
should include or exclude the identified documents.
c. If necessary, specify a value for the Object retrieval delay prompt.
6. Press Enter.
Synchronize the DLOs identified by data group entries before starting replication
processes or running MIMIX audits. The entries will be available to replication
processes after the data group is ended and restarted. This includes after the nightly
restart of MIMIX jobs. The entries will be available to MIMIX audits the next time an
audit runs.
Creating data group data area entries
This procedure creates data group data area entries that identify data areas to be
replicated by the data area poller process.
Note: The data area poller method is not the preferred way to replicate data
areas. The preferred method of replicating data areas is with user journal
replication processes using advanced journaling. The next best method is
identifying them with data group object entries for system journal replication
processes.
For detailed concepts and requirements for supported configurations, see the
following topics:
Identifying library-based objects for replication on page 91
Identifying data areas and data queues for replication on page 103
You can load all data group data area entries from a library or you can add individual
data area entries. Once the data group data area entries are created, you can tailor
them to meet your requirements by adding, changing, or deleting entries. You must
define data group data area entries from the management system. The data area
entries can be created from libraries on either system. If the system manager is
configured and running, all created and changed data group data area entries are
sent to the network systems automatically.
Loading data area entries for a library
Before any addition or change is recognized, you need to end and restart the data
group.
From the management system, do the following to load data area entries for use with
the data area poller:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 19 (Data area entries) next to the
data group you want and press Enter.
3. The Work with DG Data Area Entries display appears. Press F19 (Load).
4. The Load DG Data Area Entries (LODDGDAE) display appears. The values of the
System 1 library and System 2 library prompts indicate the name of the library on
the respective systems. Specify a name for the System 1 library prompt and verify
that the value shown for the System 2 library prompt is what you want.
5. Ensure that the value of the Load from system prompt indicates the system from
which you want to load data areas.
6. Verify that the remaining prompts on the display contain the values you want. If
necessary, change the values.
7. To create the data group data area entries, press Enter. If you submitted the job
for batch processing, MIMIX sends a message indicating that a data areas load
job has been submitted. A completion message is sent when the load has
finished.
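For reference, a load request similar to the following sketch could load data area
entries from library APPLIB. The LIB1 keyword is an assumption based on the
System 1 library prompt (it matches the keyword used by LODDGFE); prompt the
command with F4 to confirm the keywords and remaining defaults.
LODDGDAE DGDFN(DGDFN1) LIB1(APPLIB)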
Adding or changing a data group data area entry
Before any addition or change is recognized, you need to end and restart the data
group.
From the management system, do the following to add a new entry or change an
existing data area entry for use with the data area poller:
1. From the MIMIX Intermediate Main Menu, type a 1 (Work with data groups) and
press Enter.
2. From the Work with Data Groups display, type a 19 (Data area entries) next to the
data group you want and press Enter.
3. From the Work with DG Data Area Entries display, do one of the following:
To add a new data area entry, type a 1 (Add) at the blank line at the top of the
list and press Enter. The Add Data Group Data Area Entry display appears.
To change an existing data area entry, type a 2 (Change) next to the data
group data area entry you want and press Enter. The Change Data Group Data
Area Entry display appears.
4. Specify the values you want at the prompts for System 1 data area and Library
and System 2 data area and Library.
5. Press Enter to create the data area entry or accept the change.
Additional options: working with DG entries
The procedures for performing common functions, such as copying, removing, and
displaying, are very similar for all types of data group entries used by MIMIX. Each
generic procedure in this topic indicates the type of data group entry for which it can
be used.
Copying a data group entry
Use this procedure from the management system to copy a data group entry from one
data group definition to another data group definition. The data group definition to
which you are copying must exist.
To copy a data group entry to another data group definition, do the following:
1. From the Work with DG Definitions display, type the option you want next to the
data group from which you are copying and press Enter. Any of these options will
allow an entry to be copied:
Option 17 (File entries)
Option 19 (Data area entries)
Option 20 (Object entries)
Option 21 (DLO entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 3 (Copy) next
to the entry you want and press Enter.
3. The Copy display for the entry appears. Specify a name for the To definition
prompt.
4. Additional prompts appear on the display that are specific to the type of entry. The
values of these prompts define the data to be replicated by the definition to which
you are copying. Ensure that the prompts identify the necessary information.
Table 29. Values to specify for each type of data group entry.
For file entries, provide:        To File 1, To Member, To File 2
For data area entries, provide:   To system 1 data area, To system 2 data area
For object entries, provide:      System 1 library, System 1 object, Object type, Attribute
For DLO entries, provide:         System 1 folder, System 1 document, Owner
For IFS entries, provide:         To system 1 object
5. The value *NO for the Replace definition prompt prevents you from replacing an
existing entry in the definition to which you are copying. If you want to replace an
existing entry, specify *YES.
6. To copy the entry, press Enter.
7. For file entries, end and restart the data group being copied.
Removing a data group entry
Use this procedure from the management system to remove a data group entry from
a data group definition. You may want to remove an entry when you no longer need to
replicate the information that the entry identifies.
Note: For all data group entries except file entries, the change is not recognized until
after the send, receive, and apply processes for the associated data group
are ended and restarted.
Data group file entries support dynamic removals if you prompt the
RMVDGFE command and specify Dynamically update (*YES). If you specify
Dynamically update (*YES), you do not need to end the processes for the data
group. The change is recognized as soon as each
active process receives the update. If a file is on hold and you want to delete
the data group file entry, it is best to use *YES. This forces all currently held
entries to be deleted, all current entries to be ignored, and prevents additional
entries from accumulating.
If you accept the default of Dynamically update (*NO), the change is not
recognized until after the send, receive, and apply processes for the
associated data group are ended and restarted. When you specify
Dynamically update (*NO), the remove function does not clean up any records
in the error/hold log. If an entry is held when you delete it, its information
remains in the error/hold log. Additional transactions for the file or member can
be accumulating in the error/hold log or will be applied to the file.
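For reference, a dynamic removal similar to the following sketch could remove the
file entry for MYLIB/MYFILE. The FILE1 and DYNUPD keywords are assumptions
inferred from the prompts described above; prompt the RMVDGFE command with F4
to confirm them for your environment.
RMVDGFE DGDFN(DGDFN1) FILE1(MYLIB/MYFILE) DYNUPD(*YES)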
To remove an entry, do the following:
1. From the Work with DG Definitions display, type the option for the entry you want
next to the data group and press Enter. Any of these options will allow an entry to
be removed:
Option 17 (File entries)
Option 19 (Data area entries)
Option 20 (Object entries)
Option 21 (DLO entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 4 (Remove)
next to the entry you want and press Enter.
3. For data group file entries, a display with additional prompts appears. Specify the
values you want and press Enter.
4. A confirmation display appears with a list of entries to be deleted. To delete the
entries, press Enter.
Displaying a data group entry
Use this procedure to display a data group entry for a data group definition.
To display a data group entry, do the following:
1. From the Work with DG Definitions display, type the option for the entry you want
next to the data group and press Enter. Any of these options will allow an entry to
be displayed:
Option 17 (File entries)
Option 19 (Data area entries)
Option 20 (Object entries)
Option 21 (DLO entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 5 (Display)
next to the entry you want and press Enter.
3. The appropriate data group entry display appears. Page Down to see all of the
values.
Printing a data group entry
Use this procedure to create a spooled file which you can print that identifies a system
definition, transfer definition, journal definition, or a data group definition. Not all types
of entries support the print function.
To print a data group entry, do the following:
1. From the Work with DG Definitions display, type the option for the entry you want
next to the data group and press Enter. Any of these options will allow an entry to
be printed:
Option 17 (File entries)
Option 19 (Data area entries)
Option 22 (IFS entries)
2. The "Work with" display for the entry you selected appears. Type a 6 (Print) next
to the entry you want and press Enter.
3. A spooled file is created with a name of MXDG***E, where *** is the type of entry.
You can print the spooled file according to your standard print procedures.
CHAPTER 13 Additional supporting tasks for
configuration
The tasks in this chapter provide supplemental configuration tasks. Always use the
configuration checklists to guide you through the steps of standard configuration
scenarios.
Accessing the Configuration Menu on page 268 describes how to access the
menu of configuration options from a 5250 emulator.
Starting the system and journal managers on page 269 provides procedures for
starting these jobs. System and journal manager jobs must be running before
replication can be started.
Setting data group auditing values manually on page 270 describes when to
manually set the object auditing level for objects defined to MIMIX and provides a
procedure for doing so.
Checking file entry configuration manually on page 276 provides a procedure
using the CHKDGFE command to check the data group file entries defined to a
data group.
Note: The preferred method of checking is to use MIMIX AutoGuard to
automatically schedule the #DGFE audit, which calls the CHKDGFE
command and can automatically correct detected problems. For additional
information, see Interpreting results for configuration data - #DGFE audit
on page 572.
Changes to startup programs on page 278 describes changes that you may
need to make to your configuration to support remote journaling.
Starting the DDM TCP/IP server on page 279 describes how to start this server
that is required in configurations that use remote journaling.
Checking DDM password validation level in use on page 280 describes how to
check whether the DDM communications infrastructure used by MIMIX
Remote Journal support requires a password. This topic also describes options
for ensuring that systems in a MIMIX configuration have the same password and
the implications of these options.
Starting data groups for the first time on page 282 describes how to start
replication once configuration is complete and the systems are synchronized. Use
this only when directed to by a configuration checklist.
Identifying data groups that use an RJ link on page 283 describes how to
determine which data groups use a particular RJ link.
Using file identifiers (FIDs) for IFS objects on page 284 describes the use of FID
parameters on commands for IFS tracking entries. When IFS objects are
configured for replication through the user journal, commands that support IFS
tracking entries can specify a unique FID for the object on each system. This topic
describes the processing resulting from combinations of values specified for the
object and FID prompts.
Configuring restart times for MIMIX jobs on page 285 describes how to change
the time at which MIMIX jobs automatically restart. MIMIX jobs restart daily to
ensure that the MIMIX environment remains operational.
Setting the system time zone and time on page 293 describes how to set time
zone values so that the timestamps used within status of application group
procedures will display correctly on all systems.
Creating an application group definition on page 294 describes how to create an
application group that will not participate in a cluster controlled by the IBM i
operating system.
Loading data resource groups into an application group on page 295 describes
how to load data resource groups with existing data group definitions and specify
the relationship between the name spaces of the data groups within each data
resource group.
Specifying the primary node for the application group on page 296 describes
how to ensure that a primary node is defined for an application group.
Accessing the Configuration Menu
The MIMIX Configuration Menu provides access to the options you need for
configuring MIMIX.
To access the MIMIX Configuration Menu, do the following:
1. Access the MIMIX Basic Main Menu. See Accessing the MIMIX Main Menu on
page 83.
2. From the MIMIX Basic Main Menu, select option 11 (Configuration menu)
and press Enter.
Starting the system and journal managers
If the system managers are running, they will automatically send configuration
information to the network system as you complete configuration tasks. This
procedure starts all the system managers, journal managers, and, if the system is
participating in a cluster, cluster services. The system managers, journal managers,
and cluster services must be active to start replication.
To start all of the system managers, journal managers, and cluster services (for a
cluster environment) during configuration, do the following:
1. Access the MIMIX Basic Main Menu. See Accessing the MIMIX Main Menu on
page 83.
2. From the MIMIX Basic Main Menu press the F21 key (Assistance level) to access
the MIMIX Intermediate Main Menu.
3. Select option 2 (Work with Systems) and press Enter.
4. The Work with Systems display appears with a list of the system definitions. Type
a 9 (Start) next to each of the system definitions you want and press Enter. This
will start all managers on all of these systems in the MIMIX environment.
5. The Start MIMIX Managers (STRMMXMGR) display appears. Do the following:
a. Verify that *ALL appears as the value for the Manager prompt.
b. Press Enter to complete this request.
6. If you selected more than one system definition in Step 4, the Start MIMIX
Managers (STRMMXMGR) display will be shown for each system definition that
you selected. Repeat Step 5 for each system definition that you selected.
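For reference, the equivalent request can be made from a command line for each
system definition. In the following sketch, the SYSDFN and MGR keywords are
assumptions inferred from the display prompts, so prompt the command with F4 to
confirm them:
STRMMXMGR SYSDFN(SYSTEM1) MGR(*ALL)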
Setting data group auditing values manually
Default behavior for MIMIX is to change the auditing value of IFS, DLO, and library-
based objects configured for system journal replication as needed when starting data
groups with the Start Data Group (STRDG) command.
To manually set the system auditing level of replicated objects, or to force a change to
a lower configured level, you can use the Set Data Group Auditing (SETDGAUD)
command.
The SETDGAUD command allows you to set the object auditing level for all existing
objects that are defined to MIMIX by data group object entries, data group DLO
entries, and data group IFS entries. The SETDGAUD command can be used for data
groups configured for replicating object information (type *OBJ or *ALL).
When to set object auditing values manually - If you anticipate a delay between
configuring data group entries and starting the data group, you should use the
SETDGAUD command before synchronizing data between systems. Doing so will
ensure that replicated objects will be properly audited and that any transactions for
the objects that occur between configuration and starting the data group will be
replicated.
You can also use the SETDGAUD command to reset the object auditing level for all
replicated objects if a user has changed the auditing level of one or more objects to a
value other than what is specified in the data group entries.
Processing options - MIMIX checks for existing objects identified by data group
entries for the specified data group. The object auditing level of an existing object is
set to the auditing value specified in the data group entry that most specifically
matches the object. Default behavior is that MIMIX only changes an object's auditing
value if the configured value is higher than the object's existing value. However, you
can optionally force a change to a configured value that is lower than the existing
value through the command's Force audit value (FORCE) parameter.
The default value *NO for the FORCE parameter prevents MIMIX from reducing
the auditing level of an object. For example, if the SETDGAUD command
processes a data group entry with a configured object auditing value of *CHANGE
and finds an object identified by that entry with an existing auditing value of *ALL,
MIMIX does not change the value.
If you specify *YES for the FORCE parameter, MIMIX will change the auditing
value even if it is lower than the existing value.
For IFS objects, it is particularly important that you understand the ramifications of the
value specified for the FORCE parameter. For more information see Examples of
changing an IFS object's auditing value on page 271.
Procedure - To set the object auditing value for a data group, do the following on each
system defined to the data group:
1. Type the command SETDGAUD and press F4 (Prompt).
2. The Set Data Group Auditing (SETDGAUD) display appears. Specify the name of the
data group you want.
3. At the Object type prompt, specify the type of objects for which you want to set
auditing values.
4. If you want to allow MIMIX to force a change to a configured value that is lower
than the object's existing value, specify *YES for the Force audit value prompt.
Note: This may affect the operation of your replicated applications. We
recommend that you force auditing value changes only when you have
specified *ALLIFS for the Object type.
5. Press Enter.
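For reference, a forced change restricted to IFS objects, as recommended in the note
above, might look like the following sketch. The FORCE keyword is documented in
this topic; the OBJTYPE keyword is an assumption inferred from the Object type
prompt, so prompt the command with F4 to confirm it:
SETDGAUD DGDFN(DGDFN1) OBJTYPE(*ALLIFS) FORCE(*YES)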
Examples of changing an IFS object's auditing value
The following examples show the effect of the value of the FORCE parameter when
manually changing the object auditing values of IFS objects configured for system
journal replication.
The auditing values resulting from the SETDGAUD command can be confusing when
your environment has multiple data group IFS entries, each with different auditing
levels, and more than one entry references objects sharing common parent
directories. The following examples illustrate how these conditions affect the results of
setting object auditing for IFS objects.
Data group entries are processed in order from most generic to most specific. IFS
entries are processed using the Unicode character set. The first entry (more generic)
found that matches the object is used until a more specific match is found.
When MIMIX processes a data group IFS entry and changes the auditing level of
objects which match the entry, all of the directories in the object's directory path are
checked and, if necessary, changed to the new auditing value. In the case of an IFS
entry with a generic name, all descendants of the IFS object may also have their
auditing value changed.
Example 1: This scenario shows a simple implementation where data group IFS
entries have been modified to have a configured value of *CHANGE from a previously
configured value of *ALL. Table 30 identifies a set of data group IFS entries and their
configured auditing values. The entries are listed in the order in which they are
processed by the SETDGAUD command.
Simply ending and restarting the data group will not cause these configuration
changes to be effective. Because the change is to a lower auditing level, the change
must be forced with the SETDGAUD command. Similarly, running the SETDGAUD
command with FORCE(*NO) does not change the auditing values for this scenario.
Table 30. Example 1 configuration of data group IFS entries
Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/*            OBJAUD(*CHANGE)         PRCTYPE(*EXCLD)
2                 /DIR1/DIR2/*       OBJAUD(*CHANGE)         PRCTYPE(*INCLD)
3                 /DIR1/STMF         OBJAUD(*CHANGE)         PRCTYPE(*INCLD)
Table 31 shows the intermediate and final results as each data group IFS entry is
processed by the force request. (In the tables that follow, a dash indicates that the
entry did not change the object's auditing value.)

Table 31. Intermediate audit values which occur during FORCE(*YES) processing for example 1.
Existing objects    Existing   Changed by   Changed by   Changed by   Final results of
                    value      1st entry    2nd entry    3rd entry    FORCE(*YES)
/DIR1               *ALL       Note 1       *CHANGE      Note 2       *CHANGE
/DIR1/STMF          *ALL       Note 1       -            *CHANGE      *CHANGE
/DIR1/STMF2         *ALL       Note 1       -            -            *ALL
/DIR1/DIR2          *ALL       Note 1       *CHANGE      -            *CHANGE
/DIR1/DIR2/STMF     *ALL       Note 1       *CHANGE      -            *CHANGE
Notes:
1. Because the first data group IFS entry excludes objects from replication, object auditing
processing does not apply.
2. This object's auditing value is evaluated when the third data group IFS entry is processed,
but the entry does not cause the value to change. The existing value is the same as the
configured value of the third entry at the time it is processed.

Example 2: Table 32 identifies a set of data group IFS entries and their configured
auditing values. The entries are listed in the order in which they are processed by the
SETDGAUD command. In this scenario there are multiple configured values.

Table 32. Example 2 configuration of data group IFS entries
Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/*            OBJAUD(*CHANGE)         PRCTYPE(*INCLD)
2                 /DIR1/DIR2/*       OBJAUD(*NONE)           PRCTYPE(*INCLD)
3                 /DIR1/STMF         OBJAUD(*ALL)            PRCTYPE(*INCLD)

For this scenario, running the SETDGAUD command with FORCE(*NO) does not
change the auditing values on any existing IFS objects because the configured values
from the data group IFS entries are the same or lower than the existing values.
Running the command with FORCE(*YES) does change the existing objects' values.
Table 33 shows the intermediate values as each entry is processed by the force
request and the final results of the change. Data group IFS entry #3 in Table 32
prevents directory /DIR1 from having an auditing value of *CHANGE or *NONE
because it is the last entry processed and it is the most specific entry.

Table 33. Intermediate audit values which occur during FORCE(*YES) processing for example 2.
Existing objects    Existing   Changed by   Changed by   Changed by   Final results of
                    value      1st entry    2nd entry    3rd entry    FORCE(*YES)
/DIR1               *ALL       *CHANGE      *NONE        *ALL         *ALL
/DIR1/STMF          *ALL       *CHANGE      -            *ALL         *ALL
/DIR1/STMF2         *ALL       *CHANGE      -            -            *CHANGE
/DIR1/DIR2          *ALL       *CHANGE      *NONE        -            *NONE
/DIR1/DIR2/STMF     *ALL       *CHANGE      *NONE        -            *NONE
Example 3: This scenario illustrates why you may need to force the configured values
to take effect after changing the existing data group IFS entries from *ALL to lower
values. Table 34 identifies a set of data group IFS entries and their configured
auditing values. The entries are listed in the order in which they are processed by the
SETDGAUD command.
For this scenario, running the SETDGAUD command with FORCE(*NO) does not
change the auditing values on any existing IFS objects because the configured values
from the data group IFS entries are lower than the existing values.
In this scenario, SETDGAUD FORCE(*YES) must be run to have the configured
auditing values take effect. Table 35 shows the intermediate values as each entry is
processed by the force request and the final results of the change.
Table 34. Example 3 configuration of data group IFS entries
Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/*            OBJAUD(*CHANGE)         PRCTYPE(*INCLD)
2                 /DIR1/DIR2/*       OBJAUD(*NONE)           PRCTYPE(*INCLD)
3                 /DIR1/STMF         OBJAUD(*NONE)           PRCTYPE(*INCLD)
Table 35. Intermediate audit values which occur during FORCE(*YES) processing for example 3.
Existing objects    Existing   Changed by   Changed by   Changed by   Final results of
                    value      1st entry    2nd entry    3rd entry    FORCE(*YES)
/DIR1               *ALL       *CHANGE      *NONE        -            *NONE
/DIR1/STMF          *ALL       *CHANGE      -            *NONE        *NONE
/DIR1/STMF2         *ALL       *CHANGE      -            -            *CHANGE
/DIR1/DIR2          *ALL       *CHANGE      *NONE        -            *NONE
/DIR1/DIR2/STMF     *ALL       *CHANGE      *NONE        -            *NONE

Example 4: This example begins with the same set of data group IFS entries used in
example 3 (Table 34) and uses the results of the forced change in example 3 as the
auditing values for the existing objects in Table 36.
Table 36 shows how running the SETDGAUD command with FORCE(*NO) causes
changes to auditing values. This scenario is quite possible as a result of a normal
STRDG request. Complex data group IFS entries and multiple configured values
cause these potentially undesirable results.
Note: Any addition or change to the data group IFS entries can cause these results
to occur.
There is no way to maintain the existing values in Table 36 without ensuring that a
forced change occurs every time SETDGAUD is run, which may be undesirable. In
this example, the next time data groups are started, the objects' auditing values will
be set to those shown in Table 36 for FORCE(*NO).
Any addition or change to the data group IFS entries can potentially cause similar
results the next time the data group is started. To avoid this situation, we recommend
that you configure a consistent auditing value of *CHANGE across data group IFS
entries which identify objects with common parent directories.
Table 36. Example 4: comparison of objects' actual values
Existing objects    Existing value   After SETDGAUD   After SETDGAUD
                                     FORCE(*NO)       FORCE(*YES)
/DIR1               *NONE            *CHANGE          *NONE
/DIR1/STMF          *NONE            *CHANGE          *NONE
/DIR1/STMF2         *CHANGE          *CHANGE          *CHANGE
/DIR1/DIR2          *NONE            *CHANGE          *NONE
/DIR1/DIR2/STMF     *NONE            *CHANGE          *NONE
Example 5: This scenario illustrates the results of the SETDGAUD command when
the object's auditing value is determined by the user profile which accesses the object
(value *USRPRF). Table 37 shows the configured data group IFS entry.

Table 37. Example 5 configuration of data group IFS entries
Order processed   Specified object   Object auditing value   Process type
1                 /DIR1/STMF         OBJAUD(*NONE)           PRCTYPE(*INCLD)

Table 38 compares the results of running the SETDGAUD command with
FORCE(*NO) and FORCE(*YES).
Running the command with FORCE(*NO) does not change the value. The value
*USRPRF is not in the range of valid values for MIMIX; therefore, an object with an
auditing value of *USRPRF is not considered for change.
Running the command with FORCE(*YES) does force a change because the existing
value and the configured value are not equal.

Table 38. Example 5: comparison of objects' actual values
Existing objects    Existing value   After SETDGAUD   After SETDGAUD
                                     FORCE(*NO)       FORCE(*YES)
/DIR1/STMF          *USRPRF          *USRPRF          *NONE
Checking file entry configuration manually
The Check DG File Entries (CHKDGFE) command provides a means to detect
whether the correct data group file entries exist with respect to the data group object
entries configured for a specified data group in your MIMIX configuration. When file
entries and object entries are not properly matched, your replication results can be
affected.
Note: The preferred method of checking is to use MIMIX AutoGuard to automatically
schedule the #DGFE audit, which calls the CHKDGFE command and can
automatically correct detected problems. For additional information, see
Interpreting results for configuration data - #DGFE audit on page 572.
To check your file entry configuration manually, do the following:
1. On a command line, type CHKDGFE and press Enter. The Check Data Group File
Entries (CHKDGFE) command appears.
2. At the Data group definition prompts, select *ALL to check all data groups or
specify the three-part name of the data group.
3. At the Options prompt, you can specify that the command be run with special
options. The default, *NONE, uses no special options. If you do not want an error
to be reported if a file specified in a data group file entry does not exist, specify
*NOFILECHK.
4. At the Output prompt, specify where the output from the command should be sent: to print, to an outfile, or to both. See Step 6.
5. At the User data prompt, you can assign your own 10-character name to the
spooled file or choose not to assign a name to the spooled file. The default, *CMD,
uses the CHKDGFE command name to identify the spooled file.
6. At the File to receive output prompts, you can direct the output of the command to
the name and library of a specific database file. If the database file does not exist,
it will be created in the specified library with the name MXCDGFE.
7. At the Output member options prompts, you can direct the output of the command
to the name of a specific database file member. You can also specify how to
handle new records if the member already exists. Do the following:
a. At the Member to receive output prompt, accept the default *FIRST to direct
the output to the first member in the file. If it does not exist, a new member is
created with the name of the file specified in Step 6. Otherwise, specify a
member name.
b. At the Replace or add records prompt, accept the default *REPLACE if you
want to clear the existing records in the file member before adding new
records. To add new records to the end of existing records in the file member,
specify *ADD.
8. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to check data group file entries.
To submit the job for batch processing, accept *YES. Press Enter and continue
with the next step.
9. At the Job description prompts, specify the name and library of the job description
used to submit the batch request. Accept MXAUDIT to submit the request using
the default job description, MXAUDIT.
10. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
11. To start the data group file entry check, press Enter.
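For example, the following command checks the file entries for a single data group in batch and writes the results to an outfile. The data group and library names are hypothetical, and the parameter keywords shown are assumed from the prompts described above; prompt CHKDGFE with F4 to confirm them on your installation:
CHKDGFE DGDFN(MYDG SYSTEMA SYSTEMB) OPTION(*NONE) OUTPUT(*OUTFILE) OUTFILE(MYLIB/MXCDGFE) OUTMBR(*FIRST *REPLACE) BATCH(*YES) JOBD(MXAUDIT) JOB(*CMD)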
Changes to startup programs
If you use startup programs, ensure that you include the following operations when
you configure for remote journaling:
If you use TCP/IP as the communications protocol, you need to start TCP/IP,
including the DDM server, before starting replication.
If you use OptiConnect as the communications protocol, the QSOC subsystem
must be active.
Starting the DDM TCP/IP server
Use this procedure if you need to start the DDM TCP/IP server in an environment
configured for MIMIX RJ support.
From the system on which you want to start the TCP server, do the following:
1. Ensure that the DDM TCP/IP attributes allow the DDM server to be automatically
started when the TCP/IP server is started (STRTCP). Do the following:
a. Type the command CHGDDMTCPA and press F4 (Prompt).
b. Check the value of the Autostart server prompt. If the value is *YES, it is set
appropriately. Otherwise, change the value to *YES and press Enter.
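If a change is needed, the same setting can also be made directly from the command line:
CHGDDMTCPA AUTOSTART(*YES)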
2. To prevent install problems due to locks on the library name, ensure that the
MIMIX product library is not in your user library list.
3. To start the DDM server, type the command STRTCPSVR(*DDM) and press Enter.
Verifying that the DDM TCP/IP server is running
Do the following:
1. Enter the command NETSTAT OPTION(*CNN).
2. The Work with TCP/IP Connection Status display appears. Look for these servers in the
Local Port column:
ddm
ddm-ssl
3. These servers should exist and should have a value of Listen in the State column.
Checking DDM password validation level in use
MIMIX Remote Journal support uses the DDM communications infrastructure. This
infrastructure can be configured to require a password to be provided when a server
connection is made. The MIMIXOWN user profile, which establishes the remote
journal connection, ships with a preset password so that it is consistent on all
systems. If you have implemented DDM password validation on any systems where
MIMIX will be used, you should verify the DDM level in use. If the MIMIXOWN
password is not the same on both systems, you may need to change the MIMIXOWN
user profile or the DDM security level to allow MIMIX Remote Journal support to
function properly. These changes have security implications of which you should be
aware.
To check the DDM password validation level in use, do the following on both systems:
1. From a command line, type CHGDDMTCPA and press F4 (prompt).
2. Check the value of the Password required field.
If the value is *NO or *VLDONLY, no further action is required. Press F12
(Cancel).
If the field contains any other value, you must take further action to enable
MIMIX RJ support to function in your environment. Press F12, then continue
with the next step.
3. You have two options for changing your environment to enable MIMIX RJ support
to function. Each option has security implications. You must decide which option is
best for your environment. The options are:
Option 1. Enable MIMIXOWN user profile for DDM environment on page 280.
MIMIX must be installed and transfer definitions must exist before you can
make the necessary changes. For new installations, this should be configured automatically for you.
Option 2. Allow user profiles without passwords on page 281. You can use
this option before or after MIMIX is installed. However, this option should be
performed before configuring MIMIX RJ support.
Option 1. Enable MIMIXOWN user profile for DDM environment
This option changes the MIMIXOWN user profile to have a password and adds server
authentication entries to recognize the MIMIXOWN user profile.
Do the following from both systems:
1. Access the Work with Transfer Definitions (WRKTFRDFN) display. Then do the
following:
a. Type a 5 (Display) next to each transfer definition that will be used with MIMIX
RJ support and press Enter.
b. Page down to locate the value for Relational database (RDB parameter) and
record the value indicated.
Checking DDM password validation level in use
281
c. If you selected multiple transfer definitions, press Enter to advance to the next
selection and record its RDB value. Ensure that you record the values for all
transfer definitions you selected.
Note: If the RDB value was generated by MIMIX, it will be in the form of the
characters MX followed by the System1 definition, System2 definition,
and the name of the transfer definition, with up to 18 characters.
2. On the source system, change the MIMIXOWN user profile to have a password
and to prevent signing on with the profile. To do this, enter the following command:
CHGUSRPRF USRPRF(MIMIXOWN) PASSWORD(user-defined-password) INLMNU(*SIGNOFF)
Note: The password is case sensitive and must be the same on all systems in
the MIMIX network. If the password does not match on all systems, some
MIMIX functions will fail with security error message LVE0127.
3. Verify that the QRETSVRSEC (Retain server security data) system value is set to
1. The value 1 allows the password you specify in the server authentication entry
in Step 4 to take effect.
DSPSYSVAL SYSVAL(QRETSVRSEC)
If necessary, change the system value.
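For example, the following command sets the value so that server security data is retained:
CHGSYSVAL SYSVAL(QRETSVRSEC) VALUE('1')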
4. You need a server authentication entry for the MIMIXOWN user profile for each
RDB entry you recorded in Step 1. To add a server authentication entry, type the
following command, using the password you specified in Step 2 and the RDB
value from Step 1. Then press Enter.
ADDSVRAUTE USRPRF(MIMIXOWN) SERVER(recorded-RDB-value) PASSWORD(user-defined-password)
5. Repeat Step 2 through Step 4 on the target system.
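To verify the entries, you can display the server authentication entries for the profile on each system:
DSPSVRAUTE USRPRF(MIMIXOWN)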
Option 2. Allow user profiles without passwords
This option changes DDM TCP attributes to allow user profiles without passwords to
function in environments that use DDM password validation. Do the following:
1. From a command line on the source system, type CHGDDMTCPA
PWDRQD(*VLDONLY) and press Enter.
2. From a command line on the target system, type CHGDDMTCPA PWDRQD(*VLDONLY) and press Enter.
Starting data groups for the first time
Use this procedure when a configuration checklist directs you to start a newly
configured data group for the first time. You should have identified the starting point in
the journals with Establish a synchronization point on page 454 when you
synchronized the systems.
1. From the Work with Data Groups display, type a 9 (Start DG) next to the data
group that you want to start and press Enter.
2. The Start Data Group (STRDG) display appears. Press Enter to access additional
prompts. Do the following:
a. Specify the starting point for user journal replication. For the Database
journal receiver and Database large sequence number prompts specify the
information you recorded in Step 5 of Establish a synchronization point on
page 454.
b. Specify the starting point for system journal replication. For the Object
journal receiver and Object large sequence number prompts specify the
information you recorded in Step 6 of Establish a synchronization point on
page 454.
c. Specify *YES for the Clear pending prompt.
3. If the data group participates in an application group, do the following:
a. Press F10 (Additional parameters).
b. At the Override if in data rsc. group prompt, specify *YES.
4. Press Enter.
5. A confirmation display appears. Press Enter.
6. A second confirmation display appears. Press Enter to start the data group.
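As a sketch only, an equivalent command-line request might look like the following, where the data group name, receiver names, and sequence numbers are placeholders for the values you recorded, and the parameter keywords should be confirmed by prompting STRDG with F4:
STRDG DGDFN(MYDG SYSTEMA SYSTEMB) DBJRNRCV(DBRCV0001) DBSEQNBR(1234567) OBJJRNRCV(AUDRCV001) OBJSEQNBR(7654321) CLRPND(*YES)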
Identifying data groups that use an RJ link
Use this procedure to determine which data groups use a remote journal link before
you end a remote journal link or remove a remote journaling environment.
1. Enter the command WRKRJLNK and press Enter.
2. Make a note of the name indicated in the Source Jrn Def column for the RJ Link
you want.
3. From the command line, type WRKDGDFN and press Enter.
4. For all data groups listed on the Work with DG Definitions display, check the
Journal Definition column for the name of the source journal definition you
recorded in Step 2.
If you do not find the name from Step 2, the RJ link is not used by any data
group. The RJ link can be safely ended or can have its remote journaling
environment removed without affecting existing data groups.
If you find the name from Step 2 associated with any data groups, those data
groups may be adversely affected if you end the RJ link. A request to remove
the remote journaling environment removes configuration elements and
system objects that need to be created again before the data group can be
used. Continue with the next step.
5. Press F10 (View RJ links). Consider the following and contact your MIMIX
administrator before taking action that will end the RJ link or remove the remote
journaling environment.
When *NO appears in the Use RJ Link column, the data group will not be
affected by a request to end the RJ link or to end the remote journaling
environment.
Note: If you allow applications other than MIMIX to use the RJ link, they will be
affected if you end the RJ link or remove the remote journaling
environment.
When *YES appears in the Use RJ Link column, the data group may be
affected by a request to end the RJ link. If you use the procedure for ending a
remote journal link independently in the MIMIX Operations book, ensure that
any data groups that use the RJ link are inactive before ending the RJ link.
Using file identifiers (FIDs) for IFS objects
Commands used for user journal replication of IFS objects use file identifiers (FIDs) to
uniquely identify the correct IFS tracking entries to process. The System 1 file
identifier and System 2 file identifier prompts ensure that IFS tracking entries are
accurately identified during processing. These prompts can be used alone or in
combination with the System 1 object prompt.
These prompts enable the following combinations:
Processing by object path: A value is specified for the System 1 object prompt
and no value is specified for the System 1 file identifier or System 2 file identifier
prompts.
When processing by object path, a tracking entry is required for all commands
with the exception of the SYNCIFS command. If no tracking entry exists, the
command cannot continue processing. If a tracking entry exists, a query is
performed using the specified object path name.
Processing by object path and FIDs: A value is specified for the System 1
object prompt and a value is specified for either or both of the System 1 file
identifier or System 2 file identifier prompts.
When processing by object path and FIDs, a tracking entry is required for all
commands. If no tracking entry exists, the command cannot continue processing.
If a tracking entry exists, a query is performed using the specified FID values. If
the specified object path name does not match the object path name in the
tracking entry, the command cannot continue processing.
Processing by FIDs: A value is specified for either or both of the System 1 file
identifier or System 2 file identifier prompts and, with the exception of the
SYNCIFS command, no value is specified for the System 1 object prompt. In the
case of SYNCIFS, the default value *ALL is specified for the System 1 object
prompt.
When processing by FIDs, a tracking entry is required for all commands. If no
tracking entry exists, the command cannot continue processing. If a tracking entry
exists, a query is performed using the specified FID values.
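As an illustration of processing by FIDs, a synchronize request might look like the following sketch; the FID1 keyword name and the FID value shown are assumptions, so prompt SYNCIFS with F4 to confirm the actual prompts:
SYNCIFS DGDFN(MYDG SYSTEMA SYSTEMB) OBJ1(*ALL) FID1(X'0000000000001234')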
Configuring restart times for MIMIX jobs
Certain MIMIX jobs are restarted, or recycled, on a regular basis in order to maintain
the MIMIX environment. The ability to configure this activity can ease conflicts with
your scheduled workload by changing when the MIMIX jobs restart to a more
convenient time for your environment.
The default operation of MIMIX is to restart MIMIX jobs at midnight (12:00 a.m.).
However, you can change the restart time by setting a different value for the Job
restart time parameter (RSTARTTIME) on system definitions and data group
definitions. The time is based on a 24 hour clock. The values specified in the system
definitions and data group definitions are retrieved at the time the MIMIX jobs are
started. Changes to the specified values have no effect on jobs that are currently
running. Changes are effective the next time the affected MIMIX jobs are started.
For a data group definition you can also specify either *SYSDFN1 or *SYSDFN2
for the Job restart time (RSTARTTIME) parameter. Respectively, these values use the
restart time specified in the system definition identified as System 1 or System 2 for
the data group.
Both system and data group definition commands support the special value *NONE,
which prevents the MIMIX jobs from automatically restarting. Be sure to read
Considerations for using *NONE on page 287 before using this value.
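For example, assuming hypothetical definition names and that the change commands accept the RSTARTTIME parameter as described, the following requests set a 1:30 a.m. restart time for a system definition and defer a data group definition to the System 2 value; confirm the exact keywords by prompting:
CHGSYSDFN SYSDFN(NEWYORK) RSTARTTIME(013000)
CHGDGDFN DGDFN(MYDG SYSTEMA SYSTEMB) RSTARTTIME(*SYSDFN2)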
Configurable job restart time operation
To make effective use of the configurable job restart time, you may need to set the job
restart time in as few as one or as many as all of these locations:
One or more data group definitions
The system definition for the management system
The system definitions for one or more network systems.
MIMIX system-level jobs affected by the Job restart time value specified in a system definition are: system manager (SYSMGR), system manager receive (SYSMGRRCV), and journal manager (JRNMGR).
MIMIX data group-level jobs affected by the Job restart time value specified in a data group definition are: object send (OBJSND), object receive (OBJRCV), database send (DBSND), database receive (DBRCV), database reader (DBRDR), object retrieve (OBJRTV), container send (CNRSND), container receive (CNRRCV), status send (STSSND), status receive (STSRCV), and object apply (OBJAPY).
Also, the role of the system on which you change the restart time affects the results.
For system definitions, the value you specify for the restart time and the role of the
system (management or network) determines which MIMIX system-level jobs will
restart and when. For data group definitions, the value you specify for the restart time
and the role of the system (source or target) determines which data group-level jobs
will restart and when. Time zone differences between systems also influence the
results you obtain.
MIMIX system-level jobs restart when they detect that the time specified in the system
definition has passed.
The system manager jobs are a pair of jobs that run between a network system and
the management system. The management and network systems both have journal
manager jobs, but the jobs operate independently. The job restart time specified in the
management system's system definition determines when to restart the journal manager on the management system. The job restart time specified in the network system's system definition determines when to restart the journal manager job on the
network system, when to restart the system manager jobs on both systems, and also
affects when cleanup jobs on both systems are submitted. Table 39 shows how the
role of the system affects the results of the specified job restart time.
For MIMIX data group-level jobs, a delay of 2 to 35 minutes from the specified time is
built into the job restart processing. The actual delay is unique to each job. By
distributing the jobs within this range the load on systems and communications is
more evenly distributed, reducing bottlenecks caused by many jobs simultaneously
attempting to end, start, and establish communications. MIMIX determines the actual
restart time for the object apply (OBJAPY) jobs based on the timestamp of the system
on which the jobs run. For all other affected jobs, MIMIX determines the actual start
time for object or database jobs based on the timestamp of the system on which the
OBJSND or the DBSND job runs. Table 40 shows how these key jobs affect when other data group-level jobs restart.

Table 39. Effect of the system's role on changing the job restart time in a system definition.

Management system role:
  System managers and cleanup jobs - The specified value is not used to determine the restart time. Restart is determined by the value specified for the network system.
  Journal managers and collector services - When a time is specified, the job on the management system restarts at the time specified. When *NONE is specified, the job on the management system is not restarted.

Network system role:
  System managers - When a time is specified, jobs on both systems restart when the time on the management system reaches the time specified. When *NONE is specified, jobs are not restarted on either system.
  Cleanup jobs - When a time is specified, jobs are submitted on both systems by the system manager jobs after they restart. When *NONE is specified, jobs are submitted on both systems when midnight occurs on the management system.
  Journal managers and collector services - When a time is specified, the job on the network system restarts at the time specified. When *NONE is specified, the job on the network system is not restarted.
For more information about MIMIX jobs see Replication job and supporting job
names on page 46.
Considerations for using *NONE
If you specify the value *NONE for the Job restart time in a data group definition, no
MIMIX data group-level jobs are automatically restarted.
If you specify the value *NONE for the Job restart time in a system definition, the
cleanup jobs started by the system manager will continue to be submitted based on
when midnight occurs on the management system. All other affected MIMIX system-
level jobs will not be restarted. Table 39 shows the effect of the value *NONE.
Examples: job restart time
Restart time examples: system definitions on page 288 and Restart time examples:
system and data group definition combinations on page 288 illustrate the effect of
using the Job restart time (RSTARTTIME) parameter. These examples assume that
the system configured as the management system for MIMIX operations is also the
target system for replication during normal operation. For each example, consider the
effect it would have on nightly backups that complete between midnight and 1 a.m. on
the target system.
Table 40. Systems on which data group-level jobs run. In each row, the job marked with an asterisk (*) determines the restart time for all jobs in the row.

Source system jobs                     Target system jobs
* Object send (OBJSND)                 Status receive (STSRCV)
  Object retrieve (OBJRTV)             Object receive (OBJRCV)
  Container send (CNRSND)              Container receive (CNRRCV)
                                       Status send (STSSND)
* Database send (DBSND) (1)            Database receive (DBRCV) (1)
                                       * Database reader (DBRDR) (1)
                                       * Object apply (OBJAPY)

1. When MIMIX is configured for remote journaling, the DBSND and DBRCV jobs are replaced by the DBRDR job. The DBRDR job restarts when the specified time occurs on the target system.
Attention: The value *NONE for the Job restart time parameter is not
recommended. If you specify *NONE in a system definition or a data group
definition, you need to develop and implement alternative procedures to
ensure that the affected MIMIX jobs are periodically restarted. Restarting
the jobs ensures that long running MIMIX jobs are not ended by the system
due to resource constraints and refreshes the job log to avoid overflow and
abnormal job termination.
Restart time examples: system definitions
These examples show the effect of changing the job restart time only in system
definitions.
Example 1: MIMIX is running Monday noon when you change the job restart time to
013000 in system definition NEWYORK, which is the management system. The
network system's system definition uses the default value 000000 (midnight). MIMIX
remains up the rest of the day. Because the current jobs use values that existed prior
to your change, all the MIMIX system-level jobs on NEWYORK automatically restart
at midnight. As a result of your change, the journal manager on NEWYORK restarts at
1:30 a.m. Tuesday and thereafter. The network system's journal manager restarts
when midnight occurs on that system. The system manager jobs on both systems
restart and submit the cleanup jobs when the management system reaches midnight.
Example 2: It is Friday evening and all MIMIX processes on the system CHICAGO
are ended while you perform planned maintenance. During that time you change the
job restart time to 040000 in system definition CHICAGO, which is a network system.
You start MIMIX processing again at 11:07 p.m. so your changes are in effect. The
MIMIX system-level jobs that restart Saturday and thereafter at 4 a.m. Chicago time
are:
The journal manager job on CHICAGO
The system manager jobs on the management system and on CHICAGO
The cleanup jobs are submitted on the management system and on CHICAGO
Because the management system's system definition uses the default value of
midnight, the journal manager on the management system restarts when midnight
occurs on that system.
Example 3: Friday afternoon you change system definition HONGKONG to have a
job restart time value of *NONE. HONGKONG is the management system. LONDON
is the associated network system and its system definition uses the default setting
000000 (midnight). You end and restart the MIMIX jobs to make the change effective.
The journal manager on HONGKONG is no longer restarted. At midnight (00:00,
Saturday and thereafter) HONGKONG time, the system manager jobs on both
systems restart and submit cleanup jobs on both systems. In your runbook you
document the new procedures to manually restart the journal manager on
HONGKONG.
Example 4: Wednesday evening you change the system definitions for LONDON and HONGKONG to both have a job restart time of *NONE. HONGKONG is the
management system. You restart the MIMIX jobs to make the change effective. At
midnight HONGKONG time, only the cleanup jobs on both systems are submitted. In
your runbook you document the new procedures to manually restart the journal
managers and system managers.
Restart time examples: system and data group definition combinations
These examples show the effect of changing the job restart time in various
combinations of system definitions and data group definitions.
Configuring restart times for MIMIX jobs
289
Example 5: You have a data group that operates between SYSTEMA and
SYSTEMB, which are both in the same time zone. Both the system definitions and the
data group definition use the default value 000000 (midnight) for the job restart time.
For both systems, the MIMIX system-level jobs restart at midnight. The data group
jobs on both systems restart between 2 and 35 minutes after midnight.
Example 6: At 10:30 Tuesday morning you change data group definition APP1 to have a
job restart time value of 013500. The data group operates between SYSTEMA and
SYSTEMB, which are both in the same time zone. Both system definitions use the
default restart time of midnight. MIMIX jobs remain up and running. At midnight, the
system-level jobs on both systems restart using the values from the preexisting
configuration; the data group-level jobs restart on both systems between 0:02 and
0:35 a.m. On Wednesday and thereafter, APP1 data group-level jobs restart between
1:37 and 2:10 a.m. while the MIMIX system-level jobs and jobs for other data groups
restart at midnight.
Example 7: You have a data group that operates between SYSTEMA and SYSTEMB
which are both in the same time zone and are defined as the values of System 1 and
System 2, respectively. The data group definition specifies a job restart time value of
*SYSDFN2. The system definition for SYSTEMA specifies the default job restart time
of 000000 (midnight). SYSTEMB is the management system and its system definition
specifies the value *NONE for the job restart time. The journal manager on SYSTEMB
does not restart and the data group jobs do not restart on either system because of
the *NONE value specified for SYSTEMB. The journal manager on SYSTEMA
restarts at midnight. System manager jobs on both systems restart and submit
cleanup jobs at midnight as a result of the value in the network system and the fact
that the systems are in the same time zone.
Example 8A: You have a data group defined between CHICAGO and NEWYORK
(System 1 and System 2, respectively) and the data group's job restart time is set to
030000 (3 a.m.). CHICAGO is the source system as well as a network system; its
system definition uses the default job restart time of midnight. NEWYORK is the target
system as well as the management system; its system definition uses a job restart
time of 020000 (2 a.m.). There is a one hour time difference between the two
systems; said another way, NEWYORK is an hour ahead of CHICAGO. Figure 17
shows the effect of the time zone difference on this configuration.
The journal manager on CHICAGO restarts at midnight Chicago time and the journal
manager on NEWYORK restarts at 2 a.m. New York time. The system manager jobs
on both systems restart when the management system (NEWYORK) reaches the
restart time specified for the network system (CHICAGO). The cleanup jobs are
submitted by the system manager jobs when they restart.
With the exception of the object apply jobs (OBJAPY), the data group jobs restart during the same 2 to 35 minute timeframe based on Chicago time (between 2 and 35 minutes after 3 a.m. in Chicago; after 4 a.m. in New York). Because the OBJAPY jobs are based on the time on the target system, which is an hour ahead of the source system time used for the other jobs, the OBJAPY jobs restart between 3:02 and 3:35 a.m. New York time.
Figure 17. Results of Example 8A. This is configured as a standard MIMIX environment.
Example 8B: This scenario is the same as example 8A with one exception. In this
scenario, the MIMIX environment is configured to use MIMIX Remote Journal support. Figure 18 shows that the database reader (DBRDR) job restarts based on
the time on the target system. Because the database send (DBSND) and database
receive (DBRCV) jobs are not used in a remote journaling environment, those jobs do
not restart.
Figure 18. Results of example 8B. This environment is configured to use MIMIX Remote Journal support.
Configuring the restart time in a system definition
To configure the restart time for MIMIX system-level jobs in an existing environment,
do the following:
1. On the Work with System Definitions display, type a 2 (Change) next to the
system definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want. You need to consider
the role of the system definition (management or network system) and the effect
of any time zone differences between the management system and the network
system.
Notes:
The time is based on a 24 hour clock, and must be specified in HHMMSS
format. Although seconds are ignored, the complete time format must be
specified. Valid values range from 000000 to 235959. The value 000000 is the
default and is equivalent to midnight (00:00:00 a.m.).
If you specify *NONE, cleanup jobs are submitted on both the network and
management systems based on when midnight occurs on the management
system. System manager and journal manager jobs will not restart. The value
*NONE is not recommended. For more information, see Considerations for
using *NONE on page 287.
4. To accept the change, press Enter.
The change has no effect on jobs that are currently running. The value for the Job
restart time is retrieved from the system definition at the time the jobs are started.
The change is effective the next time the jobs are started.
Configuring the restart time in a data group definition
To configure the restart time for MIMIX data group-level jobs in an existing
environment, do the following:
1. On the Work with Data Group Definitions display, type a 2 (Change) next to the
data group definition you want and press F4 (Prompt).
2. Press F10 (Additional parameters), then scroll down to the bottom of the display.
3. At the Job restart time prompt, specify the value you want. You need to consider
the effect of any time zone differences between the systems defined to the data
group.
Notes:
The time is based on a 24 hour clock, and must be specified in HHMMSS
format. Although seconds are ignored, the complete time format must be
specified. Valid values range from 000000 to 235959. The value 000000 is the
default and is equivalent to midnight (00:00:00 a.m.).
The value *NONE is not recommended. For more information, see
Considerations for using *NONE on page 287.
4. To accept the change, press Enter.
Changes have no effect on jobs that are currently running. The value for the Job
restart time is retrieved at the time the jobs are started. The change is effective the
next time the jobs are started.
Setting the system time zone and time
Each MIMIX system must have the correct time zone (QTIMZON) and time (QTIME)
system values set. If the time zone and time are not set correctly, it may cause issues
when running procedures for application groups. For example, the procedure status
time may display in the wrong order with incorrect times, which can make it difficult to
work with the procedure, or a switch may be unable to complete.
Note: These system values are updated immediately, so timed jobs may be
triggered when the values are updated. Therefore, you may want to schedule
this change, if necessary, during a time with a minimum of scheduled jobs or
during a planned outage when the system is in restricted state.
Verify that the QTIMZON system value is set with the correct value for the time zone
in which the LPAR is intended to run. If a change is needed, you should immediately
change the QTIME system value since the time of day is updated based on the new
value entered in the QTIMZON system value. To change the system values, do the
following:
1. Set the correct time zone in QTIMZON.
To determine the correct time zone when updating QTIMZON, you need to know:
The time zone name.
Whether Daylight Saving Time is observed. If it is observed, you must also know when Daylight Saving Time starts.
In the TIME ZONE field in QTIMZON, you can press F4 for a list of time zones
included with the system. A description of the time zones included with the system
can be found at:
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/rzati/rzatitimezone.htm
For a description of time zones that were added to support the Daylight Savings
Time extension, see:
http://www-01.ibm.com/support/docview.wss?rs=0&q1=SI24906&uid=nas35b3da840f6fe6c2186257230005266d8&loc=en_US&cs=utf-8&cc=us&lang=en
Once set, the QTIME *SYSVAL immediately changes to reflect the new QTIMZON
as if the previous QTIME value was the time in GMT.
2. Set the system time (QTIME) to the correct time so that previously scheduled jobs
do not repeat or get bypassed by the change in the QTIMZON value.
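As an illustration, the following commands set the time zone and then correct the time. The time zone name shown is a placeholder; select the correct *TIMZON object for your location from the F4 list:
CHGSYSVAL SYSVAL(QTIMZON) VALUE(QN0500EST)
CHGSYSVAL SYSVAL(QTIME) VALUE('083000')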
Creating an application group definition
Use this topic to create an application group. Application groups are a best practice and provide the ability to group and control multiple data groups as one entity. Default procedures for starting, switching, and ending the application group are also created.
To create an application group definition, do the following:
1. From the MIMIX Basic Main Menu, type 1 (Work with application groups) and
press Enter.
2. The Work with Application Groups display appears. Type 1 (Create) next to the
blank line at the top of the list area and press Enter.
3. The Create Application Group Def. (CRTAGDFN) display appears. Do the following:
a. At the Application group definition prompt, specify a name.
b. At the Application group type prompt, specify *NONCLU to indicate that the
application group will not participate in a cluster controlled by the IBM i
operating system.
c. Press Enter.
4. An additional prompt appears. Specify a description of the application group.
5. Press Enter.
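For example, the following command creates a non-cluster application group with a hypothetical name; the AGTYPE and TEXT keywords shown are assumptions to confirm by prompting:
CRTAGDFN AGDFN(MYAPP) AGTYPE(*NONCLU) TEXT('Payroll application group')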
Loading data resource groups into an application group
Use this topic to load data resource groups into an application group by selecting data
group definitions.
Data resource groups identify the data to be replicated within the application group.
When loading resource groups, you identify the type of relationship among objects
identified for replication by the data groups. The name space for the specified data
groups can either be unique or shared.
The Data group name space (NAMESPC) parameter on the Load Data Rsc. Grp. Ent.
(LODDTARGE) command determines the relationship among the objects defined for
replication within the specified data groups. The specified value also affects the
number of data resource group entries created.
When default values for parameters on the LODDTARGE command are used, the
names of the specified data groups are used in determining the names of the data
resource group entries created. (For data groups of type *PEER, the resource group
entry will be named ADMDMN.) If a data resource group entry already exists with the
data group name or ADMDMN, a unique name is generated by concatenating up to
the first five characters of the data group name, or ADMDMN, followed by the
characters RGE. If necessary, a two character alphanumeric suffix is added to ensure
its uniqueness.
Do the following to load data resource group entries for an application group:
1. Enter the following command, specifying the name of the installation library:
installation_library/LODDTARGE
The Load Data Rsc. Grp. Ent. (LODDTARGE) display appears.
2. At the Application group definition prompt, specify the name of the application
group.
3. At the Data group name space prompt, specify the value that represents the
relationship among the objects defined for replication within the specified data
groups.
Specify *UNIQUE when the objects replicated within the specified data groups
are unique. A unique resource group entry will be created for each specified
data group.
Specify *SHARED when the objects replicated within the specified data groups
are shared. Only one resource group entry will be created for the specified set
of data groups. Data groups of type *PEER are not assigned to the resource
group entry when *SHARED is specified.
4. Press Enter. One or more additional prompts appear.
5. The Data resource group entry prompt is only available when *SHARED is
specified in Step 3. The value *DFT uses the name of the first data group listed as
the name for the data resource group entry or generates a unique name if an
entry with that name already exists. If you specify a name, it cannot be the name
of an existing application group, data resource group entry, or cluster resource group (CRG) object.
6. The Data group definition prompt appears. The value *ALL selects all available
data groups within the installation. To have a smaller set of data groups
associated with the application group, specify the name of one or more data
groups. To see a list of the available data group names, press F4.
7. To load the entries, press Enter.
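For example, the following sketch loads entries for a hypothetical application group using a unique name space for all data groups in the installation; the DGDFN keyword is an assumption to confirm by prompting:
installation_library/LODDTARGE AGDFN(MYAPP) NAMESPC(*UNIQUE) DGDFN(*ALL)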
Specifying the primary node for the application group
Use this topic to specify the correct primary node when you are configuring and have
associated existing data groups to an application group by using Loading data
resource groups into an application group on page 295.
Do the following:
1. From the MIMIX Basic Main Menu, type 1 (Work with application groups) and
press Enter.
2. The Work with Application Groups display appears. Type 12 (Node entries) next
to the application group you want and press Enter.
3. The Work with Node Entries display appears. Press F10 to toggle between
configured view and status view.
Note: While configuring, the status view of this display will show the Current Role
and Data Provider with values of *UNDEFINED until the application group
is started.
4. From the configured view, type 2 (Change) next to the node that you want to be
the primary node and press Enter.
5. The Change Node Entry (CHGNODE) command appears. Specify *PRIMARY at
the Role prompt.
6. Press Enter.
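The equivalent command-line request might look like the following sketch, where the application group and node names are hypothetical and the keyword names are assumptions to confirm by prompting:
CHGNODE AGDFN(MYAPP) NODE(SYSTEMA) ROLE(*PRIMARY)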
Starting, ending, or switching an application group
Application group commands that start (STRAG), end (ENDAG), or switch (SWTAG)
the replication environment invoke procedures to perform the requested operation.
For the purpose of describing their use, these commands are quite similar.
This topic describes behavior of the commands for application groups that do not
participate in a cluster controlled by the IBM i operating system (*NONCLU
application groups).
The following parameters are available on all of the commands unless otherwise
noted.
The following parameters identify the scope of the requested operation:
Application group definition (AGDFN) - Specifies the requested application group.
You can either specify a name or the value *ALL.
Resource groups (TYPE) - Specifies the types of resource groups to be
processed for the requested application group.
Data resource group entry (DTARSCGRP) - Specifies the data resource groups to
include in the request. The default is *ALL or you can specify a name. This
parameter is ignored when TYPE is *ALL or *APP.
The following parameters, when available, define the expected behavior:
Switch type (SWTTYP) - Only available on the SWTAG command, this specifies
the reason the application group is being switched. The procedure called to
perform the switch and the actions performed during the switch differ based on
whether the current primary node (data source) is available at the start of the
switch procedure. The default value, *PLANNED, indicates that the primary node
is still available and the switch is being performed for normal business processes
(such as to perform maintenance on the current source system or as part of a
standard switch procedure). The value *UNPLANNED indicates that the switch is
an unplanned activity and the data source system may not be available.
Current node roles (ROLE) - Only available on the STRAG command, this
parameter is ignored for non-cluster application groups.
Node roles (ROLE) - Only available on the SWTAG command, this specifies
which set of node roles will determine the node that becomes the new primary
node as a result of the switch. The default value *CURRENT uses the current
order of node roles. If the application group participates in a cluster, the current
roles defined within the CRGs will be used. If *CONFIG is specified, the
configured primary node will become the new primary node and the new role of
other nodes in the recovery domain will be determined from their current roles. If
you specify a name of a node within the recovery domain for the application
group, the node will be made the new primary node and the new role of other
nodes in the recovery domain will be determined from their current roles.
The following parameters identify the procedure to use and its starting point:
Begin at step (STEP) - Specifies where the request will start within the specified
procedure. This parameter is described in detail below.
Procedure (PROC) - Specifies the name of the procedure to run to perform the
requested operation when starting from its first step. The value *DFT will use the
procedure designated as the default for the application group. The value
*LASTRUN uses the same procedure used for the previous run of the command.
You can also specify the name of a procedure that is valid for the specified application group and type of request.
Where should the procedure begin? The value specified for the Begin at step
(STEP) parameter on the request to run the procedure determines the step at which
the procedure will start. The status of the last run of the procedure determines which
values are valid.
The default value, *FIRST, will start the specified procedure at its first step. This value
can be used when the procedure has never been run, when its previous run
completed (*COMPLETED or *COMPERR), or when a user acknowledged the status
of its previous run which failed or was canceled (*ACKFAILED or *ACKCANCEL).
Other values are for resolving problems with a failed or canceled procedure. When a
procedure fails or is canceled, subsequent attempts to run the same procedure will
fail until user action is taken. You will need to determine the best course of action for
your environment based on the implications of the canceled or failed steps and any
steps which completed.
The value *RESUME will start the last run of the procedure beginning with the step at
which it failed, the step that was canceled in response to an error, or the step
following where the procedure was canceled. Only procedures with status values of
*FAILED or *CANCELED can be resumed. The value *RESUME may be appropriate
after you have investigated and resolved the problem which caused the procedure to end.
The value *OVERRIDE will acknowledge the status of the last run of a procedure that
failed or was canceled and start a new run of the procedure beginning at the first step.
Only procedures with status values of *FAILED or *CANCELED can be overridden;
the status of that run is set to *ACKFAILED or *ACKCANCELED. This value may be
appropriate after you have investigated the problem and understand the effect of the
partially performed procedure on your environment. Activity for steps that did
complete is not reversed. It is assumed that you have determined that starting the
procedure at its first step would not be detrimental to data or your environment.
Starting an application group
For an application group, a procedure for only one operation (start, end, or switch)
can run at a time.
To start an application group, do the following:
1. From the Work with Application Groups display, type 9 (Start) next to the
application group you want and press F4 (Prompt).
2. Verify that the values you want are specified for Resource groups and Data
resource group entry.
3. If you are starting after addressing problems with the previous start request,
specify the value you want for Begin at step. Be certain that you understand the
effect the value you specify will have on your environment.
4. Press Enter.
5. The Procedure prompt appears. Do one of the following:
To use the default start procedure, press Enter.
To use a different start procedure for the application group, specify its name.
Then press Enter.
Ending an application group
For an application group, a procedure for only one operation (start, end, or switch)
can run at a time.
To end an application group, do the following:
1. From the Work with Application Groups display, type 10 (End) next to the
application group you want and press F4 (Prompt).
2. Verify that the values you want are specified for Resource groups and Data
resource group entry.
3. If you are starting the procedure after addressing problems with the previous end
request, specify the value you want for Begin at step. Be certain that you
understand the effect the value you specify will have on your environment.
4. Press Enter.
5. The Procedure prompt appears. Do one of the following:
To use the default end procedure, press Enter.
To use a different end procedure for the application group, specify its name.
Then press Enter.
Switching an application group
For an application group, a procedure for only one operation (start, end, or switch)
can run at a time.
To switch an application group, do the following:
1. From the Work with Application Groups display, type 15 (Switch) next to the
application group you want and press Enter.
The Switch Application Group (SWTAG) display appears.
2. Verify that the values you want are specified for Resource groups and Data
resource group entry.
3. Specify the type of switch to perform at the Switch type prompt.
4. Verify that the default value *CURRENT for Node roles prompt is valid for the
switch you need to perform. If necessary, specify a different value.
5. If you are starting the procedure after addressing problems with the previous
switch request, specify the value you want for Begin at step. Be certain that you
understand the effect the value you specify will have on your environment.
6. Press Enter.
7. The Procedure prompt appears. Do one of the following:
To use the default switch procedure for the specified switch type, press Enter.
To use a different switch procedure for the application group, specify its name.
Then press Enter.
8. A switch confirmation panel appears. To perform the switch, press F16.
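Using the parameters described earlier in this topic, a comparable command-line request for a planned switch to the configured primary node might look like the following sketch (the application group name is hypothetical):
SWTAG AGDFN(MYAPP) TYPE(*ALL) SWTTYP(*PLANNED) ROLE(*CONFIG) STEP(*FIRST) PROC(*DFT)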
CHAPTER 14 Starting, ending, and verifying
journaling
This chapter describes procedures for starting and ending journaling. Journaling must
be active on all files, IFS objects, data areas and data queues that you want to
replicate through a user journal. Normally, journaling is started during configuration.
However, there are times when you may need to start or end journaling on items
identified to a data group.
The topics in this chapter include:
What objects need to be journaled on page 302 describes, for supported
configuration scenarios, what types of objects must have journaling started before
replication can occur. It also describes when journaling is started implicitly, as well
as the authority requirements necessary for user profiles that create the objects to
be journaled when they are created.
MIMIX commands for starting journaling on page 304 identifies the MIMIX
commands available for starting journaling and describes the checking performed
by the commands.
Journaling for physical files on page 305 includes procedures for displaying
journaling status, starting journaling, ending journaling, and verifying journaling for
physical files identified by data group file entries.
Journaling for IFS objects on page 308 includes procedures for displaying
journaling status, starting journaling, ending journaling, and verifying journaling for
IFS objects replicated cooperatively (advanced journaling). IFS tracking entries
are used in these procedures.
Journaling for data areas and data queues on page 311 includes procedures for displaying journaling status, starting journaling, ending journaling, and verifying journaling for data area and data queue objects replicated cooperatively (advanced journaling). Object tracking entries are used in these procedures.
What objects need to be journaled
A data group can be configured in a variety of ways that involve a user journal in the
replication of files, data areas, data queues and IFS objects. Journaling must be
started for any object to be replicated through a user journal or to be replicated by
cooperative processing between a user journal and the system journal.
Requirements for system journal replication - System journal replication
processes use a special journal, the security audit (QAUDJRN) journal. Events are
logged in this journal to create a security audit trail. When data group object entries,
IFS entries, and DLO entries are configured, each entry specifies an object auditing
value that determines the type of activity on the objects to be logged in the journal.
Object auditing is automatically set for all objects defined to a data group when the
data group is first started, or any time a change is made to the object entries, IFS
entries, or DLO entries for the data group. Because security auditing logs the object
changes in the system journal, no special action is needed.
Requirements for user journal replication - User journal replication processes
require that the journaling be started for the objects identified by data group file
entries. Both MIMIX Dynamic Apply and legacy cooperative processing use data
group file entries and therefore require journaling to be started. Configurations that
include advanced journaling for replication of data areas, data queues, or IFS objects
also require that journaling be started on the associated object tracking entries and
IFS tracking entries, respectively. Starting journaling ensures that changes to the
objects are recorded in the user journal, and are therefore available for MIMIX to
replicate.
During initial configuration, the configuration checklists direct you when to start
journaling for objects identified by data group file entries, IFS tracking entries, and
object tracking entries. The MIMIX commands STRJRNFE, STRJRNIFSE, and STRJRNOBJE simplify the process of starting journaling. For more information about
these commands, see MIMIX commands for starting journaling on page 304.
Although MIMIX commands for starting journaling are preferred, you can also use
IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling if you have
the appropriate authority for starting journaling.
Requirements for implicit starting of journaling - Journaling can be automatically
started for newly created database files, data areas, data queues, or IFS objects
when certain requirements are met.
The user ID creating the new objects must have the required authority to start
journaling and the following requirements must be met:
IFS objects - A new IFS object is automatically journaled if the directory in which it
is created is journaled as a result of a request that permitted journaling inheritance
for new objects. Typically, if MIMIX started journaling on the parent directory,
inheritance is permitted. If you manually start journaling on the parent directory
using the IBM command STRJRN, specify INHERIT(*YES). This will allow IFS objects created within the journaled directory to inherit the journal options and journal state of the parent directory (see the command sketch after this list).
Database files created by SQL statements - A new file created by a CREATE TABLE statement is automatically journaled if the library in which it is created contains a journal named QSQJRN.
New *FILE, *DTAARA, *DTAQ objects - The operating system will automatically journal a new object if it is created in a library that contains a QDFTJRN data area and the data area has enabled automatic journaling for the object type. The default value (*DFT) for the Journal at creation (JRNATCRT) parameter in the data group definition enables MIMIX to create the QDFTJRN data area in a library and enable the data area for automatic journaling for an object type. When the data group is started, MIMIX evaluates all data group object entries for each object type. (Entries for *FILE objects are only evaluated when the data group specifies COOPJRN(*USRJRN).) Entries properly configured to allow cooperative processing of the object type determine whether MIMIX will create the QDFTJRN data area. MIMIX uses the data group entry with the most specific match to the object type and library that also specifies *ALL for its System 1 object (OBJ1) and Attribute (OBJATR). When the QDFTJRN data area in a library is enabled for an object type, all new objects of that type are journaled, not just those which are eligible for replication.
Note: MIMIX prevents the QDFTJRN data area from being created in the following libraries: QSYS*, QRECOVERY, QRCY*, QUSR*, QSPL*, QRPL*, QRCL*, QGPL, QTEMP, and SYSIB*.
For example, if MIMIX finds only the following data group object entries for library MYLIB, it would use the first entry when determining whether to create the QDFTJRN data area because it is the most specific entry that also meets the OBJ1(*ALL) and OBJATR(*ALL) requirements. The second entry is not considered in the determination because its OBJ1 and OBJATR values do not meet these requirements.
LIB1(MYLIB) OBJ1(*ALL) OBJTYPE(*FILE) OBJATR(*ALL) COOPDB(*YES) PRCTYPE(*INCLD)
LIB1(MYLIB) OBJ1(MYAPP) OBJTYPE(*FILE) OBJATR(DSPF) COOPDB(*YES) PRCTYPE(*INCLD)
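For reference, if you start journaling on a parent directory yourself rather than letting MIMIX do it, the IBM STRJRN request mentioned in the first item of this list might look like the following, where the directory, library, and journal names are placeholders:
STRJRN OBJ(('/DIR1')) JRN('/QSYS.LIB/MYLIB.LIB/MYJRN.JRN') INHERIT(*YES)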
Authority requirements for starting journaling
Normal MIMIX processes run under the MIMIXOWN user profile, which ships with
*ALLOBJ special authority. Therefore, it is not necessary for other users to account
for journaling authority requirements when using MIMIX commands (STRJRNFE, STRJRNIFSE, STRJRNOBJE) to start journaling.
When the MIMIX journal managers are started, or when the Build Journaling Environment (BLDJRNENV) command is used, MIMIX checks the public authority
(*PUBLIC) for the journal. If necessary, MIMIX changes public authority so the user ID
in use has the appropriate authority to start journaling.
Authority requirements must be met to enable the automatic journaling of newly
created objects and if you use IBM commands to start journaling instead of MIMIX
commands.
If you create database files, data areas, or data queues for which you expect
automatic journaling at creation, the user ID creating these objects must have the
required authority to start journaling.
304
If you use the IBM commands (STRJRNPF, STRJRN, STRJRNOBJ) to start journaling, the user ID that performs the start journaling request must meet the appropriate authority requirements.
For journaling to be successfully started on an object, one of the following authority
requirements must be satisfied:
The user profile of the user attempting to start journaling for an object must have
*ALLOBJ special authority.
The user profile of the user attempting to start journaling for an object must have
explicit *ALL object authority for the journal to which the object is to be journaled.
Public authority (*PUBLIC) must have *OBJALTER, *OBJMGT, and *OBJOPR
object authorities for the journal to which the object is to be journaled.
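For example, the third requirement could be satisfied by granting public authority to the journal with the IBM GRTOBJAUT command; the library and journal names are placeholders:
GRTOBJAUT OBJ(MYLIB/MYJRN) OBJTYPE(*JRN) USER(*PUBLIC) AUT(*OBJALTER *OBJMGT *OBJOPR)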
MIMIX commands for starting journaling
Before you use any of the MIMIX commands for starting journaling, the data group file entries, IFS tracking entries, or object tracking entries associated with the command's object class must be loaded.
The MIMIX commands for starting journaling are:
Start Journal Entry (STRJRNFE) - This command starts journaling for files
identified by data group file entries.
Start Journaling IFS Entries (STRJRNIFSE) - This command starts journaling of IFS objects configured for advanced journaling. Data group IFS entries must be configured and IFS tracking entries must be loaded (LODDGIFSTE command) before running the STRJRNIFSE command to start journaling.
Start J ournaling Obj Entries (STRJ RNOBJ E) - This command starts journaling of
data area and data queue objects configured for advanced journaling. Data group
object entries must be configured and object tracking entries be loaded
(LODDGOBJ TE command) before running the STRJ RNOBJ E command to start
journaling.
If you attempt to start journaling for a data group file entry, IFS tracking entry, or object
tracking entry and the files or objects associated with the entry are already journaled,
MIMIX checks that the physical file, IFS object, data area, or data queue is journaled
to the journal associated with the data group. If the file or object is journaled to the
correct journal, the journaling status of the data group file entry, IFS tracking entry, or
object tracking entry is changed to *YES. If the file or object is not journaled to the correct
journal or the attempt to start journaling fails, an error occurs and the journaling status
is changed to *NO.
Journaling for physical files
Data group file entries identify physical files to be replicated. When data group file
entries are added to a configuration, they may have an initial status of *ACTIVE.
However, the physical files which they identify may not be journaled. In order for
replication to occur, journaling must be started for the files on the source system.
This topic includes procedures to display journaling status, and to start, end, or verify
journaling for physical files.
Displaying journaling status for physical files
Use this procedure to display journaling status for physical files identified by data
group file entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the
Work with Data Groups display.
2. On the Work with Data Groups display, type 17 (File entries) next to the data
group you want and press Enter.
3. The Work with DG File Entries display appears. The initial view shows the current
and requested status of the data group file entry. Press F10 (Journaled view).
At the right side of the display, the Journaled System 1 and System 2 columns
indicate whether the physical file associated with the file entry is journaled on
each system.
Note: Logical files will have a status of *NA. Data group file entries exist for
logical files only in data groups configured for MIMIX Dynamic Apply.
Starting journaling for physical files
Use this procedure to start journaling for physical files identified by data group file
entries. In order for replication to occur, journaling must be started for the file on the
source system.
This procedure invokes the Start Journal Entry (STRJRNFE) command. The
command can also be entered from a command line.
Do the following:
1. Access the journaled view of the Work with DG File Entries display as described
in Displaying journaling status for physical files on page 305.
2. From the Work with DG File Entries display, type a 9 (Start journaling) next to the
file entries you want. Then do one of the following:
To start journaling using the command defaults, press Enter.
To modify command defaults, press F4 (Prompt) then continue with the next
step.
3. The Start Journal Entry (STRJRNFE) display appears. The Data group definition
prompts and the System 1 file prompts identify your selection. Accept these
values or specify the values you want.
4. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data
group is configured for journaling on the target system (JRNTGT) and starts or
prevents journaling from starting as required.
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. To start journaling for the physical file associated with the selected data group,
press Enter.
The system returns a message to confirm the operation was successful.
Ending journaling for physical files
Use this procedure to end journaling for a physical file associated with a data group
file entry. Once journaling for a file is ended, any changes to that file are not captured
and are not replicated. You may need to end journaling if a file no longer needs to be
replicated, to prepare for upgrading MIMIX software, or to correct an error.
This procedure invokes the End Journaling File Entry (ENDJRNFE) command. The
command can also be entered from a command line.
To end journaling, do the following:
1. Access the journaled view of the Work with DG File Entries display as described
in Displaying journaling status for physical files on page 305.
2. From the Work with DG File Entries display, type a 10 (End journaling) next to the
file entry you want and do one of the following:
Note: MIMIX cannot end journaling on a file that is journaled to the wrong
journal, for example, a file that is journaled to a journal that does not match
the journal definition for that data group. If you want to end journaling
outside of MIMIX, use the ENDJRNPF command.
To end journaling using command defaults, press Enter. Journaling is ended.
To modify additional prompts for the command, press F4 (Prompt) and
continue with the next step.
3. The End Journal File Entry (ENDJRNFE) display appears. If you want to end
journaling for all files in the library, specify *ALL at the System 1 file prompt.
4. Specify the value you want for the End journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data
group is configured for journaling on the target system (JRNTGT) and ends or
prevents journaling from ending as required.
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. To end journaling, press Enter.
Verifying journaling for physical files
Use this procedure to verify whether a physical file defined by a data group file entry is
journaled correctly. This procedure invokes the Verify Journaling File Entry
(VFYJRNFE) command to determine whether the file is journaled and whether it is
journaled to the journal defined in the journal definition. When these conditions are
met, the journal status on the Work with DG File Entries display is set to *YES. The
command can also be entered from a command line.
To verify journaling for a physical file, do the following:
1. Access the journaled view of the Work with DG File Entries display as described
in Displaying journaling status for physical files on page 305.
2. From the Work with DG File Entries display, type 11 (Verify journaling) next to
the file entry you want and do one of the following:
To verify journaling using command defaults, press Enter.
To modify additional prompts for the command, press F4 (Prompt) and
continue with the next step.
3. The Verify Journaling File Entry (VFYJRNFE) display appears. The Data group
definition prompts and the System 1 file prompts identify your selection. Accept
these values or specify the values you want.
4. Specify the value you want for the Verify journaling on system prompt. When
*DGDFN is specified, MIMIX considers whether the data group is configured for
journaling on the target system (JRNTGT) when determining where to verify
journaling.
5. If you want to use batch processing, specify *YES for the Submit to batch prompt.
6. Press Enter.
Journaling for IFS objects
IFS tracking entries are loaded for a data group after the data group IFS entries have
been configured for replication through the user journal (advanced journaling).
However, loading IFS tracking entries does not automatically start journaling on the
IFS objects they identify. In order for replication to occur, journaling must be started on
the source system for the IFS objects identified by IFS tracking entries.
This topic includes procedures to display journaling status, and to start, end, or verify
journaling for IFS objects identified for replication through the user journal.
You should be aware of the information in Long IFS path names on page 107 and
Using file identifiers (FIDs) for IFS objects on page 284.
Displaying journaling status for IFS objects
Use this procedure to display journaling status for IFS objects identified by IFS
tracking entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the
Work with Data Groups display.
2. On the Work with Data Groups display, type 50 (IFS trk entries) next to the data
group you want and press Enter.
3. The Work with DG IFS Trk. Entries display appears. The initial view shows the
object type and status at the right of the display. Press F10 (Journaled view).
At the right side of the display, the Journaled System 1 and System 2 columns
indicate whether the IFS object identified by the tracking entry is journaled on each
system.
Starting journaling for IFS objects
Use this procedure to start journaling for IFS objects identified by IFS tracking entries.
This procedure invokes the Start Journaling IFS Entries (STRJRNIFSE) command.
The command can also be entered from a command line.
To start journaling for IFS objects, do the following:
1. If you have not already done so, load the IFS tracking entries for the data group.
Use the procedure in Loading IFS tracking entries on page 257.
2. Access the journaled view of the Work with DG IFS Trk. Entries display as
described in Displaying journaling status for IFS objects on page 308.
3. From the Work with DG IFS Trk. Entries display, type a 9 (Start journaling) next to
the IFS tracking entries you want. Then do one of the following:
To start journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next
step.
4. The Start Journaling IFS Entries (STRJRNIFSE) display appears. The Data group
definition and IFS objects prompts identify the IFS object associated with the
tracking entry you selected. You cannot change the values shown for the IFS
objects prompts (see note 1).
5. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data
group is configured for journaling on the target system (JRNTGT) and starts or
prevents journaling from starting as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
7. The System 1 file identifier and System 2 file identifier prompts identify the file
identifier (FID) of the IFS object on each system. You cannot change the values
(see note 2).
8. To start journaling on the IFS objects specified, press Enter.
Ending journaling for IFS objects
Use this procedure to end journaling for IFS objects identified by IFS tracking entries.
This procedure invokes the End Journaling IFS Entries (ENDJRNIFSE) command.
The command can also be entered from a command line.
To end journaling for IFS objects, do the following:
1. Access the journaled view of the Work with DG IFS Trk. Entries display as
described in Displaying journaling status for IFS objects on page 308.
2. From the Work with DG IFS Trk. Entries display, type a 10 (End journaling) next to
the IFS tracking entries you want. Then do one of the following:
To end journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The End Journaling IFS Entries (ENDJRNIFSE) display appears. The Data group
definition and IFS objects prompts identify the IFS object associated with the
tracking entry you selected. You cannot change the values shown for the IFS
objects prompts (see note 1).
4. Specify the value you want for the End journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data
group is configured for journaling on the target system (JRNTGT) and ends or
prevents journaling from ending as required.
Notes:
1. When the command is invoked from a command line, you can change values specified for the
IFS objects prompts. Also, you can specify as many as 300 object selectors by using the + for
more values prompt.
2. When the command is invoked from a command line, use F10 to see the FID prompts. Then you
can optionally specify the unique FID for the IFS object on either system. The FID values can be
used alone or in combination with the IFS object path name.
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier prompts identify the file
identifier (FID) of the IFS object on each system. You cannot change the values
shown (see note 2).
7. To end journaling on the IFS objects specified, press Enter.
Verifying journaling for IFS objects
Use this procedure to verify whether an IFS object identified by an IFS tracking entry is
journaled correctly. This procedure invokes the Verify Journaling IFS Entries
(VFYJRNIFSE) command to determine whether the IFS object is journaled, whether it
is journaled to the journal defined in the data group definition, and whether it is
journaled with the attributes defined in the data group definition. The command can
also be entered from a command line.
To verify journaling for IFS objects, do the following:
1. Access the journaled view of the Work with DG IFS Trk. Entries display as
described in Displaying journaling status for IFS objects on page 308.
2. From the Work with DG IFS Trk. Entries display, type 11 (Verify journaling) next
to the IFS tracking entries you want. Then do one of the following:
To verify journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The Verify Journaling IFS Entries (VFYJRNIFSE) display appears. The Data
group definition and IFS objects prompts identify the IFS object associated with
the tracking entry you selected. You cannot change the values shown for the IFS
objects prompts (see note 1).
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN is specified, MIMIX considers whether the data group is
configured for journaling on the target system (JRNTGT) and verifies journaling on
the appropriate systems as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. The System 1 file identifier and System 2 file identifier prompts identify the file
identifier (FID) of the IFS object on each system. You cannot change the values
shown (see note 2).
7. To verify journaling on the IFS objects specified, press Enter.
Journaling for data areas and data queues
Object tracking entries are loaded for a data group after the data group object entries
have been configured for replication through the user journal (advanced journaling).
However, loading object tracking entries does not automatically start journaling on the
objects they identify. In order for replication to occur, journaling must be started on
the source system for the objects identified by object tracking entries.
This topic includes procedures to display journaling status, and to start, end, or verify
journaling for data areas and data queues identified for replication through the user
journal.
Displaying journaling status for data areas and data queues
Use this procedure to display journaling status for data areas and data queues
identified by object tracking entries. Do the following:
1. From the MIMIX Intermediate Main Menu, type 1 and press Enter to access the
Work with Data Groups display.
2. On the Work with Data Groups display, type 52 (Obj trk entries) next to the data
group you want and press Enter.
3. The Work with DG Obj. Trk. Entries display appears. The initial view shows the
object type and status at the right of the display. Press F10 (Journaled view).
At the right side of the display, the Journaled System 1 and System 2 columns
indicate whether the object identified by the tracking entry is journaled on each system.
Starting journaling for data areas and data queues
Use this procedure to start journaling for data areas and data queues identified by
object tracking entries.
This procedure invokes the Start Journaling Obj Entries (STRJRNOBJE) command.
The command can also be entered from a command line.
To start journaling for data areas and data queues, do the following:
1. If you have not already done so, load the object tracking entries for the data
group. Use the procedure in Loading object tracking entries on page 258.
2. Access the journaled view of the Work with DG Obj. Trk. Entries display as
described in Displaying journaling status for data areas and data queues on
page 311.
3. From the Work with DG Obj. Trk. Entries display, type a 9 (Start journaling) next to
the object tracking entries you want. Then do one of the following:
To start journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next
step.
4. The Start Journaling Obj Entries (STRJRNOBJE) display appears. The Data
group definition and Objects prompts identify the object associated with the
tracking entry you selected. Although you can change the values shown for these
prompts, it is not recommended unless the command was invoked from a
command line.
5. Specify the value you want for the Start journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data
group is configured for journaling on the target system (JRNTGT) and starts or
prevents journaling from starting as required.
6. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
7. To start journaling on the objects specified, press Enter.
Ending journaling for data areas and data queues
Use this procedure to end journaling for data areas and data queues identified by
object tracking entries.
This procedure invokes the End Journaling Obj Entries (ENDJRNOBJE) command.
The command can also be entered from a command line.
To end journaling for data areas and data queues, do the following:
1. Access the journaled view of the Work with DG Obj. Trk. Entries display as
described in Displaying journaling status for data areas and data queues on
page 311.
2. From the Work with DG Obj. Trk. Entries display, type a 10 (End journaling) next
to the object tracking entries you want. Then do one of the following:
To end journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The End Journaling Obj Entries (ENDJRNOBJE) display appears. The Data group
definition and Objects prompts identify the object associated with the tracking
entry you selected. Although you can change the values shown for these prompts,
it is not recommended unless the command was invoked from a command line.
4. Specify the value you want for the End journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN, *SRC, or *TGT is specified, MIMIX considers whether the data
group is configured for journaling on the target system (JRNTGT) and ends or
prevents journaling from ending as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. To end journaling on the objects specified, press Enter.
Verifying journaling for data areas and data queues
Use this procedure to verify whether an object identified by an object tracking entry is
journaled correctly. This procedure invokes the Verify Journaling Obj Entries
(VFYJRNOBJE) command to determine whether the object is journaled, whether it is
journaled to the journal defined in the data group definition, and whether it is journaled
with the attributes defined in the data group definition. The command can also be
entered from a command line.
To verify journaling for objects, do the following:
1. Access the journaled view of the Work with DG Obj. Trk. Entries display as
described in Displaying journaling status for data areas and data queues on
page 311.
2. From the Work with DG Obj. Trk. Entries display, type 11 (Verify journaling) next
to the object tracking entries you want. Then do one of the following:
To verify journaling using the command defaults, press Enter.
To modify the command defaults, press F4 (Prompt) and continue with the next
step.
3. The Verify Journaling Obj Entries (VFYJRNOBJE) display appears. The Data
group definition and Objects prompts identify the object associated with the
tracking entry you selected. Although you can change the values shown for these
prompts, it is not recommended unless the command was invoked from a
command line.
4. Specify the value you want for the Verify journaling on system prompt. Press F4 to
see a list of valid values.
When *DGDFN is specified, MIMIX considers whether the data group is
configured for journaling on the target system (JRNTGT) and verifies journaling on
the appropriate systems as required.
5. To use batch processing, specify *YES for the Submit to batch prompt and press
Enter. Additional prompts for Job description and Job name appear. Either accept
the default values or specify other values.
6. To verify journaling on the objects specified, press Enter.
CHAPTER 15 Configuring for improved
performance
This chapter describes how to modify your configuration to use advanced techniques
to improve journal performance and MIMIX performance.
Journal performance: The following topics describe how to improve journal
performance:
Minimized journal entry data on page 318 describes benefits of and restrictions
for using minimized user journal entries for *FILE and *DTAARA objects. A
discussion of large object (LOB) data in minimized entries and configuration
information are included.
Configuring database apply caching on page 320 describes benefits of and how
to configure MIMIX functionality for database apply caching.
Configuring for high availability journal performance enhancements on page 321
describes journal caching and journal standby state within MIMIX to support IBM's
High Availability Journal Performance IBM i option 42, Journal Standby feature
and Journal caching. Requirements and restrictions are included.
MIMIX performance: The following topics describe how to improve MIMIX
performance:
Configuring parallel access path maintenance on page 315 describes this
function and how it can be used to improve performance for database apply
processes.
Caching extended attributes of *FILE objects on page 325 describes how to
change the maximum size of the cache used to store extended attributes of *FILE
objects replicated from the system journal.
Increasing data returned in journal entry blocks by delaying RCVJRNE calls on
page 326 describes how you can improve object send performance by changing
the size of the block of data from a receive journal entry (RCVJRNE) call and
delaying the next call based on a percentage of the requested block size.
Configuring high volume objects for better performance on page 329 describes
how to change your configuration to improve system journal performance.
Improving performance of the #MBRRCDCNT audit on page 330 describes how
to use the CMPRCDCNT commit threshold policy to limit comparisons and
thereby improve performance of this audit in environments which use commitment
control.
Configuring parallel access path maintenance
The parallel access path (AP) maintenance function provides improved performance
for database apply processes by using multiple parallel monitor jobs to maintain
access paths associated with logical files.
This is accomplished by automatically creating a set of *INTERVAL monitors that are
responsible for the access path maintenance for non-uniquely keyed logical file
access paths affected by database record operations such as inserts, updates and
deletes. This removes the access path maintenance responsibility from the 'normal'
database apply sessions, allowing them to process journal entries more efficiently.
Underlying Technology
The MAINT attribute for IBM i logical files specifies how the access path associated with
the logical file is maintained.
It can be set as follows:
*IMMED: Indicates that changes to the underlying files are immediately
reflected in the access path whenever a record is inserted, updated, or deleted.
This is the default.
*REBLD: Indicates that the access path does not exist until
the logical file is opened, at which point the access path is rebuilt from scratch
using all the underlying physical file records. This build process can be very time
consuming for large files.
*DLY: With this setting, changes to the access path are not applied directly to the
access path tree structure, but are instead logged for later application to the tree.
(This is also known as delayed maintenance.) Since applying changes to the tree structure
for large access paths can be very expensive due to multiple page faults, this
greatly reduces the maintenance cost at the time of the update, insert, or delete of
a record. The log of delayed maintenance items grows until one of several events
occurs:
1. The maintenance is set to *IMMED, and the logged items are then applied to the
access path.
2. The maintenance is set to *REBLD, and the logged items are then deleted.
3. The logical file is opened for keyed access, and the logged items are then applied
to the access path. When the file is closed, delayed maintenance logging is
resumed.
4. The delayed maintenance log grows to 10% of the access path size. The logged
items are deleted, and the access path is rebuilt at the time of the next logical file
open.
Parallel Access Path Maintenance usage of MAINT
Parallel Access Path Maintenance uses the MAINT attribute for IBM i logical files to
specify how the access path associated with the logical file is maintained. Parallel
Access Path Maintenance sets eligible logical files to MAINT(*DLY) on the target
system to relieve the database apply sessions of the access path maintenance
responsibility. To avoid letting the delayed maintenance log grow too large, Parallel
Access Path Maintenance also creates *INTERVAL monitors which periodically open
each file member. It is during this open operation that the access path maintenance
operations are performed, under the monitor job.
The logical files eligible for this treatment are those in which:
1. A data group file entry for the logical file, with MBR(*ALL), exists and is active.
2. The file is MAINT(*IMMED) on the source system.
3. The file is keyed.
4. The file is not uniquely keyed.
When the monitors are inactive, the MAINT attribute is reset back to its original state
(normally *IMMED). The monitors are responsible for periodically opening the logical files
to assure that the access path stays 'caught up.'
Parallel Access Path Maintenance is implemented with the Parallel AP maintenance
(PRLAPMNT) parameter in the Set MIMIX Policies (SETMMXPCY) command.
PRLAPMNT specifies the criteria for enabling the parallel access path maintenance
function.
Note: These changes are not effective until the associated data groups have been
started.
1. From the command line type SETMMXPCY and press F4 (Prompt).
2. For the Data group definition, do one of the following:
To set the default policy for the installation, verify that the value specified for
Data group definition is *INST.
To set the policy for a specific data group, specify the full three-part name.
3. Press Enter.
You will see all the policies and their current values for the level you specified in
Step 2.
4. Use the Page Down key to locate the Parallel AP maintenance policy, then specify
the values you want, as described in Table 41.
5. To accept the changes, press Enter.
Table 41. Parallel AP maintenance policy

Method - Specifies the method by which the parallel access path maintenance function is
implemented.
  *SAME: The value is not changed.
  *NONE: The parallel access path maintenance function is not used. The values
  specified for all other elements are ignored.
  *AUTO: All eligible access paths are automatically assigned to access path
  maintenance jobs and are applied in parallel.
  *INST: The policy is set to the value used for the installation. This is only valid
  when a value other than *INST is specified for the data group definition (DGDFN).
  *MANUAL: The access paths to be maintained in parallel are specified manually.
  Use this method only under the direction of a certified MIMIX representative.

Number of jobs - Specifies the number of parallel jobs to use for access path maintenance.
  *SAME: The value is not changed.
  *CALC: MIMIX calculates the number of parallel access path maintenance jobs to
  use, with a minimum of two jobs.
  *INST: The policy is set to the value used for the installation. This is only valid
  when a value other than *INST is specified for the data group definition (DGDFN).
  number-of-jobs: Specifies the number of parallel access path maintenance jobs to
  use. Valid values range from 1 through 1000.

Delay interval (sec) - Specifies the number of seconds to wait between iterations of access path
maintenance operations. The default is 60 seconds.
  *SAME: The value is not changed.
  *INST: The policy is set to the value used for the installation. This is only valid
  when a value other than *INST is specified for the data group definition (DGDFN).
  number-of-seconds: Specifies the number of seconds to wait between iterations.
  Valid values range from 5 through 900 seconds.

Log retention (days) - Specifies the number of days to retain log records for the parallel access path
maintenance function. The default value is 1 day.
  *SAME: The value is not changed.
  *INST: The policy is set to the value used for the installation. This is only valid
  when a value other than *INST is specified for the data group definition (DGDFN).
  *NONE: No logging is performed.
  number-of-days: Specifies the number of days to retain log records for parallel
  access path maintenance jobs. Valid values range from 1 through 365 days.
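As an illustration only, a command such as the following could set the policy for one data group so that all eligible access paths are maintained automatically with a calculated number of jobs, a 60-second delay interval, and one day of log retention. The data group name is hypothetical and the element order is an assumption; prompt the PRLAPMNT parameter with F4 to confirm:
SETMMXPCY DGDFN(MYDG SYSA SYSB) PRLAPMNT(*AUTO *CALC 60 1)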
Minimized journal entry data
MIMIX supports the ability to process minimized journal entries placed in a user
journal for object types of file (*FILE) and data area (*DTAARA).
IBM i provides the ability to create journal entries using an internal format that
minimizes the data specific to these object types that is stored in the journal entry.
This support is enabled in the MIMIX create or change journal definition commands
and built using the Build Journaling Environment (BLDJRNENV) command.
When a journal entry for one of these object types is generated, the system compares
the size of the minimized format to the standard format and places whichever is
smaller in the journal. For database files, only update journal entries (R-UP and R-UB)
and rollback-type update entries (R-BR and R-UR) can be minimized.
If MINENTDTA(*FILE) or MINENTDTA(*FLDBDY) is in effect and a database record
includes LOB fields, LOB data is journaled only when that LOB is changed. Changes
to other fields in the record will not cause the LOB data to be journaled unless the
LOB is also changed. When database files have records with static LOB values,
minimized journal entries can produce considerable savings.
The benefit of using minimized journal entries is that less data is stored in the journal.
In a MIMIX replication environment, you also benefit by having less data sent over
communications lines and saved in MIMIX log spaces. Factors in your environment,
such as the percentage of journal entries that are updates (R-UP), the size of
database records, and the number of bytes typically changed in an update, may influence
how much benefit you achieve.
Restrictions of minimized journal entry data
The following MIMIX and operating system restrictions apply:
If you plan to use keyed replication, do not use minimized journal entry data.
Minimized journal entries cannot be used when MIMIX support for keyed
replication is in use, since the key may not be present in a minimized journal entry.
Minimized before-images cannot be selected for automatic before-image
synchronization checking.
Your environment may impose additional restrictions:
If you rely on full image captures in the receiver as part of your auditing rules, do
not configure for minimized entry data.
Even if you do not rely on full image captures for auditing purposes, consider the
effect of how data is minimized. The minimizing that results from specifying *FILE
does not occur on field boundaries. Therefore, the entry-specific data may not be
viewable and may not be used for auditing purposes. When *FLDBDY is
specified, file data for modified fields is minimized on field boundaries. With
*FLDBDY, entry-specific data is viewable and may be used for auditing purposes.
Configuring for minimized journal entry data may affect your ability to use the
Work with Data Group File Entries on Hold (WRKDGFEHLD) command. For
example, using option 2 (Change) on WRKDGFEHLD to convert a minimized
record update (RUP) to a record put (RPT) will result in failure when applied.
RPT entries require the presence of a full, non-minimized record.
See the IBM book Backup and Recovery for restrictions and usage of journal entries
with minimized entry-specific data.
Configuring for minimized journal entry data
By default, MIMIX user journal replication processes use complete journal entry data.
To enable MIMIX to use minimized journal entry data for specific object types, do the
following:
1. From the Work with Journal Definitions display, use option 2 (Change) to access
the journal definition you want.
2. On the following display, press Enter twice to see all prompts for the display. Page
down to the bottom of the display.
3. Press F10 (Additional parameters) to access the Minimize entry specific data
prompt.
4. Specify the values you want at the Minimize entry specific data prompt and press
Enter.
5. In order for the changes to be effective, you must build the journaling environment
using the updated journal definition. To do this, type 14 (Build) next to the
definition you just modified on the Work with Journal Definitions display and press
Enter.
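The equivalent change can be sketched from a command line. The journal definition name here is hypothetical and the parameter syntax is an assumption to confirm by prompting with F4:
CHGJRNDFN JRNDFN(MYJRNDFN SYSA) MINENTDTA(*FLDBDY)
As noted above, the change takes effect only after the journaling environment is rebuilt from the updated journal definition.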
Configuring database apply caching
Customers who need faster performance from database apply processes can take
advantage of the functionality made available through the DB apply cache
(DBAPYCACHE) policy on the Set MIMIX Policies (SETMMXPCY) command.
Customers who enable this policy will see a significant, general improvement in
database apply performance. This functionality is ideal for customers who have
allocated a highly active file to its own apply session and need more performance but
do not want to purchase the Journal Caching feature from IBM (High Availability
Journal Performance IBM i option 42, Journal Standby feature and Journal caching).
Note: Database apply caching within MIMIX cannot be used in conjunction with IBM
option 42. For more information about MIMIX support for IBM option 42, see
Configuring for high availability journal performance enhancements on
page 321.
When the DBAPYCACHE policy is enabled, before and after journal images are sent
to the local journal on the target system. This will increase the amount of storage
needed for journal receivers on the target system if before images were not previously
being sent to the journal.
The DBAPYCACHE policy is shipped so that it is disabled at both the installation level
and data group level. This preserves the behavior of database apply processes that
existed in version 6 and earlier versions of MIMIX.
To enable this functionality, do the following from the management system:
1. From the command line type SETMMXPCY and press F4 (Prompt).
2. For the Data group definition, do one of the following:
To set the policy for the installation, verify that the value specified for Data
group definition is *INST.
To set the policy for a specific data group, specify the full three-part name.
3. Press Enter. You will see all the policies and their current values for the level you
specified in Step 2.
4. Use the Page Down key to locate the DB apply cache policy. Specify *ENABLED.
5. To accept the changes, press Enter.
Changes to this policy are not effective until the database apply processes for the
affected data groups have been ended and restarted.
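As a minimal sketch, assuming a data group named MYDG between systems SYSA and SYSB, the policy could also be enabled directly from a command line; prompt with F4 to confirm the parameter:
SETMMXPCY DGDFN(MYDG SYSA SYSB) DBAPYCACHE(*ENABLED)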
Configuring for high availability journal performance
enhancements
MIMIX supports IBM's High Availability Journal Performance IBM i option 42, Journal
Standby feature and Journal caching. These high availability performance
enhancements improve replication performance on the target system and provide
significant performance improvement by eliminating the need to start journaling at
switch time.
MIMIX support of IBM's high availability performance enhancements consists of two
independent components: journal standby state and journal caching. These
components work individually or together, although when used together, each
component must be enabled separately. Journal standby state minimizes replication
impact on the target system by providing the benefits of an active journal without
writing the journal entries to disk. As such, journal standby state is particularly helpful
in saving disk space in environments that do not rely on journal entries for other
purposes. Moreover, journal standby state minimizes switch times by retaining the
journal relationship for replicated objects.
Journal caching provides a means by which to cache journal entries and their
corresponding database records into main storage and write to disk only as
necessary. Journal caching is particularly helpful during batch operations when large
numbers of add, update, and delete operations against journaled objects are
performed.
Journal standby state and journal caching can be used in source send configuration
environments as well as in environments where remote journaling is enabled. For
restrictions of MIMIX support of IBM's high availability performance enhancements,
see Restrictions of high availability journal performance enhancements on
page 323.
Note: For more information, also see the topics on journal management and system
performance in the IBM eServer iSeries Information Center.
Journal standby state
Journal standby state minimizes replication impact by providing the benefits of an
active journal without writing the journal entries to disk. As such, journal standby state
is particularly helpful in saving disk space in environments that do not rely on journal
entries for other purposes. Moreover, if you are journaling on apply, journal standby
state can provide a performance improvement on the apply session.
If you are not using journaling on target and want to have a switchable data group,
using journal standby state may offer a benefit in reduced switch time. When a
journal is in standby state, it is not necessary to start journaling for objects on the
target system prior to switching. All that is necessary prior to switching is to change
the journal state to active.
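For example, assuming a hypothetical journal APPJRN in library APPLIB, the IBM Change Journal (CHGJRN) command can make that state change at switch time:
CHGJRN JRN(APPLIB/APPJRN) JRNSTATE(*ACTIVE)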
You can start or stop journaling while the journal standby state is enabled. However,
commitment control cannot be used for files that are journaled to any journal in
standby state. Most referential constraints cannot be used when the journal is in
standby state. When journal standby state is not an option because of these
restrictions, journal caching can be used as an alternative. See Journal caching on
page 322.
Minimizing potential performance impacts of standby state
It is possible to experience degraded performance of database apply (DBAPY)
processing after enabling journal standby state. You can reduce potential impacts by
using the Change Recovery for Access Paths (CHGRCYAP) command, which allows
you to change the target access path recovery time for the system.
Note: While this procedure improves performance, it can cause potentially longer
initial program loads (IPL). Deciding to use standby state is a trade off
between run-time performance and IPL duration.
Do the following:
1. On a command line, type the following and press Enter:
CHGRCYAP
2. At the Include access paths prompt, specify *ELIGIBLE to include only eligible
access paths in the recovery time specification.
Journal caching
Journal caching is an attribute of the journal that is defined. When journal caching is
enabled, the system caches journal entries and their corresponding database records
into main storage. This means that neither the journal entries nor their corresponding
database records are written to disk until an efficient disk write can be scheduled. This
usually occurs when the buffer is full, or at the first commit, close, or file end of data.
Because most database transactions must no longer wait for a synchronous write of
the journal entries to disk, the performance gain can be significant.
For example, batch operations must usually wait for each new journal entry to be
written to disk. Journal caching can be helpful during batch operations when large
numbers of add, update, and delete operations against journaled objects are
performed.
The default value for journal caching is *BOTH. It is recommended that you use the
default value of *BOTH to perform journal caching on both the source and the target
systems.
For more information about journal caching, see IBM's Redbooks Technote, Journal
Caching: Understanding the Risk of Data Loss.
MIMIX processing of high availability journal performance enhancements
You can enable both journal standby state and journal caching using a combination of
MIMIX and IBM commands. For example, the Journal state (JRNSTATE) parameter,
available on the IBM command Change Journal (CHGJRN), offers equivalent and
complementary function to the MIMIX parameter Target journal state (TGTSTATE).
Note: For purposes of this document, only MIMIX parameters are described in detail.
To enable journal standby state or journal caching in a MIMIX environment, two
parameters have been added to the Create Journal Definition (CRTJRNDFN) and
Change Journal Definition (CHGJRNDFN) commands: Target journal state
(TGTSTATE) and Journal caching (JRNCACHE). See Creating a journal definition
on page 192 and Changing a journal definition on page 194.
When journaling is used on the target system, the TGTSTATE parameter specifies the
requested status of the target journal. Valid values for the TGTSTATE parameter are
*ACTIVE and *STANDBY. When *ACTIVE is specified and the data group associated
with the journal definition is journaling on the target system (JRNTGT(*YES)), the
target journal state is set to active when the data group is started. When *STANDBY is
specified, objects are journaled on the target system, but most journal entries are
prevented from being deposited into the target journal. An additional value, *SAME, is
valid for the CHGJRNDFN command, which indicates the TGTSTATE value should
remain unchanged.
The JRNCACHE parameter specifies whether the system should cache journal
entries in main storage before writing them to disk. Valid values for the JRNCACHE
parameter are *TGT, *BOTH, *NONE, or *SRC. Although journal caching can be
configured on the target system, source system, or both, it is recommended to be
performed on both (*BOTH) the target system and source system. The recommended
value of *BOTH is the default. An additional value, *SAME, is valid for the
CHGJRNDFN command, which indicates the JRNCACHE value should remain
unchanged.
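As a sketch of setting both parameters on one journal definition (the definition name is hypothetical; prompt CHGJRNDFN with F4 to confirm the syntax):
CHGJRNDFN JRNDFN(MYJRNDFN SYSB) TGTSTATE(*STANDBY) JRNCACHE(*BOTH)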
Requirements of high availability journal performance enhancements
Feature 5117, i5/OS Option 42 - HA Journal Performance, is required in order to use
MIMIX support of IBM's high availability performance enhancements. Each system in
the replication environment must have this software installed and be up to date with
the latest PTFs and service packs applied.
Restrictions of high availability journal performance enhancements
MIMIX support of IBM's high availability performance enhancements has a unique set
of restrictions and high availability considerations. Make sure that you are aware of
these restrictions before using journal standby state or journal caching in your MIMIX
environment.
When using journal standby state or journal caching, be aware of the following
restrictions documented by IBM:
Do not use these high availability performance enhancements in conjunction with
commitment control. For journals in standby mode, commitment control entries
are not sent to or deposited in the journal.
Note: MIMIX does not use commitment control on the target system. As such,
MIMIX support of IBM's high availability performance enhancements can
be configured on the target system even if commitment control is being
used on the source system.
Do not use these high availability performance enhancements in conjunction with
referential constraints, with the exception of referential constraint types of
*RESTRICT.
Also be aware of the following additional restrictions:
Do not change journal standby state or journal caching on IBM-supplied journals.
These journal names begin with Q and reside in libraries whose names also
begin with Q (other than QGPL). Attempting to change these journals results in an error
message.
Do not place a remote journal in journal standby state. Journal caching is also not
allowed on remote journals.
Do not use MIMIX support of IBM's high availability performance enhancements in
a cascading environment.
Caching extended attributes of *FILE objects
In order to accurately replicate actions against *FILE objects, it is sometimes
necessary to retrieve the extended attribute of a *FILE object, such as PF, LF or
DSPF. Whenever large volumes of journal entries for *FILE objects are replicated
from the security audit journal (system journal), MIMIX caches this information for a
fixed set of *FILE objects to prevent unnecessary retrievals of the extended attribute.
The result is a potential reduction of CPU consumption by the object send job and a
significant performance improvement.
This function can be tailored to suit your environment. The maximum size of the
cache is controlled through the use of a data area in the MIMIX product library. The
cache size indicates the number of entries that can be contained in the cache. If the
data area is not created or does not exist in the MIMIX product library, the size of the
cache defaults to 15.
To configure the extended attribute cache, do the following:
1. Create the data area on the systems on which the object send jobs are running.
Type the following command:
CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR) LEN(2)
2. Specify the cache size (xx). Valid cache values are numbers 00 through 99. Type
the following command:
CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE('xx, RCVJRNE_delay_values')
Notes:
The four RCVJRNE delay values are specified in this string along with the
cache size. See topic Increasing data returned in journal entry blocks by
delaying RCVJRNE calls on page 326 for more information.
Using 00 for the cache size value disables the extended attribute cache.
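For example, assuming the product is installed in a library named MIMIX (installation library names vary), the following would set a 30-entry cache:
CHGDTAARA DTAARA(MIMIX/MXOBJSND) VALUE('30')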
Increasing data returned in journal entry blocks by delaying RCVJRNE calls
Enhancements have been made to MIMIX to increase the performance of the object
send job when a small number of journal entries are present during the Receive
Journal Entry (RCVJRNE) call. Journal entries are received in configurable-sized
blocks that have a default size of 99,999 bytes. When multiple RCVJRNE calls are
performed and each block retrieved is less than 99,999 bytes, unnecessary overhead
is created.
Through additional controls added to the MXOBJSND *DTAARA objects within the
MIMIX installation library, you can now specify the size of the block of data received
from RCVJRNE and delay the next RCVJRNE call based on a percentage of the
requested block size. Doing so increases the probability of receiving a full journal
entry block and improves object send performance, reducing the number of
RCVJRNE calls while simultaneously increasing the quantity of data returned in each
block. This delay, along with the extended file attribute cache capability, also reduces
CPU consumption by the object send job. See Caching extended attributes of *FILE
objects on page 325 for related information.
Understanding the data area format
This enhancement allows you to provide byte values for the block size to receive data
from RCVJRNE, as well as specify the percentage of that block size to use for both a
small delay block and a medium delay block in the data area. These values are added
in segments to the string of characters used by the file attribute cache size. Each
block segment is followed by a multiplier value, which determines how long the
previously specified journal entry block is delayed. The duration of the delay is the
multiplier value multiplied by the value specified on the Reader wait time (seconds)
(RDRWAIT) parameter in the data group definition. The RDRWAIT default value is 1
second. The RCVJRNE block size is specified in kilobytes, ranging from 32 Kb to
4000 Kb. If not specified, the default size is 99,999 bytes (100 Kb - 1).
The following defines each segment; the number in parentheses is the number of
characters that segment can contain:
DTAARA VALUE('cache_size(2), small_block_percentage(2), small_multiplier(2), medium_block_percentage(2), medium_multiplier(2), block_size(4)')
To illustrate the effect of specific delay and multiplier values, assume the
following:
DTAARA VALUE('15, 10, 02, 30, 01, 0200')
In this example, a small block is defined as any journal entry block consisting of up to 10
percent of the RCVJRNE block size of 200 Kb, or 20,000 bytes. Assuming the
RDRWAIT default is in effect, small journal entry blocks will be delayed for 2 seconds
before the next RCVJRNE call. Similarly, a medium block is defined as any journal
entry block containing between 10 and 30 percent of the RCVJRNE block size, or
between 20,001 and 60,000 bytes. Medium blocks are then delayed for 1 second,
assuming the default RDRWAIT value is used.
Note: Delays are not applied to blocks larger than the specified medium block
percentage. In the previous example, no delays will be applied to blocks larger
than 30 percent of the RCVJRNE block size, or 60,000 bytes.
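Assuming again that the product is installed in a library named MIMIX (a hypothetical name), the values illustrated above would be put in place with:
CHGDTAARA DTAARA(MIMIX/MXOBJSND) VALUE('15, 10, 02, 30, 01, 0200')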
Determining if the data area should be changed
Before changing the data area, it is recommended that you contact a Certified MIMIX
Consultant for assistance with running object send processing with diagnostic
messages enabled. Review the set of LVI0001 messages returned as a result.
By default, the RCVJRNE block size is 99,999 bytes, with the small block value set to
5,000 bytes and the medium block value set to 20,000 bytes. If the resulting
messages indicate that you are processing full journal entry blocks, there is no need
to add a delay to the RCVJRNE call. In this case, the object send job is already
running as efficiently as possible. Note that a block is considered full when the next
journal entry in the sequence cannot fit within the size limitations of the block currently
being processed.
Note: Reviewing these messages can also be helpful once you have changed the
default values, to ensure that the object send job is operating efficiently.
The following are examples of LVI0001 messages:
LVI0001 OM2120 Block Sizes (in Kb): Small=20; Medium=60
LVI0001 OM2120 Block Counts: Small=129; Medium=461; Large=46; Full=1
LVI0001 OM2120 Using RCVJRNE Block Size (in Kb): 200
LVI0001 OM2120 - Range Counts: 0%=80; 2%=28; 5%=21; 10%=23; 15%=56; 20%=161; 25%=221; 30%=23
LVI0001 OM2120 - Range Counts: 40%=10; 50%=4; 60%=5; 70%=3; 80%=0; 90%=1; Full=1
OM2120 File Attr Cache: Size= 30, no cache lookup attempts
In the above example, 636 blocks were sent but only one of the sent blocks was full.
Making changes to the delay multiplier or altering the small or medium block size
specification would probably make sense in this scenario. Recommendations for
changing the block size values are provided in Configuring the RCVJRNE call
delay and block values on page 327.
Configuring the RCVJRNE call delay and block values
To configure the delay and block values when retrieving journal entry blocks, do the
following:
Note: Prior to configuring the RCVJRNE call delay, carefully read the information
provided in Understanding the data area format on page 326 and
Determining if the data area should be changed on page 327.
1. Create the data area on the systems on which the object send jobs are running.
Type the following command:
CRTDTAARA DTAARA(installation_library/MXOBJSND) TYPE(*CHAR) LEN(20)
Note: Although you will see improvements from the file attribute cache with the
default character value (LEN(2)), enhancements are maximized by
recreating the MXOBJSND data area as LEN(20) to use the RCVJRNE
call delays.
2. Specify the cache size along with the delay and block values. For example, type
the following command:
CHGDTAARA DTAARA(installation_library/MXOBJSND) VALUE('cache_size, 10, 02, 30, 01, 0100')
Note: For information about the cache size, see Caching extended attributes of
*FILE objects on page 325.
Configuring high volume objects for better performance
Some objects, such as data areas and data queues, can have significant activity
against them and can cause MIMIX to use significant CPU resources.
One or several programs can use the QSNDDTAQ and QRCVDTAQ APIs to generate
thousands of journal entries for a single *DTAQ. For each journal entry, system journal
replication processes package all of the entries of the *DTAQ and send the package to
the apply system. MIMIX then individually applies each *DTAQ entry using the QSNDDTAQ
API.
If the data group is configured for multiple Object retrieve processing (OBJRTVPRC)
jobs, then several object retrieve jobs could be started (up to the maximum
configured) to handle the activity against the *DTAQ.
MIMIX contains redundancy logic that eliminates multiple journal entries for the same
object when the entire object is replicated. When you configure a data group for
system journal replication, you should:
Place all *DTAQs in the same object-only data group.
Limit the maximum number of object retrieve jobs for the data group to one.
Defaults can be used for the other object data group jobs.
Improving performance of the #MBRRCDCNT audit
Environments that use commitment control may find that, in some conditions, a
request to run the #MBRRCDCNT audit or the Compare Record Count
(CMPRCDCNT) command can be extremely long-running. This is possible in
environments that use commitment control with long-running commit transactions that
include large numbers (tens of thousands) of record operations within one
transaction. In such an environment, the compare request can be long running when
the number of members to be compared is very large and there are uncommitted
changes present at the time of the request.
The Set MIMIX Policies (SETMMXPCY) command includes the CMPRCDCNT
commit threshold policy (CMPRCDCMT parameter), which provides the ability to specify
a threshold at which requests to compare record counts will no longer perform the
comparison due to commit cycle activity on the source system.
The shipped default values for this policy (CMPRCDCMT parameter) permit record
count comparison requests without regard to commit cycle activity on the source
system. These policy default values are suitable for environments that do not have
the commitment control environment indicated, or that can tolerate a long-running
comparison.
If your environment cannot tolerate a long-running request, you can specify a numeric
value for the CMPRCDCMT parameter for either the MIMIX installation or for a
specific data group. This will change the behavior of MIMIX by affecting what is
compared, and can improve performance of #MBRRCDCNT and CMPRCDCNT
requests.
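For example, a sketch of setting the policy for a specific data group, assuming the
command accepts a data group definition and a numeric threshold; the names are
placeholders and the threshold of 10,000 matches the example that follows:
/* MYDG, SYS1, SYS2 are placeholder names */
SETMMXPCY DGDFN(MYDG SYS1 SYS2) CMPRCDCMT(10000)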
Note: Equal record counts suggest but do not guarantee that files are synchronized.
When a threshold is specified for the CMPRCDCNT commit threshold policy,
record count comparisons can have a higher number of file members that are
not compared. This must be taken into consideration when using the
comparison results to gauge whether systems are synchronized.
A numeric value for the CMPRCDCMT parameter defines the maximum number of
uncommitted record operations that can exist for files waiting to be applied in an apply
session at the time a compare record count request is invoked. The number specified
must be representative of the number of uncommitted record operations.
When a numeric value is specified, MIMIX recognizes whether the number of
uncommitted record operations for an apply session exceeds the threshold at the time
a compare request is invoked. If an apply session has not reached the threshold, the
comparison is performed. If the threshold is exceeded, MIMIX will not attempt to
compare members from that apply session. Instead, the results will display the *CMT
value for the difference indicator, indicating that commit cycle activity on the source
system prevented active processing from comparing counts of current records and
deleted records in the selected member.
Each database apply session is evaluated against the threshold independently. As a
result, it is possible for record counts to be compared for files in one apply session but
not be compared in another apply session, as illustrated in the following example.
Example: This example shows the result of setting the policy for a data group to a
value of 10,000. Table 42 shows the files replicated by each of the apply sessions
used by the data group and the result of comparison. Because of the number of
uncommitted record operations present at the time of the request, files processed by
apply sessions A and C are not compared.
Table 42. Sample results with a policy threshold value of 10,000.

Apply     Files   Uncommitted Record Operations       Result
Session           Per File    Apply Session Total
A         A01     11,000      >10,000                 Not compared, *CMT
          A02     0                                   Not compared, *CMT
B         B01     5,000       <10,000                 Compared
          B02     0                                   Compared
C         C01     7,000       >10,000                 Not compared, *CMT
          C02     6,000                               Not compared, *CMT
D         D01     50          <10,000                 Compared
          D02     500                                 Compared
CHAPTER 16 Configuring advanced replication techniques
This chapter describes how to modify your configuration to support advanced
replication techniques for user journal (database) and system journal (object)
replication.
User journal replication: The following topics describe advanced techniques for
user journal replication:
Keyed replication on page 334 describes the requirements and restrictions of
replication that is based on key values within the data. This topic also describes
how to configure keyed replication at the data group or file entry level as well as
how to verify key attributes.
Data distribution and data management scenarios on page 339 defines and
identifies configuration requirements for the following techniques: bi-directional
data flow, file combining, file sharing, file merging, broadcasting, and cascading.
Trigger support on page 346 describes how MIMIX handles triggers and how to
enable trigger support. Requirements and considerations for replication of
triggers, including considerations for synchronizing files with triggers, are
included.
Constraint support on page 348 identifies the types of constraints MIMIX
supports. This topic also describes delete rules for referential constraints that can
cause dependent files to change and MIMIX considerations for replication of
constraint-induced modifications.
Handling SQL identity columns on page 350 describes the problem of duplicate
identity column values and how the Set Identity Column Attribute (SETIDCOLA)
command can be used to support replication of SQL tables with identity columns.
Requirements and limitations of the SETIDCOLA command as well as alternative
solutions are included.
Collision resolution on page 357 describes available support within MIMIX to
automatically resolve detected collisions without user intervention and its
requirements. This topic also describes how to define and work with collision
resolution classes.
System journal replication: The following topics describe advanced techniques for
system journal replication:
Omitting T-ZC content from system journal replication on page 362 describes
considerations and requirements for omitting content of T-ZC journal entries from
replicated transactions for logical and physical files.
Selecting an object retrieval delay on page 366 describes how to set an object
retrieval delay value so that a MIMIX lock on an object does not interfere with your
applications. This topic includes several examples.
Configuring to replicate SQL stored procedures and user-defined functions on
page 368 describes the requirements for replicating these constructs and how to
configure MIMIX to replicate them.
Using Save-While-Active in MIMIX on page 370 describes how to change the type of
save-while-active option to be used when saving objects. You can view and
change these configuration values for a data group through an interface such as
SQL or DFU.
Keyed replication
By default, MIMIX user journal replication processes use positional replication. You
can change from positional replication to keyed replication for database files.
Keyed vs positional replication
In data groups that are configured for user journal replication, default values use
positional replication. In positional file replication, data on the target system is
identified by position, or relative record number (RRN), in the file member. If data
exists in a file on the source system, an exact copy must exist in the same position in
a file on the target system. When the file on the source system is updated, MIMIX
finds the data in the exact location on the target system and updates that data with the
changes.
User journal replication processes support the update of files by key, allowing
replication to be based on key values within the data instead of by the position of the
data within the file. Keyed replication support is subject to the requirements and
restrictions described below.
Positional file replication provides the best performance. Keyed file replication offers a
greater level of flexibility, but you may notice greater CPU usage when MIMIX must
search each file for the specified key. You also need to be aware that data collisions
can occur when an attempt is made to simultaneously update the same data from two
different sources.
Positional replication is recommended for most high availability requirements. Keyed
replication is best used for more flexible scenarios, such as file sharing, file routing, or
file combining.
Requirements for keyed replication
Journal images - MIMIX may need to be configured so that both before and after
images of the journal transaction are placed in the journal.
The Journal image element of the File and tracking entry options (FEOPT) parameter
controls which journal images are placed in the journal. Default values result in only
an after-image of the record. However, some configurations require both before-
images and after-images. The Journal image value specified in the data group
definition is in effect unless a different value is specified for the FEOPT parameter in a
file entry or object entry.
It is recommended that you use the Journal image value of *BOTH whenever there
are file entries with keyed replication to prevent before images from being filtered out
by the database send process. If the unique key fields of the database file are
updated by applications, you must use the value *BOTH.
Unique access path - At least one unique access path must exist for the file being
replicated. The access path can be either part of the physical file itself or it can be
defined in a logical file dependent on the physical file.
You can use the Verify Key Attributes (VFYKEYATR) command to determine whether
a physical file is eligible for keyed replication. See Verifying key attributes on
page 338.
Restrictions of keyed replication
The Compare File Data (CMPFILDTA) command cannot compare files that are
configured for keyed replication. If you run the #FILDTA audit or the CMPFILDTA
command against keyed files, the files are excluded from the comparison and a
message indicates that files using *KEYED replication were not processed.
When keyed replication is in use, the journal and journal definition cannot be
configured to allow object types to support minimized entry specific data. For more
information, see Minimized journal entry data on page 318.
Implementing keyed replication
You can implement keyed replication for an entire data group or for individual data
group file entries. If you configure a data group for keyed replication, MIMIX uses
keyed replication as the default for all processing of all associated data group file
entries. If you configure individual data group file entries for keyed replication, the
values you define in the data group file entry override the defaults used by the data
group for the associated file.
Changing a data group configuration to use keyed replication
You can define keyed replication for a data group when you are initially configuring
MIMIX or you can change the configuration later. To use keyed replication for all
database replication defined for a data group, the following requirements must be
met:
1. Before you change a data group definition to support keyed replication, do the
following:
a. Verify that the files defined to the data group are journaled correctly. Do not
continue until this is verified.
b. If the files are not currently journaled correctly, you need to end journaling for
the file entries defined to the data group. Use topic Ending Journaling in the
MIMIX Operations book.
2. In the data group definition used for replication you must specify the following:
Data group type of *ALL or *DB.
DB journal entry processing must have Before images as *SEND for source
send configurations. When using remote journaling, all journal entries are sent.
Attention: If you attempt to change the file replication from
*KEYED to *POSITION, a warning message will be returned that
indicates that the position of the file may not match the position of
the file on the backup system. Attempting to change from keyed to
positional replication can result in a mismatch of the relative record
numbers (RRN) between the target system and source system.
Verify that you have the value you need specified for the Journal image
element of the File and tracking ent. options. *BOTH is recommended.
File and tracking ent. options must specify *KEYED for the Replication type
element.
3. The files identified by the data group file entries for the data group must be eligible
for keyed replication. See topic Verifying Key Attributes in the MIMIX Operations
book.
4. If you have modified file entry options on individual data group file entries, you
need to ensure that the values used are compatible with keyed replication.
5. Start journaling for the file entries using Starting journaling for physical files on
page 305.
Changing a data group file entry to use keyed replication
By default, data group file entries use the same file entry options as specified in the
data group definition. If you configure individual data group file entries for keyed
replication, the values you define in the data group file entry override the defaults
used by the data group for the associated file.
If you want to use keyed replication for one or more individual data group file entries
defined for a data group, you need the following:
1. Before you change a data group file entry to support keyed replication, ensure
that the file is journaled correctly. If it is not, for example if the data group file entry
is not set as described in Step 4, you will need to end journaling for the file
entries.
2. The data group definition used for replication must have a Data group type of
*ALL or *DB.
3. DB journal entry processing must have Before images as *SEND for source send
configurations. When using remote journaling, all journal entries are sent.
4. The data group file entry must have File and tracking ent. options set as follows:
To override the defaults from the data group definition to use keyed replication
on only selected data group file entries, verify that you have the value you need
specified for the Journal image (*BOTH is recommended) and specify *KEYED
for the Replication type.
If you are using keyed replication at the data group level, the data group file
entries can use the default value *DGDFT for both Journal image and
Replication type.
Note: You can use any of the following ways to configure data group file entries
for keyed replication:
Use either procedure in topic Loading file entries on page 246 to add or
modify a group of data group file entries. If you are modifying existing file
entries in this way, you should specify *UPDADD for the Update option
parameter.
Use topic Adding a data group file entry on page 252 to create a new file
entry.
Use topic Changing a data group file entry on page 253 to modify an
existing file entry.
5. The files identified by the data group file entries for the data group must be eligible
for keyed replication. See topic Verifying Key Attributes in the MIMIX Operations
book.
6. After you have changed individual data group file entries, you need to start
journaling for the file entries using Starting journaling for physical files on
page 305.
Verifying key attributes
Before you configure for keyed replication, verify that the file or files for which you
want to use keyed replication are actually eligible.
Do the following to verify that the attributes of a file are appropriate for keyed
replication:
1. On a command line, type VFYKEYATR (Verify Key Attributes). The Verify Key
Attributes display appears.
2. Do one of the following:
To verify a file in a library, specify a file name and a library.
To verify all files in a library, specify *ALL and a library.
To verify files associated with the file entries for a data group, specify
*MIMIXDFN for the File prompt and press Enter. Prompts for the Data group
definition appear. Specify the name of the data group that you want to check.
3. Press Enter.
4. A spooled file is created that indicates whether you can use keyed replication for
the files in the library or data group you specified. Display the spooled file
(WRKSPLF command) or use your standard process for printing. You can use
keyed replication for the file if *BOTH appears in the Replication Type Allowed
column. If a value appears in the Replication Type Defined column, the file is
already defined to the data group with the replication type shown.
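For example, the following hypothetical invocations correspond to the choices in
step 2. The FILE and DGDFN keyword names are assumptions based on the prompts
described above, and the object names are placeholders:
/* keyword names and object names are assumptions */
VFYKEYATR FILE(MYLIB/MYFILE)
VFYKEYATR FILE(MYLIB/*ALL)
VFYKEYATR FILE(*MIMIXDFN) DGDFN(MYDG SYS1 SYS2)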
Data distribution and data management scenarios
MIMIX supports a variety of scenarios for data distribution and data management
including bi-directional data flow, file combining, file sharing, and file merging. MIMIX
also supports data distribution techniques such as broadcasting, and cascading.
Often, this support requires a combination of advanced replication techniques as well
as customizing. These techniques require additional planning before you configure
MIMIX. You may need to consider the technical aspects of implementing a technique
as well as how your business practices may be affected. Consider the following:
Can each system involved modify the data?
Do you need to filter data before sending it to another system?
Do you need to implement multiple techniques to accomplish your goal?
Do you need customized exit programs?
Do any potential collision points exist and how will each be resolved?
MIMIX user journal replication provides filtering options within the data group
definition. Also, MIMIX provides options within the data group definition and for
individual data group file entries for resolving most collision points. Additionally,
collision resolution classes allow you to specify different resolution methods for each
collision point.
Configuring for bi-directional flow
Both MIMIX user journal and system journal replication processes allow data to flow
bi-directionally, but their implementations and configuration requirements are
distinct.
In user journal replication processing, bi-directional data flow is a data sharing
technique in which the same named database file can be replicated between
databases on two systems in two directions at the same time. When MIMIX user
journal replication processes are configured for bi-directional data flow, each
system is both a source system and a target system.
System journal replication processing supports the bi-directional flow of objects
between two systems, but it does not support simultaneous (bi-directional)
updates to the same object on multiple systems. Updating the same object from
two systems at the same time can cause a loss of data integrity.
File sharing is a scenario in which a file can be shared among a group of systems
and can be updated from any of the systems in the group. MIMIX implements file
sharing among systems defined to the same MIMIX installation. To enable file
sharing, MIMIX must be configured to allow bi-directional data flow. An example of file
sharing is when an enterprise maintains a single database file that must be updated
from any of several systems.
Bi-directional requirements: system journal replication
To configure system journal replication processes to support bi-directional flow of
objects, you need the following:
Configure two data group definitions between the two systems. In one data group,
specify *SYS1 for the Data source (DTASRC) parameter. In the other data group,
specify *SYS2 for this parameter.
Each data group definition should specify *NO for the Allow to be switched
(ALWSWT) parameter.
Note: In system journal replication, MIMIX does not support simultaneous updates to
the same object on multiple systems and does not support conflict resolution
for objects. Once an object is replicated to a target system, system journal
replication processes prevent looping by not allowing the same object,
regardless of name mapping, to be replicated back to its original source
system.
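A sketch of the two data group definitions described above, assuming data group
names DGAB and DGBA between systems SYS1 and SYS2; all parameters other than
those named in the requirements are omitted:
/* DGAB, DGBA, SYS1, SYS2 are placeholder names */
CRTDGDFN DGDFN(DGAB SYS1 SYS2) DTASRC(*SYS1) ALWSWT(*NO)
CRTDGDFN DGDFN(DGBA SYS1 SYS2) DTASRC(*SYS2) ALWSWT(*NO)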
Bi-directional requirements: user journal replication
To configure user journal replication processes to support bi-directional data flow, you
need the following:
Configure two data group definitions between the two systems. In one data group,
specify *SYS1 for the Data source (DTASRC) parameter. In the other data group,
specify *SYS2 for this parameter.
For each data group definition, set the DB journal entry processing (DBJRNPRC)
parameter so that its Generated by MIMIX element is set to *IGNORE. This
prevents any journal entries that are generated by MIMIX from being sent to the
target system and prevents looping.
The files defined to each data group must be configured for keyed replication. Use
topics Keyed replication on page 334 and Verifying key attributes on page 338
to determine if files can use keyed replication.
Analyze your environment to determine the potential collision points in your data.
You need to understand how each collision point will be resolved. Consider the
following:
Can the collision be resolved using the collision resolution methods provided in
MIMIX or do you need customized exit programs? See Collision resolution on
page 357.
How will your business practices be affected by collision scenarios?
For example, say that you have an order entry application that updates shared
inventory records, as shown in Figure 19. If two locations attempt to access the last item in
stock at the same time, which location will be allowed to fill the order? Does the other
location automatically place a backorder or generate a report?
Figure 19. Example of bi-directional configuration to implement file sharing.
Configuring for file routing and file combining
File routing and file combining are data management techniques supported by MIMIX
user journal replication processes. The way in which data is used can affect the
configuration requirements for a file routing or file combining operation. Evaluate the
needs for each pair of systems (source and target) separately. Consider the following:
Does the data need to be updated in both directions between the systems? If you
need bi-directional data flow, see topic Configuring for bi-directional flow on
page 339.
Will users update the data from only one or both systems? If users can update
data from both systems, you need to prevent the original data from being returned
to its original source system (recursion).
Is the file routing or file combining scenario a complete solution or is it part of a
larger solution? Your complete solution may be a combination of multiple data
management and data distribution techniques. Evaluate the requirements for
each technique separately for a pair of systems (source and target). Each
technique that you need to implement may have different configuration
requirements.
File combining is a scenario in which all or partial information from files on multiple
systems can be sent to and combined in a single file on a target system. In its user
journal replication processes, MIMIX implements file combining between multiple
source systems and a target system that are defined to the same MIMIX installation.
MIMIX determines what data from the multiple source files is sent to the target system
based on the contents of a journal transaction. An example of file combining is when
many locations within an enterprise update a local file and the updates from all local
files are sent to one location to update a composite file. The example in Figure 20
shows file combining from multiple source systems into a composite file on the
management system.
Figure 20. Example of file combining
To enable file combining between two systems, MIMIX user journal replication must
be configured as follows:
Configure the data group definition for keyed replication. See topic Keyed
replication on page 334.
If only part of the information from the source system is to be sent to the target
system, you need an exit program to filter out transactions that should not be sent
to the target system.
If you allow the data group to be switched (by specifying *YES for Allow to be
switched (ALWSWT) parameter) and a switch occurs, the file combining operation
effectively becomes a file routing operation. To ensure that the data group will
perform file combining operations after a switch, you need an exit program that
allows the appropriate transactions to be processed regardless of which system is
acting as the source for replication.
After the combining operation is complete, if the combined data will be replicated
or distributed again, you need to prevent it from returning to the system on which it
originated.
File routing is a scenario in which information from a single file can be split and sent
to files on multiple target systems. In user journal replication processes, MIMIX
implements file routing between a source system and multiple target systems that are
defined to the same MIMIX installation. To enable file routing, MIMIX calls a user exit
program that makes the file routing decision. The user exit program determines what
data from the source file is sent to each of the target systems based on the contents
of a journal transaction. An example of file routing is when one location within an
enterprise performs updates to a file for all other locations, but only updated
information relevant to a location is sent back to that location. The example in Figure
21 shows the management system routing only the information relevant to each
network system to that system.
Figure 21. Example of file routing
To enable file routing, MIMIX user journal replication processes must be configured as
follows:
Configure the data group definition for keyed replication. See topic Keyed
replication on page 334.
The data group definition must call an exit program that filters transactions so that
only those transactions which are relevant to the target system are sent to it.
If you allow the data group to be switched (by specifying *YES for Allow to be
switched (ALWSWT) parameter) and a switch occurs, the file routing operation
effectively becomes a file combining operation. To ensure that the data group will
perform file routing operations after a switch, you need an exit program that allows
the appropriate transactions to be processed regardless of which system is acting
as the source for replication.
Configuring for cascading distributions
Cascading is a distribution technique in which data passes through one or more
intermediate systems before reaching its destination. MIMIX supports cascading in
both its user journal and system journal replication paths. However, the paths differ in
their implementation.
Data can pass through one intermediate system within a MIMIX installation.
Additional MIMIX installations will allow you to support cascading in scenarios that
require data to flow through two or more intermediate systems before reaching its
destination. Figure 22 shows the basic cascading configuration that is possible within
one MIMIX installation.
Figure 22. Example of a simple cascading scenario
To enable cascading you must have the following:
Within a MIMIX installation, the management system must be the intermediate
system.
Configure a data group between the originating system (a network system) to the
intermediate (management) system. Configure another data group for the flow
from the intermediate (management) system to the destination system.
For user journal replication, you also need the following:
The data groups should be configured to send journal entries that are
generated by MIMIX. To do this, specify *SEND for the Generated by MIMIX
element of the DB journal entry processing (DBJRNPRC) parameter. When
this is the case, MIMIX performs the database updates.
If it is possible for the data to be routed back to the originating or any
intermediate systems, you need to use keyed replication.
Note: Once an object is replicated to a target system, MIMIX system journal
replication processes prevent looping by not allowing the same object,
regardless of name mapping, to be replicated back to its original source
system.
Cascading may be used with other data management techniques to accomplish a
specific goal. Figure 23 shows an example where the Chicago system is a
management system in a MIMIX installation that collects data from the network
systems and broadcasts the updates to the other participating systems. The network
systems send unfiltered data to the management system. Figure 23 is a cascading
scenario because changes that originate on the Hong Kong system pass through an
intermediate system (Chicago) before being distributed to the Mexico City system and
other network systems in the MIMIX installation. Exit programs are required for the
data groups acting between the management system and the destination systems
and need to prevent updates from flowing back to their system of origin.
Figure 23. Bi-directional example that implements cascading for file distribution.
Trigger support
A trigger program is a user exit program that is called by the database when a
database modification occurs. Trigger programs can be used to make other database
modifications which are called trigger-induced database modifications.
How MIMIX handles triggers
The method used for handling triggers is determined by settings in the data group
definition and file entry options. MIMIX supports database trigger replication using
one of the following ways:
Using IBM i trigger support to prevent the triggers from firing on the target system
and replicating the trigger-induced modifications.
Ignoring trigger-induced modifications found in the replication stream and allowing
the triggers to fire on the target system.
Considerations when using triggers
You should choose only one of these methods for each data group file entry. Which
method you use depends on a variety of considerations:
The default replication type for data group file entry options is positional
replication. With positional replication, each file is replicated based on the position
of the record within the file. The value of the relative record number used in the
journal entry is used to locate a database record being updated or deleted. When
positional replication is used and triggers fire on the target system they can cause
trigger-induced modifications to the files being replicated. These trigger-induced
modifications can change the relative record number of the records in the file
because the relative record numbers of the trigger-induced modifications are not
likely to match the relative record numbers generated by the same triggers on the
source system. Because of this, triggers should not be allowed to fire on the target
system. You should prevent the triggers from firing on the target system and
replicate the trigger-induced modifications from source to the target system.
When trigger-induced modifications are made by replicated files to files not
replicated by MIMIX, you may want the triggers to fire on the target system. This
will ensure that the files that are not replicated receive the same trigger-induced
modifications on the target system as they do on the source system.
When triggers do not cause database record changes, you may choose to allow
them to fire on the target system. However, if non-database changes occur and
you are using object replication, the object replication will replicate trigger-induced
object changes from the source system. In this case, the triggers should not be
permitted to fire.
When triggers are allowed to fire on the target system, the files being updated by
these triggers should be replicated using the same apply session as the parent
files to avoid lock contention.
A slight performance advantage may be achieved by replicating the trigger-
induced modifications instead of ignoring them and allowing the triggers to fire.
This is because the database apply process checks each transaction before
processing to see if filtering is required, and firing the trigger adds additional
overhead to database processing.
Enabling trigger support
Trigger support is enabled for user journal replication by specifying the appropriate file
entry option values for parameters on the Create Data Group Definition (CRTDGDFN)
and Change Data Group Definition (CHGDGDFN) commands. You can also enable
trigger support at a file level by specifying the appropriate file entry options associated
with the file.
If you already have a trigger solution in place you can continue to use that
implementation or you can use the MIMIX trigger support.
Synchronizing files with triggers
When you are synchronizing a file with triggers and you are using MIMIX trigger
support, you must specify *DATA on the Sending mode parameter on the
Synchronize DG File Entry (SYNCDGFE) command.
On the Disable triggers on file parameter, you can specify if you want the triggers
disabled on the target system during file synchronization. The default is *DGFE, which
will use the value indicated for the data group file entry. If you specify *YES, triggers
will be disabled on the target system during synchronization. A value of *NO will leave
triggers enabled.
For more information on synchronizing files with triggers, see About synchronizing
file entries (SYNCDGFE command) on page 451.
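A hypothetical example of synchronizing a file with triggers disabled on the target
system during the synchronization. The FILE1, SNDMODE, and DSBTRG keyword
names are assumptions based on the parameter titles above, so prompt the
SYNCDGFE command to confirm them; the object names are placeholders:
/* keyword and object names are assumptions */
SYNCDGFE DGDFN(MYDG SYS1 SYS2) FILE1(MYLIB/ORDERS) SNDMODE(*DATA) DSBTRG(*YES)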
Constraint support
A constraint is a restriction or limitation placed on a file. There are four types of
constraints: referential, unique, primary key and check. Unique, primary key and
check constraints are single file operations transparent to MIMIX. If a constraint is met
for a database operation on the source system, the same constraint will be met for the
replicated database operation on the target. Referential constraints, however, ensure
the integrity between multiple files. For example, you could use a referential constraint
to:
Ensure when an employee record is added to a personnel file that it has an
associated department from a company organization file.
Empty a shopping cart and remove the order records if an internet shopper exits
without placing an order.
When constraints are added, removed or changed on files replicated by MIMIX, these
constraint changes will be replicated to the target system. With the exception of files
that have been placed on hold, MIMIX always enables constraints and applies
constraint entries. MIMIX tolerates mismatched before images or minimized journal
entry data CRC failures when applying constraint-generated activity. Because the
parent record was already applied, entries with mismatched before images are
applied and entries with minimized journal entry data CRC failures are ignored. To
use this support:
Ensure that your target system is at the same release level or greater than the
source system to ensure the target system is able to use all of the IBM i function
that is available on the source system. If an earlier IBM i level is installed on the
target system the operation will be ignored.
You must have your MIMIX environment configured for either MIMIX Dynamic
Apply or legacy cooperative processing.
Referential constraints with delete rules
Referential constraints can cause changes to dependent database files when the
parent file is changed. Referential constraints defined with the following delete rules
cause dependent files to change:
*CASCADE: Record deletion in a parent file causes records in the dependent file
to be deleted when the parent key value matches the foreign key value.
*SETNULL: Record deletion in a parent file updates those records in the
dependent file where the value of the parent non-null key matches the foreign key
value. For those dependent records that meet the preceding criteria, all null
capable fields in the foreign key are set to null. Foreign key fields with the non-null
attribute are not updated.
*SETDFT: Record deletion in a parent file updates those records in the dependent
file where the value of the parent non-null key matches the foreign key value. For
those dependent records that meet the preceding criteria, the foreign key field or
fields are set to their corresponding default values.
Referential constraint handling for these dependent files is supported through the
replication of constraint-induced modifications.
MIMIX does not provide the ability to disable constraints because IBM i would check
every record in the file to ensure constraints are met once the constraint is re-
enabled. This would cause a significant performance impact on large files and could
impact switch performance. If the need exists, this can be done through automation.
Replication of constraint-induced modifications
MIMIX always attempts to apply constraint-induced modifications. Earlier levels of
MIMIX provided the Process constraint entries element in the File entry options
(FEOPT) parameter, which has since been removed (see note 1). Any previously
specified value is now mapped to *YES so that processing always occurs.
The considerations for replication of constraint-induced modifications are:
Files with referential constraints and any dependent files must be replicated by the
same apply session.
When referential constraints cause changes to dependent files not replicated by
MIMIX, enabling the same constraints on the target system will allow changes to
be made to the dependent files.
1. This element was removed in version 5 service pack 5.0.08.00.
Handling SQL identity columns
MIMIX replicates identity columns in SQL tables and checks for scenarios that can
cause duplicate identity column values after switching and, if possible, prevents the
problem from occurring. In some cases, identity columns will need to be processed by
manually running the Set Identity Column Attribute (SETIDCOLA) command.
This command is useful for handling scenarios that would otherwise result in errors
caused by duplicate identity column values when inserting rows into tables.
The identity column problem explained
In SQL, a table may have a single numeric column which is designated an identity
column. When rows are inserted into the table, the database automatically generates
a value for this column, incrementing the value with each insertion. Several attributes
define the behavior of the identity column, including: Minimum value, Maximum value,
Increment amount, Start value, Cycle/No Cycle, Cache amount. This discussion is
limited to the following attributes:
Increment amount - the amount by which each new row's identity column value differs
from the previously inserted row. This can be a positive or negative value.
Start value - the value used for the next row added. This can be any value,
including one that is outside of the range defined by the minimum and maximum
values.
Cycle/No Cycle - indicates whether or not values cycle from maximum back to
minimum, or from minimum to maximum if the increment is negative.
Nothing prevents identity column values from being generated more than once.
However, in typical usage, the identity column is also a primary, unique key and set to
not cycle.
The value generator for the identity column is stored internally with the table.
Following certain actions which transfer table data from one system to another, the
next identity column value generated on the receiving system may not be as
expected. This can occur after a MIMIX switch and after other actions such as certain
save/restore operations on the backup system. Similarly, other actions, such as
applying journaled changes (APYJRNCHG), do not keep the value generator
synchronized.
Any SQL table with an identity column that is replicated by a switchable data group
can potentially experience this problem. Journal entries used to replicate inserted
rows on the production system do not contain information that would allow the value
generator to remain synchronized. The result is that after a switch to the backup
system, rows can be inserted on the backup system using identity column values
other than the next expected value. The starting value for the value generator on the
backup system is used instead of the next expected value based on the tables
content. This can result in the reuse of identity column values which in turn can cause
a duplicate key exception.
Detailed technical descriptions of all attributes are available in the IBM eServer
iSeries Information Center. Look in the Database section for the SQL Reference for
CREATE TABLE and ALTER TABLE statements.
When the SETIDCOLA command is useful
Important! The SETIDCOLA command should not be used in all environments. Its
use is subject to the limitations described in SETIDCOLA command limitations on
page 351. If you cannot use the SETIDCOLA command, see Alternative solutions
on page 352.
Examples of when you may need to run the SETIDCOLA command are:
The SETIDCOLA command can be used to determine whether a data group
replicates tables which contain identity columns and report the results. To do so,
specify ACTION(*CHECKONLY) on the command. It is recommended that you
initially use this capability before setting values. You may want to perform this type
of check whenever new tables are created that might contain identity columns.
See Checking for replication of tables with identity columns on page 355.
For many environments, default values on the SETIDCOLA command are
appropriate for use following a planned switch to the backup system to ensure
that the identity column values inserted on the backup system start at the proper
point. After performing a switch to the backup system, run the command from the
backup system before starting replication in the reverse direction.
After a restore (RSTnnn command) from a "save of backup machine." For this
scenario, run the command on the system on which you performed the restore.
Before saving files to tape or other media from the backup system. For this
scenario, run the command from the backup system. By doing this, you avoid the
need to run the command after restoring.
Also, the SETIDCOLA command is needed in any environment in which you are
attempting to restore from a save that was created while replication processes were
running.
SETIDCOLA command limitations
In general, SETIDCOLA only works correctly for the most typical scenario where all
values for identity columns have been generated by the system, and no cycles are
allowed. In other scenarios, it may not restart the identity column at a useful value.
Limited support for unplanned switch - Following an unplanned switch, the backup
system may not be caught up with all the changes that occurred on the production
system. Using the SETIDCOLA command on the backup system may result in the
generation of identity column values that were used on the production system but not
yet replicated to the backup system. Careful selection of the value of the
INCREMENTS parameter can minimize the likelihood of this problem, but the value
chosen must be valid for all tables in the data group. See Examples of choosing a
value for INCREMENTS on page 354.
Not supported - The following scenarios are known to be problematic and are not
supported. If you cannot use the SETIDCOLA command in your environment,
consider the Alternative solutions on page 352.
Columns that have cycled - If an identity column allows cycling and adding a row
increments its value beyond the maximum range, the restart value is reset to the
beginning of the range. Because cycles are allowed, the assumption is that
duplicate keys will not be a problem. However, unexpected behavior may occur
when cycles are allowed and old rows are removed from the table with a
frequency such that the identity column values never actually complete a cycle. In
this scenario, the ideal starting point would be wherever there is the largest gap
between existing values. The SETIDCOLA command cannot address this
scenario; it must be handled manually.
Rows deleted on production table - An application may require that an identity
column value never be generated twice. For example, the value may be stored in
a different table, data area or data queue, given to another application, or given to
a customer. The application may also require that the value always locate either
the original row or, if the row is deleted, no row at all. If rows with values at the end
of the range are deleted and you perform a switch followed by the SETIDCOLA
command, the identity column values of the deleted rows will be re-generated for
newly inserted rows. The SETIDCOLA command is not recommended for this
environment. This must be handled manually.
No rows in backup table - If there are no rows in the table on the backup system,
the restart value will be set to the initial start value. Running the SETIDCOLA
command on the backup system may result in re-generating values that were
previously used. The SETIDCOLA command cannot address this scenario; it
must be handled manually.
Application generated values - Optionally, applications can supply identity column
values at the time they insert rows into a table. These application-generated
identity values may be outside the minimum and maximum values set for the
identity column. For example, a table's identity column range may be from 1
through 100,000,000 but an application occasionally supplies values in the range
of 200,000,000 through 500,000,000. If cycling is permitted and the SETIDCOLA
command is run, the command would recognize the higher values from the
application and would cycle back to the minimum value of 1. Because the result
would be problematic, the SETIDCOLA command is not recommended for tables
which allow application-generated identity values. This must be handled manually.
Alternative solutions
If you cannot use the SETIDCOLA command because of its known limitations, you
have these options.
Manually reset the identity column starting point: Following a switch to the
backup system, you can manually reset the restart value for tables with identity
columns. The SQL statement ALTER TABLE name ALTER COLUMN can be used for
this purpose.
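For example, a sketch of manually restarting an identity column; the library, table,
and column names and the restart value are placeholders, and the appropriate
restart value must be determined from the table's current content:
-- MYLIB, ORDERS, ORDER_ID, and 31000 are placeholders
ALTER TABLE MYLIB.ORDERS ALTER COLUMN ORDER_ID RESTART WITH 31000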
Convert to SQL sequence objects: To overcome the limitations of identity column
switching and to avoid the need to use the SETIDCOLA command, SQL sequence
objects can be used instead of identity columns. Sequence objects are implemented
using a data area which can be replicated by MIMIX. The data area for the sequence
object must be configured for replication through the user journal (cooperatively
processed).
SETIDCOLA command details
The Set Identity Column Attribute (SETIDCOLA) command performs a RESTART
WITH alteration on the identity column of any SQL tables defined for replication in the
specified data group. For each table, the new restart value determines the identity
column value for the next row added to the table. Careful selection of values can
ensure that, when applications are started, the identity column starting values exceed
the last values used prior to the switch or save/restore operation.
If you use Lakeview-provided product-level security, the minimum authority level for
this command is *OPR.
The Data group definition (DGDFN) parameter identifies the data group against
which the specified action is taken. Only tables that are identified for replication by the
specified data group are addressed.
The Action (ACTION) parameter specifies what action is to be taken by the
command. Only tables which can be replicated by the specified data group are acted
upon. Possible values are:
*SET The command checks and sets the attribute of the identity column of each
table which meets the criteria. This is the default value.
*CHECKONLY The command checks for tables which have identity columns. It
does not set the attributes of the identity columns. The result of the check is
reported in the job log. If there are affected tables, message LVE3E2C will be
issued. If no tables are affected, message LVI3E26 will be issued.
The Number of jobs (JOBS) parameter specifies the number of jobs to use to
process tables which meet the criteria for processing by the command. A table will
only be updated by one job; each job can update multiple tables. The default value,
*DFT, is currently set to one job. You can specify as many as 30 jobs.
The Number of increments to skip (INCREMENTS) parameter specifies how many
increments of the counter which generates the starting value for the identity column to
skip. The value specified is used for all tables which meet the criteria for processing
by the command. Be sure to read the information in Examples of choosing a value for
INCREMENTS on page 354. Possible values are:
*DFT Skips the default number of increments, currently set to 1 increment.
Following a planned switch where tables are synchronized, you can usually use
*DFT.
number-of-increments-to-skip Specify the number of increments to skip. Valid
values are 1 through 2,147,483,647. Following an unplanned switch, use a larger
value to ensure that you skip any values used on the production system that may
not have been replicated to the backup system.
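Putting these parameters together, the following sketch sets the identity columns for
tables replicated by a hypothetical data group, skipping 500 increments and spreading
the work across four jobs:
/* MYDG, SYS1, SYS2 and the values are placeholders */
SETIDCOLA DGDFN(MYDG SYS1 SYS2) ACTION(*SET) INCREMENTS(500) JOBS(4)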
Usage notes
The reason you are using this command determines which system you should run
it from. See When the SETIDCOLA command is useful on page 351 for details.
The command can be invoked manually or as part of a MIMIX Model Switch
Framework custom switching program. Evaluation of your environment to
determine an appropriate increment value is highly recommended before using
the command.
This command can be long running when many files defined for replication by the
specified data group contain identity columns. This is especially true when
affected identity columns do not have indexes over them or when they are
referenced by constraints. Specifying a higher number of jobs (JOBS) can reduce
this time.
This command creates a work library named SETIDCOLA which is used by the
command. The SETIDCOLA library is not deleted so that it can be used for any
error analysis.
Internally, the SETIDCOLA command builds RUNSQLSTM scripts (one for each
job specified) and uses RUNSQLSTM in spawned jobs to execute the scripts.
RUNSQLSTM produces spooled files showing the ALTER TABLE statements
executed, along with any error messages received. If any statement fails, the
RUNSQLSTM command also fails, returning the failing status to the job where
SETIDCOLA is running, and an escape message is issued.
Examples of choosing a value for INCREMENTS
When choosing a value for INCREMENTS, consider the rate at which each table
consumes its available identity values. Account for the needs of the table which
consumes numbers at the highest rate, as well as any backlog in MIMIX processing
and the activity causing you to run the command. If you have available numbers to
use, add a safety factor of at least 100 percent. For example, if the rate of the fastest
file is 1,000 numbers per hour and MIMIX is 15 minutes behind (0.25 hours), the value
you specify for INCREMENTS needs to result in at least 250 numbers (1000 x 0.25)
being skipped. Adding 100% to 250, results in an increment of 500.
Note: The MIMIX backlog, sometimes called the latency of changes being
transferred to the backup system, is the amount of time from when an
operation occurs on the production system until it is successfully sent to the
backup system by MIMIX. It does not include the time it takes for MIMIX to
apply the entry. Use the DSPDGSTS command to view the Unprocessed entry
count for the DB Apply process; this value is the size of the backlog. You need
to approximate how long it would take for this value to become zero (0) if
application activity were to be stopped on the production system.
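For example, assuming DSPDGSTS accepts the standard data group definition
parameter, you could view the unprocessed entry count with:
/* MYDG, SYS1, SYS2 are placeholder names */
DSPDGSTS DGDFN(MYDG SYS1 SYS2)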
For example, data group ORDERS contains tables A and B. Each row added to table
A increases the identity value by 1 and each row added to table B increases the
identify value by 1,000. Rows are inserted into table A at a rate of approximately 600
rows per hour. Rows are inserted into table B at a rate of approximately 20 rows per
hour. Prior to a switch, on the production system the latest value for table A was 75
and the latest value for table B was 30,000. Consider the following scenarios:
Scenario 1. You performed a planned switch for test purposes. Because
replication of all transactions completed before the switch and no users have been
allowed on the backup system, the backup system has the same values as the
production. Before starting replication in the reverse direction you run the
SETIDCOLA command with an INCREMENTS value of 1. The next rows added to
table A and B will have values of 76 and 31,000, respectively.
Scenario 2. You performed an unplanned switch. From previous experience, you
know that the latency of changes being transferred to the backup system is
approximately 15 minutes. Rows are inserted into Table A at the highest rate. In
15 minutes, approximately 150 rows will have been inserted into Table A (600
rows/hour * 0.25 hours). This suggests an INCREMENTS value of 150. However,
since all measurements are approximations or based on historical data, this
amount should be adjusted by a factor of at least 100% to 300 to ensure that
duplicate identity column values are not generated on the backup system. The
next rows added to table A and B will have values of 75+(300*1) =375 and 30,000
+(300*1000)=330,000 respectively.
Checking for replication of tables with identity columns
To determine whether any files being replicated by a data group have identity
columns, do the following.
1. From the production system, specify the data group to check in the following
command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*CHECKONLY)
2. Check the job log for the following messages. Message LVE3E2C identifies the
number of tables found with identity columns. Message LVI3E26 indicates that no
tables were found with identity columns.
3. If the results found tables with identity columns, you need to evaluate the tables
and determine whether you can use the SETIDCOLA command to set values.
Setting the identity column attribute for replicated files
At a high level, the steps you need to perform to set the identity columns of files being
replicated by a data group are listed below. You may want to plan for the time
required for investigation steps and time to run the command to set values.
1. Run the SETIDCOLA command in check only mode first to determine if you need
to set values. See Checking for replication of tables with identity columns on
page 355.
2. Determine whether limitations exist in the replicated tables that would prevent you
from running the command to set values. See SETIDCOLA command limitations
on page 351.
3. Determine what increment value is appropriate for use for all tables replicated by
the data group. Consider the needs of each table. Also consider the MIMIX
backlog at the time you plan to use the command. See Examples of choosing a
value for INCREMENTS on page 354.
4. From the appropriate system, as defined in When the SETIDCOLA command is
useful on page 351, specify a data group and the number of increments to skip in
the command:
SETIDCOLA DGDFN(name system1 system2) ACTION(*SET) INCREMENTS(number)
Collision resolution
Collision resolution is a function within MIMIX user journal replication that
automatically resolves detected collisions without user intervention. MIMIX supports
the following choices for collision resolution that you can specify in the file entry
options (FEOPT) parameter in either a data group definition or in an individual data
group file entry:
Held due to error: (*HLDERR) This is the default value for collision resolution in
the data group definition and data group file entries. MIMIX flags file collisions as
errors and places the file entry on hold. Any data group file entry for which a
collision is detected is placed in a "held due to error" state (*HLDERR). This
results in the journal entries being replicated to the target system but they are not
applied to the target database. If the file entry specifies member *ALL, a
temporary file entry is created for the member in error and only that file entry is
held. Normal processing will continue for all other members in the file. You must
take action to apply the changes and return the file entry to an active state. When
held due to error is specified in the data group definition or the data group file
entry, it is used for all 12 of the collision points.
Automatic synchronization: (*AUTOSYNC) MIMIX attempts to automatically
synchronize file members when an error is detected. The member is put on hold
while the database apply process continues with the next transaction. The file
member is synchronized using copy active file processing, unless the collision
occurred at the compare attributes collision point. In the latter case, the file is
synchronized using save and restore processing. When automatic
synchronization is specified in the data group definition or data group file entry, it
is used for all 12 of the collision points.
Collision resolution class: A collision resolution class is a named definition
which provides more granular control of collision resolution. Some collision points
also provide additional methods of resolution that can only be accessed by using
a collision resolution class. With a defined collision resolution class, you can
specify how to handle collision resolution at each of the 12 collision points. You
can specify multiple methods of collision resolution to attempt at each collision
point. If the first method specified does not resolve the problem, MIMIX uses the
next method specified for that collision point.
Additional methods available with CR classes
Automatic synchronization (*AUTOSYNC) and held due to error (*HLDERR) are
essentially predefined resolution methods. When you specify *HLDERR or
*AUTOSYNC in a data group definition or a data group file entry, that method is used
for all 12 of the collision points. If you specify a named collision resolution class in a
data group definition or data group file entry, you can customize what resolution
method to use at each collision point.
Within a collision resolution class, you can specify one or more resolution methods to
use for each collision point. *AUTOSYNC and *HLDERR are available for use at each
collision point. In addition, the following resolution methods are available:
Exit program: (*EXITPGM) A specified user exit program is called to handle the
data collision. This method is available for all collision points.
Your exit program is dynamically linked by the MXCCUSREXT service program,
which is shipped with MIMIX and runs on the target system.
The exit program is called on three occasions. The first occasion is when the data
group is started. This call allows the exit program to handle any initialization or set
up you need to perform.
The MXCCUSREXT service program (and your exit program) is called if a
collision occurs at a collision point for which you have indicated that an exit
program should perform collision resolution actions.
Finally, the exit program is called when the data group is ended.
Field merge: (*FLDMRG) This method is only available for the update collision
point 3, used with keyed replication. If certain rules are met, fields from the after-
image are merged with the current image of the file to create a merged record that
is written to the file. Each field within the record is checked using the series of
algorithms below.
In the following algorithms, these abbreviations are used:
RUB = before-image of the source file
RUP = after-image of the source file
RCD = current record image of the target file
a. If the RUB equals the RUP and the RUB equals the RCD, do not change the
RUP field data.
b. If the RUB equals the RUP and the RUB does not equal the RCD, copy the
RCD field data into the RUP record.
c. If the RUB does not equal the RUP and the RUB equals the RCD, do not
change the RUP field data.
d. If the RUB does not equal the RUP and the RUB does not equal the RCD, fail
the field-level merge.
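For example, consider a record with four fields; the values are illustrative:
Field 1: RUB=10, RUP=10, RCD=10. Rule a applies; the RUP value (10) is unchanged.
Field 2: RUB=10, RUP=10, RCD=12. Rule b applies; the RCD value (12) is copied into the RUP record.
Field 3: RUB=10, RUP=15, RCD=10. Rule c applies; the RUP value (15) is unchanged.
Field 4: RUB=10, RUP=15, RCD=12. Rule d applies; the field-level merge fails.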
Applied: (*APPLIED) This method is only available for the update collision point 3
and the delete collision point 1. For update collision point 3, the transaction is
ignored if the record to be updated already equals the data in the updated record.
For delete collision point 1, the transaction is ignored because the record does not
exist.
If multiple collision resolution methods are specified and do not resolve the problem,
MIMIX will always use *HLDERR as the last resort, placing the file on hold.
Requirements for using collision resolution
To use a collision resolution method other than the default *HLDERR, you must have
the following:
The data group definition used for replication must specify a data group type of
*ALL or *DB.
You must specify either *AUTOSYNC or the name of a collision resolution class
for the Collision resolution element of the File entry option (FEOPT) parameter.
Specify the value as follows:
If you want to implement collision resolution for all files processed by a data
group, specify a value in the parameter within the data group definition.
If you want to implement collision resolution for only specific files, specify a
value in the parameter within an individual data group file entry.
Note: Ensure that data group activity is ended before you change a data group
definition or a data group file entry.
If you plan to use an exit program for collision resolution, you must first create a
named collision resolution class. In the collision resolution class, specify
*EXITPGM for each of the collision points that you want to be handled by the exit
program and specify the name of the exit program.
Working with collision resolution classes
Do the following to access options for working with collision resolution:
1. From the MIMIX Main Menu, select option 11 (Configuration menu) and press
Enter.
2. From the MIMIX Configuration Menu, select option 5 (Work with collision
resolution classes) and press Enter. The Work with CR Classes display appears.
Creating a collision resolution class
To create a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 1 (Create) next to the blank line at
the top of the display and press Enter.
2. The Create Collision Res. Class (CRTCRCLS) display appears. Specify a name at
the Collision resolution class prompt.
3. At each of the collision point prompts on the display, specify the value for the type
of collision resolution processing you want to use. Press F1 (Help) to see a
description of the collision point.
Note: You can specify more than one method of collision resolution for each
prompt by typing a + (plus sign) at the prompt. With the exception of the
*HLDERR method, the methods are attempted in the order you specify. If
the first method you specify does not successfully resolve the collision,
then the next method is run. *HLDERR is always the last method
attempted. If all other methods fail, the member is placed on hold due to
error.
4. Press Page Down to see additional prompts.
5. At each of the collision point prompts on the second display, specify the value for
the type of collision resolution processing you want to use.
6. If you specified *EXITPGM at any of the collision point prompts, specify the name
and library of the program to use at the Exit point prompt.
7. At the Number of retry attempts prompt, specify the number of times to try to
automatically synchronize a file. If this number is exceeded within the time specified
in the Retry time limit, the file will be placed on hold due to error.
8. At the Retry time limit prompt, specify the maximum number of hours to
retry a process if a failure occurs due to a locking condition or an in-use condition.
Note: If a file encounters repeated failures, an error condition that requires
manual intervention is likely to exist. Allowing excessive synchronization
requests can cause communications bandwidth degradation and
negatively impact communications performance.
9. To create the collision resolution class, press Enter.
Changing a collision resolution class
To change an existing collision resolution class, do the following:
1. From the Work with CR Classes display, type a 2 (Change) next to the collision
resolution class you want and press Enter.
2. The Change CR Class Details display appears. Make any changes you need.
Page Down to see all of the prompts.
3. Provide the required values in the appropriate fields. Inspect the default values
shown on the display and either accept the defaults or change the value.
4. You can specify as many as 3 values for each collision point prompt. To expand
this field for multiple entries, type a plus sign (+) in the entry field opposite the
phrase "+for more" and press Enter.
5. To accept the changes, press Enter.
Deleting a collision resolution class
To delete a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 4 (Delete) next to the collision
resolution class you want and press Enter.
2. A confirmation display appears. Verify that the collision resolution class shown on
the display is what you want to delete.
3. Press Enter.
Displaying a collision resolution class
To display a collision resolution class, do the following:
1. From the Work with CR Classes display, type a 5 (Display) next to the collision
resolution class you want and press Enter.
2. The Display CR Class Details display appears. Press Page Down to see all of the
values.
Printing a collision resolution class
Use this procedure to create a spooled file of a collision resolution class which you
can print.
1. From the Work with CR Classes display, type a 6 (Print) next to the collision
resolution class you want and press Enter.
2. A spooled file is created with the name MXCRCLS on which you can use your
standard printing procedure.
Omitting T-ZC content from system journal replication
For logical and physical files configured for replication solely through the system
journal, MIMIX provides the ability to prevent replication of predetermined sets of
T-ZC journal entries associated with changes to object attributes or contents.
Default T-ZC processing: Files that have an object auditing value of *CHANGE or
*ALL will generate T-ZC journal entries whenever changes to the object attributes or
contents occur. The access type field within the T-ZC journal entry indicates what type
of change operation occurred. Table 43 lists the T-ZC journal entry access types that
are generated by PF-DTA, PF38-DTA, PF-SRC, PF38-SRC, LF, and LF-38 file types.
By default, MIMIX replicates file attributes and file member data for all T-ZC entries
generated for logical and physical files configured for system journal replication. While
MIMIX recreates attribute changes on the target system, member additions and data
changes require MIMIX to replicate the entire object using save, send, and restore
processes. This can cause unnecessary replication of data and can impact
processing time, especially in environments where the replication of file data
transactions is not necessary.

Table 43. T-ZC journal entry access types generated by file objects. These T-ZC journal
entries are eligible for replication through the system journal.

Access  Access Type        Operation Type       Operations that Generate T-ZC Access Type
Type    Description        File  Member  Data
1       Add                      X              Add member for physical files and logical files (ADDPFM)
7       Change (1)         X     X              Change Physical File (CHGPF), Change Logical File (CHGLF),
                                                Change Physical File Member (CHGPFM), Change Logical File
                                                Member (CHGLFM), Change Object Description (CHGOBJD)
10      Clear                            X      Clear member for physical files (CLRPFM)
25      Initialize                       X      Initialize member for physical files (INZPFM)
30      Open                             X      Opening member for write for physical files
36      Reorganize                       X      Reorganize member for physical files (RGZPFM)
37      Remove                   X              Remove member for physical files and logical files (RMVM)
38      Rename                   X              Rename member for physical files and logical files (RNMM)
62      Add constraint     X                    Adding constraint for physical files (ADDPFCST)
63      Change constraint  X                    Changing constraint for physical files (CHGPFCST)
64      Remove constraint  X                    Removing constraint for physical files (RMVPFCST)

1. These T-ZC journal entries may or may not have a member name associated with them. If a member
name is associated with the journal entry, the T-ZC is a member operation. If no member name is
associated with the journal entry, the T-ZC is assumed to be a file operation.
Omitting T-ZC entries: Through the Omit content (OMTDTA) parameter on data
group object entry commands, you can specify a predetermined set of access types
for *FILE objects to be omitted from system journal replication. T-ZC journal entries
with access types within the specified set are omitted from processing by MIMIX.
The OMTDTA parameter is useful when a file's or member's data does not need to be
replicated. For example, when replicating work files and temporary files, it may be
desirable to replicate the file layout but not the file members or data. The OMTDTA
parameter can also help you reduce the number of transactions that require
substantial processing time to replicate, such as T-ZC journal entries with access type
30 (Open).
Each of the following values for the OMTDTA parameter define a set of access types
that can be omitted from replication:
*NONE - No T-ZCs are omitted from replication. All file, member, and data
operations in transactions for the access types listed in Table 43 are replicated.
This is the default value.
*MBR - Data operations are omitted from replication. File and member operations
in transactions for the access types listed in Table 43 are replicated. Transactions
with access type 7 (Change) are replicated for both file and member operations.
*FILE - Member and data operations are omitted from replication. Only file
operations in transactions for the access types listed in Table 43 are replicated.
Only file operations in transactions with access type 7 (Change) are replicated.
Configuration requirements and considerations for omitting T-ZC content
To omit transactions, logical and physical files must be configured for system journal
replication and meet these configuration requirements:
The data group definition must specify *ALL or *OBJ for the Data group type
(TYPE).
The file for which you want to omit transactions must be identified by a data group
object entry that specifies the following:
Cooperate with database (COOPDB) must be *NO when Cooperating object
types (COOPTYPE) specifies *FILE. If COOPDB is *YES, then COOPTYPE
cannot specify *FILE.
Omit content (OMTDTA) must be either *FILE or *MBR.
Object auditing value considerations - The file must have an object auditing value
of *CHANGE or *ALL in order for any T-ZC journal entry resulting from a change
operation to be created in the system journal. To ensure that changes to the file
continue to be journaled and replicated, the data group object entry should also
specify *CHANGE or *ALL for the Object auditing value (OBJAUD) parameter.
For all library-based objects, MIMIX evaluates the object auditing level when starting
a data group after a configuration change. If the configured value specified for the
OBJAUD parameter is higher than the object's actual value, MIMIX will change the
object to use the higher value. If you use the SETDGAUD command to force the
object to have an auditing level of *NONE and the data group object entry also
specifies *NONE, any changes to the file will no longer generate T-ZC entries in the
system journal. For more information about object auditing, see Managing object
auditing on page 54.
Object attribute considerations - When MIMIX evaluates a system journal entry
and finds a possible match to a data group object entry which specifies an attribute in
its Attribute (OBJATR) parameter, MIMIX must retrieve the attribute from the object in
order to determine which object entry is the most specific match.
If the object attribute is not needed to determine the most specific match to a data
group object entry, it is not retrieved.
After determining which data group object entry has the most specific match, MIMIX
evaluates that entry to determine how to proceed with the journal entry. When the
matching object entry specifies *FILE or *MBR for OMTDTA, MIMIX does not need to
consider the object attribute in any other evaluations. As a result, the performance of
the object send job may improve.
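Putting these requirements together, a data group object entry that omits member data
for a set of work files might look like the following; the data group, library, and generic
object names are placeholders:
ADDDGOBJE DGDFN(MYDG SYS1 SYS2) LIB1(WRKLIB) OBJ1(WRK*)
OBJTYPE(*FILE) COOPDB(*NO) OMTDTA(*MBR) OBJAUD(*CHANGE)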
Omit content (OMTDTA) and cooperative processing
The OMTDTA and COOPDB parameters are mutually exclusive. MIMIX allows only a
value of *NONE for OMTDTA when a data group object entry specifies cooperative
processing of files with COOPDB(*YES) and COOPTYPE(*FILE).
When using MIMIX Dynamic Apply for cooperative processing, logical files and
physical files (source and data) are replicated primarily through the user journal.
Legacy cooperative processing replicates only physical data files. When using legacy
cooperative processing, system journal replication processes select only file attribute
transactions. File attribute transactions are T-ZC journal entries with access types 7
(Change), 62 (Add constraint), 63 (Change constraint), and 64 (Remove constraint).
These transactions are replicated by system journal replication during legacy
cooperative processing, while most other transactions are replicated by user journal
replication.
Omit content (OMTDTA) and comparison commands
All T-ZC journal entries for files are replicated when *NONE is specified for the
OMTDTA parameter. However, when OMTDTA is enabled by specifying *FILE or
*MBR, some T-ZC journal entries for file objects are omitted from system journal
replication. This may affect whether replicated files on the source and target systems
are identical.
For example, recall how a file with an object auditing attribute value of *NONE is
processed. After MIMIX replicates the initial creation of the file through the system
journal, the file on the target system reflects the original state of the file on the source
system when it was retrieved for replication. However, any subsequent changes to file
data are not replicated to the target system. According to the configuration
information, the files are synchronized between source and target systems, but the
files are not the same.
A similar situation can occur when OMTDTA is used to prevent replication of
predetermined types of changes. For example, if *MBR is specified for OMTDTA, the
file and member attributes are replicated to the target system but the member data is
not. The file is not identical between source and target systems, but it is synchronized
according to configuration. Comparison commands will report these attributes as *EC
(equal configuration) even though member data is different. MIMIX audits, which call
comparison commands with a data group specified, will have the same results.
Running a comparison command without specifying a data group will report all the
synchronized-but-not-identical attributes as *NE (not equal) because no configuration
information is considered.
Consider how the following comparison commands behave when faced with non-
identical files that are synchronized according to the configuration.
The Compare File Attributes (CMPFILA) command has access to configuration
information from data group object entries for files configured for system journal
replication. When a data group is specified on the command, files that are
configured to omit data will report those omitted attributes as *EC (equal
configuration). When CMPFILA is run without specifying a data group, the
synchronized-but-not-identical attributes are reported as *NE (not equal).
The Compare File Data (CMPFILDTA) command uses data group file entries for
configuration information. As a result, when a data group is specified on the
command, any file objects configured for OMTDTA will not be compared. When
CMPFILDTA is run without specifying a data group, the synchronized-but-not-
identical file member attributes are reported as *NE (not equal).
The Compare Object Attributes (CMPOBJA) command can be used to check for
the existence of a file on both systems and to compare its basic attributes (those
which are common to all object types). This command never compares file-
specific attributes or member attributes and should not be used to determine
whether a file is synchronized.
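For example, the CMPFILA behavior described above could be exercised by scoping a
comparison to a data group. The names below are placeholders, and the FILE selector
relies on defaults for all other elements; prompt the command for the full syntax:
CMPFILA DGDFN(MYDG SYS1 SYS2) FILE((*ALL))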
Selecting an object retrieval delay
When replicating objects, particularly documents (*DOC) and stream files (*STMF),
MIMIX will obtain a lock on the object that can prevent your applications from
accessing the object in a timely manner.
Some of your applications may be unable to recover from this condition and may fail
in an unexpected manner.
You can reduce or eliminate contention for an object between MIMIX and your
applications if the object retrieval processing is delayed for a predetermined amount
of time before obtaining a lock on the object to retrieve it for replication.
You can use the Object retrieval delay element within the Object processing
parameter on the change or create data group definition commands to set the delay
time between the time the object was last changed on the source system and the time
MIMIX attempts to retrieve the object on the source system.
Although you can specify this value at the data group level, you can override the data
group value at the object level by specifying an Object retrieval delay value on the
commands for creating or changing data group entries.
You can specify a delay time from 0 through 999 seconds. The default is 0.
If the object retrieval latency time (the difference between when the object was last
changed and the current time) is less than the configured delay value, then MIMIX will
delay its object retrieval processing until the difference between the time the object
was last changed and the current time exceeds the configured delay value.
If the object retrieval latency time is greater than the configured delay value, MIMIX
will not delay and will continue with the object retrieval processing.
Object retrieval delay considerations and examples
You should use care when choosing the object retrieval delay. A long delay may
impact the ability of system journal replication processes to move data from a system
in a timely manner. Too short a delay may allow MIMIX to retrieve an object before an
application is finished with it. You should make the value large enough to reduce or
eliminate contention between MIMIX and applications, but small enough to allow
MIMIX to maintain a suitable high availability environment.
Example 1 - The object retrieval delay value is configured to be 3 seconds:
Object A is created or changed at 9:05:10.
The Object Retrieve job encounters the create/change journal entry at 9:05:14. It
retrieves the last change date/time attribute from the object and determines that
the delay time (object last changed date/time of 9:05:10 + configured delay value
of :03 = 9:05:13) is less than the current date/time (9:05:14). Because the object
retrieval delay time has already been exceeded, the object retrieve job continues
normal processing and attempts to package the object.
Example 2 - The object retrieval delay value is configured to be 2 seconds:
Object A is created or changed at 10:45:51.
The Object Retrieve job encounters the create/change journal entry at 10:45:52. It
retrieves the last change date/time attribute from the object and determines that
the delay time (object last changed date/time of 10:45:51 + configured delay value
of :02 = 10:45:53) exceeds the current date/time (10:45:52). Because the object
retrieval delay value has not been met or exceeded, the object retrieve job delays for
1 second to satisfy the configured delay value.
After the delay (at time 10:45:53), the Object Retrieve job again retrieves the last
change date/time attribute from the object and determines that the delay time
(object last changed date/time of 10:45:51 + configured delay value of :02 =
10:45:53) is equal to the current date/time (10:45:53). Because the object retrieval
delay value has been met, the object retrieve job continues with normal
processing and attempts to package the object.
Example 3 - The object retrieval delay value is configured to be 4 seconds:
Object A is created or changed at 13:20:26.
The Object Retrieve job encounters the create/change journal entry at 13:20:27. It
retrieves the last change date/time attribute from the object and determines that
the delay time (object last changed date/time of 13:20:26 + configured delay value
of :04 = 13:20:30) exceeds the current date/time (13:20:27) and delays for 3
seconds to satisfy the configured delay value.
While the object retrieve job is waiting to satisfy the configured delay value, the
object is changed again at 13:20:28.
After the delay (at time 13:20:30), the Object Retrieve job again retrieves the last
change date/time attribute from the object and determines that the delay time
(object last changed date/time of 13:20:28 + configured delay value of :04 =
13:20:32) again exceeds the current date/time (13:20:30) and delays for 2
seconds to satisfy the configured delay value.
After the delay (at time 13:20:32), the Object Retrieve job again retrieves the last
change date/time attribute from the object and determines that the delay time
(object last changed date/time of 13:20:28 + configured delay value of :04 =
13:20:32) is equal to the current date/time (13:20:32). Because the object retrieval
delay value has now been met, the object retrieve job continues with normal
processing and attempts to package the object.
Configuring to replicate SQL stored procedures and
user-defined functions
DB2 UDB for IBM Power™ Systems supports external stored procedures and SQL
stored procedures. This information is specifically for replicating SQL stored
procedures and user-defined functions. SQL stored procedures are defined entirely in
procedures and user-defined functions. SQL stored procedures are defined entirely in
SQL and may contain SQL control statements. MIMIX can replicate operations related
to stored procedures that are written in SQL (SQL stored procedures), such as
CREATE PROCEDURE (create), DROP PROCEDURE (delete), GRANT PRIVILEGES
ON PROCEDURE (authority), and REVOKE PRIVILEGES ON PROCEDURE (authority).
An SQL procedure is a program created and linked to the database as the result of a
CREATE PROCEDURE statement that specifies the language SQL and is called using
the SQL CALL statement. For example, the following statements create program
SQLPROC in LIBX and establish it as a stored procedure associated with LIBX:
CREATE PROCEDURE LIBX/SQLPROC (OUT NUM INT) LANGUAGE SQL
SELECT COUNT(*) INTO NUM FROM FILEX
For SQL stored procedures, an independent program object is created by the system
and contains the code for the procedure. The program object usually shares the name
of the procedure and resides in the same library with which the procedure is
associated. A DROP PROCEDURE statement for an SQL procedure removes the
procedure from the catalog and deletes the external program object.
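For illustration, standard SQL statements like the following drop the procedure created
above or grant authority on it; MIMIX replicates such operations as described above:
DROP PROCEDURE LIBX/SQLPROC
GRANT EXECUTE ON PROCEDURE LIBX/SQLPROC TO PUBLIC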
Procedures are associated with a particular library. Because information about the
procedure is stored in the database catalog and not the library, it cannot be seen by
looking at the library. Use System i Navigator to view the stored procedures
associated with a particular library (select Databases > Libraries).
Requirements for replicating SQL stored procedure operations
The following configuration requirements and restrictions must be met:
Apply any IBM PTFs (or their supersedes) associated with IBM i releases as they
pertain to your environment. Log in to Support Central and refer to the Technical
Documents page for a list of required and recommended IBM PTFs.
To correctly replicate a create operation, name mapping cannot be used for either
the library or program name.
GRANT and REVOKE only affect the associated program object. MIMIX
replicates these operations correctly.
The COMMENT statement cannot be replicated.
An appropriately configured data group object entry must identify the object to
which the stored procedure is associated.
Stored procedures or other system table concepts that have non-deterministic ties to
a library-based object cannot be replicated.
To replicate SQL stored procedure operations
Do the following:
1. Ensure that the replication requirements for the various operations are followed.
See Requirements for replicating SQL stored procedure operations on
page 368.
2. Ensure that you have a data group object entry that includes the associated
program object. For example:
ADDDGOBJE DGDFN(name system1 system2) LIB1(library)
OBJ1(*ALL) OBJTYPE(*PGM)
Using Save-While-Active in MIMIX
MIMIX system journal replication processes use save/restore when replicating most
types of objects. If there is conflict for the use of an object between MIMIX and some
other process, the initial save of the object may fail. When such a failure occurs,
MIMIX will attempt to process the object by automatically starting delay or retry
processing using the values configured in the data group definition.
For the initial save of *FILE objects, save-while-active capabilities will be used unless
it is disabled. By default, save-while-active is only used when saving *FILE objects; it
is not used when saving other library-based object types, DLOs, or IFS objects.
However, you can specify to have MIMIX attempt saves of DLOs and IFS objects
using save-while-active.
Values for retry processing are specified in the First retry delay interval
(RTYDLYITV1) and Number of times to retry (RTYNBR) parameters in the data group
definition. After the initial failed save attempt, MIMIX delays for the number of
seconds specified in the RTYDLYITV1 value, before retrying the save operation. This
is repeated for the number of times that is specified for the RTYNBR value in the data
group definition. If the object cannot be saved after the attempts specified in
RTYNBR, then MIMIX uses the delay interval value which is specified in the
RTYDLYITV2 parameter. The save is then attempted for the number of retries
specified in the RTYNBR parameter. For the initial default values for a data group, this
calculates to be 7 save attempts (1 initial attempt, 3 attempts using the first delay
value of 5 seconds, and 3 attempts using the second delay value of 300 seconds), in
a time frame of approximately 20 minutes. For more information on retry processing,
see the parameters for automatic retry processing in Tips for data group parameters
on page 209.
Considerations for save-while-active
If a file is being saved and it shares a journal with another file that has uncommitted
transactions, then the file may be successfully saved by using a normal (non save-
while-active) save. This assumes that the file being saved does not have
uncommitted transactions. If you disable save-while-active, attempts to save any type
of object will use a normal save.
In addition to providing the ability to enable the use of save-while-active for object
types other than *FILE, MIMIX provides the abilities to control the wait time when
using save-while-active or to disable the use of save-while-active for all object types.
Save-while-active wait time
For the default (*FILE objects), MIMIX uses save-while-active with a wait time of 120
seconds on the initial save attempt. MIMIX then uses normal (non save-while-active)
processing on all subsequent save attempts if the initial save attempt fails.
You can configure the save-while-active wait time when specifying to use save-while-
active for the initial save attempt of a *FILE, a DLO, or an IFS object. When
specifying to use save-while-active, the first attempt to save the object after delaying
the amount of time configured for the Second retry delay interval (RTYDLYITV2)
value will also use save-while-active. All other attempts to save the object will use a
normal save.
Note: Although MIMIX has the capability to replicate DLOs using save/restore
techniques, it is recommended that DLOs be replicated using optimized
techniques, which can be configured using the DLO transmission method
under Object processing in the data group definition.
Types of save-while-active options
MIMIX uses the configuration value (DGSWAT) to select the type of save-while-active
option to be used when saving objects. You can view and change these configuration
values for a data group through an interface such as SQL or DFU.
DGSWAT: Save-while-active type. You can specify the following values:
A value of 0 (the default) indicates that save-while-active is to be used when
saving files, with a save-while-active wait time of 120 seconds. For DLOs and IFS
objects, a normal save will be attempted.
A value of 1 through 99999 indicates that save-while-active is to be used when
saving files, DLOs and IFS objects. The value specified will be used as the save-
while-active wait time, such as when passed to the SAVACTWAIT parameter on
the SAVOBJ and SAVDLO commands.
A value of -1 indicates that save-while-active is disabled and is not to be used
when saving files, DLOs or IFS objects. Normal saves will always be used to save
any type of object.
Example configurations
The following examples describe the SQL statements that could be used to view or
set the configuration settings for a data group definition (data group name, system 1
name, system 2 name) of MYDGDFN, SYS1, SYS2.
Example - Viewing: Use this SQL statement to view the values for the data group
definition:
SELECT DGDGN, DGSYS, DGSYS2, DGSWAT FROM MIMIX/DM0200P WHERE
DGDGN='MYDGDFN' AND DGSYS='SYS1' AND DGSYS2='SYS2'
Example - Disabling: If you want to modify the values for a data group definition to
disable use of save-while-active for a data group and use a normal save, you could
use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=-1 WHERE DGDGN='MYDGDFN' AND
DGSYS='SYS1' AND DGSYS2='SYS2'
Example - Modifying: If you want to modify a data group definition to enable use of
save-while-active with a wait time of 30 seconds for files, DLOs and IFS objects, you
could use the following statement:
UPDATE MIMIX/DM0200P SET DGSWAT=30 WHERE DGDGN='MYDGDFN' AND
DGSYS='SYS1' AND DGSYS2='SYS2'
Note: You only have to make this change on the management system; the network
system will be automatically updated by MIMIX.
CHAPTER 17 Object selection for Compare and
Synchronize commands
Many of the Compare and Synchronize commands, which provide underlying support
for MIMIX AutoGuard, use an enhanced set of common parameters and a common
processing methodology that is collectively referred to as object selection. Object
selection provides powerful, granular capability for selecting objects by data group,
object selection parameter, or a combination.
The following commands use the MIMIX object selection capability:
Compare File Attributes (CMPFILA)
Compare Object Attributes (CMPOBJA)
Compare IFS Attributes (CMPIFSA)
Compare DLO Attributes (CMPDLOA)
Compare File Data (CMPFILDTA)
Compare Record Count (CMPRCDCNT)
Synchronize Object (SYNCOBJ)
Synchronize IFS Object (SYNCIFS)
Synchronize DLO (SYNCDLO)
The topics in this chapter include:
Object selection process on page 372 describes object selection which interacts
with your input from a command so that the objects you expect are selected for
processing.
Parameters for specifying object selectors on page 375 describes object
selectors and elements which allow you to work with classes of objects.
Object selection examples on page 380 provides examples and graphics with
detailed information about object selection processing, object order precedence,
and subtree rules.
Report types and output formats on page 390 describes the output of compare
commands: spooled files and output files (outfiles).
Object selection process
It is important to be able to predict the manner in which object selection interacts with
your input from a command so that the objects you expect are selected for
processing.
The object selection capability provides you with the option to select objects by data
group, object selection parameter, or a combination. Object selection supports four
classes of objects: files, objects, IFS objects, and DLOs.
The object selection process takes a candidate group of objects, subsets them as
defined by a list of object selectors, and produces a list of objects to be processed.
Figure 24 illustrates the process flow for object selection.
Figure 24. Object selection process flow
Candidate objects are those objects eligible for selection. They are input to the
object selection process. Initially, candidate objects consist of all objects on the
system. Based on the command, the set of candidate objects may be narrowed down
to objects of a particular class (such as IFS objects).
The values specified on the command determine the object selectors used to further
refine the list of candidate objects in the class. An object selector identifies an object
or group of objects. Object selectors can come from the configuration information for
a specified data group, from items specified in the object selector parameter, or both.
MIMIX processing for object selection consists of two distinct steps. Depending on
what is specified on the command, one or both steps may occur.
The first major selection step is optional and is performed only if a data group
definition is entered on the command. In that case, data group entries are the source
for object selectors. Data group entries represent one of four classes of objects: files,
library-based objects, IFS objects, and DLOs. Only those entries that correspond to
the class associated with the command are used. The data group entries subset the
list of candidate objects for the class to only those objects that are eligible for
replication by the data group.
If the command specifies a data group and items on the object selection parameter,
the data group entries are processed first to determine an intermediate set of
candidate objects that are eligible for replication by the data group. That intermediate
set is input to the second major selection step. The second step then uses the input
specified on the object selection parameter to further subset the objects selected by
the data group entries.
If no data group is specified on the data group definition parameter, the object
selection parameter can be used independently to select from all objects on the
system.
The second major object selection step subsets the candidate objects based on
object selectors from the command's object selector parameter (file, object, IFS
object, or DLO). Up to 300 object selectors may be specified on the parameter. If
none are specified, the default is to select all candidate objects.
Note: A single object selector can select multiple objects through the use of generic
names and special values such as *ALL, so the resulting object list can easily
exceed the limit of 300 object selectors that can be entered on a command.
The selection parameter is separate and distinct from the data group
configuration entries. If a data group is specified, the possible object selectors
are 1 to N, where N is defined by the number of data group entries.
The remaining candidate objects make up the resultant list of objects to be processed.
Each object selector consists of multiple object selector elements, which serve as
filters on the object selector. The object selector elements vary by object class.
Elements provide information about the object such as its name, an indicator of
whether the objects should be included in or omitted from processing, and name
mapping for dual-system and single-system environments. See Table 44 for a list of
object selector elements by object class.
Order precedence
Object selectors are always processed in a well-defined sequence, which is important
when an object matches more than one selector.
Selectors from a data group follow data group rules and are processed in most- to
least-specific order. Selectors from the object selection parameter are always
processed last to first. If a candidate object matches more than one object selector,
the last matching selector in the list is used.
As a general rule when specifying items on an object selection parameter, first specify
selectors that have a broad scope and then gradually narrow the scope in subsequent
selectors. In an IFS-based command, for example, include /A/B* and then omit /A/B1.
Object selection examples on page 380 illustrates the precedence of object
selection.
For each object selector, the elements are checked according to a priority defined for
the object class. The most specific element is checked for a match first, then the
subsequent elements are checked according to their priority. For additional, detailed
information about order precedence and priority of elements, see the following topics:
How MIMIX uses object entries to evaluate journal entries for replication on
page 92
Identifying IFS objects for replication on page 106
How MIMIX uses DLO entries to evaluate journal entries for replication on
page 111
Processing variations for common operations on page 117
Parameters for specifying object selectors
The object selectors and elements allow you to work with classes of objects. These
objects can be library-based, directory-based, or folder-based. An object selector
consists of several elements that identify an object or group of objects, indicates if
those objects should be included in or omitted from processing, and may describe
name mapping for those objects. The elements vary, depending on the class of
objects with which a particular command works.
Library-based selection allows you to work with files or objects based on object name,
library name, member name, object type, or object attribute. Directory-based
selection allows you to work with objects based on an IFS object path name and
includes a subtree option that determines the scope of directory-based objects to
include. Folder-based selection allows you to work with objects based on DLO path
name. Folder-based selection also includes a subtree object selector.
Object selection supports generic object name values for all object classes. A generic
name is a character string that contains one or more characters followed by an
asterisk (*). When a generic name is specified, all candidate objects that match the
generic name are selected.
For all classes of objects, you can specify as many as 300 object selectors. However,
the specific object selector elements that you can specify on the command are
determined by the class of object.
Object selector elements provide the following functions:
Object identification elements define the selected object by name, including
generic name specifications.
Filtering elements provide additional filtering capability for candidate objects.
Name mapping elements are required primarily for environments where objects
exist in different libraries or paths.
Include or omit elements identify whether the object should be processed or
explicitly excluded from processing.
Table 44 lists object selection elements by function and identifies which elements are
available on the commands.
File name and object name elements: The File name and Object name elements
allow you to identify a file or object by name. These elements allow you to choose a
specific name, a generic name, or the special value *ALL.
Using a generic name, you can select a group of files or objects based on a common
character string. If you want to work with all objects beginning with the letter A, for
example, you would specify A* for the object name.
To process all files within the related selection criteria, select *ALL for the file or object
name. When a data group is also specified on the command, a value of *ALL results
in the selection of files and objects defined to that data group by the respective data
group file entries or data group object entries. When no data group is specified on the
command and *ALL is specified with a library name, only the objects that reside within
the given library are selected.
Table 44. Object selection parameters and parameter elements by class

Class:                 File                  Library-based object  IFS                DLO
Commands:              CMPFILA,              CMPOBJA,              CMPIFSA,           CMPDLOA,
                       CMPFILDTA,            SYNCOBJ               SYNCIFS            SYNCDLO
                       CMPRCDCNT (1)
Parameter:             FILE                  OBJ                   OBJ                DLO
Identification         File                  Object                Path               Path
elements:              Library               Library               Subtree            Subtree
                       Member                                      Name Pattern       Name Pattern
Filtering elements:    Attribute (1)         Type                  Type               Type
                                             Attribute                                Owner
Processing elements:   Include/Omit          Include/Omit          Include/Omit       Include/Omit
Name mapping           System 2 file (1)     System 2 object       System 2 path      System 2 path
elements:              System 2 library (1)  System 2 library      System 2 name      System 2 name
                                                                   pattern            pattern

1. The Compare Record Count (CMPRCDCNT) command does not support elements for attributes or
name mapping.

Library name element: The library name element specifies the name of the library
that contains the files or objects to be included in or omitted from the resultant list of
objects. Like the file or object name, this element allows you to identify a library by a
specific name, a generic name, or the special value *ALL.
Note: The library value *ALL is supported only when a data group is specified.
Member element: For commands that support the ability to work with file members,
the Member element provides a means to select specific members. The Member
element can be a specific name, a generic name, or the special value *ALL.
Refer to the individual commands for detailed information on member processing.
Object path name (IFS) and DLO path name elements: The Object path name
(IFS) and DLO path name elements identify an object or DLO by path name. They
allow a specific path, a generic path, or the special value *ALL.
Traditionally, DLOs are identified by a folder path and a DLO name. Object selection
uses an element called DLO path, which combines the folder path and the DLO
name.
If you specify a data group, only those objects defined to that data group by the
respective data group IFS entries or data group DLO entries are selected.
Directory subtree and folder subtree elements: The Directory subtree and Folder
subtree elements allow you to expand the scope of selected objects and include the
descendants of objects identified by the given object or DLO path name. By default,
the subtree element is *NONE, and only the named objects are selected. However, if
*ALL is used, all descendants of the named objects are also selected.
Figure 25 illustrates the hierarchical structure of folders and directories prior to
processing, and is used as the basis for the path, pattern, and subtree examples
shown later in this document. For more information, see the graphics and examples
beginning with Example subtree on page 383.
Figure 25. Directory or folder hierarchy
Directory subtree elements for IFS objects: When selecting IFS objects, only the
objects in the file system specified will be included. Object selection will not cross file
system boundaries when processing subtrees with IFS objects. Objects from other file
systems do not need to be explicitly excluded; however, you must explicitly specify
objects from other file systems if you want them included. For more information, see the graphic
and examples beginning with Example subtree for IFS objects on page 388.
Name pattern element: The Name pattern element provides a filter on the last
component of the object path name. The Name pattern element can be a specific
name, a generic name, or the special value *ALL.
If you specify a pattern of $*, for example, only those candidate objects with names
beginning with $ that reside in the named DLO path or IFS object path are selected.
Keep in mind that improper use of the Name pattern element can have undesirable
results. Let us assume you specified a path name of /corporate, a subtree of *NONE,
and pattern of $*. Since the path name, /corporate, does not match the pattern of $*,
the object selector will identify no objects. Thus, the Name pattern element is
generally most useful when subtree is *ALL.
For more information, see the Example Name pattern on page 387.
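As an illustration, selecting all objects beginning with $ in the subtree under
/corporate/accounting might be entered as shown below. The order of the selector
elements here (path, subtree, pattern, type, include/omit) is an assumption, so prompt
the command for the exact syntax:
CMPIFSA OBJ(('/corporate/accounting' *ALL '$*' *ALL *INCLUDE))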
Object type element: The Object type element provides the ability to filter objects
based on an object type. The object type is valid for library-based objects, IFS
objects, or DLOs, and can be a specific value or *ALL. The list of allowable values
varies by object class.
When you specify *ALL, only those object types which MIMIX supports for replication
are included. For a list of replicated object types, see Supported object types for
system journal replication on page 533.
Supported object types for CMPIFSA and SYNCIFS are listed in Table 45.
Supported object types for CMPDLOA and SYNCDLO are listed in Table 46.
Table 45. Supported object types for CMPIFSA and SYNCIFS

Object type   Description
*ALL          All directories, stream files, and symbolic links are selected
*DIR          Directories
*STMF         Stream files
*SYMLNK       Symbolic links

Table 46. Supported DLO types for CMPDLOA and SYNCDLO

DLO type   Description
*ALL       All documents and folders are selected
*DOC       Documents
*FLR       Folders
For unique object types supported by a specific command, see the individual
commands.
Object attribute element: The Object attribute element provides the ability to filter
based on extended object attribute. For example, file attributes include PF, LF, SAVF,
and DSPF, and program attributes include CLP and RPG. The attribute can be a
specific value, a generic value, or *ALL.
Although any value can be entered on the Object attribute element, a list of supported
attributes is available on the command. Refer to the individual commands for the list
of supported attributes.
Owner element: The Owner element allows you to filter DLOs based on DLO owner.
The Owner element can be a specific name or the special value *ALL. Only candidate
DLOs owned by the designated user profile are selected.
Include or omit element: The Include or omit element determines whether candidate
objects are included in or omitted from the resultant list of objects to be processed by
the command.
Included entries are added to the resultant list and become candidate objects for
further processing. Omitted entries are not added to the list and are excluded from
further processing.
System 2 file and system 2 object elements: The System 2 file and System 2
object elements provide support for name mapping. Name mapping is useful when
working with multiple sets of files or objects in a dual-system or single-system
environment.
This element may be a specific name or the special value *FILE1 for files or *OBJ1 for
objects. If the File or Object element is not a specific name, then you must use the
default value of *FILE1 or *OBJ1. This specification indicates that the name of the file
or object on system 2 is the same as on system 1 and that no name mapping occurs.
Generic values are not supported for the system 2 value if a generic value was
specified on the File or Object parameter.
System 2 library element: The System 2 library element allows you to specify a
system 2 library name that differs from the system 1 library name, providing name
mapping between files or objects in different libraries.
This element may be a specific name or the special value *LIB1. If the System 2
library element is not a specific name, then you must use the default value of *LIB1.
This specification indicates that the name of the library on system 2 is the same as on
system 1 and that no name mapping occurs. Generic values are not supported for the
system 2 value if a generic value was specified on the Library object selector.
System 2 object path name and system 2 DLO path name elements: The System
2 object path name and System 2 DLO path name elements support name mapping
for the path specified in the Object path name or DLO path name element. Name
mapping is useful when working with two sets of IFS objects or DLOs in different
paths in either a dual-system or single-system environment.
Generic values are not supported for the system 2 value if you specified a generic
value for the IFS Object or DLO element. Instead, you must choose the default values
of *OBJ1 for IFS objects or *DLO1 for DLOs. These values indicate that the name of
the file or object on system 2 is the same as that value on system 1. The default
provides support for a two-system environment without name mapping.
System 2 name pattern element: The System 2 name pattern provides support for
name mapping for the descendants of the path specified for the Object path name or
DLO path name element.
The System 2 name pattern element may be a specific name or the special value
*PATTERN1. If the Object path name or DLO path name element is not a specific
name, then you must use the default value of *PATTERN1. This specification
indicates that no name mapping occurs. Generic values are not supported for the
System 2 name pattern element if you specified a generic value for the Name pattern
element.
Object selection examples
In this section, examples and graphics provide you with detailed information about
object selection processing, object order precedence, and subtree rules. These
illustrations show how objects are selected based on specific selection criteria.
Processing example with a data group and an object selection parameter
Using the CMPOBJA command, let us assume you want to compare the objects
defined to data group DG1. For simplicity, all candidate objects in this example are
defined to library LIBX.
Table 47 lists all candidate objects on your system.
Next, Table 48 represents the object selectors based on the data group object entry
configuration for data group DG1. Objects are evaluated against data group entries in
the same order of precedence used by replication processes.
Table 47. Candidate objects on system

Object   Library   Object type
ABC      LIBX      *FILE
AB       LIBX      *SBSD
A        LIBX      *OUTQ
DEF      LIBX      *PGM
DE       LIBX      *DTAARA
D        LIBX      *CMD
Table 48. Object selectors from data group entries for data group DG1

Order Processed   Object   Library   Object type   Include or omit
3                 A*       LIBX      *ALL          *INCLUDE
2                 ABC*     LIBX      *FILE         *OMIT
1                 DEF      LIBX      *JOBQ         *INCLUDE

The object selectors from the data group subset the candidate object list, resulting in
the list of objects defined to the data group shown in Table 49. This list is internal to
MIMIX and not visible to users.

Table 49. Objects selected by data group DG1

Object   Library   Object type
A        LIBX      *OUTQ
AB       LIBX      *SBSD
DEF      LIBX      *JOBQ

Note: Although job queue DEF in library LIBX did not appear in Table 47, it would be
added to the list of candidate objects when you specify a data group for some
commands that support object selection. These commands are required to
identify or report candidate objects that do not exist.

Perhaps you now want to include or omit specific objects from the filtered candidate
objects listed in Table 49. Table 50 shows the object selectors to be processed based
on the values specified on the object selection parameter. These object selectors
serve as an additional filter on the candidate objects.

Table 50. Object selectors for CMPOBJA object selection parameter

Order Processed   Object   Library   Object type   Include or omit
1                 *ALL     LIBX      *OUTQ         *INCLUDE
2                 *ALL     LIBX      *SBSD         *INCLUDE
3                 *ALL     LIBX      *JOBQ         *OMIT

The objects compared by the CMPOBJA command are shown in Table 51. These are
the result of the candidate objects selected by the data group (Table 49) that were
subsequently filtered by the object selectors specified for the Object parameter on the
CMPOBJA command (Table 50).

Table 51. Resultant list of objects to be processed

Object   Library   Object type
A        LIBX      *OUTQ
AB       LIBX      *SBSD
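For reference, the selections in Table 50 might be entered on the CMPOBJA command
roughly as shown below. The system names and the order of elements within each
selector are illustrative assumptions; prompt the command (F4) for the exact syntax:
CMPOBJA DGDFN(DG1 SYS1 SYS2) OBJ((*ALL LIBX *OUTQ *INCLUDE)
(*ALL LIBX *SBSD *INCLUDE) (*ALL LIBX *JOBQ *OMIT))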
In this example, the CMPOBJA command is used to compare a set of objects. The
input source is a selection parameter. No data group is specified.
The data in the following tables show how candidate objects would be processed in
order to achieve a resultant list of objects.
Table 52 lists all the candidate objects on your system.
Table 53 represents the object selectors chosen on the object selection parameter.
The sequence column identifies the order in which object selectors were entered. The
object selectors serve as filters to the candidate objects listed in Table 52.
The last object selector entered on the command is the first one used when
determining whether or not an object matches a selector. Thus, generic object
selectors with the broadest scope, such as A*, should be specified ahead of more
specific generic entries, such as ABC*. Specific entries should be specified last.
Table 54 illustrates how the candidate objects are selected.
Table 52. Candidate objects on system

Object   Library   Object type
ABC      LIBX      *FILE
AB       LIBX      *SBSD
A        LIBX      *OUTQ
DEFG     LIBX      *PGM
DEF      LIBX      *PGM
DE       LIBX      *DTAARA
D        LIBX      *CMD
Table 53. Object selectors entered on CMPOBJA selection parameter

Sequence Entered   Object   Library   Object type   Include or omit
1                  A*       LIBX      *ALL          *INCLUDE
2                  D*       LIBX      *ALL          *INCLUDE
3                  ABC*     LIBX      *ALL          *OMIT
4                  *ALL     LIBX      *PGM          *OMIT
5                  DEFG     LIBX      *PGM          *INCLUDE
Table 54. Candidate objects selected by object selectors
Sequence
Processed
Object Library Object type Include or
omit
Selected
candidate objects
5 DEFG LIBX *PGM *INCLUDE DEFG
4 *ALL LIBX *PGM *OMIT DEF
Object selection examples
383
Table 55 represents the included objects from Table 54. This filtered set of candidate
objects is the resultant list of objects to be processed by the CMPOBJ A command.
Table 55. Resultant list of objects to be processed
  Object   Library   Object type
  A        LIBX      *OUTQ
  AB       LIBX      *SBSD
  D        LIBX      *CMD
  DE       LIBX      *DTAARA
  DEFG     LIBX      *PGM

Example subtree
In the following graphics, the shaded area shows the objects identified by the
combination of the Object path name and Subtree elements of the Object parameter
for an IFS command. Circled objects represent the final list of objects selected for
processing.
Figure 26 illustrates a path name value of /corporate/accounting, a subtree
specification of *ALL, a pattern value of *ALL, and an object type of *ALL. The
candidate objects selected include /corporate/accounting and all descendants.
Figure 26. Directory of /corporate/accounting/
Figure 27 shows a path name of /corporate/accounting/*, a subtree specification of
*NONE, a pattern value of *ALL, and an object type of *ALL. In this case, no
additional filtering is performed on the objects identified by the path and subtree. The
candidate objects selected consist of the specified objects only.
Figure 27. Subtree *NONE for /corporate/accounting/*
Figure 28 displays a path name of /corporate/accounting/*, a subtree specification of
*ALL, a pattern value of *ALL, and an object type of *ALL. All descendants of
/corporate/accounting/* are selected.
Figure 28. Subtree *ALL for /corporate/accounting/*
Figure 29 is a subset of Figure 28. Figure 29 shows a path name of
/corporate/accounting, a subtree specification of *NONE, a pattern value of *ALL, and
an object type of *ALL, where only the specified directory is selected.
Figure 29. Subtree *NONE for /corporate/accounting
Example Name pattern
The Name pattern element acts as a filter on the last component of the object path
name. Figure 30 specifies a path name of /corporate/accounting, a subtree
specification of *ALL, a pattern value of $*, and an object type of *ALL. In this
scenario, only those candidate objects which match the generic pattern value ($123,
$236, and $895) are selected for processing.
Figure 30. Pattern $* for /corporate/accounting
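A request corresponding to Figure 30 might be entered as shown below. This is a
sketch only; the OBJ keyword and the element order (path name, subtree, name
pattern, object type, include or omit) are assumed from the prompt sequence
described later in this book, and SYSB is a hypothetical remote system name.

    CMPIFSA DGDFN(*NONE) SYS2(SYSB) +
            OBJ(('/corporate/accounting' *ALL '$*' *ALL *INCLUDE))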
Example subtree for IFS objects
In the following graphic, the shaded areas show file systems containing IFS objects.
When selecting objects in file systems that contain IFS objects, only the objects in the
file system specified will be included. The non-generic part of a path name indicates
the file system to be searched. Object selection does not cross file system boundaries
when processing subtrees with IFS objects.
Figure 31 illustrates a directory with a subtree that contains IFS objects. The shaded
areas are the file systems. Table 56 contains examples showing what file systems
would be selected with the path names specified and a subtree specification of *ALL.
Figure 31. Directory with a subtree containing IFS objects
Table 56. Examples of specified paths and objects selected for Figure 31
  Path specified   File system                                  Objects selected
  /qsy*            Root file system                             /qsyabc
  /PARIS/*         Root file system in independent ASP PARIS    /PARIS/qsyabc
  /PARIS*          Root file system                             None
Report types and output formats
The following compare commands support output in spooled files and in output files
(outfiles): the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA,
CMPDLOA), the Compare Record Count (CMPRCDCNT) command, the Compare
File Data (CMPFILDTA) command, and the Check DG File Entries (CHKDGFE)
command.
The spooled output is a human-readable print format that is intended to be delivered
as a report. The output file, on the other hand, is primarily intended for automated
purposes such as automatic synchronization. It is also a format that is easily
processed using SQL queries.
The level of information in the output is determined by the value specified on the
Report type parameter. These values vary by command. For the CMPFILA,
CMPOBJA, CMPIFSA, and CMPDLOA commands, the levels of output available are
*DIF, *SUMMARY, and *ALL. The report type of *DIF includes information on objects
with detected differences. A report type of *SUMMARY provides a summary of all
objects compared as well as an object-level indication whether differences were
detected. *SUMMARY does not, however, include details about specific attribute
differences. Specifying *ALL for the report type will provide you with information found
on both *DIF and *SUMMARY reports.
The CMPRCDCNT command supports the *DIF and *ALL report types. The report
type of *DIF includes information on objects with detected differences. Specifying
*ALL for the report type will provide you with information found on all objects and
attributes that were compared.
The CMPFILDTA command supports the *DIF and *ALL report types, as well as
*RRN. The *RRN value allows you to output, using the MXCMPFILR outfile format,
the relative record numbers of the first 1,000 records that failed to compare. Using this value can
help resolve situations where a discrepancy is known to exist, but you are unsure
which system contains the correct data. In this case, the *RRN value provides
information that enables you to display the specific records on the two systems and to
determine the system on which the file should be repaired.
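For example, such a compare might be requested as follows. This is a hedged
sketch: RPTTYPE is the Report type parameter described above, but the OUTPUT
and OUTFILE keywords and the MYLIB/CMPRRN outfile name are assumptions for
illustration.

    CMPFILDTA DGDFN(DG1) RPTTYPE(*RRN) +
              OUTPUT(*OUTFILE) OUTFILE(MYLIB/CMPRRN)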
Spooled files
The spooled output is generated when a value of *PRINT is specified on the Output
parameter. The spooled output consists of four main sections: the input or header
section, the object selection list section, the differences section, and the summary
section.
First, the header section of the spooled report includes all of the input values specified
on the command, including the data group value (DGDFN), comparison level
(CMPLVL), report type (RPTTYPE), attributes to compare (CMPATR), actual
attributes compared, number of files, objects, IFS objects or DLOs compared, and
number of detected differences. It also includes a legend that describes the special
values used throughout the report.
The second section of the report is the object selection list. This section lists all of the
object selection entries specified on the comparison command. Similar to the header
section, it provides details on the input values specified on the command.
The detail section is the third section of the report, and provides details on the objects
and attributes compared. The level of detail in this section is determined by the report
type specified on the command. A report type value of *ALL will list all objects
compared, and will begin with a summary status that indicates whether or not
differences were detected. The summary row indicates the overall status of the object
compared. Following the summary row, each attribute compared is listed, along with
the status of the attribute and the attribute value. In the event the attribute compared
is an indicator, a special value of *INDONLY will be displayed in the value columns.
A report type value of *DIF will list details only for those objects with detected
attribute differences. A value of *SUMMARY will not include the detail section for any
object.
The fourth section of the report is the summary, which provides a one row summary
for each object compared. Each row includes an indicator that indicates whether or
not attribute differences were detected.
Outfiles
The output file is generated when a value of *OUTFILE is specified on the Output
parameter. Similar to the spooled output, the level of output in the output file is
dependent on the report type value specified on the Report type parameter.
Each command is shipped with an outfile template that uses a normalized database
to deliver a self-defined record, or row, for every attribute you compare. Key
information, including the attribute type, data group name, timestamp, command
name, and system 1 and system 2 values, helps define each row. A summary row
precedes the attribute rows. The normalized database feature ensures that new
object attributes can be added to the audit capabilities without disruption to current
automation processing.
The template files for the various commands are located in the MIMIX product library.
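Because each row in the outfile is self-defined, the results are easy to query. The
following sketch writes CMPFILA results to an outfile and then displays them with the
IBM i RUNQRY command. The OUTPUT, OUTFILE, and OUTMBR keywords and the
MYLIB/FILADIF names are assumptions for illustration, not confirmed syntax.

    CMPFILA  DGDFN(DG1) RPTTYPE(*DIF) +
             OUTPUT(*OUTFILE) OUTFILE(MYLIB/FILADIF) +
             OUTMBR(*FIRST *REPLACE)
    RUNQRY   QRY(*NONE) QRYFILE((MYLIB/FILADIF))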
CHAPTER 18 Comparing attributes
This chapter describes the commands that compare attributes: Compare File
Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS
Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA). These commands
are designed to audit the attributes, or characteristics, of the objects within your
environment and report on the status of replicated objects. Together, these
commands are collectively referred to as the compare attributes commands.
You may already be using the compare attributes commands when they are called by
audit functions within MIMIX AutoGuard. When used in combination with the
automatic recovery features in MIMIX AutoGuard, the compare attributes commands
provide robust functionality to help you determine whether your system is in a state to
ensure a successful rollover for planned events or failover for unplanned events.
The topics in this chapter include:
About the Compare Attributes commands on page 392 describes the unique
features of the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA,
and CMPDLOA).
Comparing file and member attributes on page 396 includes the procedure to
compare the attributes of files and members.
Comparing object attributes on page 399 includes the procedure to compare
object attributes.
Comparing IFS object attributes on page 402 includes the procedure to compare
IFS object attributes.
Comparing DLO attributes on page 405 includes the procedure to compare DLO
attributes.
About the Compare Attributes commands
With the Compare Attributes commands (CMPFILA, CMPOBJA, CMPIFSA, and
CMPDLOA), you have significant flexibility in selecting objects for comparison, the
attributes to be compared, and the format in which the resulting report is created.
Each command generates a candidate list of objects on both systems and can detect
objects missing from either system. For each object compared, the command checks
for the existence of the object on the source and target systems and then compares
the attributes specified on the command. The results from the comparisons performed
are placed in a report.
Each command offers several unique features as well.
CMPFILA provides significant capability to audit file-based attributes such as
triggers, constraints, ownership, authority, database relationships, and the like.
Although the CMPFILA command does not specifically compare the data within
the database file, it does check attributes such as record counts, deleted records,
and others that check the size of data within a file. Comparing these attributes
provides you with assurance that files are most likely synchronized.
The CMPOBJA command supports many attributes important to other library-
based objects, including extended attributes. Extended attributes are attributes
unique to given objects, such as auto-start job entries for subsystems.
The CMPIFSA and CMPDLOA commands provide enhanced audit capability for
IFS objects and DLOs, respectively.
Choices for selecting objects to compare
You can select objects to compare by using a data group, the object selection
parameters, or both. The compare attributes commands do not require active data
groups to run.
By data group only: If you specify only by data group, all of the objects of the
same class as the command that are within the name space configured for the
data group are compared. For example, specifying a data group on the CMPIFSA
command would compare all IFS objects in the name space created by data group
IFS entries associated with the data group.
By object selection parameters only: You can compare objects that are not
replicated by a data group. By specifying *NONE for the data group and specifying
objects on the object selection parameters, you define a name space: the library
for CMPFILA or CMPOBJA, or the directory path for CMPIFSA or CMPDLOA.
Detailed information about object selection is available in Object selection for
Compare and Synchronize commands on page 372.
By data group and object selection parameters: When you specify a data
group name as well as values on the object selection parameters, the values
specified in object selection parameters act as a filter for the items defined to the
data group. The sketch following this list shows all three forms.
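As a hedged sketch, the three ways of selecting objects might look as follows on the
CMPFILA command. The FILE and SYS2 keywords, and the DG1, LIBX, ABC, and
SYSB names, are assumptions based on the prompts described in the procedures
later in this chapter.

    CMPFILA DGDFN(DG1)                                /* data group only   */
    CMPFILA DGDFN(*NONE) FILE((ABC LIBX)) SYS2(SYSB)  /* selectors only    */
    CMPFILA DGDFN(DG1) FILE((A* LIBX))                /* selectors filter  */
                                                      /* the data group    */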
Unique parameters
The following parameters for object selection are unique to the compare attributes
commands and allow you to specify an additional level of detail when comparing
objects or files.
Unique File and Object elements: The following are unique elements on the File
parameter (CMPFILA command) and Objects parameter (CMPOBJA command):
Member: On the CMPFILA command, the value specified on the Member
element is only used when *MBR is also specified on the Comparison level
parameter.
Object attribute: The Object attribute element enables you to select particular
characteristics of an object or file, and provides a level of filtering. For details, see
CMPFILA supported object attributes for *FILE objects on page 395 and
CMPOBJA supported object attributes for *FILE objects on page 395.
System 2: The System 2 parameter identifies the remote system name, and
represents the system to which objects on the local system are compared.
This parameter is ignored when a data group is specified, since the system 2
information is derived from the data group. A value is required if no data group is
specified.
Comparison level (CMPFILA only): The Comparison level parameter indicates
whether attributes are compared at the file level or at the member level.
System 1 ASP group and System 2 ASP group (CMPFILA and CMPOBJA only):
The System 1 ASP group and System 2 ASP group parameters identify the name of
the auxiliary storage pool (ASP) group where objects configured for replication may
reside. The ASP group name is the name of the primary ASP device within the ASP
group. This parameter is ignored when a data group is specified.
Choices for selecting attributes to compare
The Attributes to compare parameter allows you to select which combination of
attributes to compare.
Each compare attribute command supports an extensive list of attributes. Each
command provides the ability to select pre-determined sets of attributes (basic or
extended), all supported attributes, as well as any other unique combination of
attributes that you require.
The basic set of attributes is intended to compare attributes that provide an indication
that the objects compared are the same, while avoiding attributes that may be
different but do not provide a valid indication that objects are not synchronized, such
as the create timestamp (CRTTSP) attribute. Some objects, for example, cannot be
replicated using IBM's save and restore technology. Therefore, the creation date
established on the source system is not maintained on the target system during the
replication process. The comparison commands take this factor into consideration
and check the creation date for only those objects whose values are retained during
replication.
The extended set of attributes includes the basic set of attributes and some additional
attributes.
The following topics list the supported attributes for each command:
Attributes compared and expected results - #FILATR, #FILATRMBR audits on
page 581
Attributes compared and expected results - #OBJATR audit on page 586
Attributes compared and expected results - #IFSATR audit on page 594
Attributes compared and expected results - #DLOATR audit on page 596
All comparison attributes supported by a specific compare attribute command may not
be applicable for all object types supported by the command. For example,
CMPOBJA supports a large number of object types and related comparison
attributes. There are many cases where a specific comparison attribute is only
valid for a particular object type.
Comparison attributes not supported by a given object type are ignored. For example,
the auto-start job entries attribute is valid for subsystem descriptions (*SBSD); for all
other object types selected by the request, it is ignored.
If a data group is specified on a compare request, configuration data is used when
comparing objects that are identified for replication through the system journal. If an
object's configured object auditing value (OBJAUD) is *NONE, its attribute changes
are not replicated. When differences are detected on attributes of such an object, they
are reported as *EC (equal configuration) instead of being reported as *NE (not
equal).
For *FILE objects configured for replication through the system journal and configured
to omit T-ZC journal entries, also see Omit content (OMTDTA) and comparison
commands on page 364.
CMPFILA supported object attributes for *FILE objects
When you specify a data group to compare, the CMPFILA command obtains
information from the configured data group entries for all PF and LF files and their
subtypes. Those files that are within the name space created by data group entries
are compared.
Table 57 lists the extended attributes for objects of type *FILE that are supported as
values on the Object attribute element.
CMPOBJA supported object attributes for *FILE objects
When you specify a data group to compare, the CMPOBJA command obtains data
group information from the data group object entries. Those objects defined to the
data group object entries are compared.
The default value on the Object attribute element is *ALL, which represents the entire
list of supported attributes. Any value is supported, but a list of recommended
attributes is available in the online help.
Table 57. CMPFILA supported extended attributes for *FILE objects
  Object attribute   Description
  *ALL               All physical and logical file types are selected for processing
  LF                 Logical file
  LF38               Files of type LF38
  PF                 Physical file types, including PF, PF-SRC, and PF-DTA
  PF-DTA             Files of type PF-DTA
  PF-SRC             Files of type PF-SRC
  PF38               Files of type PF38, including PF38, PF38-SRC, and PF38-DTA
  PF38-DTA           Files of type PF38-DTA
  PF38-SRC           Files of type PF38-SRC
Comparing file and member attributes
You can compare file attributes to ensure that files and members needed for
replication exist on both systems or any time you need to verify that files are
synchronized between systems. You can optionally specify that results of the
comparison are placed in an outfile.
Note: If you have automation programs monitoring escape messages for differences
in file attributes, be aware that differences due to active replication (Step 16)
are signaled via a new difference indicator (*UA) and escape message. See
the auditing and reporting topics in this book.
To compare the attributes of files and members, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 1
(Compare file attributes) and press Enter.
3. The Compare File Attributes (CMPFILA) command appears. At the Data group
definition prompts, do one of the following:
To compare attributes for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
To compare files by name only, specify *NONE and continue with the next step.
To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see Object selection for
Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Comparison level prompt, accept the default to compare files at a file level
only. Otherwise, specify *MBR to compare files at a member level.
Note: If *FILE is specified, the Member prompt is ignored (see Step 4b).
7. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes based on whether the comparison is at a file or member level or
press F4 to see a valid list of attributes.
8. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 7, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Report type prompt, specify the level of detail for the output report.
12. At the Output prompt, do one of the following:
To generate print output, accept *PRINT and press Enter.
To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 14.
To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 14.
13. The User data prompt appears if you selected *PRINT or *BOTH in Step 12.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 18.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the Maximum replication lag prompt, specify the maximum amount of time
between when a file in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, specify *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.
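Once you are comfortable with the prompts, the procedure above collapses to a
single command. The following is a hedged sketch: DGDFN, CMPLVL, CMPATR, and
RPTTYPE are the parameter names given earlier in this book, while the FILE,
OUTPUT, OUTFILE, and OUTMBR keywords and the DG1, ORDERS, LIBX, and
MYLIB/FILARSLT names are assumptions for illustration.

    CMPFILA  DGDFN(DG1) FILE((ORDERS LIBX)) +
             CMPLVL(*MBR) CMPATR(*BASIC) RPTTYPE(*DIF) +
             OUTPUT(*OUTFILE) OUTFILE(MYLIB/FILARSLT) +
             OUTMBR(*FIRST *REPLACE)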
Comparing object attributes
You can compare object attributes to ensure that objects needed for replication exist
on both systems or any time you need to verify that objects are synchronized between
systems. You can optionally specify that results of the comparison are placed in an
outfile.
Note: If you have automation programs monitoring escape messages for differences
in object attributes, be aware that differences due to active replication
(Step 15) are signaled via a new difference indicator (*UA) and escape
message. See the auditing and reporting topics in this book.
To compare the attributes of objects, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 2
(Compare object attributes) and press Enter.
3. The Compare Object Attributes (CMPOBJA) command appears. At the Data
group definition prompts, do one of the following:
To compare attributes for all objects defined by the data group object entries
for a particular data group definition, specify the data group name and skip to
Step 6.
To compare objects by object name only, specify *NONE and continue with the
next step.
To compare a subset of objects defined to a data group, specify the data group
name and continue with the next step.
4. At the Object prompts, you can specify elements for one or more object selectors
that either identify objects to compare or that act as filters to the objects defined to
the data group indicated in Step 3. For more information, see Object selection for
Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the Object and library prompts, specify the name or the generic value you
want.
b. At the Object type prompt, accept *ALL or specify a specific object type to
compare.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 object and System 2 library prompts, if the object and library
names on system 2 are equal to system 1, accept the defaults. Otherwise,
specify the name of the object and library to which objects on the local system
are compared.
Note: The System 2 object and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing objects not defined
to a data group. If necessary, specify the name of the remote system to which
objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Report type prompt, specify the level of detail for the output report.
11. At the Output prompt, do one of the following:
To generate print output, accept *PRINT and press Enter.
To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 13.
To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 13.
12. The User data prompt appears if you selected *PRINT or *BOTH in Step 11.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 17.
13. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
15. At the Maximum replication lag prompt, specify the maximum amount of time
between when an object in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
16. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
17. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
20. To start the comparison, press Enter.
Comparing IFS object attributes
You can compare IFS object attributes to ensure that IFS objects needed for
replication exist on both systems or any time you need to verify that IFS objects are
synchronized between systems. You can optionally specify that results of the
comparison are placed in an outfile.
Note: If you have automation programs monitoring for differences in IFS object
attributes, be aware that differences due to active replication (Step 13) are
signaled via a new difference indicator (*UA) and escape message. See the
auditing and reporting topics in this book.
To compare the attributes of IFS objects, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 3
(Compare IFS attributes) and press Enter.
3. The Compare IFS Attributes (CMPIFSA) command appears. At the Data group
definition prompts, do one of the following:
To compare attributes for all IFS objects defined by the data group IFS object
entries for a particular data group definition, specify the data group name and
skip to Step 6.
To compare IFS objects by object path name only, specify *NONE and continue
with the next step.
To compare a subset of IFS objects defined to a data group, specify the data
group name and continue with the next step.
4. At the IFS objects prompts, you can specify elements for one or more object
selectors that either identify IFS objects to compare or that act as filters to the IFS
objects defined to the data group indicated in Step 3. For more information, see
Object selection for Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the Object path name prompt, accept *ALL or specify the name or the
generic value you want.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
Note: The *ALL default is not valid if a data group is specified on the Data
group definition prompts.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
compare.
e. At the Include or omit prompt, specify the value you want.
f. At the System 2 object path name and System 2 name pattern prompts, if the
IFS object path name and name pattern on system 2 are equal to system 1,
accept the defaults. Otherwise, specify the name of the path name and pattern
to which IFS objects on the local system are compared.
Note: The System 2 object path name and System 2 name pattern values are
ignored if a data group is specified on the Data group definition prompts.
g. Press Enter.
5. The System 2 parameter prompt appears if you are comparing IFS objects not
defined to a data group. If necessary, specify the name of the remote system to
which IFS objects on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following:
To generate print output, accept *PRINT and press Enter.
To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 11.
To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time
between when an IFS object in the data group changes and when replication of
the change is expected to be complete, or accept *DFT to use the default
maximum time of 300 seconds (5 minutes). You can also specify *NONE, which
indicates that comparisons should occur without consideration for replication in
progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To start the comparison, press Enter.
Comparing DLO attributes
You can compare DLO attributes to ensure that DLOs needed for replication exist on
both systems or any time you need to verify that DLOs are synchronized between
systems. You can optionally specify that results of the comparison are placed in an
outfile.
Note: If you have automation programs monitoring escape messages for differences
in DLO attributes, be aware that differences due to active replication (Step 13)
are signaled via a new difference indicator (*UA) and escape message. See
the auditing and reporting topics in this book.
To compare the attributes of DLOs, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 4
(Compare DLO attributes) and press Enter.
3. The Compare DLO Attributes (CMPDLOA) command appears. At the Data group
definition prompts, do one of the following:
To compare attributes for all DLOs defined by the data group DLO entries for a
particular data group definition, specify the data group name and skip to
Step 6.
To compare DLOs by path name only, specify *NONE and continue with the
next step.
To compare a subset of DLOs defined to a data group, specify the data group
name and continue with the next step.
4. At the Document library objects prompts, you can specify elements for one or
more object selectors that either identify DLOs to compare or that act as filters to
the DLOs defined to the data group indicated in Step 3. For more information, see
Object selection for Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the DLO path name prompt, accept *ALL or specify the name or the generic
value you want.
b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the
scope of DLOs to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the DLO path name.
Note: The *ALL default is not valid if a data group is specified on the Data
group definition prompts.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to
compare.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, specify the value you want.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if
the DLO path name and name pattern on system 2 are equal to system 1,
accept the defaults. Otherwise, specify the name of the path name and pattern
to which DLOs on the local system are compared.
Note: The System 2 DLO path name and System 2 DLO name pattern values
are ignored if a data group is specified on the Data group definition
prompts.
h. Press Enter.
5. The System 2 parameter prompt appears if you are comparing DLOs not defined
to a data group. If necessary, specify the name of the remote system to which
DLOs on the local system are compared.
6. At the Attributes to compare prompt, accept *BASIC to compare a pre-determined
set of attributes or press F4 to see a valid list of attributes.
7. At the Attributes to omit prompt, accept *NONE to compare all attributes specified
in Step 6, or enter the attributes to exclude from the comparison. Press F4 to see
a valid list of attributes.
8. At the Report type prompt, specify the level of detail for the output report.
9. At the Output prompt, do one of the following:
To generate print output, accept *PRINT and press Enter.
To generate both print output and an outfile, specify *BOTH and press Enter.
Skip to Step 11.
To generate an outfile, specify *OUTFILE and press Enter. Skip to Step 11.
10. The User data prompt appears if you selected *PRINT or *BOTH in Step 9.
Accept the default to use the command name to identify the spooled output or
specify a unique name. Skip to Step 15.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the Maximum replication lag prompt, specify the maximum amount of time
between when a DLO in the data group changes and when replication of the
change is expected to be complete, or accept *DFT to use the default maximum
time of 300 seconds (5 minutes). You can also specify *NONE, which indicates
that comparisons should occur without consideration for replication in progress.
Note: This parameter is only valid when a data group is specified in Step 3.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To start the comparison, press Enter.
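For reference, a comparable non-prompted request might look like the following
sketch. The DLO keyword and the element order (path name, subtree, name pattern,
DLO type, owner, include or omit) are assumed from the prompt sequence above,
and the /ACCTFLR folder and SYSB system names are hypothetical.

    CMPDLOA DGDFN(*NONE) SYS2(SYSB) +
            DLO(('/ACCTFLR' *ALL *ALL *ALL *ALL *INCLUDE))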
CHAPTER 19 Comparing file record counts and file
member data
This chapter describes the features and capabilities of the Compare Record Counts
(CMPRCDCNT) command and the Compare File Data (CMPFILDTA) command.
The topics in this chapter include:
Comparing file record counts on page 408 describes the CMPRCDCNT
command and provides a procedure for performing the comparison.
Significant features for comparing file member data on page 411 identifies
enhanced capabilities available for use when comparing file member data.
Considerations for using the CMPFILDTA command on page 412 describes
recommendations and restrictions of the command. This topic also describes
considerations for security, use with firewalls, comparing records that are not
allocated, as well as comparing records with unique keys, triggers, and
constraints.
Specifying CMPFILDTA parameter values on page 416 provides additional
information about the parameters for selecting file members to compare and using
the unique parameters of this command.
Advanced subset options for CMPFILDTA on page 422 describes how to use
the capability provided by the Advanced subset options (ADVSUBSET)
parameter.
Ending CMPFILDTA requests on page 426 describes how to end a CMPFILDTA
request that is in progress and describes the results of ending the job.
Comparing file member data - basic procedure (non-active) on page 427
describes how to compare file data in a data group that is not active.
Comparing and repairing file member data - basic procedure on page 430
describes how to compare and repair file data in a data group that is not active.
Comparing and repairing file member data - members on hold (*HLDERR) on
page 433 describes how to compare and repair file members that are held due to
error using active processing.
Comparing file member data using active processing technology on page 436
describes how to use active processing to compare file member data.
Comparing file member data using subsetting options on page 439 describes
how to use the subset feature of the CMPFILDTA command to compare a portion
of member data at one time.
Comparing file record counts
The Compare Record Counts (CMPRCDCNT) command allows you to compare the
record counts of members of a set of physical files between two systems. This
command compares the number of current records (*CURRCDS) and the number of
deleted records (*NBRDLTRCDS) for members of physical files that are defined for
replication by an active data group. In resource-constrained environments, this
capability provides a less-intensive means to gauge whether files are likely to be
synchronized.
Note: Equal record counts suggest but do not guarantee that members are
synchronized. To check for file data differences, use the Compare File Data
(CMPFILDTA) command. To check for attribute differences, use the Compare
File Attributes (CMPFILA) command.
Members to be processed must be defined to a data group that permits replication
from a user journal. Journaling is required on the source system. User journal
replication processes must be active when this command is used.
Members on both systems can be actively modified by applications and by MIMIX
apply processes while this command is running.
For information about the results of a comparison, see What differences were
detected by #MBRRCDCNT on page 576.
The #MBRRCDCNT audit calls the CMPRCDCNT command during its compare
phase. Unlike other audits, the #MBRRCDCNT audit does not have an associated
recovery phase. Differences detected by this audit appear as not recovered in the
Audit Summary user interfaces. Any repairs must be undertaken manually, in one of
the following ways:
In MIMIX Availability Manager, repair actions are available for specific errors when
viewing the output file for the audit.
Run the #FILDTA audit for the data group to detect and correct problems.
Run the Synchronize DG File Entry (SYNCDGFE) command to correct problems.
To compare file record counts
Do the following to compare record counts for an active data group:
1. From a command line, type installation_library/CMPRCDCNT and press
F4 (Prompt).
2. The Compare Record Counts (CMPRCDCNT) display appears. At the Data group
definition prompts, do one of the following:
To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 4.
To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
3. At the File prompts, you can specify elements for one or more object selectors to
act as filters to the files defined to the data group indicated in Step 2. For more
information, see Object selection for Compare and Synchronize commands on
page 372.
You can specify as many as 300 object selectors by using the + for more prompt.
For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Include or omit prompt, specify the value you want.
4. At the Report type prompt, do one of the following:
If you want all compared objects to be included in the report, accept the
default.
If you only want objects with detected differences to be included in the report,
specify *DIF.
5. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 9.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
6. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
7. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
8. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
9. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To start the comparison, press Enter.
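A typical non-prompted request might look like the following sketch, which compares
record counts for all files in hypothetical data group DG1 and writes differences to an
outfile. DGDFN and RPTTYPE are documented parameters; the OUTPUT, OUTFILE,
and OUTMBR keywords and the MYLIB/RCDCNT name are assumptions.

    installation_library/CMPRCDCNT DGDFN(DG1) RPTTYPE(*DIF) +
        OUTPUT(*OUTFILE) OUTFILE(MYLIB/RCDCNT) +
        OUTMBR(*FIRST *REPLACE)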
Significant features for comparing file member data
The Compare File Data (CMPFILDTA) command provides the ability to compare data
within members of physical files. The CMPFILDTA command is called
programmatically by MIMIX AutoGuard functions that help you determine whether
files are synchronized and whether your MIMIX environment is prepared for
switching. You can also use the CMPFILDTA command interactively or call it from a
program.
Unique features of the CMPFILDTA command include active server technology and
isolated data correction capability. Together, these features enable the detection and
correction of file members that are not synchronized while applications and replication
processes remain active. File members that are held due to an error can also be
compared and repaired.
Repairing data
You can optionally choose to have the CMPFILDTA command repair differences it
detects in member data between systems.
When files are not synchronized, the CMPFILDTA command provides the ability to
resynchronize the file at the record level by sending only the data for the incorrect
member to the target system. (In contrast, the Synchronize DG File Entry
(SYNCDGFE) command would resynchronize the file by transferring all data for the
file from the source system to the target system.)
Active and non-active processing
The Process while active (ACTIVE) parameter determines whether a requested
comparison can occur while application and replication activity is present.
Two modes of operation are available: active and non-active. In non-active mode,
CMPFILDTA assumes that all files are quiesced and performs file comparisons and
repairs without regard to application or replication activity. In active mode, processing
begins in the same manner, performing an internal compare and generating a list of
records that are not synchronized. This list is not reported, however. Instead,
CMPFILDTA checks the mismatched records against the activity that is happening on
the source system and the apply activity that is occurring on the target. If there is a
member that needs repair, CMPFILDTA will then report the error. At that time, the
command will also repair the target file member if *YES was specified on the Repair
parameter.
During active processing of a member, the DB apply threshold (DBAPYTHLD)
parameter can be used to specify what action CMPFILDTA should take if the
database apply session backlog exceeds the threshold warning value configured for
the database apply process.
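As a sketch, an active-mode compare and repair might be requested as follows.
ACTIVE, REPAIR, and DBAPYTHLD are the parameter names given in this topic, but
the *TGT value shown here (repair on the target system) is an assumption to be
checked against the command help, and DG1 is a hypothetical data group name.

    CMPFILDTA DGDFN(DG1) ACTIVE(*YES) REPAIR(*TGT)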
Processing members held due to error
The CMPFILDTA command also provides the ability to compare and repair members
being held due to error (*HLDERR). When members in *HLDERR status are
processed, the CMPFILDTA command works cooperatively with the database apply
(DBAPY) process to compare and repair the file members and, when possible,
restore them to an active state. To repair members in *HLDERR status, you must also
specify that the repair be performed on the target system and request that active
processing be enabled.
To support the cooperative efforts of CMPFILDTA and DBAPY, the following
transitional states are used for file entries undergoing compare and repair processing:
*CMPRLS - The file in *HLDERR status has been released. DBAPY will clear the
journal entry backlog by applying the file entries in catch-up mode.
*CMPACT - The journal entry backlog has been applied. CMPFILDTA and
DBAPY are cooperatively repairing the member previously in *HLDERR status,
and incoming journal entries continue to be applied in forgiveness mode.
When a member held due to error is being processed by the CMPFILDTA command,
the entry transitions from *HLDERR status to *CMPRLS to *CMPACT. The member
then changes to *ACTIVE status if compare and repair processing is successful. In
the event that compare and repair processing is unsuccessful, the member-level entry
is set back to *HLDERR.
Additional features
The CMPFILDTA command incorporates many other features to increase
performance and efficiency.
Subsetting and advanced subsetting options provide a significant degree of flexibility
for performing periodic checks of a portion of the data within a file.
Parallel processing uses multi-threaded jobs to break up file processing into smaller
groups for increased throughput. Rather than having a single-threaded job on each
system, multiple thread groups break up the file into smaller units of work. This
technology can benefit environments with multiple processors as well as systems with
a single processor.
Considerations for using the CMPFILDTA command
Before you use the CMPFILDTA command, you should be aware of the information in
this topic.
Recommendations and restrictions
It is recommended that the CMPFILDTA command be used in tandem with the
CMPFILA command. Use the CMPFILA command to determine whether you have a
matching set of files and attributes on both systems and use the CMPFILDTA
command to compare the actual data within the files.
Keyed replication - Although you can run the CMPFILDTA command on keyed files,
the command only supports files configured for *POSITIONAL replication. The
CMPFILDTA command cannot compare files configured for *KEYED replication.
SNA environments - CMPFILDTA requires a TCP/IP transfer definition; you cannot
use SNA. You can be configured for SNA, but then you must override CMPFILDTA to
refer to a transfer definition that specifies *TCP as the communications protocol. For
more information, see System-level communications on page 143.
Apply threshold and apply backlog - Do not compare data using active processing
technology if the apply process is 180 seconds or more behind, or has exceeded a
threshold limit.
Using the CMPFILDTA command with firewalls
The CMPFILDTA command uses a communications port based on the port number
specified in the transfer definition. If you need to run simultaneous CMPFILDTA jobs,
you must open the equivalent number of ports in your firewall. For example, if the port
number in your transfer definition is 5000 and you want to run 10 CMPFILDTA jobs at
once, you should open at least 10 ports in your firewall: minimally, ports 5001
through 5010. If you attempt to run more jobs than there are open ports, those jobs
will fail.
Security considerations
You should take extra precautions when using CMPFILDTA's repair function, as it is
capable of accessing and modifying data on your system.
To compare file data, you must have read access on both systems. When using the
repair function, write access on the system to be repaired may also be necessary
when active technology is not used.
CMPFILDTA builds upon the RUNCMD support in MIMIX. CMPFILDTA starts a
remote process using RUNCMD, which requires two conditions to be true. First, the
user profile of the job that is invoking CMPFILDTA must exist on the remote system
and have the same password on the remote system as it does on the local system.
Second, the user profile must have appropriate read or update access to the
members to be compared or repaired. If active processing and repair is requested,
only read access is needed. In this case, the repair processing would be done by the
database apply process.
Comparing allocated records to records not yet allocated
In some situations, members differ in the number of records allocated. One member
may have allocated records, while the corresponding records of the other member are
not yet allocated. If the member to be repaired is the smaller of the two members,
records are added to make the members the same size.
If the member to be repaired is the larger of the two members, however, the excess
records are deleted. When MIMIX replication encounters these situations, no error is
generated nor is the member placed on error hold.
If one or more members differ in the manner described above, a distinct escape
message is issued. If you use CMPFILDTA in a CL program, you may wish to monitor
these escape messages specifically.
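If you run CMPFILDTA from a CL program, you can monitor for these messages and
branch to recovery logic. The following is a minimal sketch, not documented syntax:
the data group name is illustrative, and MSGID(LVE0000) generically monitors all
messages with the product's LVE prefix (an assumption); substitute the specific
escape message ID from the command documentation to monitor only the
member-size condition.
  PGM
    /* Compare file data; the data group name is illustrative */
    CMPFILDTA DGDFN(MYDG) REPAIR(*NONE) ACTIVE(*NO)
    /* Generic monitor for any LVE-prefixed escape message */
    MONMSG MSGID(LVE0000) EXEC(GOTO CMDLBL(DIFF))
    RETURN
  DIFF: SNDPGMMSG MSG('CMPFILDTA reported differences or errors.')
  ENDPGM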
Comparing files with unique keys, triggers, and constraints
If members being repaired have unique keys, active triggers, or constraints, special
care should be taken. An update or insert repair action that results in one or more
duplicate key exceptions automatically results in the deletion of records with duplicate
keys.
Note: The records that could be deleted include those outside the subset of records
being compared. Deletion of records with duplicate keys is not recorded in the
outfile statistics.
If triggers are enabled, any compare or repair action causes the applicable trigger to
be invoked. Triggers should be disabled if this action is not desired by the user. When
a compare is specified, read triggers are invoked as records are read. If repair action
is specified, update, insert, and delete triggers are invoked as records are repaired.
Table 58 describes the interaction of triggers with CMPFILDTA repair and active
processing.
Attention: If an attempt is made to use one of the unsupported
situations listed in Table 58, the job that invokes the trigger will end
abruptly. You will see a CEE0200 information message in the job
log shortly before the job ends. You may also see an MCH2004
message.
Table 58. CMPFILDTA and trigger support

  Trigger type                 Trigger activation  CMPFILDTA -       CMPFILDTA -      CMPFILDTA
                               group (ACTGRP)      Repair on system  Process while    support
                                                   (REPAIR)          active (ACTIVE)
  Read                         *NEW                Any value         Any value        Not supported
  Read                         NAMED or *CALLER    Any value         Any value        Supported
  Update, insert, and delete   *NEW                *NONE             Any value        Supported
  Update, insert, and delete   *NEW                Any value other   *NO              Not supported
                                                   than *NONE
  Update, insert, and delete   *NEW                Any value other   *YES             Supported
                                                   than *NONE
  Update, insert, and delete   NAMED or *CALLER    Any value         Any value        Supported
Avoiding issues with triggers
It is possible to avoid potential trigger restrictions. You can use any one of the
following techniques, which are listed in the preferred order:
- Recreate the trigger program, specifying ACTGRP(*CALLER) or ACTGRP(NAMED)
- Use the Update Program (UPDPGM) command to change to ACTGRP(NAMED)
- Disable trigger programs on the file (see the example following this list)
- Use the Synchronize Objects (SYNCOBJ) command rather than CMPFILDTA
- Use the Synchronize Data Group File Entries (SYNCDGFE) command rather than
  CMPFILDTA
- Use the Copy Active File (CPYACTF) command rather than CMPFILDTA
- Save and restore outside of MIMIX
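For example, to temporarily disable all trigger programs on a file before a compare
and re-enable them afterward, you could use the IBM i Change Physical File Trigger
(CHGPFTRG) command; the library and file names here are illustrative:
  CHGPFTRG FILE(APPLIB/ORDERS) TRG(*ALL) STATE(*DISABLED)
  /* ... run CMPFILDTA against the file ... */
  CHGPFTRG FILE(APPLIB/ORDERS) TRG(*ALL) STATE(*ENABLED)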
Referential integrity considerations
Referential integrity enforcement can present complex CMPFILDTA repair scenarios.
Like triggers, a delete rule of cascade, set null, or set default can cause records
in other tables to be modified or deleted as a result of a repair action. In other
situations, a repair action may be prevented due to referential integrity constraints.
Consider the case where a foreign key is defined between a department table and
an employee table. The referential integrity constraint requires that records in the
employee table only be permitted if the department number of the employee record
corresponds to a row in the department table with the same department number.
It will not be possible for CMPFILDTA repair processing to add a row to the employee
table if the corresponding parent row is not present in the department table. Because
of this, you should use CMPFILDTA to repair parent tables before using CMPFILDTA
to repair dependent tables. Note that the order in which you specify the tables on the
CMPFILDTA command is not necessarily the order in which they will be processed,
so you must issue the command once for the parent table, and then again for the
dependent table.
Repairing the parent department table first may present its own problems. If
CMPFILDTA attempts to delete a row in the department table and the delete rule for
the constraint is restrict, the row deletion may fail if the employee table still contains
records corresponding to the department to be deleted. Such constraints should use a
delete rule of cascade, set null, or set default. Otherwise, CMPFILDTA may not
be able to make all repairs.
See the IBM Database Programming manual (SC41-5701) for more information on
referential integrity.
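As an illustration of the delete rules discussed above, the following sketch uses the
IBM Add Physical File Constraint (ADDPFCST) command to define the
department/employee relationship with a delete rule of cascade; all object and field
names are hypothetical:
  ADDPFCST FILE(APPLIB/EMPLOYEE) TYPE(*REFCST) KEY(DEPTNO) +
           CST(EMPDEPTFK) PRNFILE(APPLIB/DEPT) PRNKEY(DEPTNO) +
           DLTRULE(*CASCADE)
With this rule in effect, a CMPFILDTA repair that deletes a department row also
deletes that department's employee rows rather than failing with a restrict violation.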
Job priority
When run, the remote CMPFILDTA job uses the run priority of the local CMPFILDTA
job. However, the run priority of either CMPFILDTA job is superseded if a
CMPFILDTA class object (*CLS) exists in the installation library of the system on
which the job is running.
Note: Use the Change Job (CHGJOB) command on the local system to modify the
run priority of the local job. CMPFILDTA uses the priority of the local job to set
the priority of the remote job, so that both jobs have the same run priority. To
set the remote job to run at a different priority than the local job, use the
Create Class (CRTCLS) command to create a *CLS object for the job you
want to change.
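For example, to make CMPFILDTA jobs on a system run at priority 35, you could
create a class object named CMPFILDTA in the installation library. The library name
MIMIX shown here is an assumption; use your actual product library:
  CRTCLS CLS(MIMIX/CMPFILDTA) RUNPTY(35) +
         TEXT('Run priority for CMPFILDTA jobs')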
CMPFILDTA and network inactivity
When the CMPFILDTA command processes large object selection lists, there may be
an extended period of communications inactivity. If the period of inactivity exceeds the
timeout value of any network inactivity timer in effect, the network timeout will
terminate the communications session, causing the CMPFILDTA job to end. To
prevent this from occurring, you can use the Change TCP/IP Attributes (CHGTCPA)
command to change the TCP Keep Alive (TCPKEEPALV) value so that it is lower than
the network inactivity timeout value.
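For example, if your network drops sessions after 10 minutes of inactivity, a
keep-alive interval of 5 minutes keeps the session alive; the TCPKEEPALV value is
specified in minutes:
  CHGTCPA TCPKEEPALV(5)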
Specifying CMPFILDTA parameter values
This topic provides information about specific parameters of the CMPFILDTA
command.
Specifying file members to compare
The CMPFILDTA command allows you to work with physical file members only. You
can select the files to compare by using a data group, the object selection
parameters, or both.
By data group only: If you specify only by data group, the list of candidate
objects to compare is determined by the data group configuration.
By object selection parameters only: You can compare file members that are
not replicated by a data group. By specifying *NONE for the data group and
specifying file and member information on the object selection parameters, you
define a name space on each system from which a list of candidate objects is
created.
The Object attribute element on the File parameter enables you to select
particular characteristics of a file. Table 59 lists the extended attributes for objects
of type *FILE that are supported as values for the Object attribute element.
By data group and object selection parameters: When you specify a data
group name as well as values on the object selection parameters, the values
specified in object selection parameters act as a filter for the items defined to the
data group.
Detailed information about object selection is available in Object selection for
Compare and Synchronize commands on page 372.
Tips for specifying values for unique parameters
The CMPFILDTA command includes several parameters that are unique among
MIMIX commands.
Repair on system: When you choose to repair files that do not match, CMPFILDTA
allows you to select the system on which the repair should be made.
File repairs can be performed on system 1, system 2, local, target, source, or you can
specify the system definition name.
Note: *TGT and *SRC are only valid when a data group is specified. However, you
cannot select *SRC when *YES is specified for the Process while active
parameter. Refer to the Process while active section.
Process while active: CMPFILDTA includes while-active support. This parameter
allows you to indicate whether compares should be made while file activity is taking
place. For efficiency's sake, it is always best to perform active repairs during a period
of low activity. CMPFILDTA, however, uses a mechanism that retries comparison
activity until it detects no interference from active files.
Three values are allowed on the Process while active parameter: *DFT, *NO, and
*YES. The *NO option should be used when the files being compared are not actively
being updated by either application activity or MIMIX replication activity. All file repairs
are handled directly by CMPFILDTA. *YES is only allowed when a data group is
specified and should be used when the files being compared are actively being
updated by application activity or MIMIX replication activity. In this case, all file repairs
are routed through the data group and require that the data group is active. If a data
group is specified, the default value of *DFT is equivalent to *YES. If a data group is
not specified, *DFT is the same as *NO.
Specifying *NO for the Process while active parameter is the recommended option for
running in a quiesced environment. When used in combination with an active data
group, it assumes there is no application activity and MIMIX replication is current. If
you specify *NO for the Process while active parameter in combination with repairing
the file, the data group apply process must be configured not to lock the files on the
apply system. This configuration can be accomplished by specifying *NO on the Lock
on apply parameter of the data group definition.

Table 59. CMPFILDTA supported extended attributes for *FILE objects

  Object attribute   Description
  PF                 Physical file types, including PF, PF-SRC, and PF-DTA
  PF-DTA             Files of type PF-DTA
  PF-SRC             Files of type PF-SRC
  PF38               Files of type PF38, including PF38, PF38-SRC, and PF38-DTA
  PF38-DTA           Files of type PF38-DTA
  PF38-SRC           Files of type PF38-SRC
Note: Do not compare data using active processing technology if the apply
process is 180 seconds or more behind, or has exceeded a threshold limit.
File entry status: The File entry status parameter provides options for selecting
members with specific statuses, including members held due to error (*HLDERR).
When members in *HLDERR status are processed, the CMPFILDTA command works
cooperatively with the database apply (DBAPY) process to compare and repair
members held due to error and, when possible, restore them to an active state.
Valid values for the File entry status parameter are *ALL, *ACTIVE, and *HLDERR. A
data group must also be specified on the command or the parameter is ignored. The
default value, *ALL, indicates that all supported entry statuses (*ACTIVE and
*HLDERR) are included in compare and repair processing. The value *ACTIVE
processes only those members that are active. (The File entry status parameter was
introduced in V4R4 SPC05SP2; to preserve the previous behavior, specify
STATUS(*ACTIVE).) When *HLDERR is specified, only member-level entries being
held due to error are selected for processing. To repair members held due to error
using *ALL or *HLDERR, you must also specify that the repair be performed on the
target system and request that active processing be used.
System 1 ASP group and System 2 ASP group: The System 1 ASP group and
System 2 ASP group parameters identify the name of the auxiliary storage pool (ASP)
group where objects configured for replication may reside. The ASP group name is
the name of the primary ASP device within the ASP group. This parameter is ignored
when a data group is specified. You must be running on OS V5R2 or greater to use
these parameters.
Subsetting option: The Subsetting option parameter provides a robust means by
which to compare a subset of the data within members. In some instances, the value
you select will determine which additional elements are used when comparing data.
Several options are available on this parameter: *ALL, *ADVANCED, *ENDDTA, or
*RANGE. If *ALL is specified, all data within all selected files is compared, and no
additional subsetting is performed. The other options compare only a subset of the
data.
The following are common scenarios in which comparing a subset of your data is
preferable:
- If you only need to check a specific range of records, use *RANGE.
- When a member, such as a history file, is primarily modified with insert operations,
  only recently inserted data needs to be compared. In this situation, use *ENDDTA.
- If time does not permit a full comparison, you can compare a random sample
  using *ADVANCED.
- If you do not have time to perform a full comparison all at once but you want all
  data to be compared over a number of days, use *ADVANCED.
*RANGE indicates that the Subset range parameter will be used to specify the subset
of records to be compared. For more information, see the Subset range section.
If you select *ENDDTA, the Records at end of file parameter specifies how many
trailing records are compared. This value allows you to compare a selected number of
records at the end of all selected members. For more information, see the section
titled Records at end of file.
Advanced subsetting can be used to audit your entire database over a number of
days or to request that a random subset of records be compared. To specify
advanced subsetting, select *ADVANCED. For more information, see Advanced
subset options for CMPFILDTA on page 422.
Subset range: Subset range is enabled when *RANGE is specified on the Subsetting
option parameter, as described in the Subsetting option section.
Two elements are included: First record and Last record. These elements allow you to
specify a range of records to compare. If more than one member is selected for
processing, all members are compared using the same relative record number range.
Thus, using the range specification is usually only useful for a single member or a set
of members with related records.
- The First record element can be specified as *FIRST or as a relative record
  number. In the case of *FIRST, records in the member are compared beginning
  with the first record.
- The Last record element can be specified as *LAST or as a relative record
  number. In the case of *LAST, records in the member are compared up to, and
  including, the last record.
Advanced subset options: The Advanced subset options (ADVSUBSET) parameter
provides the ability to use sophisticated comparison techniques. For detailed information and
examples, see Advanced subset options for CMPFILDTA on page 422.
Records at end of file: The Records at end of file (ENDDTA) parameter allows you to
compare recently inserted data without affecting the other subsetting criteria. If you
specified *ENDDTA in the Subsetting option parameter, as indicated in the
Subsetting option section, only those records specified in the Records at end of file
parameter will be processed.
This parameter is also valid if values other than *ENDDTA were specified in the
Subsetting option. In this case, both records at the end of the file as well as any
additional subsetting options factor into the compare. If some records are selected
both by the ENDDTA parameter and by another subsetting option, those records are only
processed once.
The Records at end of file parameter can be specified as *NONE or number-of-
records. When *NONE is specified, records at the end of the members are not
compared unless they are selected by other subset criteria. To compare particular
records at the end of each member, you must specify the number of records.
The ENDDTA value is always applied to the smaller of the System 1 and System 2
members, and continues through to the end of the larger member. Let us assume
that you specify 200 for the ENDDTA value. If one system has 1000 records while the
other has 1100, relative records 801-1100 would be checked. The relative record
numbers of the last 200 records of the smaller file are compared as well as the
additional 100 relative record numbers due to the difference in member size.
Using the Records at end of file parameter in daily processing can keep you from
missing records that were inserted recently.
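For example, a nightly job might compare only the last 500 records of each selected
member. In this sketch the data group name is illustrative, and the SUBSET keyword
name is an assumption based on the Subsetting option parameter; prompt the
command (F4) to confirm the actual keywords:
  CMPFILDTA DGDFN(MYDG) SUBSET(*ENDDTA) ENDDTA(500)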
Specifying the report type, output, and type of processing
The options for selecting processing method, output format, and the contents of the
reported differences are similar to that provided for other MIMIX compare commands.
For additional details, see Report types and output formats on page 390.
System to receive output
The System to receive output (OUTSYS) parameter indicates the system on which
the output will be created. By default, the output is created on the local system.
When Output is *OUTFILE and Process while active is *YES, complete outfile
information is only available if the System to receive output parameter indicates that
the output file is on the data group target system. In this case, the outfile will be
updated as the database apply encounters journal entries relating to possible
mismatched records.
The Wait time (seconds) parameter can be used to ensure that all such outfile
updates are complete before the command completes.
Interactive and batch processing
On the Submit to batch parameter, the *YES default submits a multi-thread capable
batch job. When *NO is specified for the parameter, CMPFILDTA generates a batch
immediate job to do the bulk of the processing. A batch immediate job is not
processed through a job queue and is identified with a job type of BCI on the
WRKACTJOB screen. Similarly, if CMPFILDTA is issued from a batch job whose
ALWMLTTHD attribute is *NO, a batch immediate job will also be spawned.
In cases where a batch immediate job is generated, the original job waits for the batch
immediate job to complete and re-issues any messages generated by CMPFILDTA.
Interactive jobs are not permitted to have multiple threads, which are required for
CMPFILDTA processing. Thus, you need to be aware of the following issues when a
batch immediate job is generated:
- The identity of the job will be issued in a message in the original job.
- Since the batch immediate job cannot access the interactive job's QTEMP library,
  outfiles and files to be compared may not reside in QTEMP, even when
  CMPFILDTA is issued from a multi-thread capable batch job.
- Re-issued messages will not have the original from-program and to-program
  information. Instead, you must view the job log of the generated job to determine
  this information.
- Escape messages created prior to the final message will be converted to
  diagnostic messages.
- Canceling the interactive request will not cancel the batch immediate job.
Using the additional parameters
The following parameters allow you to specify an additional level of detail regarding
CMPFILDTA command processing. These parameters are available by pressing F10
(Additional parameters).
Transfer definition: The default for the Transfer definition parameter is *DFT. If a
data group was specified, the default uses the transfer definition associated with the
data group. If no data group was specified, the transfer definition associated with
system 2 is used.
The CMPFILDTA command requires that you have a TCP/IP transfer definition for
communication with the remote system. If your data group is configured for SNA,
override the SNA configuration by specifying the name of the transfer definition on the
command.
Number of thread groups: The Number of thread groups parameter indicates how
many thread groups should be used to perform the comparison. You can specify from
1 to 100 thread groups.
When using this parameter, it is important to balance the time required for processing
against the available resources. If you increase the number of thread groups in order
to reduce processing time, for example, you also increase processor and memory
use. The default, *CALC, will determine the number of thread groups automatically.
To maximize processing efficiency, the value *CALC does not calculate more than 25
thread groups.
The actual number of threads used in the comparison is based on the result of the
formula 2x + 1, where x is the value specified or the value calculated internally as the
result of specifying *CALC. When *CALC is specified, the CMPFILDTA command
displays a message showing the value calculated as the number of thread groups.
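For example, specifying 10 thread groups results in 21 threads (2 x 10 + 1), and the
*CALC maximum of 25 thread groups corresponds to at most 51 threads.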
Note: Thread groups are created for primary compare processing only. During
setup, multiple threads may be utilized to improve performance, depending on
the number of members selected for processing. The number of threads used
during setup will not exceed the total number of threads used for primary
compare processing. During active processing, only one thread will be used.
Wait time (seconds): The Wait time (seconds) value is only valid when active
processing is in effect and specifies the amount of time to wait for active processing to
complete. You can specify from 0 to 3600 seconds, or the default *NOMAX.
If active processing is enabled and a wait time is specified, CMPFILDTA processing
waits the specified time for all pending compare operations processed through the
MIMIX replication path to complete. In most cases, the *NOMAX default is highly
recommended.
DB apply threshold: The DB apply threshold parameter is only valid during active
processing and requires that a data group be specified. The parameter specifies what
action CMPFILDTA should take if the database apply session backlog exceeds the
threshold warning value configured for the database apply process. The default value
*END stops the requested compare and repair action when the database apply
threshold is reached; any repair actions that have not been completed are lost. The
value *NOMAX allows the compare and repair action to continue even when the
database apply threshold has been reached. Continuing processing when the apply
process has a large backlog may adversely affect performance of the CMPFILDTA
job and its ability to compare a file with an excessive number of outstanding entries.
Therefore, *NOMAX should only be used in exceptional circumstances.
Change date: The Change date parameter provides the ability to compare file
members based on the date they were last changed or restored on the source
system. This parameter specifies the date and time that MIMIX will use in determining
whether to process a file member. Only members changed or restored after the
specified date and time will be processed.
Members that have not been updated or restored since the specified timestamp will
not be compared. These members are identified in the output by a difference indicator
value of *EQ (DATE), which is omitted from results when the requested report type is
*DIF.
The shipped default value is *ALL. With *ALL, members are processed regardless of
when they were last changed or restored; the last changed and last restored
timestamps are ignored by the decision process.
When *AUDIT is specified, the compare start timestamp of the #FILDTA audit is used
in the determination. The command must specify a data group when this value is
used. The *AUDIT value can only be used if audit level *LEVEL30 was in effect at the
time the last audit was performed. If the audit level is lower, an error message is
issued. The audit level is available by displaying details for the audit (WRKAUD
command).
When *ALL or *AUDIT is specified for Date, the value specified for Time is ignored.
Note: Exercise caution when specifying actual date and time values. A specified
timestamp that is later than the start of the last audit can result in one or more
file members not being compared. Any member changed between the time of
its last audit and the specified timestamp will not be compared and therefore
cannot be reported if it is not synchronized. The recommended values for this
parameter are either *ALL or *AUDIT.
Advanced subset options for CMPFILDTA
You can use the Advanced subset options (ADVSUBSET) parameter on the Compare
File Data (CMPFILDTA) command for advanced techniques such as comparing
records over time and comparing a random sample of data. These techniques provide
additional assurance that files are replicated correctly.
For example, let us assume you have a limited batch window. You do not have time to
run a total compare every day, but you must ensure that all data is
compared over the course of a week. Using the advanced CMPFILDTA capability, you
can divide this work over a number of days.
Advanced subsetting makes it simple to accomplish this task by comparing 10
percent of your data each weeknight and completing the remaining 50 percent over
the weekend. However, as the following example demonstrates, it is always best to
compare a random representative sampling of data. The Advanced subset options
parameter also provides this capability.
For example, if a member contains 1000 records on Monday, records 1 through 100
will be compared on Monday. By Tuesday, perhaps the member has grown to 1500
records. The second 10 percent, to be processed on Tuesday, will contain records
151 through 300. Records 101 through 150 will not get checked at all. Advanced
subsetting provides you with an alternative that does not skip records when members
are growing.
Advanced subset options are applied independently for each member processed. The
advanced subset function assigns the data in each member to multiple
non-overlapping subsets in one of two ways. It also allows a specified range of these
subsets to be compared, which permits a representative sample subset of the data to
be compared. It also permits a full compare to be partitioned into multiple
CMPFILDTA requests that, in combination, assure that all data that existed at the
time of the first request is compared.
To use advanced subsetting, you will need to identify the following:
- The number of subsets or bins to define for the compare
- The manner in which records are assigned to bins
- The specific bins to process
Number of subsets: The first issue to consider when using advanced subset
options is how many subsets or bins to establish. The Number of subsets element is
the number of approximately equal-sized bins to define. These bins are numbered
from 1 up to the number specified (N). You must specify at least one bin. Each record
is assigned to one of these bins.
The Interleave element specifies the manner in which records are assigned to a bin.
Interleave: The Interleave factor specifies the mapping between the relative record
number and the bin number. There are two approaches that can be used.
If you specify *NONE, records in each member are divided on a percentage basis. For
example:

Table 60. Interleave *NONE

                              Member A on Monday   Member A on Tuesday
  Total records in member:    30                   45
  Number of subsets (bins):   3                    3
  Interleave:                 *NONE                *NONE
  Records assigned to bin 1:  1-10                 1-15
  Records assigned to bin 2:  11-20                16-30
  Records assigned to bin 3:  21-30                31-45

Note that when the total number of records in a member changes, the mapping also
changes. Records that were once assigned to bin 2 may in the future be assigned to
bin 1. If you wish to compare all records over the course of a few days, the changing
mapping may cause you to miss records. A specific Interleave value is preferable in
this case.
Using bytes, the Interleave value specifies a number of contiguous records that
should be assigned to each bin before moving to the next bin. Once the last bin is
filled, assignment restarts at the first bin. Let us assume you have specified an
interleave value of 20 bytes. The following example is based on the one provided in
Table 60:

Table 61. Interleave(20)

                              Member A on Monday   Member A on Tuesday
  Total records in member:    30                   45
  Record length:              10 bytes             10 bytes
  Number of subsets (bins):   3                    3
  Interleave (bytes):         20                   20
  Interleave (records):       2                    2
  Records assigned to bin 1:  1-2, 7-8, 13-14,     1-2, 7-8, 13-14,
                              19-20, 25-26         19-20, 25-26, 31-32,
                                                   37-38, 43-44
  Records assigned to bin 2:  3-4, 9-10, 15-16,    3-4, 9-10, 15-16,
                              21-22, 27-28         21-22, 27-28, 33-34,
                                                   39-40, 45
  Records assigned to bin 3:  5-6, 11-12, 17-18,   5-6, 11-12, 17-18,
                              23-24, 29-30         23-24, 29-30, 35-36,
                                                   41-42

If the Interleave and Number of subsets are constant, the mapping of relative record
numbers to bins is maintained, despite the growth of member size. Because every bin
is eventually selected, comparisons made over several days will compare every
record that existed on the first day.
In most circumstances, *CALC is recommended for the interleave specification. When
you select *CALC, the system determines how many contiguous bytes are assigned
to each bin before subsequent bytes are placed in the next bin. This calculated value
will not change due to member size changes.
Specifying *NONE or a very large interleave factor maximizes processing efficiency,
since data in each bin is processed sequentially. Specifying a very small interleave
factor can greatly reduce efficiency, as little sequential processing can be done before
the file must be repositioned. If you wish to compare a random sample, a smaller
interleave factor provides a more random, or scattered, sample to compare.
The next elements, First subset and Last subset, allow you to specify which
bins to process.
First and last subset: The First subset and Last subset values work in combination
to determine a range of bins to compare. For the First subset, the possible values are
*FIRST and subset-number. If you select *FIRST, the range to compare will start with
bin 1. Last subset has similar values, *LAST and subset-number. When you specify
*LAST, the highest numbered bin is the last one processed.
To compare a random sample of your data, specify a range of subsets that represent
the size of the sample. For example, suppose you wish to compare seven percent of
your data. If the number of subsets are 100, the first subset is 1, and the last subset is
7, seven percent of the data is compared. A first subset value of 21 and a last subset
value of 27 would also compare seven percent of your data, but it would compare a
different seven percent than the first example.
To compare all your data over the course of several days, specify the number of
subsets and interleave factor that allow you to size each day's workload as your
needs require. For example, you would keep the Number of subsets and Interleave
values constant, but vary the First and Last subset values each day. The following
settings could be used over the course of a week to compare all of your data:
Table 62. Using First and last subset to compare data

  Day of week   Number of        Interleave   First    Last     Percentage
                subsets (bins)                subset   subset   compared
  Monday        100              *CALC        1        10       10
  Tuesday       100              *CALC        11       20       10
  Wednesday     100              *CALC        21       30       10
  Thursday      100              *CALC        31       40       10
  Friday        100              *CALC        41       50       10
  Saturday      100              *CALC        51       65       15
  Sunday        100              *CALC        66       100      35

Note: You can automate these tasks using MIMIX Monitor. Refer to the MIMIX
Monitor documentation for more information.
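As a sketch, Monday's run from Table 62 might be requested as follows. The data
group name is illustrative, the SUBSET keyword name is an assumption, and the
ADVSUBSET elements are shown in the order described above (number of subsets,
interleave, first subset, last subset); prompt the command to confirm:
  CMPFILDTA DGDFN(MYDG) SUBSET(*ADVANCED) +
            ADVSUBSET(100 *CALC 1 10)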
Ending CMPFILDTA requests
The Compare File Data (CMPFILDTA) command, or a rule which calls it, can be long
running and may exceed the time you have available for it to run.
The CMPFILDTA command recognizes requests to end the job in a controlled
manner (ENDJOB OPTION(*CNTRLD)). Messages indicate the step within
CMPFILDTA processing at which the end was requested. The report and output file
contain as much information as possible with the data available at the step in
progress when the job ended. The output may not be accurate because the full
CMPFILDTA request did not complete.
The content of the report and output file is most valuable if the command completed
processing through the end of phase 1 compare. The output may be incomplete if the
end occurred earlier. If processing did not complete to a point where MIMIX can
accurately determine the result of the compare, the value *UN (unknown) is placed in
the Difference Indicator.
Note: If the CMPFILDTA command has been long running or has encountered many
errors, you may need to specify more time on the ENDJOB command's Delay
time, if *CNTRLD (DELAY) parameter. The default value of 30 seconds may
not be adequate in these circumstances.
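For example, to end a long-running CMPFILDTA job in a controlled manner with a
five-minute delay (the job qualifier is illustrative):
  ENDJOB JOB(123456/MIMIXOWN/CMPFILDTA) OPTION(*CNTRLD) DELAY(300)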
Comparing file member data - basic procedure (non-active)
You can use the CMPFILDTA command to ensure that data required for replication
exists on both systems and any time you need to verify that files are synchronized
between systems. You can optionally specify that results of the comparison are
placed in an outfile.
Before you begin, see the recommendations, restrictions, and security considerations
described in Considerations for using the CMPFILDTA command on page 412. You
should also read Specifying CMPFILDTA parameter values on page 416 for
additional information about parameters and values that you can specify.
To perform a basic data comparison, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
To compare data by file name only, specify *NONE and continue with the next
step.
To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see Object selection for
Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, accept *NONE to indicate that no repair action is
done.
7. At the Process while active prompt, specify *NO to indicate that active processing
technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
12. At the Report type prompt, do one of the following:
If you want all compared objects to be included in the report, accept the
default.
If you only want objects with detected differences to be included in the report,
specify *DIF.
If you want to include the member details and relative record number (RRN) of
the first 1,000 objects that have differences, specify *RRN.
Notes:
The *RRN value can only be used when *NONE is specified for the Repair
on system prompt and *OUTFILE is specified for the Output prompt.
The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN
can help resolve situations where a discrepancy is known to exist but you are
unsure which system contains the correct data. This value provides the
information that enables you to display the specific records on the two
systems and determine the system on which the file should be repaired.
13. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 18.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Outfile prompt, you must select *SYS2 for the
System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.
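Entered directly rather than through the prompts, the request above corresponds to
a command like the following sketch. The data group name is illustrative; keyword
names other than REPAIR, ACTIVE, and STATUS are not shown and should be
confirmed by prompting the command:
  CMPFILDTA DGDFN(MYDG) REPAIR(*NONE) ACTIVE(*NO) STATUS(*ACTIVE)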
Comparing and repairing file member data - basic procedure
You can use the CMPFILDTA command to repair data on the local or remote system.
Before you begin, see the recommendations, restrictions, and security considerations
described in Considerations for using the CMPFILDTA command on page 412. You
should also read Specifying CMPFILDTA parameter values on page 416 for
additional information about parameters and values that you can specify.
To compare and repair data, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
To compare data by file name only, specify *NONE and continue with the next
step.
To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see Object selection for
Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, specify *SYS1, *SYS2, *LOCAL, *TGT, *SRC, or
the system definition name to indicate the system on which repair action should
be performed.
Note: *TGT and *SRC are only valid if you are comparing files defined to a data
group. *SRC is not valid if active processing is in effect.
7. At the Process while active prompt, specify *NO to indicate that active processing
technology should not be used in the comparison.
8. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
9. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
10. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
11. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
12. At the Report type prompt, do one of the following:
If you want all compared objects to be included in the report, accept the
default.
If you only want objects with detected differences to be included in the report,
specify *DIF.
13. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 18.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
14. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
15. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
16. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Outfile prompt, you must select *SYS2 for the
System to receive output prompt.
17. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
18. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter.
19. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
20. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
21. To start the comparison, press Enter.
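A comparable direct command request for this procedure, repairing on the target
system without active processing, might look like this sketch (data group name
illustrative; prompt the command to confirm keyword names):
  CMPFILDTA DGDFN(MYDG) REPAIR(*TGT) ACTIVE(*NO) STATUS(*ACTIVE)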
Comparing and repairing file member data - members on
hold (*HLDERR)
Members that are being held due to error (*HLDERR) can be repaired with the
Compare File Data (CMPFILDTA) command during active processing. When
members in *HLDERR status are processed, the CMPFILDTA command works
cooperatively with the database apply (DBAPY) process to compare and repair the
members and, when possible, restore them to an active state.
Before you begin, see the recommendations, restrictions, and security considerations
described in Considerations for using the CMPFILDTA command on page 412. You
should also read Specifying CMPFILDTA parameter values on page 416 for
additional information about parameters and values that you can specify.
The following procedure repairs a member without transmitting the entire member. As
such, this method is generally faster than other methods of repairing members in
*HLDERR status that transmit the entire member or file. However, if significant activity
has occurred on the source system that has not been replicated on the target system,
it may be faster to synchronize the member using the Synchronize Data Group File
Entry (SYNCDGFE) command.
To repair a member with a status of *HLDERR, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, you must specify a data group name.
Note: If you want to compare data for all files defined by the data group file
entries for a particular data group definition, skip to Step 5.
4. At the File prompts, you can optionally specify elements for one or more object
selectors that act as filters to the files defined to the data group indicated in
Step 3. For more information, see Object selection for Compare and Synchronize
commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. Press Enter.
Note: The System 2 file and System 2 library values are ignored when a data
group is specified on the Data group definition prompts.
5. At the Repair on system prompt, specify *TGT to indicate that repair action be
performed on the target system.
6. At the Process while active prompt, specify *YES to indicate that active
processing technology should be used in the comparison.
7. At the File entry status prompt, specify *HLDERR to process members being held
due to error only.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 15.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
11. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
12. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
13. At the System to receive output prompt, specify the system on which the output
should be created.
14. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
15. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter.
16. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
17. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
18. To compare and repair the file, press Enter.
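A comparable direct command request for this procedure might look like the
following sketch (data group name illustrative):
  CMPFILDTA DGDFN(MYDG) REPAIR(*TGT) ACTIVE(*YES) STATUS(*HLDERR)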
Comparing file member data using active processing
technology
You can set the CMPFILDTA command to use active processing technology when a
data group is specified on the command.
Before you begin, see the recommendations, restrictions, and security considerations
described in Considerations for using the CMPFILDTA command on page 412. You
should also read Specifying CMPFILDTA parameter values on page 416 for
additional information about parameters and values that you can specify.
Note: Do not compare data using active processing technology if the apply process
is 180 seconds or more behind, or has exceeded a threshold limit.
To compare data using the active processing, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see Object selection for
Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, accept the defaults.
f. Press Enter.
5. At the Repair on system prompt, specify *TGT to indicate that repair action be
performed on the target system of the data group.
6. At the Process while active prompt, specify *YES or *DFT to indicate that active
processing technology be used in the comparison. Since a data group is specified
on the Data group definition prompts, *DFT will render the same results as *YES.
7. At the File entry status prompt, specify *ACTIVE to process only those file
members that are active.
8. At the System 1 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 1. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 1.
Note: This parameter is ignored when a data group definition is specified.
9. At the System 2 ASP group prompt, accept the default if no objects from any ASP
group are to be compared on system 2. Otherwise, specify the name of the ASP
group that contains objects to be compared on system 2.
Note: This parameter is ignored when a data group definition is specified.
10. At the Subsetting option prompt, specify *ALL to select all data and to indicate
that no subsetting is performed.
11. At the Report type prompt, do one of the following:
If you want all compared objects to be included in the report, accept the
default.
If you only want objects with detected differences to be included in the report,
specify *DIF.
12. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 17.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
13. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
14. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
15. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *OUTFILE was specified on the Outfile prompt, it is recommended that
you select *SYS2 for the System to receive output prompt.
16. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used when the command is invoked from outside of
shipped audits. When used as part of shipped audits, the default value is *OMIT
since the results are already placed in an outfile.
17. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
18. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
19. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
20. To start the comparison, press Enter.
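For example, an equivalent request entered directly on a command line might look
like the following sketch. The parameter keywords and the selector element order
shown here (FILE, REPAIR, ACTIVE, STATUS, OUTPUT) are assumptions inferred
from the prompt names, and MYDG, SYS1, SYS2, and MYLIB/MYFILE are
placeholder names; prompt the command with F4 to confirm the actual keywords:
/* Keywords below are assumed from the prompt text; verify with F4. */
installation-library-name/CMPFILDTA DGDFN(MYDG SYS1 SYS2)
    FILE((MYLIB/MYFILE *ALL)) REPAIR(*TGT) ACTIVE(*YES)
    STATUS(*ACTIVE) OUTPUT(*PRINT)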
Comparing file member data using subsetting options
You can use the CMPFILDTA command to audit your entire database over a number
of days.
Before you begin, see the recommendations, restrictions, and security considerations
described in Considerations for using the CMPFILDTA command on page 412. You
should also read Specifying CMPFILDTA parameter values on page 416 for
additional information about parameters and values that you can specify.
Note: Do not compare data using active processing technology if the apply process
is 180 seconds or more behind, or has exceeded a threshold limit.
To compare data using the subsetting options, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 7
(Compare file data) and press Enter.
3. The Compare File Data (CMPFILDTA) command appears. At the Data group
definition prompts, do one of the following:
To compare data for all files defined by the data group file entries for a
particular data group definition, specify the data group name and skip to
Step 6.
To compare data by file name only, specify *NONE and continue with the next
step.
To compare a subset of files defined to a data group, specify the data group
name and continue with the next step.
4. At the File prompts, you can specify elements for one or more object selectors that
either identify files to compare or that act as filters to the files defined to the data
group indicated in Step 3. For more information, see Object selection for
Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the File and library prompts, specify the name or the generic value you want.
b. At the Member prompt, accept *ALL or specify a member name to compare a
particular member within a file.
c. At the Object attribute prompt, accept *ALL to compare the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, specify the value you want.
e. At the System 2 file and System 2 library prompts, if the file and library names
on system 2 are equal to system 1, accept the defaults. Otherwise, specify the
name of the file and library to which files on the local system are compared.
Note: The System 2 file and System 2 library values are ignored if a data
group is specified on the Data group definition prompts.
f. Press Enter.
5. The System 2 parameter prompt appears if you are comparing files not defined to
a data group. If necessary, specify the name of the remote system to which files
on the local system are compared.
6. At the Repair on system prompt, specify a value if you want repair action
performed.
Note: To process members in *HLDERR status, you must specify *TGT. See
Step 8.
7. At the Process while active prompt, specify whether active processing technology
should be used in the comparison.
Notes:
To process members in *HLDERR status, you must specify *YES. See
Step 8.
If you are comparing files associated with a data group, *DFT uses active
processing. If you are comparing files not associated with a data group,
*DFT does not use active processing.
Do not compare data using active processing technology if the apply process
is 180 seconds or more behind, or has exceeded a threshold limit.
8. At the File entry status prompt, you can select files with specific statuses for
compare and repair processing. Do one of the following:
a. To process active members only, specify *ACTIVE.
b. To process both active members and members being held due to error
(*ACTIVE and *HLDERR), specify the default value *ALL.
c. To process members being held due to error only, specify *HLDERR.
Note: When *ALL or *HLDERR is specified for the File entry status prompt,
*TGT must also be specified for the Repair on system prompt (Step 6)
and *YES must be specified for the Process while active prompt
(Step 7).
9. At the Subsetting option prompt, you must specify a value other than *ALL to use
additional subsetting. Do one of the following:
To compare a fixed range of data, specify *RANGE then press Enter to see
additional prompts. Skip to Step 10.
To define how many subsets should be established, how member data is
assigned to the subsets, and which range of subsets to compare, specify
*ADVANCED and press Enter to see additional prompts. Skip to Step 11.
To indicate that only data specified on the Records at end of file prompt is
compared, specify *ENDDTA and press Enter to see additional prompts. Skip
to Step 12.
10. At the Subset range prompts, do the following:
a. At the First record prompt, specify the relative record number of the first record
to compare in the range.
b. At the Last record prompt, specify the relative record number of the last record
to compare in the range.
c. Skip to Step 12.
11. At the Advanced subset options prompts, do the following:
a. At the Number of subsets prompt, specify the number of approximately equal-
sized subsets to establish. Subsets are numbered beginning with 1.
b. At the Interleave prompt, specify the interleave factor. In most cases, the
default *CALC is highly recommended.
c. At the First subset prompt, specify the first subset in the sequence of subsets
to compare.
d. At the Last subset prompt, specify the last subset in the sequence of subsets to
compare.
12. At the Records at end of file prompt, specify the number of records at the end of
the member to compare. These records are compared regardless of other
subsetting criteria.
Note: If *ENDDTA is specified on the Subsetting option prompt, you must specify
a value other than *NONE.
13. At the Report type prompt, do one of the following:
If you want all compared objects to be included in the report, accept the
default.
If you only want objects with detected differences to be included in the report,
specify *DIF.
If you want to include the member details and relative record number (RRN) of
the first 1,000 objects that have differences, specify *RRN.
Notes:
The *RRN value can only be used when *NONE is specified for the Repair
on system prompt and *OUTFILE is specified for the Output prompt.
The *RRN value outputs to a unique outfile (MXCMPFILR). Specifying *RRN
can help resolve situations where a discrepancy is known to exist but you are
unsure which system contains the correct data. This value provides the
information that enables you to display the specific records on the two
systems and determine the system on which the file should be repaired.
14. At the Output prompt, do one of the following:
To generate spooled output that is printed, accept the default, *PRINT. Press
Enter and continue with the next step.
To generate an outfile and spooled output that is printed, specify *BOTH. Press
Enter and continue with the next step.
If you do not want to generate output, specify *NONE. Press Enter and skip to
Step 19.
To generate an outfile, specify *OUTFILE. Press Enter and continue with the
next step.
15. At the File to receive output prompts, specify the file and library to receive the
output. (Press F1 (Help) to see the name of the supplied database file.)
16. At the Output member options prompts, do the following:
a. At the Member to receive output prompt, specify the name of the database file
member to receive the output of the command.
b. At the Replace or add prompt, specify whether new records should replace
existing file members or be added to the existing list.
17. At the System to receive output prompt, specify the system on which the output
should be created.
Note: If *YES is specified on the Process while active prompt and *OUTFILE
was specified on the Outfile prompt, you must select *SYS2 for the
System to receive output prompt.
18. At the Object difference messages prompt, specify whether you want detail
messages placed in the job log. The value *INCLUDE places detail messages in
the job log, and is the default used outside of shipped rules. When used as part of
shipped rules, the default value is *OMIT since the results are already placed in
an outfile.
19. At the Submit to batch prompt, do one of the following:
If you do not want to submit the job for batch processing, specify *NO and
press Enter to start the comparison.
To submit the job for batch processing, accept the default. Press Enter and
continue with the next step.
20. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
21. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
22. To start the comparison, press Enter.
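For example, to audit only a fixed range of records in a large member as part of a
multi-day comparison, a request might resemble the following sketch. The SUBSET
and RANGE keywords and the selector format are assumptions based on the prompt
names, and all object names are placeholders:
/* Keywords below are assumed from the prompt text; verify with F4. */
installation-library-name/CMPFILDTA DGDFN(MYDG SYS1 SYS2)
    FILE((MYLIB/MYFILE *ALL)) REPAIR(*NONE) SUBSET(*RANGE)
    RANGE(1 1000000) OUTPUT(*PRINT)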
CHAPTER 20 Synchronizing data between
systems
This chapter contains information about support provided by MIMIX commands for
synchronizing data between two systems. The data that MIMIX replicates must be
synchronized on several occasions.
During initial configuration of a data group, you need to ensure that the data to be
replicated is synchronized between both systems defined in a data group.
If you change the configuration of a data group to add new data group entries, the
objects must be synchronized.
You may also need to synchronize a file or object if an error occurs that causes
the two systems to no longer be synchronized.
The automatic recovery features of MIMIX AutoGuard also use synchronize
commands to recover differences detected during replication and audits. If
automatic recovery policies are disabled, you may need to use synchronize
commands to correct a file or object in error or to correct differences detected by
audits or compare commands.
The synchronize commands provided with MIMIX can be loosely grouped by common
characteristics and the level of function they provide. Topic Considerations for
synchronizing using MIMIX commands on page 445 describes subjects that apply to
more than one group of commands, such as the maximum size of an object that can
be synchronized, how large objects are handled, and how user profiles are
addressed.
Initial synchronization: Initial synchronization can be performed manually with a
variety of MIMIX and IBM commands, or by using the Synchronize Data Group
(SYNCDG) command. The SYNCDG command is intended especially for performing
the initial synchronization of one or more data groups and uses the auditing and
automatic recovery support provided by MIMIX AutoGuard. The command can be
long-running. For information about initial synchronization, see these topics:
Performing the initial synchronization on page 454 describes how to establish a
synchronization point and identifies other key information.
Environments using MIMIX support for IBM WebSphere MQ have additional
requirements for the initial synchronization of replicated queue managers. For
more information, see the MIMIX for IBM WebSphere MQ book.
Synchronize commands: The commands Synchronize Object (SYNCOBJ),
Synchronize IFS Object (SYNCIFS), and Synchronize DLO (SYNCDLO) provide
robust support in MIMIX environments, for synchronizing library-based objects, IFS
objects, and DLOs, as well as their associated object authorities. Each command has
considerable flexibility for selecting objects associated with or independent of a data
group. Additionally, these commands are often called by other functions, such as by
the automatic recovery features of MIMIX AutoGuard and by options to synchronize
objects identified in tracking entries used with advanced journaling. For additional
information, see:
About MIMIX commands for synchronizing objects, IFS objects, and DLOs on
page 449
About synchronizing tracking entries on page 453
Synchronize Data Group Activity Entry: The Synchronize DG Activity Entry
(SYNCDGACTE) command provides the ability to synchronize library-based objects,
IFS objects, and DLOs that are associated with data group activity entries which have
specific status values. The contents of the object and its attributes and authorities are
synchronized. For additional information, see About synchronizing data group activity
entries (SYNCDGACTE) on page 450.
Synchronize Data Group File Entry: The Synchronize DG File Entry (SYNCDGFE)
command provides the means to synchronize database files associated with a data
group by data group file entries. Additional options provide the means to address
triggers, referential constraints, logical files, and related files. For more information
about this command, see About synchronizing file entries (SYNCDGFE command)
on page 451.
Send Network commands: The Send Network Object (SNDNETOBJ), Send
Network IFS Object (SNDNETIFS), and Send Network DLO (SNDNETDLO)
commands support fewer usage options and usability benefits than the Synchronize
commands. These commands may require multiple invocations per library, path, or
directory, respectively. These commands do not support synchronizing based on a
data group name.
Procedures: The procedures in this chapter are for commands that are accessible
from the MIMIX Compare, Verify, and Synchronize menu. Typically, when you need to
synchronize individual items in your configuration, the best approach is to use the
options provided on the displays where they appear. The options call
the appropriate command and, in many cases, pre-select some of the fields. The
following procedures are included:
Synchronizing database files on page 460
Synchronizing objects on page 462
Synchronizing IFS objects on page 466
Synchronizing DLOs on page 470
Synchronizing data group activity entries on page 473
Synchronizing tracking entries on page 475
Sending library-based objects on page 476
Sending IFS objects on page 478
Sending DLO objects on page 479
Considerations for synchronizing using MIMIX commands
For discussion purposes, the synchronize commands are grouped as follows:
Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO)
Synchronize Data Group Activity Entry (SYNCDGACTE)
Synchronize Data Group File Entry (SYNCDGFE)
The following subtopics apply to more than one group of commands. Before you
synchronize, you should be aware of the information in the following topics:
Limiting the maximum sending size on page 445
Synchronizing user profiles on page 445
Synchronizing large files and objects on page 447
Status changes caused by synchronizing on page 447
Synchronizing objects in an independent ASP on page 448
Limiting the maximum sending size
The Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO) and the
Synchronize Data Group File Entry (SYNCDGFE) command provide the ability to limit
the size of files or objects transmitted during synchronization with the Maximum
sending size (MAXSIZE) parameter. By default, no maximum value is specified. You
can also specify the value *TFRDFN to use the threshold size from the transfer
definition associated with the data group, or specify a value between 1 and
9,999,999 megabytes (MB). (To preserve behavior prior to changes made in V4R4
service pack SPC05SP4, specify *TFRDFN.) On the SYNCDGFE command, the
value *TFRDFN is only allowed when the Sending mode (METHOD) parameter
specifies *SAVRST.
When automatic recovery actions initiate a Synchronize or SYNCDGFE command,
the policies in effect determine the value used for the command's MAXSIZE
parameter. The Set MIMIX Policies (SETMMXPCY) command sets policies for
automatic recovery actions and for the synchronize threshold used by the commands
MIMIX invokes to perform recovery actions. When any of the automatic recovery
policies are enabled (DBRCY, OBJRCY, or AUDRCY), the value of the Sync.
threshold size (SYNCTHLD) policy is used for the MAXSIZE value on the command.
You can adjust the SYNCTHLD policy value for the installation or optionally set a
value for a specific data group.
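For example, the following sketch adjusts the synchronize threshold for one data
group and limits a single synchronize request to 500 MB. The SETMMXPCY,
SYNCTHLD, DGDFN, and MAXSIZE names are described above, but the exact form
for scoping SETMMXPCY to a data group is an assumption, and the values shown
are placeholders:
/* The DGDFN scoping on SETMMXPCY is assumed; verify with F4. */
installation-library-name/SETMMXPCY DGDFN(MYDG SYS1 SYS2) SYNCTHLD(500)
installation-library-name/SYNCOBJ DGDFN(MYDG SYS1 SYS2) MAXSIZE(500)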
Synchronizing user profiles
User profile objects (*USRPRF) can be synchronized explicitly or implicitly.
The Synchronize commands (SYNCOBJ, SYNCIFS, and SYNCDLO) and the Send
Network Objects (SNDNETOBJ) command can synchronize user profiles either
implicitly or explicitly. The following information describes slight variations in
processing.
Synchronizing user profiles with SYNCnnn commands
The SYNCOBJ command explicitly synchronizes user profiles when you specify
*USRPRF for the object type on the command. The status of the user profile on the
target system is affected as follows:
If you specified a data group and a user profile which is configured for replication,
the status of the user profile on the target system is the value specified in the
configured data group object entry.
If you specified a user profile but did not specify a data group, the following
occurs:
If the user profile exists on the target system, its status on the target system
remains unchanged.
If the user profile does not exist on the target system, it is synchronized and its
status on the target system is set to *DISABLED.
When synchronizing other object types, the SYNCOBJ, SYNCIFS, and SYNCDLO
commands implicitly synchronize user profiles associated with the object if they do
not exist on the target system. Although only the requested object type, such as
*PGM, is specified on these commands, the owning user profile, the primary group
profile, and user profiles that have private authorities to an object are implicitly
synchronized, as follows:
When the Synchronize command specifies a data group and that data group has
a data group object entry which includes the user profile, the object and the user
profile are synchronized. The status of the user profile on the target system is set
to match the value from the data group object entry.
If a data group object entry excludes the user profile from replication, the object is
synchronized and its owner is changed to the default owner indicated in the data
group definition. The user profile is not synchronized.
When the Synchronize command specifies a data group and that data group does
not have a data group object entry for the user profile, the object and the
associated user profile are synchronized. The status of the user profile on the
target system is set to *DISABLED.
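For example, a request to explicitly synchronize a single user profile defined to a data
group might look like the following sketch. The OBJ selector keyword and element
order are assumptions inferred from the command prompts; MYDG, SYS1, SYS2,
and JSMITH are placeholder names:
/* The selector keyword and element order are assumed; verify with F4. */
installation-library-name/SYNCOBJ DGDFN(MYDG SYS1 SYS2)
    OBJ((QSYS/JSMITH *USRPRF))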
Synchronizing user profiles with the SNDNETOBJ command
The Send Network Objects (SNDNETOBJ) command explicitly synchronizes user
profiles when you specify *USRPRF for the object type on the command. The status
of the user profile on the target system is affected as follows:
If the user profile exists on the target system, its status on the target system
remains unchanged.
If the user profile does not exist on the target system, it is synchronized and its
status on the target system is set to *DISABLED.
When synchronizing other object types, this command implicitly synchronizes user
profiles associated with the object if they do not exist on the target system. Although
only the requested object type, such as *PGM, is specified on the command, the
owning user profile, the primary group profile, and user profiles that have private
authorities to an object are implicitly synchronized. The object and associated user
profiles are synchronized. The status of the user profile on the target system is set to
*DISABLED.
Missing system distribution directory entries automatically added
When a missing user profile is detected during replication or synchronization of an
object, MIMIX automatically adds any missing system distribution directory entries for
user profiles. The synchronize (SYNCnnn) and the SNDNETOBJ commands provide
this capability.
If replication or a synchronization request determines that a user profile is missing on
the target system and a system directory entry exists on the source system for that
user profile, MIMIX adds the system distribution directory entry for the user profile on
the target system and specifies these values:
User ID: same value as retrieved from the source system
Description: same value as retrieved from the source system
Address: local-system name
User profile: user-profile name
All other directory entry fields are blank
Synchronizing large files and objects
When configured for advanced journaling, large objects (LOBs) can be synchronized
through the user (database) journal. You can synchronize a database file that
contains LOB data using the Synchronize Data Group File Entry (SYNCDGFE)
command.
If advanced journaling is not used in your environment, you may want to consider
synchronizing large files or objects (over 1 GB) outside of MIMIX. During traditional
synchronization, large files or objects can negatively impact performance by
consuming too much bandwidth. Certain commands for synchronizing provide the
ability to limit the size of files or objects transmitted during synchronization. See
Limiting the maximum sending size on page 445 for more information.
On certain commands, it is possible to control the size of files and objects sent to
another system. The Threshold size (THLDSIZE) parameter on the transfer definition
can be used to limit the size of objects transmitted with the Send Network Object
commands.
Status changes caused by synchronizing
In some circumstances the Synchronize Data Group Activity Entry (SYNCDGACTE)
command changes the status of activity entries when the command completes. For
additional details, see About synchronizing data group activity entries
(SYNCDGACTE) on page 450.
The Synchronize commands (SYNCOBJ, SYNCIFS and SYNCDLO) do not change
the status of activity entries associated with the objects being synchronized. Activity
entries retain the same status after the command completes.
Note: The SYNCIFS command will change the status of an activity entry for an
IFS object configured for advanced journaling.
When advanced journaling is configured, each replicated activity has associated
tracking entries. When you use the SYNCOBJ or SYNCIFS commands to
synchronize an object that has a corresponding tracking entry, the status of the
tracking entry will change to *ACTIVE upon successful completion of the
synchronization request. If the synchronization is not successful, the status of the
tracking entry will remain in its original status or have a status of *HLD. If the data
group is not active, the status of the tracking entry will be updated once the data
group is restarted.
Synchronizing objects in an independent ASP
When synchronizing data that is located in an independent ASP, be aware of the
following:
In order for MIMIX to access objects located in an independent ASP, do one of the
following on the Synchronize Object (SYNCOBJ) command:
Specify the data group definition.
If no data group is specified, you must specify values for the System 1 ASP
group or device, System 2 ASP device number, and System 2 ASP device
name parameters.
In order for the Send Network Object (SNDNETOBJ) command to access objects
that are located in an independent auxiliary storage pool (ASP) on the source
system, you must first use the IBM command Set ASP Group (SETASPGRP) on
the local system before using the SNDNETOBJ command.
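For example, assuming an independent ASP group named PAYROLL (a placeholder
name), you would set the job's ASP group before sending objects:
SETASPGRP ASPGRP(PAYROLL)
After the SNDNETOBJ request completes, you can use SETASPGRP again to
restore the job's previous ASP group.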
About MIMIX commands for synchronizing objects, IFS
objects, and DLOs
The Synchronize Object (SYNCOBJ), Synchronize IFS (SYNCIFS), and Synchronize
DLO (SYNCDLO) commands provide versatility for synchronizing objects and their
authority attributes.
Where to run: The synchronize commands can be run from either system. However,
if you run these commands from a target system, you must specify the name of a data
group to avoid overwriting the objects on the source system.
Identifying what to synchronize: On each command, you can identify objects to
synchronize by specifying a data group, a subset of a data group, or by specifying
objects independently of a data group.
When you specify a data group, its source system determines the objects to
synchronize. The objects to be synchronized by the command are the same as
those identified for replication by the data group. For example, specifying a data
group on the SYNCOBJ command, will synchronize the same library-based
objects as those configured for replication by the data group.
If you specify a data group as well as specify additional object information in
command parameters, the additional parameter information is used to filter the list
of objects identified for the data group.
When no data group is specified, the local system becomes the source system
and a target system must be identified. The list of objects to synchronize is
generated on the local system. For more information about the object selection
criteria used when no data group is specified on these commands, see Object
selection for Compare and Synchronize commands on page 372.
Each command has a Synchronize authorities parameter to indicate whether authority
attributes are synchronized. By default, the object and all authority-related attributes
are synchronized. You can also synchronize only the object or only the authority
attributes of an object. Authority attributes include ownership, authorization list,
primary group, public and private authorities.
When you use the SYNCOBJ command to synchronize only the authorities for an
object without specifying a data group name, the command could fail if any files
processed by the command are cooperatively processed by an active data group
and the database apply job has a lock on those files.
When to run: Each command can be run whether the data group is in an active or
an inactive state. Using the SYNCOBJ, SYNCIFS, and SYNCDLO commands during
off-peak usage or when the objects being synchronized are in a quiesced state
reduces contention for object locks.
When using the SYNCIFS command for a data group configured for advanced
journaling, the data group can be active but it should not have a backlog of
unprocessed entries.
Additional parameters: On each command, the following parameters provide
additional control of the synchronization process.
The Save active parameter provides the ability to save the object in an active
environment using IBM's save while active support. Values supported are the
same as those used in related IBM commands.
The Save active wait time parameter specifies the amount of time to wait for a
commit boundary or for a lock on an object. If a lock is not obtained in the
specified time, the object is not saved. If a commit boundary is not reached in the
specified time, the save operation ends and the synchronization attempt fails.
The Maximum sending size (MB) parameter specifies the maximum size that an
object can be in order to be synchronized. For more information, see Limiting the
maximum sending size on page 445.
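For example, a sketch of a synchronize request that uses save-while-active support
and limits object size might look like the following. The SAVACT, SAVACTWAIT, and
MAXSIZE keywords are inferred from the parameter names described above, and all
values and names are placeholders:
/* Keywords inferred from the parameter descriptions above; verify with F4. */
installation-library-name/SYNCIFS DGDFN(MYDG SYS1 SYS2)
    SAVACT(*YES) SAVACTWAIT(120) MAXSIZE(1024)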
About synchronizing data group activity entries (SYNCDGACTE)
The Synchronize Data Group Activity Entry (SYNCDGACTE) command supports the
ability to synchronize library-based objects, IFS objects, or DLOs associated with data
group activity entries. Activity entries whose status falls in the following categories can
be synchronized: *ACTIVE, *COMPLETED, *DELAYED, or *FAILED. The contents of
the object, its attributes, and its authorities are synchronized between the source and
target systems.
Note: From the 5250 emulator, data group activity and the status category of the
represented object are listed on the Work with Data Group Activity display
(WRKDGACT command). The specific status of individual activity entries
appear on the Work with DG Activity Entries display (WRKDGACTE
command).
The data group can either be active or inactive during the synchronization request.
If the item you are synchronizing has multiple activity entries with varying statuses (for
example, an entry with a status of completed, followed by a failed entry, and
subsequent delayed entries), the SYNCDGACTE command will find the first non-
completed activity entry and synchronize it. The same SYNCDGACTE request will
then find the next non-completed entry and synchronize it. The SYNCDGACTE
request will continue to synchronize these non-completed entries until all entries for
that object have been synchronized.
Any existing active, delayed, or failed activity entries for the specified object are
processed and set to completed by synchronization (CZ) when the synchronization
request completes successfully.
If all activity entries for the specified object are already completed, when the
synchronization request completes successfully only the status of the most recent
completed entry is changed from complete (CP) to completed by synchronization
(CZ).
Not supported: Spooled files and cooperatively processed files are not eligible to be
synchronized using the SYNCDGACTE command.
Status changes during synchronization: During synchronization processing, if
the data group is active, the status of the activity entries being synchronized is set to
a status of pending synchronization (PZ) and then to pending completion (PC).
When the synchronization request completes, the status of the activity entries is set to
either completed by synchronization (CZ) or to failed synchronization (FZ).
If the data group is inactive, the status of the activity entries remains either pending
synchronization (PZ) or pending completion (PC) when the synchronization request
completes. When the data group is restarted, the status of the activity entries is set to
either completed by synchronization (CZ) or to failed synchronization (FZ).
About synchronizing file entries (SYNCDGFE command)
The Synchronize Data Group File Entry (SYNCDGFE) command synchronizes
database files associated with a data group by data group file entries.
Active data group required: Because the SYNCDGFE command runs through a
database apply job, the data group must be active when the command is used.
Choice of what to synchronize: The Sending mode (METHOD) parameter provides
granularity in specifying what is synchronized. Table 63 describes the choices.
Files with triggers: The SYNCDGFE command provides the ability to optionally
disable triggers during synchronization processing and enable them again when
processing is complete. The Disable triggers on file (DSBTRG) parameter specifies
whether the database apply process (used for synchronization) disables triggers
when processing a file.
The default value *DGFE uses the data group file entry to determine whether triggers
should be disabled. The value *YES will disable triggers on the target system during
synchronization.
If configuration options for the data group (or optionally for a data group file entry)
allow MIMIX to replicate trigger-generated entries and disable triggers, you must
specify *DATA as the sending mode when synchronizing a file with triggers.
Table 63. Sending mode (METHOD) choices on the SYNCDGFE command.

*DATA     This is the default value. Only the physical file data is replicated
          using MIMIX Copy Active File processing. File attributes are not
          replicated using this method.
          If the file exists on the target system, MIMIX refreshes its contents.
          If the file format is different on the target system, the
          synchronization will fail. If the file does not exist on the target
          system, MIMIX uses save and restore operations to create the file on
          the target system and then uses copy active file processing to fill it
          with data from the file on the source system.

*ATR      Only the physical file attributes are replicated and synchronized.

*AUT      Only the authorities for the physical file are replicated and
          synchronized.

*SAVRST   The content and attributes are replicated using the IBM i save and
          restore commands. This method allows save-while-active operations.
          This method also has the capability to save associated logical files.
Including logical files: The Include logical files (INCLF) parameter allows you to
include any attached logical files in the synchronization request. This parameter is
only valid when *SAVRST is specified for the Sending mode prompt.
Physical files with referential constraints: Physical files with referential constraints
require a field in another physical file to be valid. When synchronizing physical files
with referential constraints, ensure all files in the referential constraint structure are
synchronized concurrently during a time of minimal activity on the source system.
Doing so will ensure the integrity of synchronization points.
Including related files: You can optionally choose whether the synchronization
request will include files related to the file specified by specifying *YES for the Include
related (RELATED) parameter. Related files are those physical files which have a
relationship with the selected physical file by means of one or more join logical files.
Join logical files are logical files attached to fields in two or more physical files.
The Include related (RELATED) parameter defaults to *NO. In some environments,
specifying *YES could result in a high number of files being synchronized and could
potentially strain available communications and take a significant amount of time to
complete.
A physical file being synchronized cannot be name mapped if it is not in the same
library as the logical file associated with it. Logical files may be mapped by using
object entries.
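For example, a sketch of a SYNCDGFE request that uses save/restore processing
and includes attached logical files follows. The METHOD, INCLF, and RELATED
keywords are described above; the FILE1 and LIB1 selector keywords and all object
names are assumptions:
/* The FILE1 and LIB1 keywords are assumed from the prompt text; verify with F4. */
installation-library-name/SYNCDGFE DGDFN(MYDG SYS1 SYS2)
    FILE1(MYFILE) LIB1(MYLIB) METHOD(*SAVRST)
    INCLF(*YES) RELATED(*NO)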
About synchronizing tracking entries
Tracking entries provide status of IFS objects, data areas, and data queues that are
replicated using MIMIX advanced journaling. Object tracking entries represent data
areas or data queues. IFS tracking entries represent IFS objects. IFS tracking entries
also track the file identifier (FID) of the object on the source and target systems.
You can synchronize the object represented by a tracking entry by using the
synchronize option available on the Work with DG Object Tracking Entries display or
the Work with DG IFS Tracking Entries display. For object tracking entries, the option
calls the Synchronize Object (SYNCOBJ) command. For IFS tracking entries, the
option calls the Synchronize IFS Object (SYNCIFS) command.
The contents, attributes, and authorities of the item are synchronized between the
source and target systems.
Notes:
Before starting data groups for the first time, any existing objects to be replicated
from the source system must be synchronized to the target system.
If tracking entries do not exist, you must create them by doing one of the following:
Change the data group IFS entry or object entry configuration as needed and
end and restart the data groups.
Load tracking entries using the Load DG IFS Tracking Entries (LODDGIFSTE)
or Load DG Obj Tracking Entries (LODDGOBJTE) commands. See Loading
tracking entries on page 257.
Tracking entries may not exist for existing IFS objects, data areas, or data queues
that have been configured for replication with advanced journaling since the last
start of the data group.
For status changes to be effective for a tracking entry that is being synchronized,
the data group must be active. When the apply session receives notification that
the object represented by the tracking entry is synchronized successfully, the
tracking entry status changes to *ACTIVE.
Performing the initial synchronization
Ensuring that data is synchronized before you begin replication is crucial to
successful replication. How you perform the initial synchronization can be influenced
by the available communications bandwidth, the complexity of describing the data,
the size of the data, as well as time.
Note: If you have configured or migrated a MIMIX configuration to use integrated
support for IBM WebSphere MQ, you must use the procedure Initial
synchronization for replicated queue managers in the MIMIX for IBM
WebSphere MQ book. Large IBM WebSphere MQ environments should plan
to perform this during off-peak hours.
Establish a synchronization point
Just before you start the initial synchronization, establish a known start point for
replication by changing journal receivers. The information gathered in this procedure
will be used when you start replication for the first time.
From the source system, do the following:
1. Quiesce your applications before continuing with the next step.
2. For each data group that will replicate from a user journal, use the following
command to change the user journal receiver. Record the new receiver names
shown in the posted message. On a command line, type:
installation-library-name/CHGDGRCV DGDFN(data-group-name) TYPE(*DB)
3. Change the system journal receiver and record the new receiver name shown in
the posted message. On a command line, type:
CHGJRN JRN(QAUDJRN) JRNRCV(*GEN)
4. When you synchronize the database files and objects between systems, record
the time at which you submit the synchronization requests as this information is
needed when determining the journal location at which to initially start replication.
Resources for synchronizing on page 455 identifies available options.
5. Identify the synchronization starting point in the source user journal. This
information will be needed when starting replication.
a. Specify the source user journal for library/journal_name, specify the date of the
first synchronize request for mm/dd/yyyy, and specify a time just before the first
synchronize request for hh:mm:ss in the following command:
DSPJRN JRN(library/journal_name) RCVRNG(*CURRENT)
FROMTIME('mm/dd/yyyy' 'hh:mm:ss')
Note: You can also specify values for the ENTTYP parameter to narrow the
search. Table 64 shows values which identify save actions associated
with synchronizing.
Table 64. Common values for using ENTTYP

Journaled Object Type    Journal Code    Common ENTTYP Values
File                     F               MS, SS
Data Area                E               ES, EW
Data Queue               Q               QX, QY
IFS object               B               FS, FW
b. Record the exact time and the sequence number of the journal entry
associated with the first synchronize request. Typically, a synchronize request
is represented by a journal entry for a save operation.
c. Type 5 (Display entire entry) next to the entry and press Enter.
d. Press F10 (Display only entry details).
e. The Display Journal Entry Details display appears. Page down to locate the
Receiver name. This should be the same name as recorded in Step 2.
6. Identify the synchronization starting point in the source system journal. This
information will be needed when starting replication.
a. Specify the date from Step 5a for mm/dd/yyyy and specify the time from
Step 5b for hh:mm:ss in the following command:
DSPJRN JRN(QSYS/QAUDJRN) RCVRNG(*CURRENT)
FROMTIME('mm/dd/yyyy' 'hh:mm:ss')
b. Record the sequence number associated with the first journal entry with the
specified time stamp.
c. Type 5 (Display entire entry) next to the entry and press Enter.
d. Press F10 (Display only entry details).
e. The Display Journal Entry Details display appears. Page down to locate the
Receiver name. This should be the same name as recorded in Step 3.
Resources for synchronizing
The available choices for synchronizing are, in order of preference:
IBM Save and Restore commands: IBM save and restore commands are best
suited for initial synchronization and are used when performing a manual
synchronization. While MIMIX SYNCDG, SYNC, and SNDNET commands can be
used, the communications bandwidth required for the size and quantity of objects
may exceed capacity.
SYNC commands: The Synchronize commands (SYNCOBJ, SYNCIFS,
SYNCDLO) should be your starting point. These commands provide significantly
more flexibility in object selection and also provide the ability to synchronize object
authorities. By specifying a data group on any of these commands, you can
synchronize the data defined by its data group entries.
You can also use the Synchronize Data Group File Entry (SYNCDGFE) to
synchronize database files and members. This command provides the ability to
choose between MIMIX copy active file processing and save/restore processing
and provides choices for handling trigger programs during synchronization.
If you have configured or migrated to integrated advanced journaling, follow the
SYNCIFS procedures for IFS objects, SYNCOBJ procedures for data areas and
data queues, and SYNCDGFE procedures for files containing LOB data. You can
also use options to synchronize objects associated with tracking entries from the
Work with DG IFS Trk. Entries display and the Work with DG Obj. Trk. Entries
display.
SYNCDG command: The SYNCDG command is intended especially for
performing the initial synchronization of one or more data groups by MIMIX
IntelliStart. The SYNCDG command synchronizes by using the auditing and
automatic recovery support provided by MIMIX AutoGuard. This command can be
long-running. Because this command requires that journaling and data group
replication processes be started before synchronization starts, it may not be
appropriate for some environments.
SNDNET commands: The Send Network commands (SNDNETIFS,
SNDNETDLO, SNDNETOBJ) support fewer options for selecting and specifying
multiple objects and do not provide a way to specify by data group. These
commands may require multiple invocations per path, folder, or library,
respectively.
This chapter (Synchronizing data between systems on page 443) includes
additional information about the MIMIX SYNC and SNDNET commands.
Using SYNCDG to perform the initial synchronization
This topic describes the procedure for performing the initial synchronization using the
Synchronize Data Group (SYNCDG) command prior to beginning replication. The
initial synchronization ensures that data is the same on each system and reduces the
time and complexity involved with starting replication for the first time.
The SYNCDG command utilizes the auditing and automatic recovery functions of
MIMIX AutoGuard to synchronize an enabled data group between the source
system and the target system. The SYNCDG command is intended to be used for
initial synchronization of a data group and can be used in other situations where data
groups are not synchronized. The SYNCDG command can only be run on the
management system, and only one instance of the command per data group can be
running at any time. This command submits a batch program that can run for several
days. The SYNCDG command can be performed automatically through MIMIX
IntelliStart.
Note: The SYNCDG command will not process a request to synchronize a data
group that is currently using the MIMIX CDP feature. This feature is in use if
a recovery window is configured or when a recovery point is set for a data
group. Also, do not configure a recovery window or set a recovery point if a
SYNCDG request is in progress for the data group. The MIMIX CDP feature
may not protect data under these circumstances.
Ensure the following conditions are met for each data group that you want to
synchronize, before running this command:
Apply any IBM PTFs (or their supersedes) associated with IBM i releases as
they pertain to your environment. Log in to Support Central and access the
Technical Documents page for a list of required and recommended IBM PTFs.
Journaling is started on the source system for everything defined to the data
group.
All replication processes are active.
The user ID submitting the SYNCDG has *MGT authority in product level
security if it is enabled for the installation.
No other audits (comparisons or recoveries) are in progress when the
SYNCDG is requested.
Collector services has been started.
If DLOs are identified for replication, before running the SYNCDG command,
ensure that the DLOs exist only on the source system.
While the synchronization is in progress, other audits for the data group are prevented
from running.
To perform the initial synchronization using the SYNCDG command
defaults
Do the following:
1. Use the command STRDG DGDFN(*ALL)
2. Type the command SYNCDG and press Enter. Specify the following values,
pressing F4 for valid options on each parameter:
Data group definition (DGDFN).
Job description (JOBD).
3. Press Enter to perform the initial synchronization.
4. Verify your configuration is using MIMIX AutoGuard. This step includes performing
audits to verify that journaling and other aspects of your environment are ready to
use. Audits automatically check for and attempt to correct differences found
between the source system and the target system. See Verifying the initial
synchronization on page 458 for more information.
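For example, after starting the data groups, a minimal request using the DGDFN
parameter shown in Step 2 might look like the following (MYDG, SYS1, and SYS2
are placeholder names; prompt with F4 to review the JOBD value):
installation-library-name/SYNCDG DGDFN(MYDG SYS1 SYS2)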
Verifying the initial synchronization
This procedure uses MIMIX AutoGuard to ensure your environment is ready to start
replication. Shipped policy settings for MIMIX allow audits to automatically attempt
recovery actions for any problems they detect. You should not use this procedure if
you have already synchronized your systems using the Synchronize Data Group
(SYNCDG) command or the automatic synchronization method in MIMIX IntelliStart.
The audits used in this procedure will:
Verify that journaling is started on the source and target systems for the items you
identified in the deployed replication patterns. Without journaling, replication will
not occur.
Verify that data is synchronized between systems. Audits will detect potential
problems with synchronization and attempt to automatically recover differences
found.
Do the following:
1. Check whether all necessary journaling is started for each data group. Enter the
following command:
installation-library-name/DSPDGSTS DGDFN(data-group-name) VIEW(*DBFETE)
On the File and Tracking Entry Status display, the File Entries column identifies
how many file entries were configured from your replication patterns and indicates
whether any file entries are not journaled on the source or target systems. If your
configuration permits user journal replication of IFS objects, data areas, or data
queues, the Tracking Entries columns provide similar information.
2. Use MIMIX AutoGuard to audit your environment. To access the audits, enter the
following command:
installation-library-name/WRKAUD
3. Each audit listed on the Work with Audits display is a unique combination of data
group and MIMIX rule. When verifying an initial configuration, you need to perform
a subset of the available audits for each data group in a specific order, shown in
Table 65. Do the following:
a. To change the number of active audits at any one time, enter the following
command:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY)
MAXACT(*NOMAX)
b. Use F18 (Subset) to subset the audits by the name of the rule you want to run.
c. Type a 9 (Run rule) next to the audit for each data group and press Enter.
Repeat Step 3b and Step 3c for each rule in Table 65 until you have started all the
listed audits for all data groups.
d. Reset the number of active audit jobs to values consistent with regular
auditing:
CHGJOBQE SBSD(MIMIXQGPL/MIMIXSBS) JOBQ(MIMIXQGPL/MIMIXVFY)
MAXACT(5)
4. Wait for all audits to complete. Some audits may take time to complete. Then
check the results and resolve any problems. You may need to change subsetting
values again so you can view all rule and data group combinations at once. On
the Work with Audits display, check the Audit Status column for the following
value:
*NOTRCVD - The comparison performed by the rule detected differences. Some
of the differences were not automatically recovered. Action is required. View
notifications for more information and resolve the problem.
Note: For more information about resolving reported problems, see Interpreting
audit results on page 568.
Table 65. Rules for initial validation, listed in the order to be performed.
Rule Name
1. #DGFE
2. #OBJATR
3. #FILATR
4. #IFSATR
5. #FILATRMBR
6. #DLOATR
Synchronizing database files
The procedures in this topic use the Synchronize DG File Entry (SYNCDGFE)
command to synchronize selected database files associated with a data group
between two systems. If you use this command when performing the initial
synchronization of a data group, use the procedure from the source system to send
database files to the target system.
You should be aware of the information in the following topics:
Considerations for synchronizing using MIMIX commands on page 445
About synchronizing file entries (SYNCDGFE command) on page 451.
To synchronize a database file between two systems using the SYNCDGFE
command defaults, do the following or use the alternative process described below:
1. From the Work with DG Definitions display, type 17 (File entries) next to the data
group to which the file you want to synchronize is defined and press Enter.
2. The Work with DG File Entries display appears. Type 16 (Sync DG file entry) next
to the file entry for the file you want to synchronize and press Enter.
Note: If you are synchronizing file entries as part of your initial configuration, you
can type 16 next to the first file entry and then press F13 (Repeat). When
you press Enter, all file entries will be synchronized.
Alternative Process:
You will need to identify the data group and data group file entry in this procedure. In
Step 8 and Step 9, you will need to make choices about the sending mode and trigger
support. For additional information, see About synchronizing file entries
(SYNCDGFE command) on page 451.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 41
(Synchronize DG File Entry) and press Enter.
3. The Synchronize DG File Entry (SYNCDGFE) display appears. At the Data group
definition prompts, specify the name of the data group to which the file is
associated.
4. At the System 1 file and Library prompts, specify the name of the database file
you want to synchronize and the library in which it is located on system 1.
5. If you want to synchronize only one member of a file, specify its name at the
Member prompt.
6. At the Data source prompt, ensure that the value matches the system that you
want to use as the source for the synchronization.
7. The default value *YES for the Release wait prompt indicates that MIMIX will hold
the file entry in a release-wait state until a synchronization point is reached. Then
it will change the status to active. If you want to hold the file entry for your
intervention, specify *NO.
8. At the Sending mode prompt, specify the value for the type of data to be
synchronized.
9. At the Disable triggers on file prompt, specify whether the database apply process
should disable triggers when processing the file. Accept *DGFE to use the value
specified in the data group file entry or specify another value. Skip to Step 14.
10. At the Save active prompt, accept *NO so that objects in use are not saved, or
specify another value.
11. At the Save active wait time prompt, specify the number of seconds to wait for a
commit boundary or a lock on the object before continuing the save.
12. At the Allow object differences prompt, accept the default or specify *YES to
indicate whether certain differences encountered during the restore of the object
on the target system should be allowed.
13. At the Include logical files prompt, accept the default or *NO to indicate whether
you want to include attached logical files when sending the file.
14. To change any of the additional parameters, press F10 (Additional parameters).
Verify that the values shown for Include related files, Maximum sending file size
(MB) and Submit to batch are what you want.
15. To synchronize the file, press Enter.
Synchronizing objects
The procedures in this topic use the Synchronize Object (SYNCOBJ) command to
synchronize library-based objects between two systems. The objects to be
synchronized can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
Considerations for synchronizing using MIMIX commands on page 445
About MIMIX commands for synchronizing objects, IFS objects, and DLOs on
page 449
To synchronize library-based objects associated with a data group
To synchronize objects between two systems that are identified for replication by data
group object entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42
(Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ)
command appears.
3. At the Data group definition prompts, specify the data group for which you want to
synchronize objects.
Note: If you run this command from a target system, you must specify the name
of a data group to avoid overwriting the objects on the source system.
4. To synchronize all objects identified by data group object entries for this data
group, skip to Step 5. To synchronize a subset of objects defined to the data
group, at the Object prompts specify elements for one or more object selectors to
act as filters to the objects defined to the data group. For more information, see
Object selection for Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the Object and library prompts, specify the name or the generic value you
want.
b. At the Object type prompt, accept *ALL or specify a specific object type to
synchronize.
c. At the Object attribute prompt, accept *ALL to synchronize the entire list of
supported attributes or press F4 to select from a list of attributes.
d. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 object and System 2 library prompts are ignored when a
data group is specified.
e. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not
saved. Or, specify another value.
7. At the Save active wait time prompt, specify the number of seconds to wait for a commit
boundary or a lock on the object before continuing the save.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
Note: When a data group is specified the following parameters are ignored:
System 1 ASP group or device, System 2 ASP device number, and
System 2 ASP device name.
9. Determine how the synchronize request will be processed. Choose one of the
following:
To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To start the synchronization, press Enter.
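The same request can be entered directly on a command line. The following is a
minimal sketch, not a definitive invocation: the data group name APPDG and library
APPLIB are hypothetical, and the keywords shown (DGDFN, OBJ, SYNCAUT, SAVACT,
BATCH) are inferred from the prompt names in this procedure, so prompt SYNCOBJ
with F4 to confirm them. The OBJ selector elements follow the prompt order in
Step 4 (object, library, type, attribute, include or omit); this example
synchronizes the *FILE objects in APPLIB that are defined to the data group:

    SYNCOBJ DGDFN(APPDG) +
            OBJ((*ALL APPLIB *FILE *ALL *INCLUDE)) +
            SYNCAUT(*YES) SAVACT(*NO) BATCH(*YES)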
To synchronize library-based objects without a data group
To synchronize objects between two systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 42
(Synchronize object) and press Enter. The Synchronize Object (SYNCOBJ)
command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the Object prompts, specify elements for one or more object selectors that
identify objects to synchronize.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For more information, see Object selection for Compare and
Synchronize commands on page 372.
For each selector, do the following:
a. At the Object and library prompts, specify the name or the generic value you
want.
b. At the Object type prompt, accept *ALL or specify a specific object type to
synchronize.
c. At the Object attribute prompt, accept *ALL to synchronize the entire list of
supported attributes or press F4 to see a valid list of attributes.
d. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
e. At the System 2 object and System 2 library prompts, if the object and library
names on system 2 are equal to the system 1 names, accept the defaults.
Otherwise, specify the name of the object and library on system 2 to which you
want to synchronize the objects.
f. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system to
which to synchronize the objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
Note: When you specify *ONLY and a data group name is not specified, the
command could fail if any files processed by this command are cooperatively
processed, the data group that contains these files is active, and the
database apply job has a lock on these files.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved
or specify another value.
8. At the Save active wait time prompt, specify the number of seconds to wait for a commit
boundary or a lock on the object before continuing the save.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
10. At the System 1 ASP group or device prompt, specify the name of the auxiliary
storage pool (ASP) group or device where objects configured for replication may
reside on system 1. Otherwise, accept the default to use the current job's ASP
group name.
11. At the System 2 ASP device number prompt, specify the number of the auxiliary
storage pool (ASP) where objects configured for replication may reside on system
2. Otherwise, accept the default to use the same ASP number from which the
object was saved (*SAVASP). Only the libraries in the system ASP and any basic
user ASPs from system 2 will be in the library name space.
12. At the System 2 ASP device name prompt, specify the name of the auxiliary
storage pool (ASP) device where objects configured for replication may reside on
system 2. Otherwise, accept the default to use the value specified for the system
1 ASP group or device (*ASPGRP1).
13. Determine how the synchronize request will be processed. Choose one of the
following:
To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter.
To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
14. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
15. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
16. To start the synchronization, press Enter.
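As a hedged command-line sketch of this procedure, the following synchronizes one
file from the local system to a remote system named SYSTEMB. All object and system
names are hypothetical; the SYS2 keyword appears on related MIMIX compare
commands and is assumed here as well, so verify every keyword with F4:

    SYNCOBJ DGDFN(*NONE) +
            OBJ((CUSTMAST APPLIB *FILE *ALL *INCLUDE)) +
            SYS2(SYSTEMB) SYNCAUT(*YES) SAVACT(*NO) BATCH(*YES)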
Synchronizing IFS objects
The procedures in this topic use the Synchronize IFS Object (SYNCIFS) command to
synchronize IFS objects between two systems. The IFS objects to be synchronized
can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
Considerations for synchronizing using MIMIX commands on page 445
About MIMIX commands for synchronizing objects, IFS objects, and DLOs on
page 449
To synchronize IFS objects associated with a data group
To synchronize IFS objects between two systems that are identified for replication by
data group IFS entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43
(Synchronize IFS object) and press Enter. The Synchronize IFS Object
(SYNCIFS) command appears.
3. At the Data group definition prompts, specify the data group for which you want to
synchronize objects.
Note: If you run this command from a target system, you must specify the name
of a data group to avoid overwriting the objects on the source system.
4. To synchronize all IFS objects identified by data group IFS entries for this data
group, skip to Step 5. To synchronize a subset of IFS objects defined to the data
group, at the IFS objects prompts specify elements for one or more object
selectors to act as filters to the objects defined to the data group. For more
information, see Object selection for Compare and Synchronize commands on
page 372.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the Object path name prompt, you can optionally accept *ALL or specify the
name or generic value you want.
Note: The IFS object path name can be used alone or in combination with FID
values. See Step 12.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
synchronize.
e. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 object path name and System 2 name pattern values are
ignored when a data group is specified.
f. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved
or specify another value.
7. If you chose values in Step 6 to save active objects, you can optionally specify
additional options at the Save active option prompt. Press F1 (Help) for additional
information.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the
following:
To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. Continue with Step 12.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To optionally specify a file identifier (FID) for the object on either system, do the
following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 1. Values for System 1 file identifier prompt can be used
alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 2. Values for System 2 file identifier prompt can be used
alone or in combination with the IFS object path name.
Note: For more information, see Using file identifiers (FIDs) for IFS objects on
page 284.
13. To start the synchronization, press Enter.
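A command-line sketch of this procedure follows; it synchronizes the IFS directory
/home/app/data and everything below it for a hypothetical data group named APPDG.
The keywords and the selector element order (path, subtree, pattern, type, include
or omit) are inferred from the prompts described above, so confirm them by
prompting SYNCIFS with F4:

    SYNCIFS DGDFN(APPDG) +
            OBJ(('/home/app/data' *ALL *ALL *ALL *INCLUDE)) +
            SYNCAUT(*YES) SAVACT(*NO) BATCH(*YES)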
To synchronize IFS objects without a data group
To synchronize IFS objects not associated with a data group between two systems,
do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 43
(Synchronize IFS object) and press Enter. The Synchronize IFS Object
(SYNCIFS) command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the IFS objects prompts, specify elements for one or more object selectors that
identify IFS objects to synchronize. You can specify as many as 300 object
selectors by using the + for more prompt for each selector. For more information,
see Object selection for Compare and Synchronize commands on page 372.
For each selector, do the following:
a. At the Object path name prompt, you can optionally accept *ALL or specify the
name or generic value you want.
Note: The IFS object path name can be used alone or in combination with FID
values. See Step 13.
b. At the Directory subtree prompt, accept *NONE or specify *ALL to define the
scope of IFS objects to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the IFS object path name.
d. At the Object type prompt, accept *ALL or specify a specific IFS object type to
synchronize.
e. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
f. At the System 2 object path name and System 2 name pattern prompts, if the
IFS object path name and name pattern on system 2 are equal to the system 1
names, accept the defaults. Otherwise, specify the path name and pattern on
system 2 to which you want to synchronize the IFS objects.
g. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on
which to synchronize the IFS objects.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved
or specify another value.
8. If you chose values in Step 7 to save active objects, you can optionally specify
additional options at the Save active option prompt. Press F1 (Help) for additional
information.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
10. Determine how the synchronize request will be processed. Choose one of the
following:
To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. Continue with Step 13.
11. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
12. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
13. To optionally specify a file identifier (FID) for the object on either system, do the
following:
a. At the System 1 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 1. Values for System 1 file identifier prompt can be used
alone or in combination with the IFS object path name.
b. At the System 2 file identifier prompt, specify the file identifier (FID) of the IFS
object on system 2. Values for System 2 file identifier prompt can be used
alone or in combination with the IFS object path name.
Note: For more information, see Using file identifiers (FIDs) for IFS objects on
page 284.
14. To start the synchronization, press Enter.
Synchronizing DLOs
The procedures in this topic use the Synchronize DLO (SYNCDLO) command to
synchronize document library objects (DLOs) between two systems. The DLOs to be
synchronized can be defined to a data group or can be independent of a data group.
You should be aware of the information in the following topics:
Considerations for synchronizing using MIMIX commands on page 445
About MIMIX commands for synchronizing objects, IFS objects, and DLOs on
page 449
To synchronize DLOs associated with a data group
To synchronize DLOs between two systems that are identified for replication by data
group DLO entries, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44
(Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO)
command appears.
3. At the Data group definition prompts, specify the data group for which you want to
synchronize DLOs.
Note: If you run this command from a target system, you must specify the name
of a data group to avoid overwriting the objects on the source system.
4. To synchronize all objects identified by data group DLO entries for this data group,
skip to Step 5. To synchronize a subset of objects defined to the data group, at
the Document library objects prompts specify elements for one or more object
selectors to act as filters to DLOs defined to the data group. For more information,
see Object selection for Compare and Synchronize commands on page 372.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For each selector, do the following:
a. At the DLO path name prompt, accept *ALL or specify the name or the generic
value you want.
b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the
scope of DLOs to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the DLO path name.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to
synchronize.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
Note: The System 2 DLO path name and System 2 DLO name pattern values
are ignored when a data group is specified.
g. Press Enter.
5. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
6. At the Save active prompt, accept *NO to specify that objects in use are not saved
or specify another value.
7. At the Save active wait time prompt, specify the number of seconds to wait for a lock on
the object before continuing the save.
8. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
9. Determine how the synchronize request will be processed. Choose one of the
following:
To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
10. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
11. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
12. To start the synchronization, press Enter.
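A command-line sketch of this procedure follows; it synchronizes all DLOs in a
hypothetical folder ACCTG and its subfolders for data group APPDG. The DLO keyword
and the selector element order (path, subtree, pattern, type, owner, include or
omit) are inferred from the prompts described above, so confirm them by prompting
SYNCDLO with F4:

    SYNCDLO DGDFN(APPDG) +
            DLO(('/ACCTG' *ALL *ALL *ALL *ALL *INCLUDE)) +
            SYNCAUT(*YES) SAVACT(*NO) BATCH(*YES)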
To synchronize DLOs without a data group
To synchronize DLOs between two systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize Menu, select option 44
(Synchronize DLO) and press Enter. The Synchronize DLO (SYNCDLO)
command appears.
3. At the Data group definition prompts, specify *NONE.
4. At the Document library objects prompts, specify elements for one or more object
selectors that identify DLOs to synchronize.
You can specify as many as 300 object selectors by using the + for more prompt
for each selector. For more information, see Object selection for Compare and
Synchronize commands on page 372.
For each selector, do the following:
a. At the DLO path name prompt, accept *ALL or specify the name or the generic
value you want.
b. At the Folder subtree prompt, accept *NONE or specify *ALL to define the
scope of DLOs to be processed.
c. At the Name pattern prompt, specify a value if you want to place an additional
filter on the last component of the DLO path name.
d. At the DLO type prompt, accept *ALL or specify a specific DLO type to
synchronize.
e. At the Owner prompt, accept *ALL or specify the owner of the DLO.
f. At the Include or omit prompt, accept *INCLUDE to include the object for
synchronization or specify *OMIT to omit the object from synchronization.
g. At the System 2 DLO path name and System 2 DLO name pattern prompts, if
the DLO path name and name pattern on system 2 are equal to the system 1
names, accept the defaults. Otherwise, specify the path name and pattern on
system 2 to which you want to synchronize the DLOs.
h. Press Enter.
5. At the System 2 parameter prompt, specify the name of the remote system on
which to synchronize the DLOs.
6. At the Synchronize authorities prompt, accept *YES to synchronize both
authorities and objects or specify another value.
7. At the Save active prompt, accept *NO to specify that objects in use are not saved
or specify another value.
8. At the Save active wait time prompt, specify the number of seconds to wait for a lock on
the object before continuing the save.
9. At the Maximum sending size (MB) prompt, specify the maximum size that an
object can be and still be synchronized.
10. Determine how the synchronize request will be processed. Choose one of the
following:
To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
11. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
12. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
13. To start the synchronization, press Enter.
Synchronizing data group activity entries
The procedures in this topic use the Synchronize DG Activity Entry (SYNCDGACTE)
command to synchronize an object that is identified by a data group activity entry with
any status value: *ACTIVE, *DELAYED, *FAILED, or *COMPLETED.
You should be aware of the information in the following topics:
Considerations for synchronizing using MIMIX commands on page 445
About synchronizing data group activity entries (SYNCDGACTE) on page 450
To synchronize an object identified by a data group activity entry, do the following:
1. From the Work with Data Group Activity Entry display, type 16 (Synchronize) next
to the activity entry that identifies the object you want to synchronize and press
Enter.
2. The Confirm Synchronize of Object display appears. Press Enter to confirm the
synchronization.
Alternative Process:
You will need to identify the data group and data group activity entry in this procedure.
1. From the MIMIX Intermediate Main Menu, select option 12 (Compare, verify, and
synchronize menu) and press Enter.
2. From the MIMIX Compare, Verify, and Synchronize menu, select option 45
(Synchronize DG File Entry) and press Enter.
3. At the Data group definition prompts, specify the data group name.
4. At the Object type prompt, specify a specific object type to synchronize or press
F4 to see a valid list.
5. Additional parameters appear based on the object type selected. Do one of the
following:
For files, you will see the Object, Library, and Member prompts. Specify the
object, library and member that you want to synchronize.
For objects, you will see the Object and Library prompts. Specify the object and
library of the object you want to synchronize.
For IFS objects, you will see the IFS object prompt. Specify the IFS object that
you want to synchronize.
For DLOs, you will see the Document library object and Folder prompts.
Specify the folder path and DLO name of the DLO you want to synchronize.
6. Determine how the synchronize request will be processed. Choose one of the
following:
To submit the job for batch processing, accept the default value *YES for the
Submit to batch prompt and press Enter. Continue with the next step.
To not use batch processing for the job, specify *NO for the Submit to batch
prompt and press Enter. The request to synchronize will be started.
7. At the Job description and Library prompts, specify the name and library of the job
description used to submit the batch request.
8. At the Job name prompt, accept *CMD to use the command name to identify the
job or specify a simple name.
9. To start the synchronization, press Enter.
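The alternative process can also be scripted. The sketch below resubmits
synchronization for a file identified by an activity entry. Every keyword shown
(DGDFN, OBJTYPE, OBJ, LIB, MBR, BATCH) is an assumption derived from the prompt
names in this procedure, and CUSTMAST and APPLIB are hypothetical names, so prompt
SYNCDGACTE with F4 to confirm the actual parameters:

    SYNCDGACTE DGDFN(APPDG) OBJTYPE(*FILE) +
               OBJ(CUSTMAST) LIB(APPLIB) MBR(CUSTMAST) +
               BATCH(*YES)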
Synchronizing tracking entries
Tracking entries are MIMIX constructs that identify IFS objects, data areas, or data
queues configured for replication with MIMIX advanced journaling. You can use a
tracking entry to synchronize the contents, attributes, and authorities of the item it
represents.
You should be aware of the information in the following topics:
Considerations for synchronizing using MIMIX commands on page 445
About MIMIX commands for synchronizing objects, IFS objects, and DLOs on
page 449
About synchronizing tracking entries on page 453
To synchronize an IFS tracking entry
To synchronize an object represented by an IFS tracking entry, do the following:
1. From the Work with DG IFS Tracking Entries (WRKDGIFSTE) display, type option
16 (Synchronize) next to the IFS tracking entry you want to synchronize. If you
want to change options on the SYNCIFS command, press F4 (Prompt).
2. To synchronize the associated IFS object, press Enter.
3. When the apply session has been notified that the object has been synchronized,
the status will change to *ACTIVE. To monitor the status, press F5 (Refresh).
4. If the synchronization fails, correct the errors and repeat the previous steps.
To synchronize an object tracking entry
To synchronize an object represented by an object tracking entry, do the following:
1. From the Work with DG Object Tracking Entries (WRKDGOBJTE) display, type
option 16 (Synchronize) next to the object tracking entry you want to synchronize.
If you want to change options on the SYNCOBJ command, press F4 (Prompt).
2. To synchronize the associated data area or data queue, press Enter.
3. When the apply session has been notified that the object has been synchronized,
the status will change to *ACTIVE. To monitor the status, press F5 (Refresh).
4. If the synchronization fails, correct the errors and repeat the previous steps.
Sending library-based objects
This procedure sends one or more library-based objects between two systems using
the Send Network Object (SNDNETOBJ) command.
Use the appropriate command: In general, you should use the SYNCOBJ
command to synchronize objects between systems. For more information about
differences between commands, see Performing the initial synchronization on
page 454.
You should be familiar with the information in the following topics before you use this
command:
Considerations for synchronizing using MIMIX commands on page 445
Synchronizing user profiles with the SNDNETOBJ command on page 446
Missing system distribution directory entries automatically added on page 447
To send library-based objects between two systems, do the following:
1. If the objects you are sending are located in an independent auxiliary storage pool
(ASP) on the source system, you must use the IBM command Set ASP Group
(SETASPGRP) on the local system to change the ASP group for your job. This
allows MIMIX to access the objects.
2. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and
press Enter.
3. The MIMIX Utilities Menu appears. Select option 11 (Send object) and press
Enter.
4. The Send Network Object (SNDNETOBJ) display appears. At the Object prompt,
specify either *ALL, the name of an object, or a generic name.
Note: You can specify as many as 50 objects. To expand this prompt for multiple
entries, type a plus sign (+) at the prompt and press Enter.
5. Specify the name of the library that contains the objects at the Library prompt.
6. Specify the type of objects to be sent from the specified library at the Object type
prompt.
Notes:
If you specify *ALL, all object types supported by the IBM i Save Object
(SAVOBJ) command are selected. The single values that are listed for this
parameter are not included when *ALL is specified because they are not
supported by the IBM i SAVOBJ command.
To expand this field for multiple entries, type a plus sign (+) at the prompt and
press Enter.
7. Press Enter.
8. Additional prompts appear on the display. Do the following:
a. Specify the name of the system to which you are sending objects at the
Remote system prompt.
b. If the library on the remote system has a different name, specify its name at the
Remote library prompt.
c. The remaining prompts on the display are used for objects synchronized via a
save and restore operation. Verify that the values shown are what you want. To
see a description of each prompt and its available values, place the cursor on
the prompt and press F1 (Help).
9. By default, objects are restored to the same ASP device or number from which
they were saved. To change the location where objects are restored, press F10
(Additional parameters), then specify a value for either the Restore to ASP device
prompt or the Restore to ASP number prompt.
Note: Object types *JRN, *JRNRCV, *LIB, and *SAVF can be restored to any
ASP. IBM restricts which object types are allowed in user ASPs. Some
object types may not be restored to user ASPs. Specifying a value of 1
restores objects to the system ASP. Specifying 2 through 32 restores
values to the basic user ASP specified. If the specified ASP number does
not exist on the target system or if it has overflowed, the objects are placed
in the system ASP on the target system.
10. By default, authority to the object on the remote system is determined by that
system. To have the authorities on the remote system determined by the settings
of the local system, press F10 (Additional parameters), then specify *SRC at the
Target authority prompt.
11. To start sending the specified objects, press Enter.
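A command-line sketch of this procedure follows. ACCOUNTS, APPLIB, and SYSTEMB are
hypothetical names, and the keywords (OBJ, LIB, OBJTYPE, RMTSYS) are inferred from
the prompt names above, so prompt SNDNETOBJ with F4 to confirm them:

    SNDNETOBJ OBJ(ACCOUNTS) LIB(APPLIB) +
              OBJTYPE(*FILE) RMTSYS(SYSTEMB)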
Sending IFS objects
This procedure uses IBM i save and restore functions to send one or more integrated
files system (IFS) objects between two systems with the Send Network IFS
(SNDNETIFS) command.
Use the appropriate command: In general, you should use the SYNCIFS command
to synchronize IFS objects between systems. For more information about differences
between commands, see Performing the initial synchronization on page 454.
You should be familiar with the information in Considerations for synchronizing using
MIMIX commands on page 445.
To send IFS objects between two systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and
press Enter.
2. The MIMIX Utilities Menu appears. Select option 13 (Send IFS object) and press
Enter.
3. The Send Network IFS (SNDNETIFS) display appears. At the Object prompt,
specify the name of the IFS object to send.
Note: You can specify as many as 30 IFS objects. To expand this prompt for
multiple entries, type a plus sign (+) at the prompt and press Enter.
4. Specify the name of the system to which you are sending IFS objects at the
Remote system prompt.
5. Press F10 (Additional parameters).
6. Additional parameters appear which MIMIX uses in the save and restore
operations. Verify that the values shown for the additional prompts are what you
want. To see a description of each prompt and its available values, place the
cursor on the prompt and press F1 (Help).
7. To start sending the specified IFS objects, press Enter.
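A command-line sketch of this procedure follows; the path and system name are
hypothetical, and the keywords (OBJ, RMTSYS) are inferred from the prompts above,
so confirm them by prompting SNDNETIFS with F4:

    SNDNETIFS OBJ('/home/app/config') RMTSYS(SYSTEMB)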
Sending DLO objects
This procedure uses IBM i save and restore functions to send one or more document
library objects (DLOs) between two systems using the Send Network DLO
(SNDNETDLO) command. When you are configuring for system journal replication,
use this procedure from the source system to send DLOs to the target system for
replication.
Use the appropriate command: In general, you should use the SYNCDLO
command to synchronize objects between systems. For more information about
differences between commands, see Performing the initial synchronization on
page 454.
You should be familiar with the information in Considerations for synchronizing using
MIMIX commands on page 445.
To send DLO objects between systems, do the following:
1. From the MIMIX Intermediate Main Menu, select option 13 (Utilities menu) and
press Enter.
2. The MIMIX Utilities Menu appears. Select option 12 (Send DLO object) and press
Enter.
3. The Send Network DLO (SNDNETDLO) display appears. At the Document library
object prompt, specify either *ALL or the name of the DLO.
Note: You can specify multiple DLOs. To expand this prompt for multiple entries,
type a plus sign (+) at the prompt and press Enter.
4. Specify the name of the folder that contains the DLOs at the Folder prompt.
5. Specify the name of the system to which you are sending DLOs at the Remote
system prompt.
6. Press F10 (Additional parameters).
7. Additional parameters appear on the display. MIMIX uses the Remote folder, Save
active, Save active wait time, and Allow object differences prompts in the save
and restore operations. Verify that the values shown are what you want. To see a
description of each prompt and its available values, place the cursor on the
prompt and press F1 (Help).
8. By default, authority to the object on the remote system is determined by that
system. To have the authorities on the remote system determined by the settings
of the local system, specify *SRC at the Target authority prompt.
9. To start sending the specified DLOs, press Enter.
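A command-line sketch of this procedure follows; BUDGET, ACCTG, and SYSTEMB are
hypothetical names, and the keywords (DLO, FLR, RMTSYS) are inferred from the
prompt names above, so confirm them by prompting SNDNETDLO with F4:

    SNDNETDLO DLO(BUDGET) FLR(ACCTG) RMTSYS(SYSTEMB)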
CHAPTER 21 Introduction to programming
MIMIX includes a variety of functions that you can use to extend MIMIX capabilities
through automation and customization.
The topics in this chapter include:
Support for customizing on page 481 describes several functions you can use to
customize your replication environment.
Completion and escape messages for comparison commands on page 483 lists
completion, diagnostic, and escape messages generated by comparison
commands.
The MIMIX message log provides a common location to see messages from all
MIMIX products. Adding messages to the MIMIX message log on page 490
describes how you can include your own messaging from automation programs in
the MIMIX message log.
MIMIX supports batch output jobs on numerous commands and provides several
forms of output, including outfiles. For more information, see Output and batch
guidelines on page 491.
Displaying a list of commands in a library on page 496 describes how to display
the super set of all commands known to License Manager or subset the list by a
particular library.
Running commands on a remote system on page 497 describes how to run a
single command or multiple commands on a remote system.
Procedures for running commands RUNCMD, RUNCMDS on page 498
provides procedures for using run commands with a specific protocol or by
specifying a protocol through existing MIMIX configuration elements.
Using lists of retrieve commands on page 504 identifies how to use MIMIX list
commands to include retrieve commands in automation.
Commands are typically set with default values that reflect the recommendation of
Vision Solutions. Changing command defaults on page 505 provides a method
for customizing default values should your business needs require it.
Support for customizing
MIMIX includes several functions that you can use to customize processing within
your replication environment.
User exit points
User exit points are predefined points within a MIMIX process at which you can call
customized programs. User exit points allow you to insert customized programs at
specific points in an application process to perform additional processing before
continuing with the application's processing.
MIMIX provides user exit points for journal receiver management. For more
information, see Chapter 23, Customizing with exit point programs.
Collision resolution
In the context of high availability, a collision is a clash of data that occurs when a
target object and a source object are both updated at the same time. When the
change to the source object is replicated to the target object, the data does not match
and the collision is detected.
With MIMIX user journal replication, the definition of a collision is expanded to include
any condition where the status of a file or a record is not what MIMIX determines it
should be when MIMIX applies a journal transaction. Examples of these detected
conditions include the following:
Updating a record that does not exist
Deleting a record that does not exist
Writing to a record that already exists
Updating a record for which the current record information does not match the
before image
The database apply process contains 12 collision points at which MIMIX can attempt
to resolve a collision.
When a collision is detected, by default the file is placed on hold due to an error
(*HLDERR) and user action is needed to synchronize the files. MIMIX provides
additional ways to automatically resolve detected collisions without user intervention.
This process is called collision resolution. With collision resolution, you can specify
different resolution methods to handle these different types of collisions. If a collision
does occur, MIMIX attempts the specified collision resolution methods until either the
collision is resolved or the file is placed on hold.
You can specify collision resolution methods for a data group or for individual data
group file entries. If you specify *AUTOSYNC for the collision resolution element of
the file entry options, MIMIX attempts to fix any problems it detects by synchronizing
the file.
You can also specify a named collision resolution class. A collision resolution class
allows you to define what type of resolution to use at each of the collision points.
Collision resolution classes allow you to specify several methods of resolution to try
for each collision point and support the use of an exit program. These additional
choices for resolving collisions allow customized solutions for resolving collisions
without requiring user action. For more information, see Collision resolution on
page 357.
Completion and escape messages for comparison commands
When the comparison commands finish processing, a completion or escape message
is issued. In the event of an escape message, a diagnostic message is issued prior to
the escape message. The diagnostic message provides additional information
regarding the error that occurred.
All completion or escape messages are sent to the MIMIX message log. To find
messages for comparison commands, specify the name of the command as the
process type. For more information about using the message log, see the MIMIX
Operations book.
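Because escape messages are returned to the caller, automation programs can also
trap them with the CL MONMSG command. The following is a minimal sketch, assuming a
hypothetical data group named APPDG and simplified CMPFILA parameters (prompt the
command with F4 to confirm them); LVE3E05 is the escape message documented below:

    PGM
      /* Compare file attributes for data group APPDG. The CMPFILA  */
      /* parameters shown are a simplified sketch; prompt with F4.  */
      CMPFILA    DGDFN(APPDG) OUTPUT(*PRINT)
      MONMSG     MSGID(LVE3E05) EXEC(DO)  /* Differences detected */
         SNDMSG     MSG('CMPFILA detected differences') +
                      TOUSR(*SYSOPR)
      ENDDO
    ENDPGM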
CMPFILA messages
The following are the messages for CMPFILA, with a comparison level specification of
*FILE:
Completion LVI3E01 This message indicates that all files were compared
successfully.
Diagnostic LVE3E0D This message indicates that a particular attribute
compared differently.
Diagnostic LVE3385 This message indicates that differences were detected for
an active file.
Diagnostic LVE3E12 This message indicates that a file was not compared. The
reason the file was not compared is included in the message.
Escape LVE3E05 This message indicates that files were compared with
differences detected. If the cumulative differences include files that were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
Escape LVE3381 This message indicates that compared files were different but
active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
Escape LVE3E09 This message indicates that the CMPFILA command ended
abnormally.
Escape LVE3E17 This message indicates that no object matched the specified
selection criteria.
Informational LVI3E06 This message indicates that no object was selected to be
processed.
The following are the messages for CMPFILA, with a comparison level specification of
*MBR:
Completion LVI3E05 This message indicates that all members compared
successfully.
Diagnostic LVE3388 This message indicates that differences were detected for
an active member.
Escape LVE3E16 This message indicates that members were compared with
differences detected. If the cumulative differences include members that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
CMPOBJA messages
The following are the messages for CMPOBJA:
Completion LVI3E02 This message indicates that objects were compared but no
differences were detected.
Diagnostic LVE3384 This message indicates that differences were detected for
an active object.
Escape LVE3E06 This message indicates that objects were compared and
differences were detected. If the cumulative differences include objects that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
Escape LVE3380 This message indicates that compared objects were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
Escape LVE3E17 This message indicates that no object matched the specified
selection criteria.
Informational LVI3E06 This message indicates that no object was selected to be
processed.
The LVI3E02 message includes message data containing the number of objects compared, the
system 1 name, and the system 2 name. The LVE3E06 message includes the same
message data as LVI3E02, and also includes the number of differences detected.
CMPIFSA messages
The following are the messages for CMPIFSA:
Completion LVI3E03 This message indicates that all IFS objects were compared
successfully.
Diagnostic LVE3E0F This message indicates that a particular attribute was
compared differently.
Diagnostic LVE3386 This message indicates that differences were detected for
an active IFS object.
Diagnostic LVE3E14 This message indicates that an IFS object was not
compared. The reason the IFS object was not compared is included in the
message.
Escape LVE3E07 This message indicates that IFS objects were compared with
differences detected. If the cumulative differences include IFS objects that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
Escape LVE3382 This message indicates that compared IFS objects were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
Escape LVE3E17 This message indicates that no object matched the specified
selection criteria.
Escape LVE3E0B This message indicates that the CMPIFSA command ended
abnormally.
Informational LVI3E06 This message indicates that no object was selected to be
processed.
CMPDLOA messages
The following are the messages for CMPDLOA:
Completion LVI3E04 This message indicates that all DLOs were compared
successfully.
Diagnostic LVE3E11 This message indicates that a particular attribute
compared differently.
Diagnostic LVE3387 This message indicates that differences were detected for
an active DLO.
Diagnostic LVE3E15 This message indicates that a DLO was not compared.
The reason the DLO was not compared is included in the message.
Escape LVE3E08 This message indicates that DLOs were compared and
differences were detected. If the cumulative differences include DLOs that were
different but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter, this message also includes those differences.
Escape LVE3383 This message indicates that compared objects were different
but active within the time span specified on the Maximum replication lag
(MAXREPLAG) parameter.
Escape LVE3E17 This message indicates that no object matched the specified
selection criteria.
Escape LVE3E0C This message indicates that the CMPDLOA command ended
abnormally.
Informational LVI3E06 This message indicates that no object was selected to be
processed.
CMPRCDCNT messages
The following are the messages for CMPRCDCNT:
Escape LVE3D4D This message indicates that ACTIVE(*YES) outfile
processing failed and identifies the reason code.
Escape LVE3D5A This message indicates that system journal replication is not
active.
Escape LVE3D5F This message indicates that an apply session exceeded the
unprocessed entry threshold.
Escape LVE3D6D This message indicates that user journal replication is not
active.
Escape LVE3D6F This message identifies the number of members compared
and how many compared members had differences.
Escape LVE3D72 This message identifies a child process that ended
unexpectedly.
Escape LVE3E17 This message indicates that no object was found for the
specified selection criteria.
Informational LVI306B This message identifies a child process that started
successfully.
Informational LVI306D This message identifies a child process that completed
successfully.
Informational LVI3D45 This message indicates that active processing
completed.
Informational LVI3D50 This message indicates that work files are not deleted.
Informational LVI3D5A This message indicates that system journal replication is
not active.
Informational LVI3D5F This message identifies an apply session that has
exceeded the unprocessed entry threshold.
Informational LVI3D6D This message indicates that user journal replication is
not active.
Informational LVI3E05 This message identifies the number of members
compared. No differences were detected.
Informational LVI3E06 This message indicates that no object was selected for
processing.
CMPFILDTA messages
The following are the messages for CMPFILDTA:
Completion LVI3D59 This message indicates that all members compared were
identical or that one or more members differed but were then completely repaired.
Diagnostic LVE3031 This message indicates that the name of the local system was
entered on the System 2 (SYS2) prompt. Using the name of the local system on
the SYS2 prompt is not valid.
Diagnostic LVE3D40 This message indicates that a record in one of the
members cannot be processed. In this case, another job is holding an update lock
on the record and the wait time has expired.
Diagnostic LVE3D42 This message indicates that a selected member cannot be
processed and provides a reason code.
Diagnostic LVE3D46 This message indicates that a file member contains one or
more field types that are not supported for comparison. These fields are excluded
from the data compared.
Diagnostic LVE3D50 This message indicates that a file member contains one or
more large object (LOB) fields and a value other than *NONE was specified on the
Repair on system (REPAIR) prompt. Files containing LOB fields cannot be
repaired. In this case, the request to process the file member is ignored. Specify
REPAIR(*NONE) to process the file member.
Diagnostic LVE3D64 This message indicates that the compare detected minor
differences in a file member. In this case, one member has more records
allocated. Excess allocated records are deleted. This difference does not affect
replication processing, however.
Diagnostic LVE3D65 This message indicates that processing failed for the
selected member. The member cannot be compared. Error message LVE0101 is
returned.
Escape LVE3358 This message indicates that the compare has ended
abnormally, and is shown only when the conditions of messages LVI3D59,
LVE3D5D, and LVE3D59 do not apply.
Escape LVE3D5D This message indicates that insignificant differences were
found or remain after repair. The message provides a statistical summary of the
differences found. Insignificant differences may occur when a member has
deleted records while the corresponding member has no records yet allocated at
the corresponding positions. It is also possible that one or more selected
members contains excluded fields, such as large objects (LOBs).
Escape LVE3D5E This message indicates that the compare request ended
because the data group was not fully active. The request included active
processing (ACTIVE), which requires a fully active data group. Output may not be
complete or accurate.
Escape LVE3D5F This message indicates that the apply session exceeded the
specified threshold for unprocessed entries. The DB apply threshold
(DBAPYTHLD) parameter determines what action should be taken when the
threshold is exceeded. In this case, the value *END was specified for
DBAPYTHLD, thereby ending the requested compare and repair action.
Escape LVE3D59 This message indicates that significant differences were
found or remain after repair, or that one or more selected members could not be
compared. The message provides a statistical summary of the differences found.
Escape LVE3D56 This message indicates that no member was selected by the
object selection criteria.
Escape LVE3D60 This message indicates that the status of the data group
could not be determined. The WRKDG (MXDGSTS) outfile returned a value of
*UNKNOWN for one or more fields used in determining the overall status of the
data group.
Escape LVE3D62 This message indicates the number of mismatches that will
not be fully processed for a file due to the large number of mismatches found for
this request. The compare will stop processing the affected file and will continue to
process any other files specified on the same request.
Escape LVE3D67 This message indicates that the value specified for the File
entry status (STATUS) parameter is not valid. To process members in *HLDERR
status, a data group must be specified on the command and *YES must be
specified for the Process while active parameter.
Escape LVE3D68 This message indicates that a switch cannot be performed
due to members undergoing compare and repair processing.
Escape LVE3D69 This message indicates that the data group is not configured
for database. Data groups used with the CMPFILDTA command must be
configured for database, and all processes for that data group must be active.
Escape LVE3D6C This message indicates that the CMPFILDTA command
ended before it could complete the requested action. The processing step in
progress when the end was received is indicated. The message provides a
statistical summary of the differences found.
Escape LVE3E41 This message indicates that a database apply job cannot
process a journal entry with the indicated code, type, and sequence number
because a supporting function failed. The journal information and the apply
session for the data group are indicated. See the database apply job log for
details of the failed function.
Informational LVI3727 This message indicates that the database apply process
(DBAPY) is currently processing a repair request for a specific member. The
member was previously being held due to error (*HLDERR) and is now in
*CMPRLS state.
Informational LVI3728 This message indicates that the database apply process
(DBAPY) is currently processing a repair request for a specific member. The
member was previously being held due to error (*HLDERR) and has been
changed from *CMPRLS to *CMPACT state.
Informational LVI3729 This message indicates that the repair request for a
specific member was not successful. As a result, the CMPFILDTA command has
changed the data group file entry for the member back to *HLDERR status.
Informational LVI372C The CMPFILDTA command is ending controlled because
of a user request. The command did not complete the requested compare or
repair. Its output may be incomplete or incorrect.
Informational LVI372D The CMPFILDTA command exceeded the maximum rule
recovery time policy and is ending. The command did not complete the requested
compare or repair. Its output may be incomplete or incorrect.
Informational LVI372E The CMPFILDTA command is ending unexpectedly. It
received an unexpected request from the remote CMPFILDTA job to shut down
and is ending. The command did not complete the requested compare or repair.
Its output may be incomplete or incorrect.
Informational LVI3D4B This message indicates that work files are not
automatically deleted because the time specified on the Wait time (seconds)
(ACTWAIT) prompt expired or an internal error occurred.
Informational LVI3D59 This message indicates that the CMPFILDTA command
completed successfully. The message also provides a statistical summary of
compare processing.
Informational LVI3D5E This message indicates that the compare request ended
because the request required Active processing and the data group was not
active. Results of the comparison may not be complete or accurate.
Informational LVI3D5F This message indicates that the apply session exceeded
the specified threshold for unprocessed entries, thereby ending the requested
compare and repair action. In this case, the value *END was specified for the DB
apply threshold (DBAPYTHLD) parameter, which determines what action should
be taken when the threshold is exceeded.
Informational LVI3D60 This message indicates that the status of the data group
could not be determined. The MXDGSTS outfile returned a value of *UNKNOWN
for one or more status fields associated with systems, journals, system managers,
journal managers, system communications, remote journal link, and database
send and apply processes.
Informational LVI3E06 This message indicates that the data group specified
contains no data group file entries.
When active processing and ACTWAIT(*NONE) are specified, or when the active wait
timeout occurs, some members will have unconfirmed differences if none of the
differences initially found was verified by the MIMIX database apply process.
The CMPFILDTA outfile contains more detail on the results of each member compare,
including information on the types of differences that are found and the number of
differences found in each member.
Messages LVI3D59, LVE3D5D, LVE3D59, and LVE3D6C include message data
containing the number of members selected on each system, the number of members
compared, the number of members with confirmed differences, the number of
members with unconfirmed differences, the number of members successfully
repaired, and the number of members for which repair was unsuccessful.
Adding messages to the MIMIX message log
The Add Message Log Entry (ADDMSGLOGE) command allows you to add an entry
to the MIMIX message log. This is helpful when you want to include messages from
your automation programs in the MIMIX message log for easier tracking. To see the
parameters for this command, type the command and press F4 (Prompt). Help text for
the parameters describes the options available.
The message is written to the message log file. The message is also sent to the
primary and secondary message queues if the message meets the filter criteria for
those queues. The message can also be sent to a program message queue.
Messages generated on a network system will be automatically sent to the
management system. However, messages generated on a management system may
not be sent to any network systems. The system manager on the management
system does not send messages to network systems when it cannot determine which
system should receive the message.
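For example, an automation program can log its own progress alongside MIMIX
messages. In the sketch below, the keyword names MSG and SEV are illustrative
assumptions only; press F4 on the command to see the actual parameters:

    /* MSG and SEV are assumed keyword names; prompt ADDMSGLOGE */
    /* with F4 to see its actual parameters.                    */
    ADDMSGLOGE MSG('Nightly synchronization completed') SEV(10)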
Output and batch guidelines
This topic provides guidelines for display, print, and file output. In addition, the user
interface, the mechanics of selecting and producing output, and content issues such
as formatting are described.
Batch job submission guidelines are also provided. These guidelines address the
user interface as well as the mechanics of submitting batch jobs that are not part of
the mainline replication process.
General output considerations
Commands can produce many forms of output, including messages, display output
(interactive panels), printer output (spooled files), and file output. This section focuses
primarily on display, print, and file-related output. In most cases, the output
information can be selectively directed to a display, a printer, or an outfile. Messages,
on the other hand, are intended to provide diagnostic or status-related information, or
an indication of error conditions. Messages are not intended for general output.
Several commands support display, print, output files, or some combination thereof.
The Work (WRK) and Display (DSP) commands are the most common classes of
commands that support various forms of output. Other classes of commands, such as
Compare (CMP) and Verify (VFY), also support various forms of output in many
cases. As part of an on-going effort to ensure consistent capabilities across similar
classes of commands, most commands in the same class support the same output
formats. For example, all Work (WRK) commands typically support display, print, and
output formats. This section describes the general guidelines used throughout the
product. However, there are some exceptions, which are described in the sections
about specific commands.
Display support is intended primarily for Display (DSP) commands for displaying
detailed information about a specific entry, or for Work (WRK) related commands that
display lists of entries. Audit-based commands, such as Compare (CMP) and Verify
(VFY), are often long-running requests and do not typically provide display support.
Spooled output support provides a more easily readable form of output for print or
distribution purposes. Output is generated in the form of spooled output files that can
easily be printed or distributed. Nearly all Display (DSP) or Work (WRK) commands
support this form of output. In some cases, other command-specific options may
affect the contents of the spooled output file.
Output files are intended primarily for automation purposes, providing MIMIX-related
information in a manner that facilitates programming automation for various
purposes, such as additional monitoring support, auditing support, automatic
detection, and the correction of error conditions. Output files are also beneficial as
intermediate data for advanced reporting using SQL query support.
Output parameter
Some commands can produce output of more than one type: display, print, or output
file. In these cases, the selection is made on the Output parameter. Table 66 lists the
values supported by the Output parameter.
Table 66. Values supported by the Output parameter
Value       Description
*           Display only
*NONE       No output is generated
*PRINT      Spooled output is generated
*OUTFILE    An output file is generated
*BOTH       Both spooled output and an output file are generated
Note: Not all values are supported for all commands. For some commands, a
combination of values is supported.
Commands that support OUTPUT(*) and that can also run in batch are required to
support the other forms of output as well.
Commands called from a program or submitted to batch with a specification of
OUTPUT(*) default to OUTPUT(*PRINT). Displaying a panel during batch processing
or when called from another program would otherwise fail.
With the exception of messages generated as a result of running a command,
commands that support OUTPUT(*NONE) will generate no other forms of output.
Commands that support combinations of output values do not support OUTPUT(*) in
combination with other output values.
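For example, because Work (WRK) commands that run in batch must support
OUTPUT(*PRINT), a report can be produced from a submitted job. The following is a
minimal sketch using the IBM Submit Job (SBMJOB) command and the shipped MXDFT job
description; replace installation_library with the name of your MIMIX installation
library:

  SBMJOB CMD(WRKDGDFN OUTPUT(*PRINT)) JOB(WRKDGDFN) +
         JOBD(installation_library/MXDFT)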
Display output
Commands that support OUTPUT(*) provide the ability to display information
interactively. Display (DSP) and Work (WRK) commands commonly use display
support. Display commands typically display detailed information for a specific entity,
such as a data group definition. Work commands display a list of entries and provide a
summary view of that list. Display support is required to work interactively with
the MIMIX product.
Work commands often provide subsetting capabilities that allow you to select a
subset of information. Rather than viewing all configuration entries for all data groups,
for example, subsetting allows you to view the configuration entries for a specific data
group. This ability allows you to easily view data that is important or relevant to you at
a given time.
Print output
Spooled output is generated by specifying OUTPUT(*PRINT), and is intended to
provide a readable form of output for print or distribution purposes. Output is
generated in the form of spooled output files that can easily be printed or distributed.
On commands that support spooled output, the spooled output is generated as a
result of specifying OUTPUT(*PRINT). Most Display (DSP) or Work (WRK)
commands support this form of output. Other commands, such as Compare (CMP)
and Verify (VFY), also support spooled output in most cases.
The Work (WRK) and Display (DSP) commands support different categories of
reports. The following are standard categories of reports available from these
commands:
• The detail report contains information for one item, such as an object, definition,
  or entry. A detail report is usually obtained by using option 6 (Print) on a Work
  (WRK) display, or by specifying *PRINT on the Output parameter on a Display
  (DSP) command.
• The list summary report contains summary information for multiple objects,
  definitions, or entries. A list summary is usually obtained by pressing F21 (Print)
  on a Work (WRK) display. You can also get this report by specifying *BASIC on
  the Detail parameter on a Work (WRK) command.
• The list detail report contains detailed information for multiple objects,
  definitions, or entries. A list detail report is usually obtained by specifying *PRINT
  on the Output parameter of a Work (WRK) command.
Certain parameters, which vary from command to command, can affect the contents
of spooled output. The following list represents a common set of parameters that
directly impact spooled output:
• EXPAND(*YES or *NO) - The expand parameter is available on the Work with
  Data Group Object Entries (WRKDGOBJE), the Work with Data Group IFS
  Entries (WRKDGIFSE), and the Work with Data Group DLO Entries
  (WRKDGDLOE) commands. Configuration for objects, IFS objects, and DLOs can
  be accomplished using generic entries, which represent one or more actual
  objects on the system. The object entry ABC*, for example, can represent many
  entries on a system. Expand support provides a means to determine which actual
  objects on a system are represented by a MIMIX configuration. Specifying *NO on
  the EXPAND parameter prints the configured data group entries.
• DETAIL(*FULL or *BASIC) - Available on the Work (WRK) commands, the detail
  option determines the level of detail in the generated spooled file. Specifying
  DETAIL(*BASIC) prints a summary list of entries. For example, this specification
  on the Work with Data Group Definitions (WRKDGDFN) command will print a
  summary list of data group definitions. Specifying DETAIL(*FULL) prints each data
  group definition in detail, including all attributes of the data group definition.
  Note: This parameter is ignored when OUTPUT(*) or OUTPUT(*OUTFILE) is
  specified.
• RPTTYPE(*DIF, *ALL, *SUMMARY or *RRN, depending on command) - The
  Report Type (RPTTYPE) parameter controls the amount of information in the
  spooled file. The values available for this parameter vary, depending on the
  command.
  The values *DIF, *ALL, and *SUMMARY are available on the Compare File
  Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS
  Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands.
  Specifying *DIF reports only detected differences. A value of *SUMMARY reports
  a summary of objects compared, including an indication of differences detected.
  *ALL provides a comprehensive listing of objects compared as well as difference
  detail.
  The Compare File Data (CMPFILDTA) command supports the *DIF and *ALL
  values, as well as the value *RRN. Specifying *RRN allows you to output the
  relative record numbers of the first 1,000 records that failed to compare. Using the
  *RRN value can help resolve situations where a discrepancy is known to exist, but
  you are unsure which system contains the correct data. In this case, *RRN provides
  the information that enables you to display the specific records on the two
  systems and to determine the system on which the file should be repaired.
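For example, a comparison could capture the relative record numbers in an outfile with
a request similar to the following. This is a sketch only; the DGDFN keyword and the
file names are illustrative, so prompt the command (F4) to confirm the parameters on
your installation:

  CMPFILDTA DGDFN(MYDGDFN) RPTTYPE(*RRN) OUTPUT(*OUTFILE) +
            OUTFILE(MYLIB/CMPRRN) OUTMBR(*FIRST *REPLACE)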
File output
Output files can be generated by specifying OUTPUT(*OUTFILE). Full outfile support
across the MIMIX product is a key enabler for advanced automation. It also allows
MIMIX customers and qualified MIMIX consultants to develop and deliver solutions
tailored to the individual needs of the user.
As with the other forms of output, output files are commonly supported across certain
classes of commands. The Work (WRK) commands commonly support output files. In
addition, many audit-based commands, such as the Compare (CMP) commands, also
provide output file support. Output file support for Work (WRK) commands provides
access to the majority of MIMIX configuration and status-related data. The Compare
(CMP) commands also provide output files as a key enabler for automatic error
detection and correction capabilities.
When you specify OUTPUT(*OUTFILE), you must also specify the OUTFILE and
OUTMBR parameters. The OUTFILE parameter requires a qualified file and library
name. As a result of running the command, the specified output file will be used. If the
file does not exist, it will automatically be created.
Note: If a new file is created for CMPFILA, for example, the record format used is
from the supplied model database file MXCMPFILA, found in the installation
library. The text description of the created file is "Output file for CMPFILA".
The file cannot reside in the product library.
The Outmember (OUTMBR) parameter allows you to specify which member to use in
the output file. If no member exists, the default value of *FIRST creates a member
with the same name as the file. A second element on the Outmember parameter
indicates the way in which information is stored for an existing member. A value of
*REPLACE will clear the current contents of the member and add the new records. A
value of *ADD will append the new records to the existing data.
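For example, the following sketch writes a list of data group definitions to an
outfile and then views the result with the IBM Run Query (RUNQRY) command; the file
and library names are placeholders:

  WRKDGDFN OUTPUT(*OUTFILE) OUTFILE(MYLIB/DGDFNS) OUTMBR(*FIRST *REPLACE)
  RUNQRY QRY(*NONE) QRYFILE((MYLIB/DGDFNS))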
Expand support: Expand support was developed specifically as a feature for
data group configuration entries that support generic specifications. Data group object
entries, IFS entries, and DLO entries can all be configured using generic name
values. If you specify an object entry with an object name of ABC* in library XYZ and
accept the default values for all other fields, for example, all objects in library XYZ are
replicated. Specifying EXPAND(*NO) will write the specific configuration entries to the
output files. Using EXPAND(*YES) will list all objects from the local system that match
the configuration specified. Thus, if object name ABC* for library XYZ represented
1000 actual objects on the system, EXPAND(*YES) would add 1000 rows to the
output file. EXPAND(*NO) would add a single generic entry.
Note: EXPAND(*YES) support locates all objects on the local system.
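For example, assuming a data group named MYDGDFN (the DGDFN keyword is shown for
illustration only), the following sketch would write one row to the outfile for each
actual object on the local system that matches the configured object entries:

  WRKDGOBJE DGDFN(MYDGDFN) EXPAND(*YES) OUTPUT(*OUTFILE) +
            OUTFILE(MYLIB/OBJLIST) OUTMBR(*FIRST *REPLACE)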
General batch considerations
MIMIX functions that are identified as long-running processes typically allow you to
submit the requests to batch and avoid the unnecessary use of interactive resources.
Parameters typically associated with the Batch (BATCH) parameter include Job
description (JOBD) and Job name (JOB).
Batch (BATCH) parameter
Values supported on the Batch (BATCH) parameter include *YES and *NO. A value of
*YES indicates that the request will be submitted to batch. A value of *NO will cause
the request to run interactively. The default value varies from command to command,
and is based on the general usage of the command. If a command usually requires
significant resources to run, the default will likely be *YES.
Some commands, such as Start Data Group (STRDG), perform a number of
interactive tasks and start numerous jobs by submitting the requests to batch.
Likewise, some jobs, such as the data group apply process, run on a continuous basis
and do not end until specifically requested. These jobs represent the various
processes required to support an active data group. Commands of this type do not
provide a Batch (BATCH) parameter because batch processing is the only method available.
For commands that are called from other programs, it is important to understand the
difference between BATCH(*YES) and BATCH(*NO). Implementing automatic audit
detection and correction support is easier to accomplish using BATCH(*NO). For
example, assume you are running the Compare File Attributes (CMPFILA) command as
part of an audit. If differences are detected, specifying BATCH(*NO) allows the
calling program to monitor for specific exceptions and implement automatic correction
procedures. This capability is not available if you submit the request with
BATCH(*YES).
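A minimal CL sketch of this technique follows. The DGDFN keyword is illustrative and
LVE0000 is a placeholder; substitute the escape message identifier documented for the
command you are monitoring:

  PGM
    CMPFILA DGDFN(MYDGDFN) BATCH(*NO) /* Run the audit inline */
    MONMSG MSGID(LVE0000) EXEC(DO)    /* Placeholder escape message ID */
      /* Implement automatic correction procedures here */
    ENDDO
  ENDPGM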
Job description (JOBD) parameter
The Job description (JOBD) parameter allows the user of the command to specify
which job description to use when submitting the batch request. Newer MIMIX
commands use the job descriptions MXAUDIT, MXSYNC, and MXDFT, which are
automatically created in the MIMIX installation library when MIMIX is installed. Jobs
and related output are associated with the user profile submitting the request. Older
commands that provided job description support for batch processing have not been
altered. Refer to individual commands for default values.
Job name (JOB) parameter
The Job name (JOB) parameter allows the user of the command to specify the job
name used for the submitted job request. By default, the job name is the name of the
command. The Job name parameter is intended to make it easier to identify the
active job as well as the spooled files generated as a result of running the
command. For spooled files, the job name is also used for the user data information.
Only newer features provide this capability.
Displaying a list of commands in a library
You can use the IBM Select Command (SLTCMD) command to display a list of all
commands contained within a particular library on the system. This list includes any
commands you have added to the associated library, including copies of other
commands.
Note: This list does not indicate whether you are licensed to use a command or
whether you have authority to it.
Do the following:
1. From the library you want, access the MIMIX Intermediate Main Menu.
2. Select option 13 (Utilities menu) and press Enter.
3. When the MIMIX Utilities Menu is displayed, select option 1 (Select all
commands).
Running commands on a remote system
The Run Command (RUNCMD) and Run Commands (RUNCMDS) commands
provide a convenient way to run a single command or multiple commands on a
remote system. The RUNCMD and RUNCMDS commands replace and extend the
capabilities available in the IBM commands, Submit Remote Command
(SBMRMTCMD) and Run Remote Command (RUNRMTCMD).
The MIMIX commands provide a protocol-independent way of running commands
using MIMIX constructs such as system definitions, data group definitions, and
transfer definitions. The MIMIX commands enable you to run commands and receive
messages from the remote system.
In addition, the RUNCMD and RUNCMDS commands use the current data group
direction to determine where the command is to be run. This capability simplifies
automation by eliminating the need to manually enter source and target information at
the time a command is run.
Note: Do not change the RUNCMD or RUNCMDS commands to
PUBLIC(*EXCLUDE) without giving MIMIXOWN proper authority.
Benefits - RUNCMD and RUNCMDS commands
Individually, the RUNCMD command can be used as a convenient tool to debug base
communications problems. The RUNCMD command also provides the ability to
prompt on any command. The RUNCMDS command, while supporting up to 300
commands, does not allow command prompting. When multiple commands are run
on a single RUNCMDS command, only one communications session is established.
The target program environment, including QTEMP and the local data area, is also
kept intact. Additionally, the RUNCMDS command has options for monitoring escape
and completion messages. All messages are sent to the same program level as the
program or command line running the command, enabling you to program remote
commands in the same manner as local commands.
Both RUNCMD and RUNCMDS allow you to specify commands to be sent through
the journal stream and run by the database apply process. This protocol is a MIMIX
request that is sent through the journal stream using U-MX journal entry codes. The
value *DGJRN on the Protocol prompt enables this capability, thereby replacing
conventional U-EX support. In addition, the When to run (RUNOPT) prompt can be
used to specify when the journal entry associated with the command is processed by
the target system for the specified data group. See Procedures for running
commands RUNCMD, RUNCMDS on page 498 for additional details about the
RUNOPT parameter.
Benefits of the RUNCMD and RUNCMDS commands also include the following:
• Provides a convenient and consistent interface to automate tasks across a
  network.
• Centralizes the management and control of networked systems.
• Enables protocol-independent testing and verification of MIMIX communications
  setups.
• Supports sending and receiving local data area (LDA) data.
• Allows commands to be run under other user profiles as long as the user ID and
  password are the same on both systems. The password is validated before the
  command is run on the remote system, thus the user must have authority to the
  user profile being used.
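For example, a single command could be run on a remote system over TCP/IP with a
request similar to the following sketch. The CMD, HOST, and PORT keywords are
assumptions based on the prompts described in the procedures below; prompt RUNCMD
with F4 to confirm them:

  RUNCMD CMD(DSPLIB LIB(PAYROLL)) PROTOCOL(*TCP) +
         HOST(SYSTEMB) PORT(50410)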
Procedures for running commands RUNCMD, RUNCMDS
There are two ways to use the RUNCMD or RUNCMDS commands. You can use
them with a specific protocol, or you can use them by specifying a protocol through
existing MIMIX configuration elements. To use the commands with a specific protocol,
use the procedure Running commands using a specific protocol on page 498. To
use the commands using an existing MIMIX configuration, use the procedure
Running commands using a MIMIX configuration element on page 500.
Running commands using a specific protocol
1. From the MIMIX Main Menu, select option 13 (Utilities menu). The MIMIX Utilities
Menu appears.
2. From the MIMIX Utilities Menu, select option 1 (Select all commands). The Select
Command display appears.
3. Page down and do one of the following:
• To run a single command on a remote system, type a 1 next to RUNCMD. The
  Run Command (RUNCMD) display appears.
• To run multiple commands on a remote system, type a 1 next to RUNCMDS.
  The Run Commands (RUNCMDS) display appears.
4. Specify the commands to run or messages to monitor for the command as follows:
a. At the Command prompt specify the command to run on the remote system.
When using the RUNCMDS command, you can specify up to 300 commands.
b. If you are using the RUNCMDS command, you can specify as many as ten
escape, notify, or status messages to be monitored for each command. Specify
these at the Monitor for messages prompt.
5. Specify the protocol and protocol-specific implementation using Table 67.
Table 67. Specific protocols and specifications used for RUNCMD and RUNCMDS

Run on local system:
  At the Protocol prompt, specify *LOCAL.

Run using TCP/IP:
  1. At the Protocol prompt, specify *TCP to run the commands using
     Transmission Control Protocol/Internet Protocol (TCP/IP)
     communications. Press Enter for additional prompts.
  2. At the Host name or address prompt, specify the host alias or
     address to use for TCP/IP communications with the remote system.
  3. At the Port number or alias prompt, specify the port number or port
     alias on the local system to communicate with the remote system.
     This value is a 14-character mixed-case TCP port alias or port
     number.

Run using SNA:
  1. At the Protocol prompt, specify *SNA to run the commands using
     Systems Network Architecture (SNA) communications. Press Enter for
     additional prompts.
  2. At the Remote location prompt, specify the name or address of the
     remote location.
  3. At the Local location prompt, specify the unique location name that
     identifies the system to remote devices.
  4. At the Remote network identifier prompt, specify the network
     identifier of the remote location.
  5. At the Mode prompt, specify the name of the mode description used
     for communications. The product default for this parameter is MIMIX.

Run using OptiConnect:
  1. At the Protocol prompt, specify *OPTI to run the commands using
     OptiConnect fiber optic network communications. Press Enter for
     additional prompts.
  2. At the Remote location prompt, specify the name or address of the
     remote location.
6. Do one of the following:
• To access additional options, skip to Step 7.
• To run the commands or monitor for messages, press Enter.
7. Press F10 (Additional parameters).
8. At the Check syntax prompt, specify whether to check the syntax of the command
only. If *YES is specified, the syntax is checked but the command is not run.
9. At the Local data area length prompt, specify the amount of the current local data
area (LDA) to copy. This is useful for automating application processing that is
dependent on the local data area and for passing binary information to command
programs.
10. At the Return LDA prompt, specify whether to return the contents of the local data
area (LDA) from the remote system after the commands are run. The value
specified in the Local data area length prompt in Step 9 determines how much
data is returned.
11. At the User prompt, specify the user profile to use when the command is run on
the remote system.
12. To run the commands or monitor for messages, press Enter.
Running commands using a MIMIX configuration element
To use RUNCMD or RUNCMDS using a MIMIX configuration element, do the
following:
1. From the MIMIX Main Menu, select option 13 (Utilities menu). The MIMIX Utilities
Menu appears.
2. From the MIMIX Utilities Menu, select option 1 (Select all commands). The Select
Command display appears.
3. Page down and do one of the following:
• To run a single command on a remote system, type a 1 next to RUNCMD. The
  Run Command (RUNCMD) display appears.
• To run multiple commands on a remote system, type a 1 next to RUNCMDS.
  The Run Commands (RUNCMDS) display appears.
4. Specify the commands to run or messages to monitor for the command as follows:
a. At the Command prompt specify the command to run on the remote system.
When using the RUNCMDS command, you can specify up to 300 commands.
b. If you are using the RUNCMDS command, you can specify as many as ten
escape, notify, or status messages to be monitored for each command. Specify
these at the Monitor for messages prompt.
5. Specify the MIMIX configuration element using Table 68.
Table 68. MIMIX configuration protocols and specifications

Run on the system defined by the default transfer definition:
  Protocol prompt value: *SYSDFN
  Also specify: At the System definition prompt, specify the name of the
  system definition or press F4 for a list of valid definitions. Press
  Enter for additional prompts.

Run on the system specified in the transfer definition (TFRDFN parameter)
that is not the local system:
  Protocol prompt value: *TFRDFN
  Also specify: At the Transfer definition prompt, press F1 (Help) for
  assistance in specifying the three-part qualified name of the transfer
  definition. Press Enter for additional prompts.

Run on the system specified in the data group definition that is not the
local system:
  Protocol prompt value: *DGDFN
  Also specify: At the Data group definition prompt, press F1 (Help) for
  assistance in specifying the three-part qualified name of the data
  group definition.

Run on the current source system defined for the data group:
  Protocol prompt value: *DGSRC
  Also specify: At the Data group definition prompt, press F1 (Help) for
  assistance in specifying the three-part qualified name of the data
  group definition.

Run on the current target system defined for the data group:
  Protocol prompt value: *DGTGT
  Also specify: At the Data group definition prompt, press F1 (Help) for
  assistance in specifying the three-part qualified name of the data
  group definition.

Run by the database apply process when the journal entry is processed:
  Protocol prompt value: *DGJRN
  Also specify: At the Data group definition prompt, press F1 (Help) for
  assistance in specifying the three-part qualified name of the data
  group definition.

Run on the system defined as System 1 for the data group:
  Protocol prompt value: *DGSYS1
  Also specify: At the Data group definition prompt, press F1 (Help) for
  assistance in specifying the three-part qualified name of the data
  group definition.

Run on the system defined as System 2 for the data group:
  Protocol prompt value: *DGSYS2
  Also specify: At the Data group definition prompt, press F1 (Help) for
  assistance in specifying the three-part qualified name of the data
  group definition.
6. Do one of the following:
• To access additional options, skip to Step 7.
• To run the commands or monitor for messages, press Enter.
7. Press F10 (Additional parameters).
8. At the Check syntax only prompt, specify whether to check the syntax of the
command only. If *YES is specified, the syntax is checked but the command is not
run.
9. At the Local data area length prompt, specify the amount of the current local data
area (LDA) to copy. This is useful for automating application processing that is
dependent on the local data area and for passing binary information to command
programs.
10. At the Return LDA prompt, specify whether to return the contents of the local data
area (LDA) from the remote system after the commands are run. The value
specified in the Local data area length prompt in Step 9 determines how much
data is returned.
11. At the User prompt, specify the user profile to use when the command is run on
the remote system.
12. If you specified *DGJRN for the Protocol prompt, you will see the File prompts. Do
the following:
a. At the File name prompt, specify the name of the file to use when the journal
entry generated by the commands is sent.
Note: Use these prompts if you want the command to run in the database
apply job associated with the named file. If a file is not specified,
database apply (DBAPY) session A is selected.
b. At the Library prompt, specify the name of the library associated with the file.
13. If you specified a file name for the File prompt, you will see the When to run
prompt. Using Table 69, specify when the journal entry associated with the
command is processed by the target system for the specified data group.
14. To run the commands or monitor for messages, press Enter.
Table 69. Options for processing journal entries with the MIMIX *DGJRN protocol

Run when the database apply job for the specified file receives the
journal entry:
  1. At the Protocol prompt, specify *DGJRN.
  2. At the When to run prompt, specify *RCV.

Run in sequence with all other entries for the file:
  1. At the Protocol prompt, specify *DGJRN.
  2. At the When to run prompt, specify *APY.
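Putting these options together, a *DGJRN request might look like the following
sketch. The CMD, DGDFN, and FILE keywords and all names shown are assumptions for
illustration only:

  RUNCMDS CMD((ADDLIBLE APPLIB) (CALL APPLIB/REFRESH)) +
          PROTOCOL(*DGJRN) DGDFN(MYDGDFN SYSTEMA SYSTEMB) +
          FILE(APPLIB/ORDERS) RUNOPT(*APY)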
Using lists of retrieve commands
The following additional commands make working with retrieve commands easier:
Note: Although the current retrieve commands will be supported indefinitely, they will
not be enhanced. You are encouraged to use the extensive outfile support that
is now available. Outfile support provides the means to generate a list of
entries; the retrieve commands are primarily intended to handle retrieving
information for a specific entry only. For more information, see Output and
batch guidelines on page 491.
• Open MIMIX List (OPNMMXLST) - This command allows you to open a list of
  specified MIMIX definitions or data group entries for use with the MIMIX retrieve
  commands. You specify the type of definitions or data group entries to include in
  the list, a CL variable to receive the list identifier, and a data group definition. The
  CL variable for the list identifier is needed for the MIMIX retrieve commands.
• Close MIMIX List (CLOMMXLST) - This command allows you to close a list of
  specified MIMIX definitions or data group entries opened by the Open MIMIX List
  (OPNMMXLST) command. A close is necessary in order to free resources. You
  specify the list identifier to close.
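A CL sketch of the open and close pairing follows. The parameter keywords (LSTTYPE,
LSTID, DGDFN), the list type value, and the list identifier length are assumptions
for illustration only; consult the command help for the actual interface:

  PGM
    DCL VAR(&LSTID) TYPE(*CHAR) LEN(10)   /* Receives the list identifier */
    OPNMMXLST LSTTYPE(*DGOBJE) LSTID(&LSTID) +
              DGDFN(MYDGDFN SYSTEMA SYSTEMB)
    /* Call the appropriate retrieve command here, passing &LSTID */
    CLOMMXLST LSTID(&LSTID)               /* Free the list resources */
  ENDPGM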
Changing command defaults
Nearly all MIMIX processes are based on commands that are shipped with default
values that reflect best-practice recommendations. This ensures the easiest and best
use of each command. MIMIX implements named configuration definitions through
which you can customize your configuration by using options on commands, without
resorting to changing command defaults.
If you wish to customize command defaults to fit a specific business need, use the
IBM Change Command Default (CHGCMDDFT) command. Be aware that by
changing a command default, you may affect the operation of other MIMIX
processes. Also, any changes you make will be lost each time the MIMIX software is
updated.
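For example, the following request changes the shipped default for the Report type
parameter of the CMPFILA command; remember that the change must be reapplied after
each MIMIX software update:

  CHGCMDDFT CMD(installation_library/CMPFILA) NEWDFT('RPTTYPE(*ALL)')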
CHAPTER 22 Customizing procedures
This chapter describes how to perform operations to customize the configuration of
procedures for application groups. With procedures, environments which use
application groups have greater flexibility over how start, end, and switch operations
are performed. This flexibility also increases user responsibility for understanding how
procedures function as well as what actions their steps perform.
Detailed information about the effect that operational commands for procedures and
steps can have on your environment is available in the MIMIX Operations book.
This chapter includes:
• Procedure components and concepts on page 506 describes the functionality
  provided by procedures and related configuration components, the types of
  procedures available, jobs used in processing procedures, and runtime attributes
  of steps. This section also includes topics describing operational control, status,
  and capabilities for retaining historical information about completed runs of
  procedures.
• Customizing user application handling for switching on page 510 describes the
  action required to avoid problems when attempting to switch and describes
  options for customizing step programs intended for starting and ending user
  applications within switch procedures.
• Working with procedures on page 512 includes how to use the Work with
  Procedures display and topics for creating and deleting procedures.
• Working with the steps of a procedure on page 515 includes how to use the
  Work with Steps display and topics for displaying the configured steps of a
  procedure, changing runtime attributes of steps, as well as topics for adding,
  removing, and enabling or disabling steps.
• Working with step programs on page 517 describes how to access a list of the
  available step programs and includes topics for changing step programs and
  creating custom step programs. The step program format for custom programs is
  included.
• Working with step messages on page 520 describes step messages and
  includes topics for adding and removing them.
• Additional programming support for procedures and steps on page 522 identifies
  the available commands for retrieving information and commands with outfile
  support for procedures and steps.
Procedure components and concepts
Each procedure is associated with an application group. When an application group is
created, a set of default procedures for that application group is also created to
provide the ability to start, end, perform pre-check activity for switching, and switch
the application group. All procedures created when an application group is created
are copies of the shipped default procedure for the specified type.
Each operation is performed by a procedure that consists of a sequence of steps.
Each step calls a predetermined step program to perform a specific subtask of the
larger operation. Steps also identify runtime attributes for handling before and after
the program call within the context of the procedure.
Each step program is a reusable configuration element that identifies a program
which can perform a task and attributes which determine where the program runs and
what type of work it performs. A step program can perform work on an application
group, its data resource groups, or their respective data groups. A set of shipped step
programs provide functionality for the default procedures created for application
groups.
In addition, you can copy or create your own procedures and step programs to
perform custom activity, change which procedure is the default of its type for an
application group, and change attributes of steps within a procedure.
You can also optionally create step messages. These are configuration elements that
define the error action to be taken for a specific error message identifier. A step
message provides the ability to determine the error action taken by a step based on
attributes defined in the error message identifier. Each step message is defined for an
installation so it can be used by multiple steps or by steps in multiple procedures.
Procedure types
Procedures have a type (TYPE) value which determines the operations for which the
procedure can be used. The following types are supported:
• *END - The procedure is usable with the End Application Group (ENDAG)
  command.
• *START - The procedure is usable with the Start Application Group (STRAG)
  command.
• *SWTPLAN - The procedure is usable with the Switch Application Group
  (SWTAG) command for a *PLANNED switch type.
• *SWTUNPLAN - The procedure is usable with the Switch Application Group
  (SWTAG) command for an *UNPLANNED switch type.
• *USER - The procedure is user defined.
Procedure job processing
It is important to understand how multiple jobs are used to process steps when a
procedure is invoked. A procedure uses multiple asynchronous jobs to run the
programs identified within its steps. Starting a procedure starts one job for the
application group and an additional job for each of its data resource groups. These
jobs operate independently and persist until the procedure ends. Each persistent job
evaluates each step in sequence for work to be performed within its domain. When a
job for a data resource group encounters a step that acts on data groups, it spawns an
additional job for each subordinate data group. Each spawned data group job
performs the work for that step and then ends.
Attributes of a step
A step defines attributes to be used at runtime for a specified step program in the
context of the specified procedure and application group. The following parameters
identify the attributes of a step.
Sequence number (SEQNBR) - The sequence number determines the order in which
the step will be performed.

Action before step (BEFOREACT) - This parameter identifies what action is taken
by all jobs for the procedure before starting the step. The default value *NONE
indicates that the step will begin without additional action. Users can also specify
*WAIT so that jobs wait for all asynchronous jobs to complete processing previous
steps before starting the step. The value *MSGW will cause the step to be started
only after all asynchronous jobs from previous steps complete and an operator has
responded to an inquiry message indicating the step is waiting to start. A response of
G (Go) will start processing the step; a response of C (Cancel) will cancel the
procedure.

Action on error (ERRACT) - This parameter identifies what action to take for a job
used in processing the step when the job ends in error.
• The default value *QUIT will set the status of the job that ended in error to
  *FAILED, as indicated in the expanded view of step status. The type of step
  program used by this step determines what happens to other jobs for the step and
  whether subsequent steps are prevented from starting, as follows:
  - If the step program is of type *DGDFN, jobs that are processing other data
    groups within the same data resource group continue. When they complete,
    the data resource group job ends. No subsequent steps that apply to that data
    resource group or its data groups will be started. However, subsequent steps
    will still be processed for other data resource groups and their data groups.
  - If the step program is of type *DTARSCGRP, no subsequent steps that apply to
    that data resource group or its data groups will be started. Jobs for other data
    resource groups may still be running and will process subsequent steps that
    apply to their data resource groups and data groups.
  - If the step program is of type *AGDFN, subsequent steps that apply to the
    application group will not be started. Jobs for data resource group or data
    group steps may still be running and will process subsequent steps that apply
    to their data resource groups and data groups.
• For the value *CONTINUE, the job continues processing as if the job had not
  ended in error. The status of the job in error is set to *IGNERR, as indicated in
  the expanded view of step status.
• For the value *MSGID, error processing is determined by what is specified in a
  predefined step message identifier for the installation (see Step messages); if a
  step message is not found for the error message ID, the error action defaults to
  *QUIT.
• For the value *MSGW, an inquiry message issued by the job requires a response
  before any additional processing for the job can occur. A response of R (Retry) will
  retry processing the step program within the same job. A response of C (Cancel)
  will set the job status to *CANCEL, as indicated in the expanded view of step
  status, and any other jobs and subsequent steps are handled in the same manner
  described for the value *QUIT. A response of I (Ignore) will set the job status to
  *IGNERR, as indicated in the expanded view of step status, and processing
  continues as if the job had not ended in error.

State (STATE) - The state determines whether the step runs when the procedure is
invoked. The value *ENABLED indicates that a step is enabled to run. For user-
defined steps and optional steps, users can specify *DISABLED to prevent a step
from running. Steps shipped with a state value of *REQUIRED are always enabled
and cannot be disabled.
Operational control
Procedures of type *USER can be invoked by the Run Procedure (RUNPROC)
command. For procedures of other types, the application group command which
corresponds to the procedure type must be used to invoke the procedure. For
example, a procedure of type *START must be invoked by the Start Application Group
(STRAG) command.
Where should the procedure begin? The value specified for the Begin at step
(STEP) parameter on the request to run the procedure determines the step at which
the procedure will start. The status of the last run of the procedure determines which
values are valid.
The default value, *FIRST, will start the specified procedure at its first step. This value
can be used when the procedure has never been run, when its previous run
completed (*COMPLETED or *COMPERR), or when a user acknowledged the status
of its previous run which failed or was canceled (*ACKFAILED or *ACKCANCEL).
Other values are for resolving problems with a failed or canceled procedure. When a
procedure fails or is canceled, subsequent attempts to run the same procedure will
fail until user action is taken. You will need to determine the best course of action for
your environment based on the implications of the canceled or failed steps and any
steps which completed.
The value *RESUME will start the last run of the procedure beginning with the step at
which it failed, the step that was canceled in response to an error, or the step
following where the procedure was canceled. Only procedures with status values of
*FAILED or *CANCELED can be resumed. The value *RESUME may be appropriate
after you have investigated and resolved the problem which caused the procedure to end.
The value *OVERRIDE will acknowledge the status of the last run of a procedure that
failed or was canceled and start a new run of the procedure beginning at the first step.
Only procedures with status values of *FAILED or *CANCELED can be overridden;
the status of that run is set to *ACKFAILED or *ACKCANCELED. This value may be
appropriate after you have investigated the problem and understand the effect of the
partially performed procedure on your environment. Activity for steps that did
complete is not reversed. It is assumed that you have determined that starting the
procedure at its first step would not be detrimental to data or your environment.
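For example, a failed run of a start procedure could be resumed with a request
similar to the following sketch; the AGDFN keyword is an assumption, while the STEP
parameter is described above:

  STRAG AGDFN(SAMPLEAG) STEP(*RESUME)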
The MIMIX Operations book describes the operational level of working with
procedures and steps in detail.
Current status and run history
When a procedure is invoked for an application group, the status of that run of the
procedure is reported at the application group level and overall status roll-ups within
MIMIX. You can also view procedure status and drill down to see status of specific
steps and jobs run by a step.
Timestamps are in the local job time. If you have not already ensured that the systems
in your installation use coordinated universal time, see the topic for setting system
time.
The Work with Procedure Status display provides status of the most recent run of
each procedure as well as retained history of previously completed runs of procedures.
The Work with Step Status display provides access to detailed information about the
status of steps for a specific run of a procedure for an application group. Steps are
listed in sequence number order as defined by the steps in the procedure. The default
view collapses status to a summary record for each step. The expanded view shows
the status of each job used to process each step.
The Procedure history retention (PROCHST) policy specifies criteria for retaining
historical information about procedure runs that completed, completed with errors, or
that failed or were canceled and then acknowledged. Timestamps for the procedure
and detailed information about each step are kept for each run of a procedure. Each
run is evaluated separately and its information is retained until the policy criteria are
met. When a run exceeds the policy criteria, system manager cleanup jobs will
remove the historical information for that procedure run from all systems. The policy
values specified at the time the cleanup jobs run are used for evaluation.
The MIMIX Operations book describes the policy and status interfaces and values in
detail, including how to resolve problems with status.
Customizing user application handling for switching
After installing MIMIX and configuring an environment that uses application groups,
you will need to customize the step programs identified in Table 70 to handle ending
and starting user applications during switching. If these step programs have not been
customized or otherwise addressed, any attempt to switch using default procedures
will result in an error message (LVEE936 or LVEE938). These error messages
indicate that the identified step program has not been customized but it is called by an
enabled step within a switch procedure for the identified application group. The switch
procedure will not continue running until you have taken action.
Note: Any procedure with a step that invokes the step programs identified in Table
70 will issue the same error messages if action is not taken.
You have the following options:
Option 1. Customize the step programs so that actions to start and end user
applications are performed as part of the switch procedure. All procedures that
use the step programs will be updated. Use Customize the step programs for
user applications on page 511.
Option 2. Allow a procedure with steps that reference the step programs to run by
changing the Action on error attribute of those steps. Use Changing attributes of
a step on page 516 to change the steps, specifying *CONTINUE as the value of
the Action on error attribute. If all other steps of a changed procedure run
successfully, the procedure will end with a status of Completed with error
(*COMPERR). This option assumes that you will start or end user applications
outside of running the procedures.
Note: With this option, you must address each affected step in each procedure
for each application group separately.
Option 3. You have other processes that will end the user applications before
running the switch procedure and start them after the switch procedure
completes. For the steps referencing these step programs, either disable the
steps using Enabling or disabling a step on page 517 or remove the step using
Removing a step from a procedure on page 517.
Note: With this option, you must address each affected step in each procedure
for each application group separately.
Customize the step programs for user applications
Use this topic to customize the step programs so that actions to start and end user
applications are performed as part of the switch procedure. These instructions identify
how to create and compile a custom version of the program identified within the step
program. All procedures that use the step programs listed in Table 70 will use the
customization.

Table 70. Step programs that need customizing

ENDUSRAPP
  Customize to end user applications on the current primary node before a
  switch occurs.
  Where used: Procedures of type *SWTPLAN that use shipped default steps.
  Source code template: ENDUSRAPP in source physical file MCTEMPLSRC in
  the installation library.

STRUSRAPP
  Customize to start user applications on the new primary system following
  a switch.
  Where used: Procedures of type *SWTPLAN and *SWTUNPLAN that use shipped
  default steps.
  Source code template: STRUSRAPP in source physical file MCTEMPLSRC in
  the installation library.
Do the following:
1. Copy the source code template for the step program from the location indicated in
Table 70.
2. Create and compile a custom version of the program that will perform the
necessary activity for your applications. See Step program format STEP0100 on
page 519 for details.
3. Copy the compiled step program to all systems in the installation. Ensure that it
has the same name and location on all systems.
Note: To prevent having your custom program replaced when a service pack is
installed, either the name of the program object or the library where it is
located must be different than the name and location specified in the
shipped default step program.
4. From the management system, enter the command:
installation_library/WRKSTEPPGM
5. Type 2 (Change) next to the step program you want and press Enter.
6. The Change Step Program (CHGSTEPPGM) command appears. Specify the
name and library of your custom program and press Enter.
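For example, if your custom version of the end-application program was compiled as
ENDMYAPP in library MYLIB, the change might look like the following sketch; the
STEPPGM and PGM keywords are assumptions based on the prompts above:

  CHGSTEPPGM STEPPGM(ENDUSRAPP) PGM(MYLIB/ENDMYAPP)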
Working with procedures
Procedures are used to perform operations for application groups. A procedure
consists of steps, which reference separately callable programs, arranged in a
particular sequence.
The sequence of steps and their runtime attributes can be changed from the Work
with Steps display.
Accessing the Work with Procedures display
The Work with Procedures display shows a list of procedures that can be subsetted
by procedure name, application group, or procedure type. This display is used
primarily for configuring and modifying procedures. Only procedures of type *USER
can be run from this display.
Figure 32. Example of the Work with Procedures display.

                              Work with Procedures
                                                            System: SYSTEMA
  Type options, press Enter.
    1=Create  2=Change  3=Copy  4=Delete  5=Display  6=Print  7=Rename
    8=Work with steps  9=Run  13=Last started status  14=Procedure status

  Opt  Procedure   App Group  Type        Dft   Description

  __   END         SAMPLEAG   *END        *YES  END APPLICATION GROUP PROCED >
  __   ENDTGT      SAMPLEAG   *END        *NO   END APPLICATION GROUP PROCED >
  __   PRECHECK    SAMPLEAG   *USER       *NO   OPTIONAL PRECHECK FOR SWITCH
  __   START       SAMPLEAG   *START      *YES  START APPLICATION GROUP PROC >
  __   SWTPLAN     SAMPLEAG   *SWTPLAN    *YES  PLANNED APPLICATION GROUP SW>
  __   SWTUNPLAN   SAMPLEAG   *SWTUNPLAN  *YES  UNPLANNED APPLICATION GROUP >

                                                                        Bottom
  Parameters or command
  ===> ________________________________________________________________________
  F3=Exit   F4=Prompt   F5=Refresh   F6=Create   F9=Retrieve   F12=Cancel
  F13=Repeat   F14=Procedure status   F18=Subset   F21=Print list
For detailed information about status for steps and procedures, see the MIMIX
Operations book.
Displaying the procedures for an application group
Do the following:
1. From the MIMIX Basic Main Menu, type 1 (Work with application groups) and
press Enter.
2. From the Work with Application Groups display, type 20 (Procedures) next to the
application group you want and press Enter.
The Work with Procedures display appears, listing all procedures for the selected
application group.
Displaying all procedures
Do one of the following:
• From the MIMIX Intermediate Main Menu, type 7 (Work with procedures) and
  press Enter.
• From the Work with Application Groups display, press F17 (Procedures).
• Type the following command and press Enter:
    installation_library/WRKPROC
Creating a procedure
Use these instructions to create a new procedure for an application group.
For procedures of a type other than *USER, the new procedure is a copy of the
shipped default procedure for the specified procedure type, including its steps, and
will be invoked as determined by that type. By default, the new procedure is not the
default for the application group. If you specify *YES for Default for type, the request
to create the new procedure will also change the existing default procedure so that it
is no longer the default.
For procedures of type *USER, you will need to manually add steps to reference step
programs and specify runtime attributes after the procedure is created. (If you copy a
procedure of type *USER, the steps are copied into the new procedure.) Procedures
of type *USER do not support the concept of a default for the application group.
Do the following from the management system:
1. On the Work with Procedures display, type 1 (Create) next to the blank line at the
top of the list and press Enter.
2. The Create Procedure (CRTPROC) display appears. Specify a name for the
Procedure prompt.
3. At the Application group definition prompt, specify the name of the application
group with which the procedure will be associated.
4. At the Type prompt, specify the type of operation that will invoke the procedure.
5. If you want the procedure to be the default for the specified type for the application
group, specify *YES for the Default for type prompt.
6. At the Description prompt, specify text that describes the purpose of the
procedure.
7. To create the procedure, press Enter.
8. Add or remove steps and adjust step attributes as needed using the topics within
Working with the steps of a procedure on page 515.
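As a sketch, an equivalent command request might look like the following; all
parameter keywords shown are assumptions based on the prompts in this procedure:

  CRTPROC PROC(MYSWTPLAN) AGDFN(SAMPLEAG) TYPE(*SWTPLAN) +
          DFT(*NO) TEXT('Custom planned switch procedure')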
Deleting a procedure
Use these instructions to delete a procedure for an application group, including the
runtime attributes of steps within the procedure. The step programs referenced by the
steps of the procedure are not deleted.
The procedure cannot be in use. The default procedure for an application group
cannot be deleted.
Do the following from the management system:
1. On the Work with Procedures display, type 4 (Delete) next to the procedure you
want and press Enter.
2. A confirmation display appears. To delete the procedure, press Enter.
Working with the steps of a procedure
When an application group is created, each of the resulting default procedures
created has a default set of predefined steps. You can also add steps that reference
your own custom step programs. Steps are controlled by the procedure for which they
are defined.
A step defines attributes to be used at runtime for a specified step program in the
context of the specified procedure and application group.
Displaying the steps within a procedure
The Work with Steps display shows the steps within one procedure for a specific
application group.
To access the display, do the following:
1. Display a list of procedures. See Accessing the Work with Procedures display on
page 512.
2. Type 2 (Work with steps) next to the procedure you want for an application group
and press Enter.
The Work with Steps display appears, listing all steps that have been added to the
procedure according to their sequence numbers.
Figure 33. Example of the Work with Steps display.

                                Work with Steps
                                                            System: SYSTEMA
  Procedure: SWTPLAN     App. group: SAMPLEAG     Type: *SWTPLAN

  Type options, press Enter.
    1=Add  2=Change  4=Remove  5=Display  6=Print  20=Enable  21=Disable

       Step                 Before   Error                 Step Pgm  Node
  Opt  Program     Seq.     Action   Action   State        Type      Type
  __   __________  _______
  __   MXCHKCOM    100      *NONE    *QUIT    *REQUIRED    *AGDFN    *LOCAL
  __   MXCHKCFG    200      *NONE    *QUIT    *REQUIRED    *DGDFN    *NEWPRIM
  __   ENDUSRAPP   300      *WAIT    *MSGW    *ENABLED     *AGDFN    *PRIMARY
  __   MXENDDG     400      *NONE    *QUIT    *REQUIRED    *DGDFN    *NEWPRIM
  __   MXENDRJLNK  500      *WAIT    *QUIT    *ENABLED     *DGDFN    *NEWPRIM
  __   MXAUDACT    600      *NONE    *QUIT    *ENABLED     *DGDFN    *NEWPRIM
  __   MXAUDCMPLY  700      *NONE    *QUIT    *ENABLED     *DGDFN    *NEWPRIM
  __   MXAUDDIFF   800      *NONE    *QUIT    *ENABLED     *DGDFN    *NEWPRIM
                                                                      More...
  Parameters or command
  ===> ________________________________________________________________________
  F3=Exit   F4=Prompt   F5=Refresh   F6=Add   F9=Retrieve   F14=Step programs
  F15=Step messages   F18=Subset   F21=Print list   F24=More keys

Displaying step status for the last started run of a procedure
To display the step status for the most recently started run (last run) of a
procedure, do the following:
1. Display a list of procedures. See Accessing the Work with Procedures display on
page 512.
2. From the Work with Procedures display, type 13 (Last started status) next to the
procedure and application group you want and press Enter.
For detailed information about status for steps and procedures, see the MIMIX
Operations book.
Adding a step to a procedure
Use these instructions to add a defined step program as a step within a procedure.
You can specify the sequence in which the step is performed within the procedure and
other runtime attributes for the step. The procedure to which a step is being added
cannot be active when adding a step.
A required step program can be added as a step only once within a procedure. Step
programs that are required steps for shipped default procedures of type *SWTPLAN
or *SWTUNPLAN cannot be added as steps in procedures of type *USER. For more
information about adding and customizing step programs, see Working with step
programs on page 517.
Do the following from the management system:
1. Display the existing steps of the procedure for the application group you want.
See Displaying the steps within a procedure on page 515.
2. The Work with Steps display appears. Type 1 (Add) next to the blank line at the
top of the list and press Enter.
3. The Add Step (ADDSTEP) command appears with the procedure and application
group preselected. Do the following:
a. At the Step program name prompt, specify the step program that you want this
step to run.
b. The default value *LAST for the Sequence number prompt will add the step at
the end of the procedure using a number that is 100 greater than the current
last sequence number in the procedure. If you want the step to run in a
different relative order within the procedure, specify a different value.
c. Specify the values you want for other runtime attributes in the remaining
prompts. Default values will allow asynchronous jobs to process the step
without waiting for other jobs to reach the step, and will quit if a job ends in error.
For details about the resulting behavior of other values for Action before step,
Action on error, and State, see Attributes of a step on page 508.
d. To add the step, press Enter.
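For example, a custom step could be appended to the end of a procedure with a
request similar to the following sketch. The SEQNBR, BEFOREACT, ERRACT, and STATE
keywords are documented in Attributes of a step on page 508; the PGM, PROC, and
AGDFN keywords are assumptions:

  ADDSTEP PGM(MYSTEPPGM) PROC(SWTPLAN) AGDFN(SAMPLEAG) +
          SEQNBR(*LAST) BEFOREACT(*WAIT) ERRACT(*MSGW) STATE(*ENABLED)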
Changing attributes of a step
Use these instructions to change runtime attributes for a step. The procedure cannot
be active when changing a step.
Do the following from the management system:
1. Display the existing steps of the procedure for the application group you want.
See Displaying the steps within a procedure on page 515.
Working with step programs
517
2. The Work with Steps display appears. Type 2 (Change) next to the step you want
and press Enter.
3. The Change Step (CHGSTEP) command appears. Make the changes you want.
• To change the relative order in which the step is performed, specify a different
  value for the To sequence number prompt.
• Specify the values you want for the Action before step, Action on error, and
  State prompts. For information, see Attributes of a step on page 508.
4. To change the step, press Enter.
Enabling or disabling a step
Use these instructions to enable or disable a step within a procedure. Disabling a step
prevents it from running but the step remains in the procedure. Enabling a step that
was disabled allows the step to be performed in sequence within the procedure.
Required steps do not support being enabled or disabled.
The procedure cannot be active when a step is enabled or disabled.
Do the following from the management system:
1. Display the existing steps of the procedure for the application group you want.
See Displaying the steps within a procedure on page 515.
2. The Work with Steps display appears. Do one of the following:
• To enable a step, type 20 (Enable) next to the step you want and press Enter.
• To disable a step, type 21 (Disable) next to the step you want and press Enter.
Removing a step from a procedure
Use these instructions to remove a step from a procedure. The step program
referenced by the step will remain available for use by other procedures within the
installation.
The procedure cannot be active when a step is removed.
Do the following from the management system:
1. Display the existing steps of the procedure for the application group you want.
See Displaying the steps within a procedure on page 515.
2. The Work with Steps display appears. Type 4 (Remove) next to the step you want
and press Enter.
3. A confirmation display appears. To remove the step, press Enter.
Working with step programs
Step programs are configuration elements that enable the reuse of programs that
perform unique actions by multiple procedures. Each step program identifies the
name and location of a program and attributes which identify the type of node on
Customizing procedures
518
which the program can run as well as whether the program will run at the level of the
application group, data resource group, or data group.
MIMIX ships default step programs that are used as steps within shipped procedures.
Shipped step programs cannot be changed or removed.
Accessing step programs
Do one of the following to access the Work with Step Programs display:
• From the Work with Steps display, press F14 (Step programs).
• Enter the command installation_library/WRKSTEPPGM.
The list displayed identifies step programs defined within the MIMIX installation. Both
shipped step programs and user-defined step programs are listed.
Creating a custom step program
This interface supports programs written in C, RPG, and CL.
1. Create and compile the program that will be invoked by the step program when it
is called by a procedure. Use Step program format STEP0100 on page 519.
2. Copy the compiled step program to all systems in the installation. Ensure that the
name and location are the same on all systems.
3. Do the following from the management system to add a step program to the
installation:
a. Type ADDSTEPPGM and press F4 (Prompt).
b. Specify a name for the step program.
c. Specify the name of the program object and the library in which it is located.
d. Specify the type of step program. This indicates the operational level at which
the program will run.
e. Specify the type of node on which the program will run.
f. Specify a description of the step program. This will be displayed when you view
details of a step which uses the step program.
g. To add the step program, press Enter.
Changing a step program
You can change the attributes of a step program. The changes you make will affect all
procedures with steps that invoke the step program.
Procedures whose steps reference the specified step program cannot be active when
these instructions are performed.
To change a step program, do the following:
1. Type WRKSTEPPGM and press Enter.
2. Type 2 (Change) next to the step program you want and press Enter.
3. The Change Step Program (CHGSTEPPGM) command appears. Specify values
for the attributes you want to change.
4. Press Enter.
Step program format STEP0100
You can create your own program that can be identified in and called by a procedure
by using format STEP0100.
The program should identify a specific task to be performed. The step program
identifies the program to MIMIX and specifies the type of node on which the program
can run and the type of step program. The step program type determines the
configuration level at which jobs for the procedure will process the program. When a
step is added to a procedure, attributes of the step define runtime attributes within the
context of the procedure, including the action to take when a job used to run the
program ends in error.
Note: For steps that run at data group level, the program object will be called
regardless of whether the data group state is enabled or disabled. Therefore, if
you want your program logic to be performed only for data groups in a
particular state, you must check the state at the beginning of the program.
This is a requirement to allow steps to operate on disabled data groups, which
are frequently used in environments that have three or more nodes.
Programs can be written in C, RPG, or CL. Source code templates ENDUSRAPP and
STRUSRAPP, in source physical file MCTEMPLSRC in the installation library, can be
used as the basis for any custom step program; however, avoid using these names
for your own program.
A step program is called with the following parameters.
Application Group Name
INPUT; CHAR (10)
The name that identifies the application group definition.
Resource Group Name
INPUT; CHAR (10)
The name that identifies the resource group. If the resource group is not applicable,
this parameter contains all blanks.
Data Group Name
INPUT; CHAR (26)
The name that identifies the data group definition (name system1 system2). If the
data group is not applicable, this parameter contains all blanks.
Data Group Source
INPUT; CHAR (1)
The value that identifies the data source as configured in the data group definition.
1 System1 is the source for the data group.
2 System2 is the source for the data group.
New Primary Node
INPUT; CHAR (8)
The name that identifies the node that becomes the new primary node during a switch
operation. This is the system to which production is being switched. If used in a
procedure of a type other than *SWTPLAN or *SWTUNPLAN, the node name is the
primary node.
Old Primary Node
INPUT; CHAR (8)
The name that identifies the node that is the old primary node during a switch
operation. This is the system from which production is being switched.
Current Node
INPUT; CHAR (8)
The name that identifies the current local node. This is the node on which the step
program is running.
Returned Message Identifier
OUTPUT; CHAR (7)
The value is the message identifier returned for an error message. If there is no error,
the step program should return all blanks.
Returned Message Data Length
OUTPUT; DECIMAL (5, 0)
The value identifies the length of the message data being returned. Possible values
are:
0 No message data is returned.
value Identifies the length of the data returned in the Returned Message Data
parameter.
Returned Message Data
OUTPUT; CHAR (900)
The text returned as message data for the returned message ID.
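The following is a minimal CL sketch of a step program that accepts the STEP0100
parameter list described above. The program name, variable names, and placeholder
logic are hypothetical; replace the commented placeholder with the task your step
performs.

PGM PARM(&AGNAME &RGNAME &DGNAME &DGSRC +
  &NEWPRI &OLDPRI &CURNODE &RTNMSGID +
  &RTNMSGLEN &RTNMSGDTA)

DCL VAR(&AGNAME) TYPE(*CHAR) LEN(10)    /* Application group name */
DCL VAR(&RGNAME) TYPE(*CHAR) LEN(10)    /* Resource group name */
DCL VAR(&DGNAME) TYPE(*CHAR) LEN(26)    /* Data group name */
DCL VAR(&DGSRC) TYPE(*CHAR) LEN(1)      /* Data group source (1 or 2) */
DCL VAR(&NEWPRI) TYPE(*CHAR) LEN(8)     /* New primary node */
DCL VAR(&OLDPRI) TYPE(*CHAR) LEN(8)     /* Old primary node */
DCL VAR(&CURNODE) TYPE(*CHAR) LEN(8)    /* Current node */
DCL VAR(&RTNMSGID) TYPE(*CHAR) LEN(7)   /* Returned message ID */
DCL VAR(&RTNMSGLEN) TYPE(*DEC) LEN(5 0) /* Returned message data length */
DCL VAR(&RTNMSGDTA) TYPE(*CHAR) LEN(900) /* Returned message data */

/* Indicate success unless an error is detected: blank message ID, */
/* no message data.                                                */
CHGVAR VAR(&RTNMSGID) VALUE(' ')
CHGVAR VAR(&RTNMSGLEN) VALUE(0)
CHGVAR VAR(&RTNMSGDTA) VALUE(' ')

/* For a step that runs at data group level, check the data group  */
/* state here and return immediately if it is not the state your   */
/* logic requires (see the note above).                            */

/* ... perform the task for this step ...                          */

ENDPGM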
Working with step messages
A step message is an optional, user-created configuration element that defines the
error action to be taken for a specific error message identifier. When a step fails, a
matching step message determines the error action the step takes. Each step
message is defined for an installation but can be used by multiple steps or by steps
in multiple procedures.
When a step with a specified error action of *MSGID fails, MIMIX checks the
installation for a defined step message that matches the step's error message ID. If a
matching step message exists, it determines the error action used for the step. If no
step message is found for the error, processing quits without running any
subsequent steps in the procedure.
Note: Any step with an error action of *MSGID that can encounter the specified
message ID will take the error action specified in the step message.
No step messages are shipped with MIMIX.
Accessing the Work with Step Messages display
Do one of the following to access the Work with Step Messages display:
From the Work with Steps display, press F15 (Step messages).
Enter the command installation_library/WRKSTEPMSG
The list displayed identifies step messages defined within the MIMIX installation. The
messages are listed in alphabetical order.
Adding or changing a step message
Step messages can only be added or changed from the management system.
From the management system, do the following:
1. Access the Work with Step Messages display.
2. Do one of the following and press Enter:
To add a message, type 1 (Add) next to the blank line at the top of the list.
To change a message, type 2 (Change) next to the message you want.
3. If you are adding a message, specify the Message identifier.
4. Specify a value for the Action on error. Press F1 (Help) to see details for possible
options.
5. Specify a description of the message.
6. Press Enter.
The added or changed step message is effective immediately. Any step within the
installation which specifies an error action of *MSGID will use the step message if the
step ends in error with the indicated message ID.
Removing a step message
Step messages can only be removed from the management system.
From the management system, do the following:
1. Access the Work with Step Messages display.
2. Type 4 (Remove) next to the message you want and press Enter.
3. A confirmation display appears. Press Enter.
The change is effective immediately and can affect the behavior of procedures in the
installation. After a step message is removed, any steps that could potentially use the
step message error action will no longer have the error action available and
processing of the procedure will quit if the error message identifier is encountered.
Additional programming support for procedures and
steps
The following additional capabilities facilitate programming in environments that use
procedures and steps (a brief example follows the list):
The Open MIMIX List (OPNMMXLST) command supports procedures and steps.
The Type of request (TYPE) parameter includes values for *PROC, *STEP,
*STEPPGM, and *STEPMSG. The parameters Procedure (PROC) and
Application group definition (AGDFN) qualify the type *STEP.
The following retrieve commands are available:
Retrieve Procedure (RTVPROC)
Retrieve Step (RTVSTEP)
Retrieve Step Message (RTVSTEPMSG)
Retrieve Step Program (RTVSTEPPGM)
Outfile support is available for the following commands:
Work with Procedure (WRKPROC)
Work with Procedure Status (WRKPROCSTS)
Work with Steps (WRKSTEP)
Work with Step Messages (WRKSTEPMSG)
Work with Step Programs (WRKSTEPPGM)
Work with Step Status (WRKSTEPSTS)
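For example, the following sketch opens a list of the steps defined within one
procedure. The procedure and application group names are hypothetical, and any
additional parameters the command may require are left at their defaults:

installation_library/OPNMMXLST TYPE(*STEP) PROC(MYPROC) AGDFN(MYAPP)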
CHAPTER 23 Customizing with exit point
programs
The MIMIX family of products provides a variety of exit points to enable you to extend
and customize your operations.
The topics in this chapter include:
Summary of exit points on page 523 provides tables that summarize the exit
points available for use.
Working with journal receiver management user exit points on page 526
describes how to use user exit points safely.
Summary of exit points
The following tables summarize the exit points available for use.
MIMIX user exit points
MIMIX provides the exit points identified in Table 71 for journal receiver
management. For additional information, see Working with journal receiver
management user exit points on page 526.
MIMIX also supports a generic interface to existing database and object replication
process exit points that provides enhanced filtering capability on the source system.
This generic user exit capability is only available through a Certified MIMIX
Consultant.
MIMIX Monitor user exit points
Table 72 identifies the user exit points available in MIMIX Monitor. You can use the
exit points through programs controlled by a monitor. Monitors can be set up to
operate with other products, including MIMIX. You can also use the MIMIX Monitor
User Access API (MMUSRACCS) for all interfaces to MIMIX Monitor.
MIMIX Monitor also contains the MIMIX Model Switch Framework. This support
provides powerful customization opportunities through a set of programs and
Table 71. MIMIX exit points for journal receiver management
Type                                      Exit Point Name
Journal receiver management exit points   Receiver change management pre-change
                                          Receiver change management post-change
                                          Receiver delete management pre-check
                                          Receiver delete management pre-delete
                                          Receiver delete management post-delete
commands that are designed to provide a consistent switch framework for you to use
in your switching environment.
The Using MIMIX Monitor book documents the user exit points, the API, and MIMIX
Model Switch Framework.
MIMIX Promoter user exit points
Table 73 identifies the exit points within MIMIX Promoter. If you perform concurrent
operations between MIMIX Promoter and MIMIX, you might consider using these exit
points within automation.
Table 72. MIMIX Monitor exit points
Type Exit Point Name
Interface exit points Pre-create
Post-create
Pre-change
Post-change
Pre-copy
Post-copy
Pre-delete
Post-delete
Pre-display
Post-display
Pre-print
Post-print
Pre-rename
Post-rename
Pre-start
Post-start
Pre-end
Post-end
Pre-work with information
Post-work with information
Pre-hold
Post-hold
Pre-release
Post-release
Pre-status
Post-status
Pre-change status
Post-change status
Pre-run
Post-run
Pre-export
Post-export
Pre-import
Post-import
Condition program exit point After pre-defined condition check
Event program exit point After condition check (pre-defined and user-defined)
Table 73. MIMIX Promoter exit points
Type Exit Point Name
Control exit points
(The control exit service program supports these exit
points.)
Transfer complete
Lock failure
After lock
Copy failure
Copy finalize
After temporary journal delete
Data exit points
(The data exit service program supports these exit
points.)
Data initialize
Data transfer
Data finalize
Requesting customized user exit programs
If you need a specialized user exit program designed for your applications, contact us
at mimixsupport@visionsolutions.com or through the online tools at
www.visionsolutions.com/support. Our personnel will ask about your requirements
and design a customized program to work with your applications.
Working with journal receiver management user exit
points
User exit points in critical processing areas enable you to incorporate specialized
processing with MIMIX to extend function to meet additional needs for your
environment. Access to user exit processing is provided through the use of an exit
program that can be written in any language supported by IBM i.
Since user exit programming allows for user code to be run within MIMIX processes,
great care must be exercised to prevent the user code from interfering with the proper
operation of MIMIX. For example, a user exit program that inadvertently causes an
entry to be discarded that is needed by MIMIX could result in a file not being available
in case of a switch. Use caution in designing a configuration for use with user exit
programming. You can safely use user exit processing with proper design,
programming, and testing. Services are also available to help customers implement
specialized solutions.
Journal receiver management exit points
MIMIX includes support that allows user exit programming in the journal receiver
change management and journal receiver delete management processes. With this
support, you can customize change management and delete management of journal
receivers according to the needs of your environment.
Journal receiver management exit points are enabled when you specify an exit
program to use in a journal definition.
Change management exit points
MIMIX can change journal receivers when a specified time is reached, when the
receiver reaches a specified size, or when the sequence number reaches a specified
threshold. You specify these values when you create a journal definition. MIMIX also
changes the journal receiver at other times, such as during a switch and when a user
requests a change with the Change Data Group Receiver (CHGDGRCV) command.
The following user exit points are available for customizing change management
processing:
Receiver Change Management Pre-Change User Exit Point. This exit point is
located immediately before the point in processing where MIMIX changes a
journal receiver. Either the user forced a journal receiver change (CHGDGRCV
command) or MIMIX processing determined that the journal receiver needs to
change. The return code from the exit program can prevent MIMIX from changing
the journal receiver, which can be useful when the exit program changes the
receiver.
Receiver Change Management Post-Change User Exit Point. This exit point is
located immediately after the point in processing where MIMIX changes a journal
receiver. MIMIX ignores the return code from the exit program. This exit point is
useful for processing that does not affect MIMIX processing, such as saving the
journal receiver to media. (The example program in Table 74 on page 530 shows
how you can determine the name of the previously attached journal by retrieving
the name of the first entry in the currently attached journal receiver.)
Restrictions for Change Management Exit Points: The following restrictions apply
when the exit program is called from either of the change management exit points:
Do not include the Change Data Group Receiver (CHGDGRCV) command in your
exit program.
Do not submit batch jobs for journal receiver change or delete management from
the exit program. Submitting a batch job would allow the in-line exit point
processing to continue and potentially return to normal MIMIX journal
management processing, thereby conflicting with journal manager operations. By
not submitting journal receiver change management to a batch job, you prevent a
potential problem where the journal receiver is locked when it is accessed by a
batch program.
Delete management exit points
MIMIX can delete journal receivers when the send process has completed processing
the journal receiver and other configurable conditions are met. When you create a
journal definition you specify whether unsaved journal receivers can be deleted, the
number of receivers that must be retained, and how many days to retain the
receivers.
The following user exit points are available for customizing delete management
processing:
Receiver Delete Management Pre-Check User Exit Point. This exit point is
located before MIMIX determines whether to delete a journal receiver. When
called at this exit point, actions specified in a user exit program can affect
conditions that MIMIX processing checks before the pre-delete exit point. For
example, an exit program that saves the journal receiver may make the journal
receiver eligible for deletion by MIMIX processing. The return code from the exit
program can prevent MIMIX from deleting the journal receiver and any other
journal receiver in the chain.
Receiver Delete Management Pre-Delete User Exit Point. This exit point is
located immediately before the point in processing where MIMIX deletes a journal
receiver. MIMIX processing determined that the journal receiver is eligible for
deletion. The return code from the exit program can prevent MIMIX from deleting
the journal receiver, which is useful when the receiver is being used by another
application.
Receiver Delete Management Post-Delete User Exit Point. This exit point is
immediately after the point in processing where MIMIX deletes a journal receiver.
The return code from the exit program can prevent MIMIX from deleting any other
(newer) journal receivers attached to the journal.
Requirements for journal receiver management exit programs
This exit program allows you to include specialized processing in your MIMIX
environment at points that handle journal receiver management. The exit program
runs with the authority of the user profile that owns the exit program. If your exit
program fails and signals an exception to MIMIX, MIMIX processing continues as if
the exit program was not specified.
Attention: It is possible to cause long, undesirable delays in MIMIX processing
when you use this exit program. When the exit program is called, MIMIX passes
control to the exit program. MIMIX will not continue change management or
delete management processing until the exit program returns. Consider placing
long running processes that will not affect journal management in a batch job
that is called by the exit program.
Return Code
OUTPUT; CHAR (1)
This value indicates how to continue processing the journal receiver when the exit
program returns control to the MIMIX process. This parameter must be set. When the
exit program is called from Function C2, the value of the return code is ignored.
Possible values are:
0 Do not continue with MIMIX journal management processing for this journal
receiver.
1 Continue with MIMIX journal management processing.
Function
INPUT; CHAR (2)
The exit point from which this exit program is called. Possible values are:
C1 Pre-change exit point for receiver change management.
C2 Post-change exit point for receiver change management.
D0 Pre-check exit point for receiver delete management.
D1 Pre-delete exit point for receiver delete management.
D2 Post-delete exit point for receiver delete management.
Note: Restrictions for exit programs called from the C1 and C2 exit points are
described within topic Change management exit points on page 526.
Journal Definition
INPUT; CHAR (10)
The name that identifies the journal definition.
System
INPUT; CHAR (8)
The name of the system defined to MIMIX on which the journal is defined.
Reserved1
INPUT; CHAR (10)
This field is reserved and contains blank characters.
Journal Name
INPUT; CHAR (10)
The name of the journal that MIMIX is processing.
Journal Library
INPUT; CHAR (10)
The name of the library in which the journal is located.
Receiver Name
INPUT; CHAR (10)
The name of the journal receiver associated with the specified journal. This is the
journal receiver on which journal management functions will operate. For receiver
change management functions, this always refers to the currently attached journal
receiver. For receiver delete management functions, this always refers to the same
journal receiver.
Receiver Library
INPUT; CHAR (10)
The library in which the journal receiver is located.
Sequence Option
INPUT; CHAR (6)
The value of the Sequence option (SEQOPT) parameter on the CHGJRN command
that MIMIX processing would have used to change the journal receiver. It is
recommended that you specify this parameter to prevent synchronization problems if
you change the journal receiver. This parameter is only used when the exit program
is called at the C1 (pre-change) exit point. Possible values are:
*CONT The journal sequence number of the next journal entry created is 1 greater
than the sequence number of the last journal entry in the currently attached journal
receiver.
*RESET The journal sequence number of the first journal entry in the newly attached
journal receiver is reset to 1. The exit program should either reset the sequence
number or set the return code to 0 to allow MIMIX to change the journal receiver
and reset the sequence number.
Threshold Value
INPUT; DECIMAL (15, 5)
The value to use for the THRESHOLD parameter on the CRTJRNRCV command.
This parameter is only used when the exit program is called at the C1 (pre-change)
exit point. Possible values are:
0 Do not change the threshold value. The exit program must not change the
threshold size for the journal receiver.
value The exit program must create a journal receiver with this threshold value,
specified in kilobytes. The exit program must also change the journal to use that
receiver, or send a return code value of 0 so that MIMIX processing can change the
journal receiver.
Reserved2
INPUT; CHAR (1)
This field is reserved and contains blank characters.
Reserved3
INPUT; CHAR (1)
This field is reserved and contains blank characters.
Journal receiver management exit program example
The following example shows how an exit program can customize changing and
deleting journal receivers. This exit program only processes journal receivers when it
is called at the pre-change exit point (C1), the post-change exit point (C2), or the
pre-check exit point (D0).
When called at the pre-change exit point, the sample exit program handles changing
any journal receiver in library MYLIB. For any other journal library, MIMIX handles
change management processing.
When called at the post-change exit point, the exit program saves the recently
detached journal receiver if the journal is in library ABCLIB. (The recently detached
journal receiver was the attached receiver at the pre-change exit point.)
When called at the pre-check exit point, if the journal library is TEAMLIB, the exit
program saves the journal receiver to tape and allows MIMIX receiver delete
management to continue processing.
Table 74. Sample journal receiver management exit program

/*------------------------------------------------------------------*/
/* Program....: DMJREXIT                                            */
/* Description: Example user exit program using CL                  */
/*------------------------------------------------------------------*/

PGM PARM(&RETURN &FUNCTION &JRNDEF &SYSTEM +
  &RESERVED1 &JRNNAME &JRNLIB &RCVNAME +
  &RCVLIB &SEQOPT &THRESHOLD &RESERVED2 +
  &RESERVED3)

DCL VAR(&RETURN) TYPE(*CHAR) LEN(1)
DCL VAR(&FUNCTION) TYPE(*CHAR) LEN(2)
DCL VAR(&JRNDEF) TYPE(*CHAR) LEN(10)
DCL VAR(&SYSTEM) TYPE(*CHAR) LEN(8)
DCL VAR(&RESERVED1) TYPE(*CHAR) LEN(10)
DCL VAR(&JRNNAME) TYPE(*CHAR) LEN(10)
DCL VAR(&JRNLIB) TYPE(*CHAR) LEN(10)
DCL VAR(&RCVNAME) TYPE(*CHAR) LEN(10)
DCL VAR(&RCVLIB) TYPE(*CHAR) LEN(10)
DCL VAR(&SEQOPT) TYPE(*CHAR) LEN(6)
DCL VAR(&THRESHOLD) TYPE(*DEC) LEN(15 5)
DCL VAR(&RESERVED2) TYPE(*CHAR) LEN(1)
DCL VAR(&RESERVED3) TYPE(*CHAR) LEN(1)

/*------------------------------------------------------------------*/
/* Constants and misc. variables                                    */
/*------------------------------------------------------------------*/
DCL VAR(&STOP) TYPE(*CHAR) LEN(1) VALUE('0')
DCL VAR(&CONTINUE) TYPE(*CHAR) LEN(1) VALUE('1')
DCL VAR(&PRECHG) TYPE(*CHAR) LEN(2) VALUE('C1')
DCL VAR(&POSTCHG) TYPE(*CHAR) LEN(2) VALUE('C2')
DCL VAR(&PRECHK) TYPE(*CHAR) LEN(2) VALUE('D0')
DCL VAR(&PREDLT) TYPE(*CHAR) LEN(2) VALUE('D1')
DCL VAR(&POSTDLT) TYPE(*CHAR) LEN(2) VALUE('D2')
DCL VAR(&RTNJRNE) TYPE(*CHAR) LEN(165)
DCL VAR(&PRVRCV) TYPE(*CHAR) LEN(10)
DCL VAR(&PRVRLIB) TYPE(*CHAR) LEN(10)

/*------------------------------------------------------------------*/
/* MAIN                                                             */
/*------------------------------------------------------------------*/
CHGVAR &RETURN &CONTINUE /* Continue processing receiver */

/*------------------------------------------------------------------*/
/* Handle processing for the pre-change exit point.                 */
/*------------------------------------------------------------------*/
IF (&FUNCTION *EQ &PRECHG) THEN(DO)
/*------------------------------------------------------------------*/
/* If the journal library is my library (MYLIB), the exit program   */
/* will do the changing of the receivers.                           */
/*------------------------------------------------------------------*/
  IF (&JRNLIB *EQ 'MYLIB') THEN(DO)
    IF (&THRESHOLD *GT 0) THEN(DO)
      CRTJRNRCV JRNRCV(&RCVLIB/NEWRCV0000) +
        THRESHOLD(&THRESHOLD)
      CHGJRN JRN(&JRNLIB/&JRNNAME) +
        JRNRCV(&RCVLIB/NEWRCV0000) SEQOPT(&SEQOPT)
    ENDDO /* There has been a threshold change */
    ELSE (CHGJRN JRN(&JRNLIB/&JRNNAME) JRNRCV(*GEN) +
      SEQOPT(&SEQOPT)) /* No threshold change */
    CHGVAR &RETURN &STOP /* Stop processing entry */
  ENDDO /* &JRNLIB is MYLIB */
ENDDO /* &FUNCTION *EQ &PRECHG */

/*------------------------------------------------------------------*/
/* At the post-change user exit point, if the journal library is    */
/* ABCLIB, save the just-detached journal receiver.                 */
/*------------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &POSTCHG) THEN(DO)
  IF COND(&JRNLIB *EQ 'ABCLIB') THEN(DO)
    RTVJRNE JRN(&JRNLIB/&JRNNAME) +
      RCVRNG(&RCVLIB/&RCVNAME) FROMENTLRG(*FIRST) +
      RTNJRNE(&RTNJRNE)
    /*--------------------------------------------------------------*/
    /* Retrieve the journal entry, extract the previous receiver    */
    /* name and library to do the save with.                        */
    /*--------------------------------------------------------------*/
    CHGVAR &PRVRCV (%SUBSTRING(&RTNJRNE 126 10))
    CHGVAR &PRVRLIB (%SUBSTRING(&RTNJRNE 136 10))
    SAVOBJ OBJ(&PRVRCV) LIB(&PRVRLIB) DEV(TAP02) +
      OBJTYPE(*JRNRCV) /* Save detached receiver */
  ENDDO /* &JRNLIB is ABCLIB */
ENDDO /* &FUNCTION is &POSTCHG */

/*------------------------------------------------------------------*/
/* Handle processing for the pre-check exit point.                  */
/*------------------------------------------------------------------*/
ELSE IF (&FUNCTION *EQ &PRECHK) THEN(DO)
  IF (&JRNLIB *EQ 'TEAMLIB') THEN(+
    SAVOBJ OBJ(&RCVNAME) LIB(&RCVLIB) DEV(TAP01) +
      OBJTYPE(*JRNRCV))
ENDDO /* &FUNCTION is &PRECHK */

ENDPGM
APPENDIX A Supported object types for system
journal replication
This list identifies IBM i object types and indicates whether MIMIX can replicate them
through the system journal.
Note: Not all object types exist in all releases of IBM i.
Object Type   Description                                        Replicated
(Numbers in parentheses refer to the notes following the table.)
*ALRTBL       Alert table                                        Yes
*AUTL         Authorization list                                 Yes
*BLKSF        Block special file                                 No
*BNDDIR       Binding directory                                  Yes
*CFGL         Configuration list                                 No (6)
*CHTFMT       Chart format                                       No (9)
*CLD          C locale description                               Yes
*CLS          Class                                              Yes
*CMD          Command                                            Yes
*CNNL         Connection list                                    Yes
*COSD         Class-of-service description                       Yes
*CRG          Cluster resource group                             No (9)
*CRQD         Change request description                         Yes
*CSI          Communications side information                    Yes
*CTLD         Controller description                             Yes (1)
*DDIR         Distributed file directory                         No (2)
*DEVD         Device description                                 Yes (1, 12)
*DEVNWSH      Device network server host adapter                 Yes
*DIR          Directory                                          Yes (2)
*DOC          Document                                           Yes
*DSTMF        Distributed stream file                            No (2)
*DTAARA       Data area                                          Yes
*DTADCT       Data dictionary                                    No
*DTAQ         Data queue                                         Yes
*EDTD         Edit description                                   Yes
*EXITRG       Exit registration                                  Yes
*FCT          Forms control table                                Yes
*FILE         File                                               Yes (3)
*FLR          Folder                                             Yes
*FNTRSC       Font resource                                      Yes
*FNTTBL       Font mapping table                                 No (9)
*FORMDF       Form definition                                    Yes
*FTR          Filter                                             Yes
*GSS          Graphics symbol set                                Yes
*IGCDCT       Double-byte character set conversion dictionary    No (9)
*IGCSRT       Double-byte character set sort table               No (9)
*IGCTBL       Double-byte character set font table               No (9)
*IPXD         Internetwork packet exchange description           Yes
*JOBD         Job description                                    Yes
*JOBQ         Job queue                                          Yes (4)
*JOBSCD       Job schedule                                       Yes
*JRN          Journal                                            No (7)
*JRNRCV       Journal receiver                                   No (7)
*LIB          Library                                            Yes (4)
*LIND         Line description                                   Yes (1)
*LOCALE       Locale space                                       Yes
*M36          AS/400 Advanced 36 machine                         No (8)
*M36CFG       AS/400 Advanced 36 machine configuration           No (8)
*MEDDFN       Media definition                                   Yes
*MENU         Menu                                               Yes
*MGTCOL       Management collection                              Yes
*MODD         Mode description                                   Yes
*MODULE       Module                                             Yes
*MSGF         Message file                                       Yes
*MSGQ         Message queue                                      Yes (4)
*NODGRP       Node group                                         No (9)
*NODL         Node list                                          Yes
*NTBD         NetBIOS description                                Yes
*NWID         Network interface description                      Yes (1)
*NWSD         Network server description                         Yes
*OOPOOL       Persistent pool (for OO objects)                   No
*OUTQ         Output queue                                       Yes (4, 5)
*OVL          Overlay                                            Yes
*PAGDFN       Page definition                                    Yes
*PAGSEG       Page segment                                       Yes
*PDFMAP       PDF Map                                            Yes
*PDG          Print descriptor group                             Yes
*PGM          Program                                            Yes (11)
*PNLGRP       Panel group                                        Yes
*PRDAVL       Product availability                               No (6)
*PRDDFN       Product definition                                 No (6)
*PRDLOD       Product load                                       No (6)
*PSFCFG       Print Services Facility (PSF) configuration        Yes
*QMFORM       Query management form                              Yes
*QMQRY        Query management query                             Yes
*QRYDFN       Query definition                                   Yes
*RCT          Reference code translate table                     No (9)
*S36          System/36 machine description                      No (9)
*SBSD         Subsystem description                              Yes
*SCHIDX       Search index                                       Yes
*SOCKET       Local socket                                       No
*SOMOBJ       System Object Model (SOM) object                   No
*SPADCT       Spelling aid dictionary                            Yes
*SPLF         Spool file                                         Yes
*SQLPKG       Structured query language package                  Yes
*SQLUDT       User-defined SQL type                              Yes
*SRVPGM       Service program                                    Yes
*SSND         Session description                                Yes
*STMF         Bytestream file                                    Yes (2)
*SVRSTG       Server storage space                               No (8)
*SYMLNK       Symbolic link                                      Yes (2)
*TBL          Table                                              Yes
*USRIDX       User index                                         Yes
*USRPRF       User profile                                       Yes (13)
*USRQ         User queue                                         Yes (4)
*USRSPC       User space                                         Yes (10)
*VLDL         Validation list                                    Yes
*WSCST        Workstation customizing object                     Yes
Notes:
1. Replicating configuration objects to a previous version of IBM i may cause unpredictable
results.
2. Objects in QDLS, QSYS.LIB, QFileSvr.400, QLANSrv, QOPT, QNetWare, QNTC, QSR,
and QFPNWSSTG file systems are not currently supported via Data Group IFS Entries.
Objects in QSYS.LIB and QDLS are supported via Data Group Object Entries and Data
Group DLO Entries. Excludes stream files associated with a server storage space.
3. File attribute types include: DDMF, DSPF, DSPF36, DSPF38, ICFF, LF, LF38, MXDF38,
PF-DTA, PF-SRC, PF38-DTA, PF38-SRC, PRTF, PRTF38, and SAVF.
4. Content is not replicated.
5. Spooled files are replicated separately from the output queue.
6. These objects are system specific. Duplicating them could cause unpredictable results on
the target system.
7. Duplicating these objects can potentially cause problems on the target system.
8. These objects are not duplicated due to size and IBM recommendation.
9. These object types can be supported by MIMIX for replication through the system journal,
but are not currently included. Contact CustomerCare if you need support for these object
types.
10. Changes made through external interfaces such as APIs and commands are replicated.
Direct update of the content through a pointer is not supported.
11. To replicate *PGM objects to an earlier release of IBM i you must be able to save them to
that earlier release of IBM i.
12. Device description attributes include: APPC, ASC, ASP, BSC, CRP, DKT, DSPLCL,
DSPRMT, DSPVRT, FNC, HOST, INTR, MLB, NET, OPT, PRTLAN, PRTLCL, PRTRMT,
PRTVRT, RTL, SNPTUP, SNPTDN, SNUF, and TAP.
13. The MIMIX-supplied user profiles MIMIXOWN and LAKEVIEW, as well as IBM supplied
user profiles, should not be replicated.
APPENDIX B Copying configurations
This section provides information about how you can copy configuration data from an
existing installation library into a new MIMIX installation library.
Supported scenarios on page 536 identifies the scenarios supported in version 7
of MIMIX.
Checklist: copy configuration on page 537 directs you through the correct order
of steps for copying a configuration and completing the configuration.
Copying configuration procedure on page 541 documents how to use the Copy
Configuration Data (CPYCFGDTA) command.
Supported scenarios
The Copy Configuration Data (CPYCFGDTA) command supports copying
configuration data from one library to another library on the same system. After MIMIX
is installed, you can use the CPYCFGDTA command.
The supported scenarios are as follows:
Table 75. Supported scenarios for copying configuration
From               To
MIMIX version 7    MIMIX version 7 (1)
MIMIX version 6    MIMIX version 7
1. The installation you are copying to must be at the same or a higher level service
pack.
Checklist: copy configuration
Use this checklist when you have installed MIMIX in a new library and you want to
copy an existing configuration into the new library.
To configure MIMIX with configuration information copied from one or more existing
product libraries, do the following:
1. Review Supported scenarios on page 536.
2. Use the procedure Copying configuration procedure on page 541 to copy the
configuration information from one or more existing libraries.
3. Verify that the system definitions created by the CPYCFGDTA command have the
correct message queue, output queues, and job descriptions required. Be sure to
check system definitions for the management system and all of the network
systems.
4. Verify that transfer definitions created have the correct three-part name and that
the values specified for each transfer protocol are correct. For *TCP, verify the
port number. For *SNA, verify that the SNA mode is what is defined for SNA
configuration.
Note: One of the transfer definitions should be named PRIMARY if you intend to
create additional data group definitions or system definitions that will use
the default value PRIMARY for the Primary transfer definition PRITFRDFN
parameter.
5. Verify that the journal definitions created have the information you want for the
journal receiver prefix name, auxiliary storage pool, and journal receiver change
management and delete management. The default journal receiver prefix for the
user journal is generated; for the system journal, the default journal receiver prefix
is AUDRCV. If you want to use a prefix other than these defaults, you will need to
modify the journal definition using topic Changing a journal definition on
page 194.
6. If you change the names of any of the system, transfer, or journal definitions
created by the copy configuration command, ensure that you also update that
name in other locations within the configuration.
Table 76. Changing named definitions after copying a configuration
If you change this name:                 Also change the name in this location:
System definition, SYSDFN parameter      Transfer definition, TFRDFN parameter;
                                         Data group definition, DGDFN parameter
Transfer definition, TFRDFN parameter    System definition, PRITFRDFN and
                                         SECTFRDFN parameters; Data group
                                         definition, PRITFRDFN and SECTFRDFN
                                         parameters
Journal definition, JRNDFN parameter     Data group definition, JRNDFN1 and
                                         JRNDFN2 parameters
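For instance, if you rename a transfer definition, each data group definition that
references it must be updated to match. A minimal sketch, assuming the Change
Data Group Definition (CHGDGDFN) command and hypothetical definition names,
using the parameter keywords from Table 76:

CHGDGDFN DGDFN(MYDG SYS1 SYS2) PRITFRDFN(NEWPRI)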
7. Verify the data group definitions created have the correct job descriptions. Verify
that the values of parameters for job descriptions are what you want to use.
MIMIX provides default job descriptions that are tailored for their specific tasks.
Note: You may have multiple data groups created that you no longer need.
Consider whether or not you can combine information from multiple data
groups into one data group. For example, it may be simpler to have both
database files and objects for an application be controlled by one data
group.
8. Verify that the options which control data group file entries are set appropriately.
a. For data group definitions, ensure that the values for file entry options (FEOPT)
are what you want as defaults for the data group.
b. Check the file entry options specified in each data group file entry. Any file
entry options (FEOPT) specified in a data group file entry will override the
default FEOPT values specified in the data group definition. You may need to
modify individual data group file entries.
9. Check the data group entries for each data group. Ensure that all of the files and
objects that you need to replicate are represented by entries for the data group.
Be certain that you have checked the data group entries for your critical files and
objects. Use the procedures in the MIMIX Operations book to verify your
configuration.
10. Check how the apply sessions are mapped for data group file entries. You may
need to adjust the apply sessions.
11. Use Table 77 to create entries for any additional database files or objects that you
need to add to the data group.
Table 77. How to configure data group entries for the preferred configuration
Class: Library-based objects
Do the following:
1. Create object entries using Creating data group object entries on page 242.
2. After creating object entries, load file entries for LF and PF (source and data)
*FILE objects using Loading file entries from a data group's object entries on
page 247.
Note: If you cannot use MIMIX Dynamic Apply for logical files or PF data files,
you should still create file entries for PF source files to ensure that legacy
cooperative processing can be used.
3. After creating object entries, load object tracking entries for *DTAARA and
*DTAQ objects that are journaled to a user journal. Use Loading object tracking
entries on page 258.
Planning and requirements information: Identifying library-based objects for
replication on page 91; Identifying logical and physical files for replication on
page 96; Identifying data areas and data queues for replication on page 103.
Class: IFS objects
Do the following:
1. Create IFS entries using Creating data group IFS entries on page 255.
2. After creating IFS entries, load IFS tracking entries for IFS objects that are
journaled to a user journal. Use Loading IFS tracking entries on page 257.
Planning and requirements information: Identifying IFS objects for replication on
page 106.
Class: DLOs
Do the following: Create DLO entries using Creating data group DLO entries on
page 259.
Planning and requirements information: Identifying DLOs for replication on
page 111.
12. Do the following to confirm and automatically correct any problems found in file
entries associated with data group object entries:
a. From the management system, temporarily change the Action for running
audits policy using the following command: SETMMXPCY DGDFN(name
system1 system2) RULE(*NONE) RUNAUDIT(*CMPRPR)
b. From the source system, type WRKAUD RULE(#DGFE) and press Enter.
c. Next to the data group you want to confirm, type 9 (Run rule) and press F4
(Prompt).
d. On the Run Rule (RUNRULE) display specify *NO for the Use run rule on
system policy prompt. Then press Enter.
e. Check the audit status for a value of *NODIFF or *AUTORCVD. If the audit
results in any other status, resolve the problem. For additional information, see
Resolving audit problems on page 569 and Interpreting results for
configuration data - #DGFE audit on page 572.
f. From the management system, set the Action for running audits policy to its
previous value. (The default value is *INST.) Use the command: SETMMXPCY
DGDFN(name system1 system2) RULE(*NONE) RUNAUDIT(*INST)
13. Ensure that object auditing values are set for the objects identified by the
configuration before synchronizing data between systems. Use the procedure
Setting data group auditing values manually on page 270. Doing this now
ensures that objects to be replicated have the object auditing values necessary for
replication and that any transactions which occur between configuration and
starting replication processes can be replicated.
14. Verify that system-level communications are configured correctly.
a. If you are using SNA as a transfer protocol, verify that the MIMIX mode and
that the communications entries are added to the MIMIXSBS subsystem.
b. If you are using TCP as a transfer protocol, verify that the MIMIX TCP server is
started on each system (on each "side" of the transfer definition). You can use
the WRKACTJOB command for this. Look for a job under the MIMIXSBS
subsystem with a function of LV-SERVER.
c. Use the Verify Communications Link (VFYCMNLNK) command to ensure that
a MIMIX installation on one system can communicate with a MIMIX installation
on another system. Refer to topic Verifying the communications link for a data
group on page 176.
15. Ensure that no users are on the system that will be the source for replication
for the rest of this procedure. Do not allow users onto the source system until you
have successfully completed the last step of this procedure.
16. Start journaling using the following procedures as needed for your configuration.
Note: If the objects do not yet exist on the target system, be sure to specify *SRC
for the Start journaling on system (JRNSYS) parameter in the commands
to start journaling.
For user journal replication, use Journaling for physical files on page 305 to
start journaling on both source and target systems.
For IFS objects configured for user journal replication, use Journaling for IFS
objects on page 308.
For data areas or data queues configured for user journal replication, use
Journaling for data areas and data queues on page 311.
17. Synchronize the database files and objects on the systems between which
replication occurs. Topic Performing the initial synchronization on page 454
includes instructions for how to establish a synchronization point and identifies the
options available for synchronizing.
18. Start the system managers using topic Starting the system and journal
managers on page 269.
19. Start the data group using Starting data groups for the first time on page 282.
Copying configuration procedure
This procedure addresses only some of the tasks needed to complete your
configuration. Use this procedure only when directed from the Checklist: copy
configuration on page 537.
Note: By default, the CPYCFGDTA command replaces all MIMIX configuration data
in the current product library with the information from the specified library.
Any configuration created in the product library will be replaced with data from
the specified library. This may not be desirable.
To copy existing configuration data to the new MIMIX product, do the following:
1. The products in the installation library that will receive the copied configuration
data must be shut down for the duration of this procedure. Use topic Choices
when ending replication in the MIMIX Operations book to end activity for the
appropriate products.
2. Sign on to the system with the security officer (QSECOFR) user profile or with a
user profile that has security officer class and all special authorities.
3. Access the MIMIX Basic Main Menu in the product library that will receive the
copied configuration data. From the command line, type the command
CPYCFGDTA and press F4 (Prompt).
4. At the Copy from library prompt, specify the name of the library from which you
want to copy data.
5. To start copying configuration data, press Enter.
6. When the copy is complete, return to topic Checklist: copy configuration on
page 537 to verify your configuration.
APPENDIX C Configuring Intra communications
The MIMIX set of products supports a unique configuration called Intra. Intra is a
special configuration that allows the MIMIX products to function fully within a single-
system environment. Intra support replicates database and object changes to other
libraries on the same system by using system facilities that allow for communications
to be routed back to the same system. This provides an excellent way to have a test
environment on a single machine that is similar to a multiple-system configuration.
The Intra environment can also be used to perform backups while the system remains
active.
In an Intra configuration, the product is installed into two libraries on the same system
and configured in a special way. An Intra configuration uses these libraries to
replicate data to additional disk storage on the same system. The second library in
effect becomes a "backup" library.
By using an Intra configuration you can reduce or eliminate your downtime for routine
operations such as performing daily and weekly backups. When replicating changes
to another library, you can suspend the application of the replicated changes. This
enables you to concurrently back up the copied library to tape while your application
remains active. When the backup completes, you can resume operations that apply
replicated changes to the "backup" library.
An Intra configuration enables you to have a "live" copy of data or objects that can be
used to offload queries and report generations. You can also use an Intra
configuration as a test environment prior to installing MIMIX on another system or
connecting your applications to another system.
Because both libraries exist on the same system, an Intra configuration does not
provide protection from disaster.
Database replication within an Intra configuration requires that the source and target
files either have different names or reside in different libraries. Similarly, objects
cannot be replicated to the same named object in the same named library, folder, or
directory.
Note: Newly created data groups use remote journaling as the default configuration.
Remote journaling is not compatible with intra communications, so you must
use source send configuration when configuring for intra communications.
This section includes the following procedures:
Manually configuring Intra using SNA on page 543
Manually configuring Intra using TCP on page 544
Manually configuring Intra using SNA
In an Intra environment, MIMIX communicates between two product libraries on the
same system instead of between a local system and a remote system. If you manually
configure the communications necessary for Intra, consider the default product library
(MIMIX) to be the local system and the second product library (in this example,
MIMIXI) to be the remote system.
Important! We recommend that these steps be performed by MIMIX Services
personnel. Also, the system name for Intra should be named 'INTRA' as described
in this example.
If you need to manually configure SNA communications for an Intra environment, do
the following:
1. Create the system definitions for the product libraries used for Intra as follows:
a. For the MIMIX library (local system), use the local location name in the
following command:
CRTSYSDFN SYSDFN(local-location-name) TYPE(*MGT) TEXT('Manual creation')
b. For the MIMIXI library (remote system), use the following command:
CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT('Manual creation')
2. Create the transfer definition between the two product libraries with the following
command:
CRTTFRDFN TFRDFN(PRIMARY INTRA local-location-name) PROTOCOL(*SNA)
LOCNAME1(INTRA1) LOCNAME2(INTRA2) NETID1(*LOC) TEXT('Manual creation')
3. Create the MIMIX mode description using the following command:
CRTMODD MODD(MIMIX) MAXSSN(100) MAXCNV(100) LCLCTLSSN(12)
TEXT('MIMIX INTRA MODE DESCRIPTION Manual creation.')
4. Create a controller description for MIMIX Intra using the following command:
CRTCTLAPPC CTLD(MIMIXINTRA) LINKTYPE(*LOCAL) TEXT('MIMIX
INTRA Manual creation.')
5. Create a local device description for MIMIX using the following command:
CRTDEVAPPC DEVD(MIMIX) RMTLOCNAME(INTRA1) LCLLOCNAME(INTRA2)
CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO) SECURELOC(*YES)
TEXT('MIMIX INTRA Manual creation.')
6. Create a remote device description for MIMIX using the following command:
CRTDEVAPPC DEVD(MIMIXI) RMTLOCNAME(INTRA2)
LCLLOCNAME(INTRA1) CTL(MIMIXINTRA) MODE(MIMIX) APPN(*NO)
SECURELOC(*YES) TEXT('MIMIX REMOTE INTRA SUPPORT.')
7. Add a communication entry to the MIMIXSBS subsystem for the local location
using the following command:
ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA2)
JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
8. Add a communication entry to the MIMIXSBS subsystem for the remote location
using the following command:
ADDCMNE SBSD(MIMIXQGPL/MIMIXSBS) RMTLOCNAME(INTRA1)
JOBD(MIMIXQGPL/MIMIXCMN) DFTUSR(MIMIXOWN) MODE(MIMIX)
9. Vary on the controller, local device, and remote device using the following
commands:
VRYCFG CFGOBJ(MIMIXINTRA) CFGTYPE(*CTL) STATUS(*ON)
VRYCFG CFGOBJ(MIMIX) CFGTYPE(*DEV) STATUS(*ON)
VRYCFG CFGOBJ(MIMIXI) CFGTYPE(*DEV) STATUS(*ON)
10. Start the MIMIX system manager in both product libraries using the following
commands:
MIMIX/STRMMXMGR SYSDFN(*INTRA) MGR(*ALL)
MIMIX/STRMMXMGR SYSDFN(*LOCAL) MGR(*JRN)
Note: You still need to configure journal definitions and data group definitions.
Manually configuring Intra using TCP
In an Intra environment, MIMIX communicates between two product libraries on the
same system instead of between a local system and a remote system. The libraries
for the MIMIX installations need to have the same name, with the Intra library having
an 'I' appended to the end of the library name.
Important! We recommend that these steps be performed by MIMIX Services
personnel. Also, the system name for Intra should be named 'INTRA' as described
in this example.
In this example, the MIMIX library is the management system and the MIMIXI library
is the network system. If you manually configure the communications necessary for
Intra, consider the MIMIX library as the local system and the MIMIXI library as the
remote system. You may already have a management system defined and need to
add an Intra network system. All the configuration should be done in the MIMIX library
on the management system.
Note: If you have multiple network systems, you need to configure your transfer
definitions to have the same name with system1 and system2 being different.
For more information, see Multiple network system considerations on
page 155.
To add an entry in the host name table, use the Configure TCP/IP (CFGTCP)
command to access the Configure TCP/IP menu.
Select option 10 (Work with TCP/IP Host Table Entries) from the menu. From the
Work with TCP/IP Host Table display, type a 2 (Change) next to the LOOPBACK
entry and add 'INTRA' to that entry. (A command-line sketch of the same change
follows.)
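As a sketch of the same change from a command line, assuming the loopback entry
still has its default names, the IBM i Change TCP/IP Host Table Entry (CHGTCPHTE)
command could add INTRA as an additional name for the loopback address:

CHGTCPHTE INTNETADR('127.0.0.1') HOSTNAME(LOOPBACK LOCALHOST INTRA)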
For this example, the host name of the management system is Source and the host
name for the network or target system is Intra.
1. Create the system definitions for the product libraries used for Intra as follows:
a. For the MIMIX library (local system) enter the following command:
MIMIX/CRTSYSDFN SYSDFN(source) TYPE(*MGT) TEXT('management system')
Note: You may have already configured this system.
b. For the MIMIXI library (remote system), use the following command:
MIMIX/CRTSYSDFN SYSDFN(INTRA) TYPE(*NET) TEXT('network system')
2. Create the transfer definition between the two product libraries with the following
command. Note that the values for PORT1 and PORT2 must be unique.
MIMIX/CRTTFRDFN TFRDFN(PRIMARY SOURCE INTRA) HOST1(SOURCE)
HOST2(INTRA) PORT1(55501) PORT2(55502) MNGAJE(*YES)
3. Start the server for the management system (source) by entering the following
command:
MIMIX/STRSVR HOST(SOURCE) PORT(55501) JOBD(MIMIX/PORT55501)
4. Start the server for the network system (Intra) by entering the following command:
MIMIXI/STRSVR HOST(INTRA) PORT(55502) JOBD(MIMIXI/PORT55502)
5. Start the system managers from the management system by entering the
following command:
MIMIX/STRMMXMGR SYSDFN(INTRA) MGR(*ALL) RESET(*YES)
Start the remaining managers normally.
Note: You will still need to configure journal definitions and data group definitions on
the management system.
You may want to add service table entries for ports 55501 and 55502 to ensure that
other applications will not try to use these ports.
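For example, a minimal sketch using the IBM i Add Service Table Entry
(ADDSRVTBLE) command could reserve both ports; the service names shown are
hypothetical:

ADDSRVTBLE SERVICE('mimix-intra-1') PORT(55501) PROTOCOL('tcp')
ADDSRVTBLE SERVICE('mimix-intra-2') PORT(55502) PROTOCOL('tcp')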
APPENDIX D MIMIX support for independent ASPs
MIMIX has always supported replication of library-based objects and IFS objects to
and from the system auxiliary storage pool (ASP 1) and basic storage pools (ASPs
2-32). Now, MIMIX also supports replication of library-based objects and IFS objects,
including journaled IFS objects, data areas and data queues, located in independent
ASPs (ASPs 33-255). (An independent ASP is an iSeries construct introduced by IBM
in V5R1 and extended in V5R2 of IBM i.)
The system ASP and basic ASPs are collectively known as SYSBAS. Figure 34
shows that MIMIX supports replication to and from SYSBAS and to and from
independent ASPs. Figure 35 shows that MIMIX also supports replication from
SYSBAS to an independent ASP and from an independent ASP to SYSBAS.
Figure 34. MIMIX supports replication to and from an independent ASP as well as standard
replication to and from SYSBAS (the system ASP and basic ASPs).
Figure 35. MIMIX also supports replication between SYSBAS and an independent ASP.
Restrictions: There are several permanent and temporary restrictions that pertain to
replication when an independent ASP is included in the MIMIX configuration. See
Requirements for replicating from independent ASPs on page 550 and Limitations
and restrictions for independent ASP support on page 550.
Benefits of independent ASPs
The key characteristic of an independent ASP is its ability to function independently
from the rest of the storage on a server. Independent ASPs can also be made
available and unavailable at the time of your choosing. The benefits of using
independent ASPs in your environment can be significant. You can isolate
infrequently used data that does not always need to be available when the system is
up and running. If you have a lot of data that is unnecessary for day-to-day business
operations, for example, you can isolate it and leave it offline until it is needed. This
allows you to shorten processing time for other tasks, such as IPLs, reclaim storage,
and system start time.
Additional benefits of independent ASPs allow you to do the following:
Consolidate applications and data from multiple servers into a single IBM System
i, allowing for simpler system management and application maintenance.
Decrease downtime, enabling data on your system to be made available or
unavailable without an IPL.
Add storage as necessary, without having to make the system unavailable.
Avoid the need to recover all data in the event of a system failure, since the data is
isolated.
Streamline naming conventions, since multiple instances of data with the same
object and library names can coexist on a single System i in separate independent
ASPs.
Protect data that is unique to a specific environment by isolating data associated
with specific applications from other groups of users.
Using MIMIX provides a robust solution for high availability and disaster recovery for
data stored in independent ASPs.
Auxiliary storage pool concepts at a glance
An independent ASP is actually a part of the larger construct of an auxiliary storage
pool (ASP). Each ASP on your system is a group of disk units that can be used to
organize data for single-level storage, limiting the impact of storage device failures
and reducing recovery time. The system spreads data across the disk units within an ASP.
Figure 36 shows the types and subtypes of ASPs. The system ASP (ASP 1) is
defined by the system and consists of disk unit 1 and any other configured storage not
assigned to a basic or independent ASP. The system ASP contains the system
objects for the operating system and any user objects not defined to a basic or
independent ASP.
User ASPs are additional ASPs defined by the user. A user ASP can either be a
basic ASP or an independent ASP.
One type of user ASP is the basic ASP. Data that resides in a basic ASP is always
accessible whenever the server is running. Basic ASPs are identified as ASPs 2
through 32. Attributes of objects stored in a basic ASP, such as spooled file,
authorization, and ownership information, reside in the system ASP. When storage for a
basic ASP is filled, the data overflows into the system ASP.
Collectively, the system ASP and the basic ASPs are called SYSBAS.
Another type of user ASP is the independent ASP. Identified by device name and
numbered 33 through 255, an independent ASP can be made available or
unavailable to the server without restarting the system. Unlike basic ASPs, data in an
independent ASP cannot overflow into the system ASP. Independent ASPs are
configured using iSeries Navigator.
Figure 36. Types of auxiliary storage pools.
Subtypes of independent ASPs consist of primary, secondary, and user-defined file
system (UDFS) independent ASPs. (MIMIX does not support UDFS independent
ASPs; see the note below.) Subtypes can be grouped together to function as a single
entity known as an ASP group. An ASP group consists of a primary independent ASP
and zero or more secondary independent ASPs. If you make one independent ASP in
a group unavailable, the others in the ASP group are made unavailable at the same
time.
A primary independent ASP defines a collection of directories and libraries and may
have associated secondary independent ASPs. A primary independent ASP defines a
database for itself and other independent ASPs belonging to its ASP group. The
primary independent ASP name is always the name of the ASP group in which it
resides.
A secondary independent ASP defines a collection of directories and libraries and
must be associated with a primary independent ASP. One common use for a
secondary independent ASP is to store the journal receivers for the objects being
journaled in the primary independent ASP.
Before an independent ASP is made available (varied on), all primary and secondary
independent ASPs in the ASP group undergo a process similar to a server restart.
Note: MIMIX does not support UDFS independent ASPs. UDFS independent ASPs
contain only user-defined file systems and cannot be a member of an ASP group
unless they are converted to a primary or secondary independent ASP.
While this processing occurs, the ASP group is in an active state and recovery steps
are performed. The primary independent ASP is synchronized with any secondary
independent ASPs in the ASP group, and journaled objects are synchronized with
their associated journal.
While being varied on, several server jobs are started in the QSYSWRK subsystem to
support the independent ASP. To ensure that their names remain unique on the
server, server jobs that service the independent ASP are given their own job name
when the independent ASP is made available.
Once the independent ASP is made available, it is ready to use. Completion message
CPC2605 (vary on completed for device name) is sent to the history log.
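For example, an independent ASP device named WILLOW (a placeholder name used throughout this appendix) could be made available from a command line, and the completion message then checked in the history log, as follows:
VRYCFG CFGOBJ(WILLOW) CFGTYPE(*DEV) STATUS(*ON)
DSPLOG LOG(QHST) MSGID(CPC2605)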
Requirements for replicating from independent ASPs
The following requirements must be met before MIMIX can support your independent
ASP environment:
• License Program 5722-SS1 option 12 (Host Server) must be installed in order for
  MIMIX to properly replicate objects in an independent ASP on the source and
  target systems.
• Any PTFs for IBM i that are identified as being required need to be installed on
  both the source and target systems. Log in to Support Central and check the
  Technical Documents page for a list of IBM i PTFs that may be required.
• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must
  be installed into SYSBAS.
Limitations and restrictions for independent ASP support
Limitations: Before using independent ASP support, be aware that independent
ASPs do not protect against disk failure. If the disks in the independent ASP are
damaged and the data is unrecoverable, data is available only up to the last backup
copy. A replication solution such as MIMIX is still required for high-availability and
disaster recovery. In addition, be aware of the following limitations:
• Although you can use the same library name between independent ASPs, an
  independent ASP cannot share a library name with a library in the system ASP or
  basic ASPs (SYSBAS). SYSBAS is a component of every name space, so the
  presence of a library name in SYSBAS precludes its use in any independent ASP.
  This will affect how you configure objects for replication with MIMIX, especially for
  IFS objects. See Configuring library-based objects when using independent
  ASPs on page 552.
• Unlike basic ASPs, when an independent ASP fills, no new objects can be created
  into the device. Also, updates to existing objects in the independent ASP, such as
  adding records to a file, may not be successful. If an independent ASP attached to
  the target system fills, your high-availability and disaster recovery solutions are
  compromised.
• IBM restricts the object types that can be stored in an independent ASP. For
  example, DLOs cannot reside in an independent ASP.
Restrictions in MIMIX support for independent ASPs include the following:
• MIMIX supports the replication of objects in primary and secondary independent
  ASPs only. Replication of IFS objects that reside in user-defined file system
  (UDFS) independent ASPs is not supported.
• You should not place libraries in independent ASPs within the system portion of a
  library list. MIMIX commands automatically call the IBM command SETASPGRP,
  which can result in significant changes to the library list for the associated user
  job. See Avoiding unexpected changes to the library list on page 553.
• MIMIX product libraries, the LAKEVIEW library, and the MIMIXQGPL library must
  be installed into SYSBAS. These libraries cannot exist in an independent ASP.
• Any *MSGQ libraries, *JOBD libraries, and *OUTFILE libraries specified on MIMIX
  commands must reside in SYSBAS.
• For successful replication, ASP devices in ASP groups that are configured in data
  group definitions must be made available (varied on). Objects in independent
  ASPs attached to the source system cannot be journaled if the device is not
  available. Objects cannot be applied to an independent ASP on the target system
  if the device is not available.
• Planned switchovers of data groups that include an ASP group must take place
  while the ASP devices on both the source and target systems are available. If the
  ASP device for the data group on either the source or target system is unavailable
  at the time the planned switchover is attempted, the switchover will not complete.
• To support an unplanned switch (failover), the independent ASP device on the
  backup system (which will become the temporary production system) must be
  available in order for the failover to complete successfully.
• You must run the Set ASP Group (SETASPGRP) command on the local system
  before running the Send Network Object (SNDNETOBJ) command if the object
  you are attempting to send to a remote system is located in an independent ASP,
  as shown in the sketch after this list.
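A sketch of that sequence, assuming a hypothetical ASP group named WILLOW (the SNDNETOBJ parameters are omitted here; specify them as appropriate for your object and remote system):
SETASPGRP ASPGRP(WILLOW)
/* Then run SNDNETOBJ for the object in the independent ASP */
SETASPGRP ASPGRP(*NONE)  /* Optionally reset the job's ASP group afterward */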
Also be aware of the following temporary restrictions:
• MIMIX does not perform validity checking to determine if the ASP group specified
  in the data group definition actually exists on the systems. This may cause error
  conditions when running commands.
• Any monitors configured for use with MIMIX must specify the ASP group. Monitors
  of type *JRN or *MSGQ that watch for events in an independent ASP must specify
  the name of the ASP group where the journal or message queue exists. This is
  done with the ASPGRP parameter of the CRTMONOBJ command.
• Information regarding independent ASPs is not provided on the following displays:
  Display Data Group File Entry (DSPDGFE), Display Data Group Data Area Entry
  (DSPDGDAE), Display Data Group Object Entry (DSPDGOBJE), and Display
  Data Group Activity Entry (DSPDGACTE). To determine the independent ASP in
  which the object referenced in these displays resides, see the data group
  definition.
Configuration planning tips for independent ASPs
A job can only reference one independent ASP at a time. Storing applications and
programs in SYSBAS ensures that they are accessible by any job. Data stored in an
independent ASP is not accessible for replication when the independent ASP is
varied off.
For database replication and replication of objects through Advanced Journaling
support, due to the requirement for one user journal per data group, it is not possible
for a single data group to replicate both SYSBAS data and ASP group data.
For object replication of library-based objects through the system journal, you should
configure related objects in SYSBAS and an ASP group to be replicated by the same
data group. Objects in SYSBAS and an ASP group that are not related should be
separated into different data groups. This precaution ensures that the data group will
start and that objects residing in SYSBAS will be replicated when the independent
ASP is not available.
Note: To avoid replicating an object by more than one data group, carefully plan
what generic library names you use when configuring data group object
entries in an environment that includes independent ASPs. Make every
attempt to avoid replicating both SYSBAS data and independent ASP data for
objects within the same data group. See the example in Configuring library-
based objects when using independent ASPs on page 552.
Journal and journal receiver considerations for independent ASPs
For database replication and replication of objects through Advanced Journaling
support, the data to be replicated and the journal used for its replication must exist in
the same ASP. When you configure replication for an independent ASP, consider
what data you store there and the location of the journal and journal receivers needed
to replicate the data.
With independent ASPs, you have the option of placing journal receivers in an
associated secondary independent ASP. When you create an independent ASP, an
ASP group is automatically created that uses the same name you gave the primary
independent ASP.
Configuring IFS objects when using independent ASPs
Replication of IFS objects in an independent ASP is supported through default
replication processes and through MIMIX Advanced Journaling support. However,
there are differences in how to configure for these different environments.
For IFS replication by default object replication processes, you do not need to identify
an ASP group in a data group definition because an IFS object's path includes the
independent ASP device name.
However, for IFS replication through Advanced Journaling support, you must specify
the ASP group name in the data group definition so that MIMIX can locate the
appropriate user journal.
If you are using Advanced Journaling support and want to limit a data group to only
replicate IFS objects from SYSBAS, specify *NONE for the ASP group parameters in
the data group definition.
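A sketch only, assuming a hypothetical data group named MYDG between systems SYS1 and SYS2 (the exact ASP group parameter names may differ in your MIMIX level; verify them on the Change Data Group Definition command prompt):
CHGDGDFN DGDFN(MYDG SYS1 SYS2) ASPGRP1(*NONE) ASPGRP2(*NONE)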
Configuring library-based objects when using independent ASPs
Use care when creating generic data group object entries; otherwise you can create
situations where the same object is replicated by multiple data groups. This applies
to replication between independent ASPs as well as replication between an
independent ASP and SYSBAS.
For example, data group APP1 defines replication between ASP groups named
WILLOW on each system. Similarly, group APP2 defines replication between ASP
groups named OAK on each system. Both data groups have a generic data group
object entry that includes object XYZ from library names beginning with LIB*. If object
LIBASP/XYZ exists in both independent ASPs and matches the generic data group
object entry defined in each data group, both data groups replicate the corresponding
object. This is considered normal behavior for replication between independent ASPs,
as shown in Figure 37.
However, in this example, if SYSBAS contains an object that matches the generic
data group object entry defined for each data group, the same object is replicated by
both data groups. Figure 37 shows that object LIBBAS/XYZ meets the criteria for
replication by both data groups, which is not desirable.
Figure 37. Object XYZ in library LIBBAS is replicated by both data groups APP1 and APP2
because the data groups contain the same generic data group object entry. As a result, this
presents a problem if you need to perform a switch.
Avoiding unexpected changes to the library list
It is recommended that the system portion of your library list does not include any
libraries that exist in an ASP group.
Whenever you run a MIMIX command, MIMIX automatically determines whether the
job requires a call to the IBM command Set ASP Group (SETASPGRP). The
SETASPGRP command changes the current job's ASP group environment and
enables MIMIX to access objects that reside in independent ASP libraries. MIMIX
resets the job's ASP group to its initial value as needed before processing is
completed.
The SETASPGRP command may modify the library list of the current job. If the library
list contains libraries for ASP groups other than the ASP group for which the
command was called, the SETASPGRP command removes the extra libraries from
the library list. This can affect the system and user portions of the library list as well as
the current library.
When a MIMIX command runs the SETASPGRP command during processing, MIMIX
resets the user portion of the library list and the current library in the library list to their
initial values. The system portion of the library list is not restored to its initial value.
Figure 38, Figure 39, and Figure 40 show how the system portion of the library list is
affected on the Display Library List (DSPLIBL) display when the SETASPGRP
command is run.

Figure 38. Before a MIMIX command runs. The library list contains three independent ASP
libraries, including a library in independent ASP WILLOW in the system portion of the library
list.

                             Display Library List
                                                        System:   CHICAGO
 Type options, press Enter.
   5=Display objects in library

 Opt  Library     Type  ASP device  Text
 ___  LIBSYS1     SYS   WILLOW
 ___  LIBSYS2     SYS
 ___  LIBSYS3     SYS
 ___  LIBCUR1     CUR   WILLOW
 ___  LIBUSR1     USR   OAK
 ___  LIBUSR2     USR
                                                                    Bottom
 F3=Exit   F12=Cancel   F17=Top   F18=Bottom

Figure 39. During the running of a MIMIX command. The independent ASP libraries are
removed from the library list.

                             Display Library List
                                                        System:   CHICAGO
 Type options, press Enter.
   5=Display objects in library

 Opt  Library     Type  ASP device  Text
 ___  LIBSYS1     SYS
 ___  LIBSYS2     SYS
 ___  LIBSYS3     SYS
 ___  LIBCUR1     CUR
 ___  LIBUSR1     USR
 ___  LIBUSR2     USR
                                                                    Bottom
 F3=Exit   F12=Cancel   F17=Top   F18=Bottom

Figure 40. After the MIMIX command runs. The library in independent ASP WILLOW in the
system portion of the library list is removed. The libraries in independent ASP OAK in the user
portion of the library list and the current library are restored.

                             Display Library List
                                                        System:   CHICAGO
 Type options, press Enter.
   5=Display objects in library

 Opt  Library     Type  ASP device  Text
 ___  LIBSYS1     SYS
 ___  LIBSYS2     SYS
 ___  LIBSYS3     SYS
 ___  LIBCUR1     CUR   WILLOW
 ___  LIBUSR1     USR   OAK
 ___  LIBUSR2     USR
                                                                    Bottom
 F3=Exit   F12=Cancel   F17=Top   F18=Bottom

The SETASPGRP command can return escape message LVE3786 if License
Program 5722-SS1 option 12 (Host Server) is not installed.
Detecting independent ASP overflow conditions
You can take advantage of the independent ASP threshold monitor to detect
independent ASP overflow conditions that put your high availability solution at risk
due to insufficient storage.
The independent ASP threshold monitor, MMIASPTHLD, monitors the QSYSOPR
message queue in library QSYS for messages indicating that the amount of storage
used by an independent ASP exceeds a defined threshold. When this condition is
detected, the monitor sends a warning notification that the threshold is exceeded. The
status of warning notifications is incorporated into overall MIMIX status. Notifications
can be displayed with the Work with Notifications (WRKNFY) command.
Each ASP defaults to 90% as the threshold value. To change the threshold value, you
must use IBM's iSeries Navigator.
The independent ASP threshold monitor is shipped with MIMIX. The monitor is not
automatically started after MIMIX is installed. If you want to use this monitor, you must
start it. The monitor is controlled by the master monitor.
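For example, the monitor could be enabled and started from a command line as follows (these are the same monitor commands shown in Example of creating a monitor to run a user rule on page 566):
CHGMONSTS MONITOR(MMIASPTHLD) STATUS(*ENABLED)
STRMON MONITOR(MMIASPTHLD)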
What are rules and how they are used by auditing
A rule provides the construct for defining and processing an action to detect a
problem that is inconsistent with your availability requirements. In other words, a rule
is a check for compliance. Specifically, a rule defines a command to be invoked by the
MIMIX Run Rule (RUNRULE) command and options for notifying you of the result.
There are two types of rules: MIMIX rules and user-defined rules.
MIMIX rules - MIMIX AutoGuard uses MIMIX rules as the mechanism for defining
and invoking audits. Each shipped MIMIX rule pre-defines a command invoked by
the compare phase of an audit and the possible actions that can be initiated, if
needed, in the recovery phase of an audit. MIMIX rules have names which begin
with the pound sign (#) character. While MIMIX rules cannot be changed, you
have considerable control over audits that use MIMIX rules through policies and
scheduling. MIMIX rules require job scheduling support, such as that provided by
MIMIX.
User-defined rules - User-defined rules are those that you create for a specific
purpose either by copying a MIMIX rule or by creating a rule in MIMIX Availability
Manager. You specify the command or program to be invoked by the rule. User-
defined rules provide a way of incorporating other types of checks into your MIMIX
environment. When user-defined rules are run, the rules framework automatically
generates indicators, called notifications, that are available in the MIMIX user
interface and incorporates the severity of detected errors into MIMIX status. User-
defined rules do not include the automatic job scheduling support available for
MIMIX rules. Also, user-defined rules do not support automatic recovery actions,
even if the specified command was copied from a MIMIX rule. You can create
MIMIX monitors to handle job scheduling or use a different job scheduling
mechanism.
Two commands, Run Rule (RUNRULE) and Run Rule Group (RUNRULEGRP),
enable programmatic scheduling of rule activity. MIMIX invokes the RUNRULE
command when submitting automatically scheduled audits. You can also run rules on
demand by using user interface options for audits and rules or by using these
commands interactively.
APPENDIX E Creating user-defined rules and
notifications
MIMIX provides the capability to create user-defined rules and integrate the status of
those rules into status reporting for MIMIX. This can be useful to perform specialized
checks of your environment that augment your regularly scheduled audits. This
appendix describes how to create user-defined rules and notifications.
• What are rules and how they are used by auditing on page 556 defines the
  differences between MIMIX rules used for auditing and user-defined rules.
• Requirements for using audits and rules on page 558 identifies the policy
  required for automatic audit recovery and the authority levels needed for working
  with rules when additional product and command security functions provided
  through License Manager are used.
• Guidelines and recommendations for auditing on page 558 provides
  considerations for effectively auditing a replication environment and
  recommendations for using both MIMIX rules and user-defined rules.
• Creating user-defined rules on page 562 describes how to create a rule and
  provides an example of a user-defined rule that checks the name of the first and
  last member of files that have multiple members.
• Creating user-generated notifications on page 563 describes how to create a
  notification that can be used with custom automation.
• Running user rules and rule groups programmatically on page 566 describes
  running rules when initiated by a job scheduling task.
• MIMIX rule groups on page 567 lists the pre-configured sets of MIMIX rules that
  are shipped with MIMIX.
Requirements for using audits and rules
To take advantage of automatic recoveries that audits invoke through MIMIX rules,
you must have the Automatic audit recovery policy enabled.
If you take advantage of the additional product and command security functions
provided through License Manager, you may need different authority levels to work
with rules. Viewing rules requires display (*DSP) authority. Running rules requires
operator (*OPR) authority. Changing rules requires management (*MGT) authority.
(MIMIX rules cannot be changed.) For more information about these provided security
functions, see the License and Availability Manager book.
Guidelines and recommendations for auditing
To effectively audit a replication environment, there are a number of things to
consider. This section highlights the main considerations but does not make specific
recommendations or provide full examples. MIMIX service providers are specifically
trained to provide a robust audit solution that meets your needs.
The following are key considerations:
• How much time or system resource can you dedicate to audit processing each
  day, week, or month?
• How often should all data within the database be audited?
• In addition to regularly scheduled audits, consider when you may need to run
  audits manually. For example, before switching, you should run all audits at audit
  level 30. See the Audit level discussion below.
• In some environments using commitment control, the #MBRRCDCNT audit may
  be long-running. Refer to the MIMIX Administrator Reference book for information
  about improving the performance of this audit.
Audit level: Best practice for auditing is to run the most extensive comparison
possible. Specifying level 30 for the Audit level policy enables this. If you choose to
run daily audits at a lower audit level, you should be aware of the risks, especially
when switching.
The level you choose for daily audits depends on your environment, and especially on
the data compared by the #FILDTA and #IFSATR audits. When choosing a value,
consider how much data there is to compare, how frequently it changes, how long the
audit runs, how often you run the audit, and how often you need to be certain that data
is synchronized between source and target systems.
The #FILDTA audit compares all data for file members defined to a data group only
when audit level 30 is used. Level 10 and level 20 compare 5 percent and 20 percent
of the data, respectively. Lower audit levels may take days or weeks to completely
audit file data. New files created during that time may not be audited.
The #IFSATR audit compares data when audit level 20 or 30 is used. At level 10, only
attributes are compared.
Regardless of the level you use for daily operations, Vision Solutions strongly
recommends that you perform audits at audit level 30 before the following events to
ensure that 100 percent of the data is valid on the target system:
• Before performing a planned switch to the backup system.
• Before switching back to the production system.
Recommendations when automatic audit recovery is enabled: You should also
consider the following when you use audit recoveries:
• MIMIX rules support recoveries only when the automatic audit recovery policy is
  enabled. Automatic recovery is not supported for user-defined rules.
• It may take multiple iterations of running audits with recoveries before the results
  are clean. Recovering from one error may result in a different error surfacing the
  next time the audit is performed. For example, a recovery that adds data group file
  entries may result in detecting a database relationship difference (*DBRIND) error
  the next time the audit is performed, where the root problem is that a library of
  logical files is not identified for replication.
• Always review the results of the audits. Audit results reflect only what was actually
  compared. Some objects may not have been compared due to object activity or
  due to the audit level policy value in effect, even when no differences (*NODIFF)
  are reported. You may need to take actions other than running an audit to correct
  detected issues. For example, you may need to change a procedure so that target
  system objects are only updated by replication processes.
• Watch for trends in the audit results. Trends may indicate situations that need
  further investigation. For example, objects that are being recovered for the same
  reason every time you run an audit can be an indication that something in your
  environment is affecting the objects between audits. In this case, investigating the
  environment for the cause may determine that a change is needed in the
  environment, in the MIMIX configuration, or in both. Trends may also indicate a
  MIMIX problem, such as reporting an object as being recovered when it was not.
  Report these scenarios to MIMIX CustomerCare. You can do this by creating a
  new case using the Case Management page in Support Central.
Considerations and recommendations for rules
The following considerations apply to MIMIX rules used by audits as well as to user-
defined rules unless explicitly noted otherwise.
Note: Rules are not allowed to run against disabled data groups.
General recommendations:
• When choosing the value for the Run rule on system policy, consider your
  switching needs.
• Run MIMIX rules on a scheduled basis. This will help you detect problems in a
  timely manner and when you have time to address them.
Considerations for the run rule commands: The RUNRULE command allows you
to run multiple rules concurrently, with each specified rule running in an independent
process. A limit of 100 unique rules can be specified per RUNRULE request.
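For example, the following request runs two shipped rules concurrently against all data groups, each rule in its own process (this assumes, as the limit above implies, that the RULE parameter accepts a list of rule names):
RUNRULE RULE(#FILATR #FILATRMBR) DGDFN(*ALL)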
The RUNRULEGRP command only allows you to specify one rule group at a time.
Otherwise, this command is like the RUNRULE command.
When prompting the RUNRULE or RUNRULEGRP commands, consider the
following:
• For the Data group definition prompts, the default value, *NONE, means the
  rule will not be run against a data group. If *NONE is specified on the
  command when the rule uses the &DGDFN replacement variable, running the
  RUNRULE command results in an error condition in the audit status and a
  message log entry. When a data group name or *ALL is specified, any instance
  of the &DGDFN replacement variable is replaced with the data group name
  and each data group is run in a separate process.
• For the Job description and Library prompts, the default value, MXAUDIT,
  submits the request using the default job description, MXAUDIT.
Replacement variables
Replacement variables are used to simplify the configuration and management of
rules by allowing rule actions to be used for multiple data groups. They can also
simplify outfile generation and cleanup. Replacement variables begin with an
ampersand (&) and are used to pass in a value when a rule action is run.
Some commonly used replacement variables include:
• The &PRDLIB replacement variable passes in the library from which the
  command specified in the rule is initiated.
• The &DGDFN replacement variable identifies the data group the rule is to act
  upon. In order to run a rule that contains &DGDFN, you must specify the value for
  the data group definition on the RUNRULE command.
• The &OUTFILE replacement variable passes in the name of a MIMIX generated
  output file (outfile). The outfile is placed in a library whose name is the name of the
  MIMIX installation library followed by the characters _0. The outfile is managed by
  MIMIX. When &OUTFILE is specified in a rule, you will be able to view the
  resulting outfile from the user interface.
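Putting these variables together, the command string stored in a rule might look like the following sketch, which mirrors the example shown later in Table 78:
&prdlib/CMPFILA DGDFN(&dgdfn) CMPLVL(*FILE) OUTPUT(*OUTFILE) OUTFILE(&outfile)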
Rule-generated messages and notifications
For audits, the primary interface for checking results is the Audit Summary interface.
This topic describes additional, secondary messaging for rules.
When the action identified in a rule is started, an informational message appears in
the message log. An informational message also appears when a rule action
completes successfully.
When an action initiated by a rule ends in error or runs successfully but detects
differences, an escape message appears in the message log and an error notification
is sent to the notifications user interface.
Rules that call MIMIX commands may result in an error notification and a message
log entry if you do not have a valid access code for the MIMIX product or if the access
code has expired.
Rule-related messages are marked with a Process value of *NOTIFY to facilitate the
filtering of rules- and notification-related messages.
Creating user-defined rules
A user-defined rule can address specific needs that are unique to your environment.
By creating user-defined rules, you can automate customized checks of your
environment and have the results incorporated into the overall status for MIMIX.
Automatic recovery actions are not supported for any user-defined rules, including
those which are copies of MIMIX rules.
Note: User-defined rules can only be created through MIMIX Availability Manager.
(MIMIX Availability Manager does not support MIMIX version 7 installations.)
Once created, user-defined rules can be run and their results can be checked
from either user interface. Many windows in MIMIX Availability Manager
support actions for running rules. From a 5250 emulator, rules can be run
using the RUNRULE command. Results are accessible from the notifications
associated with the rule. See Considerations and recommendations for rules
on page 559. For more information about running rules manually and checking
the results of user-defined rules, see the MIMIX Operations book.
From MIMIX Availability Manager, do the following:
1. From the navigation bar, select the version 6 or earlier System and Installation on
which you want to create the rule.
2. Select Rules.
3. In the Rules window, do one of the following:
• Select New Rule.
• Locate a rule to copy, then select the Copy action.
4. A new window opens. If you copied a rule, the details of that rule are displayed.
Do the following:
a. Specify a name at the Rule prompt.
b. Specify or change the Description prompt to identify the purpose of the rule.
c. At the Command prompt, specify or change the command to be run by the
rule. You can use substitution variables within the specified command so it can
be used for multiple data groups.
5. Click Save.
Example of a user-defined rule
User-defined rules can supplement auditing by checking additional attributes that are
critical to your environment. For example, although the Compare File Attributes
(CMPFILA) command supports comparing the *FIRSTMBR and *LASTMBR
attributes, when the command is invoked by an audit using the #FILATR rule these
attributes are not compared. You can compare the name of the first member, the
name of the last member, or both by using the CMPFILA command. If you need to
perform this check often, you can easily automate the task by creating a user-defined
rule to perform this check. The rule can then be submitted by a job scheduling utility,
such as a MIMIX monitor, and scheduled so that it runs periodically.
Table 78 illustrates a user-defined rule that compares the *FIRSTMBR and
*LASTMBR attributes of a multi-member file. The substitution variables (&prdlib,
&dgdfn, and &outfile) used in the specified command allow this rule to be used to
check the first and last member name of files in any data group. Although not explicitly
specified in the command shown in this example, default values result in a report type
that includes only detected differences.

Table 78. Sample user-defined rule to check first and last member names

  Rule:                     mbrname
  Description:              Compares the *FIRSTMBR and *LASTMBR names.
  Based on audit level:     30
  Command:                  &prdlib/CMPFILA DGDFN(&dgdfn) CMPLVL(*FILE)
                            CMPATR(*FIRSTMBR *LASTMBR) OBJDIFMSG(*OMIT)
                            OUTPUT(*OUTFILE) OUTFILE(&outfile) BATCH(*NO)
  Notification severity:    Error
  Notification on success:  Send a notification
Creating user-generated notifications
MIMIX supports the ability to create user-generated notifications for user-defined
events and have their status and severity reflected within overall MIMIX status. User-
generated notifications can be created interactively from a command line or by
automation programs when user-defined events are detected.
User-generated notifications are created with a status of *NEW. User-generated
notifications appear on the Work with Notifications display and their severity is
reflected in MIMIX status on higher-level displays. The systems from which you can
view a notification are subject to the role of the system on which the notification was
created and the value that was specified for the DGDFN parameter.
To create a user-generated notification, do the following:
1. Enter the following from a command line:
installation_library/ADDNFYE
2. The Add Notification Entry (ADDNFYE) display appears. Specify values for the
following prompts:
a. Notification description (TEXT) - Specify a short description with no more than
132 characters of text, enclosed in apostrophes. This text will appear on the
Work with Notifications display.
b. Notification severity (SEVERITY) - Specify the severity assigned to the
notification. The specified value determines how the notification is prioritized in
overall MIMIX status. Use the default value *ERROR to indicate an error was
detected; typically action is required to resolve the problem. The value
*WARNING identifies that action may be required and the value *INFO informs
of a successful operation.
c. Data group definition (DGDFN) - If necessary, specify the three-part name for a
data group. When a data group is specified, the notification is available on
either system defined to the data group. When the value *NONE is specified,
the role of the system (management or network) determines where the
notification will be available. A notification with a value of *NONE added from a
management system will not be available on any network systems. A
notification with a value of *NONE added from a network system will not be
available on any other network systems.
d. Notification details (DETAIL) - Specify information to identify what caused the
notification and what users are expected to do if action is needed. This field
must be no more than 512 characters of text, enclosed in apostrophes. This
information is visible when the notification details are displayed.
3. You can optionally specify values for Job name details (JOB) and File details
(FILE) to identify the job which generated the notification and an associated
output file and library. In order to have this information available to users, you
must specify it now. When specified, this information is available for the
notification from the system on which the notification was sent.
4. To add the notification, press Enter.
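For example, the following command would add a warning-severity notification (all text values shown are placeholders):
installation_library/ADDNFYE TEXT('Nightly save ended abnormally') SEVERITY(*WARNING) DGDFN(*NONE) DETAIL('The nightly save job ended abnormally. Review the job log and rerun the save.')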
Example of a user-generated notification
A MIMIX administrator wants to see a notification reflected in MIMIX status when TCP
communications fails. A message queue monitor on a specific system can check for a
message indicating a communications failure and issue a notification when the
message occurs.
Note: The administrator in this example must use care when determining where to
create the monitor. A monitor runs on only a single system, but the notification
it generates may not be available on multiple systems. The role of the system
(management or network) on which the monitor runs and the values specified
for the Add Notification Entry command in the monitor's event program
determine where the notification will be available. (For details, see the DGDFN
information in Creating user-generated notifications on page 563.) Because
the communications problem being monitored for may also prevent the
notification from reaching the appropriate systems, the administrator chose to
create this monitor on multiple systems in the installation.
The following command creates a message queue monitor named COMPROB to
check for message LVE0113 (TCP communications request failed with error &1) in
the MIMIX message queue in the MIMIXQGPL library:
CRTMONOBJ MONITOR(COMPROB) EVTCLS(*MSGQ) EVTPGM(user_library/COMPROB) MSGQ(MIMIXQGPL/MIMIX) MSGID(LVE0113) AUTOSTR(*YES) TEXT('Issue notification entry for TCP communication problem')
The event program includes the instruction to issue the following command, which will
add a notification to MIMIX in the specified installation library:
installation_library/ADDNFYE TEXT('comm failure') SEVERITY(*ERROR) DGDFN(*NONE) DETAIL('TCP communications failed. Investigation needed.')
Once the monitor is enabled and started, the event program COMPROB will run when
the message LVE0113 is detected. For additional information about creating monitors
and writing event programs, see the Using MIMIX Monitor book.
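A minimal sketch of such an event program in CL follows. It assumes the installation library is named MIMIX, and it omits the parameter list that monitor event programs receive; see the Using MIMIX Monitor book for the required parameters.
PGM
/* Add a MIMIX notification when the COMPROB monitor detects LVE0113 */
MIMIX/ADDNFYE TEXT('comm failure') SEVERITY(*ERROR) +
  DGDFN(*NONE) DETAIL('TCP communications failed. Investigation needed.')
MONMSG MSGID(CPF0000 LVE0000) /* Do not end the monitor job if the add fails */
ENDPGM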
Running user rules and rule groups programmatically
The benefit of using rules is that, with them, you can automate activity that would
otherwise be difficult or time consuming. To get the most benefit from rules, they
should be run programmatically, initiated by a job scheduling task.
Example of creating a monitor to run a user rule
You can create your own monitors to run user-defined rules automatically at
scheduled intervals.
Example: In this example, the user rule MBRNAME has already been created
through MIMIX Availability Manager. The following procedure creates a monitor that
runs the MBRNAME rule on the local system at 1:00 AM on Sundays. The rule will run
against all data groups. The monitor is enabled to start with the master monitor so that
scheduling is automatic.
1. From the system on which you want the rule to run, enter the following command
to create the monitor:
CRTMONOBJ MONITOR(MBRNAME) EVTCLS(*TIME) EXITPGM(*CMD) FRQ(*WEEKLY) SCDDATE(*NONE) SCDDAY(*SUN) SCDTIME(010000) AUTOSTR(*YES) TEXT('Monitor to run the rule MBRNAME')
2. The Add Monitor Information (ADDMONINF) display appears. Specify the
following at the Command prompt and press Enter.
RUNRULE RULE(MBRNAME) DGDFN(*ALL)
3. Enable the monitor using the command:
CHGMONSTS MONITOR(MBRNAME) STATUS(*ENABLED)
4. Start the monitor using the command:
STRMON MONITOR(MBRNAME)
For more information about MIMIX Monitor, see the Using MIMIX Monitor book.
MIMIX rule groups
Each MIMIX rule group consists of a predetermined set of MIMIX rules. Table 79 lists
the pre-configured rule groups shipped with MIMIX. For a description of each MIMIX
rule used by each rule group, see topic How audits are scheduled automatically in the
MIMIX Operations book.
Table 79. Pre-configured MIMIX rule groups

  #ALL     - Set of all shipped DLO, file, IFS, and object rules.
             Rules included: #DGFE, #DLOATR, #FILATR, #FILATRMBR, #FILDTA,
             #IFSATR, #MBRRCDCNT, #OBJATR
  #ALLATR  - Set of shipped attribute comparisons for files, objects, IFS
             objects, and DLOs.
             Rules included: #DLOATR, #FILATR, #FILATRMBR, #IFSATR, #OBJATR
  #ALLDTA  - Set of data comparisons for files and IFS objects.
             Rules included: #FILDTA, #IFSATR
  #FILALL  - Set of shipped file rules that compares file and member
             attributes and file data, and checks configuration for files
             using cooperative processing.
             Rules included: #DGFE, #FILATR, #FILATRMBR, #FILDTA, #MBRRCDCNT
  #FILATR  - Set of shipped file rules that compares file and member
             attributes.
             Rules included: #FILATR, #FILATRMBR
  #IFSALL  - Set of shipped IFS rules.
             Rules included: #IFSATR
APPENDIX F Interpreting audit results
Audits use commands that compare and synchronize data. The results of the audits
are placed in output files associated with the commands. The following topics provide
supporting information for interpreting data returned in the output files.
• Resolving audit problems on page 569 describes how to check the status of an
  audit and resolve any problems that occur.
• Checking the job log of an audit on page 571 describes how to use an audit's job
  log to determine why an audit failed.
• Interpreting results for configuration data - #DGFE audit on page 572 describes
  the #DGFE audit which verifies the configuration data defined to your
  configuration using the Check Data Group File Entries (CHKDGFE) command.
• Interpreting results of audits for record counts and file data on page 574
  describes the audits and commands that compare file data or record counts.
• Interpreting results of audits that compare attributes on page 577 describes the
  Compare Attributes commands and their results.
Resolving audit problems
When viewing results of audits, the starting point is the Summary view of the Work
with Audits display. You may also need to view the output file or the job log, which are
only available from the system where the audits ran. In most cases, this is the
management system.
Do the following from the management system:
1. Do one of the following to access the Work with Audits display.
From the MIMIX Intermediate Main Menu, select option 6 (Work with audits)
and press Enter. Then use F10 as needed to access the Audit summary view.
From a command line, enter WRKAUD VIEW(*AUDSTS)
2. Check the Audit Status column for values shown in Table 80. Audits with potential
problems are at the top of the list. Take the action indicated in Table 80.
Table 80. Addressing audit problems
Status Action
*FAILED The audit failed for these possible reasons.
Reason 1: The rule called by the audit failed or ended abnormally.
To run the rule for the audit again, select option 9 (Run rule).
To check the job log, see Checking the job log of an audit on page 571.
Reason 2: The #FILDTA audit or the #MBRRCDCNT audit required replication processes
that were not active.
1. From the command line type WRKDG and press Enter.
If all processes for the data group are active, skip to Step 2.
If processes for the data group show a red I, L, or P in the Source and Target columns,
use option 9 (Start DG).
2. When the data group is active, return to the Work with Audits display and use option 9
(Run rule) to run the audit.
3. If the audit fails again, check the job log using Checking the job log of an audit on
page 571.
*DIFFNORCY The comparison performed by the audit detected differences. No recovery actions were
attempted because of a policy in effect when the audit ran. Either the Automatic audit
recovery policy is disabled or the Action for running audits policy prevented recovery
actions while the data group was inactive or had an apply process which exceeded its
threshold.
If policy values were not changed since the audit ran, checking the current settings will
indicate which policy was the cause. Use option 36 to check data group level policies and
F16 to check installation level policies.
If the Automatic audit recovery policy was disabled, the differences must be manually
resolved.
If the Action for running audits policy was the cause, either manually resolve the
differences or correct any problems with the data group status. You may need to start
the data group and wait for threshold conditions to clear. Then run the audit again.
To manually resolve differences do the following:
1. Type 7 (History) next to the audit with *DIFFNORCY status and press Enter.
2. The Work with Audit History display appears with the most recent run of the audit at the
top of the list. Type 8 (Display difference details) next to an audit to see its results in the
output file.
3. Check the Difference Indicator column. All differences shown for an audit with
*DIFFNORCY status need to be manually resolved. For more information about the
possible values, see Interpreting audit results on page 568.
To have MIMIX always attempt to recover differences on subsequent audits, change the
value of the automatic audit recovery policy.
*NOTRCVD The comparison performed by the audit detected differences. Some of the differences were
not automatically recovered. The remaining detected differences must be manually
resolved.
Note: For audits using the #MBRRCDCNT rule, automatic recovery is not possible. Other audits,
such as #FILDTA, may correct the detected differences.
Do the following:
1. Type 7 (History) next to the audit with *NOTRCVD status and press Enter.
2. The Work with Audit History display appears with the most recent run of the audit at the
top of the list. Type 8 (Display difference details) next to an audit to see its results in the
output file.
3. Check the Difference Indicator column. Any differences with values other than
*RECOVERED must be manually resolved. For more information about the possible
values, see Interpreting audit results on page 568.
*NOTRUN The audit was prevented from running by the Action for running audits policy. Either the
data group was inactive or an apply process exceeded its threshold. This may be expected
during periods of peak activity or when data group processes have been ended
intentionally. However, if the audit is frequently not run due to this policy, action may be
needed to resolve the cause of the problem.
For more information about the values displayed in the audit results, see Interpreting
results for configuration data - #DGFE audit on page 572, Interpreting results of
audits for record counts and file data on page 574, and Interpreting results of audits
that compare attributes on page 577.
Checking the job log of an audit
An audit's job log can provide more information about why an audit failed. If it still
exists, the job log is available on the system where the audit ran. Typically, this is the
management system.
You must display the notifications from an audit in order to view the job log. Do the
following:
1. From the Work with Audits display, type 7 (History) next to the audit and press
Enter.
2. The Work with Audit History display appears with the most recent run of the audit
at the top of the list.
3. Use option 12 (Display job) next to the audit you want and press Enter.
4. The Display Job menu opens. Select option 4 (Display spooled files). Then use
option 5 (Display) from the Display Job Spooled Files display.
5. Look for messages from the job log for the audit in question. Usually the most
recent messages are at the bottom of the display.
• Message LVE3197 is issued when errors remain after an audit completed.
• Message LVE3358 is issued when an audit failed. Check the job log for
  subsequent messages that indicate a communications problem (LVE3D5E,
  LVE3D5F, or LVE3D60) or a problem with data group status (LVI3D5E,
  LVI3D5F, or LVI3D60).
Interpreting results for configuration data - #DGFE audit
The #DGFE audit verifies the configuration data that is defined for replication in your
configuration. This audit invokes the Check Data Group File Entries (CHKDGFE)
command for the audit's comparison phase. The CHKDGFE command collects data
on the source system and generates a report in a spooled file or an outfile.
The report is available on the system where the command ran. The values in the
Result column of the report indicate detected problems and the result of any
attempted automatic recovery actions. Table 81 shows the possible Result values
and describes the action to take to resolve any reported problems.
The Option column of the report provides supplemental information about the
comparison. Possible values are:
• *NONE - No options were specified on the comparison request.
• *NOFILECHK - The comparison request included an option that prevented an
  error from being reported when a file specified in a data group file entry does not
  exist.
• *DGFESYNC - The data group file entry was not synchronized between the
  source and target systems. This may have been resolved by automatic recovery
  actions for the audit.

Table 81. CHKDGFE - possible results and actions for resolving errors

  *NODGFE     - No file entry exists.
                Recovery: Create the DGFE or change the DGOBJE to COOPDB(*NO).
                Note: Changing the object entry affects all objects using the
                object entry. If you do not want all objects changed to this
                value, copy the existing DGOBJE to a new, specific DGOBJE
                with the appropriate COOPDB value.
  *EXTRADGFE  - An extra file entry exists.
                Recovery: Delete the DGFE or change the DGOBJE to COOPDB(*YES).
                Note: Changing the object entry affects all objects using the
                object entry. If you do not want all objects changed to this
                value, copy the existing DGOBJE to a new, specific DGOBJE
                with the appropriate COOPDB value.
  *NOFILE     - No file exists for the existing file entry.
                Recovery: Delete the DGFE, re-create the missing file, or
                restore the missing file.
  *NOMBR      - No file member exists for the existing file entry.
                Recovery: Delete the DGFE for the member or add the member to
                the file.
  *RCYFAILED  - Automatic audit recovery actions were attempted but failed to
                correct the detected error.
                Recovery: Run the audit again.
  *RECOVERED  - Recovered by automatic recovery actions.
                Recovery: No action is needed.
  *UA         - File entries are in transition and cannot be compared.
                Recovery: Run the audit again.
One possible reason why actual configuration data in your environment may not
match what is defined to your configuration is that a file was deleted but the
associated data group file entries were left intact. Another reason is that a data group
file entry was specified with a member name, but a member is no longer defined to
that file. If you use the automatic scheduling and automatic audit recovery functions of
MIMIX AutoGuard, these configuration problems can be automatically detected and
recovered for you. Table 82 provides examples of when various configuration errors
might occur.
Table 82. CHKDGFE - possible error conditions

  Result        File exists   Member exists   DGFE exists   DGOBJE exists
  *NODGFE       Yes           Yes             No            COOPDB(*YES)
  *EXTRADGFE    Yes           Yes             Yes           COOPDB(*NO)
  *NOFILE       No            No              Yes           Exclude
  *NOMBR        Yes           No              Yes           No entry
Interpreting results of audits for record counts and file data
The audits and commands that compare file data or record counts are as follows:
• #FILDTA audit or Compare File Data (CMPFILDTA) command
• #MBRRCDCNT audit or Compare Record Count (CMPRCDCNT) command
Each record in the output files for these audits or commands identifies a file member
that has been compared and indicates whether a difference was detected for that
member.
You can see the full set of fields in each output file by viewing it from a 5250 emulator.
The type of data included in the output file is determined by the report type specified
on the compare command. The data included for each report type is as follows:
• Difference reports (RPTTYPE(*DIF)) return information about detected
  differences. Difference reports are the default for these compare commands.
• Full reports (RPTTYPE(*ALL)) return information about all objects and attributes
  compared. Full reports include both differences and objects that are considered
  synchronized.
• Relative record number reports (RPTTYPE(*RRN)) return the relative record
  number of the first 1,000 records of a member that fail to compare. Relative record
  number reports apply only to the Compare File Data command; see the sketch
  after this list.
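For example, a relative record number report could be requested with a command similar to the following sketch (the data group name and output file are placeholders, and the parameters shown follow the pattern of the CMPFILA example in Table 78; verify them on the command prompt):
CMPFILDTA DGDFN(MYDG SYS1 SYS2) RPTTYPE(*RRN) OUTPUT(*OUTFILE) OUTFILE(MYLIB/CMPOUT)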
What differences were detected by #FILDTA
The Difference Indicator (DIFIND) field identifies the result of the comparison. Table
83 identifies values for the Compare File Data command that can appear in this field.

Table 83. Possible values for Compare File Data (CMPFILDTA) output file field Difference
Indicator (DIFIND)

Values      Description
*APY The database apply (DBAPY) job encountered a problem
processing a U-MX journal entry for this member.
*CMT Commit cycle activity on the source system prevents active
processing from comparing records or record counts in the
selected member.
*CO Unable to process selected member. Cannot open file.
*CO (LOB) Unable to process selected member containing a large object
(LOB). The file or the MIMIX-created SQL view cannot be opened.
*DT Unable to process selected member. The file uses an unsupported
data type.
*EQ Data matches. No differences were detected within the data
compared. Global difference indicator.
*EQ (DATE) Member excluded from comparison because it was not changed or
restored after the timestamp specified for the CHGDATE
parameter.
*EQ (OMIT) No difference was detected. However, fields with unsupported
types were omitted.
*FF The file feature is not supported for comparison. Examples of file
features include materialized query tables.
*FMC Matching entry not found in database apply table.
*FMT Unable to process selected member. File formats differ between
source and target files. Either the record length or the null
capability is different.
*HLD Indicates that a member is held or an inactive state was detected.
*IOERR Unable to complete processing on selected member. Messages
preceding LVE0101 may be helpful.
*NE Indicates a difference was detected.
*NF1 Member not found on system 1.
*NF2 Member not found on system 2.
*REP The file member is being processed for repair by another job
running the Compare File Data (CMPFILDTA) command.
*SJ The source file is not journaled, or is journaled to the wrong journal.
*SP Unable to process selected member. See messages preceding
message LVE3D42 in job log.
*SYNC The file or member is being processed by the Synchronize DG File
Entry (SYNCDGFE) command.
*UE Unable to process selected member. Reason unknown. Messages
preceding message LVE3D42 in job log may be helpful.
*UN Indicates that the member's synchronization status is unknown.
What differences were detected by #MBRRCDCNT
Table 84 identifies values for the Compare Record Count command that can appear
in the Difference Indicator (DIFIND) field.
Table 84. Possible values for Compare Record Count (CMPRCDCNT) output file field
Difference Indicator (DIFIND)

Values      Description
*APY The database apply (DBAPY) job encountered a problem
processing a U-MX journal entry for this member.
*CMT Commit cycle activity on the source system prevents active
processing from comparing records or record counts in the
selected member.
*EC The attribute compared is considered equal based on the configuration settings.
*EQ Record counts match. No difference was detected within the record
counts compared. Global difference indicator.
*FF The file feature is not supported for comparison. Examples of file
features include materialized query tables.
*FMC Matching entry not found in database apply table.
*HLD Indicates that a member is held or an inactive state was detected.
*LCK Lock prevented access to member.
*NE Indicates a difference was detected.
*NF1 Member not found on system 1.
*NF2 Member not found on system 2.
*SJ The source file is not journaled, or is journaled to the wrong journal.
*UE Unable to process selected member. Reason unknown. Messages
preceding LVE3D42 in job log may be helpful.
*UN Indicates that the member's synchronization status is unknown.
Interpreting results of audits that compare attributes
Each audit that compares attributes does so by calling one of the Compare Attributes
commands (Compare File Attributes (CMPFILA), Compare Object Attributes
(CMPOBJA), Compare IFS Attributes (CMPIFSA), or Compare DLO Attributes
(CMPDLOA)) and places the results in an output file. Each row in an output file for a
Compare Attributes command can contain either a summary record format or a
detailed record format. Each summary row identifies a compared object and includes
a prioritized object-level summary of whether differences were detected. Each detail
row identifies a specific attribute compared for an object and the comparison results.
The type of data included in the output file is determined by the report type specified
on the Compare Attributes command. The data included for each report type is as
follows:
Difference reports (RPTTYPE(*DIF)) return information about detected
differences. Only summary rows for objects that had detected differences are
included. Detail rows for all compared attributes are included. Difference reports
are the default for the Compare Attributes commands.
Full reports (RPTTYPE(*ALL)) return information about all objects and attributes
compared. For each object compared there is a summary row as well as a detail
row for each attribute compared. Full reports include both differences and objects
that are considered synchronized.
Summary reports (RPTTYPE(*SUMMARY)) return only a summary row for each
object compared. Specific attributes compared are not included.
For difference and full reports of compare attribute commands, several of the attribute
selectors return an indicator (*INDONLY) rather than an actual value. Attributes that
return indicators are usually variable in length, so an indicator is returned to conserve
space. In these instances, the attributes are checked thoroughly, but the report only contains an indication of whether the attribute is synchronized.
For example, an authorization list can contain a variable number of entries. When comparing authorization lists, the CMPOBJA command will first determine whether both lists have the same number of entries. If they do, it will then determine whether both lists contain the same entries. If differences in the number of entries are found, or if the entries within the authorization list are not equal, the report will indicate that differences are detected. The report will not provide the list of entries; it will only indicate that they are not equal in terms of count or content.
You can see the full set of fields in the output file by viewing it from a 5250 emulator.
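For instance, because summary rows are flagged with an asterisk (*) in the Compared Attribute (CMPATR) field, a query such as the following sketch lists only the detail rows for attributes that were found to differ. The file name MYLIB.CMPOUT is a placeholder for the output file produced by the compare request; the field names are those described in this topic.

    -- List per-attribute detail rows with detected or unresolved differences.
    -- MYLIB.CMPOUT is a placeholder name for a Compare Attributes output file.
    SELECT CMPATR, SYS1VAL, SYS2VAL, DIFIND
      FROM MYLIB.CMPOUT
     WHERE CMPATR <> '*'                      -- exclude object-level summary rows
       AND DIFIND IN ('*NE', '*NC', '*UN')    -- keep rows that are not equal or unknown
     ORDER BY CMPATR;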
What attribute differences were detected
The Difference Indicator (DIFIND) field identifies the result of the comparison. Table
85 identifies values that can appear in this field. Not all values may be valid for every
Compare command.
When the output file is viewed from a 5250 emulator, the summary row is the first record for each compared object and is indicated by an asterisk (*) in the Compared Attribute (CMPATR) field. The summary row's Difference Indicator value is the prioritized summary of the status of all attributes checked for the object. When included, detail rows appear below the summary row for the object compared and show the actual result for the attributes compared.
The Priority column in Table 85 indicates the order of precedence MIMIX uses when determining the prioritized summary value for the compared object.

Table 85. Possible values for output file field Difference Indicator (DIFIND)

Values (1)  Description  (Summary Record Priority (2))
*EC  The values are based on the MIMIX configuration settings. The actual values may or may not be equal. (Priority 5)
*EQ  The compared values match. No differences were detected. Global difference indicator. (Priority 5)
*NA  The values are not compared. The actual values may or may not be equal. (Priority 5)
*NC  The values are not equal based on the MIMIX configuration settings. The actual values may or may not be equal. (Priority 3)
*NE  Indicates differences were detected. (Priority 2)
*NS  Indicates that the attribute is not supported on one of the systems. Will not cause a global not equal condition. (Priority 5)
*RCYSBM  Indicates that MIMIX AutoGuard submitted an automatic audit recovery action that must be processed through the user journal replication processes. The database apply (DBAPY) will attempt the recovery and send an *ERROR or *INFO notification to indicate the outcome of the recovery attempt.
*RCYFAILED  Indicates that automatic recovery attempts via MIMIX AutoGuard failed to recover the detected difference.
*RECOVERED (3)  Indicates that recovery for this object was successful. (Priority 1)
*SJ  Unable to process selected member. The source file is not journaled. (Priority 1)
*SP  Unable to process selected member. See messages preceding message LVE3D42 in the job log. (Priority 1)
*UA  Object status is unknown due to object activity. If an object difference is found and the comparison has a value specified on the Maximum replication lag prompt, the difference is seen as unknown due to object activity. This status is only displayed in the summary record. Note: The Maximum replication lag prompt is only valid when a data group is specified on the command. (Priority 2)
*UN  Indicates that the object's synchronization status is unknown. (Priority 4)

1. Not all values may be possible for every Compare command.
2. Priorities are used to determine the prioritized summary value shown in output files for Compare Attribute commands.
3. The value *RECOVERED can only appear in an output file modified by a recovery action. The object was initially found to be *NE or *NC but MIMIX autonomic functions recovered the object.
For most attributes, when the output file is viewed from a 5250 emulator, a blank value in the System 1 Indicator or System 2 Indicator field of a detail row means the object was found with no special conditions. MIMIX determines the value of the Difference Indicator field from the two system indicators according to Table 86. For example, if the System 1 Indicator is *NOTFOUND and the System 2 Indicator is blank (object found), the resulting Difference Indicator is *NE.

Table 86. Difference Indicator values that are derived from System Indicator values
(Rows: System 2 Indicator. Columns: System 1 Indicator.)

System 2 Indicator     Object found (blank)                      *NOTCMPD     *NOTFOUND   *NOTSPT     *RTVFAILED   *DAMAGED
Object found (blank)   *EQ / *EQ (LOB) / *NE / *UA / *EC / *NC   *NA          *NE         *NS         *UN          *NE
*NOTCMPD               *NA                                       *NA          *NE         *NS         *UN          *NE
*NOTFOUND              *NE / *UA                                 *NE / *UA    *EQ         *NE / *UA   *NE / *UA    *NE
*NOTSPT                *NS                                       *NS          *NE         *NS         *UN          *NE
*RTVFAILED             *UN                                       *UN          *NE         *UN         *UN          *NE
*DAMAGED               *NE                                       *NE          *NE         *NE         *NE          *NE

When viewed through Vision Solutions Portal, data group directionality is automatically resolved so that differences are viewed as Source and Target instead of System 1 and System 2.
For a small number of specific attributes, the comparison is more complex. The results returned vary according to parameters specified on the compare request and MIMIX configuration values. For more information see the following topics:
Comparison results for journal status and other journal attributes on page 598
Comparison results for auxiliary storage pool ID (*ASP) on page 602
Comparison results for user profile status (*USRPRFSTS) on page 605
Comparison results for user profile password (*PRFPWDIND) on page 608
Where was the difference detected
The System 1 Indicator (SYS1IND) and System 2 Indicator (SYS2IND) fields show the status of the attribute on each system as determined by the compare request. Table 87 identifies the possible values. These fields are available in both summary and detail rows in the output file.
For comparisons which include a data group, the Data Source (DTASRC) field identifies which system is configured as the source for replication.

Table 87. Possible values for output file fields SYS1IND and SYS2IND

Value  Description  (Summary Record Priority (1))
<blank>  No special conditions exist for this object. (Priority 5)
*DAMAGED  Object damaged condition. (Priority 3)
*MBRNOTFND  Member not found. (Priority 2)
*NOTCMPD  Attribute not compared. Due to MIMIX configuration settings, this attribute cannot be compared. (Priority N/A (2))
*NOTFOUND  Object not found. (Priority 1)
*NOTSPT  Attribute not supported. Not all attributes are supported on all IBM i releases. This is the value used to indicate that an unsupported attribute has been specified. (Priority N/A (2))
*RTVFAILED  Unable to retrieve the attributes of the object. The reason for failure may be a lock condition. (Priority 4)

1. The priority indicates the order of precedence MIMIX uses when setting the system indicator fields in the summary record.
2. This value is not used in determining the priority of summary level records.
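As a sketch of how these fields can be combined, the following query reports rows where an object or member is missing from one of the systems, along with the configured replication source. MYLIB.CMPOUT is a placeholder output file name; SYS1IND, SYS2IND, DTASRC, and DIFIND are the fields documented here.

    -- Report compared objects that are missing on one system.
    -- MYLIB.CMPOUT is a placeholder output file name.
    SELECT SYS1IND, SYS2IND, DTASRC, DIFIND
      FROM MYLIB.CMPOUT
     WHERE SYS1IND IN ('*NOTFOUND', '*MBRNOTFND')
        OR SYS2IND IN ('*NOTFOUND', '*MBRNOTFND');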
What attributes were compared
In each detailed row, the Compared Attribute (CMPATR) field identifies a compared
attribute. The following topics identify the attributes that can be compared by each
command and the possible values returned.
Attributes compared and expected results - #FILATR, #FILATRMBR audits on
page 581
Attributes compared and expected results - #OBJATR audit on page 586
Attributes compared and expected results - #IFSATR audit on page 594
Attributes compared and expected results - #DLOATR audit on page 596
Attributes compared and expected results - #FILATR, #FILATRMBR audits
The Compare File Attribute (CMPFILA) command supports comparisons at the file
and member level. Most of the attributes supported are for file-level comparisons. The
#FILATR audit and the #FILATRMBR audit each invoke the CMPFILA command for
the comparison phase of the audit.
Some attributes are common file attributes such as owner, authority, and creation
date. Most of the attributes, however, are file-specific attributes. Examples of file-
specific attributes include triggers, constraints, database relationships, and journaling
information.
The Difference Indicator (DIFIND) returned after comparing file attributes may depend on whether the file is defined by file entries or object entries. For instance, an attribute could be equal (*EC) according to the database configuration but not equal (*NC) according to the object configuration. See What attribute differences were detected on page 577.
Table 88 lists the attributes that can be compared and the value shown in the
Compared Attribute (CMPATR) field in the output file. The Returned Values column
lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value
(SYS2VAL) columns as a result of running the comparison.
Table 88. Compare File Attributes (CMPFILA) attributes

Attribute  Description: Returned values (SYS1VAL, SYS2VAL)
*ACCPTH (1)  Access path: AR - Arrival sequence access path; EV - Encoded vector with a 1-, 2-, or 4-byte vector; KC - Keyed sequence access path with duplicate keys allowed, accessed in first-changed-first-out (FCFO) order; KF - Keyed sequence access path with duplicate keys allowed, accessed in first-in-first-out (FIFO) order; KL - Keyed sequence access path with duplicate keys allowed, accessed in last-in-first-out (LIFO) order; KN - Keyed sequence access path with duplicate keys allowed, no order guaranteed when accessing duplicate keys; KU - Keyed sequence access path with no duplicate keys allowed (UNIQUE)
*ACCPTHVLD (2)  Access path valid: *YES, *NO
*ACCPTHSIZ (1)  Access path size: *MAX4GB, *MAX1TB
*ALWDLT  Allow delete operation: *YES, *NO
*ALWOPS  Allow operations: Group which checks attributes *ALWDLT, *ALWRD, *ALWUPD, *ALWWRT
*ALWRD  Allow read operation: *YES, *NO
*ALWUPD  Allow update operation: *YES, *NO
*ALWWRT  Allow write operation: *YES, *NO
*ASP  Auxiliary storage pool ID: 1-16 (pre-V5R2), 1-255 (V5R2); 1 = System ASP. See Comparison results for auxiliary storage pool ID (*ASP) on page 602 for details.
*AUDVAL  Object audit value: *NONE, *CHANGE, *ALL
*AUT  File authorities: Group which checks attributes *AUTL, *PGP, *PRVAUTIND, *PUBAUTIND
*AUTL  Authority list name: *NONE, list name
*BASEDONPF (2)  Name of based-on physical file member: 33 character name in the format library/file(member)
*BASIC  Pre-determined set of basic attributes: Group which checks a pre-determined set of attributes. When *FILE is specified for the Comparison level (CMPLVL), these attributes are compared: *CST (group), *NBRMBR, *OBJATR, *RCDFMT, *TEXT, and *TRIGGER (group). When *MBR is specified for the Comparison level (CMPLVL), these attributes are compared: *CURRCDS, *EXPDATE, *NBRDLTRCD, *OBJATR, *SHARE, and *TEXT.
*CCSID (1)  Coded character set: 1-65535
*CST  Constraint attributes: Group which checks attributes *CSTIND, *CSTNBR
*CSTIND (3)  Constraint equal indicator: No value, indicator only (6). When this attribute is returned in output, its Difference Indicator value indicates whether the number of constraints, constraint names, constraint types, and the check pending attribute are equal. For referential and check constraints, the constraint state as well as whether the constraint status is enabled or disabled is also compared.
*CSTNBR (3)  Number of constraints: Numeric value
*CURRCDS  Current number of records: 0-4294967295
*DBCSCAP  DBCS capable: *YES, *NO
*DBR  Database relation attributes: Group which checks *DBRIND, *OBJATR
*DBRIND (3)  Database relations: No value, indicator only (6). When this attribute is returned in output, its Difference Indicator value indicates whether the number of database relations and the dependent file names are equal.
*EXPDATE (1)  Expiration date for member: Blank for *NONE, or a date in CYYMMDD format, where C equals the century (0 is 19nn and 1 is 20nn).
*EXTENDED  Pre-determined, extended set: Valid only for a Comparison level of *FILE, this group compares the basic set of attributes (*BASIC) plus an extended set. The following attributes are compared: *ACCPTH, *AUT (group), *CCSID, *CST (group), *CURRCDS, *DBR (group), *MAXKEYL, *MAXMBRS, *MAXRCDL, *NBRMBR, *OBJATR, *OWNER, *PFSIZE (group), *RCDFMT, *REUSEDLT, *SELOMT, *SQLTYP, *TEXT, and *TRIGGER (group).
*FIRSTMBR (1) (4)  Name of member *FIRST: 10 character name; *NONE if the file has no members.
*FRCKEY (1)  Force keyed access path: *YES, *NO
*FRCRATIO (1)  Records to force a write: *NONE, 1-32767
*INCRCDS (1)  Increment number of records: 0-32767
*JOIN  Join logical file: *YES, *NO. Note: Add, update, and delete authorities are not checked; differences in these authorities do not result in an *NE condition.
*JOURNAL  Journal attributes: Group which checks *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOMIT. Results are described in Comparison results for journal status and other journal attributes on page 598.
*JOURNALED  File is currently journaled: *YES, *NO
*JRN  Current or last journal: 10 character name, blank if never journaled
*JRNIMG  Record images: *AFTER, *BOTH
*JRNLIB  Current or last journal library: 10 character name, blank if never journaled
*JRNOMIT  Journal entries to be omitted: *OPNCLO, *NONE
*LANGID (1)  Language ID: 3 character ID
*LASTMBR (1) (4)  Name of member *LAST: 10 character name; *NONE if the file has no members.
*LVLCHK (1)  Record format level check: *YES, *NO
*MAINT (1)  Access path maintenance: *IMMED, *REBLD, *DLY (5)
*MAXINC (1)  Maximum increments: 0-32767
*MAXKEYL (1)  Maximum key length: 1-2000
*MAXMBRS (1)  Maximum members: *NOMAX, 1-32767
*MAXPCT (1)  Maximum % deleted records allowed: *NONE, 1-100
*MAXRCDL (1)  Maximum record length: 1-32766
*NBRDLTRCD (1)  Current number of deleted records: 0-4294967295
*NBRMBR (1)  Number of members: 0-32767
*NBRRCDS (1)  Initial number of records: *NOMAX, 1-2147483646
*OBJCTLLVL (1)  Object control level: 8 character user-defined value
*OWNER  File owner: User profile name
*PFSIZE  File size attributes: Group which checks *CURRCDS, *INCRCDS, *MAXINC, *NBRDLTRCD, *NBRRCDS
*PGP  Primary group: *NONE, user profile name
*PRVAUTIND  Private authority indicator: No value, indicator only (6). When this attribute is returned in output, its Difference Indicator value indicates whether the number of private authorities and the private authority values are equal.
*PUBAUTIND  Public authority indicator: No value, indicator only (6). When this attribute is returned in output, its Difference Indicator value indicates whether the public authority values are equal.
*RCDFMT  Number of record formats: 1-32
*RECOVER (1)  Access path recovery: *IPL, *AFTIPL, *NO
*REUSEDLT (1)  Reuse deleted records: *YES, *NO
*SELOMT  Select/omit file: *YES, *NO
*SHARE (1)  Share open data path: *YES, *NO
*SQLTYP  SQL file type: PF types - NONE, TABLE; LF types - INDEX, VIEW, NONE
*TEXT (1)  Text description: 50 character value
*TRIGGER  Trigger attributes: Group which checks *TRGIND, *TRGNBR, *TRGXSTIND
*TRGIND (3)  Trigger equal indicator: No value, indicator only (6). When this attribute is returned in output, its Difference Indicator value indicates whether triggers are enabled or disabled, and whether the number of triggers, trigger names, trigger time, trigger event, and trigger condition with an event type of update are equal.
*TRGNBR (3)  Number of triggers: Numeric value
*TRGXSTIND (3)  Trigger existence indicator: No value, indicator only (6). When this attribute is returned in output, its Difference Indicator value indicates whether a trigger program exists on the system.
*USRATR  User-defined attribute: 10 character user-defined value
*WAITFILE (1)  Maximum file wait time: *IMMED, *CLS, 1-32767
*WAITRCD (1)  Maximum record wait time: *IMMED, *NOMAX, 1-32767

Notes:
1. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
2. This attribute is only compared for logical file members by the #FILATRMBR audit.
3. This attribute cannot be specified as input for comparing, but it is included in a group attribute. When the group attribute is checked, this value may appear in the output.
4. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the file is configured for system journal replication with a configured Omit content (OMTDTA) value of *FILE.
5. Differences detected for this attribute are marked as *EC (equal configuration) when the source is set to *IMMED and the target is set to *DLY by Parallel Access Path Maintenance.
6. If *PRINT is specified in the comparison, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, these values are blank.
Attributes compared and expected results - #OBJATR audit
The #OBJATR audit calls the Compare Object Attributes (CMPOBJA) command and places the results in an output file. Table 89 lists the attributes that can be compared by the CMPOBJA command and the value shown in the Compared Attribute (CMPATR) field in the output file. The command supports attributes that are common among most library-based objects as well as extended attributes which are unique to specific object types, such as subsystem descriptions, user profiles, and data areas. The Returned Values column lists the values you can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) columns as a result of running the compare.
Table 89. Compare Object Attributes (CMPOBJA) attributes

Attribute  Description: Returned values (SYS1VAL, SYS2VAL)
*ACCPTHSIZ (1) (2)  Access path size. Valid for logical files only: *MAX4GB, *MAX1TB
*AJEIND  Auto start job entries. Valid for subsystem descriptions only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the number of auto start job entries, the job entry and associated job description, and the library entry values are equal.
*ASP  Auxiliary storage pool ID: 1-16 (pre-V5R1), 1-32 (V5R1), 1-255 (V5R2); 1 = System ASP. See Comparison results for auxiliary storage pool ID (*ASP) on page 602 for details.
*ASPNBR  Number of defined storage pools. Valid for subsystem descriptions only: Numeric value
*ATTNPGM (2)  Attention key handling program. Valid for user profiles only: *SYSVAL, *NONE, *ASSIST, attention program name
*AUDVAL  Object audit value: *NONE, *USRPRF, *CHANGE, *ALL
*AUT  Authority attributes: Group which checks *AUTL, *PGP, *PRVAUTIND, *PUBAUTIND
*AUTCHK (2)  Authority to check. Valid for job queues only: *OWNER, *DTAAUT
*AUTL  Authority list name: *NONE, list name
*BASIC  Pre-determined set of basic attributes: Group which checks a pre-determined set of attributes. These attributes are compared: *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and *USRATR.
*CCSID (2)  Character identifier control. Valid for user profiles only: *SYSVAL, ccsid-value
*CNTRYID (2)  Country ID. Valid for user profiles only: *SYSVAL, country-id
*COMMEIND  Communications entries. Valid for subsystem descriptions only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the number of communication entries, maximum number of active jobs, communication device, communication mode, associated job description and library, and the default user entry values are equal.
*CRTAUT (2)  Authority given to users who do not have specific authority to the object. Valid for libraries only: *SYSVAL, *CHANGE, *ALL, *USE, *EXCLUDE
*CRTOBJAUD (2)  Auditing value for objects created in this library. Valid for libraries only: *SYSVAL, *NONE, *USRPRF, *CHANGE, *ALL
*CRTOBJOWN  Profile that owns objects created by the user. Valid for user profiles only: *USRPRF, *GRPPRF, profile-name
*CRTTSP  Object creation date: YYYY-MM-DD-HH.MM.SS.mmmmmm
*CURLIB  Current library. Valid for user profiles only: *CRTDFT, current-library
*DATACRC (2)  Data cyclic redundancy check (CRC). Valid for data queues only: 10 character value
*DDMCNV (2)  DDM conversation. Valid for job descriptions only: *KEEP, *DROP
*DECPOS  Decimal positions. Valid for data areas only: 0-9
*DOMAIN  Object domain: *SYSTEM, *USER
*DTAARAEXT  Data area extended attributes: Group which checks *DECPOS, *LENGTH, *TYPE, *VALUE
*EXTENDED  Pre-determined, extended set: Group which compares the basic set of attributes (*BASIC) plus an extended set. The following attributes are compared: *AUT, *CRTTSP, *DOMAIN, *INFSTS, *OBJATR, *TEXT, and *USRATR.
*FRCRATIO (1) (2)  Records to force a write. Valid for logical files only: *NONE, 1-32767
*GID  Group profile ID number. Valid for user profiles only: 1-4294967294
*GRPAUT  Group authority to created objects. Valid for user profiles only: *NONE, *ALL, *CHANGE, *USE, *EXCLUDE
*GRPAUTTYP  Group authority type. Valid for user profiles only: *PGP, *PRIVATE
*GRPPRF  Group profile name. Valid for user profiles only: *NONE, profile-name
*INFSTS  Information status: *OK (no errors occurred), *RTVFAILED (no information returned - insufficient authority or object is locked), *DAMAGED (object is damaged or partially damaged)
*INLMNU  Initial menu. Valid for user profiles only: Menu - *SIGNOFF, menu name; Library - *LIBL, library name
*INLPGM  Initial program. Valid for user profiles only: Program - *NONE, program name; Library - *LIBL, library name
*JOBDEXT  Job description extended attributes: Group which checks *DDMCNV, *JOBQ, *JOBQLIB, *JOBQPRI, *LIBLIND, *LOGOUTPUT, *OUTQ, *OUTQLIB, *OUTQPRI, *PRTDEV
*JOBQ (2)  Job queue. Valid for job descriptions only: 10 character name
*JOBQEIND  Job queue entries. Valid for subsystem descriptions only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the number of job queue entries, job queue names, job queue libraries, and the order of entries are the same.
*JOBQEXT  Job queue extended attributes: Group which checks *AUTCHK, *JOBQSBS, *JOBQSTS, *OPRCTL
*JOBQLIB (2)  Job queue library. Valid for job descriptions only: 10 character name
*JOBQPRI (2)  Job queue priority. Valid for job descriptions only: 1 (highest) - 9 (lowest)
*JOBQSBS (2)  Subsystem that receives jobs from this queue. Valid for job queues only: Subsystem name
*JOBQSTS (2)  Job queue status. Valid for job queues only: HELD, RELEASED
*JOURNAL  Journal attributes: Group which checks *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOMIT (4). Results are described in Comparison results for journal status and other journal attributes on page 598.
*JOURNALED  Object is currently journaled: *YES, *NO
*JRN  Current or last journal: 10 character name
*JRNIMG  Record images: *AFTER, *BOTH
*JRNLIB  Current or last journal library: 10 character name
*JRNOMIT  Journal entries to be omitted: *OPNCLO, *NONE
*LANGID (2)  Language ID. Valid for user profiles only: *SYSVAL, language-id
*LENGTH  Data area length. Valid for data areas only: 1-2000 (character), 1-24 (decimal), 1 (logical)
*LIBEXT  Extended library information attributes: Group which checks *CRTAUT, *CRTOBJAUD
*LIBLIND  Initial library list. Valid for job descriptions only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the number of library list entries and the entry list values are equal. The comparison is order dependent.
*LMTCPB  Limit capabilities. Valid for user profiles only: *PARTIAL, *YES, *NO
*LOGOUTPUT (2)  Job log output. Valid for job descriptions only: *SYSVAL, *JOBLOGSVR, *JOBEND, *PND
*LVLCHK (1) (2)  Record format level check. Valid for logical files only: *YES, *NO
*MAINT (1) (2)  Access path maintenance. Valid for logical files only: *DLY, *IMMED, *REBLD
*MAXACT (2)  Maximum active jobs. Valid for subsystem descriptions only: Numeric value, *NOMAX (32767)
*MAXMBRS (1) (2)  Maximum members. Valid for logical files only: *NOMAX, 1-32767
*MSGQ (2)  Message queue. Valid for user profiles only: Message queue - message queue name; Library - *LIBL, library name
*NBRMBR (1) (2)  Number of logical file members. Valid for logical files only: 0-32767
*OBJATR  Object attribute: 10 character object extended attribute
*OBJCTLLVL (2)  Object control level. Valid for object types that support this attribute (5): 8 character user-defined value
*OPRCTL (2)  Operator controlled. Valid for job queues only: *YES, *NO
*OUTQ (2)  Output queue. Valid for job descriptions only: *USRPRF, *DEV, *WRKSTN, output queue name
*OUTQLIB (2)  Output queue library. Valid for job descriptions only: 10 character name
*OUTQPRI (2)  Output queue priority. Valid for job descriptions only: 1 (highest) - 9 (lowest)
*OWNER  Object owner: 10 character name
*PGP  Primary group: *NONE, user profile name
*PRESTIND  Pre-start job entries. Valid for subsystem descriptions only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the number of prestart jobs, program, user profile, start job, wait for job, initial jobs, maximum jobs, additional jobs, threshold, maximum users, job name, job description, first and second class, and number of first and second class jobs values are equal.
*PRFOUTQ (2)  Output queue. Valid for user profiles only: *LIBL/*WRKSTN, *DEV
*PRFPWDIND  User profile password indicator: See Comparison results for user profile password (*PRFPWDIND) on page 608 for details.
*PRTDEV (2)  Printer device. Valid for job descriptions only: *USRPRF, *SYSVAL, *WRKSTN, printer device name
*PRVAUTIND  Private authority indicator: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the number of private authorities and the private authority values are equal.
*PUBAUTIND  Public authority indicator: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the public authority values are equal.
*PWDEXPITV  Password expiration interval. Valid for user profiles only: *SYSVAL, *NOMAX, 1-366 days
*PWDIND  No password indicator. Valid for user profiles only: *YES (no password), *NO (password)
*QUEALCIND  Job queue allocation indicator. Valid for subsystem descriptions only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the job queue entries for a subsystem are in the same order and have the same queue names and queue library names. It also compares the allocation indicator values.
*RLOCIND  Remote location entries. Valid for subsystem descriptions only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the number of remote location entries, remote location, mode, job description and library, maximum active jobs, and default user entry values are equal.
*RTGEIND  Routing entries. Valid for subsystem descriptions only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the number of routing entries, sequence number, maximum active, steps, compare start, entry program, class, and compare entry values are equal.
*SBSDEXT  Subsystem description extended attributes: Group which checks *AJEIND, *ASPNBR, *COMMEIND, *JOBQEIND, *MAXACT, *PRESTIND, *RLOCIND, *RTGEIND, *SBSDSTS
*SBSDSTS (2)  Subsystem status. Valid for subsystem descriptions only: *ACTIVE, *INACTIVE
*SIZE  Object size: Numeric value
*SPCAUTIND  Special authorities. Valid for user profiles only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether special authority values are equal.
*SQLSP  SQL stored procedures. Valid for programs and service programs only: *NONE, or indicator only (3). *NONE is returned when there are no stored procedures associated with the program or service program. When the indicator only is returned in output, the Difference Indicator value identifies whether the SQL stored procedures associated with the object are equal.
*SQLUDF  SQL user defined functions. Valid for programs and service programs only: *NONE, or indicator only (3). *NONE is returned when there are no user defined functions associated with the program or service program. When the indicator only is returned in output, the Difference Indicator value identifies whether the SQL user defined functions associated with the object are equal.
*SUPGRPIND  Supplemental groups. Valid for user profiles only: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether supplemental group values are equal.
*TEXT (2)  Text description: 50 character description
*TYPE  Data area type (data area types of DDM are resolved to actual data area types). Valid for data areas only: *CHAR, *DEC, *LGL
*UID  User profile ID number. Valid for user profiles only: 1-4294967294
*USRATR (2)  User-defined attribute: 10 character user-defined value
*USRCLS  User class. Valid for user profiles only: *SECOFR, *SECADM, *PGMR, *SYSOPR, *USER
*USRPRFEXT  User profile extended attributes: Group which checks *ATTNPGM, *CCSID, *CNTRYID, *CRTOBJOWN, *CURLIB, *GID, *GRPAUT, *GRPAUTTYP, *GRPPRF, *INLMNU, *INLPGM, *LANGID, *LMTCPB, *MSGQ, *PRFOUTQ, *PWDEXPITV, *PWDIND, *SPCAUTIND, *SUPGRPIND, *USRCLS
*USRPRFSTS  User profile status: *ENABLED, *DISABLED (6). For details, see Comparison results for user profile status (*USRPRFSTS) on page 605.
*VALUE (2)  Data area value. Valid for data areas only: Character value of the data

Notes:
1. This attribute only applies to logical files. Use the Compare File Attributes (CMPFILA) command to compare or omit physical file attributes.
2. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
3. If *PRINT is specified for the output format on the compare request, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, these values are blank.
4. These attributes are compared for object types of *FILE, *DTAQ, and *DTAARA. These are the only object types supported by IBM user journals.
5. The *OBJCTLLVL attribute is only supported on the following object types: *AUTL, *CNNL, *COSD, *CTLD, *DEVD, *DTAARA, *DTAQ, *FILE, *IPXD, *LIB, *LIND, *MODD, *NTBD, *NWID, *NWSD, and *USRPRF.
6. The profile status is only compared if no data group is specified or the USRPRFSTS value is *SRC for the specified data group. If a data group is specified on the CMPOBJA command and the USRPRFSTS value on the object entry is *TGT, *ENABLED, or *DISABLED, the user profile status is not compared.
Attributes compared and expected results - #IFSATR audit
The #IFSATR audit calls the Compare IFS Attributes (CMPIFSA) command and
places the results in an output file. Table 90 lists the attributes that can be compared
by the CMPIFSA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The Returned Values column lists the values you
can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL)
columns as a result of running the compare.
Table 90. Compare IFS Attributes (CMPIFSA) attributes

Attribute  Description: Returned values (SYS1VAL, SYS2VAL)
*ALWSAV (1)  Allow save: *YES, *NO
*ASP  Auxiliary storage pool: 1-16 (pre-V5R1), 1-255 (V5R1); 1 = System ASP. See Comparison results for auxiliary storage pool ID (*ASP) on page 602 for details.
*AUDVAL  Object auditing value: *ALL, *CHANGE, *NONE, *USRPRF
*AUT  Authority attributes: Group which checks attributes *AUTL, *PGP, *PUBAUTIND, *PRVAUTIND
*AUTL  Authority list name: *NONE, list name
*BASIC  Pre-determined set of basic attributes: Group which checks a pre-determined set of attributes. The following attributes are compared: *CCSID, *DATASIZE, *OBJTYPE, and the group *PCATTR.
*CCSID (1)  Coded character set: 1-65535
*CRTTSP (2)  Create timestamp: SAA format (YY-MM-DD-HH.MM.SS.mmmmmm)
*DATACRC (3)  Data cyclic redundancy check (CRC): 8 character value
*DATASIZE (1)  Data size: 0-4294967295
*EXTENDED  Pre-determined, extended set: Group which compares the basic set of attributes (*BASIC) plus an extended set. The following attributes are compared: *AUT (group), *CCSID, *DATASIZE, *OBJTYPE, *OWNER, and *PCATTR (group).
*JOURNAL  Journal information: Group which checks attributes *JOURNALED, *JRN, *JRNLIB, *JRNIMG, *JRNOPT. Results are described in Comparison results for journal status and other journal attributes on page 598.
*JOURNALED  File is currently journaled: *YES, *NO
*JRN  Current or last journal: 10 character name
*JRNIMG  Record images: *AFTER, *BOTH
*JRNLIB  Current or last journal library: 10 character name
*JRNOPT  Journal optional entries: *YES, *NO
*OBJTYPE  Object type: *STMF, *DIR, *SYMLNK
*OWNER  File owner: 10 character name
*PCARCHIVE (1)  Archived file: *YES, *NO
*PCATTR  PC attributes: Group which checks *PCARCHIVE, *PCHIDDEN, *PCREADO, *PCSYSTEM
*PCHIDDEN (1)  Hidden file: *YES, *NO
*PCREADO (1)  Read only attribute: *YES, *NO
*PCSYSTEM (1)  System file: *YES, *NO
*PGP  Primary group: *NONE, user profile name
*PRVAUTIND  Private authority indicator: No value, indicator only (4). When this attribute is returned in output, its Difference Indicator value indicates whether the number of private authorities and the private authority values are equal.
*PUBAUTIND  Public authority indicator: No value, indicator only (4). When this attribute is returned in output, its Difference Indicator value indicates whether the public authority values are equal.

Notes:
1. Differences detected for this attribute are marked as *EC (equal configuration) when the compare request specified a data group and the object is configured for system journal replication with a configured object auditing value of *NONE.
2. The *CRTTSP attribute is not compared for directories (*DIR) or symbolic links (*SYMLNK). For stream files (*STMF), the #IFSATR audit omits the *CRTTSP attribute from comparison since creation timestamps are not preserved during replication. Running the CMPIFSA command directly will detect differences in the creation timestamps for stream files.
3. When a stream file has Storage Freed *YES on either the source system or the target system, the status of this attribute is reflected as not supported (*NS) and the data is not compared.
4. If *PRINT is specified in the comparison, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, these values are blank.
Attributes compared and expected results - #DLOATR audit
The #DLOATR audit calls the Compare DLO Attributes (CMPDLOA) command and
places the results in an output file. Table 91 lists the attributes that can be compared
by the CMPDLOA command and the value shown in the Compared Attribute
(CMPATR) field in the output file. The Returned Values column lists the values you
can expect in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL)
columns as a result of running the compare.
Table 91. Compare DLO Attributes (CMPDLOA) attributes

Attribute  Description: Returned values (SYS1VAL, SYS2VAL)
*ASP  Auxiliary storage pool: 1-16 (pre-V5R1), 1-32 (V5R1); 1 = System ASP. See Comparison results for auxiliary storage pool ID (*ASP) on page 602 for details.
*AUDVAL  Object audit value: *NONE, *USRPRF, *CHANGE, *ALL
*AUT  Authority attributes: Group which checks *AUTL, *PGP, *PUBAUTIND, *PRVAUTIND
*AUTL  Authority list name: *NONE, list name
*BASIC  Pre-determined set of basic attributes: Group which checks a pre-determined set of attributes. The following attributes are compared: *CCSID, *DATASIZE, *OBJTYPE, *PCATTR, and *TEXT.
*CCSID  Coded character set: 1-65535
*CRTTSP  Create timestamp: SAA format (YY-MM-DD-HH.MM.SS.mmmmmm)
*DATASIZE (1)  Data size: 0-4294967295
*EXTENDED  Pre-determined, extended set: Group which compares the basic set of attributes (*BASIC) plus an extended set. The following attributes are compared: *AUT, *CCSID, *DATASIZE, *OBJTYPE, *OWNER, *PCATTR, and *TEXT.
*MODTSP  Modify timestamp: SAA format (YY-MM-DD-HH.MM.SS.mmmmmm)
*OBJTYPE (2)  Object type: *DOC, *FLR
*OWNER  File owner: 10 character name
*PCARCHIVE  Archived file: *YES, *NO
*PCATTR  PC attributes: Group which checks *PCARCHIVE, *PCHIDDEN, *PCREADO, *PCSYSTEM
*PCHIDDEN  Hidden file: *YES, *NO
*PCREADO  Read only attribute: *YES, *NO
*PCSYSTEM  System file: *YES, *NO
*PGP  Primary group: *NONE, user profile name
*PRVAUTIND  Private authority indicator: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the number of private authorities and the private authority values are equal.
*PUBAUTIND  Public authority indicator: No value, indicator only (3). When this attribute is returned in output, its Difference Indicator value indicates whether the public authority values are equal.
*TEXT  Text description: 50 character description

Notes:
1. This attribute is not supported for DLOs with an object type of *FLR.
2. This attribute is always compared.
3. If *PRINT is specified in the comparison, an indicator appears in the System 1 and System 2 columns. If *OUTFILE is specified, these values are blank.
Comparison results for journal status and other journal attributes
The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), and Compare IFS Attributes (CMPIFSA) commands support comparing the journaling attributes listed in Table 92 for objects replicated from the user journal. These commands function similarly when comparing journaling attributes.

Table 92. Journaling attributes
When specified on the CMPOBJA command, these values apply only to files, data areas, or data queues. When specified on the CMPFILA command, these values apply only to PF-DTA and PF38-DTA files.

*JOURNAL  Object journal information attributes. This value acts as a group selection, causing all other journaling attributes to be selected.
*JOURNALED  Journal status. Indicates whether the object is currently being journaled. This attribute is always compared when any of the other journaling attributes are selected.
*JRN (1)  Journal. Indicates the name of the current or last journal. If blank, the object has never been journaled.
*JRNIMG (1) (2)  Journal image. Indicates the kinds of images that are written to the journal receiver for changes to objects.
*JRNLIB (1)  Journal library. Identifies the library that contains the journal. If blank, the object has never been journaled.
*JRNOMIT (1)  Journal omit. Indicates whether file open and close journal entries are omitted.

1. When these values are specified on a Compare command, the journal status (*JOURNALED) attribute is always evaluated first. The result of the journal status comparison determines whether the command will compare the specified attribute.
2. Although *JRNIMG can be specified on the CMPIFSA command, it is not compared even when the journal status is as expected. The journal image status is reflected as not supported (*NS) because the operating system only supports after (*AFTER) images.

When a compare is requested, MIMIX determines the result displayed in the Difference Indicator field by considering whether the file is journaled, whether the request includes a data group, and the data group's configured settings for journaling. Regardless of which journaling attribute is specified on the command, MIMIX always checks the journaling status first (*JOURNALED attribute). If the file or object is journaled on both systems, MIMIX then considers whether the command specified a data group definition before comparing any other requested attribute.

Compares that do not specify a data group - When no data group is specified on the compare request, MIMIX compares the journaled status (*JOURNALED attribute). Table 93 shows the result displayed in the Difference Indicator field. If the file or object is not journaled on both systems, the compare ends. If both source and target systems are journaled, MIMIX then compares any other specified journaling attribute.

Table 93. Difference indicator values for *JOURNALED attribute when no data group is specified

Journal Status        Target: Yes   Target: No   Target: *NOTFOUND
Source: Yes           *EQ           *NE          *NE
Source: No            *NE           *EQ          *NE
Source: *NOTFOUND     *NE           *NE          *UN

Note (applies to Table 93 through Table 96): The returned values for the journal status found on the source and target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
Compares that specify a data group - When a data group is specified on the compare request, MIMIX compares the journaled status (*JOURNALED attribute) to the configuration values. If both source and target systems are journaled according to the expected configuration settings, then MIMIX compares any other specified journaling attribute against the configuration settings.
The Compare commands vary slightly in which configuration settings are checked.
For CMPFILA requests, if the journaled status is as configured, any other specified journal attributes are compared. Possible results from comparing the *JOURNALED attribute are shown in Table 94.
For CMPOBJA and CMPIFSA requests, if the journaled status is as configured and the configuration specifies *YES for Cooperate with database (COOPDB), then any other specified journal attributes are compared. Possible results from comparing the *JOURNALED attribute are shown in Table 94 and Table 95. If the configuration specifies COOPDB(*NO), only the journaled status is compared; possible results are shown in Table 96.
Table 94, Table 95, and Table 96 show results for the *JOURNALED attribute that can appear in the Difference Indicator field when the compare request specified a data group and considered the configuration settings.
Table 94 shows results when the configured settings for Journal on target and Cooperate with database are both *YES.

Table 94. Difference indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *YES for JRNTGT and COOPDB

Journal Status        Target: Yes   Target: No   Target: *NOTFOUND
Source: Yes           *EC           *EC          *NE
Source: No            *NC           *NC          *NE
Source: *NOTFOUND     *NE           *NE          *UN

Table 95 shows results when the configured settings are *NO for Journal on target and *YES for Cooperate with database.

Table 95. Difference indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *NO for JRNTGT and *YES for COOPDB

Journal Status        Target: Yes   Target: No   Target: *NOTFOUND
Source: Yes           *NC           *EC          *NE
Source: No            *NC           *NC          *NE
Source: *NOTFOUND     *NE           *NE          *UN

Table 96 shows results when the configured setting for Cooperate with database is *NO. In this scenario, you may want to investigate further. Even though the Difference Indicator shows values marked as equal by configuration (*EC), the object can be not journaled on one or both systems. The actual journal status values are returned in the System 1 Value (SYS1VAL) and System 2 Value (SYS2VAL) fields.

Table 96. Difference indicator values for *JOURNALED attribute when a data group is specified and the configuration specifies *NO for COOPDB

Journal Status        Target: Yes   Target: No   Target: *NOTFOUND
Source: Yes           *EC           *EC          *NE
Source: No            *EC           *EC          *NE
Source: *NOTFOUND     *NE           *NE          *UN
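One way to perform that investigation is to query the actual journal status values behind the *EC rows. The following sketch uses the documented CMPATR, DIFIND, SYS1VAL, and SYS2VAL fields; MYLIB.CMPOUT is a placeholder name for the output file of the compare request.

    -- Find journal status rows reported as equal by configuration (*EC)
    -- where one or both systems are actually not journaled.
    -- MYLIB.CMPOUT is a placeholder output file name.
    SELECT SYS1VAL, SYS2VAL, DIFIND
      FROM MYLIB.CMPOUT
     WHERE CMPATR = '*JOURNALED'
       AND DIFIND = '*EC'
       AND (SYS1VAL = '*NO' OR SYS2VAL = '*NO');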
How configured journaling settings are determined
When a data group is specified on a compare request, MIMIX also considers configuration settings when comparing journaling attributes. For comparison purposes, MIMIX assumes that the source system is journaled and that the target system is journaled according to configuration settings.
Depending on the command used, there are slight differences in what configuration settings are checked. The CMPFILA, CMPOBJA, and CMPIFSA commands retrieve the following configurable journaling attributes from the data group definition:
The Journal on target (JRNTGT) parameter identifies whether activity replicated through the user journal is journaled on the target system. The default value is *YES.
The System 1 journal definition (JRNDFN1) and System 2 journal definition (JRNDFN2) values are retrieved and used to determine the source journal, source journal library, target journal, and target journal library.
Values for the elements Journal image and Omit open/close entries specified in the File entry options (FEOPT) parameter are retrieved. The default values are *AFTER and *YES, respectively.
Because the data group's values for Journal image and Omit open/close entries can be overridden by a data group file entry or a data group object entry, the CMPFILA and CMPOBJA commands also retrieve these values from the entries. The values determined after the order of precedence is resolved, sometimes called the overall MIMIX configuration values, are used for the compare.
For CMPOBJA and CMPIFSA requests, the value of the Cooperate with database (COOPDB) parameter is retrieved from the data group object entry or data group IFS entry. The default value in object entries is *YES, while the default value in IFS entries is *NO.
Comparison results for auxiliary storage pool ID (*ASP)
The Compare File Attributes (CMPFILA), Compare Object Attributes (CMPOBJA), Compare IFS Attributes (CMPIFSA), and Compare DLO Attributes (CMPDLOA) commands support comparing the auxiliary storage pool (*ASP) attribute for objects replicated from the user journal. These commands function similarly.
When a compare is requested, MIMIX determines the result displayed in the Difference Indicator field by considering whether a data group was specified on the compare request.
Compares that do not specify a data group - When no data group is specified on the compare request, MIMIX compares the *ASP attribute for all files or objects that match the selection criteria specified in the request. Table 97 shows the possible results in the Difference Indicator field.
Compares that specify a data group - When a data group is specified on the compare request (CMPFILA, CMPDLOA, and CMPIFSA commands), MIMIX does not compare the *ASP attribute. Likewise, when a data group is specified on a CMPOBJA request for any object type other than libraries (*LIB), MIMIX does not compare the *ASP attribute. Table 98 shows the possible results in the Difference Indicator field.
Table 97. Difference Indicator values when no data group is specified

ASP Values            Target: ASP1   Target: ASP2   Target: *NOTFOUND
Source: ASP1          *EQ            *NE            *NE
Source: ASP2          *NE            *EQ            *NE
Source: *NOTFOUND     *NE            *NE            *EQ

Table 98. Difference Indicator values for non-library objects when the request specified a data group

ASP Values            Target: ASP1   Target: ASP2   Target: *NOTFOUND
Source: ASP1          *NOTCMPD       *NOTCMPD       *NE
Source: ASP2          *NOTCMPD       *NOTCMPD       *NE
Source: *NOTFOUND     *NE            *NE            *EQ

Note (applies to Table 97 through Table 101): The returned values for the *ASP attribute on the source and target systems are shown in the SYS1VAL and SYS2VAL fields. Which system is source and which is target is determined by the value of the DTASRC field.
For CMPOBJA requests which specify a data group and an object type of *LIB, MIMIX considers configuration settings for the library. Values for the System 1 library ASP number (LIB1ASP), System 1 library ASP device (LIB1ASPD), System 2 library ASP number (LIB2ASP), and System 2 library ASP device (LIB2ASPD) are retrieved from the data group object entry and used in the comparison. Table 99, Table 100, and Table 101 show the possible results in the Difference Indicator field.
Note: For Table 99, Table 100, and Table 101, the results are the same even if the system roles are switched.
Table 99 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies *SRCLIB for the System 1 library ASP number and the data source is system 2.

Table 99. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*SRCLIB) and DTASRC(*SYS2)

ASP Values            Target: ASP1   Target: ASP2   Target: *NOTFOUND
Source: ASP1          *EC            *NC            *NE
Source: ASP2          *NC            *EC            *NE
Source: *NOTFOUND     *NE            *NE            *EQ

Table 100 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies 1 for the System 1 library ASP number and the data source is system 2.

Table 100. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(1) and DTASRC(*SYS2)

ASP Values            Target: 1      Target: 2      Target: *NOTFOUND
Source: 1             *EC            *NC            *NE
Source: 2             *EC            *NC            *NE
Source: *NOTFOUND     *NE            *NE            *EQ

Table 101 shows the expected values for the ASP attribute when the request specifies a data group and the configuration specifies *ASPDEV for the System 1 library ASP number, DEVNAME is specified for the System 1 library ASP device, and the data source is system 2.

Table 101. Difference Indicator values for libraries when a data group is specified and configured values are LIB1ASP(*ASPDEV), LIB1ASPD(DEVNAME), and DTASRC(*SYS2)

ASP Values            Target: DEVNAME   Target: 2      Target: *NOTFOUND
Source: 1             *EC               *NC            *NE
Source: 2             *EC               *NC            *NE
Source: *NOTFOUND     *NE               *NE            *EQ
Comparison results for user profile status (*USRPRFSTS)
When comparing the attribute *USRPRFSTS (user profile status) with the Compare Object Attributes (CMPOBJA) command, MIMIX determines the result displayed in the Difference Indicator field by considering the following:
The status values of the object on both the source and target systems
Configured values for replicating user profile status, at the data group and object entry levels
The value of the Data group definition (DGDFN) parameter specified on the CMPOBJA command.
Compares that do not specify a data group - When the CMPOBJA command does not specify a data group, MIMIX compares the status values between the source and target systems. The result is displayed in the Difference Indicator field, according to Table 85 in Interpreting results of audits that compare attributes on page 577.
Compares that specify a data group - When the CMPOBJA command specifies a data group, MIMIX checks the configuration settings and the values on one or both systems. (For additional information, see How configured user profile status is determined on page 606.)
When the configured value is *SRC, the CMPOBJA command compares the values on both systems. The user profile status on the target system must be the same as the status on the source system, otherwise an error condition is reported. Table 102 shows the possible values.
Table 102. Difference Indicator values when configured user profile status is *SRC

User profile status   Target: *ENABLED   Target: *DISABLED   Target: *NOTFOUND
Source: *ENABLED      *EC                *NC                 *NE
Source: *DISABLED     *NC                *EC                 *NE
Source: *NOTFOUND     *NE                *NE                 *UN

When the configured value is *ENABLED or *DISABLED, the CMPOBJA command checks the target system value against the configured value. If the user profile status on the target system does not match the configured value, an error condition is reported. The source system user profile status is not relevant. Table 103 and Table 104 show the possible values when the configured value is *ENABLED or *DISABLED, respectively.

Table 103. Difference Indicator values when configured user profile status is *ENABLED

User profile status   Target: *ENABLED   Target: *DISABLED   Target: *NOTFOUND
Source: *ENABLED      *EC                *NC                 *NE
Source: *DISABLED     *EC                *NC                 *NE
Source: *NOTFOUND     *NE                *NE                 *UN

Table 104. Difference Indicator values when configured user profile status is *DISABLED

User profile status   Target: *ENABLED   Target: *DISABLED   Target: *NOTFOUND
Source: *ENABLED      *NC                *EC                 *NE
Source: *DISABLED     *NC                *EC                 *NE
Source: *NOTFOUND     *NE                *NE                 *UN
When the configured value is *TGT, the CMPOBJA command does not compare the values because the result is indeterminate. Any differences in user profile status between systems are not reported. Table 105 shows the possible values.

Table 105. Difference Indicator values when configured user profile status is *TGT

User profile status   Target: *ENABLED   Target: *DISABLED   Target: *NOTFOUND
Source: *ENABLED      *NA                *NA                 *NE
Source: *DISABLED     *NA                *NA                 *NE
Source: *NOTFOUND     *NE                *NE                 *UN
How configured user profile status is determined
The data group definition determines the behavior for replicating user profile status unless it is explicitly overridden by a non-default value in a data group object entry. The value determined after the order of precedence is resolved is sometimes called the overall MIMIX configuration value. Unless specified otherwise in the data group or in an object entry, the default is to use the value *SRC from the data group definition. Table 106 shows the possible values at both the data group and object entry levels.
Table 106. Configuration values for replicating user profile status

*DGDFT      Only available for data group object entries, this value indicates that the
            value specified in the data group definition is used for the user profile
            status. This is the default value for object entries.
*DISABLE 1  The status of the user profile is set to *DISABLED when the user profile is
            created or changed on the target system.
*ENABLE 1   The status of the user profile is set to *ENABLED when the user profile is
            created or changed on the target system.
*SRC        This is the default value in the data group definition. The status of the user
            profile on the source system is always used when the user profile is created
            or changed on the target system.
*TGT        If a new user profile is created, the status is set to *DISABLED. If an
            existing user profile is changed, the status of the user profile on the target
            system is not altered.

Note:
1. Data group definitions use these values. In data group object entries, the values *DISABLED and
   *ENABLED are used but have the same meaning.
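For example, a compare of user profile status for the objects of a data group might be
submitted as follows. This is a minimal sketch only: the DGDFN and OUTPUT parameters
are documented in this book, while the selection and outfile keywords shown here (OBJ1,
TYPE, and OUTFILE) and the names MYDGDFN and MYLIB/CMPOUT are illustrative
assumptions; prompt the CMPOBJA command (F4) to confirm the exact keywords on your
installation.

   CMPOBJA DGDFN(MYDGDFN) OBJ1(*ALL) TYPE(*USRPRF) OUTPUT(*OUTFILE) OUTFILE(MYLIB/CMPOUT)

The Difference Indicator values returned for user profile status then follow Table 102
through Table 105, according to the configured value in effect.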
Comparison results for user profile password (*PRFPWDIND)
When comparing the attribute *PRFPWDIND (user profile password indicator) with
the Compare Object Attributes (CMPOBJA) command, MIMIX assumes that the user
profile names are the same on both systems. User profile passwords are only
compared if the user profile name is the same on both systems and the user profile of
the local system is enabled and has a defined password.
If the local or remote user profile has a password of *NONE, or if the local user profile
is disabled or expired, the user profile password is not compared. The System
Indicator fields will indicate that the attribute was not compared (*NOTCMPD). The
Difference Indicator field will also return a value of not compared (*NA).
The CMPOBJA command does not support name mapping while comparing the
*PRFPWDIND attribute. If the user profile names are different, or if you attempt name
mapping, the System Indicator fields will indicate that comparing the attribute is not
supported (*NOTSPT). The Difference Indicator field will also return a value of not
supported (*NS).
The following tables identify the expected results when user profile password is
compared. Note that the local system is the system on which the command is being
run, and the remote system is defined as System 2.
Table 107 shows the possible Difference Indicator values when the user profile
passwords are the same on the local and remote systems and are not defined as
*NONE.
Table 107. Difference Indicator values when user profile passwords are the same, but not *NONE

                        Remote system user profile password
Local system     *ENABLED    *DISABLED    Expired    Not found
*ENABLED         *EQ         *EQ          *EQ        *NE
*DISABLED        *NA         *NA          *NA        *NE
Expired          *NA         *NA          *NA        *NE
Not found        *NE         *NE          *NE        *EQ
Table 108 shows the possible Difference Indicator values when the user profile
passwords are different on the local and remote systems and are not defined as
*NONE.
Table 109 shows the possible Difference Indicator values when the user profile
passwords are defined as *NONE on the local and remote systems.
Table 108. Difference Indicator values when user profile passwords are different, but not *NONE

                        Remote system user profile password
Local system     *ENABLED    *DISABLED    Expired    Not found
*ENABLED         *NE         *NE          *NE        *NE
*DISABLED        *NA         *NA          *NA        *NE
Expired          *NA         *NA          *NA        *NE
Not found        *NE         *NE          *NE        *EQ
Table 109. Difference Indicator values when user profile passwords are *NONE

                        Remote system user profile password
Local system     *ENABLED    *DISABLED    Expired    Not found
*ENABLED         *NA         *NA          *NA        *NE
*DISABLED        *NA         *NA          *NA        *NE
Expired          *NA         *NA          *NA        *NE
Not found        *NE         *NE          *NE        *EQ
APPENDIX G Journal Codes and Error Codes
This appendix lists journal codes and error codes associated with replication activity,
including:
Journal entry codes for files on page 610 identifies journal codes supported for
files, IFS objects, data areas, and data queues configured for replication through
the user journal. This section also includes a list of error codes associated with
files held due to error.
Journal entry codes for system journal transactions on page 617 identifies
journal codes associated with objects replicated through the system journal.
Journal entry codes for user journal transactions
The following sections identify journal codes associated with user journal replication.
Journal entry codes for files on page 610 lists journal codes associated with files
replicated through the user journal.
Error codes for files in error on page 612 lists the error codes that can be
associated with journal entries for files held due to error.
Journal codes and entry types for journaled IFS objects on page 615 identifies
which B entry types are supported for IFS objects configured for user journal
replication.
Journal codes and entry types for journaled data areas and data queues on
page 615 identifies which E and Q entry types are supported for data area and
data queue objects configured for user journal replication.
Journal entry codes for files
Table 110 lists journal entry codes for transactions that may appear in user journal
transactions with a status of on hold due to error (*HLDERR). Journal codes for
cooperatively processed physical files are listed in Table 115.
Table 110. Journal entry codes and types supported for files
Code  Type  Description  Notes
C CM Set of record changes committed 1
C RB Set of record changes rolled back 1
C SC Commit transaction started 1
D AC Add constraint
D CG Change file
D CT Create file
D DC Remove constraint
D DH File saved
D DJ Change journaled object attribute
D DT Delete file
D DW Start of save while active
D FM Move file
D FN Rename file
D GC Change constraint
D GO Change owner
D GT Grant file
D RV Revoke file
D TC Add trigger
D TD Delete trigger
D TG Change trigger
D TQ Refresh table
D ZB Object attribute change
F CB Physical file member change
F CE Change end of data for physical file (PF)
F CR Physical file member cleared
F DM Delete member
F EJ Journaling for a physical file member ended
F IT Identity value
F IZ Physical file member initialized
F JM Journaling for a physical file member started
F MC Create member
F MD Physical file member deleted
F MM Physical file containing the member moved to a different library
F MN Physical file containing the member renamed
F MS Physical file member saved
F RC Journaled changes removed from a physical file member
F RG Physical file member reorganized (RGZPFM)
F RM Member reorganized
F SS Start of save of a physical file member using save-while-active function
R DL Record deleted in the physical file member
R DR Record deleted for rollback operation
R PT Record added to a physical file member
R PX Record added directly to RRN (relative record number) physical file member
R UB Before image of a record that is updated in the physical file member  1
R UP After image of a record that is updated in the physical file member
R UR After image of a record that is updated for rollback information
R PP MIMIX-generated pre-apply of RPT entry
U MX MIMIX-generated entry

Note:
1. This journal code is not supported by the Display Journal Statistics (DSPJRNSTC)
   command.

Error codes for files in error
Table 111 lists error codes that identify the internal reason a file replicated through the
user journal is on hold due to an error.
Table 111. Error codes for files
Error code Description
01 Generic record not found
02 Before image record not found
03 After image record not found
04 Record in use
05 Allocation error
A Error attempting to place a data group file entry on *ACTIVE status
AD Data group initialization for apply session failed
AF *ALL missing for file in FMC
AI Apply initialization error
AK Minimized journal entry found for keyed replication
AM Minimized journal entry cannot be applied
AM Minimized journal entry cannot be converted to R-PX or R-PT
AO Apply already active error
AP Minimized journal entry applied, full image needed
C1 The database apply process found the entry in *CMPACT, *CMPRLS,
or *CMPRPR state during a request to start data groups (STRDG
command)
C2 A request to compare file data (CMPFILDTA command) ended
abnormally while attempting to repair the entry
CC Error creating cursor
CE Change end of data operation failed
DE Data mismatch for delete request
DL Delete of record failed
FE File level error
FO File open for minimized data error
GE General error message
GL Get log space entry failed
I Error attempting to place a data group file entry on *HLDIGN status
IG Database replicator is ignoring entries for this file entry
JS Error attempting to retrieve the journal information for the target
journal and the JS OS/400 HA performance feature is installed
LE Length of record retrieved is not the same as the transaction's
LK A lock on the file caused the operation to fail
LM Error locking member
NF The record was not found
OE Error opening member
OF Error opening data group file entry file
R Error attempting to place a data group file entry on *RLS status
RE Error reading record
RI Non-restrictive referential constraint exists on the file and the target
journal is in standby state
R1 Error with data group file entry after the database apply reorganized a
file
R2 Error removing file from *HLDRGZ
R3 Error applying held entries after the database apply reorganized a file
R4 Error occurred while the database apply was reorganizing a file
SE Compare data group file entry mismatch error
TG Error while attempting to disable triggers
UD Apply of data area failed
UE Error on update (record not updated)
UF Error updating the data group file entry file
W Error attempting to place a data group file entry on *RLSWAIT status
W1 Write failed (record not written)
W2 Record written to wrong location
W3 Write of deleted record failed
WF Error writing record to data group file entry file
XX An unexpected error occurred
X0 Apply exception encountered
X1 Generic exception encountered
X2 Could not create a needed apply object
X3 Could not add record to hold log
X4 Could not add record to commit index
X5 Error opening timestamp file
X6 Force of apply objects failed
Journal codes and entry types for journaled IFS objects
The system uses journal code B to indicate that the journal entry deposited is related
to an IFS operation. Table 112 shows the currently supported IFS entry types that
MIMIX can replicate for IFS objects configured for user journal replication.
Table 112. Journal entry codes and types supported for IFS objects
Code  Type  Description  Notes
B  AA  Change audit attributes
B  B1  Create files, directories, or symbolic links
B  B3  Move/rename object  1
B  B5  Remove link (unlink)  1
B  B6  Bytes cleared, after-image
B  ET  End journaling for object
B  FA  Change object attribute
B  FR  Restore object  1
B  FS  Saved IFS object
B  FW  Start of save-while-active
B  JT  Start journaling for object
B  OA  Change object authority
B  OG  Change primary group
B  OO  Change object owner
B  RN  Rename file identifier
B  TR  Truncated IFS object
B  WA  Write after-image

Note:
1. The actions identified in these entries are replicated cooperatively through the security
   audit journal.

Journal codes and entry types for journaled data areas and data queues
The operating system uses journal codes E and Q to indicate that journal entries are
related to operations on data areas and data queues, respectively. When configured
for user journal replication, MIMIX recognizes specific E and Q journal entry types as
eligible for replication from a user journal.
Table 113 shows the currently supported journal entry types for data areas.
Table 114 shows the currently supported journal entry types for data queues.
Table 113. Journal entry codes and types supported for data areas
Code  Type  Description
E EA Update data area, after image
E EB Update data area, before image
E ED Data area deleted
E EE Create data area
E EG Start journal for data area
E EH End journal for data area
E EK Change journaled object attribute
E EL Data area restored
E EM Data area moved
E EN Data area renamed
E ES Data area saved
E EW Start of save for data area
E ZA Change authority
E ZB Change object attribute
E ZO Ownership change
E ZP Change primary group
E ZT Auditing change
Table 114. Journal entry codes and types supported for data queues
Code  Type  Description
Q QA Create data queue
Q QB Start data queue journaling
Q QC Data queue cleared, no key
Q QD Data queue deleted
Q QE End data queue journaling
Q QG Data queue attribute changed
Q QJ Data queue cleared, has key
Q QK Send data queue entry, has key
Q QL Receive data queue entry, has key
Q QM Data queue moved
Q QN Data queue renamed
Q QR Receive data queue entry, no key
Q QS Send data queue entry, no key
Q QX Start of save for data queue
Q QY Data queue saved
Q QZ Data queue restored
Q ZA Change authority
Q ZB Change object attribute
Q ZO Ownership change
Q ZP Change primary group
Q ZT Auditing change

For more information about journal entries, see Journal Entry Information (Appendix
D) in the iSeries Backup and Recovery guide in the IBM eServer iSeries Information
Center.

Journal entry codes for system journal transactions
Table 115 lists the journal entry codes and subtypes that may appear in transactions
replicated through the system journal. Object types that are configured for
cooperative processing will replicate the transaction through the user journal.
Table 115. Journal entry codes and subtypes for system journal entries
Journal entry  Description  Subtypes  Supported subtype values
T-AD  Auditing change  Entry  D (DLO auditing), O (Object auditing), U (User auditing)
T-CA  Change authority  Command  GRT (Grant), RPL (Replace), RVK (Revoke), USR (Grant user authority)
T-CO  Create object  Entry  N (Create), R (Replace)
T-CP  User profile  Command  CHG (Change), CRT (Create), DST (Reset using DST), RPA (Reset IBM-supplied), RST (Restore)
T-DO  Delete object  Entry  A (Object was deleted not under commitment control), D (Pending object create rolled back), P (Pending delete under commitment control), R (Pending delete rolled back)
T-JD  Job description  Command  CHG (Change), CRT (Create)
T-LD  Link, unlink, or lookup directory  Entry  U (Unlink directory)
T-OM  Object management change  Entry  M (Move), R (Rename)
T-OR  Object restore  Entry  N (New object restored), E (Existing object restored)
T-OW  Object ownership changed  Entry  A (Change of object owner)
T-PA  Program adopt authority  Entry  A (Adopt owner authority), J (Java adopt owner), M (Change to S_USUID)
T-PG  Change of an object's primary group  Entry  A (Change primary group)
T-RA  Authority change during restore  Entry  A (Changes to authority for object restored)
T-RO  Change of object owner during restore  Entry  A (Restoring objects that had ownership changed when restored)
T-SE  Subsystem routing entry  Command  ADD (Add), CHG (Change), RMV (Remove)
T-SF  Spooled file change  Access  A (Read), C (Created), D (Deleted), H (Held), I (Created inline), R (Released), S (Spooled file restored), T (Spooled file saved), U (Changed)
T-VO  Validation list change  Entry  A (Add), C (Change), F (Find), R (Remove), U (Unsuccessful verify), V (Successful verify)
T-YC  DLO object change  Access  Various access types are supported.
T-ZC  Object change  Access  Various access types are supported.
APPENDIX H Outfile formats
This appendix contains the output file (outfile) formats for those MIMIX commands that
provide outfile support.
For each command that can produce an outfile, MIMIX provides a model database file
that defines the record format for the outfile. These database files can be found in the
product installation library.
Public authority to the created outfile is the same as the create authority of the library
in which the file is created. Use the Display Library Description (DSPLIBD) command
to see the create authority of the library.
You can use the Run Query (RUNQRY) command to display outfiles with column
headings and data type formatting if you have the licensed program 5722QU1, Query,
installed.
Otherwise, you can use the Display File Field Description (DSPFFD) command to see
detailed outfile information, such as the field length, type, starting position, and
number of bytes.
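For example, the following command sequence inspects an outfile using the standard
commands named above. DSPLIBD, DSPFFD, and RUNQRY are IBM i commands; the
library and file names MYLIB and CMPOUT are hypothetical and stand in for an outfile
created with OUTPUT(*OUTFILE).

   DSPLIBD LIB(MYLIB)                          (display the library's create authority)
   DSPFFD FILE(MYLIB/CMPOUT)                   (display field lengths, types, and positions)
   RUNQRY QRY(*NONE) QRYFILE((MYLIB/CMPOUT))   (display the rows with column headings)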
Work panels with outfile support
The following table lists the work panels with outfile support.
Table 116. Work panels with outfile support
Panel       Description
WRKDGDFN    Work with DG Definitions
WRKJRNDFN   Work with Journal Definitions
WRKTFRDFN   Work with Transfer Definitions
WRKSYSDFN   Work with System Definitions
WRKDGFE     Work with DG File Entries
WRKDGDAE    Work with DG Data Area Entries
WRKDGOBJE   Work with DG Object Entries
WRKDGDLOE   Work with DG DLO Entries
WRKDGIFSE   Work with DG IFS Entries
WRKDGACT    Work with DG Activity
WRKDGACTE   Work with DG Activity Entries
WRKDGIFSTE  Work with DG IFS Tracking Entries
WRKDGOBJTE  Work with DG Object Tracking Entries
MCAG outfile (WRKAG command)
The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Application Groups (WRKAG)
command.
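For example, a sketch of directing this panel's data to an outfile follows. The Output
parameter is documented for this command; the OUTFILE keyword and the names
MYLIB/AGOUT are assumptions for illustration, so prompt the command to confirm the
keywords on your system.

   WRKAG OUTPUT(*OUTFILE) OUTFILE(MYLIB/AGOUT)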
Table 117. MCAG outfile (WRKAG command)
Field  Description  Type, length  Valid values  Column headings
AGDFN  Application group definition  CHAR(10)  User-defined name  AGDFN NAME
USRPRF  User profile  CHAR(10)  Any valid user profile  USER PROFILE
APP  Application name  CHAR(10)  *AGDFN, user-defined name  APP NAME
APPLIB  Application library  CHAR(10)  *APP, user-defined name  APP LIBRARY
RLSLVL  Application release level  CHAR(10)  User-defined value  APP RELEASE LEVEL
TYPE  Application group type  CHAR(7)  *CLU, *NONCLU  TYPE
APPCRG  Application CRG  CHAR(6)  *AGDFN, *NONE  APP CRG
DTACRG  Data CRG  CHAR(10)  *NONE, user-defined name  DTA CRG
EXITPGM  Application CRG exit program  CHAR(10)  User-defined name  APP CRG EXIT PGM
EXITPGMLIB  Application CRG exit program library  CHAR(10)  *APPLIB, user-defined name  CRG EXIT PGM LIB
JOB  Exit program job name  CHAR(10)  *APP, *JOBD, user-defined name  CRG EXIT PGM JOB NAME
EXITDTA  Exit program data  CHAR(256)  User-defined value  CRG EXIT PGM DATA
NBRRESTART  Number of restarts  PACKED(5 0)  0-3  NUMBER OF RESTARTS
HOST  Takeover IP address  CHAR(256)  User-defined value  TAKEOVER IP ADDRESS
TEXT  Description  CHAR(50)  User-defined value  DESCRIPTION
UPDENV  Update cluster environment  CHAR(10)  *YES, *NO  UPDATE CLUSTER ENV
IDA  Input data area name  CHAR(10)  BLANK, name of the input data area  INPUT DATA AREA NAME
AGSTS  Application CRG status  CHAR(10)  BLANK, *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND  APP CRG STATUS
AGNODS  Application CRG nodes status  CHAR(10)  BLANK, *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *NOTAVAIL  APP CRG NODES STATUS
DCSTS  Data CRGs status  CHAR(10)  BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND  DATA CRG STATUS
DCNODS  Data CRG nodes status  CHAR(10)  BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL  DATA CRG NODES STATUS
REPSTS  Replication status  CHAR(10)  BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *ATTN_PPRC, *AUTHORITY, *SUSPENDED, *STGMGTSVR  DG STATUS
PROCSTS  Procedure status  CHAR(10)  *ACTIVE, *ATTN, *COMP, *NONE  PROCEDURE
FMSGQL  Failover message queue library  CHAR(10)  *NONE, user-defined name  FAILOVER MSGQ LIBRARY
FMSGQN  Failover message queue name  CHAR(10)  *NONE, user-defined name  FAILOVER MSGQ NAME
FWTIME  Failover wait time  PACKED(5 0)  *NOMAX, 1-32767  FAILOVER WAIT TIME
FDFTACT  Failover default action  PACKED(5 0)  *CANCEL, *PROCEED  FAILOVER DFT ACTION
MCDTACRGE outfile (WRKDTARGE command)
The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Data CRG Entries (WRKDTARGE)
command.
Table 118. MCDTACRGE outfile (WRKDTARGE command)
Field  Description  Type, length  Valid values  Column headings
DTACRGE  Data CRG  CHAR(10)  User-defined name  DATA CRG
DGDFN  Data group name  CHAR(10)  *DTACRG, user-defined name  DGDFN NAME
AGDFN  Application group definition  CHAR(10)  User-defined name  AGDFN NAME
JRN  Journal name  CHAR(10)  *DGDFN, user-defined name  JOURNAL
JRNLIB  Journal library  CHAR(10)  User-defined name  JOURNAL LIBRARY
OSF  Object specifier file  CHAR(10)  *DTACRG, user-defined name  OBJECT SPECIFIER FILE (OSF)
OSFLIB  Object specifier file library  CHAR(10)  *AGDFN, user-defined name  OSF LIBRARY
OSFMBR  Object specifier file member  CHAR(10)  *DTACRG, user-defined name  OSF MEMBER
DELIVERY  RJ mode  CHAR(10)  *NONE, *ASYNC, *SYNC  RJ MODE (DELIVER)
EXITPGM  Data CRG exit program  CHAR(10)  MMXDTACRG, user-defined name  DATA CRG EXIT PGM
EXITPGMLIB  Data CRG exit program library  CHAR(10)  *MIMIX, user-defined name  DATA CRG EXIT PGM LIBRARY
DCSTS  Data CRGs status  CHAR(10)  BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL, *INDOUBT, *RESTORED, *ADDNODPND, *DLTPND, *DLTCMDPND, *CHGPND, *CRTPND, *ENDCRGPND, *RMVNODPND, *STRCRGPND, *SWTPND  DATA CRG STATUS
DCNODS  Data CRG nodes status  CHAR(10)  BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL  DATA CRG NODES STATUS
REPSTS  Replication status  CHAR(10)  BLANK, *ACTIVE, *INACTIVE, *ATTN, *UNKNOWN, *NONE, *NOTAVAIL  REPLICATION STATUS
DEVCRG  Device CRG name  CHAR(10)  User-defined name  DEVICE CRG
ASPGRP  ASP group  CHAR(10)  *NONE, user-defined name  ASP GROUP
DTATYPE  Data resource group type  CHAR(10)  *DEV, *DTA, *PEER, *XSM  DATA RESOURCE TYPE
FMSGQL  Failover message queue library  CHAR(10)  *AGDFN, *NONE, user-defined name  FAILOVER MSGQ LIBRARY
FMSGQN  Failover message queue name  CHAR(10)  *AGDFN, *NONE, user-defined name  FAILOVER MSGQ NAME
FWTIME  Failover wait time  PACKED(5 0)  *AGDFN, *NOMAX, 1-32767  FAILOVER WAIT TIME
FDFTACT  Failover default action  PACKED(5 0)  *AGDFN, *CANCEL, *PROCEED  FAILOVER DFT ACTION
ADMDMN  Cluster administrative domain  CHAR(10)  *NONE, user-defined value  CLUSTER ADMINISTRATIVE DOMAIN
SYNCOPT  Synchronization option  PACKED(10 5)  *LASTCHG, *ACTDMN  SYNCHRONIZATION DOMAIN
SANUSER  SAN user  CHAR(16)  *NONE, user-defined name  SAN CONSOLE USER
PPRCNOD1  PPRC node 1  CHAR(8)  *NONE, user-defined value  PPRC NODE 1
PPRCDEV1  PPRC device 1  CHAR(20)  User-defined value  PPRC DEVICE 1
PPRCIP1  PPRC IP address 1  CHAR(16)  User-defined value  PPRC CONSOLE IP 1
PPRCNOD2  PPRC node 2  CHAR(8)  *NONE, user-defined value  PPRC NODE 2
PPRCDEV2  PPRC device 2  CHAR(20)  User-defined value  PPRC DEVICE 2
PPRCIP2  PPRC IP address 2  CHAR(16)  User-defined value  PPRC CONSOLE IP 2
PPRCLUN  PPRC logical unit name  CHAR(1000)  *NONE, user-defined value  PPRC LUNS
LUNDEV  LUN device  CHAR(20)  User-defined value  LUN DEVICE
LUNIP  LUN IP address  CHAR(16)  User-defined value  LUN CONSOLE IP ADDRESS
LUNNOD1  LUN node 1  CHAR(8)  *NONE, user-defined value  LUN NODE 1
LUNCONID1  LUN connection ID 1  CHAR(4)  User-defined value  LUN CONN ID 1
LUNNOD2  LUN node 2  CHAR(8)  *NONE, user-defined value  LUN NODE 2
LUNCONID2  LUN connection ID 2  CHAR(4)  User-defined value  LUN CONN ID 2
LUNVOLID  LUN volume ID  CHAR(5)  *NONE, user-defined value  LUN VOLUME ID
GMNOD1  GM node 1  CHAR(8)  *NONE, user-defined value  GM NODE 1
GMDEV1  GM device 1  CHAR(20)  User-defined value  GM DEVICE 1
GMIP1  GM IP address 1  CHAR(16)  User-defined value  GM CONSOLE IP 1
GMNOD2  GM node 2  CHAR(8)  *NONE, user-defined value  GM NODE 2
GMDEV2  GM device 2  CHAR(20)  User-defined value  GM DEVICE 2
GMIP2  GM IP address 2  CHAR(16)  User-defined value  GM CONSOLE IP 2
GMLUN  GM logical unit name  CHAR(2500)  *NONE, user-defined value  GM LUNS
MCNODE outfile (WRKNODE command)
The following fields are available if you specify *OUTFILE on the Output parameter of the Work with Node Entries (WRKNODE) command.
Table 119. MCNODE outfile (WRKNODE command)
Field  Description  Type, length  Valid values  Column headings
AGDFN  Data CRG  CHAR(10)  User-defined name  AGDFN NAME
CRG  CRG name  CHAR(10)  *AGDFN, user-defined name  CRG NAME
NODE  System name  CHAR(8)  User-defined name  NODE
CURROLE  Current role  CHAR(10)  *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED  CURRENT ROLE
CURSEQ  Current sequence  PACKED(5 0)  -2, -1, 0-127 (-2 = *UNDEFINED; -1 = *REPLICATE; 0 = *PRIMARY; 1-127 = *BACKUP sequence)  CURRENT SEQUENCE
CURDTAPVD  Current data provider  CHAR(10)  *PRIMARY, *BACKUP, *UNDEFINED, user-defined name  CURRENT DATA PROVIDER
PREFROLE  Preferred role  CHAR(10)  Blank  PREFERRED ROLE
PREFSEQ  Preferred sequence  PACKED(5 0)  0  PREFERRED SEQUENCE
CFGROLE  Configured role  CHAR(10)  *PRIMARY, *BACKUP, *REPLICATE, *UNDEFINED  CONFIGURED ROLE
CFGSEQ  Configured sequence  PACKED(5 0)  -2, -1, 0-127 (-2 = *UNDEFINED; -1 = *REPLICATE; 0 = *PRIMARY; 1-127 = *BACKUP sequence)  CONFIGURED SEQUENCE
CFGDTAPVD  Configured data provider  CHAR(10)  *PRIMARY, *BACKUP, *UNDEFINED, user-defined name  CONFIGURED DATA PROVIDER
STATUS  CRG node status  CHAR(10)  *ACTIVE, *INACTIVE, *ATTN, *NONE, *NOTAVAIL, *UNKNOWN  CRG NODE STATUS
MXCDGFE outfile (CHKDGFE command)
The following fields are available if you specify *OUTFILE on the Output parameter of the Check Data Group File Entries (CHKDGFE)
command. The command is also called by audits which run the #DGFE rule. For additional information, see Interpreting results for
configuration data - #DGFE audit on page 572.
Table 120. MXCDGFE outfile (CHKDGFE command)
Field  Description  Type, length  Valid values  Column headings
TIMESTAMP  Timestamp (YYYY-MM-DD.HH.MM.SSmmmm)  TIMESTAMP  SAA timestamp  TIMESTAMP
COMMAND  Command name  CHAR(10)  CHKDGFE  COMMAND NAME
DGSHRTNM  Data group short name  CHAR(3)  Short data group name  DGDFN SHORT NAME
DGDFN  Data group definition name  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1  CHAR(8)  User-defined system name  SYSTEM 1
DGSYS2  System 2  CHAR(8)  User-defined system name  SYSTEM 2
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
FILE  System 1 file name  CHAR(10)  User-defined name  SYSTEM 1 OBJECT
LIB  System 1 library name  CHAR(10)  User-defined name  SYSTEM 1 LIBRARY
MBR  System 1 member name  CHAR(10)  User-defined name  SYSTEM 1 MEMBER
OBJTYPE  Object type  CHAR(10)  *FILE  OBJECT TYPE
RESULT  Result  CHAR(10)  *NODGFE, *EXTRADGFE, *NOFILE, *NOMBR, *RCYFAILED, *RECOVERED, *UA (Note: the values *RCYFAILED and *RECOVERED may be placed in the outfile as a result of automatic audit recovery actions.)  RESULT
OPTION  Option  CHAR(100)  *NONE, *NOFILECHK, *DGFESYNC  OPTION
FILE2  System 2 file name  CHAR(10)  User-defined name  SYSTEM 2 OBJECT
LIB2  System 2 library name  CHAR(10)  User-defined name  SYSTEM 2 LIBRARY
MBR2  System 2 member name  CHAR(10)  User-defined name  SYSTEM 2 MEMBER
ASPDEV  Source ASP device  CHAR(10)  *UNKNOWN if object not found or an API error; *SYSBAS if object in ASP 1-32; user-defined name if object in ASP 33-255  ASP DEVICE
OBJATR  Object attribute  CHAR(10)  PF-DTA, PF-SRC, LF, PF38-DTA, PF38-SRC, LF38  OBJECT ATTRIBUTE
MXCMPDLOA outfile (CMPDLOA command)
For additional supporting information, see Interpreting results of audits that compare attributes on page 577.
Table 121. CMPDLOA Output file (MXCMPDLOA)
Field  Description  Type, length  Valid values  Column headings
TIMESTAMP  Timestamp (CCCC-YY-MM-DD.HH.MM.SSmmmm)  CHAR(26)  SAA timestamp  TIMESTAMP
COMMAND  Command name  CHAR(10)  CMPDLOA  COMMAND NAME
DGSHRTNM  Data group short name  CHAR(3)  Short data group name  DGDFN SHORT NAME
DGNAME  Data group definition name  CHAR(10)  User-defined data group name; blank if no DG specified on the command  DGDFN NAME
SYSTEM1  System 1  CHAR(8)  User-defined system name; local system name if no DG specified  SYSTEM 1
SYSTEM2  System 2  CHAR(8)  User-defined system name; local system name if no DG specified  SYSTEM 2
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
SYS1DLO  System 1 DLO name  CHAR(76)  User-defined name  SYSTEM 1 DLO
SYS2DLO  System 2 DLO name  CHAR(76)  User-defined name  SYSTEM 2 DLO
CCSID  DLO name CCSID  BIN(5)  User-defined name  CCSID
CNTRYID  DLO name country ID  CHAR(2)  System-defined name  CNTRYID
LANGID  DLO name language ID  CHAR(3)  System-defined name  LANGID
CMPATR  Compared attribute  CHAR(10)  See Attributes compared and expected results - #DLOATR audit on page 596  COMPARED ATTRIBUTE
SYS1IND  System 1 file indicator  CHAR(10)  See Table 87 in Where was the difference detected on page 579  SYSTEM 1 INDICATOR
SYS2IND  System 2 file indicator  CHAR(10)  See Table 87 in Where was the difference detected on page 579  SYSTEM 2 INDICATOR
DIFIND  Differences indicator  CHAR(10)  See What attribute differences were detected on page 577  DIFFERENCE INDICATOR
SYS1VAL  System 1 value of the specified attribute  VARCHAR(2048) MINLEN(50)  See Attributes compared and expected results - #DLOATR audit on page 596  SYSTEM 1 VALUE
SYS1CCSID  System 1 value CCSID  BIN(5)  1-65535  SYSTEM 1 CCSID
SYS2VAL  System 2 value of the specified attribute  VARCHAR(2048) MINLEN(50)  See Attributes compared and expected results - #DLOATR audit on page 596  SYSTEM 2 VALUE
SYS2CCSID  System 2 value CCSID  BIN(5)  1-65535  SYSTEM 2 CCSID
MXCMPFILA outfile (CMPFILA command)
For additional supporting information, see Interpreting results of audits that compare attributes on page 577.
Table 122. CMPFILA Output file (MXCMPFILA)
Field  Description  Type, length  Valid values  Column headings
TIMESTAMP  Timestamp (YYYY-MM-DD.HH.MM.SSmmmmmm)  TIMESTAMP  SAA timestamp  TIMESTAMP
COMMAND  Command name  CHAR(10)  CMPFILA  COMMAND NAME
DGSHRTNM  Data group short name  CHAR(3)  Short data group name  DGDFN SHORT NAME
DGNAME  Data group definition name  CHAR(10)  User-defined data group name; blank if no DG specified on the command  DGDFN NAME
SYSTEM1  System 1  CHAR(8)  User-defined system name; local system name if no DG specified  SYSTEM 1
SYSTEM2  System 2  CHAR(8)  User-defined system name; remote system name if no DG specified  SYSTEM 2
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
SYS1OBJ  System 1 object name  CHAR(10)  User-defined name  SYSTEM 1 FILE
SYS1LIB  System 1 library name  CHAR(10)  User-defined name  SYSTEM 1 LIBRARY
MBR  Member name  CHAR(10)  User-defined name  MEMBER
SYS2OBJ  System 2 object name  CHAR(10)  System-defined name  SYSTEM 2 FILE
SYS2LIB  System 2 library name  CHAR(10)  System-defined name  SYSTEM 2 LIBRARY
OBJTYPE  Object type  CHAR(10)  *FILE  OBJECT TYPE
CMPATR  Compared attribute  CHAR(10)  See Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 581  COMPARED ATTRIBUTE
SYS1IND  System 1 file indicator  CHAR(10)  See Table 87 in Where was the difference detected on page 579  SYSTEM 1 INDICATOR
SYS2IND  System 2 file indicator  CHAR(10)  See Table 87 in Where was the difference detected on page 579  SYSTEM 2 INDICATOR
DIFIND  Differences indicator  CHAR(10)  See What attribute differences were detected on page 577  DIFFERENCE INDICATOR
SYS1VAL  System 1 value of the specified attribute  VARCHAR(2048) MINLEN(50)  See Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 581  SYSTEM 1 VALUE
SYS1CCSID  System 1 value CCSID  BIN(5)  1-65535  SYSTEM 1 CCSID
SYS2VAL  System 2 value of the specified attribute  VARCHAR(2048) MINLEN(50)  See Attributes compared and expected results - #FILATR, #FILATRMBR audits on page 581  SYSTEM 2 VALUE
SYS2CCSID  System 2 value CCSID  BIN(5)  1-65535  SYSTEM 2 CCSID
ASPDEV1  System 1 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 1 ASP DEVICE
ASPDEV2  System 2 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 2 ASP DEVICE
MXCMPFILD outfile (CMPFILDTA command)
For additional information for interpreting this outfile, see Interpreting results of audits for record counts and file data on page 574.
The following fields require additional explanation:
Major mismatches before - Indicates the number of mismatched records found. A value other than 0 (zero) indicates that records are
missing or that data within records does not match.
Major mismatches after - Indicates the number of mismatched records remaining. If repair was requested, this value should be 0 (zero);
otherwise, the value should equal that shown in the Major mismatches before column.
Minor mismatches after - Indicates the number of differences remaining that do not affect data integrity.
Apply pending - Indicates the number of records for which the database apply process has not yet performed repair processing.
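For example, a compare that also requests repair might be submitted as follows. This is a sketch only: CMPFILDTA and its DGDFN and
OUTPUT parameters are documented, but the REPAIR keyword, the *SYS2 value, and the names MYDGDFN and MYLIB/FILDOUT are
assumptions for illustration; prompt the command to confirm the exact keywords on your installation.

   CMPFILDTA DGDFN(MYDGDFN) REPAIR(*SYS2) OUTPUT(*OUTFILE) OUTFILE(MYLIB/FILDOUT)

If the repair completes successfully, the MAJMISMAFT (major mismatches after processing) field in the resulting outfile should contain 0.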

Table 123. Compare File Data (CMPFILDTA) output file (MXCMPFILD)
Field  Description  Type, length  Valid values  Column headings
TIMESTAMP  Timestamp (YYYY-MM-DD.HH.MM.SSmmmmmm)  TIMESTAMP  SAA timestamp  TIMESTAMP
COMMAND  Command name  CHAR(10)  CMPFILDTA  COMMAND NAME
DGSHRTNM  Data group short name  CHAR(3)  Short data group name  DGDFN SHORT NAME
DGNAME  Data group definition name  CHAR(10)  User-defined data group name; blank if no DG specified on the command  DGDFN NAME
SYSTEM1  System 1  CHAR(8)  User-defined system name; local system name if no DG specified  SYSTEM 1
SYSTEM2  System 2  CHAR(8)  User-defined system name; remote system name if no DG specified  SYSTEM 2
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
SYS1OBJ  System 1 object name  CHAR(10)  User-defined name  SYSTEM 1 OBJECT
SYS1LIB  System 1 library name  CHAR(10)  User-defined name  SYSTEM 1 LIBRARY
MBR  Member name  CHAR(10)  User-defined name  MEMBER
SYS2OBJ  System 2 object name  CHAR(10)  User-defined name  SYSTEM 2 OBJECT
SYS2LIB  System 2 library name  CHAR(10)  User-defined name  SYSTEM 2 LIBRARY
OBJTYPE  Object type  CHAR(10)  *FILE  OBJECT TYPE
DIFIND  Differences indicator  CHAR(10)  See What attribute differences were detected on page 577  DIFFERENCE INDICATOR
REPAIRSYS  Repair system  CHAR(10)  *SYS1, *SYS2  REPAIR SYSTEM
FILEREP  File repair successful  CHAR(10)  Blank, *YES, *NO  FILE REPAIR SUCCESSFUL
TOTRCDS  Total records compared  DECIMAL(20)  0 - 99999999999999999999  TOTAL RECORDS COMPARED
MAJMISMBEF  Major mismatches before processing  DECIMAL(20)  0 - 99999999999999999999  MAJOR MISMATCHES BEFORE PROCESSING
MAJMISMAFT  Major mismatches after processing  DECIMAL(20)  0 - 99999999999999999999  MAJOR MISMATCHES AFTER PROCESSING
MINMISMAFT  Minor mismatches after processing  DECIMAL(20)  0 - 99999999999999999999  MINOR MISMATCHES AFTER PROCESSING
APYPENDING  Apply pending records  DECIMAL(20)  0 - 99999999999999999999  ACTIVE RECORDS PENDING
ASPDEV1  System 1 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 1 ASP DEVICE
ASPDEV2  System 2 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 2 ASP DEVICE
TMPSQLVIEW  Temporary target system SQL view pathname  CHAR(33)  IBM i-format path name or blanks  TEMPORARY TARGET SQL VIEW
MXCMPFILR outfile (CMPFILDTA command, RRN report)
This output file format is the result of specifying *RRN for the report type on the Compare File Data command. Output in this format
enables you to see the relative record number (RRN) of the first 1,000 objects that failed to compare. This value is useful when resolving
situations where a discrepancy is known to exist, but you are unsure which system contains the correct data. Viewing the RRN value
provides information that enables you to display the specific records on the two systems and to determine the system on which the file
should be repaired.
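For example, assuming the RRN-format output was directed to a hypothetical outfile MYLIB/RRNOUT, the mismatched record numbers
can be displayed with the standard Query command:

   RUNQRY QRY(*NONE) QRYFILE((MYLIB/RRNOUT))

The RRN values shown can then be used to display the corresponding records on each system and decide which side should be repaired.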
Table 124. Compare File Data (CMPFILDTA) relative record number (RRN) output file (MXCMPFILR)
Field  Description  Type, length  Valid values  Column headings
SYSTEM 1  System 1  CHAR(8)  User-defined system name; local system name if no DG specified  SYSTEM 1
SYSTEM 2  System 2  CHAR(8)  User-defined system name; local system name if no DG specified  SYSTEM 2
SYS1OBJ  System 1 object name  CHAR(10)  User-defined name  SYSTEM 1 OBJECT
SYS1LIB  System 1 library name  CHAR(10)  User-defined name  SYSTEM 1 LIBRARY
MBR  Member name  CHAR(10)  User-defined name  MEMBER
SYS2OBJ  System 2 object name  CHAR(10)  User-defined name  SYSTEM 2 OBJECT
SYS2LIB  System 2 library name  CHAR(10)  User-defined name  SYSTEM 2 LIBRARY
RRN  Relative record number  DECIMAL(10)  Number  RRN
ASPDEV1  System 1 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 1 ASP DEVICE
ASPDEV2  System 2 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 2 ASP DEVICE
MXCMPRCDC outfile (CMPRCDCNT command)
For additional information for interpreting this outfile, see Interpreting results of audits for record counts and file data on page 574.
Table 125. Compare Record Count (CMPRCDCNT) output file (MXCMPRCDC)
Field  Description  Format  Valid values  Column headings
TIMESTAMP  Timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm)  TIMESTAMP  SAA timestamp  TIMESTAMP
COMMAND  Command name  CHAR(10)  CMPRCDCNT  COMMAND NAME
DGSHRTNM  Data group short name  CHAR(3)  Short data group name  DGDFN SHORT NAME
DGNAME  Data group definition name  CHAR(10)  User-defined data group name; blank if no DG specified on the command  DGDFN NAME
SYSTEM1  System 1  CHAR(8)  User-defined system name; local system name if no DG specified  SYSTEM 1
SYSTEM2  System 2  CHAR(8)  User-defined system name; remote system name if no DG specified  SYSTEM 2
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
SYS1OBJ  System 1 object name  CHAR(10)  User-defined name  SYSTEM 1 OBJECT
SYS1LIB  System 1 library name  CHAR(10)  User-defined name  SYSTEM 1 LIBRARY
MBR  Member name  CHAR(10)  User-defined name  MEMBER
DIFIND  Differences indicator  CHAR(10)  Refer to the differences indicator table  DIFFERENCE INDICATOR
SYS1CURCNT  System 1 current records  DECIMAL(20)  0 - 99999999999999999999  SYSTEM 1 CURRENT RECORDS
SYS2CURCNT  System 2 current records  DECIMAL(20)  0 - 99999999999999999999  SYSTEM 2 CURRENT RECORDS
SYS1DLTCNT  System 1 deleted records  DECIMAL(20)  0 - 99999999999999999999  SYSTEM 1 DELETED RECORDS
SYS2DLTCNT  System 2 deleted records  DECIMAL(20)  0 - 99999999999999999999  SYSTEM 2 DELETED RECORDS
ASPDEV1  System 1 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 1 ASP DEVICE
ASPDEV2  System 2 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 2 ASP DEVICE
ACTRCDPND  Active records pending  DECIMAL(20)  0 - 99999999999999999999  ACTIVE RECORDS PENDING
MXCMPIFSA outfile (CMPIFSA command)
For additional supporting information, see Interpreting results of audits that compare attributes on page 577.
Table 126. CMPIFSA Output file (MXCMPIFSA)
Field  Description  Type, length  Valid values  Column headings
TIMESTAMP  Timestamp (YYYY-MM-DD.HH.MM.SSmmmmmm)  TIMESTAMP  SAA timestamp  TIMESTAMP
COMMAND  Command name  CHAR(10)  CMPIFSA  COMMAND NAME
DGSHRTNM  Data group short name  CHAR(3)  Short data group name  DGDFN SHORT NAME
DGNAME  Data group definition name  CHAR(10)  User-defined data group name; blank if no DG specified on the command  DGDFN NAME
SYSTEM1  System 1  CHAR(8)  User-defined system name; local system name if no DG specified  SYSTEM 1
SYSTEM2  System 2  CHAR(8)  User-defined system name; remote system name if no DG specified  SYSTEM 2
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
SYS1OBJ  System 1 object name  CHAR(10)  User-defined name  SYSTEM 1 OBJECT
SYS2OBJ  System 2 object name  CHAR(10)  User-defined name  SYSTEM 2 OBJECT
CCSID  IFS object name CCSID  BIN(5)  User-defined name  CCSID
CNTRYID  IFS object name country ID  CHAR(2)  System-defined name  CNTRYID
LANGID  IFS object name language ID  CHAR(3)  System-defined name  LANGID
CMPATR  Compared attribute  CHAR(10)  See Attributes compared and expected results - #IFSATR audit on page 594  COMPARED ATTRIBUTE
SYS1IND  System 1 file indicator  CHAR(10)  See Table 87 in Where was the difference detected on page 579  SYSTEM 1 INDICATOR
SYS2IND  System 2 file indicator  CHAR(10)  See Table 87 in Where was the difference detected on page 579  SYSTEM 2 INDICATOR
DIFIND  Differences indicator  CHAR(10)  See What attribute differences were detected on page 577  DIFFERENCE INDICATOR
SYS1VAL  System 1 value of the specified attribute  VARCHAR(2048) MINLEN(50)  See Attributes compared and expected results - #IFSATR audit on page 594  SYSTEM 1 VALUE
SYS1CCSID  System 1 value CCSID  BIN(5)  1-65535  SYSTEM 1 CCSID
SYS2VAL  System 2 value of the specified attribute  VARCHAR(2048) MINLEN(50)  See Attributes compared and expected results - #IFSATR audit on page 594  SYSTEM 2 VALUE
SYS2CCSID  System 2 value CCSID  BIN(5)  1-65535  SYSTEM 2 CCSID
MXCMPOBJA outfile (CMPOBJA command)
For additional supporting information, see Interpreting results of audits that compare attributes on page 577.
Table 127. CMPOBJA Output file (MXCMPOBJA)
Field  Description  Type, length  Valid values  Column headings
TIMESTAMP  Timestamp (YYYY-MM-DD.HH.MM.SSmmmm)  TIMESTAMP  SAA timestamp  TIMESTAMP
COMMAND  Command name  CHAR(10)  CMPOBJA  COMMAND NAME
DGSHRTNM  Data group short name  CHAR(3)  Short data group name  DGDFN SHORT NAME
DGNAME  Data group definition name  CHAR(10)  User-defined data group name; blank if no DG specified on the command  DGDFN NAME
SYSTEM1  System 1  CHAR(8)  User-defined system name; local system name if no DG specified  SYSTEM 1
SYSTEM2  System 2  CHAR(8)  User-defined system name; remote system name if no DG specified  SYSTEM 2
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
SYS1OBJ  System 1 object name  CHAR(10)  User-defined name  SYSTEM 1 FILE
SYS1LIB  System 1 library name  CHAR(10)  User-defined name  SYSTEM 1 LIBRARY
MBR  Member name  CHAR(10)  User-defined name  MEMBER
SYS2OBJ  System 2 object name  CHAR(10)  User-defined name  SYSTEM 2 OBJECT
SYS2LIB  System 2 library name  CHAR(10)  User-defined name  SYSTEM 2 LIBRARY
OBJTYPE  Object type  CHAR(10)  User-defined name  OBJECT TYPE
CMPATR  Compared attribute  CHAR(10)  See Attributes compared and expected results - #OBJATR audit on page 586  COMPARED ATTRIBUTE
SYS1IND  System 1 file indicator  CHAR(10)  See Table 87 in Where was the difference detected on page 579  SYSTEM 1 INDICATOR
SYS2IND  System 2 file indicator  CHAR(10)  See Table 87 in Where was the difference detected on page 579  SYSTEM 2 INDICATOR
DIFIND  Differences indicator  CHAR(10)  See What attribute differences were detected on page 577  DIFFERENCE INDICATOR
SYS1VAL  System 1 value of the specified attribute  VARCHAR(2048) MINLEN(50)  See Attributes compared and expected results - #OBJATR audit on page 586  SYSTEM 1 VALUE
SYS1CCSID  System 1 value CCSID  BIN(5)  1-65535  SYSTEM 1 CCSID
SYS2VAL  System 2 value of the specified attribute  VARCHAR(2048) MINLEN(50)  See Attributes compared and expected results - #OBJATR audit on page 586  SYSTEM 2 VALUE
SYS2CCSID  System 2 value CCSID  BIN(5)  1-65535  SYSTEM 2 CCSID
ASPDEV1  System 1 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 1 ASP DEVICE
ASPDEV2  System 2 ASP device  CHAR(10)  *NONE, user-defined name  SYSTEM 2 ASP DEVICE
MXAUDHST outfile (WRKAUDHST command)

Table 128. MXAUDHST outfile (WRKAUDHST command)
Field  Description  Type, length  Valid values  Column headings
DGDFN  Data group name (Data group definition)  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 1
DGSYS2  System 2 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 2
RULE  Audit rule  CHAR(10)  #DLOATR, #FILATR, #FILATRMBR, #FILDTA, #IFSATR, #MBRRCDCNT, #OBJATR  AUDIT RULE
CMPSTRTSP  Compare start timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  COMPARE START TIMESTAMP
CMPENDTSP  Compare end timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  COMPARE END TIMESTAMP
AUDLVL  Audit level  CHAR(10)  *DISABLED, *LEVEL10, *LEVEL20, *LEVEL30  AUDIT LEVEL
STATUS  Audit status  CHAR(10)  *AUTORCVD, *CMPACT, *DIFFNORCY, *DISABLED, *FAILED, *NEW, *NODIFF, *NOTRCVD, *NOTRUN, *QUEUED, *RCYACT, *USRRCVD  AUDIT STATUS
TTLSELECT  Total selected  PACKED(9 0)  0-999999999  TOTAL OBJECTS SELECTED
NOTRCVD  Not recovered  PACKED(9 0)  0-999999999  OBJECTS NOT RECOVERED
RCVD  Recovered  PACKED(9 0)  0-999999999  OBJECTS RECOVERED
NOTCMP  Not compared  PACKED(9 0)  0-999999999  OBJECTS NOT COMPARED
CMP  Compared  PACKED(9 0)  0-999999999  OBJECTS COMPARED
DETECTNE  Detected not equal  PACKED(9 0)  0-999999999  DETECTED NOT EQUAL
DURATION  Audit duration  TIME  HH.MM.SS  AUDIT DURATION
RCYSTRTSP  Recovery start timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  RECOVERY START TIMESTAMP
RCYENDTSP  Recovery end timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  RECOVERY END TIMESTAMP
AUDRCY  Audit recovery status  CHAR(10)  *DISABLED, *LEVEL10, *LEVEL20, *LEVEL30  AUDIT RECOVERY STATUS
MXAUDOBJ outfile (WRKAUDOBJ, WRKAUDOBJH commands)
This outfile is used by the Work with Audited Objects (WRKAUDOBJ) and the Work with Audited Obj. History (WRKAUDOBJH) commands.
When created by the WRKAUDOBJ command, the outfile may include objects from multiple audits; however, only information from the
most recent audit that compared an object is included.
When created by the WRKAUDOBJH command, the outfile includes the available audit history for a single object that was audited for a
specific data group. The outfile records are sorted in reverse chronological order so that the audit history having the most recent audit start
date is at the top. For a given object of type *FILE, there can be records from multiple audits (#FILATR, #FILDTA, #MBRRCDCNT).
Table 129. MXAUDOBJ outfile (WRKAUDOBJ and WRKAUDOBJH commands)
Field  Description  Type, length  Valid values  Column headings
DGDFN  Data group name (Data group definition)  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 1
DGSYS2  System 2 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 2
RULE  Audit rule  CHAR(10)  #DLOATR, #FILATR, #FILATRMBR, #FILDTA, #IFSATR, #MBRRCDCNT, #OBJATR  AUDIT RULE
CMPSTRTSP  Compare start timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  COMPARE START TIMESTAMP
TYPE  Object type  CHAR(10)  Refer to the OM5100P file for the list of valid object types  OBJECT TYPE
OBJLIB  Library name  CHAR(10)  User-defined name, BLANK  OBJECT LIBRARY
OBJ  Object name  CHAR(10)  User-defined name, BLANK  OBJECT
OBJMBR  Member name  CHAR(10)  User-defined name, BLANK  MEMBER
DLO  DLO name  CHAR(12)  User-defined name, BLANK  DLO
FLR  Folder name  CHAR(63)  User-defined name, BLANK  FOLDER
IFS  Object IFS name  CHAR(1024) VARLEN(100)  User-defined name, BLANK  IFS OBJECT
CCSID  IFS name CCSID  BINARY(5)  Numeric (0-65535)  CCSID
RMTOBJLIB  Remote library name  CHAR(10)  User-defined name, BLANK  REMOTE OBJECT LIBRARY
RMTOBJ  Remote object name  CHAR(10)  User-defined name, BLANK  REMOTE OBJECT
RMTOBJMBR  Remote member name  CHAR(10)  User-defined name, BLANK  REMOTE MEMBER
RMTDLO  Remote DLO name  CHAR(12)  User-defined name, BLANK  REMOTE DLO
RMTFLR  Remote folder name  CHAR(63)  User-defined name, BLANK  REMOTE FOLDER
RMTIFS  Remote object IFS name  CHAR(1024) VARLEN(100)  User-defined name, BLANK  REMOTE IFS OBJECT
AUDSTS  Audited status  CHAR(10)  *EQ, *NE, *NS, *RCVD, *SYNC, *UN  OVERALL AUDITED STATUS
CMPSTS  Compare status  CHAR(10)  *APY, *CMT, *CO, *CO (LOB), *DT, *EQ, *EQ (DATE), *EQ (OMIT), *EC, *FF, *FMC, *FMT, *HLD, *IOERR, *LCK, *NA, *NC, *NE, *NF1, *NF2, *NS, *REP, *SJ, *SP, *SYNC, *UA, *UE, *UN (any status issued by an audit's compare phase)  OBJECT COMPARE STATUS
RCYSTRTSP  Recovery start timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  RECOVERY START TIMESTAMP
RCYSTS  Recovery status  CHAR(10)  *RECOVERED, *RCYFAILED, *RCYSBM, or BLANK  RECOVERY STATUS
MXDGACT outfile (WRKDGACT command)

Table 130. MXDGACT outfile (WRKDGACT command)
Field  Description  Type, length  Valid values  Column headings
DGDFN  Data group name (Data group definition)  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 1
DGSYS2  System 2 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 2
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
STATUS  Object status category  CHAR(10)  *COMPLETED, *FAILED, *DELAYED, *ACTIVE  OBJECT STATUS CATEGORY
TYPE  Object type  CHAR(10)  Refer to the OM5100P file for the list of valid object types  OBJECT TYPE
OBJATR  Object attribute  CHAR(10)  Refer to the OM5200P file for the list of valid object attributes  OBJECT ATTRIBUTE
REASON  Failure reason  CHAR(11)  *INUSE, *RESTRICTED, *NOTFOUND, *OTHER, blank  FAILURE REASON
COUNT  Entry count  PACKED(5 0)  0-9999 (9999 = maximum value supported)  ENTRY COUNT
OBJCAT  Object category  CHAR(10)  *DLO, *IFS, *SPLF, *LIB  OBJECT CATEGORY
OBJLIB  Object library  CHAR(10)  User-defined name, BLANK  OBJECT LIBRARY
OBJ  Object name  CHAR(10)  User-defined name, BLANK  OBJECT
OBJMBR  Member name  CHAR(10)  User-defined name, BLANK  MEMBER
DLO  DLO name  CHAR(12)  User-defined name, BLANK  DLO
FLR  Folder name  CHAR(63)  User-defined name, BLANK  FOLDER
SPLFJOB  Spooled file job name  CHAR(26)  Three-part spooled file name, BLANK  SPLF JOB
SPLF  Spooled file name  CHAR(10)  User-defined name, BLANK  SPLF NAME
SPLFNBR  Spooled file number  PACKED(7 0)  1-99999, BLANK  SPLF NUMBER
OUTQ  Output queue  CHAR(10)  User-defined name, *NONE, BLANK  OUTQ
OUTQLIB  Output queue library  CHAR(10)  User-defined name, *NONE, BLANK  OUTQ LIBRARY
IFS  Object IFS name  CHAR(1024) VARLEN(100)  User-defined name, BLANK  IFS OBJECT
CCSID  Object CCSID  BIN(5 0)  Defaults to the job CCSID; if unable to convert to the job's CCSID or the job CCSID is 65535, related fields will be written in Unicode  CCSID
IFSUCS  IFS object (Unicode)  GRAPHIC(512) VARLEN(75) CCSID(13488)  User-defined name (Unicode), BLANK  IFS OBJECT (UNICODE)
MXDGACTE outfile (WRKDGACTE command)

Table 131. MXDGACTE outfile (WRKDGACTE command)
Field  Description  Type, length  Valid values  Column headings
DGDFN  Data group name (Data group definition)  CHAR(10)  User-defined data group name  DGDFN NAME
DGSYS1  System 1 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 1
DGSYS2  System 2 name (Data group definition)  CHAR(8)  User-defined system name  DGDFN SYSTEM 2
DTASRC  Data source  CHAR(10)  *SYS1, *SYS2  DATA SOURCE
STATUS  Object status category  CHAR(10)  *COMPLETED, *FAILED, *DELAYED, *ACTIVE  OBJECT STATUS CATEGORY
OBJSTATUS  Object status  CHAR(2)  Refer to online help for the complete list  OBJECT STATUS
TYPE  Object type  CHAR(10)  Refer to the OM5100P file for the list of valid object types  OBJECT TYPE
OBJATR  Object attribute  CHAR(10)  Refer to the OM5200P file for the list of valid object attributes  OBJECT ATTRIBUTE
REASON  Failure reason  CHAR(11)  *INUSE, *RESTRICTED, *NOTFOUND, *OTHER, blank  FAILURE REASON
OBJCAT  Object category  CHAR(10)  *DLO, *IFS, *SPLF, *LIB  OBJECT CATEGORY
SEQJRN  Journal sequence number  ZONED(20 0)  1-99999999999999999999  JOURNAL SEQUENCE NUMBER
SEQNBR  Journal sequence number  PACKED(10 0)  1-9999999999  JOURNAL SEQUENCE NUMBER
JRNCODE  Journal entry code  CHAR(1)  Valid journal codes  JOURNAL ENTRY CODE
JRNTYPE  Journal entry type  CHAR(2)  Valid journal types  JOURNAL ENTRY TYPE
JRNTSP  Journal entry timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  JOURNAL ENTRY TIMESTAMP
JRNSNDTSP  Journal entry send timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  JOURNAL ENTRY SEND TIMESTAMP
JRNRCVTSP  Journal entry receive timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  JOURNAL ENTRY RCV TIMESTAMP
JRNRTVTSP  Journal entry retrieve timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  JOURNAL ENTRY RTV TIMESTAMP
CNRSNDTSP  Container send timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  CONTAINER SEND TIMESTAMP
JRNAPYTSP  Journal entry apply timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  JOURNAL ENTRY APY TIMESTAMP
REQCNRSND  Requires container  CHAR(10)  *YES, *NO  REQUIRES CONTAINER SEND
RTYWAIT  Waiting for retry  CHAR(10)  *YES, *NO  WAITING FOR RETRY
RTYATTEMPT  Number of retries attempted  PACKED(5 0)  0-1998  NUMBER OF RETRIES ATTEMPTED
RTYREMAIN  Number of retries remaining  PACKED(5 0)  0-1998  NUMBER OF RETRIES REMAINING
DLYITV  Delay interval  PACKED(5 0)  1-7200  DELAY INTERVAL
NXTRTYTSP  Next retry timestamp  TIMESTAMP  YYYY-MM-DD.HH.MM.SS.mmmmmm  NEXT RETRY TIMESTAMP
MSGID  Message ID  CHAR(7)  Valid message ID, BLANK  MESSAGE ID
MSG  Message data  CHAR(256) VARLEN(50)  Valid message data, BLANK  MESSAGE DATA
FAILEDJOB  Failed job name  CHAR(26)  Job name, BLANK  FAILED JOB NAME
JRNENT  Journal entry  CHAR(400)  Journal entry  JOURNAL ENTRY
OBJLIB  Object library  CHAR(10)  User-defined name, BLANK  OBJECT LIBRARY
OBJ  Object name  CHAR(10)  User-defined name, BLANK  OBJECT
OBJMBR  Member name  CHAR(10)  User-defined name, BLANK  MEMBER
DLO  DLO name  CHAR(12)  User-defined name, BLANK  DLO
FLR  Folder name  CHAR(63)  User-defined name, BLANK  FOLDER
SPLFJOB  Spooled file job name  CHAR(26)  Three-part spooled file name, BLANK  SPLF JOB
SPLF  Spooled file name  CHAR(10)  User-defined name, BLANK  SPLF NAME
SPLFNBR  Spooled file number  PACKED(7 0)  1-99999, BLANK  SPLF NUMBER
OUTQ  Output queue  CHAR(10)  User-defined name, *NONE, BLANK  OUTQ
OUTQLIB  Output queue library  CHAR(10)  User-defined name, *NONE, BLANK  OUTQ LIBRARY
IFS  Object IFS name  CHAR(1024) VARLEN(100)  User-defined name, BLANK  IFS OBJECT
CCSID  Object CCSID  BIN(5 0)  Defaults to the job CCSID; if unable to convert to the job's CCSID or the job CCSID is 65535, related fields will be written in Unicode  CCSID
TGTOBJLIB  Target system object library name  CHAR(10)  User-defined name, BLANK  TARGET OBJECT LIBRARY
TGTOBJ  Target system object name  CHAR(10)  User-defined name, BLANK  TARGET OBJECT
TGTOBJMBR  Target system object member name  CHAR(10)  User-defined name, BLANK  TARGET MEMBER
TGTDLO  Target system DLO name  CHAR(12)  User-defined name, BLANK  TARGET DLO
TGTFLR  Target system object folder name  CHAR(63)  User-defined name, BLANK  TARGET FOLDER
TGTSPLFJOB  Target system spooled file job name  CHAR(26)  Three-part spooled file name, BLANK  TARGET SPLF JOB
TGTSPLF  Target system spooled file name  CHAR(10)  User-defined name, BLANK  TARGET SPLF NAME
TGTSPLFNBR  Target system spooled file job number  PACKED(7 0)  1-999999, BLANK  TARGET SPLF NUMBER
TGTOUTQ  Target system output queue  CHAR(10)  User-defined name, BLANK  TARGET OUTQ
TGTOUTQLIB  Target system output queue library  CHAR(10)  User-defined name, BLANK  TARGET OUTQ LIBRARY
TGTIFS  Target system IFS name  CHAR(1024) VARLEN(100)  User-defined name, BLANK  TARGET IFS OBJECT
RNMOBJLIB  Renamed object library name  CHAR(10)  User-defined name, BLANK  RENAMED OBJECT LIBRARY
RNMOBJ Renamed object name CHAR(10) User-defined name, BLANK RENAMED
OBJ ECT
RNMOBJ MBR Renamed object member
name
CHAR(10) User-defined name, BLANK RENAMED
MEMBER
RNMDLO Renamed DLO name CHAR(12) User-defined name, BLANK RENAMED
DLO
RNMFLR Renamed object folder
name
CHAR(63) User-defined name, BLANK RENAMED
FOLDER
RNMSPLFJ OB Renamed spooled file job
name
CHAR(26) Three part spooled file name, BLANK RENAMED
SPLF J OB
RNMSPLF Renamed spooled file name CHAR(10) User-defined name, BLANK RENAMED
SPLF NAME
RNMSPLFNBR Renamed spooled file
number
PACKED(7 0) 1-999999, BLANK RENAMED
SPLF
NUMBER
RNMOUTQ Renamed output queue CHAR(10) User-defined name, BLANK RENAMED
OUTQ
RNMOUTQLIB Renamed output queue
library
CHAR(10) User-defined name, BLANK RENAMED
OUTQ
LIBRARY
RNMIFS Renamed IFS object name CHAR(1024)
VARLEN(100)
User-defined name, BLANK RENAMED
IFS OBJ ECT
RNMOBJ LIB Renamed target object
library name
CHAR(10) User-defined name, BLANK RENAMED
TGT
OBJ ECTS
LIBRARY
RNMTGTOBJ Renamed target object
name
CHAR(10) User-defined name, BLANK RENAMED
TARGET
OBJ ECT
Table 131. MXDGACTE outfile (WRKDGACTE command)
Field Description Type, length Valid values Column head-
ings
MXDGACTE outfile (WRKDGACTE command)
658
RNMTOBJ MBR Renamed target object
member name
CHAR(10) User-defined name, BLANK RENAMED
TARGET OBJ
MEMBER
RNMTGTDLO Renamed target object DLO
name
CHAR(12) User-defined name, BLANK RENAMED
TARGET DLO
RNMTGTFLR Renamed target object
folder name
CHAR(63) User-defined name, BLANK RENAMED
TARGET
FOLDER
RNMTSPLFJ Renamed target spooled file
job name
CHAR(26) Three part spooled file name, BLANK RENAMED
TARGET SPLF
J OB
RNTTGTSPLF Renamed target spooled file
name
CHAR(10) User-defined name, BLANK RENAMED
TARGET SPLF
NAME
RNMTSPLFN Renamed target spooled file
number
PACKED(7 0) 1-999999, BLANK RENAMED
TARGET SPLF
NUMBER
RNMTGTOUTQ Renamed target output
queue
CHAR(10) User-defined name, BLANK RENAMED
TARGET
OUTQ
RNMTOUTQL Renamed target output
queue library
CHAR(10) User-defined name, BLANK RENAMED
TARGET
OUTQ
LIBRARY
RNMTGTIFS Renamed target object IFS
name
CHAR(1024)
VARLEN(100)
User-defined name, BLANK RENAMED
TARGET IFS
OBJ ECT
COOPDB Cooperate with DB CHAR(10) *YES, *NO, BLANK COOPERATE
WITH
DATABASE
OBJ FID IFS object file identifier
(binary format)
BIN(16) Binary representation of file identifier IFS OBJ ECT
FID (Binary)
Table 131. MXDGACTE outfile (WRKDGACTE command)
Field Description Type, length Valid values Column head-
ings
MXDGACTE outfile (WRKDGACTE command)
659
OBJ FIDHEX IFS object file identifier
(character format)
CHAR(32) Character representation of file
identifier
IFS OBJ ECT
FID (Hex)
IFSUCS IFS Object (UNICODE) GRAPHIC(512)
VARLEN(75)
CCSID(13488
User-defined name (Unicode),
BLANK
IFS Object
(UNICODE)
TGTIFSUCS TGT IFS Object (UNICODE) GRAPHIC(512)
VARLEN(75)
CCSID(13488)
User-defined name (Unicode),
BLANK
TGT IFS
Object
(UNICODE)
RNMIFSUCS RNM IFS Object
(UNICODE)
GRAPHIC(512)
VARLEN(75)
CCSID(13488)
User-defined name (Unicode),
BLANK
RNM IFS
Object
(UNICODE)
RNMTGTIFSU RNM TGT IFS Object
(UNICODE)
GRAPHIC(512)
VARLEN(75)
CCSID(13488)
User-defined name (Unicode),
BLANK
RNM TGT IFS
Object
(UNICODE)
Table 131. MXDGACTE outfile (WRKDGACTE command)
Field Description Type, length Valid values Column head-
ings
MXDGDAE outfile (WRKDGDAE command)
Table 132. MXDGDAE outfile (WRKDGDAE command)
Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
DTAARA1 | System 1 data area | CHAR(10) | User-defined name, *ALL | SYSTEM 1 DATA AREA
DTAARALIB1 | System 1 data area library | CHAR(10) | User-defined name | SYSTEM 1 DATA AREA LIBRARY
DTAARA2 | System 2 data area | CHAR(10) | User-defined name, *ALL | SYSTEM 2 DATA AREA
DTAARALIB2 | System 2 data area library | CHAR(10) | User-defined name | SYSTEM 2 DATA AREA LIBRARY
TEXT | Description | CHAR(50) | User-defined text | DESCRIPTION
RTVERR | Retrieve error field | CHAR(10) | *NO, *YES | RETRIEVE ERROR FIELD
MXDGDFN outfile (WRKDGDFN command)
Table 133. MXDGDFN outfile (WRKDGDFN command)
Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group definition name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
DGSHRTNM | Data group short name | CHAR(3) | Short data group name | DGDFN SHORT NAME
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
ALWSWT | Allow to be switched | CHAR(10) | *YES, *NO | ALLOW SWITCH
DGTYPE | Data group type | CHAR(10) | *ALL, *OBJ, *DB | DG TYPE
PRITFRDFN | Configured primary transfer definition | CHAR(10) | User-defined name, *DGDFN | CONFIGURED PRITFRDFN
SECTFRDFN | Secondary transfer definition | CHAR(10) | User-defined name, *NONE | CONFIGURED SECTFRDFN
RDRWAIT | Reader wait time (seconds) | PACKED(5 0) | 0-600 | DB READER WAITTIME
JRNTGT | Journal on target | CHAR(10) | *YES, *NO | JOURNAL ON TARGET
JRNDFN1 | Configured system 1 journal definition | CHAR(10) | *DGDFN, user-defined name, *NONE | CONFIGURED SYSTEM 1 JRNDFN
JRNDFN1NM | Actual system 1 journal definition name | CHAR(10) | User-defined name, blank | ACTUAL SYSTEM 1 JRNDFN
JRNDFN2 | Configured system 2 journal definition | CHAR(10) | *DGDFN, user-defined name, *NONE | CONFIGURED SYSTEM 2 JRNDFN
JRNDFN2NM | Actual system 2 journal definition name | CHAR(10) | User-defined name, blank | ACTUAL SYSTEM 2 JRNDFN
JRNDFN2SYS | System 2 journal definition system name | CHAR(8) | User-defined name, blank | JRNDFN SYSTEM 2
RJLNK | User remote journal link | CHAR(10) | *YES, *NO | RJ LINK
NBRDBAPY | Number of DB apply sessions | PACKED(3 0) | 1-6 | CURRENT NUMBER OF DB APPLIES
RQSDBAPY | Requested number of DB apply sessions | PACKED(3 0) | 1-6 | REQUESTED NUMBER OF DB APPLIES
DBBFRIMG | Before images (DB journal entry processing) | CHAR(10) | *IGNORE, *SEND | DBJRNPRC BEFORE IMAGES
DBNOTINDG | For files not in data group (DB journal entry processing) | CHAR(10) | *IGNORE, *SEND | DBJRNPRC FILES NOT IN DG
DBMMXGEN | Generated by MIMIX activity (DB journal entry processing) | CHAR(10) | *IGNORE, *SEND | DBJRNPRC GEND BY MIMIX ACT
DBNOTUSED | Not used by MIMIX (DB journal entry processing) | CHAR(10) | *IGNORE, *SEND | DBJRNPRC NOT USED BY MIMIX
TEXT | Description | CHAR(50) | *BLANK, user-defined text | DESCRIPTION
SYNCCHKITV | Synchronization check interval | PACKED(5 0) | 0-999999 (0 = *NONE) | SYNC CHECK INTERVAL
TSPITV | Time stamp interval | PACKED(5 0) | 0-999999 (0 = *NONE) | TIME STAMP INTERVAL
VFYITV | Verify interval | PACKED(5 0) | 1000-999999 | VERIFICATION INTERVAL
DTAARAITV | Data area polling interval | PACKED(5 0) | 1-7200 | DATA AREA POLLING INTERVAL
RTYNBR | Number of times to retry | PACKED(3 0) | 0-999 | NUMBER OF RETRIES
RTYDLYITV1 | First retry delay interval | PACKED(5 0) | 1-3600 | FIRST RETRY INTERVAL
RTYDLYITV2 | Second retry delay interval | PACKED(5 0) | 10-7200 | SECOND RETRY INTERVAL
ADPCHE | Adaptive cache | CHAR(10) | *YES, *NO. For version 7 and higher, the field is always *NO. | USE ADAPTIVE CACHE
DATACRG | Data cluster resource group | CHAR(10) | User-defined name, blank, *NONE | DATA CRG
DFTJRNIMG | Journal image (File entry options) | CHAR(10) | *AFTER, *BOTH | FEOPT JOURNAL IMAGES
DFTOPNCLO | Omit open / close entries (File entry options) | CHAR(10) | *NO, *YES | FEOPT OMIT OPEN CLOSE
DFTREPTYPE | Replication type (File entry options) | CHAR(10) | *POSITION, *KEYED | FEOPT REPLICATION TYPE
DFTAPYLOCK | Lock member during apply (File entry options) | CHAR(10) | *YES, *NO | FEOPT LOCK MBR ON APPLY
DFTAPYSSN | Configured apply session (File entry options) | CHAR(10) | *ANY, A-F | FEOPT CFG APPY SESSION
DFTCRCLS | Collision resolution (File entry options) | CHAR(10) | *HLDERR, *AUTOSYNC, user-defined name | FEOPT COLLISION RESOLUTION
DFTSBTRG | Disable triggers during apply (File entry options) | CHAR(10) | *YES, *NO | FEOPT DISABLE TRIGGERS
DFTPRCCST | Process constraint entries (File entry options) | CHAR(10) | *YES | FEOPT PROCESS CONSTRAINT
DBFRCITV | Force data interval (Database apply processing) | PACKED(5 0) | 1-99999 | DBAPYPRC FORCE DATA
DBMAXOPN | Maximum open members (Database apply processing) | PACKED(5 0) | 50-32767 | DBAPYPRC MAX OPEN MEMBERS
DBAPYTWRN | Threshold warning (Database apply processing) | PACKED(7 0) | 0, 100-9999999 | DBAPYPRC THRESHOLD WARNING
DBAPYHST | Apply history log spaces (Database apply processing) | PACKED(5 0) | 0-9999 | DBAPYPRC HISTORY
DBKEEPLOG | Keep journal log spaces (Database apply processing) | PACKED(5 0) | 0-9999 | DBAPYPRC KEEP JRN
DBLOGSIZE | Size of log spaces (MB) (Database apply processing) | PACKED(5 0) | 1-16 | DBAPYPRC SIZE OF LOG SPACES
OBJDFTOWN | Object default owner (Object processing) | CHAR(10) | User-defined name | OBJPRC DEFAULT OWNER
OBJDLOMTH | DLO transmission method (Object processing) | CHAR(10) | *OPTIMIZED, *SAVRST | OBJPRC DLO TRANSFER METHOD
OBJIFSMTH | IFS transmission method (Object processing) | CHAR(10) | *SAVRST, *OPTIMIZED | OBJPRC IFS TRANSFER METHOD
OBJUSRSTS | User profile status (Object processing) | CHAR(10) | *SRC, *TGT, *ENABLE, *DISABLE | OBJPRC USER PROFILE STATUS
OBJKEEPSPL | Keep deleted spooled files (Object processing) | CHAR(10) | *YES, *NO | OBJPRC KEEP DELETED SPLF
OBJKEEPDLO | Keep DLO system name (Object processing) | CHAR(10) | *YES, *NO | OBJPRC KEEP DLO SYS NAME
OBJRTVDLY | Retrieve delay (Object retrieve processing) | PACKED(3 0) | 0-999 | OBJRTVPRC DELAY
OBJRTVMINJ | Minimum number of jobs (Object retrieve processing) | PACKED(3 0) | 1-99 | OBJRTVPRC MIN NUMBER OF JOBS
OBJRTVMAXJ | Maximum number of jobs (Object retrieve processing) | PACKED(3 0) | 1-99 | OBJRTVPRC MAX NUMBER OF JOBS
OBJRTVTHLD | Threshold for more jobs (Object retrieve processing) | PACKED(5 0) | 1-99999 | OBJRTVPRC THLD FOR MORE JOBS
CNRSNDMINJ | Minimum number of jobs (Container send processing) | PACKED(3 0) | 1-99 | CNRSNDPRC MIN NUMBER OF JOBS
CNRSNDMAXJ | Maximum number of jobs (Container send processing) | PACKED(3 0) | 1-99 | CNRSNDPRC MAX NUMBER OF JOBS
CNRSNDTHLD | Threshold for more jobs (Container send processing) | PACKED(5 0) | 1-99999 | CNRSNDPRC THLD FOR MORE JOBS
OBJAPYMINJ | Minimum number of jobs (Object apply processing) | PACKED(3 0) | 1-99 | OBJAPYPRC MIN NUMBER OF JOBS
OBJAPYMAXJ | Maximum number of jobs (Object apply processing) | PACKED(3 0) | 1-99 | OBJAPYPRC MAX NUMBER OF JOBS
OBJAPYTHLD | Threshold for more jobs (Object apply processing) | PACKED(5 0) | 1-99999 | OBJAPYPRC THLD FOR MORE JOBS
OBJAPYTWRN | Threshold for warning messages (Object apply processing) | PACKED(5 0) | 0, 50-99999 (0 = *NONE) | OBJAPYPRC THLD FOR WARNING MSGS
SBMUSR | User profile for submit job | CHAR(10) | *JOBD, *CURRENT | USRPRF FOR SUBMIT JOB
SNDJOBD | Send job description | CHAR(10) | Job description name | SEND JOBD
SNDJOBDLIB | Send job description library | CHAR(10) | Job description library | SEND JOBD LIBRARY
APYJOBD | Apply job description | CHAR(10) | Job description name | APPLY JOBD
APYJOBDLIB | Apply job description library | CHAR(10) | Job description library | APPLY JOBD LIBRARY
RGZJOBD | Reorganize job description | CHAR(10) | Job description name | REORGANIZE JOBD
RGZJOBDLIB | Reorganize job description library | CHAR(10) | Job description library | REORGANIZE JOBD LIBRARY
SYNJOBD | Synchronize job description | CHAR(10) | Job description name | SYNC JOBD
SYNJOBDLIB | Synchronize job description library | CHAR(10) | Job description library | SYNC JOBD LIBRARY
SAVACT | Save while active (seconds) | PACKED(5 0) | -1, 0, 1-999999 (0 = save while active for files only, with a 120 second wait time; -1 = no save while active; 1-99999 = save while active for all object types with the specified wait time) | SAVE WHILE ACTIVE (SEC)
RSTARTTIME | Restart Time | CHAR(8) | 000000-235959, *NONE, *SYSDFN1, *SYSDFN2; 000000 = midnight (default) | RESTART TIME
ASPGRP1 | System 1 ASP group | CHAR(10) | *NONE, user-defined name | SYSTEM 1 ASP GROUP
ASPGRP2 | System 2 ASP group | CHAR(10) | *NONE, user-defined name | SYSTEM 2 ASP GROUP
COOPJRN | Cooperative Journal | CHAR(10) | *SYSJRN, *USRJRN | COOPERATIVE JOURNAL
RCYWINPRC | Recovery Window Process | CHAR(7) | *NONE, *ALLAPY | RECOVERY PROCESS
RCHWINDUR | Recovery Window Duration | PACKED(5 0) | 0-99999 | RECOVERY DURATION
JRNATCRT | Journal at creation | CHAR(10) | *DFT, *YES, *NO | JOURNAL AT CREATION
RJLNKTHLDM | RJ Link Threshold (Time in minutes) | PACKED(4 0) | 0-9999 (0 = *NONE) | RJLNK THRESHOLD (TIME IN MIN)
RJLNKTHLDE | RJ Link Threshold (Number of journal entries) | PACKED(7 0) | 0, 1000-9999999 (0 = *NONE) | RJLNK THRESHOLD (NBR OF JRNE)
DBSNDTHLDM | DB Send/Reader Threshold (Time in minutes) | PACKED(4 0) | 0-9999 (0 = *NONE) | DBSND/DBRDR THRESHOLD (TIME IN MIN)
DBSNDTHLDE | DB Send/Reader Threshold (Number of journal entries) | PACKED(7 0) | 0, 1000-9999999 (0 = *NONE) | DBSND/DBRDR THRESHOLD (NBR OF JRNE)
OBJSNDTHDM | Object Send Threshold (Time in minutes) | PACKED(4 0) | 0-9999 (0 = *NONE) | OBJSND THRESHOLD (TIME IN MIN)
OBJSNDTHDE | Object Send Threshold (Number of journal entries) | PACKED(7 0) | 0, 1000-9999999 (0 = *NONE) | OBJSND THRESHOLD (NBR OF JRNE)
OBJRTVTHDE | Object Retrieve Threshold (Number of activity entries) | PACKED(5 0) | 0, 50-99999 (0 = *NONE) | OBJRTV THRESHOLD
CNRSNDTHDE | Container Send Threshold (Number of activity entries) | PACKED(5 0) | 0, 50-99999 (0 = *NONE) | CNRSND THRESHOLD
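Because each record in this outfile carries the full configuration of one data group, simple SQL queries can answer compliance questions across all data groups at once. The following statement is a hedged illustration only, not an example shipped with MIMIX; library/filename is a placeholder for an outfile you generate with the WRKDGDFN command, and the field names are taken from Table 133. It lists the data groups that are switchable and currently replicate from system 1:

SELECT DGDFN, DGSYS1, DGSYS2, DGTYPE FROM library/filename WHERE ALWSWT = '*YES' AND DTASRC = '*SYS1'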
MXDGDLOE outfile (WRKDGDLOE command)
Table 134. MXDGDLOE outfile (WRKDGDLOE command)
Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
FLR1 | System 1 folder | CHAR(63) | User-defined name | SYSTEM 1 FOLDER
DOC1 | System 1 document | CHAR(12) | User-defined name, *ALL | SYSTEM 1 DLO
OWNER | Owner | CHAR(10) | User-defined name, *ALL | OWNER
FLR2 | System 2 folder | CHAR(63) | *FLR1, User-defined name | SYSTEM 2 FOLDER
DOC2 | System 2 document | CHAR(12) | *DOC1, User-defined name | SYSTEM 2 DLO
OBJAUD | Object auditing value | CHAR(10) | *CHANGE, *ALL, *NONE | OBJECT AUDITING VALUE
PRCTYPE | Process type | CHAR(10) | *INCLD, *EXCLD | PROCESS TYPE
OBJRTVDLY | Retrieve delay (Object retrieve processing) | PACKED(3 0) | 0-999, *DGDFT | OBJRTVPRC DELAY
MXDGFE outfile (WRKDGFE command)
Table 135. MXDGFE outfile (WRKDGFE command)
Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
FILE1 | System 1 file name | CHAR(10) | User-defined name | SYSTEM 1 FILE
LIB1 | System 1 library name | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY
MBR1 | System 1 member name | CHAR(10) | User-defined name | SYSTEM 1 MEMBER
FILE2 | System 2 file name | CHAR(10) | User-defined name | SYSTEM 2 FILE
LIB2 | System 2 library name | CHAR(10) | User-defined name | SYSTEM 2 LIBRARY
MBR2 | System 2 member name | CHAR(10) | User-defined name | SYSTEM 2 MEMBER
TEXT | Description | CHAR(50) | User-defined text | DESCRIPTION
JRNIMG | Journal image (File entry options) | CHAR(10) | *AFTER, *BOTH, *DGDFT | FEOPT JOURNAL IMAGE
OPNCLO | Omit open/close entries (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT OMIT OPEN CLOSE
REPTYPE | Replication type (File entry options) | CHAR(10) | *POSITION, *KEYED, *DGDFT | FEOPT REPLICATION TYPE
APYLOCK | Lock member during apply (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT LOCK MBR ON APPLY
FTRBFRIMG | Filter before image (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT FILTER BFR IMAGE
APYSSN | Current apply session (File entry options) | CHAR(10) | A-F, *DGDFT | FEOPT CURRENT APYSSN
RQSAPYSSN | Configured or requested apply session (File entry options) | CHAR(10) | A-F, *DGDFT | FEOPT REQUESTED APYSSN
CRCLS | Collision resolution class (File entry options) | CHAR(10) | *HLDERR, *AUTOSYNC, user-defined name | FEOPT COLLISION RESOLUTION
DSBTRG | Disable triggers during apply (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT DISABLE TRIGGERS
PRCTRG | Process trigger entries (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT PROCESS TRIGGERS
PRCCST | Process constraint entries (File entry options) | CHAR(10) | *YES | FEOPT PROCESS CONSTRAINTS
STATUS | File status | CHAR(10) | *ACTIVE, *RLSWAIT, *RLSCLR, *HLD, *HLDIGN, *RLS, *HLDRGZ, *HLDPRM, *HLDRNM, *HLDSYNC, *HLDRTY, *HLDERR, *HLDRLTD, *CMPACT, *CMPRLS, *CMPRPR | CURRENT STATUS
RQSSTS | Requested file status | CHAR(10) | *ACTIVE, *HLD, *HLDIGN, *RLS, *RLSWAIT | REQUESTED STATUS
JRN1STS | System 1 journaled | CHAR(10) | *YES, *NO, *NA | SYSTEM 1 JOURNALED
JRN2STS | System 2 journaled | CHAR(10) | *YES, *NO, *NA | SYSTEM 2 JOURNALED
ERRCDE | Error code | CHAR(2) | Valid error codes | ERROR CODE
JECDE | Journal entry code | CHAR(1) | Valid journal entry code | JOURNAL ENTRY CODE
JETYPE | Journal entry type | CHAR(2) | Valid journal entry type | JOURNAL ENTRY TYPE
MXDGIFSE outfile (WRKDGIFSE command)
Table 136. MXDGIFSE outfile (WRKDGIFSE command)
Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
OBJ1 | System 1 object | CHAR(1024) | User-defined name | SYSTEM 1 IFS OBJECT
OBJ2 | System 2 object | CHAR(1024) | *OBJ1, user-defined name | SYSTEM 2 IFS OBJECT
CCSID | Object CCSID | BIN(5 0) | Defaults to job CCSID. If the job CCSID is 65535 or data cannot be converted to the job CCSID, OBJ1 and OBJ2 values remain in Unicode | CCSID
PRCTYPE | Process type | CHAR(10) | *INCLD, *EXCLD | PROCESS TYPE
TYPE | Object type | CHAR(10) | *DIR, *STMF, *SYMLNK | OBJECT TYPE
OBJRTVDLY | Retrieve delay (Object retrieve processing) | CHAR(10) | 0-999, *DGDFT | OBJRTVPRC DELAY
COOPDB | Cooperate with database | CHAR(10) | *YES, *NO, blank | COOPERATE WITH DATABASE
OBJAUD | Object auditing | CHAR(10) | *NONE, *CHANGE, *ALL | OBJECT AUDITING VALUE
MXDGSTS outfile (WRKDG command)
The MXDGSTS outfile contains status information that corresponds to fields shown in the following interfaces:
• MIMIX Availability Manager: the data group detail status displays for version 6 or earlier installations
• 5250 emulator: the Work with Data Groups (WRKDG) command
The Work with Data Groups (WRKDG) command generates new outfiles based on the MXDGSTSF record format from the MXDGSTS model database file supplied with MIMIX. The content of the outfile is based on the criteria specified on the command. If no data groups match the specified criteria, the outfile is empty.
Usage notes:
When the value *UNKNOWN is returned for either the Data group source system status (DTASRCSTS) field or the Data group target system status (DTATGTSTS) field, status information is not available from the system that is remote relative to where the request was made. For example, if you requested the report from the target system and the value returned for DTASRCSTS is *UNKNOWN, the WRKDG request could not communicate with the source system. Fields that rely on data collected from the remote system will be blank.
If a data group is configured for only database or only object replication, any fields associated with processes not used by the configured type of replication will be blank.
See WRKDG outfile SELECT statement examples on page 699 for examples of how to query the contents of this output file.
You can automate the process of gathering status. If you use MIMIX Monitor to create a synchronous interval monitor, the monitor can specify the command to generate the outfile. Through exit programs, you can program the monitor to take action based on the status returned in the outfile. For information about creating interval monitors, see the Using MIMIX Monitor book.
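Because all of this status lands in a single record per data group, a monitor exit program can often reduce its status check to one query against the generated outfile. The following statement is a minimal sketch of that idea, written in the style of the SELECT examples referenced above; library/filename is a placeholder for your own generated outfile, and the field names come from Table 137:

SELECT DGDFN, DGSTS, DBAPYBKLG, OBJAPYBKLG FROM library/filename WHERE DGSTS = '*ERROR' OR DGSTS = '*WARNING'

A monitor could run such a query on each interval and raise an alert only when rows are returned.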
Table 137. MXDGSTS outfile (WRKDG command)
Field | Description | Type, length | Valid values | Column headings
ENTRYTSP | Entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | TIME REQUEST PROCESSED
DGDFN | Data group definition name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
STSTIME | Elapsed time for data group status (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | ELAPSED TIME
STSTIMF | Elapsed time for data group status (HHH:MM:SS) | CHAR(10) | Calculated, 0-9999999 | ELAPSED TIME (HHH:MM:SS)
STSAVAIL | Data group status retrieved from these systems | CHAR(10) | *ALL, *SOURCE, *TARGET, *NONE | SYS STATUS RETRIEVED FROM
DTASRC | Data group source system | CHAR(8) | User-defined system name | DG SOURCE SYSTEM
DTASRCSTS | Data group source system status | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN | DG SOURCE STATUS
DTATGT | Data group target system | CHAR(8) | User-defined system name | DG TARGET SYSTEM
DTATGTSTS | Data group target system status | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN | DG TARGET STATUS
SWTSTS1 | Switch mode status for system 1 | CHAR(10) | *NONE, *SWITCH | SYSTEM 1 SWITCH STATUS
SWTSTS2 | Switch mode status for system 2 | CHAR(10) | *NONE, *SWITCH | SYSTEM 2 SWITCH STATUS
DGSTS | Data group status summary | CHAR(10) | BLANK, *ERROR, *WARNING, *DISABLED | OVERALL DG STATUS
DBCFG | Data group configured for data base replication | CHAR(10) | *YES, *NO | CONFIGURED FOR DB REPLICATION
OBJCFG | Data group configured for object replication | CHAR(10) | *YES, *NO | CONFIGURED FOR OBJECT REPLICATION
SRCSYSSTS | Source system manager status summation (system manager) | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN | SOURCE MANAGER SUMMATION
DBSNDSTS | Database send process status summation (DBSNDPRC) | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD | DB SEND STATUS
OBJSNDSTS | Object send process status summation (OBJSNDPRC) | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN, *NONE, *THRESHOLD | OBJECT SEND STATUS
DTAPOLLSTS | Data area polling process status (DTAPOLLPRC) | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN, *NONE | DATA AREA POLLER STATUS
TGTSYSSTS | Target system manager status summation (system manager plus journal manager status) | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN | TARGET MANAGER SUMMATION
DBAPYSTS | Database apply status summation (Apply sessions A-F) | CHAR(10) | *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD | DB APPLY SUMMATION
OBJAPYSTS | Object apply status summation | CHAR(10) | *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD | OBJECT APPLY SUMMATION
FECNT | Total database file entries | PACKED(5 0) | 0-99999 | TOTAL DB FILE ENTRIES
FEACTIVE | Active database file entries (FEACT) | PACKED(5 0) | 0-99999 | ACTIVE DB FILE ENTRIES
FENOTACT | Inactive database file entries | PACKED(5 0) | 0-99999 | INACTIVE DB FILE ENTRIES
FENOTJRNS | Database file entries not journaled on source | PACKED(5 0) | 0-99999 | FILES NOT JOURNALED ON SOURCE
FENOTJRNT | Database file entries not journaled on target | PACKED(5 0) | 0-99999 | FILES NOT JOURNALED ON TARGET
FEHLDERR | Database file entries held due to error | PACKED(5 0) | 0-99999 | FILES HELD FOR ERRORS
FEHLDOTHR | Database file entries held for other reasons (FEHLD) | PACKED(5 0) | 0-99999 | FILES HELD FOR OTHER
OBJPENDSRC | Objects in pending status, source system | PACKED(5 0) | 0-99999 | OBJECTS PENDING ON SOURCE SYSTEM
OBJPENDAPY | Objects in pending status, target system | PACKED(5 0) | 0-99999 | OBJECTS PENDING ON TARGET SYSTEM
OBJDELAY | Objects in delayed status | PACKED(5 0) | 0-99999 | TOTAL OBJECTS DELAYED
OBJERR | Objects in error | PACKED(5 0) | 0-99999 | TOTAL OBJECTS IN ERROR
DLOCFGCHG | DLO configuration changed | CHAR(10) | *YES, *NO | DLO CONFIG CHANGED
IFSCFGCHG | IFS configuration changed | CHAR(10) | *YES, *NO | IFS CONFIG CHANGED
OBJCFGCHG | Object configuration changed | CHAR(10) | *YES, *NO | OBJECT CONFIG CHANGED
PRITFRDFN | Primary transfer definition | CHAR(10) | User-defined transfer definition name | PRIMARY TFRDFN
SECTFRDFN | Secondary transfer definition | CHAR(10) | User-defined transfer definition name | SECONDARY TFRDFN
TFRDFN | Current transfer definition | CHAR(10) | User-defined transfer definition name | LAST USED TFRDFN
TFRSTS | Current transfer definition communications status | CHAR(10) | *ACTIVE, *INACTIVE | LAST USED TFRDFN STATUS
SRCMGRSTS | Source system manager status | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN | SOURCE SYS MANAGER STATUS
SRCJRNSTS | Source journal manager status | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN | SOURCE JRN MANAGER STATUS
CNRSNDSTS | Container send process status | CHAR(10) | *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD | CONTAINER SEND STATUS
OBJRTVSTS | Object retrieve process status | CHAR(10) | *ACTIVE, *INACTIVE, *PARTIAL, *UNKNOWN, *NONE, *THRESHOLD | OBJECT RETRIEVE STATUS
TGTMGRSTS | Target system manager status | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN | TARGET SYS MANAGER STATUS
TGTJRNSTS | Target journal manager status | CHAR(10) | *ACTIVE, *INACTIVE, *UNKNOWN | TARGET JRN MANAGER STATUS
CURDBRCV | Current database journal entry receiver name | CHAR(10) | User-defined value | DB JRNRCV
CURDBLIB | Current database journal entry receiver library name | CHAR(10) | User-defined value | DB JRNRCV LIBRARY
CURDBCODE | Current database journal code and entry type | CHAR(3) | Valid journal entry types and codes | DB ENTRY TYPE AND CODE
CURDBSEQ | Current database journal entry sequence number | PACKED(10 0) | 0-9999999999 | DB ENTRY SEQUENCE
CURDBTSP | Current database journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | DB ENTRY TIMESTAMP
CURDBTPH | Current database journal entry transactions per hour | PACKED(15 0) | Calculated, 0-9999999999999 | DB ARRIVAL RATE
RDDBRCV | Last read database journal entry receiver name (DBSNTRCV) | CHAR(10) | User-defined value | DB READER JRNRCV
RDDBLIB | Last read database journal entry receiver library name | CHAR(10) | User-defined value | DB READER JRNRCV LIBRARY
RDDBCODE | Last read database journal code and entry type | CHAR(3) | Valid journal entry types and codes | DB READER TYPE AND ENTRY CODE
RDDBSEQ | Last read database journal entry sequence number (DBSNTSEQ) | PACKED(10 0) | 0-9999999999 | DB READER ENTRY SEQUENCE
RDDBTSP | Last read database journal entry timestamp (DBSNTDATE, DBSNTTIME) | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | DB READER ENTRY TIMESTAMP
RDDBTPH | Last read database journal entry transactions per hour | PACKED(15 0) | Calculated, 0-999999999999999 | DB READER READ RATE
DBSNDBKLG | Number of database entries not sent | PACKED(15 0) | Calculated, 0-999999999999999 | DB SEND BACKLOG
DBSNBKTIME | Estimated time to process database entries not sent (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | DB SEND BACKLOG SECONDS
DBSNBKTIMF | Estimated time to process database entries not sent (HHH:MM:SS) | CHAR(10) | Calculated, 0-999:99:99 | DB SEND BACKLOG HHH:MM:SS
RCVDBRCV | Last received database journal entry receiver name | CHAR(10) | User-defined value | DB LAST RECEIVED JRNRCV
RCVDBLIB | Last received database journal entry receiver library name | CHAR(10) | User-defined value | DB LAST RECEIVED JRNRCV LIB
RCVDBCODE | Last received database journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal and entry types | DB LAST RCV TPE AND ENTRY
RCVDBSEQ | Last received database journal entry sequence number | PACKED(10 0) | 0-9999999999 | DB LAST RECEIVED SEQUENCE
RCVDBTSP | Last received database journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | DB LAST RECEIVED TIMESTAMP
RCVDBTPH | Last received database journal entry transactions per hour | PACKED(15 0) | Calculated, 0-999999999999999 | DB RECEIVE ARRIVAL RATE
DBAPYREQ | Number of database apply sessions requested | PACKED(5 0) | 1-6 | REQUESTED DB APPLY SESSIONS
DBAPYMAX | Number of database apply sessions configured | PACKED(5 0) | 1-6 | CONFIGURED DB APPLY SESSIONS
DBAPYACT | Number of database apply sessions currently active (DBAPYPRC) | PACKED(5 0) | 1-6 | ACTIVE DB APPLY SESSIONS
DBAPYBKLG | Number of database entries not applied | PACKED(15 0) | Calculated, 0-999999999999999 | DB APPLY BACKLOG
DBAPBKTIME | Estimated time to process database entries not applied (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | DB APPLY TIME SECONDS
DBAPBKTIMF | Estimated time to process database entries not applied (HHH:MM:SS) | CHAR(10) | Calculated, 0-999:99:99 | DB APPLY TIME HHH:MM:SS
DBAPYTPH | Database apply total transactions per hour | PACKED(15 0) | Calculated, 0-999999999999999 | DB APPLY PROCESSING RATE
DBASTS | Database apply session A status | CHAR(10) | *ACTIVE, *INACTIVE, *THRESHOLD, *UNKNOWN | DB APPLY A STATUS
DBARCVSEQ | Database apply session A last received sequence number | PACKED(10 0) | 0-9999999999 | DB APPLY A LAST RECEIVED
DBAPRCSEQ | Database apply session A last processed sequence number | PACKED(10 0) | 0-9999999999 | DB APPLY A LAST PROCESSED
DBABKLG | Database apply session A number of unprocessed entries | PACKED(15 0) | Calculated, 0-999999999999999 | DB APPLY A BACKLOG
DBABKTIME | Database apply session A estimated time to apply unprocessed transactions (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | DB APPLY A TIME SECONDS
DBABKTIMF | Database apply session A estimated time to apply unprocessed transactions (HHH:MM:SS) | CHAR(10) | Calculated, 0-999:99:99 | DB APPLY A TIME HHH:MM:SS
DBATPH | Database apply session A number of transactions per hour | PACKED(15 0) | Calculated, 0-999999999999999 | DB APPLY A PROCESSING RATE
DBAOPNCMT | Database apply session A open commit indicator | CHAR(10) | *YES, *NO | DB APPLY A COMMIT INDICATOR
DBACMTID | Database apply session A oldest open commit ID | CHAR(10) | Journal-defined commit ID | DB APPLY A CURRENT COMMIT ID
DBAAPYCODE | Database apply session A last applied journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types | DB APPLY A TYPE AND ENTRY
DBAAPYSEQ | Database apply session A last applied sequence number | PACKED(10 0) | 0-9999999999 | DB APPLY A LAST APPLIED
DBAAPYTSP | Database apply session A last applied journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | DB APPLY A LAST TIMESTAMP
DBAAPYOBJ | Database apply session A object to which last transaction was applied | CHAR(10) | User-defined object name | DB APPLY A OBJECT NAME
DBAAPYLIB | Database apply session A library of object to which last transaction was applied | CHAR(10) | User-defined object library name | DB APPLY A LIBRARY NAME
DBAAPYMBR | Database apply session A member of object to which last transaction was applied | CHAR(10) | User-defined object member name | DB APPLY A MEMBER NAME
DBAAPYTIME | Database apply session A last applied journal entry clock time difference (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | DB APPLY A TIME DIFF SECONDS
DBAAPYTIMF | Database apply session A last applied journal entry clock time difference (HHH:MM:SS) | CHAR(10) | Calculated, 0-999:99:99 | DB APPLY A TIME DIFF HHH:MM:SS
DBAHLDSEQ | Database apply session A hold MIMIX log sequence number | PACKED(10 0) | 0-9999999999 | DB APPLY A HOLD SEQUENCE
DBxSTS through DBxHLDSEQ, where x is database apply session B-F | Reserved for up to 5 additional database apply sessions (B-F). Contains fields for each additional apply session which correspond to the fields for apply session A (DBASTS through DBAHLDSEQ). | 885 bytes (5 x 177) | All DBx field values match the DBA field values. | All DBx headings match the DBA headings, with x in place of A
CUROBJRCV | Current object journal entry receiver name | CHAR(10) | User-defined value | OBJECT JRNRCV
CUROBJLIB | Current object journal entry receiver library name | CHAR(10) | User-defined value | OBJECT JRNRCV LIBRARY
CUROBJCODE | Current object journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types | OBJECT TYPE AND ENTRY CODES
CUROBJSEQ | Current object journal entry sequence number | PACKED(10 0) | 0-9999999999 | OBJECT JOURNAL SEQUENCES
CUROBJTSP | Current object journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJECT JRN ENTRY TIMESTAMP
CUROBJTPH | Current object journal entry transactions per hour | PACKED(15 0) | 0-999999999999999 | OBJECT ARRIVAL PER HOUR
RDOBJRCV | Last read object journal entry receiver name (OBJSNTRCV) | CHAR(10) | User-defined value | OBJRDRPRC JRNRCV
RDOBJLIB | Last read object journal entry receiver library name | CHAR(10) | User-defined value | OBJRDRPRC JRNRCV LIBRARY
RDOBJCODE | Last read object journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal entry codes and entry types | OBJRDRPRC TYPE AND ENTRY CODE
RDOBJSEQ | Last read object journal entry sequence number (OBJSNTSEQ) | PACKED(10 0) | 0-9999999999 | OBJRDRPRC JOURNAL SEQUENCE
RDOBJTSP | Last read object journal entry timestamp (OBJSNTDATE, OBJSNTTIME) | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJRDRPRC JRN ENTRY TIMESTAMP
RDOBJTPH | Last read object journal entry transactions per hour | PACKED(15 0) | Calculated, 0-999999999999999 | OBJRDRPRC READ RATE
OBJSNDBKLG | Object entries not processed | PACKED(15 0) | Calculated, 0-999999999999999 | OBJSNDPRC BACKLOG
OBJSNDNUM | Number of object entries sent | PACKED(15 0) | Calculated, 0-999999999999999 | OBJSNDPRC SENT IN TIME SLICE
OBJSBKTIME | Estimated time to process object entries not sent (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | OBJSNDPRC BACKLOG SECONDS
OBJSBKTIMF | Estimated time to process entries not sent (HHH:MM:SS) | CHAR(10) | Calculated, 0-999:99:99 | OBJSNDPRC BACKLOG HHH:MM:SS
RCVOBJRCV | Last received object journal entry receiver name | CHAR(10) | User-defined value | OBJRCVPRC LAST RCVD JRNRCV
RCVOBJLIB | Last received object journal entry receiver library name | CHAR(10) | User-defined value | OBJRCVPRC LAST RCVD JRNRCV LIB
RCVOBJCODE | Last received object journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types | OBJRCVPRC LAST TYPE AND ENTRY
RCVOBJSEQ | Last received object journal entry sequence number | PACKED(10 0) | 0-9999999999 | OBJRCVPRC LAST ENTRY SEQUENCE
RCVOBJTSP | Last received object journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJRCVPRC LAST ENTRY TIMESTAMP
RCVOBJTPH | Last received object journal entry transactions per hour | PACKED(15 0) | 0-999999999999999 | OBJRCVPRC RECEIVE RATE
OBJRTVMIN | Minimum number of object retriever processes | PACKED(3 0) | 1-99 | OBJRTVPRC MIN NUMBER OF JOBS
OBJRTVACT | Active number of object retriever processes (OBJRTVPRC) | PACKED(3 0) | 1-99 | OBJRTVPRC NUMBER OF JOBS
OBJRTVMAX | Maximum number of object retriever processes | PACKED(3 0) | 1-99 | OBJRTVPRC MAX NUMBER OF JOBS
OBJRTVBKLG | Number of object retriever entries not processed | PACKED(15 0) | 0-999999999999999 | OBJRTVPRC BACKLOG
OBJRTVCODE | Last processed object retrieve journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types | OBJRTVPRC LAST TYPE AND ENTRY
OBJRTVSEQ | Last processed object retrieve journal sequence number | PACKED(10 0) | 0-9999999999 | OBJRTVPRC LAST SEQUENCE
OBJRTVTSP | Last processed object retrieve journal entry timestamp (OBJRTVDATE, OBJRTVTIME) | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJRTVPRC LAST TIMESTAMP
OBJRTVTYPE | Type of object last processed by object retrieve | CHAR(10) | Object type of user-defined object | OBJRTVPRC LAST OBJ TYPE
OBJRTVOBJ | Qualified name of object last processed by object retrieve | CHAR(1024) | User-defined object name and path. Note: Variable length of 75. | OBJRTVPRC LAST OBJ NAME
CNRSNDMIN | Minimum number of container send processes | PACKED(3 0) | 1-99 | CNRSNDPRC MIN NUMBER OF JOBS
CNRSNDACT | Active number of container send processes (CNRSNDPRC) | PACKED(3 0) | 1-99 | CNRSNDPRC NUMBER OF JOBS
CNRSNDMAX | Maximum number of container send processes | PACKED(3 0) | 1-99 | CNRSNDPRC MAX NUMBER OF JOBS
CNRSNDBKLG | Number of container send entries not processed | PACKED(15 0) | 0-999999999999999 | CNRSNDPRC BACKLOG
CNRSNDNUM | Number of containers sent | PACKED(15 0) | 0-999999999999999 | CNRSNDPRC NUMBER SENT
CNRSNDCPH | Containers per hour | PACKED(15 0) | 0-999999999999999 | CNRSNDPRC RATE
CNRSNDCODE | Last processed container send journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types | CNRSNDPRC LAST TYPE AND ENTRY
CNRSNDSEQ | Last processed container send journal sequence number (CNRSNTSEQ) | PACKED(10 0) | 0-9999999999 | CNRSNDPRC LAST SEQUENCE
CNRSNDTSP | Last processed container send journal entry timestamp (CNRSNTDATE, CNTRSNTTIME) | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | CNRSNDPRC LAST TIMESTAMP
CNRSNDTYPE | Type of object last processed by container send | CHAR(10) | Object type of user-defined object | CNRSNDPRC LAST OBJ TYPE
CNRSNDOBJ | Qualified name of object last processed by container send | CHAR(1024) | User-defined object name and path. Note: Variable length of 75. | CNRSNDPRC LAST OBJ NAME
OBJAPYMIN | Minimum number of object apply processes | PACKED(3 0) | 1-99 | OBJAPYPRC MIN NUMBER OF JOBS
OBJAPYACT | Active number of object apply processes (OBJAPYPRC) | PACKED(3 0) | 1-99 | OBJAPYPRC NUMBER OF JOBS
OBJAPYMAX | Maximum number of object apply processes | PACKED(3 0) | 1-99 | OBJAPYPRC MAX NUMBER OF JOBS
OBJAPYBKLG | Number of object apply entries not processed | PACKED(15 0) | Calculated, 0-999999999999999 | OBJAPYPRC BACKLOG
OBJAPYACTA | Number of active objects | PACKED(15 0) | Calculated, 0-999999999999999 | OBJAPYPRC ACTIVE BACKLOG
OBJAPYNUM | Number of object entries applied | PACKED(15 0) | Calculated, 0-999999999999999 | OBJAPYPRC APPLIED IN TIME SLICE
OBJABKTIME | Estimated time to process object entries not applied (seconds) | PACKED(10 0) | Calculated, 0-9999999999 | OBJAPYPRC BACKLOG SECONDS
OBJABKTIMF | Estimated time to process object entries not applied (HHH:MM:SS) | CHAR(10) | Calculated, 0-999:99:99 | OBJAPYPRC BACKLOG HHH:MM:SS
OBJAPYTPH | Number of object entries applied per hour | PACKED(15 0) | Calculated, 0-999999999999999 | OBJAPYPRC RATE
OBJAPYCODE | Last applied object journal code and entry type | CHAR(3) | See the IBM OS/400 Backup and Recovery Guide for journal codes and entry types | OBJAPYPRC LAST TYPE AND ENTRY
OBJAPYSEQ | Last applied object journal sequence number (OBJAPYSEQ) | PACKED(10 0) | 0-9999999999 | OBJAPYPRC LAST SEQUENCE
OBJAPYTSP | Last applied object journal entry timestamp (OBJAPYDATE, OBJAPYTIME) | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | OBJAPYPRC LAST TIMESTAMP
OBJAPYTYPE | Type of object last processed by object apply | CHAR(10) | Object type of user-defined object | OBJAPYPRC LAST OBJ TYPE
OBJAPYOBJ | Qualified name of object last processed by object apply | CHAR(1024) | User-defined object name and path. Note: Variable length of 75. | OBJAPYPRC LAST OBJ NAME
RJINUSE | Remote journal (RJ) link used by data group | CHAR(10) | *YES, *NO | RJ LINK USED BY DG
RJSRCDFN | RJ link source journal definition | CHAR(10) | User-defined journal definition name | RJ LINK SOURCE JRNDFN
RJSRCSYS | RJ link source system | CHAR(8) | User-defined system name | RJ LINK SOURCE SYSTEM
RJTGTDFN | RJ link target journal definition | CHAR(10) | User-defined journal definition name | RJ LINK TARGET JRNDFN
RJTGTSYS | RJ link target system | CHAR(8) | User-defined system name | RJ LINK TARGET SYSTEM
RJPRIRDB | RJ link primary RDB entry | CHAR(18) | User-defined or MIMIX generated RDB name | RJ PRIMARY RDB ENTRY
RJPRITFR | RJ link primary transfer definition name | CHAR(10) | User-defined transfer definition name | RJ PRIMARY TFRDFN
RJSECRDB | RJ link secondary RDB entry | CHAR(18) | User-defined or MIMIX generated RDB name | RJ SECONDARY RDB ENTRY
RJSECTFR | RJ link secondary transfer definition name | CHAR(10) | User-defined transfer definition name | RJ SECONDARY TFRDFN
RJSTATE | RJ link state | CHAR(10) | BLANK, *FAILED, *CTLINACT, *INACTPEND, *ASYNC, *SYNC, *ASYNPEND, *SYNCPEND, *NOTBUILT, *UNKNOWN | RJ LINK STATE
RJDLVRY | RJ link delivery mode | CHAR(10) | *ASYNC, *SYNC, BLANK | RJ LINK DELIVERY MODE
RJSNDPTY | RJ link send task priority | PACKED(3 0) | 0-99 (0 = *SYSDFT) | RJ LINK SEND PRIORITY
RJRDRSTS | RJ reader task status | CHAR(10) | BLANK, *UNKNOWN, *ACTIVE, *INACTIVE, *THRESHOLD | RJ READER STATUS
RJSMONSTS | RJ link source monitor status | CHAR(10) | BLANK, *UNKNOWN, *ACTIVE, *INACTIVE | RJ SOURCE MONITOR
RJTMONSTS | RJ link target monitor status | CHAR(10) | BLANK, *UNKNOWN, *ACTIVE, *INACTIVE | RJ TARGET MONITOR
ITECNT | Total IFS tracking entries | PACKED(10 0) | 0-999999 | TOTAL IFS TRACKING ENTRIES
ITEACTIVE | Active IFS tracking entries | PACKED(10 0) | 0-999999 | ACTIVE IFS TRACKING ENTRIES
ITENOTACT | Inactive IFS tracking entries | PACKED(10 0) | 0-999999 | INACT IFS TRACKING ENTRIES
ITENOTJRNS | IFS tracking entries not journaled on source | PACKED(10 0) | 0-999999 | IFS TE NOT JOURNALED ON SOURCE
ITENOTJRNT | IFS tracking entries not journaled on target | PACKED(10 0) | 0-999999 | IFS TE NOT JOURNALED ON TARGET
ITEHLDERR | IFS tracking entries held due to error | PACKED(10 0) | 0-999999 | IFS TE HELD FOR ERRORS
ITEHLDOTHR | IFS tracking entries held for other reasons | PACKED(10 0) | 0-999999 | IFS TE HELD FOR OTHER
OTECNT | Total object tracking entries | PACKED(10 0) | 0-999999 | TOTAL OBJ TRACKING ENTRIES
OTEACTIVE | Active object tracking entries | PACKED(10 0) | 0-999999 | ACTIVE OBJ TRACKING ENTRIES
OTENOTACT | Inactive object tracking entries | PACKED(10 0) | 0-999999 | INACT OBJ TRACKING ENTRIES
OTENOTJRNS | Object tracking entries not journaled on source | PACKED(10 0) | 0-999999 | OBJ TE NOT JOURNALED ON SOURCE
OTENOTJRNT | Object tracking entries not journaled on target | PACKED(10 0) | 0-999999 | OBJ TE NOT JOURNALED ON TARGET
OTEHLDERR | Object tracking entries held due to error | PACKED(10 0) | 0-999999 | OBJ TE HELD FOR ERRORS
OTEHLDOTHR | Object tracking entries held for other reasons | PACKED(10 0) | 0-999999 | OBJ TE HELD FOR OTHER
JRNCACHETA | Journal cache target | CHAR(10) | *YES, *NO, *UNKNOWN | JOURNAL CACHE TARGET
JRNCACHESA | Journal cache source | CHAR(10) | *YES, *NO, *UNKNOWN | JOURNAL CACHE SOURCE
JRNSTATETA | Journal state target | CHAR(10) | *ACTIVE, *STANDBY, *INACTIVE | JOURNAL STATE TARGET
JRNSTATESA | Journal state source | CHAR(10) | *ACTIVE, *STANDBY, *INACTIVE | JOURNAL STATE SOURCE
JRNCACHETS | Journal cache status - target | CHAR(10) | *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN | JRN CACHE TARGET STATUS
JRNCACHESS | Journal cache status - source | CHAR(10) | *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN | JRN CACHE SOURCE STATUS
JRNSTATETS | Journal state target status | CHAR(10) | *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN | JOURNAL STATE TARGET
JRNSTATESS | Journal state source status | CHAR(10) | *ERROR, *NONE, *OK, *WARNING, *NOFEATURE, *UNKNOWN | JOURNAL STATE SOURCE
RJTGTRCV | Last RJ target journal entry receiver name | CHAR(10) | User-defined value | RJ TGT JRNRCV
RJTGTLIB | Last RJ target journal entry receiver library name | CHAR(10) | User-defined value | RJ TGT JRNRCV LIBRARY
RJTGTCODE | Last RJ target journal code and entry type | CHAR(3) | Valid journal entry types and codes | RJ TGT TYPE AND ENTRY CODE
RJTGTSEQ | Last RJ target journal entry sequence number | PACKED(10 0) | 0-9999999999 | RJ TGT ENTRY SEQUENCE
RJTGTTSP | Last RJ target journal entry timestamp | TIMESTAMP | SAA format: YYYY-MM-DD-hh.mm.ss.mmmuuu | RJ TGT ENTRY TIMESTAMP
OBJRTVUCS | Qualified name of object last qualified by object retrieve - Unicode | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined object name and path | LAST OBJ RETRIEVED (UNICODE)
CNRSNDUCS | Qualified name of object last qualified by container send - Unicode | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined object name and path | LAST OBJ SENT (UNICODE)
OBJAPYUCS | Qualified name of object last qualified by object apply - Unicode | GRAPHIC(512) VARLEN(75) CCSID(13488) | User-defined object name and path | LAST OBJ APPLIED (UNICODE)
FECNT2 | Total database file entries | PACKED(10 0) | 0-9999999999 | TOTAL DB FILE ENTRIES2
FEACTIVE2 | Active database file entries (FEACT) | PACKED(10 0) | 0-9999999999 | ACTIVE DB FILE ENTRIES2
FENOTACT2 | Inactive database file entries | PACKED(10 0) | 0-9999999999 | INACTIVE DB FILE ENTRIES2
FENOTJRNS2 | Database file entries not journaled on source | PACKED(10 0) | 0-9999999999 | FILES NOT JOURNALED ON SOURCE2
FENOTJRNT2 | Database file entries not journaled on target | PACKED(10 0) | 0-9999999999 | FILES NOT JOURNALED ON TARGET2
FEHLDERR2 | Database file entries held due to error | PACKED(10 0) | 0-9999999999 | FILES HELD FOR ERRORS2
FEHLDOTHR2 | Database file entries held for other reasons (FEHLD) | PACKED(10 0) | 0-9999999999 | FILES HELD FOR OTHERS2
FECMPRPR2 | Database file entries being repaired | PACKED(10 0) | 0-9999999999 | FILES BEING REPAIRED2
RJLNKTHLDM | RJ Link Threshold Exceeded (Time in minutes) | PACKED(4 0) | 0-9999 | RJLNK THRESHOLD (TIME IN MIN)
RJLNKTHLDE | RJ Link Threshold Exceeded (Number of journal entries) | PACKED(7 0) | 0-9999999 | RJLNK THRESHOLD (NBR OF JRNE)
DBRDRTHLDM | DB Send/Reader Threshold Exceeded (Time in minutes) | PACKED(4 0) | 0-9999 | DBSND/DBRDR THRESHOLD (TIME IN MIN)
DBRDRTHLDE | DB Send/Reader Threshold Exceeded (Number of journal entries) | PACKED(7 0) | 0-9999999 | DBSND/DBRDR THRESHOLD (NBR OF JRNE)
DBAPYATHLD | DB Apply A Threshold Exceeded (Number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY A THRESHOLD
DBAPYBTHLD | DB Apply B Threshold Exceeded (Number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY B THRESHOLD
DBAPYCTHLD | DB Apply C Threshold Exceeded (Number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY C THRESHOLD
DBAPYDTHLD | DB Apply D Threshold Exceeded (Number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY D THRESHOLD
DBAPYETHLD | DB Apply E Threshold Exceeded (Number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY E THRESHOLD
DBAPYFTHLD | DB Apply F Threshold Exceeded (Number of journal entries) | PACKED(5 0) | 0-99999 | DB APPLY F THRESHOLD
OBJSNDTHDM | Object Send Threshold Exceeded (Time in minutes) | PACKED(4 0) | 0-9999 | OBJSND THRESHOLD (TIME IN MIN)
OBJSNDTHDE | Object Send Threshold Exceeded (Number of journal entries) | PACKED(7 0) | 0-9999999 | OBJSND THRESHOLD (NBR OF JRNE)
OBJRTVTHDE | Object Retrieve Threshold Exceeded (Number of activity entries) | PACKED(5 0) | 0-99999 | OBJRTV THRESHOLD
CNRSNDTHDE | Container Send Threshold Exceeded (Number of activity entries) | PACKED(5 0) | 0-99999 | CNRSND THRESHOLD
OBJAPYTHDE | Object Apply Threshold Exceeded (Number of activity entries) | PACKED(5 0) | 0-99999 | OBJAPY THRESHOLD
RJBKLG | RJ Backlog | PACKED(15 0) | Calculated, 0-999999999999 | RJ BACKLOG
CURDBSEQ2 | Current database journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | DB ENTRY LARGE SEQUENCE
RDDBSEQ2 | Last read database journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | DB READER ENTRY LARGE SEQUENCE
RCVDBSEQ2 | Last received database journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | DB LAST RECEIVED LARGE SEQUENCE
DBARCVSEQ2 | Database apply session A last received large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | DB APPLY A LAST RECEIVED LARGE SEQUENCE
DBAPRCSEQ2 | Database apply session A last processed large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | DB APPLY A LAST PROCESSED LARGE SEQUENCE
DBACMTID2 | Database apply session A oldest open large commit ID | CHAR(20) | 0-99999999999999999999 (twenty 9s) | DB APPLY A CURRENT LARGE COMMIT ID
DBAAPYSEQ2 | Database apply session A last applied large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | DB APPLY A LAST APPLIED LARGE SEQUENCE
DBAHLDSEQ2 | Database apply session A hold MIMIX log large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | DB APPLY A HOLD LARGE SEQUENCE
DBxRCVSEQ2 through DBxHLDSEQ2, where x is database apply session B-F | Reserved for up to 5 additional database apply sessions (B-F). Contains fields for each additional apply session which correspond to the fields for apply session A (DBARCVSEQ2 through DBAHLDSEQ2). | 600 bytes (5 x 120) | All DBx field values match the DBA field values. | All DBx headings match the DBA headings, with x in place of A
CUROBJSEQ2 | Current object journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | OBJECT JOURNAL LARGE SEQUENCE
RDOBJSEQ2 | Last read object journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | OBJRDRPRC JOURNAL LARGE SEQUENCE
RCVOBJSEQ2 | Last received object journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | OBJRCVPRC LAST ENTRY LARGE SEQUENCE
OBJRTVSEQ2 | Last processed object retrieve journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | OBJRTVPRC LAST ENTRY LARGE SEQUENCE
CNRSNDSEQ2 | Last processed container send journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | CNRSNDPRC LAST ENTRY LARGE SEQUENCE
OBJAPYSEQ2 | Last applied object journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | OBJAPYPRC LAST ENTRY LARGE SEQUENCE
RJTGTSEQ2 | Last RJ target journal entry large sequence number | ZONED(20 0) | 0-99999999999999999999 (twenty 9s) | RJ TARGET LAST ENTRY LARGE SEQUENCE
WRKDG outfile SELECT statement examples
Following are some example SELECT statements that query a WRKDG outfile and produce various outfile reports. The first three
examples show how to use wild cards to produce reports about specific data groups in the outfile.
The last example adds a few field definitions, in request time sequence, to produce outfile reports with additional data group related
information.
These are basic examples, there may be additional formatting options that you may want to apply to your output.
WRKDG outfile example 1
This SELECT statement uses a single wildcard character to query the outfile to retrieve and display all of the data group names that start
with an A and have 0 or more characters following the A. The records are listed in record arrival order. The statement would be entered
as follows:
SELECT DGDFN, DGSYS1, DGSYS2 FROM library/filename WHERE DGDFN l i ke ' A%'
The outfile report produced follows:
DGN SYS SYS
ACCTPAY CHI CAGO LONDON
ACCTREC CHI CAGO LONDON
APP1 CHI CAGO LONDON
APP2 CHI CAGO LONDON
WRKDG outfile example 2
This SELECT statement uses wildcard characters to query the outfile for all data group names that are in the outfile. The records are listed
in record arrival order. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2 FROM library/filename WHERE DGDFN l i ke ' %%'
The outfile report produced follows:
DGN SYS SYS
I NVENTORY CHI CAGO LONDON
PAYROLL CHI CAGO LONDON
ACCTPAY CHI CAGO LONDON
ORDERS CHI CAGO LONDON
ACCTREC CHI CAGO LONDON
APP1 CHI CAGO LONDON
APP2 CHI CAGO LONDON
MXDGSTS outfile (WRKDG command)
700
SUPERAPP CHI CAGO LONDON
WRKDG outfile example 3
This SELECT statement uses wildcard characters to find all data groups with names that contain an A. The records are listed in record
arrival order. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2 FROM library/filename WHERE DGDFN l i ke ' %A%'
The outfile report produced is follows:
DGN SYS SYS
PAYROLL CHI CAGO LONDON
ACCTPAY CHI CAGO LONDON
ACCTREC CHI CAGO LONDON
APP1 CHI CAGO LONDON
APP2 CHI CAGO LONDON
SUPERAPP CHI CAGO LONDON
WRKDG outfile example 4
This SELECT statement selects all records that have a data group name containing an A. These records are listed in data group name order, with duplicate data group names listed in the order the entries were placed in the outfile, so all records for a data group are listed together in ascending time sequence. Additionally, the timestamp when the entry was placed in the file and the current top sequence number of the object journal are listed with each entry. The statement would be entered as follows:
SELECT DGDFN, DGSYS1, DGSYS2, ENTRYTSP, CUROBJSEQ FROM library/filename WHERE DGDFN LIKE '%A%' ORDER BY DGDFN, DGSYS1, DGSYS2, ENTRYTSP
The outfile report produced follows:
DGN        SYS      SYS     ENTRYTSP                    SEQN
ACCTPAY    CHICAGO  LONDON  2001-02-06-11.24.05.851000  29,035,093
ACCTREC    CHICAGO  LONDON  2001-02-06-11.09.59.842000  29,034,879
APP1       CHICAGO  LONDON  2001-02-06-11.24.05.851000  29,035,095
APP2       CHICAGO  LONDON  2001-02-06-14.24.49.793000  29,051,130
PAYROLL    CHICAGO  LONDON  2001-02-06-11.09.59.842000  29,034,877
SUPERAPP   CHICAGO  LONDON  2001-02-06-11.09.59.842000  0
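In the same way, SQL grouping functions can summarize a WRKDG outfile. The following statement is a minimal sketch in the style of the examples above; it assumes only fields documented in Table 137 (DGDFN, DGSYS1, DGSYS2, ENTRYTSP), with library/filename again standing for the library and file you named on the OUTFILE parameter. It reports how many records exist for each data group and the timestamp of the most recent one:
SELECT DGDFN, DGSYS1, DGSYS2, COUNT(*) AS ENTRIES, MAX(ENTRYTSP) AS LASTENTRY FROM library/filename GROUP BY DGDFN, DGSYS1, DGSYS2 ORDER BY DGDFN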
MXDGOBJE outfile (WRKDGOBJE command)
Table 138. MXDGOBJE outfile (WRKDGOBJE command)
Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
OBJ1 | System 1 object | CHAR(10) | User-defined name, *ALL | SYSTEM 1 OBJECT
LIB1 | System 1 library | CHAR(10) | User-defined name, generic* | SYSTEM 1 LIBRARY
TYPE | Object type | CHAR(10) | Refer to the OM5100P file for the list of valid values | OBJECT TYPE
OBJATR | Object attribute | CHAR(10) | Refer to the OM5200P file for the list of valid object attributes | OBJECT ATTRIBUTE
OBJ2 | System 2 object | CHAR(10) | User-defined name, *ALL, generic*, *OBJ1 | SYSTEM 2 OBJECT
LIB2 | System 2 library | CHAR(10) | User-defined name, generic*, *LIB1 | SYSTEM 2 LIBRARY
OBJAUD | Object auditing value (configured value) | CHAR(10) | *CHANGE, *ALL, *NONE | OBJECT AUDITING VALUE
PRCTYPE | Process type | CHAR(10) | *INCLD, *EXCLD | PROCESS TYPE
COOPDB | Cooperate with database | CHAR(10) | *YES, *NO | COOPERATE WITH DATABASE
REPSPLF | Replicate spooled files | CHAR(10) | *YES, *NO | REPLICATE SPOOLED FILES
KEEPSPLF | Keep deleted spooled files | CHAR(10) | *YES, *NO | KEEP DLTD SPOOLED FILES
OBJRTVDLY | Retrieve delay (Object retrieve processing) | CHAR(10) | 0-999, *DGDFT | OBJRTVPRC DELAY
USRPRFSTS | User profile status | CHAR(10) | *DGDFT, *DISABLED, *ENABLED, *SRC, *TGT | USER PROFILE STATUS
JRNIMG | Journal image (File entry options) | CHAR(10) | *DGDFT, *AFTER, *BOTH | FEOPT JOURNAL IMAGE
OPNCLO | Omit open and close entries (File entry options) | CHAR(10) | *DGDFT, *YES, *NO | FEOPT OMIT OPEN CLOSE
REPTYPE | Replication type (File entry options) | CHAR(10) | *DGDFT, *POSITION, *KEYED | FEOPT REPLICATION TYPE
APYLOCK | Lock member during apply (File entry options) | CHAR(10) | *DGDFT, *YES, *NO | FEOPT LOCK MBR ON APPLY
APYSSN | Apply session (File entry options) | CHAR(10) | A-F, *DGDFT, *ANY | FEOPT CURRENT APYSSN
CRCLS | Collision resolution (File entry options) | CHAR(10) | User-defined name, *DGDFT, *HLDERR, *AUTOSYNC | FEOPT COLLISION RESOLUTION
DSBTRG | Disable triggers during apply (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT DISABLE TRIGGERS
PRCTRG | Process trigger entries (File entry options) | CHAR(10) | *YES, *NO, *DGDFT | FEOPT PROCESS TRIGGERS
PRCCST | Process constraint entries (File entry options) | CHAR(10) | *YES | FEOPT PROCESS CONSTRAINTS
LIB1ASP | System 1 library ASP number | PACKED(3 0) | 0 = *SRCLIB, 1-32, -1 = *ASPDEV | SYSTEM 1 LIBRARY ASP
LIB1ASPD | System 1 library ASP device (File entry options) | CHAR(10) | *LIB1ASP, user-defined name | SYSTEM 1 LIBRARY ASP DEV
LIB2ASP | System 2 library ASP number | PACKED(3 0) | 0 = *SRCLIB, 1-32, -1 = *ASPDEV | SYSTEM 2 LIBRARY ASP
LIB2ASPD | System 2 library ASP device (File entry options) | CHAR(10) | *LIB2ASP, user-defined name | SYSTEM 2 LIBRARY ASP DEV
NBROMTDTA | Number of omit content (OMTDTA) values | PACKED(3 0) | 1-10 | NUMBER OF OMIT CONTENT VALUES
OMTDTA | Omit content values (File entry options) | CHAR(100) | *NONE, *FILE, *MBR (10 characters each) | OMIT CONTENT
SPLFOPT | Spooled file options | CHAR(10) | *NONE, *HLD, *HLDONSAV | SPOOLED FILE OPTIONS
NUMCOOPTYP | Number of cooperating object types | PACKED(3 0) | 0-999 | NUMBER OF COOPERATING OBJECT TYPES
COOPTYPE | Cooperating object types | CHAR(100) | *FILE, *DTAARA, *DTAQ | COOPERATING OBJECT TYPES
NBRATROPT | Number of attribute options | PACKED(3 0) | -1, 1-50 | NUMBER OF ATTRIBUTE
ATROPT | Attribute options | CHAR(500) | *ALL | ATTRIBUTE
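The same SELECT technique shown for the WRKDG outfile applies here. For example, to review which objects a data group deliberately excludes from replication, the following sketch queries the process type field; it assumes only fields documented in Table 138, with library/filename standing for the outfile created by the WRKDGOBJE command:
SELECT DGDFN, OBJ1, LIB1, TYPE FROM library/filename WHERE PRCTYPE = '*EXCLD' ORDER BY LIB1, OBJ1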
MXDGTSP outfile (WRKDGTSP command)
Table 139. MXDGTSP outfile (WRKDGTSP command)
Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
DTASRC | Data source | CHAR(10) | *SYS1, *SYS2 | DATA SOURCE
APYSSN | Apply session | CHAR(10) | A-F | APPLY SESSION
CRTTSP | Create timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp, normalized to the target system (timestamp when the journal entry is created) | CREATE TIMESTAMP
SNDTSP | Send timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp, normalized to the target system (set equal to the create timestamp (CRTTSP) when using remote journaling; for non-remote journaling, this is the time the journal entry is read on the source system and sent by the MIMIX send process) | SEND TIMESTAMP
RCVTSP | Receive timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp, normalized to the target system (timestamp when the journal entry is received by the journal reader on the target system when using remote journaling, or received by the target system from the MIMIX send process for non-remote journaling) | RECEIVE TIMESTAMP
APYTSP | Apply timestamp (YYYY-MM-DD.HH.MM.SS.mmmmmm) | TIMESTAMP | SAA timestamp, normalized to the target system (timestamp when the journal entry is applied on the target system) | APPLY TIMESTAMP
CRTSNDET | Elapsed time between the create and send timestamps (milliseconds) | PACKED(10 0) | Calculated, 0-9999999999 (for remote journaling, the create and send times are set equal, so the elapsed time is 0) | SEND ELAPSED TIME
SNDRCVET | Elapsed time between the send and receive timestamps (milliseconds) | PACKED(10 0) | Calculated, 0-9999999999 | RECEIVE ELAPSED TIME
RCVAPYET | Elapsed time between the receive and apply timestamps (milliseconds) | PACKED(10 0) | Calculated, 0-9999999999 | APPLY ELAPSED TIME
CRTAPYET | Elapsed time between the create and apply timestamps (milliseconds) | PACKED(10 0) | Calculated, 0-9999999999 (total time from generation of the journal entry to the time it is applied on the target system) | TOTAL ELAPSED TIME
SYSTDIFF | Time differential between the source and target systems, where time differential = source time - target time | PACKED(10 0) | -9999999999 to 9999999999 | TIME DIFFERENCE
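Because the elapsed-time fields are plain numeric columns, this outfile lends itself to latency reporting. The following statement is a minimal sketch that assumes only fields documented in Table 139, with library/filename standing for the outfile created by the WRKDGTSP command; it averages the end-to-end replication time per data group and apply session:
SELECT DGDFN, APYSSN, AVG(CRTAPYET) AS AVG_MS, MAX(CRTAPYET) AS MAX_MS FROM library/filename GROUP BY DGDFN, APYSSN ORDER BY DGDFN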
MXJRNDFN outfile (WRKJRNDFN command)
Table 140. MXJRNDFN outfile (WRKJRNDFN command)
Field | Description | Type, length | Valid values | Column headings
JRNDFN | Journal definition name (Journal definition) | CHAR(10) | User-defined journal definition name | JRNDFN NAME
JRNSYS | System name (Journal definition) | CHAR(8) | User-defined system name | JRNDFN SYSTEM
JRN | Journal name (Journal) | CHAR(10) | Journal, *JRNDFN | JOURNAL
JRNLIB | Journal library (Journal) | CHAR(10) | Journal library | JOURNAL LIBRARY
JRNLIBASP | Journal library ASP | PACKED(3 0) | Numeric value: 0 = *CRTDFT, 1-32, -1 = *ASPDEV | JOURNAL LIBRARY ASP
JRNRCVPFX | Journal receiver prefix (Journal receiver prefix) | CHAR(10) | *GEN, user-defined name | JRNRCV PREFIX
JRNRCVLIB | Journal receiver library (Journal receiver prefix) | CHAR(10) | User-defined name, *JRNLIB | JRNRCV LIBRARY
RCVLIBASP | Journal receiver library ASP | PACKED(3 0) | Numeric value: 0 = *CRTDFT, 1-32, -1 = *ASPDEV | JRNRCV LIBRARY ASP
CHGMGT | Receiver change management | CHAR(20) | 2 x CHAR(10): *NONE, *TIME, *SIZE, *SYSTEM. The only valid combinations are *TIME *SIZE and *TIME *SYSTEM. | RECEIVER CHANGE MANAGEMENT
THRESHOLD | Receiver threshold size (MB) | PACKED(7 0) | 10-1000000 | RECEIVER THRESHOLD SIZE (MB)
RCVTIME | Time of day to change receiver | ZONED(6 0) | Time | RECEIVER CHANGE TIME
RESETTHLD | Reset sequence threshold | PACKED(5 0) | 10-1000000 | RESET SEQUENCE THRESHOLD
DLTMGT | Receiver delete management | CHAR(10) | *YES, *NO | RECEIVER DELETE MANAGEMENT
KEEPUNSAV | Keep unsaved journal receivers | CHAR(10) | *YES, *NO | KEEP UNSAVED JRNRCV
KEEPRCVCNT | Keep journal receiver count | PACKED(3 0) | 0-999 | KEEP JRNRCV COUNT
KEEPJRNRCV | Keep journal receivers (days) | PACKED(3 0) | 0-999 | KEEP JRNRCV (DAYS)
TEXT | Description | CHAR(50) | *BLANK, user-defined text | DESCRIPTION
JRNRCVASP | Journal receiver ASP | PACKED(3 0) | Numeric value (0 = *LIBASP) | JRNRCV ASP
MSGQ | Threshold message queue | CHAR(10) | User-defined name, *JRNDFN | THRESHOLD MSGQ
MSGQLIB | Threshold message queue library | CHAR(10) | *JRNLIB, user-defined name (see the JRNLIB field if this field contains *JRNLIB) | THRESHOLD MSGQ LIBRARY
RJLNK | Remote journal link | CHAR(10) | *NONE, *SOURCE, *TARGET | RJ LINK
EXITPGM | Exit program | CHAR(10) | *NONE, user-defined name | EXIT PROGRAM
EXITPGMLIB | Exit program library | CHAR(10) | User-defined name | EXIT PROGRAM LIBRARY
MINENTDTA | Minimal journal entry data | CHAR(100) | Array of 10 CHAR(10) fields: *DTAARA, *FLDBDY, *FILE, *NONE | MIN JRN ENTRY DATA
REQTHLDSIZ | Requested threshold size | PACKED(7 0) | Numeric value | REQUESTED THRESHOLD SIZE
SAVTYPE | Save type | CHAR(10) | | SAVE TYPE
JRNLAGLMT | Journaling lag limit (seconds) | PACKED(3 0) | | JOURNALING LAG LIMIT (SEC)
JRNLIBASPD | Journal library ASP device | CHAR(10) | *JRNLIBASP, user-defined name | JOURNAL LIBRARY ASP DEV
RCVLIBASPD | Journal receiver library ASP device | CHAR(10) | *RCVLIBASP, user-defined name | JRNRCV LIBRARY ASP DEV
TGTSTATE | Target journal state | CHAR(10) | *ACTIVE, *STANDBY | TARGET JOURNAL STATE
JRNCACHE | Journal cache option | CHAR(10) | *SRC, *TGT, *BOTH, *NONE | JOURNAL CACHING
RCVSIZOPT | Receiver size option | CHAR(10) | *MAXOPT2, *MAXOPT3 | RECEIVER SIZE OPTION
RESETTHLD2 | Reset large sequence threshold | PACKED(15 0) | 10-100,000,000,000,000 | RESET SEQUENCE THRESHOLD2
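A quick check against this outfile can flag journal definitions whose receivers are not being cleaned up. The following sketch assumes only fields documented in Table 140, with library/filename standing for the outfile created by the WRKJRNDFN command; it lists definitions that have receiver delete management turned off:
SELECT JRNDFN, JRNSYS, JRN, JRNLIB FROM library/filename WHERE DLTMGT = '*NO' ORDER BY JRNSYS, JRNDFN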
MXRJLNK outfile (WRKRJLNK command)
Table 141. MXRJLNK outfile (WRKRJLNK command)
Field | Description | Type, length | Valid values | Column headings
SRCJRNDFN | Journal definition name on source | CHAR(10) | Journal definition name | SOURCE JOURNAL DEFINITION
SRCSYS | Source system name of journal definition | CHAR(8) | System name | SOURCE SYSTEM
SRCJEJRNA | Source journal library ASP | DEC(3) | 0 = *CRTDFT, -1 = *ASPDEV | SRC JRN LIBRARY ASP
SRCJEJLAD | Source journal library ASP device | CHAR(10) | *JRNLIBASP, *ASPDEV, ASP primary group name | SRC JRN LIBRARY ASP DEV
SRCJERCVA | Source journal receiver library ASP | DEC(3) | 0 = *CRTDFT, -1 = *ASPDEV | SRC JRNRCV LIBRARY ASP
SRCJERLAD | Source journal receiver library ASP device | CHAR(10) | *RCVLIBASP, *ASPDEV, ASP primary group name | SRC JRNRCV LIBRARY ASP DEV
TGTJRNDFN | Journal definition name on target | CHAR(10) | Journal definition name | TARGET JOURNAL DEFINITION
TGTSYS | Target system name of journal definition | CHAR(8) | System name | TARGET SYSTEM
TGTJEJRNA | Target journal library ASP | DEC(3) | 0 = *CRTDFT, -1 = *ASPDEV | TGT JRN LIBRARY ASP
TGTJEJLAD | Target journal library ASP device | CHAR(10) | *JRNLIBASP, *ASPDEV, ASP primary group name | TGT JRN LIBRARY ASP DEV
TGTJERCVA | Target journal receiver library ASP | DEC(3) | 0 = *CRTDFT, -1 = *ASPDEV | TGT JRNRCV LIBRARY ASP
TGTJERLAD | Target journal receiver library ASP device | CHAR(10) | *RCVLIBASP, *ASPDEV, ASP primary group name | TGT JRNRCV LIBRARY ASP DEV
RJMODE | Delivery mode of remote journaling | CHAR(10) | *ASYNC, *SYNC, blank | RJ MODE (DELIVERY)
RJSTATE | Remote journal state | CHAR(10) | *ASYNC, *ASYNCPEND, *SYNC, *SYNCPEND, *INACTIVE, *CTLINACT, *FAILED, *NOTBUILT, *UNKNOWN | STATE
PRITFRDFN | Primary transfer definition | CHAR(10) | Transfer definition name, *SYSDFN | PRIMARY TFRDFN
SECTFRDFN | Secondary transfer definition | CHAR(10) | Transfer definition name, *SYSDFN, *NONE | SECONDARY TFRDFN
PRIORITY | Async process priority | PACKED(3 0) | 0 = *SYSDFN, 1-99 | PRIORITY
TEXT | Text description | CHAR(50) | Plain text | TEXT
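Filtering on the RJSTATE field is a convenient way to spot remote journal links that are not actively replicating. The following sketch assumes only fields documented in Table 141, with library/filename standing for the outfile created by the WRKRJLNK command:
SELECT SRCJRNDFN, SRCSYS, TGTJRNDFN, TGTSYS, RJSTATE FROM library/filename WHERE RJSTATE NOT IN ('*ASYNC', '*SYNC') ORDER BY SRCSYS, SRCJRNDFN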
MXSYSDFN outfile (WRKSYSDFN command)
Table 142. MXSYSDFN outfile (WRKSYSDFN command)
Field | Description | Type, length | Valid values | Column headings
SYSDFN | System definition | CHAR(8) | User-defined name | SYSDFN NAME
TYPE | System type | CHAR(10) | *MGT, *NET | SYSTEM TYPE
PRITFRDFN | Configured primary transfer definition | CHAR(10) | User-defined name | CONFIGURED PRITFRDFN
SECTFRDFN | Configured secondary transfer definition | CHAR(10) | User-defined name | CONFIGURED SECTFRDFN
CLUMBR | Cluster member | CHAR(10) | *YES, *NO | CLUSTER MEMBER
CLUTFRDFN | Cluster transfer definition | CHAR(20) | User-defined name, *PRITFRDFN, *SECTFRDFN (refer to the PRITFRNAME, PRITFRSYS1, and PRITFRSYS2 fields if this field contains *PRITFRDFN) | CLUSTER TFRDFN
PRIMSGQ | Primary message queue (Primary message handling) | CHAR(10) | User-defined name | PRIMARY MSGQ
PRIMSGQLIB | Primary message queue library (Primary message handling) | CHAR(10) | User-defined name, *LIBL | PRIMARY MSGQ LIB
PRISEV | Primary message queue severity (Primary message handling) | CHAR(10) | *SEVERE, *INFO, *WARNING, *ERROR, *TERM, *ALERT, *ACTION, 0-99 | PRIMARY MSGQ SEV
PRISEVNBR | Primary message queue severity number (Primary message handling) | PACKED(3 0) | 0-99 | PRIMARY MSGQ SEV NBR
PRIINFLVL | Primary message queue information level (Primary message handling) | CHAR(10) | *SUMMARY, *ALL | PRIMARY MSGQ INFO LEVEL
SECMSGQ | Secondary message queue (Secondary message handling) | CHAR(10) | User-defined name | SECONDARY MSGQ
SECMSGQLIB | Secondary message queue library (Secondary message handling) | CHAR(10) | User-defined name, *LIBL | SECONDARY MSGQ LIB
SECSEV | Secondary message queue severity (Secondary message handling) | CHAR(10) | *SEVERE, *INFO, *WARNING, *ERROR, *TERM, *ALERT, *ACTION, 0-99 | SECONDARY MSGQ SEV
SECSEVNBR | Secondary message queue severity number (Secondary message handling) | PACKED(3 0) | 0-99 | SECONDARY MSGQ SEV NBR
SECINFLVL | Secondary message queue information level (Secondary message handling) | CHAR(10) | *SUMMARY, *ALL | SECONDARY MSGQ INFO LEVEL
TEXT | Description | CHAR(50) | *BLANK, user-defined text | DESCRIPTION
JRNMGRDLY | Journal manager delay (seconds) | PACKED(3 0) | 5-900 | JRNMGR DELAY (SEC)
SYSMGRDLY | System manager delay (seconds) | PACKED(3 0) | 5-900 | SYSMGR DELAY (SEC)
OUTQ | Output queue (Output queue) | CHAR(10) | User-defined name | OUTQ
OUTQLIB | Output queue library (Output queue) | CHAR(10) | User-defined name | OUTQ LIBRARY
HOLD | Hold on output queue | CHAR(10) | *YES, *NO | HOLD ON OUTQ
SAVE | Save on output queue | CHAR(10) | *YES, *NO | SAVE ON OUTQ
KEEPSYSHST | Keep system history (days) | PACKED(3 0) | 1-365 | KEEP SYS HISTORY (DAYS)
KEEPDGHST | Keep data group history (days) | PACKED(3 0) | 1-365 | KEEP DG HISTORY (DAYS)
KEEPMMXDTA | Keep MIMIX data (days) | PACKED(3 0) | 1-365, 0 = *NOMAX | KEEP MIMIX DATA (DAYS)
DTALIBASP | MIMIX data library ASP | PACKED(3 0) | Numeric value, 0 = *CRTDFT | MIMIX DATA LIB ASP
DSKSTGLMT | Disk storage limit (GB) | PACKED(5 0) | 1-9999, 0 = *NOMAX | DISK STORAGE LIMIT (GB)
SBMUSR | User profile for submit job | CHAR(10) | *JOBD, *CURRENT | USRPRF FOR SUBMIT JOB
MGRJOBD | Manager job description (Manager job description) | CHAR(10) | User-defined name | MANAGER JOBD
MGRJOBDLIB | Manager job description library (Manager job description) | CHAR(10) | User-defined name | MANAGER JOBD LIBRARY
DFTJOBD | Default job description (Default job description) | CHAR(10) | User-defined name | DEFAULT JOBD
DFTJOBDLIB | Default job description library (Default job description) | CHAR(10) | User-defined name | DEFAULT JOBD LIBRARY
PRDLIB | MIMIX product library | CHAR(10) | User-defined name | MIMIX PRODUCT LIBRARY
RSTARTTIME | Job restart time | CHAR(8) | 000000-235959, *NONE (values are returned left-justified) | RESTART TIME
KEEPNEWNFY | Keep new notifications (days) | PACKED(3 0) | 1-365, 0 = *NOMAX | KEEP NEW NFY (DAYS)
KEEPACKNFY | Keep acknowledged notifications (days) | PACKED(3 0) | 1-365, 0 = *NOMAX | KEEP ACK NFY (DAYS)
ASPGRP | ASP group | CHAR(10) | *NONE, user-defined name | ASP GROUP
DEVDMN | Cluster device domain | CHAR(10) | *NONE, user-defined name | CLUSTER DEVICE DOMAIN
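Because each record carries the retention settings for one system definition, a single query can compare retention across systems. The following sketch assumes only fields documented in Table 142, with library/filename standing for the outfile created by the WRKSYSDFN command:
SELECT SYSDFN, TYPE, KEEPSYSHST, KEEPDGHST, KEEPMMXDTA FROM library/filename ORDER BY SYSDFN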
MXTFRDFN outfile (WRKTFRDFN command)
The Work with Transfer Definitions (WRKTFRDFN) command generates new outfiles based on the MXTFRDFN record format.
Table 143. MXTFRDFN outfile (WRKTFRDFN command)
Field | Description | Type, length | Valid values | Column headings
TFRDFN | Transfer definition name (Transfer definition) | CHAR(10) | User-defined transfer definition name | TFRDFN NAME
TFRSYS1 | System 1 name (Transfer definition) | CHAR(8) | User-defined system name | TFRDFN NAME SYSTEM 1
TFRSYS2 | System 2 name (Transfer definition) | CHAR(8) | User-defined system name | TFRDFN NAME SYSTEM 2
PROTOCOL | Transfer protocol | CHAR(10) | *TCP, *SNA, *OPTI | TRANSFER PROTOCOL
HOST1 | System 1 host name or address | CHAR(256) | *SYS1, user-defined name (refer to the TFRSYS1 field if this field contains *SYS1) | SYSTEM 1 HOST OR ADDRESS
HOST2 | System 2 host name or address | CHAR(256) | *SYS2, user-defined name (refer to the TFRSYS2 field if this field contains *SYS2) | SYSTEM 2 HOST OR ADDRESS
PORT1 | System 1 port number or alias | CHAR(14) | User-defined port number | SYSTEM 1 PORT NBR OR ALIAS
PORT2 | System 2 port number or alias | CHAR(14) | User-defined port number | SYSTEM 2 PORT NBR OR ALIAS
LOCNAME1 | System 1 location name | CHAR(8) | *SYS1, user-defined name | SYSTEM 1 LOCATION
LOCNAME2 | System 2 location name | CHAR(8) | *SYS2, user-defined name | SYSTEM 2 LOCATION
NETID1 | System 1 network identifier | CHAR(8) | *LOC, user-defined name, *NETATR, *NONE | SYSTEM 1 NETWORK IDENTIFIER
NETID2 | System 2 network identifier | CHAR(8) | *LOC, user-defined name, *NETATR, *NONE | SYSTEM 2 NETWORK IDENTIFIER
MODE | SNA mode | CHAR(8) | User-defined name, *NETATR | SNA MODE
TEXT | Description | CHAR(50) | *BLANK, user-defined text | DESCRIPTION
THLDSIZE | Reset sequence threshold | PACKED(7 0) | 0-9999999 | THRESHOLD SIZE
RDB | Relational database | CHAR(18) | *GEN, user-defined name | RELATIONAL DATABASE
RDBSYS1 | System 1 relational database name | CHAR(18) | *SYS1, user-defined name | RELATIONAL DATABASE
RDBSYS2 | System 2 relational database name | CHAR(18) | *SYS2, user-defined name | RELATIONAL DATABASE
MNGRDB | Manage RDB directory entries indicator | CHAR(10) | *DFT, *YES, *NO | MANAGE DIRECTORY ENTRIES
TFRSHORTN | Transfer definition short name | CHAR(4) | Name | TFRDFN SHORT NAME
MNGAJE | Manage autostart job entry | CHAR(10) | *YES, *NO | MANAGE AJE
MZPRCDFN outfile (WRKPRCDFN command)
Table 144. MZPRCDFN outfile (WRKPRCDFN command)
Field | Description | Type, length | Valid values | Column headings
PRCDFN | Process definition name (Process definition) | CHAR(10) | *ANY, user-defined name | PRCDFN NAME
PRCSYS | System name (Process definition) | CHAR(10) | *ANY, *BACKUP, *PRIMARY, *REPLICATE, user-defined name | PRCDFN SYSTEM
TYPE | Process type | CHAR(10) | *ANY, *CRGADDNOD, *CRGCHG, *CRGCRT, *CRGDLT, *CRGDLTCMD, *CRGEND, *CRGENDNOD, *CRGFAIL, *CRGREJOIN, *CRGRESTR, *CRGRMVNOD, *CRGSTR, *CRGSWT, *CRGUNDO, user-defined value | PROCESS TYPE
PRDLIB | Product library | CHAR(10) | User-defined name | PRODUCT LIBRARY
TEXT | Description | CHAR(50) | User-defined value | DESCRIPTION
MZPRCE outfile (WRKPRCE command)
Table 145. MZPRCE outfile (WRKPRCE command)
Field | Description | Type, length | Valid values | Column headings
PRCDFN | Process definition name (Process definition) | CHAR(10) | *ANY, user-defined name | PRCDFN NAME
PRCSYS | System name (Process definition) | CHAR(10) | *ANY, *BACKUP, *PRIMARY, *REPLICATE, user-defined name | PRCDFN SYSTEM
TYPE | Process type | CHAR(10) | *ANY, *CRGADDNOD, *CRGCHG, *CRGCRT, *CRGDLT, *CRGDLTCMD, *CRGEND, *CRGENDNOD, *CRGFAIL, *CRGREJOIN, *CRGRESTR, *CRGRMVNOD, *CRGSTR, *CRGSWT, *CRGUNDO, user-defined value | PROCESS TYPE
SEQNBR | Sequence number | PACKED(6 0) | 1-999999 | SEQUENCE NUMBER
LABEL | Label | CHAR(10) | User-defined name | LABEL
MSGID | Message identifier | CHAR(10) | *ANY, user-defined value | MESSAGE ID
ACTION | Action | CHAR(10) | *CMD, *CMDPMT, *CMP, *CMT, *GOTO, *RTN | ACTION
OPERAND1 | Compare operand 1 | CHAR(10) | Blank, *ACTCODE, *APPCRGSTS, *BCKNOD1, *BCKNOD2, *BCKNOD3, *BCKNOD4, *BCKNOD5, *BCKSTS1, *BCKSTS2, *BCKSTS3, *BCKSTS4, *BCKSTS5, *CHGNOD, *CHGROLE, *CLUNAME, *CRGNAME, *CRGTYPE, *DTACRGSTS, *ENDOPT, *LCLNOD, *LCLPRVROL, *LCLPRVSTS, *LCLROLE, *LCLSTS, *NODCNT, *PRDLIB, *PRINOD, *PRIPRVROL, *PRIPRVSTS, *PRISTS, *PRVACTCDE, *PRVROL1, *PRVROL2, *PRVROL3, *PRVROL4, *PRVROL5, *PRVSTS1, *PRVSTS2, *PRVSTS3, *PRVSTS4, *PRVSTS5, *REPNOD1, *REPNOD2, *REPNOD3, *REPNOD4, *REPNOD5, *REPSTS1, *REPSTS2, *REPSTS3, *REPSTS4, *REPSTS5, *ROLETYPE, user-defined type | COMPARE OPERAND1
OPERATOR | Compare operator | CHAR(10) | | COMPARE OPERATOR
OPERAND2 | Compare operand 2 | CHAR(10) | Same values as compare operand 1 | COMPARE OPERAND2
CMD | Command details | CHAR(1000) | Blank, user-defined value | COMMAND DETAILS
ACTLBL | Action label | CHAR(10) | Blank, user-defined value | ACTION LABEL
RTNVAL | Return value | CHAR(10) | *FAIL, *SUCCESS | RETURN VALUE
COMMENT | Comment text | CHAR(50) | Blank, user-defined value | COMMENT TEXT
MXDGIFSTE outfile (WRKDGIFSTE command)
Table 146. MXDGIFSTE outfile (WRKDGIFSTE command)
Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
OBJ1 | System 1 object name (Unicode) | GRAPHIC(512) VARLEN(75) | User-defined name | SYSTEM 1 IFS OBJECT (UNICODE)
FID1 | System 1 file identifier (binary) | BIN(16 0) | IBM i-defined file identifier | SYSTEM 1 FILE ID (BINARY)
FID1HEX | System 1 file identifier (hexadecimal-readable) | CHAR(32) | IBM i-defined file identifier | SYSTEM 1 FILE ID (HEX)
OBJ2 | System 2 object name (Unicode) | GRAPHIC(512) VARLEN(75) | User-defined name | SYSTEM 2 IFS OBJECT (UNICODE)
FID2 | System 2 file identifier (binary) | BIN(16 0) | IBM i-defined file identifier | SYSTEM 2 FILE ID (BINARY)
FID2HEX | System 2 file identifier (hexadecimal-readable) | CHAR(32) | IBM i-defined file identifier | SYSTEM 2 FILE ID (HEX)
CCSID | Object CCSID | BIN(5 0) | Defaults to the job CCSID. If the job CCSID is 65535 or the data cannot be converted to the job CCSID, the OBJ1 and OBJ2 values remain in Unicode. | CCSID
OBJ1CVT | System 1 object name (converted to job CCSID) | CHAR(512) VARLEN(75) | User-defined name converted using the CCSID value; zero length if conversion is not possible | SYSTEM 1 IFS OBJECT CONVERTED
OBJ2CVT | System 2 object name (converted to job CCSID) | CHAR(512) VARLEN(75) | User-defined name converted using the CCSID value; zero length if conversion is not possible | SYSTEM 2 IFS OBJECT CONVERTED
TYPE | Object type | CHAR(10) | *DIR, *STMF, *SYMLNK | OBJECT TYPE
STSVAL | Entry status | CHAR(10) | *ACTIVE, *HLD, *HLDERR, *HLDIGN, *HLDRNM, *RLSWAIT | CURRENT STATUS
JRN1STS | Journaled on system 1 | CHAR(10) | *YES, *NO | SYSTEM 1 JOURNALED
JRN2STS | Journaled on system 2 | CHAR(10) | *YES, *NO | SYSTEM 2 JOURNALED
APYSSN | Apply session | CHAR(10) | A (the only supported apply session) | APPLY SESSION
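The status and journaling fields make this outfile useful for finding IFS tracking entries that need attention. The following sketch assumes only fields documented in Table 146, with library/filename standing for the outfile created by the WRKDGIFSTE command; it lists entries that are held or that are not journaled on system 1:
SELECT OBJ1CVT, TYPE, STSVAL, JRN1STS, JRN2STS FROM library/filename WHERE STSVAL <> '*ACTIVE' OR JRN1STS = '*NO' ORDER BY OBJ1CVT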
MXDGOBJTE outfile (WRKDGOBJTE command)
Table 147. MXDGOBJTE outfile (WRKDGOBJTE command)
Field | Description | Type, length | Valid values | Column headings
DGDFN | Data group name (Data group definition) | CHAR(10) | User-defined data group name | DGDFN NAME
DGSYS1 | System 1 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 1
DGSYS2 | System 2 name (Data group definition) | CHAR(8) | User-defined system name | DGDFN SYSTEM 2
OBJ1 | System 1 object | CHAR(10) | User-defined name | SYSTEM 1 OBJECT
LIB1 | System 1 library | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY
TYPE | Object type | CHAR(10) | *DTAARA, *DTAQ | OBJECT TYPE
OBJ2 | System 2 object | CHAR(10) | User-defined name | SYSTEM 2 OBJECT
LIB2 | System 2 library | CHAR(10) | User-defined name | SYSTEM 2 LIBRARY
STSVAL | Entry status | CHAR(10) | *ACTIVE, *HLD, *HLDERR, *HLDIGN, *RLSWAIT | CURRENT STATUS
JRN1STS | Journaled on system 1 | CHAR(10) | *YES, *NO | SYSTEM 1 JOURNALED
JRN2STS | Journaled on system 2 | CHAR(10) | *YES, *NO | SYSTEM 2 JOURNALED
APYSSN | Current apply session | CHAR(10) | A (the only supported apply session) | CURRENT APYSSN
RQSAPYSSN | Requested apply session | CHAR(10) | A (the only supported apply session) | REQUESTED APYSSN
OBJ1APY | System 1 object (known by apply) | CHAR(10) | User-defined name | SYSTEM 1 OBJECT (APPLY)
LIB1APY | System 1 library (known by apply) | CHAR(10) | User-defined name | SYSTEM 1 LIBRARY (APPLY)
OBJ2APY | System 2 object (known by apply) | CHAR(10) | User-defined name | SYSTEM 2 OBJECT (APPLY)
LIB2APY | System 2 library (known by apply) | CHAR(10) | User-defined name | SYSTEM 2 LIBRARY (APPLY)
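A similar query can confirm that every tracked data area and data queue is journaled on both systems. The following sketch assumes only fields documented in Table 147, with library/filename standing for the outfile created by the WRKDGOBJTE command:
SELECT OBJ1, LIB1, TYPE, JRN1STS, JRN2STS FROM library/filename WHERE JRN1STS = '*NO' OR JRN2STS = '*NO' ORDER BY LIB1, OBJ1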
Notices
Copyright 1999, 2010, Vision Solutions, Inc. All rights reserved.
The information in this document is subject to change without notice and is furnished under a license
agreement. This document is proprietary to Vision Solutions, Inc., and may be used only as authorized in our
license agreement. No portion of this manual may be copied or otherwise reproduced, translated, or
transmitted in whole or part, without the express consent of Vision Solutions, Inc.
If you are an entity of the U.S. government, you agree that this documentation and the program(s) referred to in
this document are Commercial Computer Software, as defined in the Federal Acquisition Regulations (FAR),
and the DoD FAR Supplement, and are delivered with only those rights set forth within the license agreement
for such documentation and program(s). Use, duplication or disclosure by the Government is subject to
restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software
clause at DFAR 252.227-7013 (48 CFR) or subparagraphs (c)(1) & (2) of the Commercial Computer Software -
Restricted Rights clause at FAR 52.227-19.
Vision Solutions, Inc. makes no warranty of any kind regarding this material and assumes no responsibility for
any errors that may appear in this document. The program(s) referred to in this document are not specifically
developed, or licensed, for use in any nuclear, aviation, mass transit, or medical application or in any other
inherently dangerous applications, and any such use shall remove Vision Solutions, Inc. from liability. Vision
Solutions, Inc. shall not be liable for any claims or damages arising from such use of the Program(s) for any
such applications.
Examples and Example Programs:
This book contains examples of reports and data used in daily operation. To illustrate them as completely as
possible the examples may include names of individuals, companies, brands, and products. All of these names
are fictitious. Any similarity to the names and addresses used by an actual business enterprise is entirely
coincidental.
This book contains small programs that are furnished by Vision Solutions, Inc. as simple examples to provide
an illustration. These examples have not been thoroughly tested under all conditions. Vision Solutions,
therefore, cannot guarantee or imply reliability, serviceability, or function of these example programs. All
programs contained herein are provided to you AS IS. THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE EXPRESSLY DISCLAIMED.
MIMIX and Vision Solutions are registered trademarks of Vision Solutions, Inc.
IntelliStart, MIMIX dr1, MIMIX AutoGuard, MIMIX AutoNotify, MIMIX Availability Manager, MIMIX
Enterprise, MIMIX Professional, MIMIX DB2 Replicator, MIMIX Object Replicator, MIMIX Monitor, MIMIX
Promoter, MIMIX Switch Assistant, RJ Link, Replicate1, Vision AutoValidate, and MIMIX Global are
trademarks of Vision Solutions, Inc.
AS/400, DB2, eServer, i5/OS, IBM, iSeries, OS/400, Power, System i, and WebSphere are trademarks of
International Business Machines Corporation.
Internet Explorer, Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of
Microsoft Corporation in the United States and/or other countries.
Netscape is a registered trademark of AOL LLC.
Mozilla and Firefox are trademarks of the Mozilla Foundation.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Java and all Java-based trademarks are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries.
All other trademarks are the property of their respective owners.
Corporate Headquarters
Vision Solutions, Inc.
Irvine, California USA
Tel: +1 (949) 253-6500
Index
Symbols
*FAILED activity entry 42
*HLD, files on hold 94
*HLDERR, held due to error 357
*HLDERR, hold error status 73
*MAXOPT3 sequence number size 196
*MSGQ, maintaining private authorities 95
A
access paths, journaling 196
access types (file) for T-ZC entries 362
accessing
MIMIX Main Menu 83
active server technology 411
additional resources 18
advanced journaling
add to existing data group 78
apply session balancing 80
benefits 68
conversion examples 79
convert data group to 78
loading tracking entries 257
planning for 78
replication process 69
serialized transactions with database 78
advanced journaling, data areas and data
queues
synchronizing 475
advanced journaling, IFS objects
journal receiver size 191
restrictions 109
synchronizing 475
advanced journaling, large objects (LOBs)
journal receiver size 191
synchronizing 447
APPC/SNA, configuring 147
application group
create resource groups for a 295
define primary node 296
application group definition
creating 294
apply session
constraint induced changes 349
default value 215
specifying 212
apply session, database
load balancing 80
ASP
basic 548
concepts 547
group 548
independent 548
independent, benefits 547
independent, configuration tips 551
independent, configuring 551
independent, configuring IFS objects 552
independent, configuring library-based objects 552
independent, effect on library list 553
independent, journal receiver considerations
552
independent, limitations 550
independent, primary 548
independent, replication 546
independent, requirements 550
independent, restrictions 550
independent, secondary 548
SYSBAS 546
system 547
user 547
asynchronous delivery 61
attributes of a step, changing 516
attributes, supported
CMPDLOA command 596
CMPFILA command 581
CMPIFSA command 594
CMPOBJ A command 586
audit
authority level to run 558
automatic recovery 559
before switching 558
best practice 558
comparison levels 558
differences, resolving 569
improve performance of #MBRRCDCNT 330
job log 571
recommendations 558, 559
requirements 558
results 569
audit level
best practice 558
audit results 569
#DGFE rule 572, 630
#DLOATR rule 596, 632
#DLOATR rule, ASP attributes 602
#FILATR rule 581, 634
#FILATR rule, ASP attributes 602
#FILATR rule, journal attributes 598
#FILATRMBR rule 581, 634
#FILATRMBR rule, ASP attributes 602
#FILATRMBR rule, journal attributes 598
#FILDTA rule 574, 636
#IFSATR rule 594, 642
#IFSATR rule, ASP attributes 602
#IFSATR rule, journal attributes 598
#MBRRCDCNT rule 574, 640
#OBJATR rule 586, 644
#OBJATR rule, ASP attributes 602
#OBJATR rule, journal attributes 598
#OBJATR rule, user profile password attribute 608
#OBJATR rule, user profile status attribute 605
interpreting, attribute comparisons 577
interpreting, file data comparisons 574
resolving problems 569, 572
timestamp difference 116
troubleshooting 571
auditing and reporting, compare commands
DLO attributes 405
file and member attributes 396
file data using active processing 436
file data using subsetting options 439
file data with repair capability 430
file data without active processing 427
files on hold 433
IFS object attributes 402
object attributes 399
auditing level, object
used for replication 302
auditing value, i5/OS object
set by MIMIX 55
auditing, i5/OS object 24
performed by MIMIX 270
audits 458
authorities, private 95
authority level for auditing, product 558
automatic recovery
audit recommendations 559
automation 480
autostart job entry 161
changing job description 172
changing port information 173
created by MIMIX 171
identifying 171
when to change 172
B
backlog
comparing file data restriction 413
backup system 22
restricting access to files 215
basic ASP 548
batch output 495
benefits
independent ASPs 547
LOB replication 98
best practice
audit level 558
audit level before switch 559
bi-directional data flow 339
broadcast configuration 64
build journal environment
after changing receiver size option 183
C
candidate objects
defined 373
cascade configuration 64
cascading distributions, configuring 343
catchup mode 59
change management
overview 36
remote journal environment 36
change management, journal receivers 181
changing
RJ link 203
startup programs, remote journaling 278
changing from RJ to MIMIX processing
permanently 205
temporarily 204
checklist
convert *DTAARA, *DTAQ to user journaling
138
convert IFS objects to user journaling 138
convert to application groups 132
converting to remote journaling 133
copying configuration data 537
legacy cooperative processing 141
manual configuration (source-send) 129
MIMIX Dynamic Apply 135
new preferred configuration 125
pre-configuration 76
collision points 481
collision resolution 481
default value 215
requirements 358
working with 357
commands
changing defaults 505
displaying a list of 496
commands, by mnemonic
ADDDGDAE 262
ADDMSGLOGE 490
ADDRJLNK 202
ADDSTEP 516
CHGDGDAE 262
CHGJRNDFN 194
CHGRJLNK 203
CHGSYSDFN 154
CHGTFRDFN 167
CHKDGFE 276, 572
CLOMMXLST 504
CMPDLOA 392
CMPFILA 392
CMPFILDTA 411, 427
CMPIFSA 392
CMPOBJA 392
CMPRCDCNT 408
CPYCFGDTA 536
CPYDGDAE 263
CPYDGFE 263
CPYDGIFSE 263
CRTAGDFN 294
CRTCRCLS 359
CRTDGDFN 221, 225
CRTJRNDFN 192
CRTSYSDFN 153
CRTTFRDFN 166
DLTCRCLS 360
DLTDGDFN 230
DLTJRNDFN 230
DLTSYSDFN 230
DLTTFRDFN 230
DSPDGDAE 265
DSPDGFE 265
DSPDGIFSE 265
ENDJRNFE 306
ENDJRNIFSE 309
ENDJRNOBJE 312
ENDJRNPF 306
LODDGDAE 261
LODDGFE 246
LODDGOBJE 243
LODDTARGE 295
MIMIX 83
OPNMMXLST 504
RMVDGDAE 264
RMVDGFE 264
RMVDGFEALS 264
RMVDGIFSE 264
RMVRJCNN 206
RUNCMD 497
RUNCMDS 497
RUNRULE 556
RUNRULEGRP 556
SETDGAUD 270
SETIDCOLA 350
SNDNETDLO 479
SNDNETIFS 478
SNDNETOBJ 446, 476
STRJRNFE 305
STRJRNIFSE 308
STRJRNOBJE 311
STRMMXMGR 269
STRSVR 170
SWTDG 24
SYNCDGACTE 444, 450
SYNCDGFE 444, 451, 460
SYNCDLO 443, 449, 470
SYNCIFS 443, 449, 466, 475
SYNCOBJ 443, 449, 462, 475
VFYCMNLNK 175, 176
VFYJRNFE 307
VFYJRNIFSE 310
VFYJRNOBJE 313
VFYKEYATR 338
WRKCRCLS 359
WRKDGDAE 261, 263
WRKDGDFN 229
WRKDGDLOE 263
WRKDGFE 263
WRKDGIFSE 263
WRKDGOBJE 263
WRKJRNDFN 229
WRKRJLNK 283
WRKSYSDFN 229
WRKTFRDFN 229
commands, by name
Add Data Group Data Area Entry 262
Add Message Log Entry 490
Add Remote Journal Link 202
Add Step 516
Change Data Group Data Area Entry 262
Change Journal Definition 194
Change RJ Link 203
Change System Definition 154
Change Transfer Definition 167
Check Data Group File Entries 276, 572
Close MIMIX List 504
Compare DLO Attributes 392
Compare File Attributes 392
Compare File Data 411, 427
Compare IFS Attributes 392
Compare Object Attributes 392
Compare Record Counts 408
Copy Configuration Data 536
Copy Data Group Data Area Entry 263
Copy Data Group File Entry 263
Copy Data Group IFS Entry 263
Create Application Group Definition 294
Create Collision Resolution Class 359
Create Data Group Definition 221, 225
Create Journal Definition 192
Create System Definition 153
Create Transfer Definition 166
Delete Collision Resolution Class 360
Delete Data Group Definition 230
Delete Journal Definition 230
Delete System Definition 230
Delete Transfer Definition 230
Display Data Group Data Area Entry 265
Display Data Group File Entry 265
Display Data Group IFS Entry 265
End Journal Physical File 306
End Journaling File Entry 306
End Journaling IFS Entries 309
End Journaling Obj Entries 312
Load Data Group Data Area Entries 261
Load Data Group File Entries 246
Load Data Group Object Entries 243
Load Data Resource Group Entry 295
MIMIX 83
Open MIMIX List 504
Remove Data Group Data Area Entry 264
Remove Data Group File Entry 264
Remove Data Group IFS Entry 264
Remove Remote J ournal Connection 206
Run Command 497
Run Commands 497
Run Rule 556
Run Rule Group 556
Send Network DLO 479
Send Network IFS 478
Send Network Object 476
Send Network Objects 446
Set Data Group Auditing 270
Set Identity Column Attribute 350
Start Journaling File Entry 305
Start Journaling IFS Entries 308
Start Journaling Obj Entries 311
Start Lakeview TCP Server 170
Start MIMIX Managers 269
Switch Data Group 24
Synchronize Data Group Activity Entry 450
Synchronize Data Group File Entry 451, 460
Synchronize DG Activity Entry 444
Synchronize DG File Entry 444
Synchronize DLO 443, 449, 470
Synchronize IFS 449
Synchronize IFS Object 443, 466, 475
Synchronize Object 443, 449, 462, 475
Verify Communications Link 175, 176
Verify Journaling File Entry 307
Verify Journaling IFS Entries 310
Verify Journaling Obj Entries 313
Verify Key Attributes 338
Work with Collision Resolution Classes 359
Work with Data Group Data Area Entries 261,
263
Work with Data Group Definition 229
Work with Data Group DLO Entries 263
Work with Data Group File Entries 263
Work with Data Group IFS Entries 263
Work with Data Group Object Entries 263
Work with Journal Definition 229
Work with RJ Links 283
Work with System Definition 229
Work with Transfer Definition 229
commands, run on remote system 497
commit cycles
effect on audit comparison 574, 576
policy effect on compare record count 330
commitment control 98
#MBRRCDCNT audit performance 330
journal standby state, journal cache 321, 323
journaled IFS objects 69
communications
APPC/SNA 147
configuring system level 143
job names 47
native TCP/IP 143
OptiConnect 148
protocols 143
starting TCP sever 170
compare commands
completion and escape messages 483
outfile formats 391
report types and outfiles 390
spooled files 390
comparing
DLO attributes 405
file and member attributes 396
IFS object attributes 402
object attributes 399
when file content omitted 364
comparing attributes
attributes to compare 394
overview 392
supported object attributes 393, 416
comparing file data 411
active server technology 411
advanced subsetting 422
allocated and not allocated records 413
comparing a random sample 422
comparing a range of records 419
comparing recently inserted data 419
comparing records over time 422
data correction 411
excluding unchanged members 422
first and last subset 425
interleave factor 423
job ends due to network timeout 416
keys, triggers, and constraints 414
multi-threaded jobs 412
network inactivity considerations 416
number of subsets 423
parallel processing 412
processing with DBAPY 412, 433
referential integrity considerations 415
repairing files in *HLDERR 412
restrictions 412
security considerations 413
thread groups 421
transfer definition 421
transitional states 412
using active processing 436
using subsetting options 439
wait time 421
with repair capability 430
with repair capability when files are on hold
433
without active processing 427
comparing file record counts 408
concepts
procedures and steps 506
configuration
additional supporting tasks 266
copying existing data 541
results of #DGFE audit after changing 572
configuring
advanced replication techniques 332
bi-directional data flow 339
cascading distributions 343
choosing the correct checklist 123
classes, collision resolution 359
data areas and data queues 103
DLO documents and folders 111
file routing, file combining 341
for improved performance 314
IFS objects 106
independent ASP 551
Intra communications 543, 544
job restart time 285
keyed replication 335
library-based objects 91
message queue objects for user profiles 95
omitting T-ZC journal entry content 363
spooled file replication 93
to replicate SQL stored procedures 368
unique key replication 335
configuring, collision resolution 358
confirmed journal entries 60
considerations
journal for independent ASP 552
what to not replicate 77
constraints
apply session for dependent files 349
auditing with CMPFILA 392
comparing file data 414
omit content and legacy cooperative processing 364
referential integrity considerations 415
requirements 348
requirements when synchronizing 452
restrictions with high availability journal performance enhancements 323
support 348
when journal is in standby state 321
constraints, CMPFILA file-specific attribute 581
constraints, physical files with
apply session ignored 102
configuring 98
legacy cooperative processing 102
constraints, referential 101
contacting Vision Solutions 19
container send process 53
defaults 218
description 51
threshold 218
contextual transfer definitions
considerations 165
RJ considerations 164
continuous mode 59
convert data group
to advanced journaling 138
to application group environment 132
COOPDB (Cooperate with database) 104, 108
cooperative journal (COOPJRN)
behavior 97
cooperative processing
and omitting content 364
configuring files 96
file, preferred method for 48
introduction 48
journaled objects 49
legacy 49
legacy limitations 102
MIMIX Dynamic Apply limitations 101
cooperative processing, legacy
limitations 102
requirements and limitations 102
COOPJRN 97
COOPJRN (Cooperative journal) 211
COOPTYPE (Cooperating object types) 104
copying
data group entries 263
definitions 229
create operation, how replicated 116
creating
procedure 514
CustomerCare 19
customize
switch procdures 511
customizing 480
replication environment 481
D
data area
restrictions of journaled 104
data areas
journaling 68
polling interval 213
polling process 73
synchronizing an object tracking entry 475
data areas and data queues
verifying journaling 313
data distribution techniques 339
data group 23
convert to remote journaling 133
database only 101
determining if RJ link used 283
ending 40, 63
RJ link differences 63
sharing an RJ link 62
short name 209
starting 40
starting the first time 282
switching 23
switching, RJ link considerations 66
timestamps, automatic 213
type 210
data group data area entry 261
adding individual 262
loading from a library 261
data group definition 34, 208
creating 221
parameter tips 209
data group DLO entry 259
adding individual 260
loading from a folder 259
data group entry 374
defined 85
description 23
object 242
procedures for configuring 241
data group file entry 246
adding individual 252
changing 253
loading from a journal definition 250
loading from a library 249, 250
loading from FEs from another data group
251
loading from object entries 247
sources for loading 246
data group IFS entry 255
with independent ASPs 552
data group object entry
adding individual 243
custom loading 242
independent ASP 552
with independent ASP 552
data library 33, 151
data management techniques 339
data queue
restrictions of journaled 104
data queues
journaling 68
synchronizing journaled objects 475
data resource group entry
in data group definition 209
data source 210
database apply
caching 320
serialization 78
with compare file data (CMPFILDTA) 412,
433
database apply caching 320
database apply process 72
description 62
parallel access path maintenance 315
threshold warning 216
database reader process 62
description 62
threshold 216
database receive process 72
database send process 72
description 72
filtering 212
threshold 216
DDM
password validation 280
server in startup programs 278
server, starting 279
defaults, command 505
definitions
data group 34
journal 34
named 34
remote journal link 34
renaming 232
RJ link 34
system 34
transfer 34
delay times 150
delay/retry processing
first and second 213
third 214
delete management
journal receivers 182
overview 36
remote journal environment 37
delete operations
journaled *DTAARA, *DTAQ, IFS objects 121
legacy cooperative processing 121
deleting
data group entries 264
definitions 230
procedure 514
delivery mode
asynchronous 61
synchronous 59
detail report 493
differences, resolving audit 569
directory entries
managing 162
RDB 161
display output 492
displaying
data group entries 265
definitions 231
distribution request, data-retrieval 52
DLOs
example, entry matching 112
generic name support 111
keeping same name 217
object processing 111
documents, MIMIX 16
duplicate identity column values 350
dynamic updates
adding data group entries 252
removing data group entries 264
E
ending CMPFILDTA jobs 426
ending journaling
data areas and data queues 312
files 306
IFS objects 309
IFS tracking entry 309
object tracking entry 312
error code, files in error 612
error messages
switch procedures 510
example
user-generated notification 564
examples
convert to advanced journaling 79
DLO entry matching 112
IFS object selection, subtree 388
job restart time 288
journal definitions for multimanagement environment 188
journal definitions for switchable data group 185
journal receiver exit program 530
load file entries for MIMIX Dynamic Apply 247
monitor for scheduling user rule 566
object entry matching 93
object retrieval delay 366
object selection process 380
object selection, order precedence in 381
object selection, subtree 383
port alias, complex 145
port alias, simple 144
querying content of an output file 699
SETIDCOLA command increment values 354
user-defined rule 562
WRKDG SELECT statements 699
exit points 481
journal receiver management 523, 526
MIMIX Monitor 523
MIMIX Promoter 524
exit programs
journal receiver management 183, 527
requesting customized programs 525
expand support 494
extended attribute cache 325
configuring 325
F
failed request resolution 42
FEOPT (file and tracking entry options) 214
file
new 302
file id (FID) 71
file identifiers (FIDs) 284
files
combining 341
omitting content 362
output 494
routing 342
sharing 339
synchronizing 451
filtering
database replication 72
messages 44
on database send 212
on source side 212
remote journal environment 62
firewall, using CMPFILDTA with 413
folder path names 111
G
generic name support 375
DLOs 111
generic user exit 523
guidelines for auditing 558
H
history retention 151
hot backup 20
I
IBM i5/OS option 42 321
IBM objects to not replicate 77
IFS directory, created during installation 28
IFS file systems 106
unsupported 106
IFS object selection
examples, subtree 388
subtree 378
IFS objects 106
file id (FID) use with journaling 71
file IDs (FIDs) 284
journaled entry types, commitment control
and 69
journaling 68
not supported 106
path names 107
supported object types 106
verifying journaling 310
IFS objects, journaled
restrictions 109
supported operations 117
synchronizing 453, 475
independent ASP 548
limitations 550
primary 548
replication 546
requirements 550
restrictions 550
secondary 548
synchronizing data within an 448
independent ASP threshold monitor 555
independent ASP, journal receiver change 36
information and additional resources 18
installations, multiple MIMIX 22
interleave factor 423
Intra configuration 542
IPL, journal receiver change 36
J
job classes 29
job description parameter 495
job descriptions 29, 151
in data group definition 219
in product library 29
list of MIMIX 29
job log
for audit 571
job name parameter 495
job names 46
job restart time 285
data group definition procedure 291
examples 287
overview 285
parameter 151, 219
system definition procedure 291
jobs
procedures, used in 507
jobs, restarted automatically 285
journal 24
improving performance of 314
maximum number of objects in 25
security audit 50
system 50
journal analysis 42
journal at create 114, 213
requirements 302
requirements and restrictions 303
journal caching 181, 322
journal caching alternative 320
journal code
failed objects 617
files in error 610
system journal transactions 617
journal codes
user journal transactions 610
journal definition 34
configuring 177
created by other processes 179
creating 192
fields on data group definition 211
parameter tips 180
remote journal environment considerations
184
remote journal naming convention 185
remote journal naming convention, multimanagement 187
remote journaling example 185
journal entries 24
confirmed 60
filtering on database send 212
minimized data 318
OM journal entry 117
receive journal entry (RCVJRNE) 326
unconfirmed 60, 66
journal entry codes 617
for data area and data queues 615
supported by MIMIX user journal processing
615
journal image 214, 334
journal manager 32
journal receiver 24
change management 36, 181
delete management 36, 37, 182
prefix 181
RJ processing earlier receivers 38
size for advanced journaling 191
starting point 25
stranded on target 38
journal receiver management
interaction with other products 37
recommendations 36
journal sequence number, change during IPL
36
journal standby state 321
journaled data areas, data queues
planning for 78
journaled IFS objects
planning for 78
journaled object types
user exit program considerations 80
journaling 24
cannot end 306
data areas and data queues 68
ending for data areas and data queues 312
ending for IFS objects 309
ending for physical files 306
IFS objects 68
IFS objects and commitment control 69
implicitly started 302
requirements for starting 302
starting for data areas and data queues 311
starting for IFS objects 308
starting for physical files 305
starting, ending, and verifying 301
verifying 458
verifying for data areas and data queues 313
verifying for IFS objects 310
verifying for physical files 307
journaling environment
automatically creating 211
building 195
changing to *MAXOPT3 196
removing 206
source for values (J RNVAL) 195
journaling on target, RJ environment considerations 38
journaling status
data areas and data queues 311
files 305
IFS objects 308
K
keyed replication 334
comparing file data restriction 413
file entry option defaults 215
preventing before-image filtering 212
verifying file attributes 338
L
large object (LOB) support
user exit program 99
large objects (LOBs)
minimized journal entry data 318
legacy cooperative processing
configuring 99
limitations 102
requirements 102
libraries
objects in installation libraries 77
to not replicate 77
library list
adding QSOC to 148
library list, effect of independent ASP 553
library-based objects, configuring 91
limitations
database only data group 101
list detail report 493
list summary report 493
load leveling 54
loading
tracking entries 257
LOB replication 98
local-remote journal pair 59
log space 25
logical files 96, 97
long IFS path names 107
M
manage directory entries 162
management system 23
maximum size transmitted 160
MAXOPT3
receiver size option 183
MAXOPT3 value 191
menu
MIMIX Configuration 268
MIMIX Main 83
message handling 150
message log 490
message queues
associated with user profiles 95
journal-related threshold 183
message, step 520
messages 43
CMPDLOA 485
CMPFILA 483
CMPFILDTA 486
CMPIFSA 484
CMPOBJ A 484
CMPRCDCNT 485
comparison completion and escape 483
MIMIX AutoGuard 458
MIMIX Dynamic Apply
configuring 96, 99
recommended for files 96
requirements and limitations 101
MIMIX environment 28
MIMIX installation 22
MIMIX jobs, restart time for 285
MIMIX Model Switch Framework 523
MIMIX performance, improving 314
MIMIX rules 556
automatic audit recovery 559
command prompting 560
replacement variables 560
MIMIXOWN user profile 31, 280
MIMIXQGPL library 33
MIMIXSBS subsystem 33, 82
minimized journal entry data 318
LOBs 98
MMNFYNEWE monitor 114
monitor
new objects not configured to MIMIX 114
monitors
examples for creating 566
move/rename operations
system journal replication 117
user journal replication 118
multimanagement
journal definition naming 187
multi-threaded jobs 412
N
name pattern 378
name space 50
names, displaying long 107
naming conventions
data group definitions 209
journal definitions 180, 185, 187
multi-part 26
transfer definitions 159
transfer definitions, contextual (*ANY) 165
transfer definitions, multiple network systems
155
network inactivity
comparing file data 416
network systems 23
multiple 155
new objects
automatically journal 213
automatically replicate 114
files 114
files processed by legacy cooperative processing 115
files processed with MIMIX Dynamic Apply
114
IFS object journal at create requirements 302
IFS objects, data areas, data queues 115
journal at create selection criteria 303
notification of objects not in configuration 114
notification retention 151
notifications
user-defined 563
user-generated 557
O
object
journal entry codes 617
object apply process
defaults 218
description 51
threshold 218
object attributes, comparing 394
object auditing
used for replication 302
object auditing level, i5/OS
manually set for a data group 270
set by MIMIX 55, 270
object auditing value
data areas, data queues 103
DLOs 111
IFS objects 108
library-based objects 89
omit T-ZC entry considerations 363
object entry, data group
creating 242
object locking retry interval 213
object processing
data areas, data queues 103
defaults 217
DLOs 111
high volume objects 329
IFS objects 106
retry interval 213
spooled files 93
object retrieval delay
considerations 366
examples 366
selecting 366
object retrieve process 53
defaults 218
description 50
threshold 218
with high volume objects 329
object selection 372
commands which use 372
examples, order precedence 381
examples, process 380
examples, subtree 383
name pattern 378
order precedence 374
parameter 374
process 372
subtree 377
object selector elements 374
by function 375
object selectors 374
object send process 51
description 50
threshold 217
object types supported 87, 533
objects
new 302
Omit content (OMTDTA) parameter 363
and comparison commands 364
and cooperative processing 364
open commit cycles
audit results 574, 576
OptiConnect, configuring 148
outfiles 620
MCAG 622
MCDTACRGE 625
MCNODE 628
MXAUDHST 646
MXAUDOBJ 648
MXCDGFE 630
MXCMPDLOA 632
MXCMPFILA 634
MXCMPFILD 636
MXCMPFILR 639
MXCMPIFSA 642
MXCMPOBJA 644
MXCMPRCDC 640
MXDGACT 651
MXDGACTE 653
MXDGDAE 660
MXDGDFN 661
MXDGDLOE 669
MXDGFE 671
MXDGIFSE 675
MXDGIFSTE 724
MXDGOBJE 701
MXDGOBJTE 726
MXDGSTS 677
MXDGTSP 704
MXJRNDFN 707
MXSYSDFN 714
MXTFRDFN 718
MZPRCDFN 720
MZPRCE 721
user profile password 608
user profile status 605
WRKRJLNK 711
outfiles, supporting information
record format 620
work with panels 621
output
batch 495
considerations 491
display 492
expand support 494
file 494
parameter 491
print 492
output file
querying content, examples of 699
output file fields
Difference Indicator 574, 577
System 1 Indicator field 579
System 2 Indicator field 579
output queues 151
overview
MIMIX operations 40
remote journal support 57
starting and ending replication 40
support for resolving problems 41
support for switching 23, 43
working with messages 43
P
parallel access path maintenance 315
parallel processing 412
path names, IFS 107
performance
improved record count compare 330
policy, CMPRCDCNT commit threshold 330
polling interval 213
port alias 144
complex example 145
creating 146
simple example 144
primary node
configure for application group 296
print output 492
printing
controlling characteristics of 151
data group entries 265
definitions 232
private authorities, *MSGQ replication of 95
problems, journaling
data areas and data queues 311
files 305
IFS objects 308
problems, resolving
audit results 569, 572
procedure
begin at step 298, 509
displaying steps 515
procedures 506, 512
adding a step 516
components 506
creating 514
customizing user application steps 511
displaying available 512
history 510
invoking 509
job processing 507
last started run 515
programming support 519, 522
removing a step 517
status 510
step attributes 508
step error processing 508
switch customizing 510
types of 507
process
container send and receive 53
database apply 72
database reader 62
database receive 72
database send 72
names 46
object apply 53
object retrieve 53
object send 51
process, object selection 372
processing defaults
container send 218
database apply 216
file entry options 214
object apply 218
object retrieve 218
user journal entry 212
production system 22
programs, step 517
publications, IBM 18
Q
QAUDCTL system value 50
QAUDLVL system value 50, 94
QDFTJRN data area 213
restrictions 303
role in processing new objects 303
QRETSVRSEC system value 281
QSOC
library 148
subsystem 278
R
RCVJRNE (Receive Journal Entry) 326
configuring values 327
determining whether to change the value of 327
understanding its values 326
RDB 161
directory entries 161
RDB directory entry 169
reader wait time 210
receiver library, changing for RJ target journal 200
receivers
change management 181
delete management 182
recommendations
audit automatic recovery 559
auditing 558
audits and rules 559
multimanagement journal definitions 187
relational database (RDB) 161
entries 161, 167
remote journal
i5/OS function 24, 57
i5/OS function, asynchronous delivery 61
i5/OS function, synchronous delivery 59
MIMIX support 57
relational database 161
remote journal environment
changing 200
contextual transfer definitions 164
receiver change management 36
receiver delete management 37
restrictions 58
RJ link 62
security implications 280
switch processing changes 43
remote journal link 34, 62. See also RJ link
remote journaling
data group definition 211
repairing
file data 430
files in *HLDERR 412
files on hold 433
replacement variables 560
replication
advanced topic parameters 212
by object type 87
configuring advanced techniques 332
constraint-induced modifications 349
data area 73
defaults for object types 87
direction of 22
ending data group 40
ending MIMIX 40
independent ASP 546
maximum size threshold 160
positional vs. keyed 334
process, remote journaling environment 62
retrieving extended attributes 325
spooled files 93
SQL stored procedures 368
starting data group 40
starting MIMIX 40
supported paths 20
system journal 20
system journal process 50
unit of work for 23
user journal 20
user profiles 447
user-defined functions 368
what to not replicate 77
replication path 45
reports
detail 493
list detail 493
list summary 493
types for compare commands 390
requirements
audits 558
independent ASP 550
journal at create 302
journaling 302
keyed replication 334
legacy cooperative processing 102
MIMIX AutoGuard 558
MIMIX Dynamic Apply 101
objects and journal in same ASP 25
standby journaling 323
user journal replication of data areas and data queues 103
restarted, MIMIX jobs 285
restore operations, journaled *DTAARA, *DTAQ, IFS objects 121
restrictions
comparing file data 412
data areas and data queues 104
independent ASP 550
journal at create 303
journal receiver management 37
journal receiver size *MAXOPT3 183
journaled *DTAARA, *DTAQ objects 104
journaled IFS objects 109
legacy cooperative processing 102
LOBs 99
MIMIX Dynamic Apply 101
number of objects in journal 25
QDFTJRN data area 303
remote journaling 58
standby journaling 323
retrying, data group activity entries 42
RJ link 34
adding 202
changing 203
data group definition parameter 211
description 62
end options 63
identifying data groups that use 283
sharing among data groups 62
switching considerations 66
threshold 212
RJ link monitors
description 64
displaying status of 64
ending 64
not installed, status when 64
operation 64
rule groups
MIMIX 567
rules 556
considerations for using 559
creating user-defined 562
example of user-defined 562
messages from 560
MIMIX 556
notifications from 560
relationship with rules 556
replacement variables 560
requirements 558
run command considerations 559
types 556
user-defined 557
user-generated, creating monitors for 566
S
save-while-active 370
considerations 370
examples 371
options 371
wait time 370
search process, *ANY transfer definitions 163
security
considerations, CMPFILDTA command 413
general information 75
remote journaling implications 280
security audit journal 50
sending
DLOs 479
IFS objects 478
library-based objects 476
sequence number
maximum size option 183
sequence number size option, *MAXOPT3 196
serialization
database files and journaled objects 78
object changes with database 68
servers
starting DDM 279
starting TCP 170
short transfer definition name 159
source physical files 96, 97
source system 22
spooled files 93
compare commands 390
keeping deleted 94
options 94
retaining on target system 217
SQL stored procedures 368
replication requirements 368
SQL table identity columns 350
alternatives to SETIDCOLA 352
check for replication of 355
problem 350
SETIDCOLA command details 353
SETIDCOLA command examples 354
SETIDCOLA command limitations 351
SETIDCOLA command usage notes 354
setting attribute 355
when to use SETIDCOLA 351
standby journaling
IBM i5/OS option 42 321
journal caching 322
journal standby state 321
MIMIX processing with 322
overview 321
requirements 323
restrictions 323
starting
data groups initially 282
procedure at step 298, 509
procedures 509
system and journal managers 269
TCP server 170
TCP server automatically 171
starting journaling
data areas and data queues 311
file entry 305
files 305
IFS objects 308
IFS tracking entry 308
object tracking entry 311
startup programs
changes for remote journaling 278
MIMIX subsystem 82
QSOC subsystem 278
status
journaling data areas and data queues 311
journaling files 305
journaling IFS objects 308
journaling tracking entries 308, 311
procedures and steps 510
status, values affecting updates to 213
step
begin procedure at 298, 509
step messages 520
adding 521
list available 521
removing 521
step program
changing 518
creating a custom program 518
custom, for switching 510
ENDUSRAPP 511
format STEP0100 519
STRUSRAPP 511
step programs 517
display available 518
steps 515
adding to procedure 516
changing attributes 516
enabling and disabling 517
remove from procedure 517
runtime attributes 508
storage, data libraries 151
stranded journal on target, journal entries 38
subsystem
MIMIXSBS, starting 82
QSOC 278
subtree 377
IFS objects 378
switch procedure customization 510
switch procedure error messages 510
switching
allowing 210
change audit level before 559
data group 23
enabling journaling on target system 210
example RJ journal definitions for 185
independent ASP restriction 551
MIMIX Model Switch Framework with RJ link 66
preventing identity column problems 350
remote journaling changes to 43
removing stranded journal receivers 38
RJ link considerations 66
synchronization check, automatic 212
synchronizing 443
activity entries overview 450
commands for 445
considerations 445
data group activity entries 473
database files 460
database files overview 451
DLOs 470
DLOs in a data group 470
DLOs without a data group 471
establish a start point 454
file entry overview 451
files with triggers 451
IFS objects 466
IFS objects by path name only 467
IFS objects in a data group 466
IFS objects without a data group 467
IFS tracking entries 475
including logical files 452
independent ASP, data in an 448
initial 456
initial configuration 454
initial configuration MQ environment 454
limit maximum size 445
LOB data 447
object tracking entries 475
object, IFS, DLO overview 449
objects 462
objects in a data group 462
objects without a data group 463
related file 452
resources for 455
status changes caused by 447
tracking entries 453
user profiles 445, 447
synchronous delivery 59
unconfirmed entries 60
SYSBAS 546, 548
system ASP 547
system definition 34, 149
changing 154
creating 153
parameter tips 150
system journal 50
system journal replication 20
advanced techniques 332
journaling requirements 302
omitting content 362
system library list 148, 553
system manager 31
system user profiles 77
system value
QAUDCTL 50
QAUDLVL 50, 94
QRETSVRSEC 281
QSYSLIBL 148
system, roles 22
T
target journal state 181
target system 22
TCP server, autostart job entry for 161
TCP/IP
adding to startup program 278
configuring native 143
creating port aliases for 144
temporary files to not replicate 77
thread groups 421
threshold, backlog
adjusting 225
container send 218
database apply 216
database reader/send 216
object apply 218
object retrieve 218
object send 217
remote journal link 212
threshold, CMPRCDCNT commit 330
timestamps, automatic 213
tracking entries
loading 257
loading for data areas, data queues 258
loading for IFS objects 257
purpose 70
tracking entry
file identifiers (FIDs) 284
transfer definition 34, 157, 421
changing 167
contextual system support (*ANY) 27, 163
fields in data group definition 210
fields in system definition 150
multiple network system environment 155
other uses 157
parameter tips 159
short name 159
transfer protocols
OptiConnect parameters 160
SNA parameters 160
TCP parameters 159
trigger programs
defined 346
synchronizing files 347
triggers
avoiding problems 415
comparing file data 414
disabling during synchronization 451
read 414
update, insert, and delete 414
T-ZC journal entries
access types 362
configuring to omit 363
omitting 362
U
unconfirmed journal entries 60, 66
unique key
comparing file data restriction 413
file entry options for replicating 215
replication of 334
user ASP 547
user exit points 526
user exit program
data areas and data queues 80
IFS objects 80
large objects (LOBs) 99
user exit, generic 523
user journal replication 20
advanced techniques 332
journaling requirements 302
requirements for data areas and data queues 103
supported journal entries for data areas, data queues 615
tracking entry 70
user profile
MIMIXOWN 280
password 608
status 605
user profiles
default 151
do not replicate MIMIX supplied 77
do not replicate system supplied 77
MIMIX 31
replication of 95
specifying status 217
synchronizing 445
system distribution directory entries 447
user-defined functions 368
V
verifying
communications link 175, 176
initial synchronization 458
journaling, IFS tracking entries 310
journaling, object tracking entries 313
journaling, physical files 307
key attributes 338
send and receive processes automatically 213
W
wait time
comparing file data 421
reader 210
WRKDG SELECT statement 699