Disclaimer
1.1 AVEVA does not warrant that the use of the AVEVA software will be uninterrupted, error-free or free from
viruses.
1.2 AVEVA shall not be liable for: loss of profits; loss of business; depletion of goodwill and/or similar losses; loss of
anticipated savings; loss of goods; loss of contract; loss of use; loss or corruption of data or information; any
special, indirect, consequential or pure economic loss, costs, damages, charges or expenses which may be
suffered by the user, including any loss suffered by the user resulting from the inaccuracy or invalidity of any data
created by the AVEVA software, irrespective of whether such losses are suffered directly or indirectly, or arise in
contract, tort (including negligence) or otherwise.
1.3 AVEVA's total liability in contract, tort (including negligence), or otherwise, arising in connection with the
performance of the AVEVA software shall be limited to 100% of the licence fees paid in the year in which the user's
claim is brought.
1.4 Clauses 1.1 to 1.3 shall apply to the fullest extent permissible at law.
1.5 In the event of any conflict between the above clauses and the analogous clauses in the software licence under
which the AVEVA software was purchased, the clauses in the software licence shall take precedence.
Copyright
Copyright and all other intellectual property rights in this manual and the associated software, and every part of it
(including source code, object code, any data contained in it, the manual and any other documentation supplied
with it) belongs to, or is validly licensed by, AVEVA Solutions Limited or its subsidiaries.
All rights are reserved to AVEVA Solutions Limited and its subsidiaries. The information contained in this document
is commercially sensitive, and shall not be copied, reproduced, stored in a retrieval system, or transmitted without
the prior written permission of AVEVA Solutions Limited. Where such permission is granted, it expressly requires
that this copyright notice, and the above disclaimer, is prominently displayed at the beginning of every copy that is
made.
The manual and associated documentation may not be adapted, reproduced, or copied, in any material or
electronic form, without the prior written permission of AVEVA Solutions Limited. The user may not reverse
engineer, decompile, copy, or adapt the software. Neither the whole, nor part of the software described in this
publication may be incorporated into any third-party software, product, machine, or system without the prior written
permission of AVEVA Solutions Limited, save as permitted by law. Any such unauthorised action is strictly
prohibited, and may give rise to civil liabilities and criminal prosecution.
The AVEVA software described in this guide is to be installed and operated strictly in accordance with the terms
and conditions of the respective software licences, and in accordance with the relevant User Documentation.
Unauthorised or unlicensed use of the software is strictly prohibited.
Copyright 1974 to current year. AVEVA Solutions Limited and its subsidiaries. All rights reserved. AVEVA shall not
be liable for any breach or infringement of a third party's intellectual property rights where such breach results from
a user's modification of the AVEVA software or associated documentation.
AVEVA Solutions Limited, High Cross, Madingley Road, Cambridge, CB3 0HB, United Kingdom.
Trademark
AVEVA and Tribon are registered trademarks of AVEVA Solutions Limited or its subsidiaries. Unauthorised use of
the AVEVA or Tribon trademarks is strictly forbidden.
AVEVA product/software names are trademarks or registered trademarks of AVEVA Solutions Limited or its
subsidiaries, registered in the UK, Europe and other countries (worldwide).
The copyright, trademark rights, or other intellectual property rights in any other product or software, its name or
logo belongs to its respective owner.
Running Global Projects
1 Introduction
This document proposes a set of guidelines for the effective use of the AVEVA Global
product. The guidelines result from current working experience and may be amended in the
light of future experience. Global manages a project distributed over several different
geographical locations connected by a Wide Area Network (for example the Internet) and so
presents special situations for the administrator and engineering user, which the guidelines
address.
AVEVA Global can be used to enhance projects created in either the AVEVA Plant or
AVEVA Marine group of products - henceforth known as the “base product” in this
document.
In standard projects, commands are processed one at a time so that the next command
cannot begin until the previous one has finished. In principle, the state of the system is
therefore always known. In Global, remote commands are processed in parallel and so the
next command may be initiated before the previous one has finished. This mode of
operation is called non-blocking and its advantage in Global is to prevent a slow long-
transaction command from blocking the user. Its disadvantage is that the user needs to work
in a new way to exploit this parallel nature of Global.
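The contrast between blocking and non-blocking operation can be pictured with ordinary shell jobs. This is only an analogy, not Global's actual dispatch mechanism:

```shell
# Blocking: the second command cannot start until the first has finished.
run_blocking() { echo "command 1"; echo "command 2"; }
# Non-blocking: the slow command runs in the background (&), so the next
# command starts immediately instead of waiting behind it.
run_nonblocking() {
  (sleep 1; echo "slow remote command done") &
  echo "next command starts immediately"
  wait
}
run_blocking
run_nonblocking
```

In the non-blocking case the user carries on working while the slow transaction completes in the background, which is exactly the behaviour that requires the new way of working described above.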
If a remote command traversing the Global network becomes held up at a particular location
(for example due to a comms line fault) then, for most commands, the command is placed in
a transaction database at that location for later processing. A small number of commands,
known as ‘kernel’ commands, bypass the transaction database and are stored in a pending
file for later processing. The use of the transaction database and the pending file means
that commands are guaranteed to complete, but some commands may not succeed. Some
may roll back, while others may just fail.
For further information about the transaction database, see Transaction Audit Trail, and
Transaction Database Management.
3 Global Daemon
The Global daemon (sometimes referred to as the ADMIN daemon) is supplied with the
Global product, in the default install folder.
During installation the user can choose between an RPC or WCF version of Global. RPC is
part of the standard Windows software, and no additional software has to be installed. The
WCF version has some dependencies and these are described in the Global WCF
Configuration Guide. The information contained in this guide applies equally to both RPC
and WCF versions of the Global daemon.
Installing the Global daemon is described in the Global Installation Guide, configuring and
starting the daemon is described in the Global User Guide.
There must be one Global daemon running for each Project at a Location.
Note: The user must plan which protocol to use before implementing a Global project,
including firewall and security considerations.
Important: An RPC Global daemon cannot detect whether a WCF Global daemon is
already running for that project.
that the buffer size should be at least this value in projects where distributed Extracts are
being used.
• The Dabacon buffer size can be changed by using the MODULE command. See the
Administrator Command Reference Manual for details.
4 Daemon Diagnostics
4.1 Tracing
Tracing can be switched on when the daemon is started. If running the Global daemon as a
service, add a line to the startup batch file singleds.bat to set the environment variable
DEBUG_ADMIND as follows:
DEBUG_ADMIND=1023
If the Global daemon is not being run as a service, set DEBUG_ADMIND from the
command line.
The value of the DEBUG_ADMIND variable determines the type of activities that are traced:
0 = Not used
1 = Not used
2 = Trace
8 = Thread Library
16 = Systems DB Access
32 = Dabacon Thread
64 = Event Loop thread
These values are bit settings, so to trace a combination of activities, add the above values
together. For example, to trace Systems DB access and the Event Loop thread only, set
DEBUG_ADMIND as follows:
DEBUG_ADMIND=80
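Because the values are bit flags, a combined setting is simply the sum of the individual values. A minimal sketch, assuming (per the worked example) that the Event Loop thread corresponds to bit value 64:

```shell
# DEBUG_ADMIND is a bit mask: add the values of the activities to trace.
SYSTEMS_DB=16   # Systems DB Access
EVENT_LOOP=64   # Event Loop thread (assumed from the 16 + 64 = 80 example)
DEBUG_ADMIND=$((SYSTEMS_DB + EVENT_LOOP))
echo "DEBUG_ADMIND=$DEBUG_ADMIND"
```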
To enable tracing for all activities, set the DEBUG_ADMIND value to 3071. A useful level of
tracing for tracking commands is 896.
Full tracing can be verbose and fill disk space rapidly; the recommended value of 896
gives the administrator an idea of the current number of commands running through the
system.
This may help when bringing down a daemon at a particular location. Further tracing may be
required when investigating a particular problem.
4.2 Logging
It is beneficial to activate the daemon log for troubleshooting, and to help the System
Administrator see how the Global daemons are functioning. The diagnostics are
activated by configuring the Global ADMIN comms log.
The Global ADMIN comms log is activated from Daemon>Daemon Settings. This will
display the Local Daemon Settings window. In the appropriate text boxes, enter the
Diagnostic Logfile name, the Diagnostic Level (see below), and finally Enable the
Diagnostic Logging using the drop-down list.
Note: If using an environment variable in the log file path, it must be defined in the daemon
script or in the window from which the daemon was started.
0 = None
1 = Received summary
2 = Received detail
4 = Send summary
8 = Send detail
The log files can be sent to the administering location at regular intervals.
The log file will grow over time. To keep the log record, move the log file to another
directory; a new file will then be started.
The daemon checks for the log file location every 15 minutes. It will keep writing to the
moved log file until it checks the log file location and finds it has moved, and then a new log
file will be generated.
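A minimal rotation sketch, assuming a log file named admin.log (the actual name is whatever was entered as the Diagnostic Logfile):

```shell
# Move the live log aside; the daemon keeps writing to the moved file
# until its next 15-minute check, then starts a fresh file at the old path.
LOG=admin.log
: > "$LOG"            # stand-in for the live daemon log
mkdir -p archive
mv "$LOG" "archive/$LOG.old"
```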
Note: Logging does not capture the same data as tracing; for full debugging purposes the
trace facility provides much more comprehensive internal diagnostics.
5 Database Allocation
Note: Once all allocations have been committed, it is worth checking that all commands are
complete, whether the command has been executed through the GUI, or as a
manual command. This is described in the next section.
A Get Work must be done before listing the DBALL (continue doing a Get Work to see
when the databases have been allocated). Allocation is successful when the DBALL list
contains all of the allocated databases.
getwork
Overwriting of locked databases may be enabled by using the ‘MODIFY’ dialogue for the
location on the Admin Elements window to enable Overwriting, or by setting LCPOVW
TRUE for the appropriate LOC element on the command-line.
See also Database File Locks.
6 Merging Databases
When setting up a project in a Global environment, the user is likely to create many
sessions in the Global database. This is because when ADMIN issues a Daemon
command, it first does a SAVEWORK to give the Daemon an up-to-date view of the Global
database. The Daemon also may add sessions to the Global database.
We recommend merging changes for the Global database, and possibly the system
database, after setting up a Global project. This should also be done after making
significant changes to the project setup.
• Use CHANGE PRIMARY to return the child extracts back to their original primary
location.
• Optionally, the databases could be copied (by ftp or similar) to all secondary locations
manually after the MERGE (and before the second set of CHANGE PRIMARY
commands). This avoids the need for the next Update to copy the entire file.
Normally merging would be carried out on the entire extract hierarchy at the project
Hub. However if an extract database owns working extracts, it must be merged at its
original primary location, since the working extract files only exist at that location.
The Global Daemon stores most of the commands that it is asked to perform in its
transaction database. Kernel commands (high level control commands) are stored in the
pending file until complete.
The transaction database can be navigated using the command line in ADMIN using
standard navigation commands. The information in the database will give the system
administrator more information about the progress of commands, and details of why
commands have failed. Much of this information is available through the user interface but
this section is included to instruct the system administrator on how to interpret the
transaction database and the audit trail information stored there.
Each Location in the Global Project has a Global Daemon (also known as ADMIN Daemon)
running. The Daemons at each location communicate and co-operate with each other to
perform actions that a user at a particular site wishes to effect: for example to allocate
databases (from ADMIN), or to claim elements (say from MODEL).
progressing these commands. Only unfinished commands will be read. All others will be
ignored and not validated for errors.
If there are any errors found in reading the database, the daemon will not start. It will then be
necessary to provide a (probably) empty database so that the daemon will start from fresh
and not progress any previously running commands.
failure can terminate the TRINCO. Its TRPASS will be set to FALSE, its state will be
“Complete” or a later state, and it may own a TRMLST/TRMESS and perhaps a TRFAIL but
no TROPERs or TROUCOs. Input commands can be given a delayed start time (EXTIME)
after which operations will be generated. It will wait in the “Waiting” state until this time has
passed. This stay of execution will persist until EXTIME has expired, even if this is a longer
period than the Time out.
The TRINCO stays in ‘Ready’ state for as long as all its operation and output commands
take to complete. Once the TRINCO has been set to ready the command cannot time out
until all operations have also timed out.
When all member operations and output commands have completed INCSTA is set to
“Complete”. All failures and successes generated by them are collected together and
handed on to the sending TROUCO (which stores them). The success state of the
command (TRPASS) is set to true if all operations have succeeded. INCSTA is now at
“Replied”.
Once a reply acknowledgement has been received back from the previous location, INCSTA
is set to "Processed" and no more actions will take place.
There are other terminating conditions of a TRINCO; “Timed Out” means that the command
did not manage to start before either its end time was reached, or the number of retries
allowed was exceeded. It will not own any TROUCOs or TROPERs.
The state is set to “Cancelled” if the command is cancelled before any significant action took
place. Owned TROUCOs and TROPERs may be set to cancelled if they have not yet
started work: subsequent operations that depend on them will be set to “Redundant”.
has been created to store the command. This is stored in the TROUCO’s CMREF attribute.
For remote locations this will usually be an unknown reference since the specific transaction
database is not visible. It can be used to track the command down the chain of locations if
the administrator can see all the databases.
When a reply is received, OUTSTA becomes "Replied". Any reply data is stored under the
TRFLST and TRSLST elements and the TRPASS attribute, and OUTSTA goes to
"Processed".
TROUCOs can terminate by timing out if they fail to send in the lifetime prescribed ("Timed
Out"). They may never be sent if dependencies are not met, in which case they terminate as
"Redundant".
In this case, all successful database updates report ‘no data to send’ since the database
was up to date. This is reflected in the summary, which reports the number of successful
Copies and Updates. Note that the success for the Global db is also reported as database
=0/0.
A scheduled update normally only sends the latest sessions for a database - this is an
Update. However, if the database has been merged or had another non-additive change
(reconfigure, backtrack), then the entire database file must be copied. Database copies are
always executed at the destination (the location to which the file must be copied).
The file is copied from the remote location to a temporary file with the suffix .admnew and
then committed. The database copy cannot be committed in the following circumstances:
• There are users in the database (recorded in the Comms db)
• There are dead users (the file is locked) and Overwriting is disabled (see below)
If the commit fails, the .admnew file will be retained. The next copy attempt will test this
file against the remote original to see whether the remote copy stage must be repeated.
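The copy-then-commit behaviour amounts to an atomic file replacement; a sketch with illustrative file names:

```shell
# Remote copy stage: incoming data lands in a temporary .admnew file,
# so the live database file is untouched until the commit succeeds.
DB=abcdesi
echo "new sessions" > "$DB.admnew"
# Commit stage: replace the live file (refused if users or locks remain).
mv "$DB.admnew" "$DB"
```

Keeping the .admnew file on a failed commit is what allows a later retry to skip the remote copy stage.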
In the case of updates, the number of sessions and pages sent is also reported in the
success for each database as well as cumulated in the update summary. In the case of
copies, the number of pages sent will only be reported if the copy is executed locally. For
DRAW databases, the number of picture-files sent is also reported.
The update summary also reports on the number of other data files transferred (see also
success for ‘Exchange of other data’). Note that this will always report a success even if
there is nothing to transfer or ‘Other data transfer’ is not set up.
In this case, the databases could not be propagated, since the secondary database had a
higher compaction number than the primary database. This may happen when a remote
merge is executed without stopping scheduled updates. Normally it will be necessary to
recover the database to resolve this error.
Prevention of Reverse propagation may also be reported in the following situation - a
satellite has executed a direct update (UPDATE DIRECT from the command-line) with a
non-neighbour satellite. The next scheduled update with the intermediate location will report
‘Prevented reverse propagation’. In this case, scheduled updates will eventually resolve the
situation.
The following table summarises Failure messages that can be generated for Scheduled
updates. This does not include all possible failures that may be generated from failed file
copies.
In this example, the database still had readers, so the copy could not be completed. An
additional failure reports that 18 pages have been copied from the remote location. The next
retry validates the .admnew file, but still cannot commit it due to readers. A further retry
validates the .admnew file again and attempts to commit it. In this case there are no
readers, but the file is locked.
In this case, the SYNCHRONISE command eventually succeeded, since Overwriting was
enabled. Note that the ‘Successful file copy’ success reports that nothing has been copied,
since the remote copy stage was executed successfully on an earlier try, when the copy
failed.
Detailed failures for file copies can only be reported at the destination. During a scheduled
update, the success of a copy is verified by checking that the compaction number has
changed. If the copy was executed at the location which executes the scheduled update,
then additional failures may show more detail. (Note this is the partner location for a
scheduled update, not the originator!)
Refer to Extract Flush Commands Failing and Reasons Claims and Flushes can Fail for
non-Admin command failures.
8 Pending File
On a Global network, most remote commands that are stalled for any reason at a location
are placed in the transaction database at that location, for later processing, (see next
chapter).
A small number of commands that cannot be carried out at once, known as ‘kernel’
commands, are instead stored in a location’s pending file for later processing. There are
various situations where kernel commands may be added to a pending file. For example:
• Too many commands have been issued in quick succession.
• A communication link is down.
The kernel commands are:
• ISOLATION TRUE/FALSE
• LOCK/UNLOCK
• PREVOWNER HUB
Also, for a Satellite’s transaction database:
• ALLOCATE (PRIMARY)
• CHANGE PRIMARY
All other commands use the transaction database to achieve a similar effect (see next
chapter).
Once a pending file has been created at a location, it will continue to exist. When the kernel
commands stored in it have been executed, they will be deleted from the file. The user can
tell if there are any outstanding commands by the size of the file: if it is empty, it will be zero
size. The contents of the pending file can be read by using a utility available from AVEVA.
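The zero-size check can be scripted; the path here is a stand-in for the project's pending file:

```shell
# A zero-size pending file means no outstanding kernel commands.
pending_status() {
  if [ -s "$1" ]; then echo "outstanding kernel commands"
  else echo "no outstanding commands"; fi
}
: > pending                 # stand-in for <project dir>\pending, empty
pending_status pending
```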
The pending file is named pending, and it will be saved in the project directory (for example,
abc000). It can be read using the glbpend.exe utility provided in the Global install folder.
For example, if the pending file is C:\AVEVA\projects\abc000\pending, the command to read
it is:
<install path>\glbpend.exe C:\AVEVA\projects\abc000\pending
The pending commands can be read from the output.
Now, navigate to the location of the new Hub and query its Locrf; for example:
/Tokyo
q locrf
If the Locrf of Tokyo is set to Nulref, then the hub change has been successful. The new
hub, Tokyo, has no parent location.
10.1 Synchronisation
Synchronisation can be carried out at both Hub and Satellite locations. This process can be
used to synchronise databases at one location with the corresponding databases at a
different location. This is a one-way process: project data is only received.
To learn more about Reverse Propagation errors, see Recovery from Reverse Propagation
Errors.
case of a SCHE database) at both locations to work out any changes. New and missing files
will be copied from the Primary Location to all Secondary Locations.
Note: The picture file will not be deleted if it is also used by another extract of the database.
The picture-file will ONLY be deleted if it is owned by the same extract as the current extract.
This may result in picture-files created by other extracts being retained.
This avoids the situation where a picture-file which is still in use in another extract is
deleted. The command PURGE DB <dbname> may be used to purge unused picture-files.
If the Database is found to be the same at both locations, then it is considered not to require
Propagation, and the Picture and Neutral Format File directories are not compared.
Therefore, if there is a genuine mismatch in the file directories, this will not be resolved.
By default, Picture and Neutral Format File propagation is disabled (non-propagating). It
can be enabled by ticking the check box on the Modify/Create dB window, as below. This
allows Picture and Neutral Format files to be propagated to any other location.
If this is done it is possible to regenerate all Picture and Neutral Format Files at the satellite,
even though the Database is secondary.
For Picture and Neutral Format Files to be successfully propagated the environment
variables %ABCPIC% and %ABCDIA% must be set in the Daemon kick-off script.
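For example, the kick-off script might contain lines like the following. The paths are assumptions; on Windows these would be `set` lines in singleds.bat:

```shell
# Illustrative only: define the picture and diagram directories that the
# daemon needs for Picture and Neutral Format File propagation.
export ABCPIC=/projects/abc000/pictures
export ABCDIA=/projects/abc000/diagrams
echo "ABCPIC=$ABCPIC"
```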
Note: Global limits the number of picture files that can be propagated per database.
Because of this, the user should not create more than 500 pictures or Neutral Format
Diagrams per DRAW database. This generally means limiting the number of SHEE
and OVER elements; if necessary, create extra DRAW databases to allow more
pictures to be created.
Marine Drawings files are always propagated, even if Picture/Neutral Format File
Propagation is disabled.
For Marine there are drawings in the ASSI, ASSP, BACK, BTEM, CPAR, MARK, NPLD,
NSKE, PDB, PICT, PINJ, PLIS, PLJI, PPAR, PRSK, RECE, SETT, STD and WCOG
directories.
Note: Global does not create folders; these must already exist at the secondary location.
Linked document propagation is disabled by default for the project. To enable Link
propagation, the GLINKP attribute of the element GLOCWL /*GL must be set to TRUE:
Once enabled for the project, Link propagation is enabled for all Design databases, whether
or not they actually contain Linked documents. Additional operations are invoked to
propagate Linked documents. It may therefore be useful to disable Link propagation for
those Design databases which will never contain links. There is a check box on the Create
Database window for this:
Important: Linked documents will only be transferred when a database update has
transferred data. By default only missing documents will be copied and by
default documents will not be replaced if they already exist.
Propagated Link documents are not deleted at secondary locations even if they
are deleted at the primary location.
This command checks whether any Linked documents need transferring for a specified
Design database. It is possible to overwrite existing Linked documents at the secondary
location using the keyword FORCE:
SYNCHRONISE <dbname> LINKDOC/UMENTS FORCE
When a database is ALLOCATEd or RECOVERed, then any existing Linked documents will
be replaced.
These test timings were taken when propagating 11080 pages (22695936 bytes) of data
between two machines.
files in these directories from one Satellite to another during scheduled updates (or when
the UPDATE ALL command is used). Files can only be transferred between neighbouring
locations, and this method cannot be used to send files to/from off-line locations.
For example, myfile has been produced at Satellite AAA and is needed at neighbouring
location BBB. The user at AAA must make sure that myfile has been placed in directory
%EXP_BBB%. During the next scheduled update with BBB, this file will be sent to BBB, and
received in directory %IMPORT% at location BBB. A user at BBB can then use myfile. If
myfile is to be sent on to other locations, it will need to be copied into the export directories
at BBB for those locations.
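The flow above can be sketched as a staged copy; EXP_BBB and IMPORT stand in for the %EXP_BBB% and %IMPORT% directories:

```shell
# Staged other-data transfer between neighbouring locations AAA and BBB.
EXP_BBB=exp_bbb
IMPORT=import
mkdir -p "$EXP_BBB" "$IMPORT"
echo "payload" > "$EXP_BBB/myfile"     # user at AAA stages the file
cp "$EXP_BBB/myfile" "$IMPORT/"        # scheduled update carries it to BBB
```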
Offline locations: The TRANSFER command only copies databases and picture files to or
from the transfer directory, ready for onward manual transfer to the specified location.
Transfer of other data files must be done manually.
It is possible to assign a batch script to run both before and after the Update Event occurs.
This can be used to copy data into the EXPORT directories before the Update is executed,
and then copy it out of the IMPORT directory once the Update Event has completed. This
process will include the transfer of Other Data.
The batch scripts are assigned to an Update Event through the Create/Modify Update
window, see below.
Batch Scripts
The script can be any type of batch script, for instance Perl, and can be as complex
as required.
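A pre-update script might simply stage files into the export directory before the Update Event runs; directory and file names here are assumptions:

```shell
# Hypothetical pre-update script: copy outgoing data into the export
# directory so the scheduled update transfers it as Other Data.
EXPORT_DIR=exp_bbb
mkdir -p "$EXPORT_DIR" outgoing
echo "report data" > outgoing/report.txt
cp outgoing/report.txt "$EXPORT_DIR/"
```

A matching post-update script would do the reverse, copying received files out of the IMPORT directory.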
11 Deleting Databases
The procedure for deleting a database is summarised below. If the database owns extracts,
see Deleting a Database that owns Extracts.
Note: A dB does not need to be primary at the Hub, as long as it is not primary at the
location where it is being deallocated.
12 Database Recovery
If for any reason a database at a location is corrupt, it can be recovered by transferring the
database from a neighbouring location. It is important to remember that this could result in
loss of work. The main objective when a recovery is carried out is obviously to restore the
database(s) and minimise the work loss.
Global does not verify that the file from which the database is being recovered is a valid
database. It is the user's responsibility to make sure that this is the case. Remote DICE
checking may be used to verify the state of the database at the remote location from which
the database is to be recovered.
Note: To avoid data consistency errors, changes to the transaction database should not be
made while the daemon is running.
However, the REMOTE MERGE command cannot be used when the transaction database
is full, since this command cannot be recorded properly. In this case, it may be necessary to
merge it by reconfiguring. To manage the transaction dB efficiently, TRINCOs (and their child
elements) need to be deleted at regular intervals. Only completed transactions should be
deleted. It only makes sense to merge the transaction dB after TRINCOs have been
deleted; otherwise the dB will not be compacted.
Backing up projects regularly is good practice in any environment, including Global projects.
With a Global project, extra attention has to be given to any restoring process that is carried
out. The following guidelines are outlined in relation to backing up Global projects:
• Back up all files at all locations regularly.
• When restoring a project, be aware that the user may be able to restore project
databases by using Global’s Recover functionality. This may give the opportunity to
minimise work loss.
• Use the backups for a location only for that location. (In some cases the only option
may be to use backups from other locations. In this case, be aware of the implication it
could have on the amount of work lost.)
Remember, the Global database (for example abcglb) at the Hub is the ‘master’ Global
database. Back this up before carrying out any major Global administration work.
When using databases from backups, it is feasible for a secondary database to have newer
sessions than a primary database. If so, at the next update, changes may be posted back
from the secondary database to the primary database. If new sessions have been written at
the primary location, this could cause corruption. Therefore make sure that secondary
database backups do not have newer sessions than the primary database.
To resolve this, it may be necessary to RECOVER some databases from the primary
location after the restore.
An extract is created from an existing database. When an Extract is created, it will be empty,
with pointers back to the owning or master database. Thus all data visible in the master will
be visible in the extract. Extracts can only be created from Multiwrite databases, and all
extracts are themselves Multiwrite. The user can create Extract DBs from any type of
database that can be multiwrite, that is DESI, PADD, CATA and ISOD, and in the case of
Marine projects, MANU and SCHE.
• Extracts from foreign DBs cannot be created.
• Extracts from copy DBs cannot be created.
The user can work on an extract at the same time as another user is working on the master
or another extract. When a user works on the extract, elements are claimed to the extract in
a similar way to simple multiwrite databases, so no other User can work on them. When an
extract User does a SAVEWORK, the changed data will be saved to the Extract. The
unchanged data will still be read via pointers back to the master DB. When appropriate, the
changes made to the extract are written back to the master. Also, the extract can be
updated when required with changes made to the master.
The original database is known as the Master database. The Master database is the parent
of the first level of extracts. If a more complex hierarchy of extracts is created, the lower
level extracts will have parent extracts which are not the master.
The extracts immediately below an extract are known as extract children. The maximum
number of extract children is 408.
If a hierarchy of extracts is created, the parent of an extract, and its parents up to and
including the Master DB, are known collectively as the Extract Ancestors.
The following diagram illustrates an example of an extract family hierarchy:
In this example:
PIPES is the Master and the parent of PIPES_X1.
PIPES_X1 is a child of PIPES and the parent of PIPES_X10.
PIPES_X10 is a child of PIPES_X1.
Note: The children of PIPES are PIPES_X1 and PIPES_X2. PIPES and PIPES_X1 are the
ancestors of PIPES_X10.
Write access to extracts is controlled in the same way as any other database:
• The user must be a member of the Team owning the Extract. Extracts in the same
family can be owned by the same team or by different teams.
• The user must select an MDB containing the extract (or containing its parent, if the
extract is a working extract).
• Data Access Control can be applied.
• An extract database cannot be opened in a constructor module (such as MODEL) at a
satellite unless all its parent extracts are also allocated to that satellite.
Note: At this release, you can only create an extract at the bottom of an extract tree: you
cannot insert a new extract between existing generations. At the Hub, you can also
create a new master database above the original master.
Note that the ALLOCATE command allows child extracts to be allocated to a satellite
without their parents being allocated, but the user will not be able to open the extract until all
its ancestors have been allocated to the location. Also note that the ancestor extracts may
need to be synchronised if timed updates of extracts have not been implemented.
Extract creation is controlled by the NOEXTC attribute of a location. If this is TRUE, then
extract creation is disabled and extracts cannot be created by that location. However the
Hub or its administering location (if authorised) may create extracts.
The purpose of the NOEXTC attribute is to prevent a satellite from creating databases on
the fly without authorisation, and it applies to the administering location, not the
administered location. However, if the HUB is doing it, it is by definition authorised. Thus the
HUB is always able to create extracts.
Similarly, we could have a situation where one satellite AAA is administering another BBB.
Satellite AAA might have NOEXTC false, and BBB might have NOEXTC true. In this case,
AAA would be allowed to create extracts for itself and for satellite BBB.
But BBB would not be allowed to create any extracts itself. The NOEXTC attribute is set in
the Modify Location window.
A working extract inherits the write access of its parent. That is, if the parent is primary at
the location of the working extract, then it can be written to; otherwise the user will have
read access only.
Note: To query extract number ranges, navigate to the appropriate element and give the
commands:
Q EXTLO
Q EXTHI
When using the ADMIN menu bar, use the Location version of the Admin Elements window
to create or modify a Location. On the window, specify the range of numbers available for
working extracts at the location. Refer to Global User Guide for further information.
Elements created in the extract are allocated reference numbers from the local reference
block(s). If no reference block is allocated manually, the system will allocate reference
blocks as required. For a Global project, this may require daemon activity.
To avoid this, we recommend that the user assigns a block of reference numbers to the
extract when it is created, using the REFBLOCK n option. The block of reference numbers
will then be available locally. n should reflect the number of users writing to the extract; for
example, if you expect to have five users writing to the extract, set n to 5.
Note: There are 8191 reference blocks available for each extract hierarchy, so there is no
need to be conservative when allocating them.
Note: The databases shown are all part of the same extract family, and so they will all have
the same database number as part of their filenames, for example ttt0200_0001.
• Modify the ADMIN Module definition to give access to DICT databases using the EDIT
MODULE command in ADMIN as follows:
EDIT MOD ADMIN MODE DICT READ
• Set up an MDB containing the DICT database in which the UDAs are stored, and make
sure the user selects it on entering ADMIN. Users will also need to have read access to
the DICT database via their MDBs.
The following simple scenario illustrates how to use UDAs in Data Access Control
combined with extracts, to control workflow.
The Designer Role would give access to all Piping elements except those with the UDA
:ISSUED set to TRUE.
Note: In a Global project, we recommend that multiwrite databases should be created with
EXPLICIT claim mode, unless all the children are primary at the same location.
User claims can be explicitly released (unclaimed) by the user during a session, and
elements are always unclaimed when the user changes or exits from a module.
The commands for user claims are:
CLAIM . . .
UNCLAIM . . .
Extract Users can check daemon availability before claiming or flushing using the following
command line syntax:
Q COMMS (TO) <loc>
Q COMMS (TO) <loc> PATH
PING <loc>
Q ISOLAT AT <loc>
Q PROJ LOCK AT <loc>
These commands are now available in MODEL and other modules. This is particularly
useful for claiming and flushing, since those commands fail if the connection is down.
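For example, before flushing to a parent database that is primary at another location, the connection can be checked first. A sketch using the commands above (LOCB is a hypothetical location name):

```
Q COMMS TO LOCB       $* check the link to LOCB is up
PING LOCB             $* round-trip test of the connection
Q PROJ LOCK AT LOCB   $* check the project is not locked there
```

Only once these checks succeed should the claim or flush be attempted.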
|- FLUSH --------|
| |
|- FLUSHW -------|
| |
|- RELEASE ------|
| |
|- ISSUE --------| .-----<---.
| | / |
|- DROP ---------+-*- element -+- HIERARCHY -.
| | | |
| | ‘-------------|
| | |
|- FULLREFRESH --| |
| | |
|- REFRESH ------+--- DB dbname -------------+--->
|
‘- FLUSH RESET ------ DB dbname ----------------->‘
FLUSH Writes the changes back to the parent extract. The Extract claim
is maintained. The extract is refreshed with changes that have
been made to its owning database.
FLUSHW Writes the changes back to the parent extract. The Extract claim
is maintained. The extract is not refreshed.
FLUSH RESET Resets the database after a failed EXTRACT FLUSH command
(see the note below under Flushing Changes).
REFRESH Refreshes an extract with changes that have been made to its
parent extract.
FULLREFRESH Refreshes an extract and all its ancestors. A full refresh takes
place from the top of the database hierarchy downwards, ending
with a refresh of the extract itself. Each extract is refreshed with
changes that have been made to its parent extract.
ISSUE Writes the changes back to the owning extract, and releases the
extract claim.
RELEASE Releases the extract claim: this command can only be used to
release changes that have already been flushed.
DROP Drops changes that have not been flushed or issued. The user
claim must have been unclaimed before this command can be
given.
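Assuming the commands above, a typical extract session might look like the following sketch (the pipe name /100-B-1 is hypothetical):

```
SAVEWORK                  $* save changes; they become extract claims
EXTRACT FLUSH /100-B-1    $* write the pipe up to the parent, keeping the claim
$* ... further modifications and SAVEWORKs ...
EXTRACT ISSUE /100-B-1    $* write back and release the extract claim
```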
The HIERARCHY keyword must be the last on the command line. It will attempt to claim to
the extract all members of the elements listed in the command which are not already
claimed to the extract.
The elements required can be specified by selection criteria, using a PML expression. For
example:
EXTRACT CLAIM ALL PIPE WHERE (:OWNER EQ 'USERA') HIERARCHY
Note: Under normal operation, use of EXTRACT FLUSH RESET should be avoided. The
system will reset the DB automatically after a failed flush.
If the databases are set up with explicit claim, then the user will need to use the CLAIM
command before modifying the element.
USERA creates a Pipe and flushes the database back to the parent database, PIPE/PIPE.
The results of various Q CLAIMLIST commands by the three Users, together with the
extract control commands which they have to give to make the new data available, are
shown in the following diagram.
Note that:
Q CLAIMLIST EXTRACT
tells the user what can be flushed, and:
Q CLAIMLIST OTHERS
tells the user what cannot be claimed.
To query the extract claimlist for a named database (the current one or its parent):
Q CLAIMLIST EXTRACT DB dbname
When you create an element, it is only seen as a user claim, not an extract claim, until a
SAVEWORK. It will then be reported as an extract claim (as well as a user claim, if it has not
been unclaimed).
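This lifecycle can be illustrated with a sketch (the pipe name is hypothetical):

```
NEW PIPE /200-C-5     $* new element: reported as a user claim only
Q CLAIMLIST EXTRACT   $* the new pipe is not yet listed
SAVEWORK
Q CLAIMLIST EXTRACT   $* the pipe is now reported as an extract claim
```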
Note that a change in the claim status of an existing element will be shown by the
appropriate Q CLAIMLIST command as soon as the relevant updates take place, but a user
will have to GETWORK as usual to see the changes to the model data in MODEL.
We recommend that:
• Databases that are going to own extracts which are primary at other locations, should
be created with explicit claim mode.
• Before you make an extract claim, you should do an EXTRACT REFRESH (or an
EXTRACT FULLREFRESH, if necessary) and GETWORK.
• If you need to claim many elements to an extract, it improves performance if the
elements are claimed in a single command, for example, by using a collection:
EXTRACT CLAIM ALL FROM !COLL
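Putting these recommendations together, a hedged sketch (assuming !COLL has already been built to hold the elements to be claimed):

```
EXTRACT REFRESH               $* bring the extract's view of the parent up to date
GETWORK                       $* refresh the local session
EXTRACT CLAIM ALL FROM !COLL  $* claim the whole collection in one command
```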
The Global daemon will only be involved in the claiming process if the user is claiming an
element from a secondary database / extract to their current primary extract. In this
instance, the user will be warned that the element is now being claimed by the Global
daemon. The user will know when the claim is completed, by using GETWORK and
checking the claim list.
The Global daemon will only be involved in the flush process if the user is flushing changes
to a secondary database / extract from their current primary extract.
Note: If a flush fails, the database needs to be reset to allow subsequent Flushes and
Refreshes to work. This is normally done automatically as part of the Global Flush
command. In exceptional circumstances, the EXTRACT FLUSH RESET command
may be used to undo the failed flush. However this will not normally be necessary.
This situation can arise when more than one user is issuing the same database extract.
Flush and release commands might then be processed in the wrong order, causing a flush
to fail and preventing subsequent refreshes of the extract.
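In that exceptional case, the recovery is a sketch like the following (the database name is hypothetical):

```
EXTRACT FLUSH RESET DB PIPE/PIPE   $* undo the failed flush on the named DB
EXTRACT REFRESH DB PIPE/PIPE       $* refreshes of the extract now work again
```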
If changes made to the database at the primary location have not yet been propagated,
those changes will not yet be visible at the local satellite. Extracts below the database will
only see the latest version of the secondary database when they are refreshed. To see the
changes made to the primary database, wait for the next scheduled automatic update before
refreshing.
While a user is making changes only to the extract, the linked session number in the owner
stays the same. On refreshing, the local extract is linked to the most recent version of the
parent extract.
The new session number linked to in the owner depends on the number of flushes done by
other users. In the example the linked session number goes from 10 to 15, indicating that
five flushes have been made by other users in the meantime (assuming that no work is
being done directly on the owner).
Note: BACKTRACK is not allowed for extract databases. REVERT must be used instead.
• The database must not own any extracts, either working or standard ones.
Thus deleting a database that owns extracts (and may own working extracts) may
involve a number of CHANGE PRIMARY commands to get rid of any working
extracts at satellites where the database is secondary.
The procedure for deleting a database that owns extracts is summarised in the diagram
below.
[Flowchart summarising the procedure: check whether the DB is allocated to a location
and whether it is primary at a satellite; CHANGE PRIMARY and de-allocate as
necessary, then give DELETE DB dbname.]
Note: A DB does not need to be primary at the HUB, as long as it is not primary at the
location where it is being de-allocated.
Symptom: Unable to savework. Perhaps you have been Expunged
Cause: Daemon has been expunged. Modifications to database (other than updates) will fail.

Symptom: Previous flush could not be found
Cause: A flush may have overtaken another flush. In this case, the flush will stall for a retry.

Symptom: Previous flush failed
Cause: Subsequent flushes will fail until the failed flush has been reset.

Symptom: Unable to claim <item> because element is already claimed by <extract or user> from Extract <no>
Cause: Valid failure - another extract or user has it claimed.

Symptom: Unable to claim <item> from parent extract <no> because element is modified in a later session
Cause: EXTRACT REFRESH is required, to bring the child extract's view of the parent up to date.

Symptom: Nothing to claim locally - all claims failed in owning extract
Cause: Cannot claim to the child extract, because nothing could be claimed from its parent.

Symptom: You cannot claim <item> without doing an extract claim from the parent extract
Cause: The item has not been claimed into the extract before the User has claimed it. This is only applicable to Explicit DBs.

Symptom: Unable to claim <item> from parent extract <no> as element has been deleted in a later session
Cause: The item has been deleted in the parent, and the child extract has not been brought up to date yet.

Symptom: Element reference <item> is invalid or has been deleted
Cause: The reference number of <item> cannot be found in the database; it is an invalid reference number.

Symptom: Element <item> has been modified, so cannot be released. Savework must be done first
Cause: The item must be saved to the database before an extract operation can be undertaken on it.

Symptom: Element <item> has been deleted by another User
Cause: The item you are trying to Claim has been deleted by another user.

Symptom: Name clash on <item>. Please rename
Cause: The name of the item that has just been created already exists.

Symptom: Cannot flush/abandon <item> as old and new owners must both be in the list, or neither in the list
Cause: The parent of the owner has been changed. Both the old and the new owners need to be flushed/issued/abandoned at the same time, and the list currently only contains one or the other.

Symptom: Cannot flush/abandon <item> without its owner
Cause: The item is either new or has been moved to another item. Both need to be flushed/issued/abandoned at the same time.

Symptom: Cannot flush/abandon <item> without its members
Cause: The member list of the item has changed in some way. The item needs to be flushed/issued/abandoned with its members.

Symptom: Cannot abandon/release <item>. Element is claimed out by a user (maybe yourself) or to an extract
Cause: The item is claimed by a User (possibly the user doing the EXTRACT ABANDON/RELEASE) or to a child extract.

Symptom: Element <item> kerror <no>
Cause: Internal error. Please contact the AVEVA support desk for more information.

Symptom: Dabacon error <NUMBER> for DB <item>
Cause: Internal error. Please contact the AVEVA support desk for more information.
17 Off-line Locations
Normally there is a communications link between pairs of locations, and these locations are
referred to as on-line. (Their ICONN attribute is 1, and RHOST points to a valid computer
name.) However, Global can operate if there is no direct communications link between the
Hub and certain locations. These locations are referred to as off-line. (Their ICONN is 0,
and RHOST may be unset.)
A tape, CD or other medium is used to copy the databases from one location to the other.
It should be noted that:
• The TRANSFER command copies databases to or from the project directory to a
special transfer directory, ready for the physical transfer to another location. The
physical transfer must be made as well as using the TRANSFER command from
ADMIN.
• The existence of off-line locations limits the administration capabilities of a project.
• Off-line locations can only be children of the Hub. An on-line satellite cannot have off-
line children.
• Database transfer to and from the media used for communication with an off-line
location can only be made at the Hub and the off-line location.
• Commands such as ALLOCATE and CHANGE PRIMARY are not self-contained.
Working practices are required to ensure the correct transfer of data.
• Transfer of other data, such as ISODRAFT files, external PLOT files and MODEL
manager files, must be done manually to and from an off-line location.
• To change a satellite from on-line to off-line, shut down its daemon and change ICONN
to 0. Manually copy the Global database to the off-line location. The TRANSFER
command will then work.
• Picture files and DWG files are transferred to off-line locations and should be copied
on CD or through FTP when they reside in the location's Transfer area.
18 Firewall Configuration
This section describes the configuration needed to run the Global daemon across a firewall.
On Windows, a reboot of the system is required after registry modifications.
Once the RPC ports are defined, the firewall can be configured. As shown below, the
firewalls for both organisations are opened to allow only communications to and from each
other’s Global Servers on TCP ports 135 and 5000-5020.
These ports must be opened bi-directionally to allow Global to operate. It is possible to limit
access to these ports using the UUID for Global:
d2af263a-b21d-1001-8e31-0800690811cc
(this is not the same as the project UUID).
The following solution can be applied to any modern firewall with the functionality of packet
filtering.
The procedure for restricting the use of dynamic ports for RPC is through additions in the
Microsoft Windows registry.
Note: Incorrect modification of the registry can lead to serious problems. Always back up
the registry before making changes.
To change the registry, the user must use REGEDT32 and not REGEDIT, as the latter does
not allow modification of the string data type. If REGEDT32 is not used, the following
message will appear on daemon startup:
Can’t establish protocol sequences: Not enough resources
are available to complete this operation
The user must add a subkey and three values to the registry.
Under the following key, add a subkey called Internet:
HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc
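Based on the Microsoft article referenced below (Q154596), the subkey and three values typically take the following form; the port range shown matches the 5000-5020 range opened on the firewall above and should be adapted to your environment:

```
HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet
    Ports                  (REG_MULTI_SZ)  5000-5020
    PortsInternetAvailable (REG_SZ)        Y
    UseInternetPorts       (REG_SZ)        Y
```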
Note: The RPC configuration procedure described in this document can also be found in
Microsoft TechNet Knowledge base: Article number: Q154596. Note that Microsoft
recommend a minimum of 20 ports to be open for other services; for more
information on this please refer to the article which is available on the Internet at
http://www.microsoft.com/technet. The number of open ports suggested in the
example above is just that: a suggestion. However it is generally true that the more
Global projects you are using, the more ports you are going to require to be open.
19 Housekeeping
This Section gives general advice on the “Housekeeping” activity for Administrators running
projects. We use the term “Housekeeping” as a metaphor to compare the work you would
perform to create and maintain a house and its contents in a state of good repair, well
organised, tidy and clean with the similar goals for the data of an engineering project.
Similarly to a house, the larger the scale the more substantial a task this can be. However, if
you establish the basis and practices early, you can keep the task to a routine activity that
increases in efficiency with practice over the duration of the project.
This Section should be seen as supplementary material to the standard Administration
documentation and not a replacement.
As with all advice it is not mandatory and should be taken as points for consideration in
creating a stable Administration environment. Also, although efforts have been made to be
as comprehensive as possible it is not exhaustive and will be subject to modification and
addition as the base product and its use across wide industry sectors increases and
experience in good practice improves to match.
It is written for an audience who are assumed to have undertaken training in Administration
and have a thorough background in maintaining projects. Moreover, not everything
described here necessarily applies to all project set-ups under all circumstances.
However, IT managers may find it useful as background information in deciding how to
organise base product Administration.
19.1 Dice
This is the Data Integrity Checking tool supplied as part of the ADMIN module.
Its purpose is to provide a report on the base product Dabacon databases that informs the
administrator if there are any issues with the database that require extra attention. In
addition, you can also run it in a “patch” mode that will actually facilitate a repair on the
database.
It is recommended that a full Dice report is run as a matter of routine, daily, on all databases
in the project. This includes the full extract family, and secondary databases if Global is in
use.
Foreign projects, such as a centralised Catalogue, should also be Dice checked, although
the checks need not be so frequent if the projects are not being updated on a daily basis.
Often this is done as a scheduled batch routine during non-working periods.
However, if the project is in a period of intense activity and the window for running bulk
processes for reports, drawings, material take-off is small, it can be run with users and batch
processes continuing to run on the model.
Having produced the report it is imperative that it is closely scanned for issues of concern
and then action taken to address them. Ideally, the Administrator should take action to
remove all errors and warnings; however some warnings can be deemed to be acceptable
and of no risk to the healthy running of the project e.g. Element =18585/38329 Warning-
Attribute TREF contains invalid ref =18585/74770.
This error will also be highlighted to the normal users as they check their designs so it will be
picked up there. However, if the identical reference numbers in these messages recur the
Administrator should follow up with the last user to access the element (info in session data)
to make sure it is cleared.
The Fatal Errors listed in a Dice report are usually ones that need immediate attention and
action to repair the database will be needed. Nevertheless, on occasion the error can either
be tolerated for a period as it is not truly critical, or may have been wrongly categorised as
Fatal and constitutes only a warning e.g. Error in level 2 NAME table, session no. 10469,
page no. 42385 - incorrect value of first key on lower level page no. 42386 (extract 1).
While AVEVA provides analysis of each error message outlining how it should be addressed,
the nature of an individual project set-up can make the appropriate method variable.
Therefore it is recommended that, as the Administrator becomes familiar with the action
needed to address each warning or error, it is documented and recorded in project work
instructions.
Certain database errors can be fixed by running Dice again against the problem database,
this time in “patch” mode to apply the repair. Two typical examples are:
Child extract 12 not listed on header page
Element SBFITTING/SBFIT99 needs clearing from mainlist in header extract
This should normally be done when there is no Write access to the database. Even though
the Dice report will report the problem cleared, it is a good idea to rerun a full Dice check on
the repaired DB with “patch” mode disabled, to be 100% sure the problem is cured.
Other database errors can only be fixed by a Reconfiguration of the database. For example:
Element =35021/13323 has an inconsistent entry in the name table. Name
exists on the element but is not in the name table itself. Thus the element
can not be navigated to by name. Please reconfigure this DB to resolve the
problem
This work should be done when there is no Read or Write access to the database, but to
avoid a complete project shutdown it is possible to remove the problem DB from all MDBs,
do the repair and then replace it. Because of the additional complexity this may involve,
looking for a window in the project workload is normally the preferred choice.
Two or three days before a phase of major deliverable production it is recommended to be
especially diligent in Dice checking to make sure that all databases are in good shape and
reduce the risk of an interruption in the bulk process.
If a user reports an unusual problem with part of the project data, such as a Dabacon crash,
the first step should always be to perform a Dice check on the database(s) involved. If the
report shows issues that cannot be repaired by patching or reconfiguration, then the Dice
report should be passed to AVEVA support for analysis.
19.2 Global
This section provides information to advise Administrators on good practices. We
recommend you read it fully.
If it is felt desirable to run the updates sequentially then a script will be required that uses
the EXECA and EXECB script attributes on the Update event (LCOMD) to run pre- and
post-execution scripts on a scheduled update. This could also:
• Record update start and finish times
• Report on Database sessions
• Lock out other updates by creating/deleting a lock file
This script is not a standard delivery as it needs tailoring for each project set-up. If
required the customer can request services from AVEVA to deliver this.
• Legend: P = Primary location; S = Secondary location; Locations aligned.
This macro is not a standard delivery as it needs tailoring for each project set-up. If required
you can request services from AVEVA to deliver this.
The administrator should check that each update process completes successfully and that
the realignment has been successful before kicking off the next update.
19.2.6 Flushing/Issuing
It is common practice on a project that uses Extract databases, whether Global or not, for
all users to follow common practices for Flushing. Generally, each user is expected to
Claim, Flush and/or Issue on an object-by-object basis (or in small groups of objects).
However, some customers may decide to manage the Flush and Issue on a collective basis
at managed intervals, say once a day. If this is done, the Flush or Issue should be done at
as high a level in the database as possible, e.g. SITE. This reduces both the number
of sessions created and the database file size.
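For this collective approach, the flush or issue can be given once at a high-level element using the HIERARCHY option, for example (the site name is hypothetical):

```
EXTRACT FLUSH /SITE-A HIERARCHY   $* flush the site and everything below it
EXTRACT ISSUE /SITE-A HIERARCHY   $* or issue, releasing the extract claims
```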
Note that if the Model Object Manager software is in use, the program does background
flushing and issuing to keep the Primary data as synchronised with the Oracle data as
possible. If Model Object Manager is in use regular Global Updates will also reduce the risk
of the user viewing Oracle data that is not synchronised.
An update may not have successfully updated ALL databases, although the overall command
has been successful.
If the MESSAGE reads 'Update All succeeded (NNNN DBs) with MMMM failures' then the
administrator MUST investigate the failures; the FAILURES pane of the Transaction
messages form indicates them. If this check is considered worth separating into a distinct
procedure, a macro may be written to collect TRFAIL elements below the TRINCO for the
TIMEDUPDATES user.
Merge has to be done at the Primary location unless a Leaf extract organisation has been
used where the Remote Merge functionality can be used from the hub. Remote Merge can
be done with the Daemon running, but for normal Merge operations it is recommended that
the Daemon is stopped to prevent any updates occurring. The steps to be taken prior to a
merge are covered more thoroughly in the Database File locks section of this document.
Note: A Leaf extract is a database which does not own other database extracts.
19.4.1 Background
On occasion there may be circumstances where after a piece of Administration work such
as session merging, the database file has been found to be locked by the Windows
Operating System. To resolve file locks, the Administrator has two options:
• Reboot the computer where the databases reside (this assumes the Administrator has
the privilege to do this; in many cases it is not a practical solution).
• Resolve file locks using a specific tool as described in admnew Files.
Also note that if the project is not used as a foreign project, you have a third choice in the
Overwrite DB Users flag, which is the LCPOVW attribute of the LOC element.
This attribute controls whether a ‘locked’ file at a location may be overwritten. If this attribute
is set TRUE and there are no database READERS in the project, then Global will overwrite
the ‘locked’ file by the .admnew file.
Important: Do not do this if other projects include this database as a foreign project, since
these are valid READERS that are not recorded in the session data for the
Global project.
• Removing Users
After a session has been illegally exited, either deliberately or due to an unexpected system
fault, the Users who were accessing the databases may be left as phantom users (also
known as dead users) in the system. To clear these users from the databases and release
their claims the Administrator can use the Expunge syntax for all users or specific db’s (see
ADMIN Command Reference manual for details of all Expunge options, including how to set
the Overwrite DB Users option to allow non-foreign projects to copy over locked files
provided there are no users recorded in the COMMS db. Overwriting is disabled by default
because it may cause sessions of “dead” users to crash).
You can use the ADMIN Module for this also.
To force live “rogue” users out of the system who have not followed the request to leave the
system before Admin work is carried out, the Expunge User Process can be used. This will
not stop the process on the Workstation but it will sever the link with the database file and
the next time the user tries to access the process (Module Window) it will crash. After the
Expunge User Process has been done it is common practice to then use Expunge All Users
to remove any lingering phantom users and release all claims.
However, it is necessary after the Expunge processes (or other illegal exits) to make sure
that the database files have not been locked by Windows or left open and they should be
closed so that further work in the databases can be done. As the files normally reside on a
separate File Server, administration access to that server will be required.
Note: PsFile only shows files opened remotely, so won't show files open by processes
running on the File Server itself - e.g. if scheduled jobs are being run on the File
Server itself.
Alternatively, you can use the Microsoft NETFILE API on the server to free locked db files.
Summary of steps before conducting an ADMIN task on a database
1. Broadcast a message to all users on the project telling them that they should cleanly
exit by a required time. If the ADMIN MESSAGE command is used note that it will only
be visible to those logged in at the time, and when they change module.
2. At the advised time Lock the project via ADMIN to prevent any users accessing the
databases further.
3. If a Global project, stop the Daemon to stop updates and/or remote claiming.
4. Check the project for any users still logged in and try to get in contact with them and
ask them to leave the project cleanly.
5. Any users who cannot be contacted should be severed from the project by Expunge
User Process.
6. Expunge All Users to remove any phantom users and release any claims.
7. Using pstools PsFile check for any open or locked db files on the db File Server.
8. Using pstools PsFile close any open or locked db files on the db File Server.
Note: In truth, the only databases that should not be accessed in the project in Read or
Write mode are those on which an ADMIN task such as reconfiguration or session
merging is being undertaken. However, the only way to secure this without getting all
users out of the project is to isolate the databases (inclusive of the whole extract
family) from use by removing them from all MDBs and then performing steps 1-8 with
the exception of 6. Deferring them is not recommended as the user can overwrite the
deferral. After the Admin task has been performed on the specific databases they can
then be re-added to the MDBs.
As this adds an extra level of complexity to the Admin task it is therefore suggested that a
window of time is sought where the whole project can be shut down.
In this scenario the SAT2 users working on the EX2_SAT1 db are claiming objects from EX1
Primary at SAT1.
This can be done dynamically in Explicit Claim mode over the Daemon. However, the
response can be variable, leaving the SAT2 users unsure as to the status of their claim.
Therefore it is recommended that the project is organised in such a way that the EX1
Primary objects to be worked on at SAT2 are identified and marked by the SAT1 users, and
then an Admin process is run to Extract Claim the collection to the EX2_SAT1 Primary db at
SAT2. When the work on the objects is complete, the SAT2 users mark the objects as ready
to be Issued and an Admin process is run to Extract Issue the collection back to EX1
Primary.
For a standard project setup refer to Chapter 5 of the Administrator User Guide. There is an
extra project setup for Global which is:
• Set up environment variables for location transfer directories. The environment
variables must be added to the {proj}evars.bat file in the project folder.
set {PROJ}_HUB=C:\{your project path}\TRANSFER\HUB
set {PROJ}_PFB=C:\{your project path}\TRANSFER\PFB
etc.
These variables identify the transfer directories for the project locations.
Launch the base product and, at the login screen, make sure that the project that is to be
made Global is selected. Then, from the login page, make sure that the Admin module is
selected.
In the Admin module select Display > Command Window.
At command line type the following commands in sequence:
• Lock
• make Global
Note: The user will be prompted to close and re-open the Admin module.
• unlock
• savework
Then quit the Admin module and reload as prompted.
Note: When re-starting the Admin module a prompt will inform the user that the Location is
uninitialised.
In the Admin module select Locations from the Elements pulldown and highlight /projecthub.
Click Modify and rename /projecthub to /hub then click Apply.
A prompt will ask if the user wants to initialise the location. Click Yes.
A prompt will be displayed indicating that a new transaction database has been created.
Click OK then Dismiss on the Modify Location window.
Start the Global daemon from a Windows command line. Click Start > Run and type
CMD to open a command window, then enter:
C:\AVEVA\Global{version}\admind start {proj}
To verify that the command has run successfully, the user can query the linit flag of
the location.
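A hedged sketch of such a check, from the Admin command window (this assumes the hub location element is named /HUB, as renamed above, and that LINIT is queried like any other attribute):

```
$* Navigate to the location element and query its initialisation flag
/HUB
Q LINIT
$* A value of true indicates that the location has been initialised
```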
At the Satellite, use Windows Explorer to copy the files in {proj}_PFB to the location
directory where the project will reside as {proj}000 (i.e. the Satellite).
• Set up the base product environment at satellite location (executables, Project
directories etc.)
set AVEVA_DESIGN_EXE=C:\ita_test_env\E3D
set {proj}000=C:\net\project\{proj}000 …..etc.
• Start the daemon at the location PFB
%AVEVA_DESIGN_EXE%\admind start {proj}
The database properties NACCNT, HCCNT and CLCCNT may be queried in the normal way
by navigating to the DB element for the database (for example, /*MYTEAM/DESI) and
querying its attributes. It should be emphasised that these attributes are properties
of the database file, and may differ at each location.
Alternatively, a PML object <DB> may be constructed for the database:
!DD = OBJECT DB ('/*MYTEAM/DESI')
Then the properties may be queried:
!DD.NACCNT
!DD.HCCNT
!DD.CLCCNT
!DD.LatestSession()
Note that the last of these, LatestSession(), is a method, not a member. The primary
location and filename may also be queried:
!DD.FileName
!DD.Prmloc
The same properties may be queried for a database at a remote location ABC by using:
Q REMOTE ABC MYTEAM/DESI FILEDETAILS
Q REMOTE ABC MYTEAM/DESI LASTSESSION
The FILEDETAILS option returns the Compaction number (NACCNT), last session, Extract
list/Header changes count (HCCNT) and Claim changes count (CLCCNT) for the database
at the specified location. These may be compared with the local values.
Note: These commands return data in CSV format when used with a variable, e.g.
!REMOTE ABC MYTEAM/DESI FILEDETAILS
However, the Daemon trace log does include this information if the appropriate trace
level (trace bit 3) is turned on. The information is only present in the log of the
location which issued the update. The relevant lines might read:
(6) At Tue Oct 04 01:03:24 2005 Processing DB %ABC000%/abc2315_0001
(6) At Tue Oct 04 01:03:24 2005 Compaction numbers: local 0 remote 0
(6) At Tue Oct 04 01:03:24 2005 Session numbers: local 3 remote 2
(6) At Tue Oct 04 01:03:24 2005 Claim Changes counts: local 17 remote 1
(6) At Tue Oct 04 01:03:24 2005 Extract List counts: local 3 remote 10
In this case this indicates that the current location has a more recent session than the
remote location. The Claim count only applies to a session, so its value will be ignored
unless the session numbers are the same. In this example, the implied propagation
direction is from the current location to the remote location.
However, before making the update, the Daemon checks the update direction, to make sure
that the propagation direction is consistent with the direction away from the primary location
of the database. If this check fails, then the ‘Prevented reverse propagation’ error causes
the update to fail.
Occasionally, it is not possible for the daemon to check the Update direction (Global db may
be in use). In this case, the failure will read ‘Update skipped’. This is normally a temporary
problem, and the database will be propagated as normal on the next scheduled update.
Note: Some commands (such as Claims) use Successes as a way of passing data
between operations, and so contain fairly obscure data.
Global can be used to distribute the catalogue databases around the world, so that projects
can include them as foreign databases.
A project cannot include a database from a Global project unless the project itself is
Global (that is, the MAKE GLOBAL command has been executed on it). Therefore, in a
set-up where several projects all use a Globally distributed project, those projects
must themselves be Global in the MAKE GLOBAL sense; they do not have to be distributed
themselves. Single-location Global projects do not require a Global licence.
Single-location Global projects may be created and made Global at their resident
location. You will then be able to include the catalogue data in the usual way.
If there are many multiple-location projects that share this project, then it will be
necessary (because of your HUB licence) to create each project and make it Global at
the HUB, and then copy it to its eventual resident location. You will then be able to
include the catalogue data in the usual way.
Note that if a Global project is being used to distribute catalogue databases for other
projects to include, the Overwrite DB Users flag (see admnew Files) should be disabled.
The PML function below allows transactions older than a specified number of days to be
deleted. It is an alternative to using Transactions Merge/Purge, described in Automatic
Merging and Purging of a Transaction Database. The function must be copied into PMLLIB
(under Global\functions). It may then be run using !!PurgeTransaction(value), where
value is the number of days of transactions to retain:
define function !!PurgeTransaction(!days is REAL)
  if (!days gt 28) then
    !!Alert.error('Maximum purge time is 28 days')
    return
  endif
  if (not !!Alert.Confirm('The local daemon must be shut down before you can continue with the purge/merge operation. Do you wish to continue?').Boolean()) then
    return
  endif
  $P Searching for complete transactions...
  !monlengths = '31,28,31,30,31,30,31,31,30,31,30,31'
  !today  = object DATETIME()
  !year   = !today.year()
  !month  = !today.month()
  !day    = !today.date()
  !hour   = !today.hour()
  !minute = !today.minute()
  !second = !today.second()
  $* Step back !days days, adjusting the month and year where necessary
  !day = !day - !days
  if (!day lt 1) then
    !month = !month - 1
    if (!month lt 1) then
      !year  = !year - 1
      !month = 12
    endif
    if (!month eq 2) then
      $* Simple leap-year test (valid for years 2001-2099)
      !leaptest = (!year - 2000) / 4
      if (!leaptest eq !leaptest.int()) then
        !day = 29 + !day
      else
        !day = 28 + !day
      endif
    else
      !day = !monlengths.split(',')[!month].real() + !day
    endif
  endif
  !date = object DATETIME(!year,!month,!day,!hour,!minute,!second)
  !collection = object COLLECTION()
  GOTO FRSTW TRAN
  !collection.scope(!!ce)
  !filter = object EXPRESSION('upc(TSTATE) eq |COMPLETE|')
  !collection.filter(!filter)
  !collection.type('TRINCO')
  !trincos = !collection.results()
  !promptstr = 'Found ' & !trincos.size().string() & ' complete transactions...'
  $P $!promptstr
  !promptstr = 'Deleting obsolete transactions more than ' & !days.string() & ' days old...'
  $P $!promptstr
  !numdel = 0
  !numh = 0
  do !trinco values !trincos
    !datecm = object DATETIME(!trinco.datecm)
    !datend = object DATETIME(!trinco.datend)
    if (!trinco.incsta.upcase() eq 'PROCESSED' and !datecm.lt(!date) or !trinco.incsta.upcase().inset('TIMED OUT','CANCELLED','REDUNDANT') and !datend.lt(!date)) then
      !numdel = !numdel + 1
      !!CE = !trinco
      DELETE TRINCO
      $* Delete any hierarchy elements left empty by the deletion
      if (!!CE.members.size() eq 0) then
        DELETE TRLOC
        !numh = !numh + 1
        if (!!CE.members.size() eq 0) then
          DELETE TRUSER
          !numh = !numh + 1
          if (!!CE.members.size() eq 0) then
            DELETE TRDAY
            !numh = !numh + 1
            if (!!CE.members.size() eq 0) then
              DELETE TRMONT
              !numh = !numh + 1
              if (!!CE.members.size() eq 0) then
                DELETE TRYEAR
                !numh = !numh + 1
              endif
            endif
          endif
        endif
      endif
    endif
  enddo
  $P $!numdel obsolete transactions deleted
  $P $!numh associated hierarchy elements deleted
  if (!numdel eq 0) then
    $P No merge necessary
    !!Alert.Message('No obsolete transactions found')
  else
Index

A
ADMIN Daemon 3:1
Areas 5:4

C
Command Processing 2:1

D
Database
  allocation check 5:1
  allocation to location 5:1
  creating extract 16:3
  creating master 16:3
  de-allocation 5:2, 5:3
  deleting 11:1
  macros 10:4
  manual update 10:1
  master of extract 16:1
  merging 6:1
  reconfiguring 13:1
  recovery 12:1
  recovery of global 12:2
  recovery of primary 12:2
  recovery of primary location 12:2
  recovery of secondary 12:1
  synchronisation 10:1
  update delay 10:2
  update protection 10:9
  update timing 10:7
  updating 10:1
DESIGN Manager files 10:7

E
Extracts 16:1
  access 16:7
  children 16:1
  claim restrictions 16:11
  creating 16:3
  creating working 16:5
  dropping changes 16:14
  explicit claim 16:11
  extract claim 16:9
  flushing 16:13
  flushing command failure 16:11
  hierarchy 16:7
  implicit claim 16:11
  issuing changes 16:14
  master 16:1
  merging changes 16:16
  numbers 16:6
  parent database 16:1
  partial operations 16:15
  querying family 16:2
  reference blocks 16:6
  refreshing 16:14
  releasing claims 16:14
  sessions 16:15
  user claim 16:9
  using in 16:8
  variant 16:18

F
Firewall 18:1
writing to 7:1

G
Global Daemon
  access rights 3:1
  diagnostics 4:1
  location 3:1

H
Hub
  changing 9:1
  recovering 9:2

I
ISODRAFT files 10:7, 17:2

K
Kernel Command 2:1, 7:1

L
Locations
  off-line 17:1

M
Macros 10:4

P
Pending file 2:1, 8:1
PLOT files 10:7, 17:2
Projects
  backing up 15:1

T
Transaction Audit 7:1
Transaction database
  audit trail cancelled commands 7:7
  audit trail dates and counts 7:5
  audit trail from TRINCO 7:2
  audit trail from TROPER 7:4
  audit trail from TROUCO 7:3
  audit trail results and messages 7:7
  commands 2:1, 7:1
  management 12:3
  merging 12:3
  merging and purging 7:13
  reading from 7:1
  reconfiguring 12:4
  renewing 12:3