OpenText™ Archive Center
Administration Guide
AR160200-00-ACN-EN-03
Rev.: 2017-May-02
This documentation has been created for software version 16.2.
It is also valid for subsequent software versions as long as no new document version is shipped with the product or is
published at https://knowledge.opentext.com.
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544
International: +800-4996-5440
Fax: +1-519-888-0677
Support: https://support.opentext.com
For more information, visit https://www.opentext.com
Disclaimer
Every effort has been made to ensure the accuracy of the features and techniques presented in this publication. However,
Open Text Corporation and its affiliates accept no responsibility and offer no warranty, whether expressed or implied, for the accuracy of this publication.
Table of Contents
Part 1 Overview 21
Part 2 Configuration 47
Table 8-3: “Send the certificate to an Archive Center (putCert)” on page 145
OpenText Archive Center (short: Archive Center) provides a full set of services for
content and documents. Archive Center can either be used as an integral part of
OpenText™ Content Suite Platform or as stand-alone server in various scenarios.
For more information about the differences between the scenarios of the “classic”
Archive Server and those that were introduced by Archive Center, see also
“Scenarios leveraging Archive Center Application Layer and web apps“
on page 39.
“Overview” on page 21
Read this part to get an introduction to Archive Center, the architecture, the storage systems, and basic concepts like logical archives and pools. You will also find a short introduction to the Administration Client and its main objects.
“Configuration” on page 47
This part describes the preparation of the system and the configuration of
Archive Center: logical archives, pools, jobs, security settings, connections to
SAP and scan stations.
Audience and knowledge
This document is written for administrators of Archive Center, for the project managers responsible for introducing archiving, and for all readers who share an interest in administration tasks and must ensure the trouble-free operation of Archive Center. The following knowledge is required to take full advantage of this document:
• Familiarity with the relevant operating system, Windows® or UNIX®/Linux®.
• A general understanding of TCP/IP networks, HTTP protocol, network and data
security, and databases.
• Additional knowledge of NFS file systems is helpful.
On the basis of this information you can decide which scenario you are going to use
for archiving and how many logical archives you need to configure. You can
determine the size of disk buffers and caches in order to guarantee fast access to
archived data.
In this scenario, the total archiving load can be distributed among multiple
instances (“nodes”) of Archive Center. For more information, see also “Running
Archive Center as a cluster” on page 28.
[Figure: Archive Center architecture — applications (Enterprise Document Library, SAP, others) connect via the Document Pipeline and services to Archive Server and the storage devices.]
Archive Server
Archive Server, the core server component of Archive Center, incorporates the
following components for storing, managing, and retrieving documents and data:
• Document Service (DS) handles the storage and retrieval of documents and components.
• Storage Manager (STORM) manages and controls the storage devices.
• Administration Server provides the interface to the Administration Client, which helps the administrator create and maintain the environment of Archive Center, including logical archives, storage devices, pools, etc.
Administration tools
To administer, configure and monitor the components mentioned above, you can
use the following tools:
• Administration Client is the tool to create logical archives and to perform most of
the administrative work like user management and monitoring. See also
“Important directories on Archive Center” on page 27.
• Archive Monitoring Web Client is used to monitor information regarding the
status of relevant processes, the file system, the size of the database and available
resources. This information is gathered by the Archive Monitoring Server from
Archive Server. See also “Using OpenText Archive Server Monitoring“
on page 303.
• Document Pipeline Info is used to monitor the processes in the OpenText
Document Pipeline.
Storage devices
Various types of storage devices offered by leading storage vendors can be used by Archive Center for long-term archiving. See “Storage devices” on page 34.
<OT logging>
Directory used for Archive Center log files.
Windows default: C:\ProgramData\OpenText\log
UNIX/Linux default: /var/adm/opentext/log
<OT var>
Directory used for Archive Center variables.
Windows default: C:\ProgramData\OpenText\var
UNIX/Linux default: /var/adm/opentext
Related Topics
• “Features of Archive Center” on page 25
• “Pools and pool types” on page 36
• “Scenarios leveraging Archive Center Application Layer and web apps“
on page 39
• ““Infrastructure” node” on page 42
• “Installing and configuring storage devices” on page 49
• “Adding a single file (VI) device” on page 57
• “Configuring disk volumes” on page 60
• “Creating and modifying disk volumes” on page 61
• “Configuring caches” on page 69
• “Data compression” on page 76
• “Timestamps” on page 133
• “Configuring scan clients for a clustered installation” on page 188
• “Adding and modifying known servers“ on page 191
• “Configuring Archive Cache Server“ on page 205
• “Adding an Archive Cache Server to the environment” on page 209
• “Reassigning the jobs of a node in a cluster installation” on page 247
• “About migration “ on page 259
Within this guide, “content” is used to label all components belonging together.
Normally, all content components are stored together on the same type of medium.
However, it is also possible to separate the components and store them on different
media. For example, you can store documents and notes on different hard disks.
Documents are identified by a unique ID. The leading application uses this ID for
content retrieval. Archive Center delivers all components belonging to this ID to the
leading application.
In the “classic” scenarios, Archive Center only stores the content of documents. The
metadata describing the business context of the documents are stored in the leading
application. The link between the metadata and the content is the unique ID
mentioned above.
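The link between the unique ID and the stored components can be sketched as a simple lookup. This is purely illustrative — the class and method names below are invented for the sketch and are not an Archive Center API:

```python
# Illustrative model only: a logical archive maps each unique document ID
# to its components, which may be stored on different media.

class LogicalArchive:
    def __init__(self):
        self._docs = {}  # doc_id -> {component_name: medium}

    def store(self, doc_id, components):
        """Store all components belonging to one document ID."""
        self._docs[doc_id] = dict(components)

    def retrieve(self, doc_id):
        """Deliver all components belonging to this ID to the leading application."""
        if doc_id not in self._docs:
            raise KeyError(f"unknown document ID: {doc_id}")
        return self._docs[doc_id]

archive = LogicalArchive()
# Document and notes stored on different hard disks, as in the example above:
archive.store("aaa-0001", {"document": "disk_1", "notes": "disk_2"})
```

The leading application keeps only the ID and the business metadata; everything keyed under that ID is delivered together on retrieval.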
Archive Center represents a large virtual storage system, which can be used by
various applications. All documents that belong to a business process can be
grouped together by the concept of a logical archive. In general, a logical archive is a
collection of documents that have similar properties.
[Figure: Archiving content — the application sends content input to Archive Server; the logical archive routes it through the buffer and cache into a pool, and write activity moves it to the storage device (volumes).]
3. Content is copied to the associated storage platform for long-term archiving. The time scheduling is configured in the Write job. If a cache is used, the content is copied simultaneously to the cache. This can also be done by the scheduled Purge_Buffer job.
5. When at least one copy of the document has successfully been written to the
long-term storage, the document can be deleted from the disk buffer.
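The steps above can be sketched as a small simulation (all names are hypothetical; the real write path is internal to Archive Center):

```python
# Toy model of the archiving write path: content lands in the disk buffer,
# the Write job copies it to long-term storage (and, if configured, to the
# cache), and only then may the buffer copy be purged.

class WritePath:
    def __init__(self, use_cache=False):
        self.buffer, self.storage, self.cache = {}, {}, {}
        self.use_cache = use_cache

    def ingest(self, doc_id, content):
        self.buffer[doc_id] = content          # content written to disk buffer

    def run_write_job(self):
        for doc_id, content in self.buffer.items():
            self.storage[doc_id] = content     # copy to long-term storage
            if self.use_cache:
                self.cache[doc_id] = content   # simultaneous copy to the cache

    def run_purge_job(self):
        # A buffer copy may only be deleted once at least one copy
        # exists on the long-term storage.
        for doc_id in list(self.buffer):
            if doc_id in self.storage:
                del self.buffer[doc_id]

wp = WritePath(use_cache=True)
wp.ingest("doc1", b"payload")
wp.run_write_job()
wp.run_purge_job()
```

Note how the purge step refuses to delete anything that has not yet been copied — the same guarantee the Purge_Buffer job provides.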
[Figure: Retrieving content — the application sends a content request to Archive Server; the logical archive delivers content from the buffer, from the cache, or, via the pool, from the storage device (volumes).]
1. Content is requested by a client. For this, the client sends the unique document
ID and archive ID to Archive Center.
2. Archive Center checks whether the content consists of multiple components and where the components are stored.
3. If the content is still stored in the buffer or in the cache, it is delivered directly to
the client.
4. If the content is already archived on the storage device, Archive Center sends a request to the storage device, retrieves the content, and forwards it to the application. Content is returned in chunks, so the client does not have to wait until the complete file is read. This is important for large files, or if the client only reads parts of a file.
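The chunked delivery in step 4 can be illustrated with a simple generator (chunk size and function name are assumptions for the sketch, not Archive Center internals):

```python
# Sketch of chunked content delivery: the client can start processing as
# soon as the first chunk arrives instead of waiting for the whole file.

import io

def deliver_in_chunks(stream, chunk_size=64 * 1024):
    """Yield the content piece by piece, as it is read from storage."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# A 200,000-byte "file" is delivered in four chunks of at most 64 KiB:
content = io.BytesIO(b"x" * 200_000)
chunks = list(deliver_in_chunks(content))
```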
The logical archive does not determine where or how the content is archived.
The archive settings define the general aspects of data handling during archiving,
retrieval, and at the end of the document lifecycle.
• Pool(s) to specify the storage platform and to assign the buffer(s) to the
designated storage platform(s); see also “Pools and pool types” on page 36.
• Buffer(s) and disk volumes to store incoming content temporarily; see also “Disk
buffers” on page 34.
• Storage devices and storage volumes for long-term archiving of content; see also “Installing and configuring storage devices” on page 49.
• Cache to accelerate content retrieval. Only necessary if slow storage devices are
used; see also “Caches” on page 37.
• Retention period for content; see also “Retention” on page 79.
• Compression and encryption settings; see also “Data compression” on page 76
and “Encrypted document storage” on page 128.
• Security settings and certificates; see also “Configuring the archive security
settings” on page 87.
• An Archive Cache Server, if used; see also “Configuring Archive Cache Server“
on page 205.
Sufficient free disk space must be available in the buffer to accommodate new
incoming documents. The documents that have already been written to the storage
media must therefore be deleted from the disk buffer at regular intervals. This can
only be done if a copy of the document has successfully been stored on the long-
term storage. This is usually done by the Purge Buffer job.
Documents can be retrieved quickly as long as they are in the disk buffer; the disk buffer works as a read cache in this case. Retrieval time can increase once the content has been written to the final storage platform.
Archive Center primarily supports storage devices that offer WORM functionality,
retention handling, or HSM functionality. Depending on their type, the storage
devices are connected via STORM, VI (vendor interface) or API (application
programming interface).
The following criteria help you decide between single file storage and ISO images.
ISO images
• Very small files
• Same document type
• Same lifecycle
• Bulk deletion at the end of the lifecycle
• Less administration effort
• Simple backup or migration
• Partial read access to documents
Related Topics
• “Installing and configuring storage devices” on page 49
• “Creating and modifying pools” on page 92
• “Pools and pool types” on page 36
Note: To back up the documents stored in a pool, so-called shadow pools can be assigned to the original pool; see “Creating and configuring shadow pools” on page 97.
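The effect of a shadow pool — every write to the original pool also lands in the shadow pool — can be sketched as follows (class names are invented for illustration; shadow pools are actually configured in Administration Client, not in code):

```python
# Toy model: a pool with an assigned shadow pool duplicates every write,
# so the shadow always holds a backup copy of the stored documents.

class Pool:
    def __init__(self, name):
        self.name = name
        self.volumes = {}  # doc_id -> content

    def write(self, doc_id, content):
        self.volumes[doc_id] = content

class PoolWithShadow(Pool):
    def __init__(self, name, shadow):
        super().__init__(name)
        self.shadow = shadow

    def write(self, doc_id, content):
        super().write(doc_id, content)
        self.shadow.write(doc_id, content)  # mirrored backup copy

shadow = Pool("shadow")
original = PoolWithShadow("original", shadow)
original.write("doc1", b"data")
```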
The same storage platform can be used in different archives with different pool
types. The following pool types are currently available:
Figure 2-4 illustrates the dependencies between pool types and storage systems.
[Figure 2-4: Archive Server connects single file storage via Document Service and container storage via STORM to NAS, HSM, SAN, and CAS devices. VI: vendor interface; FS: file system interface.]
Related Topics
• “Creating and modifying pools” on page 92
• “Installing and configuring storage devices” on page 49
2.3.5 Caches
Caches are used to speed up the read access to documents. Archive Center can use
several caches: the disk buffer, the local cache volumes and an Archive Cache Server.
The local cache resides on Archive Center and can be configured; it is recommended to accelerate retrieval. An Archive Cache Server is intended to reduce data transfer in a WAN and to speed up access; it is installed on its own host in a separate subnet.
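The resulting read order — fast tiers before the storage device — can be sketched like this (a simplified model; the actual lookup logic, including Archive Cache Server, is internal to Archive Center):

```python
# Sketch of the read-side lookup order: the disk buffer and the local
# cache are consulted before falling back to the slow storage device.

def read_document(doc_id, buffer, cache, storage):
    """Return (location, content); fast tiers are checked first."""
    for location, tier in (("buffer", buffer), ("cache", cache)):
        if doc_id in tier:
            return location, tier[doc_id]
    return "storage", storage[doc_id]   # slow path: read from storage device

buffer = {}
cache = {"doc7": b"cached"}
storage = {"doc7": b"archived", "doc8": b"archived"}
```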
Related Topics
• “Configuring caches” on page 69
• “Configuring disk volumes” on page 60
• “Configuring Archive Cache Server“ on page 205
2.4 Jobs
Jobs are recurrent tasks that are started automatically according to a time schedule or when certain conditions are met. This allows, for example, temporarily stored content to be transferred automatically from the disk buffer to the storage device. See also “Configuring jobs and checking job protocol“ on page 115.
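The job concept — run when the schedule fires or when a condition holds — can be sketched as follows (a minimal model with invented names, not the Archive Center job engine):

```python
# Toy model of a recurring job: it fires when its scheduled time is
# reached OR when its trigger condition becomes true.

class Job:
    def __init__(self, name, interval, action, condition=None):
        self.name = name
        self.interval = interval
        self.action = action
        self.condition = condition
        self.next_run = interval  # first scheduled run

    def tick(self, now):
        """Check the job at time 'now'; run it if due or triggered."""
        due = now >= self.next_run
        triggered = self.condition() if self.condition else False
        if due or triggered:
            self.action()
            self.next_run = now + self.interval
            return True
        return False

runs = []
write_job = Job("Write", interval=10, action=lambda: runs.append("write"))
```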
Note: Archive Center 16 EP2 can be used with or without the Application
Layer (CMIS).
Without the Application Layer, Archive Center continues the feature set of previous versions (Archive Server 10.5.0 and earlier). This guide describes the basic functionality of Archive Center, which includes both the core scenarios and scenarios that use the Application Layer and web applications (here called “extended functionality”).
Further information
For more information about the extended functionality with regard to tasks for the system administrator, see section 7 “Administering the Archive Center server” in OpenText Archive Center - Scenario Configuration Guide (AR-CGD).
Operating mode
Archive Center with extended functionality runs in one of the following modes: multi-tenant mode (as a public cloud server or as a private cloud server) or single-tenant mode (on-premises infrastructure). The mode must be chosen during installation but can be changed later in the OpenText Administration Client. The operating mode defines default values for certain properties of logical archives. For more information, see section 1.2 “Operation mode” in OpenText Archive Center - Scenario Configuration Guide (AR-CGD) and “Default values for the administration web client” on page 495.
Web clients
The extended functionality introduces the following additional web clients:
OpenText™ My Archive
Allows the tenant’s users to access their documents
Related Topics
• “Scenarios leveraging Archive Center Application Layer and web apps“
on page 39
Buffers
Documents are collected in disk buffers before they are finally written to the
storage medium. To create disk buffers, see “Configuring buffers” on page 63.
To get more information about buffer types, see “Disk buffers” on page 34.
Caches
Caches are used to accelerate the read access to documents. To create caches, see
“Configuring caches” on page 69.
Cluster Nodes
Cluster topic: This object shows information about the cluster nodes (for
example, IP addresses and ports).
For more information about cluster topics, see also “Running Archive Center as a
cluster” on page 28.
Storage Devices
Storage devices are used for long-term archiving. To configure storage devices, see “Installing and configuring storage devices” on page 49.
Disk Volumes
Disk volumes are used for buffers and pools. To configure disk volumes, see
“Configuring disk volumes” on page 60.
Original Archives
Logical archives of the selected server. To create and modify archives, see
“Configuring archives and pools“ on page 75.
Replicated Archives
Shows replicated archives; see “Logical archives” on page 75.
External Archives
Shows external archives of known servers; see “Logical archives” on page 75.
Cache Servers
Cache servers can be used to accelerate content retrieval in a slow WAN. See
“Configuring Archive Cache Server“ on page 205
Known Servers
Known servers are used for replicating archives in remote standby scenarios.
See “Adding and modifying known servers“ on page 191.
SAP Servers
The configuration of SAP gateways and systems to connect SAP servers to
Archive Center. See “Connecting to SAP servers“ on page 175.
Scan Stations
The configuration of scan stations and archive modes to connect scan stations to
Archive Center. See “Configuring scan stations“ on page 181.
Alerts
Displays alerts of the “Admin Client Alert” type. See “Checking alerts” on page 301. To receive alerts in the Administration Client, configure the events and notifications appropriately. See “Monitoring with notifications“ on page 293.
Events and Notifications
Events and notifications can be configured to get information on predefined
server events. See “Monitoring with notifications“ on page 293.
Jobs
Jobs are recurrent tasks which are automatically started according to a time
schedule or when certain conditions are met, for example, to write content from
the buffer to the storage platform. A protocol allows the administrator to watch
the successful execution of jobs. See “Configuring jobs and checking job
protocol“ on page 115.
Key Store
The certificate store is used to administer encryption certificates, security keys, and timestamps. See “Configuring a certificate for document encryption” on page 149.
Policies
Policies are a combination of rights which can be assigned to user groups. See
“Checking, creating, or modifying policies” on page 164.
Reports
The Reports object contains the tabs “Reports” and “Scenarios”, which display the generated reports and the available scenarios, respectively. See “Scenario reports“ on page 219.
Storage Tiers
Storage tiers designate different types of storage. See “Creating and modifying
storage tiers” on page 111.
Utilities
Utilities are tools which are started interactively by the administrator; see
“Utilities“ on page 249.
Archive Server
Shows configuration variables related to Archive Center. This includes Administration Server, database server, Document Service logging, Notification Server, and Archive Timestamp Server.
Monitor Server
Shows configuration variables related to the Archive Monitoring Server.
Document Pipeline
Shows configuration variables related to the document server.
For a description of how to set, modify, delete, and search configuration variables,
see “Setting configuration variables“ on page 221.
For a complete list including short descriptions of all configuration variables, see
“Configuration parameter reference” on page 335.
Before you can start configuring the archive system, in particular the logical
archives, their pools and jobs, you have to prepare the infrastructure on which the
system is based.
1. Create and configure disk volumes at the operating system level to use them as buffers, caches, or storage devices.
2. Configure the storage device for long-term archiving. Set up the connection to Archive Center.
• Set up the connection between the storage device and Archive Center.
• Add prepared disk volumes for various uses as buffers or local storage
devices (HDSK).
• Create disk buffers and attach hard disk volumes.
• Create caches and specify volume paths.
• Check whether the storage device is usable.
Note: Storage devices can now be connected to Archive Center using the
Administration Client.
Storage devices are configured and administered either in the Storage Devices node or in the Disk Volumes node under the Infrastructure object in the console tree. See Tables 5-1 and 5-2 below for specific systems.
Storage Devices
There are two main types of devices that are connected using the Storage Devices node:
• Single file storage: hard disk-based storage devices (“Generalized Store”, GS) that are connected with an API.
These kinds of devices are also called “single file (vendor interface)” and are described in “Adding a single file (VI) device” on page 57.
Disk Volumes
NAS and local hard disk devices are administered in the Disk Volumes node; see “Configuring disk volumes” on page 60.
Important
Released and certified storage platforms can be found in the Storage Platforms Release Notes on My Support (https://knowledge.opentext.com/knowledge/llisapi.dll/open/12331031).
Note: Storage devices can only be added in the Administration Client, not
edited or deleted.
Note: You can also restart STORM later using the corresponding
commands in the action pane.
For example, you can create multiple virtual jukeboxes and then restart
STORM once.
Number of slots
The available storage capacity is dynamically allocated as Archive Server writes data to the device. However, the server internally works with a fixed number of available slots to be filled. Once all available slots are used, no new data can be written to the device, because no blank area can be found.
Usually, the internal limit is sufficient for most cases, but for large installations the
limit needs to be raised.
If you want to put more than 1000 ISO images (default) into one virtual jukebox, the DS write job will return an error (not enough blank partitions). For more information, see My Support (https://knowledge.opentext.com/knowledge/cs.dll/open/15536782).
The maxslots value also specifies the size of the device's SAVE file. Lowering the maxslots value is not allowed and may lead to unexpected results!
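The slot behavior can be illustrated with a toy model (names invented; the real limit is the device's maxslots setting):

```python
# Toy model of the fixed-slot limit: once all slots of a virtual jukebox
# are filled, further ISO images are rejected with the "not enough blank
# partitions" error described above.

class VirtualJukebox:
    def __init__(self, maxslots=1000):
        self.maxslots = maxslots
        self.images = []

    def write_iso(self, image):
        if len(self.images) >= self.maxslots:
            raise RuntimeError("not enough blank partitions")
        self.images.append(image)

jb = VirtualJukebox(maxslots=2)
jb.write_iso("iso-1")
jb.write_iso("iso-2")   # jukebox is now full
```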
• NetApp SnapLock
Further information
Detailed information about configuring a CFS storage device can be found in the corresponding dedicated guide: OpenText Archive Center - Compliant File System Installation and Configuration Guide (AR-ICF).
2. On the Settings page, enter the File system path to your device, that is the
mount path of the volume in the file system. The path is a drive under Windows
and a volume directory under UNIX/Linux.
On Windows, you can either specify fully qualified paths of the form x:\directory\ or UNC paths like \\NASserver\win_share1.
The Archive Spawner service must be able to access the path. You might have to
run the service under a dedicated user to achieve this. If you use a drive letter,
you will have to make sure that the drive is mapped at boot time before the
Spawner service is started and will not disconnect after being idle for a while.
For the latter reason, OpenText recommends using UNC paths and not mapped
network drives with drive letters.
Click Browse to open the directory browser. Select the designated directory and
click OK to confirm.
If you enter the directory path manually, ensure that a backslash is inserted in front of the directory name if you are using volume letters (for example, e:\vol2).
Click Test Connection to verify your settings.
5. In the action pane, click Refresh to update the view in Administration Client.
1. Add EMC Centera as write at-once (ISO) device by following the description in
“Adding a write at-once (STORM) device” on page 51.
2. On the Settings page, enter the Connection string to your device.
Prerequisites
Follow the instructions in Section 2 “Configuring SSAM” in OpenText Archive Center - IBM TSM SSAM Installation and Configuration Guide (AR-IDR) before continuing.
1. Add IBM TSM SSAM as write at-once (ISO) device by following the description
in “Adding a write at-once (STORM) device” on page 51.
2. On the Settings page, enter the following:
Management class
Enter the name of the policy that defines how objects are stored and
managed in TSM.
For details, see Section 2.3 “Management classes and retention initiation” in
OpenText Archive Center - IBM TSM SSAM Installation and Configuration
Guide (AR-IDR).
OPT file
Enter the path to the OPT file defining the connection parameters for TSM
SSAM. The OPT file must be located on the Archive Center host and the
path must be a server path.
For details, see Section 1.1 “TSM client configuration files” in OpenText
Archive Center - IBM TSM SSAM Installation and Configuration Guide (AR-
IDR).
You can now attach the device; see “Configuring STORM storage devices”
on page 56.
1. Add HDS HCP as write at-once (ISO) device by following the description in
“Adding a write at-once (STORM) device” on page 51.
2. On the Settings page, enter the Connection URL (<protocol>://<namespace>.<tenant>.<cluster>:<port>/rest/<basedir>) and the User name (name of the Data Access Account for the namespace).
Click Set Password and enter the (unencrypted) password for the Data Access Account.
For details, see Section 3 “HCP HTTP connection information” in OpenText
Archive Center - HDS HCP Installation and Configuration Guide (AR-IHC).
Click Test Connection to verify your settings.
3. Optional: To change the Maximum number of slots, click Advanced.
For details, see “Number of slots” on page 52.
4. Click Next and then click Finish.
The HDS HCP device is added in the result pane.
You can now attach the device; see “Configuring STORM storage devices”
on page 56.
Note: To determine the name of the STORM server, select Storage Devices in
the Infrastructure object in the console tree. The name of the STORM server is
displayed in brackets behind the device name, for example: WORM
(STORM1).
To attach a device:
2. Select the designated device in the top area of the result pane.
To detach a device:
2. Select the designated device in the top area of the result pane.
This device can no longer be accessed and can be turned off. The status is set to
“Detached”.
Cluster topic: You cannot use the Add Storage Device wizard in Administration
Client but must configure the device manually.
For details about a specific device, see the section that corresponds to your device
below.
Note: We assume that the Windows Azure account has been created and
configured properly.
Tip: You can use the certificates provided in <OT config AC>/gs/azure_cert.pem.
1. In the result pane, select the Windows Azure device you created before.
2. In the action pane, click Add Connection and enter the following:
Container name
The top-level directory in which your data is stored. <Container> has a minimum length of 3 characters.
Account name
The name of the Windows Azure storage account. This account must be created using the Azure Management Portal (https://manage.windowsazure.com/).
Access Key
The Primary Access Key generated after creating the Storage Account.
1. In the lower part of the result pane, select the connection you created before.
Further information
See OpenText Archive Center - Windows Azure Installation and Configuration Guide (AR-IAZ) for details about the configuration.
Prerequisites
Follow the instructions in Section 2.1 “Centera server” in OpenText Archive Center - EMC Centera Installation and Configuration Guide (AR-ICE) before continuing.
3. On the General page of the Add Storage Device wizard, type a name for the
new device in the Storage Device name field.
Select EMC Centera as Storage type and Single File as Storage strategy.
Click Next.
1. In the result pane, select the EMC Centera device you created before.
2. In the action pane, click Add Connection and enter the Connection string.
1. In the lower part of the result pane, select the connection you created before.
3. On the General page of the Add Storage Device wizard, type a name for the
new device in the Storage Device name field.
Select HDS HCP as Storage type and Single File as Storage strategy.
Click Next.
1. In the result pane, select the HDS HCP device you created before.
1. In the lower part of the result pane, select the connection you created before.
Hard disk volumes are used for disk buffers, for local caches, and as local storage devices. First, you create these volumes at the operating system level. The hardware used must comply with the following:
Number and size of the volumes depend on many factors and are usually defined
together with OpenText experts or partners when the installation is prepared.
Important factors are:
4. Click New Disk Volume in the action pane. The New Disk Volume window
opens.
Cluster topic: This dialog looks different when running a cluster. For buffer volumes, you must additionally select the cluster node on which you want to create the disk volume.
Volume name
Unique name of the volume
Mount path
Mount path of the volume in the file system. The mount path is a drive
under Windows and a volume directory under UNIX/Linux.
On Windows, you can either specify fully qualified paths of the form x:\directory\ or UNC paths like \\NASserver\win_share1.
The Archive Spawner service must be able to access the path. You might
have to run the service under a dedicated user to achieve this. If you use a
drive letter, you will have to make sure that the drive is mapped at boot
time before the Spawner service is started and will not disconnect after
being idle for a while. For the latter reason, OpenText recommends using
UNC paths and not mapped network drives with drive letters.
Click Browse to open the directory browser. Select the designated directory
and click OK to confirm.
If you enter the directory path manually, ensure that a backslash is inserted in front of the directory name if you are using volume letters (for example, e:\vol2).
Volume class
Select the storage medium or storage system to ensure correct handling of
documents and their retention.
Hard Disk
Hard disk volume that provides WORM functionality or that can be
used as disk buffer. Documents are written from the buffer to the
volume without additional attributes. Use this volume class for buffers.
6. Click Finish.
Create as many hard disk volumes as you need.
Renaming disk volumes
To rename a disk volume, select it in the result pane and click Rename in the action pane.
Note: If you want to rename a disk volume, make sure that an existing
replicated disk volume is also renamed. Then start the Synchronize_Replicates
job on the remote server. This will update the volume names on both servers.
Procedure
• “Creating and modifying a disk buffer” on page 63
• “Creating and modifying a HDSK (write-through) pool” on page 92
3. Create buffers and caches as required (see sections below for details).
4. Create logical archive(s) with pools of type Single File (FS); see “Configuring
archives and pools“ on page 75.
Preconditions
The hard disks must be partitioned at the operating system level and then created in Administration Client. See “Creating and modifying disk volumes” on page 61.
Purge job
Name of the Purge_Buffer job.
Number of threads
You can change the number of threads used by the Purge_Buffer job to
improve performance (1-50 threads); default: 3.
Note: If both conditions Purge documents older than ... days and Cache
documents before purging are specified, the job runs in a way which
satisfies both conditions to the greatest possible extent. Documents that are
older than n days are also deleted even if the required storage space is
available. Conversely, documents that are more recent than n days are
deleted until the required percentage of storage space is free.
7. Schedule the Purge_Buffer job. The command and the arguments are entered
automatically and can be modified later. See “Setting the start mode and
scheduling of jobs” on page 122.
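How the two purge conditions interact can be sketched as follows (a simplified model with invented names; the real Purge_Buffer job may differ in detail):

```python
# Toy model of the two purge conditions: documents older than max_age_days
# are always purged, and newer documents are purged oldest-first until the
# target percentage of buffer space is free.

def purge(docs, capacity, max_age_days, target_free_pct):
    """docs: list of (age_days, size_mb); returns the documents kept."""
    # Condition 1: documents older than max_age_days are always deleted,
    # even if enough free space is already available.
    kept = [d for d in docs if d[0] <= max_age_days]
    # Condition 2: delete more recent documents, oldest first, until the
    # required percentage of storage space is free.
    kept.sort(key=lambda d: d[0], reverse=True)   # oldest first
    used = sum(size for _, size in kept)
    while kept and (capacity - used) / capacity * 100 < target_free_pct:
        age, size = kept.pop(0)                   # purge the oldest next
        used -= size
    return kept

docs = [(10, 40), (3, 30), (1, 20)]   # (age in days, size in MB)
kept = purge(docs, capacity=100, max_age_days=7, target_free_pct=60)
```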
Modifying a disk buffer
To modify a disk buffer, select it and click Properties in the action pane. Proceed in the same way as when creating a disk buffer. The name of the disk buffer and the Purge_Buffer job cannot be changed.
Deleting a disk buffer
To delete a disk buffer, select it and click Delete in the action pane. A disk buffer can only be deleted if it is not assigned to a pool.
Replicated volumes are attached to a replicated buffer on the Remote Standby Server
in the same way.
2. Select the designated disk buffer in the top area of the result pane.
3. Click Attach Volume in the action pane. A window with all available volumes
opens.
4. Select an existing volume. The volume must have been created previously; see
“Creating and modifying disk volumes” on page 61.
Related Topics
• “Creating and modifying disk volumes” on page 61
• “Creating and modifying a disk buffer” on page 63
Note: If a buffer is attached to a pool, it must have at least one attached hard
disk volume. Thus, the last hard disk volume cannot be detached.
2. Select the designated disk buffer in the top area of the result pane.
3. Select the volume to be detached in the bottom area of the result pane.
Job name
The job name is set during buffer creation and cannot be changed.
Command
The command is set to Purge_Buffer during buffer creation.
Arguments
The argument is set to the buffer's name during buffer creation.
Start mode
Configures whether the job starts at a certain time or after a previous job
was finished. See also “Setting the start mode and scheduling of jobs”
on page 122.
5. Click Next.
7. Click Finish.
Related Topics
• “Creating and modifying jobs” on page 121
• “Setting the start mode and scheduling of jobs” on page 122
Volume name
The name of the volume
Type
Original or replicated
Capacity (MB)
Maximum capacity of the volume
Free (MB)
Free capacity of the volume
Last Backup or Last Replication
Date when the last backup or the last replication was performed. Depends
on the type of the volume.
Host
Specifies the host on which the replicated volume resides if the disk buffer
is replicated
6. Modify the volume status if necessary. To do this, select or clear the status. The
settings that can be modified depend on the volume type.
Full, Offline
These flags are set by Document Service and cannot be modified.
Write locked
No more data can be copied to the volume. Read access is possible; write
access is protected.
Locked
The volume is locked. Read or write access is not possible.
Modified
Automatically selected if the Document Service performs a write access to an HDSK volume. If cleared manually, Modified is selected again with the next write access.
7. Click OK.
To synchronize servers:
2. Select the designated disk buffer in the top area of the result pane.
3. Select the disk buffer you want to replicate in the bottom area of the result pane.
Note: If you want to rename a replicated disk volume, you also have to
rename the original disk volume to the same new name. Then start the
Synchronize_Replicates job on the remote server. This will update the
volume names on both servers.
6. Click Finish.
A cache must have at least one assigned hard disk volume. It is also possible to
assign more disk volumes to a cache and to configure their priority.
Note: Do not confuse the local cache with Archive Cache Servers. See also
“Configuring Archive Cache Server“ on page 205.
[Figure: buffers and caches, each with one or more attached disk volumes; buffers are assigned to pools and emptied by the Purge_Buffer activity.]
Global cache
If no cache path is configured and assigned to a logical archive, the global cache is used. The global cache is usually created during installation, but no volume is assigned to it. To use the global cache, you must assign a volume. See “Adding hard disk volumes to caches” on page 71.
Depending on the time when you want to cache documents, you select the
appropriate configuration setting:
Enable caching for the logical archive
Caching option in the archive configuration; see “Configuring the archive settings” on page 88.
Caching when the document is written
If the Write job is performed, documents are also written to the cache.
Caching when the buffer is purged
Cache documents before purging option in the disk buffer properties. See “Creating and modifying a disk buffer” on page 63.
Related Topics
To create a cache:
1. Create the volumes for the caches on the operating system level.
7. Click Finish.
Note: If you want to change the priority of assigned hard disk volumes, see
“Defining priorities of cache volumes” on page 72.
Deleting a cache: To delete a cache, select it and click Delete in the action pane. It is not possible to delete a cache which is assigned to a logical archive. The global cache cannot be deleted either.
Related Topics
Caution
Be aware that your cache content becomes invalid if you change the volume priority.
Note: If you want to change the priority of hard disk volumes, see “Defining
priorities of cache volumes” on page 72.
Related Topics
• “Configuring caches” on page 69
• “Defining priorities of cache volumes” on page 72
To delete a HD volume:
Note: If you want to change the priority of hard disk volumes, see “Defining
priorities of cache volumes” on page 72.
Related Topics
• “Configuring caches” on page 69
• “Defining priorities of cache volumes” on page 72
Caution
Be aware that your cache content becomes invalid if you change the volume priority.
2. Select the designated cache in the top area of the result pane. In the bottom area
of the result pane the assigned hard disk volumes are listed.
3. Click Change Volume Priorities in the action pane. A window to change the
priorities of the volumes opens.
4. Select a volume and click the designated arrow button to increase or decrease
the priority.
5. Click Finish.
2. Select the Unavailable Volumes tab in the result pane to list all unavailable
devices.
1. Change the password on the database. Make sure to create a secure password.
2. In the console tree, expand Archive Center > Configuration and search for the
User password of database variable (internal name: AS.DBS.DBPASSWORD;
see “Searching configuration variables” on page 222).
3. Open the User password of database configuration parameter, enter the new
password and click OK.
The password is encrypted automatically.
4. For the changes to take effect, restart the Apache Tomcat and Archive Spawner
services.
1. In the console tree, expand Archive Center > Configuration and search for the
Number of minutes to wait for reconnect variable (internal name:
AS.DBS.MAXWAITTIMETORECONNECTMINUTES; see “Searching
configuration variables” on page 222).
2. Open the Number of minutes to wait for reconnect variable and enter the time
in minutes during which Archive Center tries to reconnect to the database.
Click OK.
Before you can work effectively with Archive Center, you have to perform some
configuration steps:
• Create and configure logical archives
• Create storage tiers
• Create and configure pools
• Schedule and configure jobs
• Configure security settings
• Configure the storage system
When you configure the archive system, you often have to name the configured
element. Make sure that all names follow the naming rule:
For each original archive, you give a name and configure a number of settings:
• Encryption, compression, BLOBs and single instance affect the archiving of a
document.
• Caching and Archive Cache Servers affect the retrieval of documents (see
“Configuring archive access using Archive Cache Server” on page 215).
• Signatures, SSL and restrictions for document deletion define the conditions for
document access.
• Timestamps and certificates for authentication ensure the security of documents.
• Auditing mode, retention and deletion define the end of the document lifecycle.
Some of these settings are pure archive settings. Other settings depend on the
storage method, which is defined in the pool type. The most relevant decision
criterion for their definition is single file archiving or container archiving.
You can also use retention with container archiving. In this case, consider the delete behavior, which depends on the storage method and media (see “When the retention period has expired” on page 227).
Formats to compress: All important formats, including email and office formats, are compressed by default. You can check the list and add additional formats in Configuration; search for the List of component types to be compressed variable (internal name: COMPR_TYPES; see “Searching configuration variables” on page 222).
Pools with buffer: For pools using a disk buffer, the Write job compresses the data in the disk buffer (unless Archive Center Proxy was used). The job then copies the compressed data to the medium. After compressing a file, the job deletes the corresponding uncompressed file.
Note: From version 16 on, Write jobs do not compress or encrypt data
anymore if the new compression/encryption format is present, that is if
Archive Center Proxy was used to encrypt or to compress the documents
before.
This means that documents that were only compressed on Archive Center
Proxy will not be encrypted on Archive Center.
With the introduction of OpenText™ Archive Center Proxy and the option to install
Archive Center as a cluster, only the new, streaming-enabled format is supported.
If ISO images are written, the Write job checks whether sufficient compressed data
is available after compression as defined in Minimum amount of data to write. If so,
the ISO image is written. Otherwise, the compressed data is kept in the disk buffer
and the job is finished. The next time the Write job starts, the new data is
compressed and the amount of data is checked again.
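The Write job's threshold check for ISO images amounts to a simple decision, sketched below. The function name and return values are illustrative assumptions, not the actual implementation:

```python
def decide_iso_write(compressed_bytes, minimum_amount):
    """Illustrative sketch of the Write job's ISO decision: an image is
    written only once enough compressed data has accumulated in the
    disk buffer; otherwise the data stays there until the next run."""
    if compressed_bytes >= minimum_amount:
        return "write ISO image"
    # Not enough data yet: keep it in the buffer. The next Write job
    # run compresses any new data and checks the total again.
    return "keep in buffer"

MB = 2**20
assert decide_iso_write(700 * MB, 650 * MB) == "write ISO image"
assert decide_iso_write(100 * MB, 650 * MB) == "keep in buffer"
```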
HDSK pool: When you create an HDSK pool, the Compress_<archive name>_<pool name> job is created automatically for data compression. This job is activated by default.
Important
Compress jobs are only allowed for HDSK write-through pools. Do not create
Compress jobs for any other kind of volume!
Cluster topic: You must not enable compression for HDSK pools.
By default, Single Instance Archiving is disabled. You can enable it, for example, for
email archives; see “Configuring the archive settings” on page 88.
Important
Excluding formats from SIA: If necessary, you can exclude component types (formats) from Single Instance Archiving. Microsoft Exchange and Lotus Notes emails are excluded by default because their bodies are unique, although the attachments are archived with SIA.
2. In the console tree, expand Archive Center > Configuration and search for the
List of component/application types that are NOT using SIA variable (internal
name: AS.DS.SIA_TYPES; see “Searching configuration variables”
on page 222).
3. Open the Properties window of the configuration variable and add the MIME
types to be excluded.
SIA and ISO images: Be careful when using Single Instance Archiving with ISO images: Emails can consist of several components, for example, logo, footer, and attachment, which are handled by Single Instance Archiving. Using ISO images, these components can be distributed over several images. When reading an email, several ISO images must be accessed to read all the components and recompose the original email. Caching for frequently used components and proper parameter settings will improve the read performance.
SIA for emails: For emails, archiving in single instance mode decomposes emails: attachments are removed from the original email and are stored as separate components on Archive Center. As soon as an email is retrieved from Content Server, it is checked whether the email needs to be recomposed. If so, the appropriate attachments are reinserted into the email and the complete email is passed to Content Server.
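The decompose/recompose cycle described above can be pictured with a small sketch. This is an illustrative model only; the function names and the email structure are our own assumptions, not Archive Center internals:

```python
def decompose(email):
    """Illustrative SIA sketch: attachments are split off and stored as
    separate components, so identical attachments need to be stored
    only once on the archive."""
    body = {"subject": email["subject"], "body": email["body"]}
    attachments = list(email["attachments"])
    return body, attachments

def recompose(body, attachments):
    """On retrieval, the attachments are reinserted before the complete
    email is passed back to the leading application."""
    email = dict(body)
    email["attachments"] = list(attachments)
    return email

mail = {"subject": "Invoice", "body": "See attachment.",
        "attachments": ["invoice.pdf"]}
body, parts = decompose(mail)
# The round trip restores the original email.
assert recompose(body, parts) == mail
```

The point of the sketch is only the round trip: decomposition must be lossless, because the leading application always receives the complete, recomposed email.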
Important
If you use OpenText Email Archiving or Management, do not use the Email
Composer additionally.
Configuring email (de)composing: Composing or decomposing emails can use a lot of memory, which has an impact on performance. Therefore, you can configure how large emails are handled as described below.
6.1.3 Retention
Introduction: This part explains the basic retention handling mechanism of Archive Center.
OpenText strongly recommends reading this part if you use retention periods for
documents. For more information about administering retention, see “Configuring
the archive retention settings” on page 89.
Retention period: The retention period of a document defines a time frame during which it is impossible to delete or modify the document.
The retention period – more precisely, the expiration date of the retention period – is
a property of a document and is stored in the database and additionally together
with the document on the storage medium, if possible.
Compliance: Various regulations require storing documents for a defined retention period. To facilitate compliance with regulations and meet the demand of companies, Archive
Center can handle retention of documents in cooperation with the leading
application and the storage subsystem. The leading application manages the
retention of documents, and Archive Center executes the requests or passes them to
the storage system.
Retention handling: Modern storage systems support retention periods on hardware level. Archive Center can propagate the retention period to those storage systems.
• The leading application can specify a retention period (and a retention behavior) during the creation of a document. Archive Center sets the retention period on the storage systems.
• If nothing is specified by the leading application, the document can inherit a
default retention period and a retention behavior on the Archive Center. The
retention behavior is then part of the document, i.e. modifying the archive-
specific retention does not modify the document’s retention. The default values
are configured per logical archive within OpenText Administration Client (see
“Configuring the archive retention settings” on page 89).
• When the retention period has expired, the leading application has to trigger the
deletion of the document. Archive Center then triggers the purge of the files on
the storage system.
If both explicit and default retention period are given, the leading application has
priority.
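The priority rule above reduces to a one-line decision. The sketch below is only an illustration of that rule; the function name is our own:

```python
def effective_retention(explicit_period, default_period):
    """Illustrative rule from the text: a retention period specified by
    the leading application has priority; otherwise the document
    inherits the archive's default (which may itself be None,
    meaning no retention)."""
    return explicit_period if explicit_period is not None else default_period

# Leading application wins over the archive default.
assert effective_retention(10, 7) == 10
# Without an explicit period, the archive default is inherited.
assert effective_retention(None, 7) == 7
```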
Archive Center only reacts to requests sent by the leading application; this is why we speak of retention handling in Archive Center. This avoids the situation in which a leading application still holds index information for documents that have already been deleted in Archive Center.
Changing the retention settings on the archive has no influence on already archived
documents. However, it is possible to prolong the retention period using the
ArchiveLink API.
Note: As regulations can change in the course of time, you can adapt the
retention period of documents by means of a complete document migration;
see “Migration” on page 257.
Handling of add-ons: Notes and annotations can be added to a document; they are add-ons and do not change the document itself. Components that are defined as add-ons and that can be modified during the retention period are listed in the List of addon components variable (retrieve the variable in Configuration; see “Searching configuration variables” on page 222; internal variable name: ADDON_NAMES, row1 to rowN).
Fixed retention
The retention period is known at creation time, and can be propagated to the
storage system. The storage system protects against illegal deletion: neither an
application nor Archive Center is able to delete the object on the storage
system before the retention period has expired.
Variable retention
The retention period is unknown at creation time, or can change during the
document life cycle. In this case, retention periods have to be handled by the
leading application only (i.e., the leading application sets retention to
READ_ONLY), and cannot be passed to Archive Center (i.e. no retention is set at
the archive).
Retention types: Different retention types can be applied during the creation of a document by the leading application or by inheritance of default values on the Archive Center (see “Configuring the archive retention settings” on page 89).
Retention behavior: The following table lists settings and their impact on the retention behavior (see “Configuring the archive retention settings” on page 89):
Deferred archiving
Deferred archiving prevents Archive Center from writing the content from the disk buffer to the storage system until another call removes the deferred flag from the document.
Destroy
Destroy activates overwriting the document several times before purging. Destroy is not available for all storage systems.
Terms used: The terms storage system or storage platform are used for any long-term storage device supported by Archive Center, such as Content-Addressed Storage (CAS), Network-Attached Storage (NAS), Hierarchical Storage Management systems (HSM), and others. The term delete refers to the logical deletion of a component, and the term purge describes the cleanup of content on the storage system.
Related Topics
Using retention periods requires thorough planning. The storage system, the pool
type in use, and other settings (Single File, ISO, BLOBs, single instance archiving,
etc.) can influence retention handling.
Tips
• If you use retention for archives with Single Instance Archiving (SIA), make
sure that documents with identical attachments are archived within a short
time frame and the documents in one archive have similar retention
periods. See also “Single instance” on page 77.
• You cannot export volumes containing at least one document with
nonexpired retention.
• If retention periods vary strongly, delete requests for the documents will
spread over a long period. In this case, single document storage should be
preferred.
• If documents stored within the same archive have a similar retention
period, the retention will expire within a short time window for these
documents. In this case, ISO images can be used for storage.
Retention on storage systems: The following table lists the storage systems and their retention handling.
Table 6-3: Retention on storage systems
For the concrete retention support of the storage system, refer to the storage release
notes.
Note: You can also use the AutoDelete job to find and remove documents
with expired retention. For more information, see “Other jobs” on page 118.
When the retention periods of documents have expired, documents can be deleted, mainly to
• free storage space and thus save costs,
• get rid of documents that might create liability for the company.
In this case, the document has to be deleted as soon as possible after the retention period has expired. This requirement cannot be fulfilled immediately if the document is stored within a container like an ISO image, a BLOB, or a meta-document, or is referenced by other objects (Single Instance Archiving).
Purge process
A document or component can be deleted after the retention of the document has
expired or no retention has been applied.
The leading application can delete a single component or delete the document.
Deleting a document implies that all components are deleted and then the document
itself. Due to the nature of storage, deletion cannot be handled within a transaction.
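The deletion order described above, and the fact that it is not transactional, can be sketched as follows. This is an illustrative model only; `purge_component` is a hypothetical callback standing in for the request to the storage system, not a real API:

```python
def delete_document(components, purge_component):
    """Illustrative sketch of the deletion order: all components are
    purged first, then the document itself is deleted. Because the
    sequence is not transactional, a failure can leave the document
    partially deleted, with some components already purged."""
    remaining = []
    for comp in components:
        try:
            purge_component(comp)  # hypothetical storage-system request
        except OSError:
            remaining.append(comp)  # purge failed; component survives
    if remaining:
        # The document itself is kept until every component is gone.
        return ("partially deleted", remaining)
    return ("document deleted", [])

# All purges succeed: the document can be deleted as well.
status, left = delete_document(["page1", "page2"], lambda c: None)
assert status == "document deleted" and left == []
```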
Purge process
ISO, BLOB
Delete requests cannot be propagated to the storage system.
The document is deleted in Archive Center. The content remains on the storage
system until all documents on the media or container have been deleted. The
DELETE_EMPTY_VOLUMES job purges the container files on the storage
system.
Single file pools
Delete requests for the components and documents initiate a synchronous purge
request on the storage system.
The following error situation can arise:
Storage system reports an error when the document or component is to be
deleted.
• For documents: The document information in Archive Center is deleted (as all
component information is already deleted).
• For components: The component information in Archive Center is deleted.
Note: This behavior is new as of version 10.0. In former versions, the leading applications received an error message and the component information was not deleted.
Purging content: In single file archiving scenarios, the content on the storage system is purged during the delete command. Content on ISO images cannot be purged directly, and an additional job is necessary to purge the content as soon as all content of the partition is deleted from Archive Center.
The purging capabilities depend on storage system and pool type. The following
table lists the purge behavior depending on the pool type.
• Use the DELETE_EMPTY_PARTITIONS job.
• Single File (FS): YES. Destroy is propagated to the storage system, but not all storage systems will execute the destruction.
Note: If the document’s retention date has changed on the original server due
to a migrate call, the new values are only held by Archive Center and not
written to the ATTRIB.ATR file, which holds the technical metadata of the
document. The ATTRIB.ATR file will only be updated if the document is
updated, for example, if a component is added on the original server or if the
document is copied to a different volume.
As soon as the updated ATTRIB.ATR has been replicated to the Remote Standby
Server, the new retention value will be known on the Remote Standby Server.
Export of volumes: Export of volumes is prohibited if the volume contains document components under retention. Exception: there is at least one logical copy of each component under retention on another volume. This is typically the case after a volume migration.
Note: Fast volume migration and local backups do not create logical copies of
components.
Fast Volume Migration neither changes nor applies retention periods to single documents. Only a retention period for the ISO image file is set, according to the rules listed below.
• The retention of the source image has not yet expired: The target image will
inherit the retention of the remaining period.
• The retention has already expired or was set to NONE: No retention will be
applied to the target image.
2. Click New Archive in the action pane. The window to create a new logical
archive opens.
Archive name
Unique name of the new logical archive. Consider the Naming rule for
archive components on page 75.
In the case of SAP applications, the archive name consists of two
alphanumeric characters (only uppercase letters and digits).
Description
Brief, self-explanatory description of the new archive.
Note: After creating the logical archive, default configuration values for all
settings are provided. If you want to change these settings, open the Properties
window and modify the settings of the corresponding tab.
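The SAP archive-name rule stated above (exactly two characters, only uppercase letters and digits) can be expressed as a simple check. The regular expression is our own rendering of that rule, not taken from the product:

```python
import re

# Two alphanumeric characters, uppercase letters and digits only,
# as required for SAP archive names.
SAP_ARCHIVE_NAME = re.compile(r"^[A-Z0-9]{2}$")

def is_valid_sap_archive_name(name):
    """Return True if the name satisfies the documented SAP rule."""
    return SAP_ARCHIVE_NAME.fullmatch(name) is not None

assert is_valid_sap_archive_name("A1")
assert not is_valid_sap_archive_name("a1")   # lowercase not allowed
assert not is_valid_sap_archive_name("ABC")  # more than two characters
```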
General information: The description of the new archive can be viewed and modified (open Properties in the action pane and select the General tab).
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Check the settings and modify them, if needed.
• Read documents
• Update documents
• Create documents
• Delete documents
Each permission marked for the current archive has to be checked when
verifying the signed URL. With their first request, clients evaluate the access
permissions required for the current archive and preserve this information.
With the next request, the signed URL contains the access permissions
required, if these are not in conflict with other access permission settings
(for example, set per document).
The settings determine the access rights to documents in the selected
archive which were archived without a document protection level, or if
document protection is ignored. The document protection level is defined
by the leading application and archived with the document. It defines for
which operations on the document a valid SecKey is required.
See also “Activating SecKey usage for a logical archive” on page 126.
Select the operations that you want to protect. Only users with a valid
SecKey can perform the selected operations. If an operation is not selected,
everybody can perform it.
SSL
Specifies whether SSL is used in the selected archive for authorized,
encrypted HTTP communication between the Imaging Clients, Archive
Centers, Archive Cache Servers and OpenText Document Pipelines.
Document deletion
Here you decide whether deletion requests from the leading application are
performed for documents in the selected archive, and what information is
given. You can also prohibit deletion of documents for all archives of the
Archive Center. This central setting has priority over the archive setting.
See also “Setting the operation mode of Archive Center” on page 328.
Deletion is allowed
Documents are deleted on request, if no maintenance mode is set and
the retention period is expired.
Deletion causes error
Documents are not deleted on request, even if the retention period is
expired. A message informs the administrator about deletion requests.
4. Click OK to resume.
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Settings tab. Check the settings and modify them, if needed.
Compression
Activates data compression for the selected archive.
See also “Data compression” on page 76.
Encryption
Activates data encryption to prevent unauthorized persons from accessing
archived documents.
See also “Encrypted document storage” on page 128.
Blobs
Activates the processing of BLOBs (binary large objects).
Very small documents are gathered in a meta document (the BLOB) in the
disk buffer and are written to the storage medium together. The method
improves performance. If a document is stored in a BLOB, it can be
destroyed only when all documents of this BLOB are deleted.
Single instance
Enables single instance archiving.
See also “Single instance” on page 77.
Deferred archiving
Select this option, if the documents should remain in the disk buffer until
the leading application allows Archive Center to store them on final storage
media.
Example: The document arrives in the disk buffer without a retention period and
the leading application will provide the retention period shortly after. The
document must not be written to the storage media before it gets the retention
period.
Audit enabled
If auditing is enabled, all document-related actions are audited (see
“Configuring auditing” on page 313).
Cache enabled
Activates the caching of documents to the DS cache at read access.
Cache
Pull down menu to select the cache path. Before you can assign a cache
path, you must create it. (See “Creating and deleting caches” on page 70
and “Configuring caches” on page 69).
Important
After assigning a cache to an archive you must restart Archive
Center.
4. Click OK to resume.
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Retention tab. Check the settings and modify them, if needed.
No retention
Use this option if the leading application does not support retention, or if
retention is not relevant for documents in the selected archive. Documents
can be deleted at any time if no other settings prevent it.
Infinite retention
Documents in the archive can never be deleted. Use this setting for
documents that must be stored for a very long time.
Destroy (unrecoverable)
This additional option is only relevant for archives with hard disk storage.
If enabled, the system at first overwrites the file content several times and
then deletes the file.
4. Click OK to resume.
Important
Documents with expired retention period are only deleted
• if document deletion is allowed; see “Configuring the archive security
settings” on page 87, and
• if no maintenance mode is set; see “Setting the operation mode of Archive
Center” on page 328.
Related Topics
• “Retention” on page 79
• “When the retention period has expired” on page 227
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Timestamps tab. In the Timestamps area, select one of the following
options:
Old Timestamps
Use old timestamps.
No Timestamps
No use of timestamps, i.e., Archive Center generates no timestamp for the
archived documents.
ArchiSig
Enables ArchiSig timestamp usage, i.e., an ArchiSig timestamp is generated
for the archived documents.
For a description of ArchiSig, see “Timestamps” on page 133.
None
Timestamps are not verified. Each requested document is delivered.
Relaxed
Timestamps are verified. Each requested document is delivered. If the
timestamp cannot be verified, an auditing entry is written (if auditing is
enabled).
Strict
Timestamps are verified. Requested documents are delivered only if the
timestamp is verified.
In addition, an auditing entry is written (if auditing is enabled).
5. Click OK to resume.
The procedure for creating and configuring a pool depends on the pool type. The
main differences in the configuration are:
• Usage of a disk buffer. All pool types, except the HDSK (write through) pools,
require a buffer.
• Settings of the Write job. The Write job writes the data from the buffer to the
final storage media. For all pool types, except the HDSK (write through) pools, a
Write job must be configured.
• Backup of documents in the pool(s) of a logical archive.
• For the ISO pool type, a backup jukebox can be created.
• Shadow pools can be created for all pool types, except for HDSK (write
through) pools. Multiple shadow pools can be assigned to a single original
pool. A Copy job copies the documents from the original pool to the shadow pool(s) according to the current archive settings.
To determine the pool type that suits the scenario and the storage system in use, see the Storage Platform Release Notes on My Support (https://knowledge.opentext.com/knowledge/llisapi.dll/open/12331031).
Background
• “Pools and pool types” on page 36
6. Select a Storage tier (see “Creating and modifying storage tiers” on page 111).
The name of the associated compression job is created automatically.
8. Select the pool in the top area of the result pane and click Attach Volume. A
window with all available hard disk volumes opens (see “Creating and
modifying disk volumes” on page 61).
Scheduling the compression job: To schedule the associated compression job, select the pool and click Edit Compress Job in the action pane. Configure the scheduling as described in “Configuring jobs and checking job protocol“ on page 115.
Modifying an HDSK pool: To modify pool settings, select the pool and click Properties in the action pane. Only the assignment of the storage tier can be changed.
To create a pool:
3. Click New Pool in the action pane. The window to create a new pool opens.
4. Enter a unique (per archive), descriptive Pool name. Consider the naming conventions; see Naming rule for archive components on page 75.
a. Select the pool in the top area of the result pane and click Attach Volume.
A window with all available hard disk volumes opens (see “Creating and
modifying disk volumes” on page 61).
b. Select the designated disk volume and click OK to attach it.
9. Schedule the Write job; see “Configuring jobs and checking job protocol“
on page 115.
Modifying a pool: To modify pool settings, select the pool and click Properties in the action pane. Depending on the pool type, you can modify settings or assign another buffer.
Important
You can assign another buffer to the pool. If you do so, make sure that:
• all data from the old buffer is written to the storage media,
• the backups are completed,
• no new data can be written to the old buffer.
Data that remains in the buffer will be lost after the buffer change.
Deleting a pool
If a shadow pool has been assigned to an original pool, the Delete option of the
Properties in the action pane is not available for the original pool.
Storage Selection
Storage tier
Select the designated storage tier (see “Creating and modifying storage tiers”
on page 111).
Buffering
Buffer assignment
Make sure that each buffer is assigned to one pool only (original pool or
shadow pool). Do not assign the same buffer to pools in different archives.
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 115.
Original jukebox
Select the original jukebox.
$(POOL) for the pool name and $(SEQ) for an automatic serial number. The prefix $(PREF) is defined in Configuration; search for the Volume name prefix variable (internal name: ADMS_PART_PREFIX; see “Searching configuration variables” on page 222). You can define any pattern; only the placeholder $(SEQ) is mandatory. You can also insert a fixed text. The initialization of the medium is started by the Write job.
Click Test Pattern to view the name planned for the next volume based on this
pattern.
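The placeholder substitution can be pictured with a short sketch. Only $(PREF), $(POOL), and $(SEQ) are documented placeholders; the zero-padded formatting of the serial number is our own assumption, not necessarily what Test Pattern produces:

```python
def expand_volume_name(pattern, prefix, pool, seq):
    """Illustrative expansion of a volume-name pattern: replace the
    documented placeholders with concrete values. The serial-number
    width (4 digits) is an assumption for the example."""
    return (pattern
            .replace("$(PREF)", prefix)
            .replace("$(POOL)", pool)
            .replace("$(SEQ)", f"{seq:04d}"))

# A pattern may also contain fixed text (here: underscores).
assert expand_volume_name("$(PREF)_$(POOL)_$(SEQ)",
                          "AS", "POOL1", 7) == "AS_POOL1_0007"
```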
Note: For some storage systems, the maximum size is not required; see the
documentation of your storage system on My Support (https://
knowledge.opentext.com/knowledge/llisapi.dll/Open/12331031).
Note: Make sure that the size of the smallest document to be written is less
than the difference between Minimum amount of data and Maximum
volume size.
• The size of the ISO image created by the Archive Center is larger than
the Minimum amount of data value and less than the Maximum
volume size value. If an ISO image in creation does not meet this
criterion, no image is written.
• If compression is enabled for the archive, the size of the compressed
documents (components) is applicable.
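The two conditions above amount to a simple window check, sketched here for illustration (the function and parameter names are not part of the product):

```python
def iso_image_is_written(image_size, min_amount, max_volume_size):
    """Sketch of the documented write criterion for ISO images.

    The image is written only if its size lies between the Minimum
    amount of data and the Maximum volume size. If compression is
    enabled for the archive, image_size refers to the compressed
    components.
    """
    return min_amount < image_size < max_volume_size

# A 500 MB image with a 100 MB minimum and a 4700 MB maximum is written:
print(iso_image_is_written(500, 100, 4700))  # -> True
```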
Backup
Backup enabled
Enable this option if the volumes of a pool are to be backed up locally on a second device (jukebox) of this Archive Center. During the backup operation, the Local_Backup job considers only the pools for which backup has been enabled.
Backup jukebox
Select the backup jukebox. For virtual jukeboxes with HD-WO media, we strongly recommend configuring the original and backup jukeboxes on physically different storage systems.
Related Topics
• “Creating and modifying pools with a buffer” on page 93
Storage selection
Storage tier
Select the designated storage tier (see “Creating and modifying storage tiers”
on page 111).
Buffering
Buffer assignment
Make sure that each buffer is assigned to one pool only (original pool or
shadow pool). Do not assign the same buffer to pools in different archives.
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 115.
Related Topics
3. Select the pool that should become the default pool in the top area of the result pane.
4. Click Set as Default Pool in the action pane and click OK to confirm.
• If the pool configuration does not allow local backups, for example, Single File
(VI or FS) original pools, shadow pools provide local document copies.
• If the required type of backup pool is not supported for the original pool, a
shadow pool can be used for the backup copies.
Shadow pools can be created for the following original pool types: FS, ISO, and VI.
Pool cluster
Multiple shadow pools can be assigned to an original pool. The group formed by an original pool and its assigned shadow pool(s) is called a pool cluster.
[Figure: pool cluster — archive layer A1 with original pools P1, P2, and P3]
Note: When logical archives are replicated, only the original pools are
replicated. Shadow pools assigned to the original pools are not replicated.
1. The application sends the incoming content to a logical archive. The logical
archive stores the content temporarily in the disk buffer of an original pool.
2. Write jobs copy the content from the disk buffer to the associated storage
volumes of the original pool for long-term archiving.
3. Copy jobs copy the content from the disk buffer or the attached storage volumes
of the original pool to the corresponding shadow pool(s):
• to the disk buffer of the shadow pool and then, by executing a Write job, to
the storage volumes of the shadow pool
• directly to the storage volume of the shadow pool, if the shadow pool uses an
FS-type storage volume.
The handling of Copy jobs is similar to the handling of Write jobs, except for the
error handling. The special settings for Copy jobs are described in separate
sections.
Copy jobs and copy orders
Copy jobs require copy orders to copy components from the original pool to a shadow pool. Copy orders are automatically created for copying documents that are newly archived to the original pool’s buffer or storage volumes after the shadow pool was created. However, in the following cases, specific copy orders must be explicitly created for each document:
• The documents are archived in the storage volumes of the original pool before the
shadow pool was created.
• The documents are contained in storage volumes that are attached to the original
pool after the shadow pool was created.
The Create Copy Orders utility is provided to create the missing copy orders (see
“Creating copy orders for shadow pools” on page 104).
3. Select the original pool that is to be backed up by a shadow pool in the top area of the result pane.
4. Click New Shadow Pool in the action pane. The window to create a new
shadow pool opens.
5. Enter a unique (per archive), descriptive Shadow Pool Name. Consider the
naming conventions; see Naming rule for archive components on page 75.
7. Enter the Backup and Buffering settings according to the selected shadow pool
type:
9. For FS or VI shadow pool types, select the shadow pool in the top area of the
result pane and click Attach Volume. A window with all available storage
volumes opens (see “Creating and modifying disk volumes” on page 61).
10. Select the designated storage volume and click OK to attach it.
11. Schedule the Copy job; see “Configuring jobs and checking job protocol“
on page 115.
Modifying a shadow pool
To modify the shadow pool settings, select the pool and click Properties in the action pane. Depending on the pool type, you can modify the settings.
Backup
Copy job
Enter a unique (per archive), descriptive name for the Copy job. The name can be
modified later via the Properties settings in the action pane. To schedule the
Copy job, see “Configuring jobs and checking job protocol“ on page 115.
Number of components
Maximum number of components copied during a single run of the Copy job.
Note: The Create copy orders for existing documents option is available
only when a new shadow pool is created.
Copy orders cannot be created when modifying a shadow pool’s
properties.
Buffering
• You can only assign an already existing buffer to a shadow pool. Create
the required number of buffers before creating the shadow pools (see
“Configuring buffers” on page 63).
• Original pool and shadow pool must have different buffers assigned.
• A disk buffer cannot be shared by two shadow pools of the same
original pool. The disk buffer must be unique within a pool group.
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 115.
Related Topics
Backup
Copy job
Enter a unique (per archive), descriptive name for the Copy job. The name can be
modified later via the Properties settings in the action pane. To schedule the
Copy job, see “Configuring jobs and checking job protocol“ on page 115.
Number of components
Maximum number of components copied during a single run of the Copy job.
Note: The Create copy orders for existing documents option is available
only when a new shadow pool is created.
Copy orders cannot be created when modifying a shadow pool’s
properties.
Buffering
• You can only assign an already existing buffer to a shadow pool. Create
the required number of buffers before creating the shadow pools (see
“Configuring buffers” on page 63).
• Original pool and shadow pool must have different buffers assigned.
• A disk buffer cannot be shared by two shadow pools of the same
original pool. The disk buffer must be unique within a pool group.
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 115.
Original jukebox
Select the original jukebox.
Note: For some storage systems, the maximum size is not required; see the
documentation of your storage system on My Support (https://
knowledge.opentext.com/knowledge/llisapi.dll/Open/12331031).
Note: Make sure that the size of the smallest document to be written is less
than the difference between Minimum amount of data and Maximum
volume size.
• The size of the ISO image created by the Archive Center is larger than
the Minimum amount of data value and less than the Maximum
volume size value. If an ISO image in creation does not meet this
criterion, no image is written.
• If compression is enabled for the archive, the size of the compressed
documents (components) is applicable.
Related Topics
• “Creating and modifying pools with a buffer” on page 93
• “Pools and pool types” on page 36
Backup
Copy job
Enter a unique (per archive), descriptive name for the Copy job. The name can be
modified later via the Properties settings in the action pane. To schedule the
Copy job, see “Configuring jobs and checking job protocol“ on page 115.
Number of components
Maximum number of components copied during a single run of the Copy job.
Note: The Create copy orders for existing documents option is available
only when a new shadow pool is created.
Copy orders cannot be created when modifying a shadow pool’s
properties.
Buffering
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 115.
Documents written in parallel
Number of simultaneously written documents.
Related Topics
• “Creating and modifying pools with a buffer” on page 93
• “Pools and pool types” on page 36
Copy order utility
The Create Copy Orders utility creates the copy orders if required.
There are various ways to start the copy order utility:
• Check the Create copy orders for existing documents check box when creating a
new shadow pool or attaching a new storage volume to an original pool.
• Select Archive Server > System > Utilities > Create Copy Orders.
• Select Create Copy Orders in the action pane of the original pool.
Create Copy Orders is displayed only if at least one shadow pool is defined for
the original pool.
• Select Create Copy Orders in the context menu of a volume attached to the
original pool if, at least, one shadow pool is defined for the original pool.
This option is typically used if copy orders were not created when the volume
was attached to the original pool.
Notes
• If multiple shadow pools are defined, the copy order utility creates the copy
orders for all existing shadow pools. Therefore, check Create copy orders for
existing documents only when creating the last shadow pool for an original
pool.
• Copy orders can only be created during the creation of a shadow pool. Copy
orders cannot be created when modifying a shadow pool’s properties.
• The copy order utility creates copy orders for all storage volumes of the
original pool. This may be time consuming. Always wait until the Create
Copy Orders status is FINISHED. However, working on the original pool is
possible while the copy order utility is running.
If the copy order utility is started from the Attach Volume dialog, working
on the original pool is not possible while the utility is running.
• Only one instance of the copy order utility can run at the same time.
• Restarting the server while the copy order utility is running stops the utility
before all required copy orders have been created. The copy order utility
does not resume copy order processing when the server restarts. To get all
required copy orders, you must start the copy order utility again.
1. In the New Shadow Pool window of the selected shadow pool type, check
Create copy orders for existing documents in the Backup settings.
2. Complete the Backup and Buffering settings according to the selected shadow
pool type and click Finish to create the shadow pool.
4. Click Close when the Creating Copy Orders for pool ... utility has finished.
The copy orders for all document components in the buffer and storage
volumes of the original pool have been created.
1. When the Copy job run is completed, check for copy errors.
a. Select Archive Server > System > Jobs and select the Copy job. The job protocol shows the status of the copy errors.
• Pending copy orders are executed with the next run of the Copy job.
Run the Copy job again to clear all Pending-status copy orders.
• Failed copy orders are not executed with the next run of the Copy job.
b. To investigate failed copy orders, use the Report Shadow Copy Errors
utility (see “Report of shadow copy errors” on page 107).
2. Use the Clear Shadow Copy Errors utility to clear shadow copy errors from the
Copy job (see “Clearing shadow copy errors” on page 107).
1. Select Archive Server > System > Utilities > Report Shadow Copy Errors.
2. Enter the Archive Name and Shadow Pool Name.
3. Select the type of error report.
Note: Failed copy orders are not executed with the next run of the
Copy job.
• Detailed report of each error
Detailed report for each error, including error type, document ID, and
component name.
4. Click Run.
• Failed-status copy orders of a Copy job can be set to the Pending status.
Pending copy orders are executed with the next run of the Copy job.
• Copy orders for nonexistent components (ERROR_SOURCE_MISSING errors) can be
deleted from the Copy job; see “To delete copy orders for nonexistent
components from the Copy job:“ on page 108.
1. Select Archive Server > System > Utilities > Clear Shadow Copy Errors.
2. Enter the Archive Name and Shadow Pool Name.
3. Enter the Error Type:
• Enter a specific error type retrieved from the Report Shadow Copy Errors
report.
• Leave Error Type empty to reset all Failed-status copy orders to the
Pending status.
• Reset errors
The Failed-status copy orders of the specified Error Type are reset to the
Pending status. The Pending copy orders are executed with the next run of
the Copy job.
Note: To select Delete error, you must specify a valid error type from
the detailed Report Shadow Copy Errors report in the Error Type
field.
Caution
Delete error deletes the copy order. The copy order is no longer
executed when running the Copy job.
Use Delete error only for ERROR_SOURCE_MISSING errors, that is, to
delete copy orders for nonexistent components from the Copy job.
Contact OpenText Customer Support before deleting copy orders
for any other copy error type.
5. Click Run.
To delete copy orders for nonexistent components from the Copy job:
1. If, after re-running a Copy job, there are still Failed-status copy orders reported, check the detailed report for ERROR_SOURCE_MISSING errors. This copy error indicates a copy order for a nonexistent component. See “Report of shadow copy errors” on page 107.
3. Use the Clear Shadow Copy Errors utility (see “Clearing shadow copy errors”
on page 107).
Caution
Use Delete error only for ERROR_SOURCE_MISSING errors, that is, to
delete copy orders for nonexistent components from the Copy job.
Contact OpenText Customer Support before deleting copy orders
for any other copy error type.
• Click Run.
The original pool containing the defective storage volumes is replaced by a new pool
of the same pool type. The existing shadow pool is kept as backup pool.
Note: The recovery procedure described here also works if the type of the
existing shadow pool is different from the type of the original pool.
Prerequisites
• At least one shadow pool is assigned to the original pool with the defective
storage volumes.
• Data of the original pool are contained in the original pool’s buffers and/or the
shadow pool(s).
Note: Data that are exclusively stored in the defective storage volumes, that
is, all data that are not additionally stored in a buffer or shadow pool,
cannot be recovered by this procedure and may be lost.
1. Create new storage volumes (see “Configuring disk volumes” on page 60).
a. Create new storage volumes for the recovered original pool if the pool type
is FS or VI. Do not attach the new volumes yet.
b. One additional local hard disk volume as temporary disk buffer volume
Note: If the original pool’s disk buffer is shared with other pools, for
example, in different archives, you must create spare hard disk volumes to
be used for the disk buffers of the new pool that replaces the original pool.
2. In the disk buffer of the original pool, set all storage volumes to Write locked
(see “Checking and modifying attached disk volumes” on page 67).
3. Restart the dsaux spawner service.
> spawncmd restart dsaux
5. Create an additional shadow pool for the original pool (see “Creating and
configuring shadow pools” on page 97).
a. Specify the original pool’s type as pool type for the new shadow pool.
Select the Create copy orders for existing documents option (see “Creating
copy orders when defining new shadow pools” on page 105).
b. Assign the disk buffer created in Step 4 to the new shadow pool.
c. Wait until the Create Shadow Pool wizard utility has completed its run.
Assign the storage volumes created in step 1a to the new shadow pool.
d. Wait until the Create Copy Orders utility has completed its run (see
“Creating copy orders when defining new shadow pools” on page 105).
6. Copy all documents from the original pool to the existing shadow pool and to
the newly created shadow pool until no more documents can be copied. To do
so, run the following jobs:
• Purge jobs for the disk buffers of the existing and new shadow pools
7. Detach all storage volumes from the original pool (see “Detaching a volume
from a disk buffer” on page 65).
Note: For the recovery procedure, all storage volumes are considered
defective.
8. Restore the new shadow pool (created in Step 5) as original pool. Use Restore To
Original Pool in the context menu of the shadow pool.
9. If the old original pool’s disk buffer was shared with other pools: Clean up the
volumes of this disk buffer.
• Make sure that 10 minutes have passed since restarting the dsaux Spawner
service (see Step 3).
• Run the Write jobs and the Purge jobs for the pools sharing the old original
pool’s disk buffer.
10. Using the Export Volumes utility, export all hard disk volumes from the
original pool’s buffer (see “Exporting volumes” on page 230).
11. Attach the spare hard disk volumes (see step 1) to the new original pool’s disk
buffer.
12. Clean up the orphaned ds_job entries for the old original pool that has
disappeared:
a. Run clnJobs -d -x
• Business-critical
Description: Important to the enterprise, reasonable performance, good
availability
• Accessible Online Data
Description: Low access
• Nearline Data
Description: Rare access, large volumes
1. Select Storage Tiers in the System object. The present storage tiers are listed in
the result pane.
4. Click Finish.
Modifying storage tiers
To modify a storage tier, select it and click Properties in the action pane. Proceed in the same way as when creating a storage tier.
Related Topics
Important
In case you are using Archive Cache Server, consider that a re-initialization
in secure environments can only work if the current certificates are available
on the Archive Cache Server. To avoid problems, the Update documents
security setting must be deselected before certificates are enabled; see Step 3.
To enable certificates:
1. Select the logical archive in the Original Archives or Replicated Archives object
of the console tree.
Tip: Alternatively, you can also navigate to System > Key Store >
Certificates.
2. Select the Certificates tab in the result pane.
For scenarios using an Archive Cache Server, continue with Step 3.
Otherwise, continue with Step 4.
4. Select the respective certificate by its name (in the result pane).
3. In the Change Server Priorities window, select the server(s) to add from the Related servers list on the left.
Click the arrow button to move the selected server(s) to the Set priorities list.
4. Use the arrows on the right to define the order of the servers: Select a server and click the up or down arrow to move the server up or down in the list, respectively.
If you want to remove a server from the priorities list, select the server to remove and click the remove button.
5. Click Finish.
Pagelist job
See “Configuring security settings for pagelist job” on page 119 for further details on the pagelist job.
Command Description
Write_CD Writes data from disk buffer to storage media as ISO images, belongs
to ISO pools.
ShadowCopy Copies the documents of an original pool to the specified shadow
pool.
AutoDelete Finds and optionally deletes all documents with expired retention;
syntax:
AutoDelete [-d <duration>] [-g <graceperiod>] <mode>
<archive>
Arguments:
• -d <duration>
Optional; max. processing time in seconds, default unlimited, min.
1s
• -g <graceperiod>
Optional; number of days since the retention has expired, default
10 d, min. 0 d
• <mode>
QUERY or Q: report number of documents to be deleted; DELETE or
D: find and destroy; REPORT or R: report deleted documents
• <archive>
Name of the logical archive
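Following this syntax, a hypothetical Arguments entry for a reporting run could look like this (the archive name A1 and the values are made-up examples):

```
-d 3600 -g 30 QUERY A1
```

This would limit the run to one hour, consider only documents whose retention expired at least 30 days ago, and report the number of documents that would be deleted without actually deleting them.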
Copy_Back Transfers cached documents from the Archive Cache Server to the
Archive Center. The Copy_Back job is disabled by default and must
be enabled only for Archive Centers with “write back” mode enabled.
See “Configuring Archive Cache Server“ on page 205.
start<DPname> Starts the Document Pipelines for the import scenarios:
• Import content (documents/data) with extraction of attributes from
content (CO*),
• Import content (documents/data) and attributes (EX*),
• Import forms (FORM).
For more information, see OpenText Document Pipelines - Overview and
Import Interfaces (AR-CDP).
The certificate is sent to the Archive Center with the putCert command or imported
with the Import Certificate for Authentication utility (see “Configuring a certificate
for authentication” on page 146). You can use the certtool utility (command line)
to create a certificate, or to generate a request to get a trusted certificate. For more
information, see “Creating a certificate using the Certtool” on page 144.
Always signing URLs
You can configure the pagelist job to always sign the URL.
To always sign the URL for the pagelist job:
Background
• “Certificates” on page 141
2. Depending on the actual status of the scheduler, click Start Scheduler or Stop Scheduler in the action pane to change the status. The actual status is displayed in the first line of the Jobs tab.
To start and stop certain jobs, see “Starting and stopping jobs” on page 120.
2. Select the Jobs tab in the top area of the result pane. The jobs are listed.
4. Depending on the actual status of the job, click Start or Stop in the action pane
to change the status of the job.
2. Select the Jobs tab in the top area of the result pane. The jobs are listed.
4. Click Enable or Disable in the action pane to change the status of the job.
1. To check, create, modify and delete jobs, select Jobs in the System object in the
console tree.
2. Select the Jobs tab in the top area of the result pane. The jobs are listed.
3. Select the job you want to check. The latest message of this job is listed in the
bottom area of the result pane.
4. Click Edit to check details of the job. See also “Creating and modifying jobs”
on page 121.
Pool-related Copy jobs are configured for backing up documents in a shadow pool.
See also “Creating and configuring shadow pools” on page 97. The name of a Copy
job is specified during the creation of the shadow pool and can be modified later.
To create a job:
2. Select the Jobs tab in the top area of the result pane.
3. Click New Job in the action pane. The wizard to create a new job opens.
4. Enter a name for the new job. Select the command and enter the arguments
depending on the job.
Name
Unique name of the job that describes its function so that you can
distinguish between jobs having the same command. Do not use blanks and
special characters. You cannot modify the name later.
Command
Select the job command to be executed. See also “Important jobs and
commands” on page 115.
Argument
Entries can expand the selected command. The entries in the Arguments
field are limited to 250 characters. See also “Important jobs and commands”
on page 115.
6. Depending on the start mode, define the scheduling settings or the previous job.
See also “Setting the start mode and scheduling of jobs” on page 122.
Modifying jobs To modify a job, select it and click Edit in the action pane. Proceed in the same way
as when creating a job.
• at a certain time,
• when another job is finished,
• when another job is finished with a certain return value,
• at a certain time when a job has finished.
Start Mode
Specification of the start mode. Check the mode to define specific settings.
Scheduled
If you use this start mode, you can define the start time of the job, specified
by month, day, hour and minute. Thus, you can define daily, weekly and
monthly jobs or define the repetition of jobs by setting a frequency (hours or
minutes).
• Jobs accessing jukebox drives must not collide: different Write jobs,
Local_Backup, Synchronize_Replicates (Remote Standby Server) and
Save_Storm_Files.
• Only one drive is used for Write jobs on WORM/UDO. Therefore, only one WORM/UDO can be written at a time. That means that only one logical archive can be served at a time.
• Backup jobs need two drives, one for the original, one for the backup media.
The entries in the job protocol are regularly deleted by the SYS_CLEANUP_PROTOCOL
job that usually runs weekly. You can modify the maximum age and number of
protocol entries in Configuration, search for the Max. number of job protocol
entries variable (internal name: ADMS_PROTOCOL_MAX_SIZE; see “Searching
configuration variables” on page 222).
2. Select the Jobs tab in the top area of the result pane.
2. Select the Protocol tab in the top area of the result pane. All protocol entries are
listed. Protocol entries with a red icon are terminated with an error. Green icons
identify jobs that have run successfully.
3. Select a protocol entry to see detailed messages in the bottom area of the result
pane.
2. Select the Protocol tab in the top area of the result pane. All protocol entries are
listed.
Configuration and administration
The main GUI elements used for configuration and administration of security settings include:
• The Archives node: each time a new archive is added or new pools are created,
security settings are to be configured (Security tab of the Properties dialog).
• The Key Store in the System object of the console tree: used for configuration of
certificates and system keys.
Further information
You can find more information on security topics in the “Security” folder on My Support (https://knowledge.opentext.com/knowledge/llisapi.dll/open/15491557).
Configuration settings concerning security topics are described in more detail in
“Configuration parameter reference” on page 335, in particular:
• “Archive Server“ on page 343
• “Security settings” on page 384
• “Key Export Tool (RCIO)” on page 395
• “Timestamp Server (TSTP)” on page 445
To archive “clean” documents, you must protect the documents from viruses before archiving. Archive Center itself does not perform any virus checks.
Signed URLs are verified using public keys within certificates; see “Certificates”
on page 141.
If SecKeys are used, the administrator must provide the necessary certificate comprising the appropriate public key for each application. Thus, the administrator must send or import the certificates with their public keys to the Archive Center. In addition, the administrator must configure the usage of SecKeys on the Archive Center.
SecKey usage
A SecKey requests the right of access. When a document is accessed, Archive Center checks whether the SecKey must be verified.
Procedure
These signed URLs must include information on these permissions. If the SecKey of
a request does not meet the permissions required by the archive, access is denied.
Each permission marked for the current archive has to be checked when verifying
the signed URL.
Activating SecKey usage
Select the operations that you want to protect. Only client applications using a valid SecKey can perform the selected operations. If an operation is not selected, everybody can perform it.
To activate SecKeys:
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Check the settings and modify them, if needed.
• Read documents
• Update documents
• Create documents
• Delete documents
4. Click OK to resume.
1. Create a certificate with the certtool utility (command line), or create the
request and send it to a trust center (see “Generate self-signed certificates”
on page 144 and “Request a certificate from a trust center ” on page 145).
Example of a result: the <key>.pem file contains the private key and is used to sign the URL. <cert>.pem contains the public key and the certificate that Archive Center uses to verify the signatures.
2. Store the certificate and the private key on the server of your leading
application (see the corresponding Administration Guide for details). Correct
the path, if necessary, and add the file names.
By storing the certificates in the file system, they are recognized by Enterprise
Scan and the client programs.
Important
For security reasons, limit the read permission for these directories to
the system user (Windows) or the archive user (UNIX/Linux).
3. To provide the certificate to the Archive Center use one of the following
options:
Or:
• Send the certificate with the putCert command; see “Send the certificate to
an Archive Center (putCert)” on page 145.
Repeat this step if you want to use the certificate for several archives.
Procedure
2. Send the certificate to Archive Center using the OAHT transaction. There, you
enter the target Archive Center and the archives for which the certificate is
valid.
Document encryption can be activated per logical archive. It is performed when the
documents are transferred to the buffer of the logical archive for temporary storage.
For document encryption, a symmetric key (system key) is used. The administrator
creates this system key and stores it in the Archive Center's keystore. The system key
itself is encrypted on the Archive Center with the Archive Center’s public key and
can then only be read with the help of the Archive Center's private key. RSA
(asymmetric encryption) is used to exchange the system key between the Archive
Center and the remote standby server.
Keys per archive
One system key can be active for all archives of Archive Center, or each archive can use its own key. Several system keys can be used in parallel for different archives.
Encryption for documents in HDSK pools (write through)
HDSK pools do not use a buffer. To encrypt documents, use the designated Compress_ job. For more information, see “Data compression” on page 76.
Procedure
• “Activating encryption usage for a logical archive” on page 129
• “Creating a system key for document encryption” on page 129
• “Exporting and importing system keys” on page 131
• “Configuring a certificate for document encryption” on page 149
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Activate Encryption (mark the check box).
4. Click OK to resume.
System keys are encrypted using the encryption certificate (see “Configuring a
certificate for document encryption” on page 149).
Caution
Be sure to store the system key securely, so that you can re-import it if
necessary.
If the key gets lost, the documents that were encrypted with it can no
longer be read!
Do not delete any key if you set a newer one as current. The old key is still
used for decryption.
3. Application Layer scenario only: Define the system key folder to which the keys
are exported.
Click System Key Folder in the action pane and specify the path to the Export
folder.
You can split the contents of the key store into different files (Number of token
files, maximum: 8). Further, you can specify how many of them must be
reimported at least to restore the complete key store (Number of required
token files).
Notes
• Specifying the system key folder is required for scenarios using the
Application Layer. Business administrators can trigger the creation of a
new system key from within the Archive Center Administration web
client. In this case, the new system key is exported to the system key
folder automatically.
• Collections cannot use encryption before the system key folder is set.
4. Click Generate System Key in the action pane. A new key is generated.
5. Unless using the Application Layer: Export the new system key using the recIO
command line tool and store it in a safe place (see “Exporting and importing
system keys” on page 131).
6. Make a backup of the key/certificate pair used by recIO to encrypt the system
keys:
Copy the <OT config AC>/config/setup/as.pem file and store it, together with
the exported system key, in a safe place.
Important
In case of system failure or restore scenarios, it can be vital to have
backups of the system keys and the related certificates.
7. Select the created system key and click Set as current key. A key can only be set
as current key if it is successfully exported (see Step 5).
New documents are now encrypted with the current key, while decryption
always uses the appropriate key.
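The current-key behavior described above can be sketched as follows. This is a minimal illustration only: the key IDs are taken from the recIO examples in this section, but the XOR "cipher" and the record layout are purely illustrative assumptions, not Archive Center's actual encryption scheme.

```python
# Minimal sketch of current-key encryption (illustrative only; the XOR
# "cipher" and record layout are NOT Archive Center's actual scheme).
keys = {
    "EB9C088BFA4F1847": bytes.fromhex("11" * 16),  # older key, kept for decryption
    "7CB5CA683339CC60": bytes.fromhex("22" * 16),  # key marked "Set as current key"
}
current = "7CB5CA683339CC60"

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher for illustration.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt(doc: bytes) -> dict:
    # New documents are always encrypted with the current key ...
    return {"key_id": current, "data": xor(doc, keys[current])}

def decrypt(record: dict) -> bytes:
    # ... while decryption uses the key the document was stored with.
    return xor(record["data"], keys[record["key_id"]])
```

Because each stored record carries its key ID, old keys must never be deleted: they remain necessary to decrypt documents archived before the current key was set.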
Handling for replicated archives
The Synchronize_Replicates job updates the system keys and certificates between
replicated Archive Centers before it synchronizes the documents. The system keys
are transmitted encrypted.
If you do not want to transmit the system keys through the network, you can also
export them from the original server to an external data medium and re-import
them on the remote standby server. See “Exporting and importing system keys”
on page 131.
L
Lists the contents of the System key node (without the keys themselves) in a
table.
The user must log on.
Example:
E
Exports the contents of the System key node. Use the export in particular to
store the system keys for document encryption.
The user must log on and specify a path for the export files. The option -t NN:MM
splits the contents of the key store into MM different files (maximum: 8). At least
NN files must be reimported to restore the complete key store.
Example:
C:\Program Files\OpenText\Archive Server 10.5.0\bin>recIO E -t 3:5
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.5.0.332
IMPORTANT: -----------------------------------------------------
recIO 10.5.0.332 Copyright © 2013 Open Text Corporation
Please authenticate!
User :dsadmin
Password :
Writing keystore with 2 system-keys to 5 token-files (3 required to restore)
Token[1/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey.pem
Token[2/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey2.pem
Token[3/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey3.pem
Token[4/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey4.pem
Token[5/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey5.pem
V
Verifies the contents of the System key node against the exported files.
The user must log on and specify the path for the exported data. Then the
exported data is compared with the key store on the Archive Center.
Example:
C:\Program Files\OpenText\Archive Server 10.5.0\bin>recIO V
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.5.0.332
IMPORTANT: -----------------------------------------------------
recIO 10.5.0.332 Copyright © 2013 Open Text Corporation
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey2.pem
Token[2/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey3.pem
Token[3/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey4.pem
key 1 : EB9C088BFA4F1847 : OK
key 2 : 7CB5CA683339CC60 : OK
D
Displays the information on the exported files. The information is shown in a
table.
Example:
C:\Program Files\OpenText\Archive Server 10.5.0\bin>recIO D
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.5.0.332
IMPORTANT: -----------------------------------------------------
recIO 10.5.0.332 Copyright © 2013 Open Text Corporation
Token[1/?] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey2.pem
Token[2/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey3.pem
Token[3/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey5.pem
idx ID created origin
---------------------------------------------------
1 EB9C088BFA4F1847 2014/01/14 11:58:23 <servername>
2 7CB5CA683339CC60 2014/02/20 11:41:20 <servername>
I
Imports the saved contents of the System key node.
The user must log on and specify the path for the exported data. The data in the
System key node is restored, encrypted with the Archive Center's public key
and sent to the administration server. The results are displayed. Keys already
contained in the Archive Center's store are not overwritten.
Example:
C:\Program Files\OpenText\Archive Server 10.5.0\bin>recIO I
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.5.0.332
IMPORTANT: -----------------------------------------------------
recIO 10.5.0.332 Copyright © 2013 Open Text Corporation
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey5.pem
Token[2/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey4.pem
Token[3/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey2.pem
8.4 Timestamps
Timestamps are used to verify that documents have not been altered since archiving
time. The verification process checks these timestamps; a timestamp service is
required for this. Creating a timestamp works as follows: the computer calculates a
unique number, a cryptographic checksum or hash value, from the content of the
document. The timestamp provider (a qualified Time Stamping Authority or an
Archive Timestamp Server) adds the time to this checksum, calculates a checksum of
the resulting object, and signs the new checksum with its private key.
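The steps above can be sketched conceptually. The SHA-256 algorithm and the `sign()` callback are illustrative assumptions for this sketch, not the provider's actual protocol (real timestamp services follow standards such as RFC 3161):

```python
import hashlib
import time

def create_timestamp(document: bytes, sign) -> dict:
    """Conceptual sketch of timestamp creation. SHA-256 and the sign()
    callback are illustrative assumptions, not the provider's protocol."""
    # Checksum (fingerprint) calculated from the content of the document:
    fingerprint = hashlib.sha256(document).hexdigest()
    # The timestamp provider adds the time to this checksum ...
    stamped = fingerprint + "|" + str(int(time.time()))
    # ... creates a checksum of the resulting object ...
    digest = hashlib.sha256(stamped.encode()).hexdigest()
    # ... and signs the new checksum with its private key.
    return {"stamped": stamped, "signature": sign(digest)}
```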
The signature is stored together with the document component. When a document is
requested, Archive Center can verify whether the component was modified after
storing it by looking at the signature. Archive Center needs the public key of the
timestamp provider’s certificate for verification. The OpenText products Windows
Viewer or Java Viewer can be used to display the verification result.
ArchiSig timestamps
With ArchiSig timestamps, the timestamps are not added per document, but for
containers of hash trees calculated from the documents (Figure 8-1).
[Figure 8-1: Hash tree with timestamp. Document fingerprints (hash values) are
combined pairwise, h5=Hash(h1|h2) and h6=Hash(h3|h4), up to the root
h7=Hash(h5|h6), which receives the timestamp.]
A job builds the hash tree that consists of hash values of as many documents as
configured, and adds one single timestamp. Thus, you can collect, for example, all
documents of a day in one hash tree. Only one timestamp per hash tree is required.
The verification process needs only the document and the hash chain leading from
the document to the timestamp but not the whole hash tree (Figure 8-2).
[Figure 8-2: Hash chain leading from document d1 to the timestamp.]
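The hash tree and the per-document hash chain described above can be sketched as follows. This is a minimal illustration assuming SHA-256 and a simple binary tree; Archive Center's actual tree and chain formats may differ:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a binary hash tree bottom-up; returns the levels, leaves first.
    Only the root of the tree receives a timestamp."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        levels.append([h(cur[i] + cur[i + 1]) if i + 1 < len(cur) else cur[i]
                       for i in range(0, len(cur), 2)])
    return levels

def hash_chain(levels, index):
    """Sibling hashes needed to recompute the root from one document hash."""
    chain = []
    for level in levels[:-1]:
        sibling = index ^ 1
        if sibling < len(level):
            chain.append((sibling < index, level[sibling]))
        index //= 2
    return chain

def root_from_chain(leaf, chain):
    """Verification needs only the document hash and its chain, not the tree."""
    node = leaf
    for sibling_is_left, sibling in chain:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node
```

Recomputing the root from a single leaf and its chain, then checking the root's timestamp signature, verifies one document without touching any other document in the tree.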
Document timestamps
Each document component gets a timestamp when it arrives in the archive, or more
precisely: when it arrives in the disk buffer and is known to the Document Service.
This (old) method requires a huge number of timestamps, depending on the number
of documents. Thus, it is available only for archives that used timestamps in former
Archive Center versions. You can migrate these timestamps to ArchiSig timestamps;
see “Migrating existing document timestamps” on page 141.
Configuration
You can set up signing documents with timestamps and the verification of
timestamps, including the response behavior, for each archive; see “Configuring the
archive settings” on page 88. Consider the recommendations given above.
If you use both methods in parallel, the document timestamp secures the document
until the hash tree is built and signed. As this time period is short, a document
timestamp is sufficient for these documents, while the hash tree, in general, gets a
timestamp created with a certificate of an accredited provider. This trusted
certificate is used for verification.
Timestamps and hash trees may become invalid or unsafe. To prevent this, they can
be renewed. See “Renewing timestamps of hash trees” on page 140 and “Renewing
hash trees” on page 140.
Related Topics
• “Configuring Archive Timestamp Server“ on page 201
Procedure
• “Basic timestamp settings” on page 135
• “Activating and configuring timestamp usage” on page 90
• “Creating a hash tree” on page 139
• “Configuring a certificate for timestamp verification” on page 149
Configuration
The following description includes the most relevant parameters for ArchiSig
timestamps. In general, you do not need to modify the further parameters described
in “Configuring connection parameters” on page 136.
1. Select Configuration, and one by one, search for the following variables (see
“Searching configuration variables” on page 222).
Archive Timestamp Server
Further, you can use OpenText Archive Timestamp Server for testing. Archive
Timestamp Server is not a TSA and is not recommended for production systems. For
more information, see “Configuring Archive Timestamp Server“ on page 201.
Timestamps (old)
Classic timestamps are neither supported nor recommended with a timestamping
service over the Internet. The cost would be extremely high, since every document
component is signed and you would be charged for each timestamp. Finally, dsSign
does not communicate using SSL/TLS.
8.4.2.2 QuoVadis
Introduction
QuoVadis offers qualified timestamps over the Internet. This kind of service
provides the highest level of trustworthiness.
Timestamps (old)
Classic timestamps are neither supported nor recommended with a timestamping
service over the Internet.
Example: tshost1:32001;tshost2:10318
1. In the Archives object of the console tree, create a new archive (for example,
with the name ATS) and a pool named ATS_POOL to define where the hash trees
are stored.
Important
The name of the pool is determined by the Pool for timestamps
configuration variable (internal name: AS.DS.TS_POOL). Its default
value is ATS_POOL, which means that you must call the pool ATS_POOL.
If the name of the pool and the value of the variable do not match, the job
building the hash tree will fail.
2. In the System > Jobs object of the console tree, create jobs to build the hash
trees. You need one job for each archive that uses timestamps.
See also “Configuring jobs and checking job protocol“ on page 115.
Command
hashtree
Arguments
Archive name
Scheduling
If you use ArchiSig timestamps, schedule a nightly job. If the hash trees are
written to a storage system, make sure that the job is finished before the
Write job starts.
If you need to renew your hash trees, contact OpenText Customer Support.
You need only one new timestamp per hash tree. No access to the documents is
necessary.
To renew timestamps:
3. In the resulting list, find the distinguished subject name(s) of your timestamp
service (subject of the service’s certificate).
Note: The name of the logical archive (<archive name>) must always be
included in the dsHashTree commands.
The utility finds all timestamps for the given archive that were created with the
certificate indicated in the command. It calculates hash values for the timestamps
and builds new hash trees. Each hash tree is signed with a new timestamp.
Note: Do not delete the old time stamp server certificate. It may still be used
for another logical archive.
Important
You can migrate document timestamps only once! Never disable ArchiSig
timestamps after starting migration.
2. In a command line, run the timestamp migration tool for each pool to be
migrated:
dsReSign -p <pool name>
3. Call the hash tree creation tool for each archive with migrated timestamps:
dsHashTree <archive name>
The tools calculate hash values from the existing timestamps, build hash trees, and
get a timestamp for each tree.
8.5 Certificates
Certificates
A certificate is an electronic document that uses a digital signature to bind a public
key to information about the client issuing this public key (information such as the
name of a person or an organization, their address, and so forth). The certificate can
be used to verify that a public key belongs to an individual; for example, an archive
uses this information to verify requests based on signed URLs from various clients.
Certificate use cases
Archive Center uses certificates for various use cases:
• Authentication certificates, used for signed URLs; see “Configuring a certificate
for authentication” on page 146.
PEM files
Privacy Enhanced Mail Security Certificate (PEM) files are encoded certificate files
used to store the public key and the certificate. Archive Center uses various PEM
files.
Certificates for Remote Standby
In a Remote Standby environment, the Synchronize_Replicates job copies the
certificates for authentication. Only enabled certificates are copied. The certificate on
the remote server is disabled after synchronization. To enable it, follow the
instructions in “Enabling a certificate” on page 143.
To establish the validity of someone's certificate, you can trust that a third party
has gone through the process of validating it. A Certification Authority (CA), for
example, is responsible for ensuring that, prior to issuing a certificate, it carefully
checks that the public key portion really belongs to the purported owner. Anyone
who trusts the CA will automatically consider any certificate signed by the CA to
be valid.
To check a certificate:
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective tab and the designated certificate and click View
Certificate in the action pane.
General
This tab provides detailed information to identify the certificate
unambiguously: the certificate's issuer, the duration of validity, and the
fingerprint.
Certification Path
Here you can follow the certificate's path from the root to the current
certificate. A certificate can be created from another certificate. The path
shows the complete derivation chain. You can also view the parent
certificate information from here.
To enable a certificate:
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective certificate by its name and click Enable in the action pane.
To delete a certificate:
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective tab and the designated certificate and click Delete
Certificate in the action pane.
If you have to manage a large number of certificates, make sure that the AuthIDs and
the names of the certificates are unique.
Send your <requestOutFile> file to a trust center. The trust center will return a
certificate including the public key to you. The certificate from the trust center must
be in PEM format.
After using the Refresh action (System > Key Store > Certificates), the certificates
sent using putCert are displayed in Administration Client.
Note: putCert cannot be used with SSL. To transfer the certificate to the
server, switch the SSL settings for the logical archive to May use or Don’t use.
Alternatively, if provided, you can also use dsh to send the certificate to Archive
Center.
• Global (assigned to all archives)
A global certificate can be imported (that is, added) and assigned to all logical
archives (globally) at once. Global certificates are valid for all logical archives,
including archives that will be created later on. A global certificate can only be
enabled or disabled as a whole.
• Assigned to a single archive (assigned to one archive only)
These certificates are valid for a single logical archive of the Archive Center.
Procedure
• “Importing an authentication certificate” on page 147
• “Granting privileges for a certificate” on page 148
• “Checking a certificate” on page 142
• “Enabling a certificate” on page 143
• “Generate self-signed certificates” on page 144
• “Send the certificate to an Archive Center (putCert)” on page 145
1. Select the Certificates node of the Key Store in the System object of the console
tree.
In the console tree, select System > Key Store > Certificates.
4. Click Browse to open the file browser for the Archive Center file system and
select the designated Certificate. Click OK to resume.
For example, a scan station may not be allowed to delete documents. Thus, the
privilege “delete documents” must not be set in the certificate that is used to
communicate with the scan station.
Important
Any change to the settings affects all archives that use this certificate!
To grant privileges:
2. Select the Certificates entry in the result pane and then the Global tab. All
imported certificates are listed.
3. Select the designated certificate and click Change Privileges in the action pane.
4. Select (set check box) the privileges you want to assign to the certificate. The
following privileges are available:
• Read documents
• Create documents
• Update documents
• Delete documents
• Pass by
This privilege is evaluated in Enterprise Library scenarios, for example. Pass
by must be set for the certificate of the
Pass by must not be set for all other kinds of client certificates, for example,
SAP.
1. Select the Certificates entry of the Key Store node in the System object of the
console tree.
2. Select the Encryption Certificates tab in the result pane. All available
certificates are listed.
4. Enter the path and the complete file name of the certificate or click Browse to
open the file browser. Select the designated Certificate and click OK to confirm.
Procedure
• “Generate self-signed certificates” on page 144
• “Send the certificate to an Archive Center (putCert)” on page 145
• “Importing an encryption certificate” on page 149
• “Checking a certificate” on page 142
• “Enabling a certificate” on page 143
1. Select the Certificates entry of the Key Store node in the System object of the
console tree.
4. Click Browse to open the file browser and select the designated Certificate.
Click OK to resume.
Procedure
• “Importing a certificate for timestamp verification” on page 150
• “Checking a certificate” on page 142
• “Enabling a certificate” on page 143
[Figure: Archive Center writes content to and reads content from Governikus LZA;
Governikus LZA stores content and XAIP in the storage system.]
Prerequisites
The Governikus infrastructure consists of Governikus Service Components (SC) and
Governikus LZA. Governikus SC provides core functionality (like cryptography as a
service and Certificate Authority verification). Governikus LZA provides the
ArchiSig/ArchiSafe modules, with which Archive Center communicates. For more
information, see the Governikus website
(https://www.governikus.de/produkte-loesungen/).
Any certificates used for signature validation must be imported globally and have
the “Pass By” flag set.
Setup
Setting up Governikus support for Archive Center comprises the following steps:
3. Deploy the JCA Resource Adapter and configure the Storage Module in
Governikus LZA.
Related Topics
Important
Each property name must start with the same case-sensitive string used
to set up the TRESOR_PROVIDER as described in Step 1.a.
GOVERNIKUS_API_DIR
Directory containing the Governikus client libraries and APIs. The path can
be a relative path in <Catalina home>. Use Java path notation with slashes (/).
Important
Do not put adapter client classes in any of the classpath directories
configured in Tomcat.
GOVERNIKUS_ADAPTER_CONFIG
Name of the configuration file for the Governikus adapter. The name must
match the file configured in “Configuring the Governikus adapter”
on page 153. Default value: GOVERNIKUS.Setup
Example:
GOV_ARCHISAFE_URL
TR-Esor ArchiSafe URL:
GOV_ARCHISAFE_URL=https://
<governikus_lza_host>:<governikus_lza_port>/archisafe/esor/
exec?wsdl
Example: GOV_ARCHISAFE_URL=https://governikus:8444/archisafe/
esor/exec?wsdl
GOV_ARCHISAFE_SEARCH_URL
TR-Esor Archisafe Search URL:
GOV_ARCHISAFE_SEARCH_URL=https://
<governikus_lza_host>:<governikus_lza_port>/archisafe/search?
wsdl
Example: GOV_ARCHISAFE_SEARCH_URL=https://governikus:8444/
archisafe/search?wsdl
GOV_AUTHOR
TR-Esor author name:
GOV_AUTHOR=<name>
Example: GOV_AUTHOR=govauthor
GOV_CLIENT
TR-Esor client name:
GOV_CLIENT=<ClientID>
Example: GOV_CLIENT=govclient
GOV_UNIT
TR-Esor unit name:
GOV_UNIT=<Unit>
Example: GOV_UNIT=govunit
GOV_CLIENTKEYALIAS
TR-Esor client key alias if store has more than one certificate:
GOV_CLIENTKEYALIAS=<keystore alias for client key>
Example: GOV_CLIENTKEYALIAS=myclient
GOV_CLIENTKEYPASS
TR-Esor client keystore password:
GOV_CLIENTKEYPASS=<client key password>
Example: GOV_CLIENTKEYPASS=passwd
GOV_CLIENTKEYSTORE
TR-Esor client keystore path:
GOV_CLIENTKEYSTORE=<full path to client keystore>
GOV_CLIENTKEYSTORE_TYPE
TR-Esor client keystore type:
GOV_CLIENTKEYSTORE_TYPE=<client keystore type>
Example: GOV_CLIENTKEYSTORE_TYPE=JKS
GOV_TRUSTKEYALIAS
TR-Esor trusted root CA of TR-Esor server certificate if truststore has more
than one certificate:
GOV_TRUSTKEYALIAS=<trusted root alias in keystore>
Example: GOV_TRUSTKEYALIAS=otcarsa
GOV_TRUSTPASS
TR-Esor truststore password:
GOV_TRUSTPASS=<truststore password>
Example: GOV_TRUSTPASS=passwd
GOV_TRUSTSTORE
TR-Esor truststore path:
GOV_TRUSTSTORE=<full path to truststore>
GOV_TRUSTSTORE_TYPE
TR-Esor truststore type:
GOV_TRUSTSTORE_TYPE=<truststore type>
Example: GOV_TRUSTSTORE_TYPE=JKS
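Putting the parameters above together, a minimal configuration fragment could look like this. All host names, aliases, and passwords are the illustrative example values from the descriptions above; replace the bracketed placeholders with your own paths:

```properties
# Illustrative Governikus adapter configuration (values are examples only)
GOV_ARCHISAFE_URL=https://governikus:8444/archisafe/esor/exec?wsdl
GOV_ARCHISAFE_SEARCH_URL=https://governikus:8444/archisafe/search?wsdl
GOV_AUTHOR=govauthor
GOV_CLIENT=govclient
GOV_UNIT=govunit
GOV_CLIENTKEYSTORE=<full path to client keystore>
GOV_CLIENTKEYSTORE_TYPE=JKS
GOV_CLIENTKEYALIAS=myclient
GOV_CLIENTKEYPASS=passwd
GOV_TRUSTSTORE=<full path to truststore>
GOV_TRUSTSTORE_TYPE=JKS
GOV_TRUSTKEYALIAS=otcarsa
GOV_TRUSTPASS=passwd
```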
Dateninhalte separieren (separate data contents)
Governikus LZA is able to store XAIP and data objects separately.
OpenText strongly recommends enabling this option. Data separation helps
to reduce network message sizes significantly and reduces storage space
usage noticeably. Furthermore, XAIP-only archiving is very different from
typical Archive Center document formats. However, it is possible to use the
adapter in single XAIP mode.
Archivierungsart (archiving type)
Select OpenText Archive Center (16 EP2), OpenText Software GmbH to
activate the storage adapter.
Archive Server Host
Hostname of Archive Center
Archive Server Port
HTTP or HTTPS port of Archive Center
Use SSL
Select if SSL/TLS is required
Archive Server SecKey PrivateKey for ArchiveLink Signature
Path to PEM file containing a valid private and public key pair to use for
signing ArchiveLink URLs
3. Switch to the Storage Module main configuration page, enter a description for
your changes and click Save to activate the configuration.
Related Topics
• “Configuring a certificate for authentication” on page 146
Enterprise Scan
Enterprise Scan generates checksums for all scanned documents and passes them on
to Document Service. Document Service verifies the checksums and reports errors
(see “Monitoring with notifications“ on page 293). On the way from Document
Service to STORM, the documents are provided with checksums as well, in order to
recognize errors when writing to the media.
Timestamp and checksum
The leading application, or some client, can also send a timestamp (including
checksum) instead of the document checksum; see “Timestamps” on page 133.
Verification can check timestamps as well as checksums.
The certificates for those timestamps must be known to the Archive Center and
enabled, before the timestamp checksums can be verified (see “Importing a
certificate for timestamp verification” on page 150).
Enterprise Library only
This topic describes the special treatment when using ArchiveLink connections and
Enterprise Library. Signed ArchiveLink connections between external applications
and Enterprise Library require that the Common Name (CN) Subject of the
certificate and the name of the client application (for example, Enterprise Library
Server) for Enterprise Library are identical. This can be achieved in two ways:
• You can define the name of the application and configure the certificate
correspondingly (for example, if you set up a whole new system). Thus, use the
application name as Common Name when creating the certificate, for example,
using the Certtool (see “Creating a certificate using the Certtool” on page 144).
• You can retrieve the Subject from the certificate and use it as application ID
(name of the application); see the procedure below.
2. In the console tree, expand Archiving and Storage and log on to the Archive
Center.
3. Select the Archives > Original Archives > <archive to connect> node.
4. In the result pane, from the Certificates tab, select the imported certificate.
6. From the Subject entry, note or copy the value after CN=
Use this value as the application ID when creating the application (<server name>
> Enterprise Library Services > Applications).
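The Subject-to-application-ID mapping can be sketched as follows. This is a simple string parse for illustration; the subject value is a hypothetical example, and the sketch does not handle escaped commas inside attribute values:

```python
def common_name(subject: str) -> str:
    """Extract the CN component from an X.509 subject string (simple sketch;
    does not handle escaped commas inside attribute values)."""
    for part in subject.split(","):
        key, _, value = part.strip().partition("=")
        if key == "CN":
            return value
    raise ValueError("no CN in subject")
```

For example, `common_name("C=DE, O=Example Corp, CN=Enterprise Library Server")` yields the value to use as the application ID.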
Archive Center needs a few specific administrative users for proper work. They are
managed in the System object of the Archive Center. The required settings are preset
during installation. Use the user management in the following cases:
• You want to change the password of the dsadmin administrator of the Archive
Center.
Important
See “Password security and settings” below for additional information on
passwords.
• You want to change settings of users, groups, or policies.
• You need a user with specific rights.
The users of the leading application are managed in other user management
systems, for example OpenText Directory Services (OTDS). To set up a connection to
Directory Services, see “Connecting to Directory Services” on page 172.
Important
Do not add users to the system partition of OTDS (“OTInternal”). Instead,
create a partition for your users and add the corresponding users and user
groups as members to tenant groups within the system partition.
For more information, see Section 4 “User partitions” in OpenText Directory Services -
Installation and Administration Guide (OTDS-IWC).
• Change the password for the administrative users after installation, for example,
dsadmin and dp*, if pipelines are in use.
• Change the password regularly.
• In case the administrator password has been lost: Contact OpenText Customer
Support to create an initial password for the archive administrator.
Important
Changing the password of dsadmin is also required in the OTDS scenario!
Although signing in as dsadmin into Administration Client is not possible if
OTDS is used, dsadmin is still used by other components.
2. In the console tree, open the Archive Server > System > Users and Groups node,
and in the result pane, select the Users tab.
3. Open the Properties of the dsadmin user and change the password.
2. In the console tree, select Archive Center and in the action pane, click Set
Password.
3. Enter the old and the new password, confirm the new password and then click
OK.
Password settings
You can specify a minimum length for passwords, whether a user is locked out
after several unsuccessful logons, and how long the lockout lasts.
Minimum length for passwords
You can define a minimum character length for passwords. If you do not set this
property, the default value is eight.
1. In the console tree, expand Archive Center > Configuration and search for the
Min. password length variable (internal name: AS.DS.DS_MIN_PASSWD_LEN).
1. In the console tree, expand Archive Center > Configuration and search for the
Max. retries before disabling variable (internal name:
AS.DS.DS_MAX_BAD_PASSWD).
2. In the Properties window of the variable, change the Value as required (in
number of retries).
A value of 0 means that users will never be locked out.
3. Click OK and restart the Archive Spawner service.
4. Enter the following line (or modify it if present already):
=<number of failed attempts>
Unlock after failed logons
You can define how long a user is locked out after a failed attempt; the default is
zero seconds.
1. In the console tree, expand Archive Center > Configuration and search for the
Time after which bad passwords are forgotten variable (internal name:
AS.DS.DS_BAD_PASSWD_ELAPS).
2. In the Properties window of the variable, change the Value as required (in
seconds).
A value of 0 means that users will never be locked out.
3. Click OK and restart the Archive Spawner service.
User groups
A user group is a set of users who have been granted the same rights. Users are
assigned to a user group as members. Policies are also assigned to a user group.
The rights defined in the policy apply to every member of the user group.
Users
A user is assigned to one or more user groups and is allowed to perform the
functions that are defined in the policies of these groups. It is not possible to
assign individual rights to individual users.
Policies
A policy is a set of rights, that is, actions that a user with this policy is allowed to
carry out. You can define your own policies in addition to using predefined and
unmodifiable policies.
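The relationship between users, groups, and policies described above can be sketched as follows. All user, group, and right names are illustrative; the policy names echo the standard policy groups listed below:

```python
# Illustrative model: rights live in policies, policies attach to groups,
# and users inherit the union of rights from all groups they belong to.
policies = {
    "ArchiveAdministration": {"create_archive", "configure_archive", "delete_archive"},
    "ArchiveUsers": {"create_user", "configure_user", "delete_user"},
}
groups = {
    "Administrators": {"members": {"alice"}, "policies": {"ArchiveAdministration"}},
    "UserAdmins": {"members": {"alice", "bob"}, "policies": {"ArchiveUsers"}},
}

def rights_of(user: str) -> set:
    """Union of all rights granted via the user's group memberships."""
    granted = set()
    for group in groups.values():
        if user in group["members"]:
            for policy in group["policies"]:
                granted |= policies[policy]
    return granted
```

Note that rights are never attached to a user directly: a user with no group membership ends up with an empty set of rights.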
Standard users
During the installation of Archive Center, some standard users, user groups, and
policies are configured:
Tenants
Tenants are special user groups intended for the Application Layer (“extended
functionality”). For more information, see “Creating tenants” on page 169.
Note: The standard policies are write-protected (read only) and cannot be
modified or deleted.
Archive Administration
Summary of rights to control creation, configuration and deletion of logical
archives.
Archive Users
Summary of rights to control creation, configuration and deletion of users and
groups and their associated policies.
Notifications
Summary of rights to control creation, configuration and deletion of
notifications and events.
Policies
Summary of rights to control creation, configuration and deletion of policies.
Important
Rights from the following policy groups should no longer be used. These
rights are still available only to ensure compatibility with policies created for
former versions of Archive Center.
• Accounting
• Administration Server
• DPinfo
• Scanning Client
• Spawner
1. Select Policies in the System object in the console tree to check, create, modify
and delete policies. All available policies are listed in the top area of the result
pane. In the bottom area the assigned rights are shown as a tree view.
2. To check a policy, select it in the top area of the result pane. The assigned rights
are listed in the bottom area.
1. Select Policies in the System object in the console tree. All available policies are
listed in the top area of the result pane.
2. Click New Policy in the action pane. The window to create a new policy opens.
Name
Name of the policy. Spaces are not allowed. The name cannot be modified
after creation.
Description
Short description of the role the user can assume by means of this policy.
4. The Available Rights tree view shows all rights that are currently not
associated with the policy. Select a single right or a group of rights that should
be assigned to the policy and click Add >>.
5. To remove a right or a group of rights, select it in the Assigned Rights tree view
and click << Remove.
Modifying a policy
To modify a self-defined policy, select the policy in the top area of the result pane
and click Edit Policy in the action pane. Proceed in the same way as when creating a
new policy. The name of the policy cannot be changed.
Deleting a policy
To delete a self-defined policy, select the policy in the top area of the result pane and
click Delete in the action pane. The rights themselves are not lost, only the set of
them that makes up the policy. Pre-defined policies cannot be deleted.
Related Topics
• “Checking, creating, or modifying users” on page 166
• “Checking, creating, or modifying user groups” on page 168
• “About users, groups, and policies ” on page 163
1. Select Users and Groups in the System object in the console tree to check,
create, modify and delete users.
2. Select the Users tab in the top area of the result pane to list all users.
3. To check a user, select the entry in the top area of the result pane. The groups
which the user is assigned to are listed in the bottom area.
4. To create and modify a user, see “Creating and modifying users” on page 167.
To create a user:
1. Select Users and Groups in the System object in the console tree.
2. Select the Users tab in the result pane. All available users are listed in the top
area of the result pane.
3. Click New User in the action pane. The window to create a new user opens.
Username
Name of the user to administer the Archive Center. The name can be a
maximum of 14 characters in length. Spaces are not permitted. This name
cannot be changed subsequently.
Password
Password for the specified user.
Confirm password
Enter exactly the same input as you have already entered under Password.
Click Next.
5. Select the groups the user should be assigned to. Click Finish.
Modifying user settings: To modify a user's settings, select the user and click Properties in the action pane. Proceed in the same way as when creating a new user. The name of the user cannot be changed.
Deleting users: To delete a user, select the user and click Delete in the action pane.
Related Topics
• “Creating and modifying policies” on page 166
• “Checking, creating, or modifying user groups” on page 168
• “About users, groups, and policies” on page 163
1. Select Users and Groups in the System object in the console tree to check,
create, modify and delete user groups.
2. Select the Groups tab in the top area of the result pane to list all groups.
3. To check a user group, select the entry in the top area of the result pane.
Depending on the tab you selected, additional information is listed in the
bottom area:
Members tab
List of users who are members of the selected group.
Policies tab
List of policies which are assigned to the selected group.
4. To create and modify a user group, see “Creating and modifying user groups”
on page 168.
1. Select Users and Groups in the System object in the console tree.
2. Select the Groups tab in the top area of the result pane. All available groups are
listed in the top area of the result pane.
3. Click New Group in the action pane. The window to create a new group opens.
Name
A name that clearly identifies each user group. The name can be a
maximum of 14 characters in length. Spaces are not permitted.
Implicit
Implicit groups are used for the central administration of clients. If a group
is configured as implicit, all users are automatically members. If users who
have not been explicitly assigned to a user group log on to a client, they are
considered to be members of the implicit group and the client configuration
corresponding to the implicit group is used. If several implicit groups are
defined, the user at the client can select which profile is to be used.
5. Click Finish.
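The implicit-group rule described above can be sketched as follows; the function and group names are illustrative, not part of any OpenText tool:

```python
# Illustrative sketch of implicit groups: users without an explicit
# group assignment fall back to the implicit groups; with several
# implicit groups, the user at the client chooses among them.

def resolve_client_groups(user_groups, implicit_groups):
    """Return the groups whose client configuration applies to a user."""
    if user_groups:                  # explicitly assigned groups win
        return list(user_groups)
    return list(implicit_groups)     # otherwise all implicit groups apply

# A user with no explicit assignment sees every implicit group:
print(resolve_client_groups([], ["ScanUsers", "Clerks"]))
# ['ScanUsers', 'Clerks']
```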
Modifying group settings: To modify the settings of a group, select it and click Properties in the action pane. Proceed in the same way as when creating a user group.
Deleting a user group: To delete a user group, select it and click Delete in the action pane. Neither users nor policies are lost; only the assignments are deleted.
Related Topics
1. Select the user group in the top area of the result pane for which users and
policies should be added.
2. Select the Members tab in the bottom area. Click Add User in the action pane. A
window with available users opens.
3. Select the users which should be added to the group and click OK.
4. Select the Policies tab in the bottom area. Click Add Policy in the action pane. A
window with available policies opens.
5. Select the policies which should be added to the group and click OK.
Removing users and policies: To remove a user or a policy, select it in the bottom area and click Remove in the action pane.
Important
• Ensure that every user of Archive Center is a member of at least one tenant group. Otherwise, certain scenarios do not work correctly, because Archive Center protects user information based on the tenant groups.
• The <name>_SU group is intended for technical users only. Do not add
any human users to this group as these users would have access to ACLs
and the BCC fields of emails.
On-premises
1. Select Users and Groups in the System object in the console tree.
2. Click New Tenant in the action pane. The window to create a new tenant opens.
3. Enter the Tenant name and a Short name.
Tip: The short name is used as a prefix for the names of this tenant’s
logical archives, buffers, and jobs. Thereby, you can easily sort the
corresponding lists by tenants.
Administration User
This user is added to the new tenant, with assigned policy
“BusinessAdministration,” and thereby is allowed to perform all tasks
related to Archive Center Administration.
Access User
This user is added to a new user group. The new group has the name <new
tenant>_ED and the assigned policy “ArchiveAccess,” and thereby is
allowed to perform all tasks related to Archive Center Access. This policy
enables the eDiscovery user to search for holds and create EDRM exports,
for example. The policy does not allow writing to archives; in particular,
setting holds is not possible.
6. Click OK.
The tenant user group with assigned policy is created.
1. Select Users and Groups in the System object of the console tree.
2. Select the Users tab in the top area of the result pane and select the user. Note
the groups listed under Members in the bottom area.
3. Select the Groups tab in the top area of the result pane and select Policies in the
bottom area of the result pane.
4. Select one of the groups you noted, and also note the assigned policies listed in the bottom area.
6. Select one of the policies you noted. The associated groups of rights and
individual rights appear in the bottom area. Make a note of these.
7. Repeat Step 6 for all policies that you noted for the user group.
8. Repeat steps 4 to 7 for the other user groups which the user is a member of.
Note: If Archive Center was installed using the Archive Center Installer, the
connection to OTDS has been configured automatically.
OTDS administrator
Enter the name of the OTDS administrator; default: otadmin@otds.admin
5. On the Summary page, verify your entries and click Finish. The resource is
created.
Restart Archive Center to activate the new resource.
Securing the connection: If you configure Archive Center to connect to OTDS using SSL (that is, using https as <protocol>), the identity of the OTDS server is not checked by default. For the most secure connection, you can force Archive Center to trust the OTDS server only if its server certificate has been issued by a trusted certification authority.
Note: The “strict” verification requires fully and properly set up trust and key stores at both the Archive Center and OTDS application servers.
The default (“lazy”) verification performs basic validity checks of the provided
certificate and checks of the server’s host name against the information in the
certificate but does not require a corresponding trust store setup.
Linking permissions and policies: You can easily transfer the permissions and policies in Archive Center’s built-in user management to a corresponding user in OTDS as follows:
1. For a logged-in OTDS user, all OTDS groups are checked for whether there is a
group of the same name in the Archive Center’s built-in user management.
Note: Only the group name is important for the OTDS groups. The check
does not consider the user partition.
2. In case of matching groups, the policies assigned to the corresponding group in
the built-in user management are looked up.
3. Archive Center checks whether the permissions of the OTDS user allow executing the desired command.
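A minimal sketch of the three checks above; the dict-based data structures and all names are assumptions for illustration, since Archive Center's internal representation is not documented here:

```python
# Hedged sketch of the OTDS permission lookup described above.

def otds_user_may_execute(otds_groups, builtin_group_policies,
                          policy_rights, required_right):
    """Check a command right for an OTDS user via same-name groups."""
    # 1. Match OTDS groups by name (the user partition is ignored).
    matching = [g for g in otds_groups if g in builtin_group_policies]
    # 2. Look up the policies of the matching built-in groups.
    policies = {p for g in matching for p in builtin_group_policies[g]}
    # 3. Check whether any policy grants the required right.
    return any(required_right in policy_rights.get(p, ())
               for p in policies)

# Example with made-up group, policy, and right names:
groups = {"Clerks": ["ArchiveAccess"]}
rights = {"ArchiveAccess": {"docRead"}}
print(otds_user_may_execute(["Clerks"], groups, rights, "docRead"))  # True
```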
1. Create groups with the same group names in the Archive Center’s built-in user
management and in OTDS (in any user partition).
2. Assign the policies as required to the group in the built-in user management.
Further information: For more information about OTDS, see OpenText Directory Services - Installation and Administration Guide (OTDS-IWC).
If you use SAP as the leading application, you configure the connection not only in the
SAP system but also in Administration Client. OpenText Document Pipeline for
DocuLink and OpenText Document Pipeline for SAP Solutions – in particular the
DocTools R3Insert, R3Formid, R3AidSel, and cfbx – require some connection
information. These Document Pipelines can send data back to the SAP server, for
example, the document ID in bar code scenarios. For these scenarios, Document
Pipeline for SAP Solutions must be installed. The basic and scenario customizing for
SAP is described in OpenText Archiving and Document Access for SAP Solutions -
Scenario Guide (ER-CCS). The configuration in the OpenText Administration Client
includes:
• “Creating and modifying SAP gateways” on page 177
• “Creating and modifying SAP system connections” on page 175
• “Assigning an SAP system to a logical archive” on page 178
3. Click SAP System Connection in the action pane. A window to configure the
SAP system opens.
Connection name
SAP system connection name with which the administered server
communicates. You cannot modify the name later.
Description
Enter an optional description (restricted to 255 characters).
Server name
Name of the SAP server on which the logical archives are set up in the SAP
system.
Client
Three-digit number of the SAP client in which archiving occurs.
Feedback user
Feedback user in the SAP system. The cfbx process sends a notification
message back to this SAP user after a document has been archived using
asynchronous archiving. A separate feedback user (CPIC type) should be
set up in the SAP system for this purpose.
Password
Password for the SAP feedback user. This is entered, but not displayed,
when the SAP system is configured. The password for the feedback user
must be identical in the SAP system and in OpenText Administration
Client.
Instance number
Two-digit instance number for the SAP system. Usually, the value 00 is
used here. It is required for the sapdp<xx> service on the gateway server in
order to determine the number of the TCP/IP port being used (<xx> =
instance number).
Codepage
Specifies the encoding of the document metadata fields as defined by the
ATTRIBUTES statements in the pipeline attribute definition file (IXATTR).
This is mainly relevant for free-text fields with characters outside the 7-bit
range. A four-digit number specifies the type of character set that is used by
the functions in SAP RFC libraries. These libraries convert the metadata
from the character set specified by this setting to the character set of the
SAP server. The default value is 1100 (ISO-8859-1). Other possible values
are, for example, 4110 (UTF-8) or 8000 (Shift-JIS).
If you run Document Pipeline version 16 or later only, this setting is
ignored. You can keep the default value in the Codepage field.
Note: Document Pipeline 16 and later expect the character set UTF-8
within the IXATTR file in any case. If you need to convert to UTF-8, see
the parameters in the enqueue jobs, like startEXR3.
Language
Language of the SAP system; default is English. If the SAP system is
installed exclusively in another language, enter the SAP language code here.
Test Connection
Click this button to test the connection to the SAP system. A window opens
and shows the test result.
5. Click Finish.
Modifying SAP system connections: To modify an SAP system, select it in the SAP System Connections tab and click Properties in the action pane. Proceed in the same way as when creating an SAP system connection.
Deleting SAP system connections: To delete an SAP system, select it in the SAP System Connections tab and click Delete in the action pane.
Testing an SAP connection: To test an SAP connection, select it in the SAP System Connections tab and click Test Connection in the action pane. A window opens and shows the test result.
Subnet address
Specifies the address for the subnet in which an Archive Center or
Enterprise Scan is located. At least the first part of the address (for example,
NNN.0.0.0 in case of IPv4) must be specified. A gateway must be
established for each subnet.
IPv6
If you use IPv6, do not enclose the IPv6 address with square brackets.
Gateway address
Name of the server on which the SAP gateway runs. This is usually the SAP
server.
Gateway number
Two-digit instance number for the SAP system. The value 00 is usually used
here. It is required for the sapgwxx service on the gateway server to
determine the number of the TCP/IP port (xx = instance number; for
example, instance number = 00, sapgw00, port 3300).
5. Click Finish.
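The relationship between the instance number and the service ports noted above (instance 00, sapgw00, port 3300) follows a fixed pattern; the helper below is purely illustrative, and the 3200 base for sapdp<xx> is the conventional SAP dispatcher port, not stated in this guide:

```python
# Illustrative only: derive the TCP ports behind the sapdp<xx> and
# sapgw<xx> services from the two-digit SAP instance number.

def sap_service_ports(instance: str):
    """Return (dispatcher, gateway) ports for a two-digit instance number."""
    nn = int(instance)
    # sapdp<xx> = 3200 + xx (conventional), sapgw<xx> = 3300 + xx
    return 3200 + nn, 3300 + nn

print(sap_service_ports("00"))  # (3200, 3300)
```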
Modifying SAP gateways: To modify an SAP gateway, select it in the SAP Gateways tab and click Properties in the action pane. Proceed in the same way as when creating an SAP gateway.
Deleting SAP gateways: To delete an SAP gateway, select it in the SAP Gateways tab and click Delete in the action pane.
Requirements
• The gateway to the SAP system is created and configured; see “Creating and
modifying SAP gateways” on page 177.
• The SAP system is created and configured; see “Creating and modifying SAP
system connections” on page 175.
2. Select the Archive Assignments tab in the result pane. All archives are listed in
the top area of the result pane.
3. Select the archive to which an SAP system should be assigned. Keep in mind that
SAP systems can be assigned only to original archives.
4. Click New Archive SAP Assignment in the action pane. A window to configure
the SAP archive assignment opens.
Protocol
Communication protocol between the SAP application and Archive Center.
Fully configured protocols, which can be transported in the SAP system, are
supplied with the SAP products of OpenText.
6. Click Finish.
Modifying archive assignments: To modify an archive assignment, select it in the bottom area of the result pane and click Properties in the action pane. Proceed in the same way as when assigning an SAP system.
Removing archive assignments: To delete an archive assignment, select it in the bottom area of the result pane and click Remove Assignment in the action pane.
There are archiving scenarios in which scan stations submit scanned content to logical archives. For these scenarios, the scan stations need information about the archiving operation: to which logical archives the documents are sent, and how the documents are to be indexed when archived. The archive mode contains this information.
Archive modes are assigned to every scan station. When a scan station starts, it
queries the archive modes that are defined for it at the specified Archive Center. The
employee at the scan station assigns the appropriate archive mode to the scanned
documents in the course of archiving.
The following details must be configured correctly to archive from scan stations:
• Archive in which the documents are stored, scenario and conditions, workflow.
See “Adding and modifying archive modes” on page 183.
• Scan station to which an archive mode applies. See “Adding a new scan host and
assigning archive modes” on page 186.
• If SAP is the leading application: the SAP system to which the barcode and the
document ID are sent, and the communication protocol and version of the
ArchiveLink interface. See “Assigning an SAP system to a logical archive”
on page 178.
• If Archive Center runs as an active-active cluster, the configuration variable ADMS_EXEC_TARGET must be set. See “Configuring scan clients for a clustered installation” on page 188.
You need the Document Pipelines for SAP (R3SC) for all archiving scenarios.
Note: For scenarios in which archiving is started from the SAP GUI, you do not
need an archive mode.
Pre-indexing to Tasks inbox of PDMS GUI
Documents are indexed in Enterprise Scan first. The archiving process archives the document to the Transactional Content Processing Servers and creates a task in the TCP Application Server PDMS GUI inbox for a particular user, or for any user in a particular group.
Scenario: DMS_Indexing; Condition: n/a; Workflow: n/a
Extended conditions: BIZ_ENCODING_BASE64_UTF8N; BIZ_APPLICATION <name>; user: key = BIZ_DOC_RT_USER, value = <domain>\<name>; user group: key = BIZ_DOC_RT_GROUP, value = <domain>\<group>
Late indexing to Process Inbox of TCP GUI
Archives the document to the Transactional Content Processing Servers and starts a process with the document in the TCP GUI inbox. Documents are indexed in TCP.
Scenario: DMS_Indexing; Condition: n/a; Workflow: <processname>
Extended conditions: PS_MODE = LEA_9_7_0; PS_ENCODING_BASE64_UTF8N = 1; BIZ_REG_INDEXING (leave the values empty); BIZ_APPLICATION <name>
Late indexing to Tasks inbox of PDMS GUI
Archives the document to the Transactional Content Processing Servers and creates a task in the TCP Application Server PDMS GUI inbox for a particular user, or for any user in a particular group. Documents are indexed in TCP.
Scenario: DMS_Indexing; Condition: PILE_INDEX; Workflow: n/a
Extended conditions: BIZ_ENCODING_BASE64_UTF8N; BIZ_APPLICATION <name>; user: key = BIZ_DOC_RT_USER, value = <domain>\<name>; user group: key = BIZ_DOC_RT_GROUP, value = <domain>\<group>
Late indexing for plug-in event
Archives the document to the Transactional Content Processing Servers and calls a plug-in event in the TCP Application Server. Documents are indexed in TCP.
Scenario: DMS_Indexing; Condition: PILE_INDEX; Workflow: n/a
Extended conditions: BIZ_ENCODING_BASE64_UTF8N; BIZ_APPLICATION <name>; BIZ_PLG_EVENT = <plugin>:<event>
5. Click Finish.
In this way, you can create several archive modes, for example, to assign document types to different archives.
Modifying an archive mode: To modify the settings of an archive mode, select it in the Archive Modes tab in the result pane and click Properties in the action pane. Proceed in the same way as when adding an archive mode. For more information, see “Archive Modes properties” on page 184.
Deleting an archive mode: To delete an archive mode, select it in the Archive Modes tab in the result pane. Click Delete in the action pane. If the archive mode is assigned to a scan host, it must be removed first; see “Removing assigned archive modes” on page 188.
Scenario
Name of the archiving scenario (also known by the technical name Opcode).
Scenarios apply to leading applications.
Archive name
Name of the logical archive to which the document is sent.
Pipeline Info
Use local Pipeline configuration: The Document Pipeline configuration
installed on the client is used (the actual pipeline to be used can be remote,
though).
Use the following Remote Pipeline: The Document Pipelines can be installed
on a separate computer. The pipeline is accessed via an HTTP interface. For this
configuration, the protocol, the pipeline host, and the port must be set.
Protocol
Protocol that is used for the communication with the pipeline host. For security
reasons, HTTPS is recommended.
Pipeline host
The computer where the Document Pipeline is installed.
Port
Port that is used for the communication with the pipeline host. Use 8080 for
HTTP or 8090 for HTTPS.
Advanced tab
Workflow
Name of the workflow that will be started in Enterprise Process Services when
the document is archived. For more information about creating workflows, see
the Enterprise Process Services documentation.
Conditions
These archiving conditions are available:
R3EARLY
Early archiving with SAP.
BARCODE
If this option is activated, the document can only be archived if a barcode
was recognized. For Late Archiving, this is mandatory. For Early Archiving,
the behavior depends on your business process:
• If a barcode or index is required on every document, select the Barcode
condition. This makes sure that an index value is present before
archiving. The barcode is transferred to the leading application.
• If no barcode is needed, or it is not present on all documents, do not
select the Barcode condition. In this case, no barcode is transferred to the
leading application.
PILE_INDEX
Sorts the archived documents into piles for indexing according to certain
criteria. For example, the pile can be assigned to a document group, and the
access to a document pile in a leading application like Transactional Content
Processing can be restricted to a certain user group.
INDEXING
Indexing is done manually.
ENDORSER
Special setting for certain scanners. Only documents with a stamp are
stored.
Extended Conditions
This table is used to hand over archiving conditions to the COMMANDS file, for
example, to provide the user name so that the information is sent to the correct
task inbox. The extended conditions are key-value pairs. Click Add to enter a
new condition. To modify an extended condition, select it and click Edit. Click
Remove to delete the selected condition.
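As a hedged illustration of the key-value pairs above, extended conditions could be rendered as key=value lines of the kind that appear in the archive-mode examples (for example, BIZ_PLG_EVENT = <plugin>:<event>). The function name and the exact COMMANDS file syntax are assumptions:

```python
# Hypothetical sketch: render extended conditions (key-value pairs)
# as key=value lines. The real COMMANDS file syntax may differ.

def render_extended_conditions(conditions: dict) -> str:
    return "\n".join(f"{key}={value}" for key, value in conditions.items())

# Made-up domain and user name, as in the <domain>\<name> placeholders:
print(render_extended_conditions({"BIZ_DOC_RT_USER": r"EXAMPLE\jdoe"}))
# BIZ_DOC_RT_USER=EXAMPLE\jdoe
```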
Related Topics
• “Adding a new scan host and assigning archive modes” on page 186
4. Click Add Scan Host in the action pane. A window with available scan hosts
opens.
Related Topics
• “Adding and modifying archive modes” on page 183
• “Adding a new scan host and assigning archive modes” on page 186
Site
Describes the location of the scan host.
Description
Brief, self-explanatory description of the scan host.
5. Click Finish.
Deleting an archive mode: To delete an archive mode, select it in the Archive Mode tab in the result pane. Click Delete in the action pane. If the archive mode is assigned to a scan host, it must be removed first; see “Adding a new scan host and assigning archive modes” on page 186.
Related Topics
• “Adding and modifying archive modes” on page 183
• “Adding additional archive modes” on page 187
• “Archive Modes properties” on page 184
4. Click Add Archive Mode in the action pane. A window with available archive
modes opens.
Related Topics
• “Adding and modifying archive modes” on page 183
• “Archive Modes properties” on page 184
3. Select the scan host for which you want to change the default archive mode.
3. Select the scan host in the top area of the result pane.
4. Select the archive mode which you want to remove in the bottom area of the
result pane.
6. Click OK to confirm.
3. Locate the ADMS_EXEC_TARGET variable, and change its value from localhost to
a URL like this:
<protocol>://<dedicated node>:<port>/archive
where <dedicated node> is the fully-qualified name of the node that will handle
the administration of the archive modes.
Example: https://archive.example.com:8090/archive
4. For the changes to take effect, restart the Apache Tomcat and Archive Spawner
services on all nodes.
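The target value from step 3 can be assembled from its parts like this; the helper function is purely illustrative:

```python
# Illustrative helper that builds the ADMS_EXEC_TARGET value from the
# placeholders described in step 3.

def adms_exec_target(protocol: str, dedicated_node: str, port: int) -> str:
    return f"{protocol}://{dedicated_node}:{port}/archive"

print(adms_exec_target("https", "archive.example.com", 8090))
# https://archive.example.com:8090/archive
```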
Known servers are used to realize remote standby scenarios to increase data
security. If a server is added as a known server to the environment, all archives of
this server can be checked in External Archives in the Archives object of the console
tree. If a logical archive of a known server is replicated to the original server, this
archive can be checked in Replicated Archives in the Archives object of the console
tree. See “Configuring remote standby scenarios“ on page 195.
Cluster topic: Known servers and remote standby scenarios are not supported.
Note: Instead of the host name, you can also use IPv4 addresses. IPv6
addresses are not supported.
You can configure whether HTTP or HTTPS is used in the following way:
• If you only want to allow secure connections using HTTPS, set the value
of Port to 0 (zero) and specify the HTTPS port in Secure port.
• If you only want to allow connections using HTTP, set the value of
Secure port to 0 (zero) and specify the HTTP port in Port.
If both Port and Secure port are set to a value larger than 0, the
ADMS_KNOWN_SERVER_PROTOCOL variable is used to determine the used
protocol. At least one of the port values must be larger than 0.
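The port rules above can be summarized in a short sketch; the function name and return values are illustrative assumptions, not actual Archive Center code:

```python
# Sketch of the rules above: Port = 0 forces HTTPS, Secure port = 0
# forces HTTP, and with both ports set the ADMS_KNOWN_SERVER_PROTOCOL
# variable decides.

def known_server_protocol(port: int, secure_port: int,
                          adms_known_server_protocol: str) -> str:
    if port == 0 and secure_port == 0:
        raise ValueError("at least one port value must be larger than 0")
    if port == 0:
        return "https"            # only secure connections allowed
    if secure_port == 0:
        return "http"             # only plain connections allowed
    return adms_known_server_protocol  # both ports set: variable decides

print(known_server_protocol(0, 8091, "http"))  # https
```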
To enable replication:
4. In the dialog box, click OK to enable the encryption certificate of the known
server.
Disabling replication: You can disable replication to a known server by selecting the known server in the result pane and clicking Disable Replication in the action pane. After you confirm with OK, the encryption certificate of the known server is also disabled.
4. To modify the settings of a known server, proceed in the same way as when adding a known server. In addition to the settings in the New known server window, you see more information about the known server:
Version
The version number of the known server.
Startup time
The date and time when the known server was started last.
Build Information
Detailed information about the software build and revision of the known
server.
Description
Shows the short description of the known server, if available.
5. Click OK.
Modifying known server settings: To modify the settings of a known server, select it in the top area of the result pane and click Properties in the action pane. Proceed in the same way as when adding a known server.
[Figure: crosswise remote standby configuration with disk volumes, buffers, and pools (P1b, P2b, P3b) replicated between two servers]
In a remote standby scenario, all new and modified documents are asynchronously
transmitted from the original archive to the replicated archive of a known server.
This is done by the Synchronize_Replicates job on the Remote Standby Server.
The job physically copies the data on the storage media between these two servers.
Therefore, the Remote Standby Server provides more data security than the local
backup of media.
With a Remote Standby Server, not the entire server is replicated, but just the logical archives. Furthermore, it is possible to use two servers crosswise, that is, one Archive Center is the Remote Standby Server of the other and vice versa.
2. Add the Remote Standby Server as known server (see “Adding known servers”
on page 191). Ensure that Remote server is allowed to replicate from this host
is set.
3. Click OK. The Remote Standby Server is listed in Known Servers in the
Environment object of the console tree.
Important
The replicate volumes must have the same names as the original volumes.
The replicate volumes need at least the same amount of disk space.
2. Add the original server as known server (see “Adding known servers”
on page 191).
Unless the two servers mutually replicate each other’s archives, you must not enable Remote server is allowed to replicate from this host.
4. Select External Archives in the Archives object in the console tree. All logical
archives of the known servers are listed.
5. Select the archive which should be replicated in the result pane and click
Replicate in the action pane.
The archive is moved to Replicated Archives. A message is shown that the pools of the replicated archive must be configured (see “Backups on a Remote Standby Server” on page 199).
6. Select the replicated archive, and then select the Server Priorities tab in the
result pane.
7. Click Change Server Priorities in the action pane. A wizard to assign the
sequence of server priorities opens (see “Changing the server priorities”
on page 113).
8. Assign the server priorities. The order should be: first the Remote Standby
Server, then the original server.
9. Select the Replicated Archives object in the console tree, and then click
Synchronize Servers in the action pane.
3. Select the archive to be replicated, and then select the Server Priorities tab in
the result pane.
5. Assign the server priorities. The order should be: first the original server, then
the Remote Standby Server.
1. On the Remote Standby Server, select the replicated archive, and then select the
Pools tab in the result pane.
2. Select the first pool in the top area. In the bottom area, the assigned volumes are
listed. Volumes that are not configured are labeled with the missing type.
a. Select the first missing volume and click Attach or Create Missing Volume
in the action pane.
b. Enter Mount Path and Device Type and click OK. Repeat this for every
missing volume.
ISO volumes
ISO volumes will be replicated by the asynchronously running
Synchronize_Replicates job (see also “ISO volumes” on page 199).
a. Select Replicated Archives in the console tree and select the designated
archive.
b. Select a replicated pool in the console tree and click Properties in the action
pane.
c. Select the backup jukebox. For virtual jukeboxes with HD-WO media,
OpenText strongly recommends configuring the original and backup
jukeboxes on physically different storage systems.
d. Configure the Synchronize_Replicates job according to your needs (see
“Setting the start mode and scheduling of jobs” on page 122).
Note: On the original Archive Center, the backup jobs can be disabled if
no additional backups should be written.
2. Select the known server whose disk buffer needs to be replicated in the top area of the result pane. The assigned disk buffers are listed in the bottom area of the result pane.
3. Select the disk buffer which needs to be replicated and click Replicate in the
action pane.
6. Select the Replicated Disk Buffers tab in the result pane. The replicated buffers
are listed in the top area.
7. Select the replicated buffer in the top area. In the bottom area, the assigned
volumes are listed. Volumes which are not configured are labeled with the
missing type.
8. Select the first missing volume and click Attach or Create Missing Volume in
the action pane.
9. Enter Mount Path and click OK. Repeat this for every missing volume.
Related Topics
• “Configuring disk volumes” on page 60
• “Installing and configuring storage devices” on page 49
Note: For backup and recovery of GS, ISO (HD-WO), and FS volumes, contact
OpenText Customer Support.
2. Select Replicated Archives in the console tree and select the designated archive.
3. Select a replicated pool in the console tree and click Properties in the action
pane.
4. Select the Backup Jukebox (see “Write at once (ISO) pool settings” on page 94).
2. Add a new EMC Centera or Hitachi HCP GS device as single file (VI) storage
device with a separate internal storage pool.
3. Select Replicated Archives in the console tree and select the designated archive.
4. Select a replicated pool in the console tree and click Properties in the action
pane.
5. Select the newly created GS single file device and confirm with OK.
Archive Timestamp Server is installed and configured together with Archive Center.
It handles the incoming requests, creates the timestamps, and sends the reply. It
runs as an Archive Center component.
After the installation of Archive Center and Archive Timestamp Server, basic
settings of Archive Timestamp Server are preset, for example, default signature key
and certificate are provided. You can also configure other settings, if required.
Note: Archive Timestamp Server allows you to use the timestamp features independently of external software, for example, for test cases. However, it does not provide the same high security level as a trusted service provider.
Configuration and administration: The configuration and administration of Archive Timestamp Server is done in the Administration Client. See “Configuration variables for Archive Timestamp Server” on page 202.
Background
• “Timestamps” on page 133
However, this method provides no security against an intruder with read access
to the server configuration.
Configuration variables: You must administer the required settings using configuration variables in Administration Client. Search for the following configuration variables in the Configuration node (see “Searching configuration variables” on page 222):
• File with certificate (internal name: TS_REQ_CERTFILE)
• Passphrase for the private key (internal name: TSTP_KEY_PASSPHRASE)
• File with private key (internal name: TS_REQ_PRIVKEYFILE)
Configuration recommendation
ArchiSig timestamps
Timestamps are not added per document, but for containers of hash trees calculated
from the documents (new method).
Document timestamps
Example: tshost1:32001;tshost2:10318
Background
• “Timestamps” on page 133
Checking the status: You can retrieve and display the general status of Archive Timestamp Server, together with some details about its configuration, with a standard web browser. Enter the following URL:
http://<servername>:<port>
For <servername>, use the host name of Archive Timestamp Server, and for <port>, use the configured port. The default port is 32001.
Note: The status can only be retrieved on computers that are configured as
administration hosts in the Archive Timestamp Server setup. If Allow remote
administration from any host is enabled, the web status can be accessed from
any host.
Timestamps (old): From the command line, enter the following command: dsSign -t
The result should be similar to this:
IMPORTANT: about to mount server WORM on host localhost, port 0, mount point /views_hs
IMPORTANT: about to mount server CDROM on host localhost, port 0, mount point /views_hs
Success!
Date/Time: Thu Jun 18 09:41:48 2015
cert 0:
expired: Wed Apr 01 02:00:00 2020
Archive Cache Server distinguishes between read and write requests. For read requests, Archive Cache Server tries to satisfy the request from its local cache instead of transferring the document over the slow WAN connection from an Archive Center. If the document is not found in the local cache, it is retrieved from the Archive Center and cached for later access.
write through
In this mode, all documents are transferred to the Archive Center, but on the fly,
they are also cached in the local store to speed up later read requests.
write back
In this mode, all documents are cached in the local store of the Archive Cache Server. The Archive Center is only informed that new documents reside on the Archive Cache Server. The configured Copy_Back job will later transfer these documents to the Archive Center.
Cluster: The write-back mode is not supported in cluster environments.
The following figure shows a simple layout of a scenario with only one Archive
Center and one Archive Cache Server. In real environments, one Archive Cache
Server can support more than one Archive Center and one Archive Center can have
more than one Archive Cache Server attached. Clients can also access the Archive
Center directly without using Archive Cache Server. This depends on the
configuration; see “Configuring access using Archive Cache Server” on page 214.
[Figure: Remote site with clients and Archive Cache Server; documents are transferred over the WAN to the Archive Server (Document Service), and administrative calls also cross the WAN.]
As the figure indicates, the Administration Server coordinates the cache scenario. Administration Client is used to configure the settings of each Archive Cache Server and of the associated clients and archives.
Important
To ensure accurate retention handling, the clock of the Archive Cache Server
must be synchronized with the clock of the Archive Center.
Restrictions valid for “write back”
MTA documents
MTA documents can be stored, but the single documents in an MTA document cannot be accessed until the MTA document is transferred to the related Archive Center.
Attribute Search
Attribute Search in print lists is not available until the content is transferred from the Archive Cache Server to the related Archive Center.
VerifySig
Signature verification is processed for write-back items, but the signer chain is not verified (no timestamp certificates are available on the related Archive Center).
Deletion behavior
To avoid problems with deletion, do not use the following archive settings:
• Original Archive > Properties > Security > Document Deletion > Deletion is ignored (see also “Configuring the archive security settings” on page 87)
• Archive Server > Modify Operation Mode > Documents cannot be deleted, no errors are returned (see also “Setting the operation mode of Archive Center” on page 328)
Retention behavior
As long as write-back documents are only stored on the Archive Cache Server, there is no protection based on the document retention. After the documents have been transferred to the related Archive Center, the retention behavior becomes effective. If there is no client retention, the retention setting of the logical archive is used.
Audit
There are no audit trails for documents as long as they are not transferred to the related Archive Center.
Update Document
This call is not supported for write-back documents.
migrateDocument
Results in an error if only the pool name or storage tier is changed.
Important
Target archives must be enabled for caching by this Archive Cache Server; otherwise, update calls will fail.
Versioning of components
As long as components are only stored on the Archive Cache Server, there is no version control. This means that after a successful modification, the modified component is available, but the version number is not incremented. A subsequent info call still delivers version “1” of the just-modified component until the component has been transferred to the related Archive Center.
Transfer and commit
Write-back documents are transferred to the related Archive Center in a two-phase process:
Description
Brief, self-explanatory description of the Archive Cache Server.
Host (client)
Physical host name of the Archive Cache Server, used by a client when
accessing Archive Cache Server.
Note: Instead of the host name, you can also use IPv4 addresses.
However, IPv6 addresses are not supported.
The <name to use by ACS for itself> name and the Host (archive server)
name must be identical. Otherwise, problems will arise during the
write-back scenario.
http://<host>:<port><context>?...
https://<host>:<secure port><context>?...
4. Click Finish.
5. Configure the Copy_Back job. See also “Configuring jobs and checking job
protocol“ on page 115 and “Other jobs” on page 118.
Note: Be aware that this job is disabled by default. If you intend to use the
"write back" mode, enable this job.
6. Click Finish. The new Archive Cache Server is added to the environment.
7. Cluster only: Disable the global certificate of Archive Cache Server on one node,
and then enable it again. Do the same on all other cluster nodes.
Note: If <name to use by ACS for itself> and Host (archive server) differ from each other, you must rename one of them so that they are identical. To rename the Archive Cache Server, change the value of the MY_HOST_NAME variable in the ACS.Setup file to <name to use by ACS for itself>.
Caution
Do not modify the host name while writing back.
• Select the Copy_Back job that is assigned to the Archive Cache Server and click
Start in the action pane. The cached documents are transferred to the related
Archive Center. A window to watch the transfer status opens.
2. Select the Archive Cache Server you want to modify and click Properties in the
action pane.
3. Modify the Archive Cache Server parameters. See also “Adding an Archive
Cache Server to the environment” on page 209.
4. Click Finish.
1. Detach the Archive Cache Server from all logical archives it is attached to. See
“Deleting an assigned Archive Cache Server” on page 218.
3. Select the Copy_Back job which is assigned to the Archive Cache Server and
click Start in the action pane. The cached documents are transferred to the
related Archive Center. A window to watch the transfer status opens.
Caution
This step ensures that pending write-back documents are transferred
to the related Archive Center. If this step fails, the Archive Cache
Server must not be deleted before the problem is solved.
7. Click Yes to confirm. The Archive Cache Server is deleted from the
environment.
For more information about write-back volumes and write-through volumes, see
“Configuring Archive Cache Server“ on page 205.
Adding cache volumes
Adding a write-back volume or a write-through volume involves the same steps. There can be only one write-back volume, but several write-through volumes.
For each new cache volume, two new properties are required:
• Path where the volume is located
• Volume size
a. Volume path - Add the volume path name of the new volume to the WBVOL
variable. Make sure this path already exists.
b. Volume size - Add the volume size of the new volume (in MB) to the
WBSIZE variable.
a. Volume path - Add the volume path name of the new volume to the
VOL<n> variable, where <n> is the number of the first unassigned volume.
Make sure this path already exists.
b. Volume size - Add the volume size of the new volume (in MB) to the
SIZE<n> variable, where <n> is the number of the first unassigned volume.
Note: The new volume is not yet available. See “Activating the
modification” on page 213.
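As a sketch, the resulting entries might look like the following. The variable names WBVOL, WBSIZE, VOL<n>, and SIZE<n> come from the steps above; the paths and sizes are hypothetical examples, the assumption that these variables live in the same ACS.Setup file as MY_HOST_NAME and DERBY (mentioned elsewhere in this chapter) is unverified, and the exact syntax may differ on your system.

```
# Write-back volume (only one allowed); the path must already exist
WBVOL=/cache/writeback
WBSIZE=20480            # size in MB

# Write-through volumes; <n> is the number of the first unassigned volume
VOL1=/cache/wt1
SIZE1=51200             # size in MB
VOL2=/cache/wt2
SIZE2=51200
```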
Resizing cache volumes
You can change the size of existing cache volumes if necessary.
Caution
Danger of loss of data: Make sure not to remove the write-back volume accidentally or to change the path of the write-back volume.
To resize volumes:
2. To resize the write-back volume, change the volume size of the volume (in MB)
in the WBSIZE variable.
3. To resize a write-through volume, change the volume size of the volume (in
MB) in the SIZE<n> variable, where <n> is the number of the volume to be
changed.
Note: The new volume size is not yet valid. See “Activating the
modification” on page 213.
Note: Running cscommand requires that a JDK or JRE is included in the PATH
environment variable.
Note: Resized volumes can be viewed only after restart of the server.
4. Copy all data from the current database location (see step 2) to the new location
(provided in step 1). The file permissions of the copy and the original must
match.
5. Configure the Archive Cache Server to use the new database location:
In the ACS.Setup file, change the value of the DERBY variable to the new
database directory name.
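Steps 4 and 5 can be sketched as follows. This is a minimal sketch with throwaway demo paths; substitute the real current and new database locations, and note that `cp -a` is one common way to preserve file permissions on Linux, as step 4 requires.

```shell
# Demo stand-ins for the real locations - replace with your actual paths.
OLD_DB=$(mktemp -d)                 # current database location (step 2)
NEW_DB=$(mktemp -d)/derbydb         # new database location (step 1)
echo demo > "$OLD_DB/seg0.dat"      # placeholder for existing database files

mkdir -p "$NEW_DB"
cp -a "$OLD_DB/." "$NEW_DB/"        # -a preserves file permissions, as required

# Step 5: in the ACS.Setup file, point DERBY at the new directory name.
```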
[Figure: Subnet assignment example with Client 1 (123.235.155.46) in subnet 123.235.155.0, Client n (123.144.130.m), and the Archive Server.]
Important
The subnet configuration will only be evaluated by clients using the
OpenText Archive Center API.
Note: Archive Cache Server keeps track of any relevant changes to the archive
settings and is synchronized automatically.
Cache server
The name of the Archive Cache Server assigned to this archive.
Caching enabled
If caching is enabled, one of the following modes can be set.
Write through
The Archive Cache Server will operate in “write through” mode for this
logical archive.
Write back
The Archive Cache Server will operate in “write back” mode for this
logical archive.
Note: If caching is disabled, the Archive Cache Server does not cache any
new documents for this logical archive. Instead, it acts as a proxy and
forwards all requests to Archive Center. Outstanding write-back
documents can still be retrieved.
5. Click Next and enter settings for subnet address and subnet mask/length.
The combination of subnet mask and subnet address specifies a subnet. Clients
residing in this subnet will use the selected Archive Cache Server. Typically, the
Archive Cache Server resides in the same subnet. It is possible to add more than
one subnet definition to an Archive Cache Server; see also “Subnet assignment
of Archive Cache Server” on page 214.
Several subnets
If a client belongs to more than one subnet, it will use the Archive Cache
Server that is assigned to the best matching subnet.
Subnet address
Specifies the address of the subnet in which an Archive Cache Server is located. At least the first part of the address (for example, NNN.0.0.0 in the case of IPv4) must be specified. A gateway must be established for each subnet.
IPv6
If you use IPv6, do not enclose the IPv6 address with square brackets.
IPv4
Enter a subnet mask, for example 255.255.255.0.
IPv6
Enter the address length, that is, the number of relevant bits, for example 64.
Modifying settings
To modify the settings of an Archive Cache Server, select it in the top area of the result pane and click Properties in the action pane. Proceed in the same way as when configuring an Archive Cache Server.
Important
The certificate must be enabled regardless of the security settings of the
archive.
2. On Archive Center, import and enable the Archive Center certificate as global
authentication certificate unless this has already been done during the Archive
Center configuration.
Important
The certificate must be imported and enabled regardless of the security
settings of the archive.
Background
• “Certificates” on page 141
2. Select the logical archive which the Archive Cache Server is assigned to.
3. Select the Cache Servers tab in the top area of the result pane and select the
Archive Cache Server. In the bottom area, the subnet definitions are listed.
4. Click New Subnet Definition in the action pane and enter settings for subnet
mask and subnet address. See also “Configuring archive access using Archive
Cache Server” on page 215
5. Click Finish.
2. Select the logical archive which the Archive Cache Server is assigned to.
3. Select the Cache Servers tab in the top area of the result pane and select the
Archive Cache Server. In the bottom area, the subnet definitions are listed.
4. Select the subnet definitions in the bottom area of the result pane and click
Properties.
Modify the settings for subnet mask and subnet address. See also “Configuring
archive access using Archive Cache Server” on page 215
5. Click Finish.
Note: The steps 3 to 6 are only necessary if you use an Archive Cache Server
that operates in “write-back” mode.
2. Select the logical archive to which the Archive Cache Server is assigned.
3. Select the Cache Servers tab in the top area of the result pane and select the
Archive Cache Server you want to delete.
5. Deselect enabled to stop caching. See also “Configuring archive access using
Archive Cache Server” on page 215.
7. Select the Copy_Back job which is assigned to the Archive Cache Server you
want to delete and click Start. The cached documents are transferred to the
related Archive Center. A window to watch the transfer status opens.
8. Select the Archive Cache Server you want to delete again and click Delete in the
action pane.
9. Click Yes to confirm. The Archive Cache Server is no longer assigned to the
logical archive.
The Reports node is used to generate reports on certain well-defined scenarios. Reports are based on scripts describing a specific scenario. A scenario is a kind of template (or order form) describing the content and the layout of a report. Running the script generates a report, an output file in HTML format. Multiple reports can be generated per scenario. Currently, the Reports node is used to generate reports comprising details of the archives and pools currently available on the Archive Center. You can use a report when asking for support; the information provided by reports can be evaluated by the service personnel.
The Reports node comprises the Reports tab and the Scenarios tab.
To generate a report:
2. Select the Scenarios tab in the top area of the result pane.
Information about a report
The following information per report is displayed in the result pane:
Name
Name of the report. The name is predefined; it is derived from the respective scenario name, extended by a serial number.
Date
Date and time when the report was generated. Format: YYYY-MM-DD HH:MM:SS.
Size
Size of the HTML file, displayed in KB.
Deleting reports
To delete a report, select it and click Delete in the action pane. Confirm the displayed message with OK.
To display a report:
2. Select the Reports tab in the top area of the result pane.
3. Click Refresh.
Within this object, you can set the configuration variables for:
• Archive Server
• Document Pipeline
• Monitoring Server
For a complete list including short descriptions of all configuration variables, see
“Configuration parameter reference” on page 335.
3. Select a component.
A list of related variables is displayed below the list of components.
4. Select a variable using double-click or using the Properties action in the action
pane.
The Configuration Variable Properties window opens, displaying two tabs:
General tab
Displays the name, the current value, a short description and information
on whether a server restart is required upon modifying this variable
Advanced tab
Displays the fully qualified internal name of the variable
Resetting to default value
To reset a value to its default, select it and click Reset to Default in the action pane. This action is enabled only if the value is currently not the default value.
Retrieving unspecified values
In the list of configuration variables, undefined values are marked with *** Value not defined ***. In the properties window, undefined values are marked with an icon.
Example: Search for port and you will get results with port as name, as internal name and,
if set, as value.
Example: If you enter port, the result, among others, can be the following:
Note: Click on the arrow icon to the right of the search icon (see figure
below) and select Search All Configuration Variables to display all
configuration variables.
1. Select the Configuration object (or one of the objects assigned to it).
This chapter describes tasks that are relevant for storage systems: export and import, and consistency checks. If you archive documents with retention periods, you also have to check for correct deletion of the documents and clear volumes whose documents have been completely deleted.
Document deletion
When the leading application sends the delete request for a document, the archive system works as follows:
2. The delete request is not propagated to the storage system and the content
remains in the storage. Only logically empty volumes can be removed in a
separate step.
Delete empty partitions
If documents with retention periods are stored in container files, the container volume gets the retention period of the document with the longest retention. The retention period of the volume is propagated to the storage subsystem if possible.
The volume – and the content of all its documents – can be deleted only if all
documents are deleted from the archive database. The volume is purged by the
Delete_Empty_Volumes job. It checks for logically empty volumes meeting the
conditions defined in Configuration (see “Searching configuration variables”
on page 222):
Delete volumes which have not been modified since days variable
(internal name: ADMS_DEL_VOL_NOT_MODIFIED_SINCE_DAYS)
Delete volumes which are more than percent full variable
(internal name: ADMS_DEL_VOL_AT_LEAST_FULL)
and deletes these volumes automatically. You can schedule the job and run it
automatically, or use the List Empty Volumes/Images utility to display the empty
volumes first and then start the deletion job manually (see “Checking for Empty
Volumes and Deleting Them Manually” on page 229).
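To illustrate how the two variables combine, here is a sketch under the assumption that a logically empty volume must meet both thresholds before the Delete_Empty_Volumes job removes it. The values below are hypothetical examples; the real values are set via Administration Client, not in a script.

```shell
# Hypothetical threshold values for the two configuration variables:
NOT_MODIFIED_SINCE_DAYS=30   # ADMS_DEL_VOL_NOT_MODIFIED_SINCE_DAYS
AT_LEAST_FULL=90             # ADMS_DEL_VOL_AT_LEAST_FULL (percent)

# Example state of a logically empty volume (all documents deleted from the DB):
vol_unmodified_days=45
vol_fill_percent=97

if [ "$vol_unmodified_days" -ge "$NOT_MODIFIED_SINCE_DAYS" ] && \
   [ "$vol_fill_percent" -ge "$AT_LEAST_FULL" ]; then
  echo "volume qualifies for Delete_Empty_Volumes"
fi
```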
Important
To ensure correct deletion, you must synchronize the clocks of the Archive
Center and the storage subsystem, including the devices for replication.
Notes
• Not all storage systems release the space of the deleted volumes (see
documentation for your storage system).
• Blobs are handled like container file archiving.
2. Click List Empty Volumes in the action pane. A window to start the utility
opens.
3. Enter settings.
6. Select the Delete_Empty_Volumes job and click Start in the action pane.
During export, the entries about documents and their components on the volume
are deleted from the archive database. The volume gets the internal status exported
and is treated as nonexistent. After that, you remove the ISO medium together with
its local backups from the virtual jukebox. The database entries can be restored by
importing the volume.
Important
• Do not use the Export utility for volumes belonging to archives that are
configured for single instance archiving (SIA). A SIA reference to a
document may be created long after the document itself has been stored,
and the reference may be stored on a newer medium than the document.
SIA documents can be exported only when all references are outdated.
Further, if the Export from database option is enabled, the Export utility
does not analyze references to the documents.
• Volumes containing at least one document with nonexpired retention are
not exported. In this case, no document of the volume will be exported.
To export volumes:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Export Volumes utility.
3. Click Run in the action pane.
4. Enter the export parameters.
Volume name(s)
Name of the volume(s) to be exported. You can use wildcards to export multiple volumes at the same time.
5. Click Run. A protocol window shows the progress and the result of the export.
The export process can take some time.
Related Topics
• “Utilities“ on page 249
• “Checking utilities protocols” on page 250
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import ISO Volume utility in the result pane and click Run in the
action pane.
3. Enter settings:
Volume name
Name of the volume(s) to be imported.
STORM server
Name of the STORM server by which the imported volume is managed.
Backup
The volume is imported as a backup volume and entered in the list of
volumes as a backup type. Not available for ISO volumes.
Arguments
Additional arguments. Not required for normal import, only for special
tasks like moving documents to another logical archive. Contact OpenText
Customer Support.
4. Click Run.
The import process can take some time. A message box shows the progress of
the import.
Related Topics
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import HD Volume utility in the result pane and click Run in the
action pane.
3. Enter settings:
Volume name
Name of the hard disk volume to be imported.
Base directory
Mount path of the volume.
Backup
The volume is imported as a backup volume and entered in the list of
volumes as a backup type.
Read-only
The volume is imported as a write-protected volume.
Arguments
Additional arguments. Not required for normal import, only for special
tasks like moving documents to another logical archive. Contact OpenText
Customer Support.
4. Click Run.
The import process can take some time. A message box shows the progress of
the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the FS or HDSK pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.
Related Topics
• “Utilities“ on page 249
• “Checking utilities protocols” on page 250
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import GS Volume utility in the result pane and click Run in the
action pane.
3. Enter settings:
Volume name
Name of the hard disk volume to be imported.
Base directory
Mount path of the volume.
Read-only
The volume is imported as a write-protected volume.
Arguments
Additional arguments. Not required for normal import, only for special
tasks like moving documents to another logical archive. Contact OpenText
Customer Support.
4. Click Run.
The import process can take some time. A message box shows the progress of
the import.
Related Topics
• “Utilities“ on page 249
• “Checking utilities protocols” on page 250
You can start the utilities in the System object in the console tree. When the utility is
started, a message window shows the progress of the utility.
The volume to be checked must be online. You can either only check the volume or also try to repair inconsistencies.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
4. Type the volume name and specify how inconsistencies are to be handled.
Volume
Name of the volume that is to be checked.
copy document/component from other partition
The utility attempts to find the missing component on another volume. If the component is found, it is copied to the checked volume. If not, the component entry is deleted from the database, that is, the component is exported.
export component
The database entry for the missing component on the checked volume is
deleted.
Repair, if needed
Check this box if you really want to repair the inconsistencies.
If the option is deactivated, the test is performed and the result is displayed.
Nothing is copied and no changes are made to the database.
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the
archive. If in doubt, contact OpenText Customer Support.
5. Click Run.
A protocol window shows the progress and the result of the check.
Related Topics
• “Utilities“ on page 249
• “Checking utilities protocols” on page 250
The volume to be checked must be online. You can either only check the volume or also try to repair inconsistencies.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
4. Type the volume name and specify how documents missing in the database are
to be handled.
Volume
Name of the volume that is to be checked.
5. Click Run.
A protocol window shows the progress and the result of the check.
Related Topics
• “Utilities“ on page 249
• “Checking utilities protocols” on page 250
To check a document:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
DocID
Type the document ID according to the Type setting.
You can determine the string form of the document ID by searching for the
document in the application (for example, on document type and object
type) and displaying the document information in Windows Viewer or in
Java Viewer.
Type
Select the type of document ID. The ID can be entered in numerical
(Number) or string (String) form.
Repair document, if needed
Check this box if you want to repair defective documents. The utility
attempts to copy the document from another volume. If this option is
deactivated, the utility simply performs the test and displays the result.
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the
archive. If in doubt, contact OpenText Customer Support.
5. Click Run.
A protocol window shows the progress and the result of the check.
Related Topics
• “Utilities“ on page 249
• “Checking utilities protocols” on page 250
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Count Documents/Components utility.
3. Click Run in the action pane.
4. Enter the name of the volume.
5. Click Run.
A protocol window shows the progress and the result of the counting.
Related Topics
• “Utilities“ on page 249
• “Checking utilities protocols” on page 250
To check a volume:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
5. Click Run.
A protocol window shows the progress and the result of the check.
Related Topics
• “Utilities“ on page 249
• “Checking utilities protocols” on page 250
Basically, you can back up archived data by means of the storage system or by means of the Archive Center (local backup, Remote Standby). Some scenarios may be restricted to one of these ways. The backup medium should be of the same type as the original medium. For more information, see the Storage Platform Release Notes on My Support (https://knowledge.opentext.com/knowledge/llisapi.dll/open/12331031).
Number of Partitions: 1
Number of Backups: 1
Backup Jukebox: Must be different from Original Jukebox
Backup: On for Local_Backup job
The backup concept used by Archive Center ensures that documents are protected
against data loss throughout their entire path to, through, and in the Archive Center.
[Figure: Archive Server and storage systems]
There are several parts that have to be protected against data loss:
Volumes
All hard disk volumes that can hold the only instance of a document must be
protected against data loss by RAID. For more information about the volumes to
be protected, see the “Configuring basic settings” chapter in the Archive Center
installation guides.
Document Pipelines
The Document Pipeline of OpenText Imaging Enterprise Scan must be protected
against data loss. For more information, see Section 19.2 “Backing up the
Document Pipeline directory” in OpenText Imaging Enterprise Scan - User and
Administration Guide (CLES-UGD).
Database
The database with the configuration for logical archives, pools, jobs, relations to
other Archive Centers, and leading applications must be protected against data
loss. The process depends on the type of database you are using; see “Backing
up the database” on page 242.
Storage Manager configuration
The configuration of the Storage Manager must be saved. For more information,
see “Backing up and restoring of the Storage Manager configuration”
on page 243.
Data in storage systems
Data that is archived on storage systems like HSM, NAS, or CAS also needs a backup, either by means of the storage system or with Archive Center tools. For more information, see “Backup for storage systems” on page 238.
Archive Cache Server
If “write back” mode is enabled, the Archive Cache Server stores newly created
documents locally without saving them immediately to the destination. It is
recommended to perform regular backups of the Archive Cache Server data. For
more information, see “Backup and recovery of an Archive Cache Server”
on page 244.
Directory Services
OpenText recommends backing up the server hosting OpenText Directory
Services (OTDS) on a regular basis (for example, weekly).
To avoid data loss and extended downtimes, you, as the system administrator, should back up the database regularly and in full, and complement this full backup with a daily backup of the log files. In general, the more backups are performed, the safer the system is. Backups should be performed at times of low system load.
It is advisable to back up the archive database at the same time as the database of the
leading application if possible.
The database must be backed up at regular intervals. However, because its data
contents are constantly changing, all database operations are written to special files
(online and archived redo logs under Oracle, transaction logs for SQL Server). As a
result, the database can always be restored in full on the basis of the backup and
these files.
Important
During the configuration phase of installation, you can either select default
values for the database configuration or configure all relevant values. To
make sure that this guide remains easy to follow, the default values are used
below. If you configured the database with non-default values, replace these
defaults with your values.
For more information about password change, see “Changing the database user
password” on page 73.
Caution
If “write back” mode is enabled, the Archive Cache Server stores newly created documents locally without saving them immediately to the destination. This means that “highly critical” data is held on the local disk of the Archive Cache Server. For security reasons, OpenText strongly recommends storing this data on a RAID system and including the relevant items in your regular backups.
Tip: To find out whether “maintenance mode” is active, start a command line
and enter
cscommand -c isOnline
or
cscommand -c getStatistics
cscommand utility
The Archive Cache Server installation comes with a small utility (cscommand) that allows you to activate or deactivate the maintenance mode. The commands to activate and deactivate maintenance mode can be called from any script or batch file. Usually, the commands are added to the script that controls your backup. You can find cscommand in the <OT config>\Archive Cache Server\bin directory.
Note: Running cscommand requires that a JDK or JRE is included in the PATH
environment variable.
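A backup script typically wraps the backup between the activate and deactivate calls. The following is a sketch only: the maintenance-mode subcommand names are placeholders, since this section does not name them; check the cscommand help for the exact names, and remember that a JDK or JRE must be in PATH.

```
# Sketch only: <activate>/<deactivate> stand for the real maintenance-mode
# subcommands of cscommand, which this section does not name.
cscommand -c <activate-maintenance-mode>
# ... run the backup of cache volumes, write-back volume, and database files ...
cscommand -c <deactivate-maintenance-mode>
```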
3. Start your backup. Make sure that all relevant directories are included.
Directories to be backed up
Note: The directories used by Archive Cache Server are configured during the
installation.
Cache volumes One or more cache volumes to be used for write through caching. Not
highly critical but useful for reducing time to rebuild cached data.
Write-back One single cache volume to be used for write back caching. This
volume volume contains the following subdirectories:
dat
Components are stored here.
idx
Per document, additional information is stored, which contains all
necessary information to reconstruct the data in case of a crash.
log
Special protocol files (one per day) are stored here. They contain the
relevant information about when a document is transferred to and
committed by the Document Service.
Important
Protocol files are not deleted automatically. Ensure regular
deletion of protocol files to avoid storage problems.
Path to store database files
The absolute path to the volume where the Archive Cache Server stores
its metadata for the cached documents. Necessary for recovery.
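Because protocol files accumulate at one per day, a scheduled cleanup helps. The following sketch lists protocol files older than 90 days; the path and the retention period are examples, not product defaults:

```shell
LOGDIR="${LOGDIR:-/cache/writeback/log}"  # example; point at the log subdirectory of your write-back volume
# List protocol files older than 90 days; append -delete only after
# verifying that the listed files are no longer needed.
find "$LOGDIR" -type f -mtime +90 -print
```
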
As with “Backup of Archive Cache Server data” on page 244, you need the
cscommand in the <OT config>\Archive Cache Server\bin directory.
Note: Running cscommand requires that a JDK or JRE is included in the PATH
environment variable.
This procedure restores the Archive Cache Server to the state of a previous backup.
This means that all data from the time span between the last backup and the crash is
lost. Documents that were already transferred to the Archive Center are not affected.
If successful, this procedure recovers the current state of the Archive Cache Server.
2. If the write-back volume is still available, rename the root directory of the write-
back volume (see Step 5, <location of write back data>).
3. Copy your backup of the data to the correct location to replace the corrupt one.
If you have also a partial loss of data volumes, copy the lost data from your
backup to the correct location.
Important
Each successfully recovered document is listed on the command line
and removed from <location of write back data>. This means that
the recovery operation can only be performed once.
6. If you do not get any error messages, the renamed directory (<location of
write back data>) can be deleted. Any data left in this subtree is no longer
needed for operation.
Important
If you get error messages, do not delete any data. If you cannot fix the
problem, contact OpenText Customer Support.
1. In Administration Client, start the Reassign Work of Node utility. For more
information, see “Starting utilities” on page 250.
2. Enter the fully-qualified name of the failed node, for example: node2.example.com
For a full recovery, add a node to replace the lost one.
Utilities
Utilities are tools that are started interactively by the administrator. The following
table provides an overview of all utilities that can be reached in Utilities in the
System object in the console tree. Cross-references lead to detailed descriptions in
the relevant chapters. This chapter also describes how to start utilities and how to
check the utility protocol.
Some utilities are assigned directly to objects and can be reached in the action pane.
Protocols of these utilities can also be reached in Utilities in the System object in the
console tree.
Note: Some utilities require the name of the STORM server. To determine
the name, select Storage Devices in the Infrastructure object in the console
tree. The name of the STORM server is displayed in brackets behind the
device name; for example:
WORM(STORM1)
2. Select the Utilities tab in the top area of the result pane. All available utilities
are listed in the top area of the result pane.
4. Select the Results tab in the bottom area of the result pane to check whether the
execution of the utility was successful,
or
select the Message tab in the bottom area of the result pane to check the
messages created during execution of the utility.
2. Select the Protocol tab in the top area of the result pane.
To clear protocols:
2. Select the Protocol tab in the top area of the result pane.
Re-reading scripts
Utilities and jobs are read by Archive Center during the startup of the server. If
utilities or jobs are added or modified, they can be re-read. This avoids a restart of
Archive Center.
To re-read scripts:
2. Select the Protocol tab in the top area of the result pane.
OpenText recommends keeping your system up to date to avoid security and other
issues. You should install all mandatory patches for Archive Center, and you should
use the latest updates of Java and Tomcat.
2. Change the value of the following system environment variables to the path to
the new Java version (for example, C:\Program Files\Java\jre1.8.0_121):
• ECM_DP_ACCMIS_JAVA_HOME
• ECM_DP_BASE_JAVA_HOME
• ECM_DP_FSA_JAVA_HOME
• ECM_DP_INFO_JAVA_HOME
• ECM_JAVA_HOME
Tip: On Windows, run set ecm at a command prompt. Verify that all
shown variables ending with _JAVA_HOME are set to the correct Java path.
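As a sketch of the same verification in a POSIX shell (the path is an example; on Windows, use `set ecm` as in the tip above):

```shell
NEW_JAVA="/opt/java/jre1.8.0_121"   # example path to the new Java version
for v in ECM_DP_ACCMIS_JAVA_HOME ECM_DP_BASE_JAVA_HOME \
         ECM_DP_FSA_JAVA_HOME ECM_DP_INFO_JAVA_HOME ECM_JAVA_HOME; do
  export "$v=$NEW_JAVA"
done
# Verify: every *_JAVA_HOME variable must point at the new path.
env | grep '_JAVA_HOME='
```
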
3. If you imported certificates into the cacerts file of the JRE, you may have to
import them again into the corresponding cacerts file of the updated JRE.
4. Restart the Apache Tomcat and Archive Spawner services.
To update Tomcat:
1. Download the latest 64-bit container file from the Tomcat website (https://
tomcat.apache.org/download-80.cgi).
• conf folder
• In the lib folder:
• archive-help-config.jar
• as_bizprovAPI.jar
• as_intf.jar
• as_metadataAPI.jar
• commons-logging-1.1.1.jar
• ixosBaseIntf.jar
• jicsdb_intf.jar
• jicsx_intf.jar
• log4j.jar
• docs
• manager
• ROOT
• examples (if installed)
• host-manager (if installed)
7. Copy the following folders from the temporary folder of the new Tomcat to the
<Tomcat_home> folder:
• bin
• lib
• webapps (accept to merge the folders)
8. Copy the saved files from the lib folder (Step 3) into the <Tomcat_home>\lib
folder.
About migration
1. Create copy orders for the volume components, using the VolMig Migrate
Components on Volume utility.
3. Check the migration status using the VolMig Status utility. For more
information, see “Monitoring the migration progress“ on page 275.
Attribute migration
Apart from the volume migration, you can use the attribute migration job to move
the metadata information that is stored in the ATTRIB.ATR files of archived
documents to the database; see “Attribute migration“ on page 287. In particular,
you must run the attribute migration job after upgrading to version 10.5.0.
Important
Attribute migration must be finished for all documents to be migrated.
Otherwise, the volume migration will fail.
As a remedy, use copy orders for shadow pools. For more information, see
“Creating and configuring shadow pools” on page 97 and “Creating copy orders
for shadow pools” on page 104.
Caution
Consider that replication and backup settings are not transferred to the
target archive during migration. Therefore, the configuration for backup
and replicated archives must be performed for the migrated archive again.
See “Configuring remote standby scenarios“ on page 195 and “Creating
and modifying pools” on page 92.
1. Select Configuration object in the console tree and search for the respective
variable (see “Searching configuration variables” on page 222).
1. Select Configuration object in the console tree, search for the respective variable
(see “Searching configuration variables” on page 222).
2. Specify the logging parameters for the volume migration:
1. If migrating documents archived with Archive Server versions before 10.5.0: Ensure
that the attribute migration is done for all documents to be migrated by running
the SYS_MIGRATE_ATTRIBUTES job. For more information, see “Attribute
migration“ on page 287.
Important
Attribute migration must be finished for all documents to be migrated.
Otherwise, the volume migration will fail.
2. Start the Administration Client, select the dedicated logical archive and create a
new pool for the migration. See “Creating and modifying pools” on page 92.
Note: Components not listed in the ds_comp table are ignored. To ensure
that all components of one medium are listed in the ds_comp table,
OpenText recommends that you call volck first.
4. Create and schedule a job in the OpenText Administration Client for the
Migrate_Volumes command. See “Configuring jobs and checking job protocol“
on page 115.
1. Create and schedule a job in the OpenText Administration Client for the
Migrate_Volumes command. See “Configuring jobs and checking job protocol“
on page 115.
2. Disable backup for the original pool so that the server does not create additional
(unwanted) backups in the original pool.
To prevent more data from being copied to the migrated volume, you can set the
volume to write-locked. Read access remains possible; write access is blocked.
3. Select the Pools tab in the top area of the result pane. The attached volumes are
listed in the bottom area of the result pane.
4. Select the volume to be write locked and click Properties in the action pane.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
4. Enter appropriate settings in all fields (see Settings for local migration
on page 269).
Click Run.
• the scheduler of the Administration Server calls the job Migrate_Volumes and
• all previous jobs have been processed.
Source Volume
Specify the name of the source volume(s). The following wildcard characters
are available:
Character Description
* Wildcard: 0 to n arbitrary characters
For example, vol5* matches all volumes whose name begins with vol5;
for example, vol5a, vol5c78, vol52e4r
? Wildcard: exactly one arbitrary character
For example, volx?x, matches volxax to volxzx and volx0x to
volx9x
\ Escapes wildcards (*, ?) when they are used as literal characters in
volume names.
[] Specifies a set of volume names:
• “[ ]” can be used only once
• “,” can be used to separate numbers
• “-” can be used to specify a range
For example, [001,005-099]
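The * and ? wildcards behave like shell glob patterns, so their effect can be illustrated in a shell (the [] set syntax is specific to this utility and is not a shell glob):

```shell
# matches PATTERN NAME -> exit status 0 if NAME matches PATTERN
matches() {
  case "$2" in
    $1) return 0 ;;
    *)  return 1 ;;
  esac
}
matches 'vol5*'  'vol5c78'  && echo "vol5c78 matches vol5*"
matches 'volx?x' 'volxax'   && echo "volxax matches volx?x"
matches 'volx?x' 'volxaax'  || echo "volxaax does not match volx?x"
```
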
Target archive
Enter the target archive name.
Target pool
Enter the target pool name.
Migrate only components that were archived: On date or after
You can restrict the migration operation to components that were archived after
or on a given date. Specify the date here. The specified day is included.
Note: The retention date of migrated documents can only be kept or extended.
The following table provides allowed settings:
Verification mode
Select the verification mode that should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes
• Many documents (including all BLOB documents) do not have a checksum
or a timestamp. When migrating a volume that contains such documents or
BLOBs, it is strongly recommended to select a mode that includes “binary
compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying attributes of a
migration job” on page 282 (-v parameter).
Additional arguments
-e
Export source volumes after successful migration.
-k
Keep exported volume (export only the document entries, allow
dsPurgeVol to destroy this medium).
-i
Migrate only latest version, ignore older versions.
-A <archive>
Migrate components only from a certain archive.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
Source Volume
Specify the name of the source volume(s). The following wildcard characters
are available:
Character Description
* Wildcard: 0 to n arbitrary characters
For example, vol5* matches all volumes whose name begins with
vol5; for example, vol5a, vol5c78, vol52e4r
? Wildcard: exactly one arbitrary character
For example, volx?x, matches volxax to volxzx and volx0x to
volx9x
\ Escapes wildcards (*, ?) when they are used as literal characters
in volume names.
[] Specifies a set of volume names:
• “[ ]” can be used only once
• “,” can be used to separate numbers
• “-” can be used to specify a range
For example, [001,005-099]
• the scheduler of the Administration Server calls the Migrate_Volumes job and
• all previous jobs have been processed.
You can display an overview of migration jobs to check the progress of migration.
Each migration job has a unique ID, optional flags and a status. This information is
also needed to manipulate migration jobs. See “Manipulating migration jobs“
on page 279.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
• New
• In progress
• Finished
• Cancelled
• Error
5. Click Run. An overview of migration jobs with the requested job status opens.
• New (enqueued)
VolMig has not yet started to process this migration job.
• Prep (prepare component list)
VolMig has started to query the components on the current medium to be
migrated.
• Iso (create and write an ISO image file)
For fast migration jobs, entire ISO images are replicated at once. This state
indicates that VolMig is retrieving an ISO image file from a volume or is writing
that image file to the target storage.
• Copy (create write jobs)
VolMig is now instructing the DS to copy the components from the source
medium to the migration pool. Entries in the ds_activity table are created.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to pause via the VolMig Status
utility; see “Monitoring the migration progress“ on page 275.
5. Enter the ID of the migration job that you want to pause in the Migration Job
ID(s) field.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to continue via the VolMig
Status utility; see “Monitoring the migration progress“ on page 275.
5. Enter the ID of the migration job that you want to continue in the Migration Job
ID(s) field.
6. Click Run. A protocol window shows the progress and the result of the
migration. The migration job is set back to the status it had before it was paused
or the error occurred.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to cancel via the VolMig Status
utility. See “Monitoring the migration progress“ on page 275.
5. Enter the ID of the migration job that you want to cancel in the Migration Job
ID(s) field.
6. Click Run.
A protocol window shows the progress and the result. The migration job is set
to the Canc status. All copy jobs for this migration job are deleted.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to renew via the VolMig Status
utility. See “Monitoring the migration progress“ on page 275.
5. Enter the ID of the migration job that you want to renew in the Migration Job
ID(s) field.
6. Click Run. A protocol window shows the progress and the result of the
migration. The migration job is set to the New status and is started from the
beginning.
The volume migration suite provides additional utilities to support you in
performing your migration. These utilities must be executed in a command shell.
The following sections explain the most important vmclient commands with their
corresponding attributes.
jobID
The ID of the migration job to be deleted.
jobID
The ID of the migration job to be finished.
jobID
The ID of the migration job to be modified.
attribute
The attributes that can be modified.
-e (export)
Export source volumes after successful migration.
-k (keep)
Do not set the exported flag for the volume (so dsPurgeVol can destroy it).
-r <value> (retention)
Set a new value for the retention of the migrated documents.
Not supported in Fast Migration scenarios.
old poolname
Is constructed by concatenating the source archive name, an underscore
character and the source pool name, for example, H4_worm.
new poolname
Is constructed by concatenating the target archive name, an underscore
character and the target pool name, for example, H4_iso.
-d
Update pools in ds_job only.
-v
Update pools in both ds_job and vmig_jobs.
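The pool-name construction described above can be sketched as a one-liner; the archive and pool names below are the examples from the text:

```shell
src_archive=H4; src_pool=worm
tgt_archive=H4; tgt_pool=iso
old_pool="${src_archive}_${src_pool}"   # -> H4_worm
new_pool="${tgt_archive}_${tgt_pool}"   # -> H4_iso
echo "$old_pool $new_pool"              # prints: H4_worm H4_iso
```
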
jobID
The ID of the migration job whose components should be listed.
max results
The maximum number of components to list.
archive
The archive name.
pool 1
Name of the first pool.
pool 2
Name of the second pool.
archive
The archive name.
pool
The pool name.
sequence number
New number of the sequence.
sequence letter
New letter (for ISO pools only).
volume name
Name of the primary volume.
output file
File to write the output to instead of stdout.
Attribute migration
“On the fly” migration
The information within the document’s ATTRIB.ATR file is migrated to the database
upon the first access of the document. This automatic migration process does not
require any user interaction.
“Bulk” migration
In addition to the automatic migration mentioned before, a job exists that migrates
the metadata in the ATTRIB.ATR files to the database. By default, the job is scheduled
to run every Sunday at 0:30.
• Follow the procedure in “Starting and stopping jobs” on page 120 to start the
attribute migration. The name of the job is SYS_MIGRATE_ATTRIBUTES.
The job runs the AttribAtrMigrate command, which requires the following
parameters:
AttribAtrMigrate { [-t <threads>] migrate {null|err} <time to run> |
report }
where
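As a sketch based on the syntax line above, a bulk run might be invoked as follows. The parameter values are illustrative, and the unit of <time to run> is an assumption to verify against your installation (the retry job below passes 60):

```shell
# Illustrative only: migrate documents without previous errors ("null"),
# using 4 worker threads, with a run time of 60 (unit as defined by the product).
AttribAtrMigrate -t 4 migrate null 60
```
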
Failed migrations
If the migration to the database failed for a document (states other than O or Y),
you can run a job to retry the migration. If the error persists, the problem must be
fixed manually. Use the Review Attribute Migration Errors utility to list the failed
documents; see below.
• Follow the procedure in “Starting and stopping jobs” on page 120 to start the
attribute migration. The name of the job is
SYS_RETRY_ATTRIBUTE_MIGRATION.
The job runs the AttribAtrMigrate command with the following parameters:
AttribAtrMigrate migrate err 60
1. To start the utility, follow the procedure in “Starting utilities” on page 250.
2. When requested, enter the number of errors to review or keep the field empty to
use the default value (1000).
To monitor the archiving system, you can use Administration Client, Archive Server
Monitoring, and Document Pipeline Info. Administration Client and Document
Pipeline Info must be installed on the administrator's computer and can connect to
different Archive Centers and Document Pipeline hosts via the network. Archive
Monitoring Web Client is installed on the Archive Center, runs in a browser, and is
accessible with a URL.
Administration Client
• Checking the success of jobs, in particular of the Write and Backup jobs
• Checking for notifications according to your configuration (emails, alerts,
execution of files; see “Monitoring with notifications“ on page 293)
• Checking free disk space
For more information about Archive Server Monitoring, see “Using OpenText
Archive Server Monitoring“ on page 303.
By setting up a notification service, you can reduce the amount of work associated
with monitoring the archive system. The Notification Server sends notifications
when certain predefined server events occur. You can define both the events and the
type and recipient of the notification. You can also restrict the time slot in which
particular notifications are sent. For example, you can define notifications sent to the
workstation during working hours and by email to the on-call service outside
working hours. Thus, you ensure that responsible persons are addressed directly
when a particular event occurs.
1. Define the events filter to which the system should respond; see “Creating and
modifying event filters” on page 293.
2. Create the type and settings of the notifications and assign them specific event
filters; see “Creating and modifying notifications” on page 297.
Some important event filters are already predefined. You can change them and
define new event filters.
1. Select Events and Notifications in the System object in the console tree.
2. Select the Event Filters tab. All available event filters are listed in the top area of
the result pane.
3. Click New Event Filter in the action pane. The window to create a new event
filter opens.
4. Enter the conditions for the new event filter. See “Conditions for event filters”
on page 294.
5. Click Finish.
Modifying event filters
To modify an event filter, select it in the top area of the result pane and click
Properties in the action pane. Proceed in the same way as when creating a new
event filter. The name of the event filter cannot be changed.
Deleting event filters
To delete an event filter, select it in the top area of the result pane and click Delete in
the action pane.
Related Topics
• “Conditions for event filters” on page 294
• “Available event filters” on page 296
• “Creating and modifying notifications” on page 297
• “Checking alerts” on page 301
Name
A self-explanatory name
Message class
Classifies and characterizes events
• Any (all classes are recorded)
• Administration: events that affect administration
• Database: database event
• Server: server event
Component
Specifies the software component that issues the message. If nothing is specified
here, all components are recorded (Any). The most important components are:
• Administration Server: mainly monitors the execution of the jobs
• Monitor Server: reports status changes of archive components, i.e. whenever
a status display changes in Archive Monitoring Web Client
• Document Service: monitors the jds, which provides archived documents
and archives documents
• Storage Manager: reports errors that occur when writing to storage media
• Archive Timestamp Server: reports errors that occur when creating or
administering timestamps
• High Availability: reports errors associated with High Availability software
and the cluster software it uses
• Volume Migration: reports errors that occur during volume migration
• BASE DocTools: reports errors associated with BASE DocTools
• R/3 DocTools: reports errors associated with R/3 DocTools (SAP)
• Filter Service: not used
Severity
Specifies the importance.
Message codes
Specifies which message codes should be considered by the event filter. The
codes are used to filter out concrete events and are usually defined in a message
catalog, which belongs to a component. For each component, the catalog is
installed in
<OT config>\msgcat\<COMPNAME>_<lang>.cat
Example: ADMS_us.cat is the English message catalog for the Administration Server
component.
It is possible to enter the code number directly, but it is recommended and more
convenient to use the Select button, which opens a window with the currently
available message codes and associated descriptions.
2. Click Select. A window with the currently available message codes opens. The
available message codes depend on the selected combination of message
class, component, and severity.
3. Select the designated message code and click OK to resume. If you define a
range, select the first and the last message code (from – to).
Related Topics
User-defined events
In addition, you can define other events to get notifications when they occur. Useful
events are:
Job Error
This event records errors that are listed in the job protocol and notifies you with
a particular message. Use this configuration:
Severity: Error
Message class: Server or <any>
Component: Administration Server
Message code: 1
Severity: Error
Message class: Server or <any>
Component: Monitor Server
Message code: -
Severity: Warning
Message class: Server or <any>
Component: Monitor Server
Message code: -
Related Topics
• “Conditions for event filters” on page 294
• “Checking alerts” on page 301
• “Creating and modifying notifications” on page 297
• Alert: a passive notification type; alerts must be checked by the administrator
(see “Checking alerts” on page 301)
• Mail Message: an active notification type; when the assigned event occurs, a
message is sent
• TCL Script: an active notification type; when the assigned event occurs, a tcl
script is executed
• Message File: a passive notification type; notifications are written to a specific file
• SNMP Trap: an active notification type; notifications are sent to an external
monitoring system via the SNMP protocol
To create a notification:
1. Select Events and Notifications in the System object in the console tree.
2. Select the Notifications tab. All available notifications are listed in the top area
of the result pane.
3. Click New Notification in the action pane. The wizard to create a new
notification opens.
4. Enter the name and the type of the notification and click Next. Enter the
additional settings for the new notification event. See “Notification settings”
on page 298.
6. Select the new notification in the top area of the result pane.
7. Click Add Event Filter in the action pane. A window with available event filters
opens.
8. Select the event filters which should be assigned to the notification and click
OK.
Modifying notification settings
To modify the notification settings, select the notification in the top area of the result
pane and click Edit in the action pane. Proceed in the same way as when creating a
new notification. The name of the notification cannot be changed.
Deleting notifications
To delete a notification, select the notification in the top area of the result pane and
click Delete in the action pane.
Adding event filters
To add event filters, select the notification in the top area of the result pane. Click
Add Event Filter in the action pane. Proceed in the same way as when creating a
new notification.
Removing an event filter
To remove an event filter, select it in the bottom area of the result pane and click
Remove in the action pane. The notification events are not lost; only the assignment
is deleted.
Related Topics
• “Notification settings” on page 298
• “Checking alerts” on page 301
• “Using variables in notifications” on page 300
Name
The name should be unique and meaningful.
Notification Type
Select the type of notification and enter the specific settings. The following
notification types and settings are possible:
Alert
Alerts are notifications, which can be checked by using Administration
Client. They are displayed in Alerts in the System object in the console tree
(see “Checking alerts” on page 301).
Mail Message
Emails can be sent to respond immediately to an event, even during standby
time. If you want to send the notification via SMS, consider that the length of
the SMS text (which includes Subject and Additional text) is limited by most
providers. Enter the following additional settings:
• Sender address: Email address of the sender. It appears in the from field
in the inbox of the recipient. The entry is mandatory.
• Mail host: Name of the target mail server. The mail server is connected
via SMTP. The entry is mandatory.
• Recipient address: Email address of the recipient. If you want to specify
more than one recipient, separate them by a semicolon. The entry is
mandatory.
• Subject of the mail, $ variables can be used (see “Using variables in
notifications” on page 300). If not specified, the subject is $SEVERITY
message from $HOSTNAME/$USERNAME($TIME).
• Include Standard Text: If selected, you get an introduction in the
notification: “The preceding notification message was generated by ...”.
This introduction is followed by the message text. If you send SMS
messages, clear this check box.
• Max. Length of mail message text: Use this setting to restrict the number
of characters in the email body. If you send notifications as SMS
messages, enter a value according to your provider's limit.
TCL Script
Enter the name and the path of the tcl script. It will be executed if the event
occurs.
Message File
The notification is written to a file. Enter name and path of the target file or
click Browse to open the file browser. Select the designated message file and
click OK to confirm.
Enter also the maximum size of the message file in bytes.
SNMP Trap
Provides an interface to an external monitoring system that supports the
SNMP protocol. Enter the information on the target system.
Text
Free text field with the maximum length of 255 characters. $ variables can be
used (see “Using variables in notifications” on page 300).
Active Period
Weekdays and time of the day at which the notification is to be sent.
Related Topics
• “Creating and modifying notifications” on page 297
$CLASS
Message class, characterizes the event
$COMP
Component that has output the message
$SEVERITY
Type of message, characterizes the importance
$TIME
Date and time when the message was output from the component (system time
of the computer on which the component is installed)
$HOST
Name of the computer on which the reported event occurred. For server
processes, “daemon” is output
$USER
Name of the user under which the processes run on the $HOST machine
$MSGTEXT
Message text from the message catalog. Important messages are listed first. If
there is no catalog message, the default text provided by the component is used
$MSGNO
Code number from the message catalog
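As an illustration (not a product default), a custom Subject or Text entry combining these variables might read:

```
$SEVERITY message from component $COMP on $HOST: $MSGTEXT (code $MSGNO)
```
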
Related Topics
• “Notification settings” on page 298
• “Checking alerts” on page 301
To check alerts:
1. Select Alerts in the System object in the console tree. All notifications of the
alert type are listed in the top area of the result pane.
2. Select the alert to be checked in the top area of the result pane. Alert details are
displayed in the bottom area of the result pane. The yellow icon of the alert
entry turns to grey if read.
Marking messages as read
To mark all messages as read, click Mark All as Read in the action pane. The yellow
icons of the alert entries turn to grey.
Tasks
The OpenText Archive Server Monitoring Web Client provides the following
monitoring functions:
• Archive Center Statistics - Checking the archiving and retrieving activities and
the Archive Center’s read/write performance.
• Archive Center Health Status - Checking the status of the Archive Center
components.
• Checking free storage space in the log directories
• Checking free storage space in pools and volumes
• Checking the Document Service and access to unavailable volumes
• Checking the Storage Manager
• Archive Center Threat Detection - Checking quota limit violations reported for
Archive Center users.
OpenText Archive Server Monitoring is used solely to observe the global system and
to identify problem areas. Monitoring collects information about the status of
Archive Center components at regular intervals.
Monitoring cannot be used to eliminate errors, modify the configuration, or start and
stop processes.
OpenText Archive Server Monitoring can be started using the URL of the Archive
Center host, for example,
https://alpha.opentext.com:8090/archive/monitoring (see “Starting the
Archive Monitoring Web Client” on page 306).
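Since the Web Client uses basic authentication over HTTP(S), a quick availability probe from the command line might look like the following sketch. The host, port, and path are from the example above; the user name and password are placeholders:

```shell
# Illustrative probe; prints the HTTP status code. Replace the
# credentials and host with values from your installation.
curl -sS -u monadmin:secret -o /dev/null -w '%{http_code}\n' \
  "https://alpha.opentext.com:8090/archive/monitoring"
```
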
Warning and error messages
With Administration Client, you can configure warning and error messages that are
sent when the status of Archive Center components changes (see “Monitoring with
notifications“ on page 293). You can also use external system management tools
within the scope of special project solutions.
Security
• The Archive Monitoring Web Client requires authentication.
• As the Archive Monitoring Web Client uses basic authentication, it is strongly
recommended to use the HTTPS protocol.
To create a dedicated user and group for Archive Monitoring Web Client (built-
in user management):
a. Full access:
Create a new group and assign the MonitoringChangeAccess and
MonitoringReadAccess policies to it:
i. In the console tree, select Archive Center > System > Users and Groups
ii. In the action pane, click New Group.
iii. Enter a Group name and click OK.
iv. In the result pane, on the Groups tab, select the group you just created.
In the action pane, click Add Policy.
v. Select the MonitoringChangeAccess and MonitoringReadAccess
policies and click OK.
b. Read-only access:
Create a new group and assign the MonitoringReadAccess policy to it:
i. In the console tree, select Archive Center > System > Users and Groups.
ii. In the action pane, click New Group.
iii. Enter a Group name and click OK.
iv. In the result pane, on the Groups tab, select the group you just created.
In the action pane, click Add Policy.
v. Select the MonitoringReadAccess policy and click OK.
The user now has full access or read-only access to the Archive Monitoring Web
Client.
4. Create a new (OTDS) group and name it exactly as the group in built-in user
management you created before. The user partition of the new group must be a
member of the access role for the Archive Center resource:
a. In the console tree, select Directory Services > User Partitions > <your user
partition>.
Tip: To verify that the user partition is a member of the access role for
the Archive Center resource, select Directory Services > Access Roles
> <your access role>. In the result pane, the user partition must be listed
in the Members tab.
5. You can add an existing user to the group or create a new one. In case of a non-
synchronized resource, you can create a user in the following way:
a. In the console tree, select Directory Services > User Partitions > <your user
partition> and in the action pane, click New User.
b. In the New User wizard, specify all required information and click Finish.
6. In the result pane, on the Groups tab, select the group you’ve just created.
The user now has full access or read-only access to the Archive Monitoring Web
Client.
Example: https://archiveserver.example.com:8090/archive/monitoring
After signing in, the Archive Server Monitoring main page displays the links to the
monitoring menus:
• Archive Center Statistics
see “Archive Center Statistics” on page 307
• Archive Center Health Status
see “Archive Center Health Status” on page 308
• Archive Center Threat Detection
see “Threats” on page 310
Diagrams show the number of Components and the Data volume handled by the
Archive Center during a specific period of time, as well as the read/write
Performance.
Note: The monitor does not provide archive-specific statistics. The monitor
diagrams refer to all Archive Center activities.
• Supported diagrams:
• Number of components handled by the Archive Center (read/write)
• Data volume (MB) handled by the Archive Center (read/write)
• Read/write performance (MB/s) of the Archive Center
• Supported time periods:
• Last 24 hours (hourly)
• Last 7 days (daily)
• Last 30 days (daily)
The component status can be Ok, Warning, or Error. Details are displayed for the following groups:
• “Database” on page 309
• “Storage Manager” on page 309
• “Services” on page 309
• “Pools and Volumes” on page 310
Note: Depending on the installed Document Pipelines and the current Archive
Center configuration, the Health Status can report more status change groups.
32.4.1 Database
The monitor checks the log files of the database tools for errors.
32.4.2 Storage Manager
<jukebox_name>
Provides an overview of the volumes for each attached jukebox. The possible status values are Ok, Warning, or Error. Warning means that there are no writeable volumes or no empty slots in the jukebox. Error is displayed if at least one corrupt medium is found in a jukebox (displayed as -bad- under Devices in OpenText Administration Client).
The following information is displayed in Details:
32.4.3 Services
The monitor checks the Document Service, the Archive Center component that
archives documents and delivers them for display. The monitor checks comprise the
following services:
The status of admsrv, bksrvr, tstp, and auxsrvr is Active or Error. Error means that
the component cannot be executed and must be restarted.
The status of the Storage Manager is Active if the server is running. A status of
either Can't call server, Can't connect to server, or Not active indicates that the
server is either not reachable or not running. Check the jbd.log log file for errors. If
necessary, solve the problem and start the Storage Manager again.
32.5 Threats
For each user, the monitor reports the number of components and the data volume
(number of bytes) downloaded per day during the last 30 days.
When a defined download quota limit per day and per user is exceeded, a threat
report (event) is created.
• Only one threat report will be sent per day and per user, unless the Threat
Settings are changed during the day.
• Component quota
• Data volume quota (MB)
• Block user
• Notify to
2. Specify the quota limits that, if exceeded, trigger a threat report that is displayed
in the Threats menu.
Click the Back button after each change to the Settings.
• Component quota
Maximum number of components a user has downloaded per day.
• Data volume quota (MB)
Maximum data volume in MB a user has downloaded per day.
• Block user
Specify whether a user is blocked from further downloading, when a quota
limit is exceeded.
Move the slider to the On or Off position.
• Notify to
Specify the E-MAIL SETTINGS for sending a notification message if a user
has exceeded the quota limit.
User history list
The Threats menu displays the list of users who have exceeded the specified quota limits.
• If Block user is set to Off for the quota limit, a warning is displayed.
• If Block user is set to On for the quota limit, further downloading activities are blocked until midnight.
You can unblock a user’s downloading activities if the specified quota limits
were exceeded:
• Set Block user to Off for the quota limits.
• Specify higher quota limits.
• At midnight, all users’ downloading activities are automatically unblocked.
For each USER HISTORY, the record of retrieving and downloading activities is
displayed in Charts and in Table format.
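The quota rule described above (a threat report when a user's daily downloads exceed the component or data volume quota) can be sketched as a simple filter. This is illustrative only: the input format and quota values here are hypothetical, not Archive Center's actual report format.

```shell
#!/bin/sh
# Flag users whose daily downloads exceed the quotas.
# Hypothetical input: one line per user and day: user,components,volume_mb
COMP_QUOTA=1000
MB_QUOTA=500

flag_threats() {
  awk -F, -v cq="$COMP_QUOTA" -v mq="$MB_QUOTA" \
      '$2 > cq || $3 > mq { print $1 }'
}

# Sample input; a real report would come from the monitoring data.
THREATS=$(printf 'alice,1200,300\nbob,10,20\ncarol,50,900\n' | flag_threats)
echo "$THREATS"
```

Here alice exceeds the component quota and carol the data volume quota, so both would trigger a threat report.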
33.1 Auditing
The auditing feature of Archive Center traces events of two aspects:
Important
Administrative changes are only recorded if they are made with Administration Client. To get complete audit trails, make sure that other ways of configuration cannot be used, for example, editing configuration files directly. At a minimum, such tasks must be logged by other means.
The auditing data is collected in separate database tables and can be extracted from
there with the exportAudit command to files, which can be evaluated in different
ways.
To audit the lifecycle of the documents, activate the Auditing option of the archive.
As the auditing mode is related to logical archives, enable it for each archive that is
subject to auditing.
exportAudit
To extract the document-related auditing data to files, use the command:
exportAudit -S
To extract the administration-related auditing data to files, use the command:
exportAudit -A
Options
With further options, you can adapt the output to your needs. For example, you should probably define the timeframe for data extraction (-s and -e options). Without these dates, you get all audit data up to the current date and time, which can result in very large files and long export times.
Run exportAudit /? to get a list of all options.
Command:
exportAudit -S -s 2005/07/14:12:00:00 -e 2005/07/19:08:00:00 -o
csv -h -a
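A small wrapper can compute the -s and -e timestamps automatically. This is a sketch only: it assumes GNU date and that exportAudit is on the PATH; the command is echoed rather than executed.

```shell
#!/bin/sh
# Build the -s/-e timestamps in the YYYY/MM/DD:HH:MM:SS format that
# exportAudit expects, covering the last seven days.
START=$(date -d '7 days ago' '+%Y/%m/%d:%H:%M:%S')   # GNU date syntax
END=$(date '+%Y/%m/%d:%H:%M:%S')

# Assemble the call; -S selects document-related data, with -o csv, -h,
# and -a as in the example above. Echoed here instead of executed.
CMD="exportAudit -S -s $START -e $END -o csv -h -a"
echo "$CMD"
```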
Event Description
EVENT_CREATE_DOC Document created
EVENT_CREATE_COMP Document component created on volid1
EVENT_UPDATE_ATTR Attributes updated
EVENT_TIMESTAMPED Document timestamped on volid1 (dsSign,
dsHashTree)
EVENT_TIMESTAMP_VERIFIED Timestamp verified on volid1
EVENT_TIMESTAMP_VERIF_FAILED Timestamp verification failed on volid1
EVENT_COMP_MOVED Document component moved from HDSK volid1
to volid2 (dsCD etc. with -d)
EVENT_COMP_COPIED Document component copied from volid1 to
volid2 (dsCD etc. without -d)
EVENT_COMP_PURGED Document component purged from HDSK volid1
(dsHdskRm)
EVENT_COMP_DELETED Component deleted from volid1
EVENT_COMP_DELETE_FAILED Component deletion from volid1 failed
EVENT_COMP_DESTROYED Component destroyed from volid1
EVENT_DOC_DELETED Document deleted
EVENT_DOC_MIGRATED Document migrated
EVENT_DOC_SET_EVENT setDocFlag with retention called
EVENT_DOC_SECURITY Security error when attempting to read doc
Related Topics
• “Searching configuration variables” on page 222
33.2 Accounting
Archive Center allows you to collect accounting data for further analysis and billing.
To use accounting:
2. Evaluate the accounting data; see “Evaluating accounting data” on page 317.
2. Edit the Disable accounting for certain jobs variable (internal name:
ACC_SUPPRESSED_JOBS).
The value of the variable must hold all job numbers to be disabled for
accounting, separated by commas. A value of 0 means that no job is disabled.
Further information
For more information about configuration settings, see “Configuration parameter reference” on page 335.
If you archive the old accounting data, you can also access the archived files. The
Organize_Accounting_Data job writes the DocIDs of the archived accounting files
into the ACC_STORE.CNT file which is located in the accounting directory (defined in
Path to accounting data files).
The tool saves the files in the <target directory> where you can use them as usual.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the View Installed Archive Server Patches utility.
3. Click Run in the action pane.
4. In the field View patches for packages enter the package whose patches you
want to list. Leave the field empty to view all packages.
5. Click Run to start the utility.
Related Topics
• “Utilities“ on page 249
• “Checking utilities protocols” on page 250
If, however, any of these parameters have been chosen inappropriately, you still can
correct them by taking the following steps:
1. Create the two correct directories in the file system and make sure that they are
owned and writeable by the Archive Spawner user.
2. Correct the directory settings in the configuration:
Archive Administration Utilities
The Archive Administration Utilities are Archive Monitoring Web Client, Document Pipeline Info, and Administration Client. You can find a short summary of their use in “Everyday monitoring of the archive system“ on page 291.
System tools
The most important error messages are also displayed in the Windows Event Viewer or in the UNIX/Linux syslog. This information is a subset of the information generated in the log files. Use these tools to see the error messages for all components in one place.
You can prevent the transfer of error messages to the system tools in general or for
single components with the setting Write error messages to Event Log / syslog; see
“Log settings for Archive Center components (except STORM)” on page 332.
Log files record the activities of the archive components. The number of log entries, and thus the size of the log files, depends on the log level that has been set. Check the size of the log files regularly and delete files that have grown large. They will be automatically recreated when Archive Center is started.
The log files for Archive Center can be found in the <OT logging> directory.
Important
Stop the Archive Spawner service before you delete the log files!
On client workstations, other log files are used. For more information, see the
OpenText Imaging documentation.
The Oracle database also generates log and trace files for diagnostic purposes. As
administrator, you should regularly check the size of the following files and delete
them from time to time:
Windows
<ORACLE_HOME>\network\log\listener.log (log file)
<ORACLE_HOME>\network\trace\* (trace files)
<ORACLE_HOME>\rdbms\trace\*.trc (trace files)
UNIX/Linux
$ORACLE_HOME/network/log/listener.log (log file)
$ORACLE_HOME/network/trace/* (trace files)
$ORACLE_HOME/rdbms/log/*.trc (trace files)
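Housekeeping like this is easy to script. The sketch below only lists candidate trace files older than a given age, so the list can be reviewed before anything is deleted; the function name and age threshold are illustrative.

```shell
#!/bin/sh
# List Oracle trace files older than a given number of days so they can be
# reviewed and removed. $1 = ORACLE_HOME, $2 = minimum age in days.
old_trace_files() {
  find "$1/rdbms/trace" "$1/rdbms/log" -name '*.trc' \
       -mtime "+$2" -print 2>/dev/null
}

# Example: old_trace_files "$ORACLE_HOME" 30
```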
Archive Center and the database are automatically started by the operating system
when the hardware is started. However, there are situations in which you have to
start or stop Archive Center components manually without shutting down the
hardware, for example, when you back up the system data or when you perform
system administration tasks that require a manual stop of Archive Center
components. A restart can also help to figure out the reason of a problem.
After the restart, read the log file spawner.log in the directory <OT logging>. You
can see whether all the processes have started correctly (see also “Spawner log file”
on page 329).
You can simply use the OpenText Administration Client to start and stop Archive
Center components. If the tool is not available, you can use the Windows Services, or
command line calls. Note that the order in which the components are started or
stopped is important. Call the commands in the given order.
Note: The following commands are not valid for installations in cluster
environments.
Starting
Windows Services
To start Archive Center using the Windows Services, proceed as follows:
1. To open the Windows Services, do one of the following:
Command line To start Archive Center from the command line, enter the following commands in
this order:
Stopping
Windows Services
To stop Archive Center components using the Windows Services, proceed as follows:
Command line To stop Archive Center components from the command line, enter the following
commands in this order:
Starting
Use the commands listed below to restart Archive Center after the archive system
has been stopped without shutting down the hardware.
1. Log on as root.
2. Start the archive system including the corresponding database instance with:
HP-UX: /sbin/rc3.d/S910spawner start
Stopping
1. Log on as root.
HP-UX: /sbin/rc3.d/S910spawner stop
AIX: /etc/rc.spawner stop
Solaris: /etc/rc3.d/S910spawner stop
Linux: /etc/init.d/spawner stop
Linux, HP-UX, Solaris
Under Linux, HP-UX, and Solaris, symbolic links to the startup scripts ensure that the archive system is automatically terminated when the host is shut down or rebooted.
AIX
Under AIX, insert the line sh /etc/rc.spawner stop into the /etc/rc.shutdown script to ensure automatic termination. After a new installation of AIX, this script does not exist; the system administrator must create it.
1. Under UNIX/Linux, load the Archive Center environment first: <OT config
AC>/setup/profile
2. Check the status of the process with spawncmd status (see “Analyzing
processes with spawncmd” on page 329).
Description of parameters
{start|stop}
To start or stop the specified process.
<process>
The process you want to start or stop. The name appears in the first column of
the output generated by spawncmd status.
Important
You should not simply restart a process that was stopped, regardless of the reason. This is especially true for Document Service, since its processes must be started in a defined sequence. If a Document Service process was stopped, it is best to stop all the processes and then restart them in the defined sequence. Inconsistencies can also occur when you start and stop the monitor program or the Document Pipelines this way.
No maintenance mode
No restrictions to access the server.
3. Click OK.
Note: The following commands and paths for log files are not valid for
installations in cluster environments.
Note: The Spawner must be running on the computer for these commands to
take effect.
Command line
Under UNIX/Linux, load the Archive Server environment first: <OT config AC>/setup/profile. In all environments, open a command line.
• exit
• reread
• restart <service>
• start <service>
• stop <service>
• startall
• status
• stopall
spawncmd status
The following table briefly describes some processes. Enter spawncmd status to get the current status.
Process Description
bksrvr Backup server process
Clnt_dp Client to monitor the Document Pipelines
Clnt_ds Client to monitor the Document Service
dp Document Pipelines
ixmonSvc Monitor server process
jbd STORM daemon
notifSrvr Notification server process
timestamp Timestamp Server
doctods, docrm, ... Various DocTools
• R means the process is running. All processes should have this status with the
exception of chkw (checkWorms), stockist and dsstockist; and under
Windows, additionally db.
• T means the process was terminated. This is the normal status of the processes chkw (check worms), stockist, and dsstockist; and under Windows, additionally db. If any other process has the status T, it indicates a possible problem.
The processes chkw and db are validation processes; stockist and dsstockist are initializing processes. They are terminated automatically as soon as they have finished their task.
• S means the Spawner waits for the process to synchronize.
• Process ID, start and stop time.
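The status rules above can be checked automatically. This sketch assumes that in spawncmd status output the first column is the process name and the second is the status letter (R/T/S); verify the actual column layout on your own system before relying on it.

```shell
#!/bin/sh
# Print processes in status T that are NOT expected to terminate
# (chkw, stockist, dsstockist, and db are excluded per the rules above).
check_status() {
  awk '$2 == "T" && $1 != "chkw" && $1 != "stockist" \
       && $1 != "dsstockist" && $1 != "db" { print "possible problem: " $1 }'
}

# Sample lines; real input would be: spawncmd status | check_status
SAMPLE='jbd R
chkw T
bksrvr T
stockist T'
echo "$SAMPLE" | check_status
```

In this sample, only bksrvr is flagged: chkw and stockist are normally terminated, and jbd is running.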
You can find information about the DocTools in Document Pipeline Info. This
interface allows you to start and stop single DocTools and to resubmit documents
for processing.
Note: The system might write several log files for a single component, or a problem might affect several components. To make sure you have the most recent log files, sort them by date.
Log file analysis When analyzing log files, consider the following:
• The message class, that is, the error type, is shown at the beginning of a log entry.
• The latest messages are at the end of the file.
Note: In jbd.log, old messages are overwritten if the file size limit is
reached. In this case, check the date and time to find the latest messages.
• Messages with identical time label normally belong to the same incident.
• The final error message denotes which action has failed. The preceding messages often show the reason for the failure.
• A system component can fail due to a previous failure of another component.
Check all log files that have been changed at the same or similar time. The time
labels of the messages help you to track the causal relationship.
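Because the latest messages are at the end of a file and the message class starts each entry, a quick first look at a component log can be scripted. A sketch, using the permanently logged message types (FTL, IMP, SEC, ERR); the exact entry layout may vary per component.

```shell
#!/bin/sh
# Show recent error-class entries of a log file: inspect the last N lines
# and keep entries starting with a serious message class.
# $1 = log file, $2 = number of trailing lines to inspect.
last_errors() {
  tail -n "$2" "$1" | grep -E '^(FTL|IMP|SEC|ERR)'
}

# Example: last_errors "<OT logging>/jbd.log" 200
```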
The logging of the Storage Manager differs from the logging of other archive
components. To configure the STORM log levels, see OpenText Archive Center -
STORM Configuration Guide (AR-IST).
3. In the result pane, expand Logging. To change the log level for a certain
component, edit the configuration variable for the corresponding component in
the lower part of the result pane.
Permanent log levels
The following incidents are always written to the log files, and usually also to the Event Viewer or syslog. You cannot switch off the corresponding log levels.
• Fatal errors indicate fatal application errors that mostly lead to server crashes
(message type FTL).
• Important errors (message type IMP).
• Security errors indicate security violations such as invalid signatures (message
type SEC).
• Errors indicate serious application errors (message type ERR).
• Warnings indicate potential problem causes (message type WRN).
Important
Higher log levels can generate a large amount of data and can even slow down the archive system. Reset the log levels to the default values as soon as you have solved the problem. Delete the log files only after you have stopped the Spawner.
Time setting
In addition to the log levels, you can define the time label in the log file for each component. Normally, the time is given in hours:minutes:seconds. If you select Log using relative time, the time elapsed between one log entry and the next is given in milliseconds instead of the date, in addition to the normal time label. This is used for debugging and fine-tuning.
CMIS stacktraces
By default, the logging of stacktraces is disabled for the CMIS component. In rare
cases, it may be required to enable the logging of stacktraces.
Note: This option is available for Archive Center 16.2 and later only.
WINDOWS
Start the Configure Tomcat tool or run tomcat8w.exe in the
<Tomcat_home>\bin folder.
UNIX/LINUX
In the <Tomcat home>/bin directory, open the setenv.sh file in an editor.
WINDOWS
On the Java tab, add the option as a new line in the Java Options field.
UNIX/LINUX
Change the value of the CATALINA_OPTS variable according to this example:
CATALINA_OPTS="-Xmx2048m -Djava.awt.headless=true
-Dorg.apache.chemistry.opencmis.stacktrace.enable=true
$CATALINA_OPTS"
3. For the changes to take effect, restart the Apache Tomcat service.
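On UNIX/Linux, step 2 can be automated with an idempotent append to setenv.sh, so repeated runs do not duplicate the option. This is a sketch: the function name is illustrative and the file path must match your Tomcat layout.

```shell
#!/bin/sh
# Append the OpenCMIS stacktrace option to a setenv.sh file, but only if it
# is not already present (so repeated runs do not duplicate it).
OPT='-Dorg.apache.chemistry.opencmis.stacktrace.enable=true'

enable_cmis_stacktraces() {
  # $1 = path to <Tomcat home>/bin/setenv.sh
  grep -qF "$OPT" "$1" 2>/dev/null || \
    printf 'CATALINA_OPTS="%s $CATALINA_OPTS"\n' "$OPT" >> "$1"
}

# Example: enable_cmis_stacktraces /opt/tomcat/bin/setenv.sh
# (then restart the Apache Tomcat service, as in step 3)
```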
This is a reference of all parameters (also called variables) that are relevant for the
administration of
• Archive Center
• Document Pipeline
• File Share (Application Layer scenarios)
• Archive Monitoring Server
• Archive Cache Server
Notes
• Parameters that are listed by the Administration Client but are not described in this reference are provided for service purposes only and should not be modified.
• The configuration parameter documentation uses a modular approach. Therefore, the documented components do not appear in alphabetical order as in the dialogs, but are grouped by functional aspects.
For the individual components and building blocks described in this documentation
module, the reference lists all relevant configuration parameters and usually
provides the following information for each of them:
• Storage location: The file where the parameter is stored. This information is for your reference; note that you should preferably access the configuration parameters via Administration Client to ensure that your settings are consistent. See “Configuration files” on page 338 for details.
• Variable name: The name of the parameter
• Description: The meaning of the parameter
• Type: Data type of the variable, often with upper and/or lower limit
• Predefined value
• Allowed value: Lists all allowed values, if there is a specific set of allowed values.
Note that an allowed value range can also be specified by the limits noted with
the Type information (see above).
• Protection status: Some variables are read-only in Administration Client, i.e. they
can be displayed but cannot be changed. This is specified by the protection
status. If no protection status is specified for a variable, it can be read and written
by the Server Configuration.
Therefore, the references for the related parameters specify this storage location at the
beginning of each page. It applies to all parameters listed on that page.
The path specified in the storage location refers to the following variables:
ECM_ARCHIVE_SERVER_CONF
Installation folder of Archive Center; the folder used on your system is listed in
the file %ProgramData%\OpenText\conf_dirs\10AS.conf (Windows) or /etc/
opentext/conf_dirs/10AS.conf (UNIX/Linux).
ECM_DOCUMENT_PIPELINE_CONF
Installation folder of Document Pipelines; the folder used on your system is
listed in the file %ProgramData%\OpenText\conf_dirs\20DP.conf (Windows)
or /etc/opentext/conf_dirs/20DP.conf (UNIX/Linux).
ECM_MONITOR_SERVER_CONF
Installation folder of Monitor Server; the folder used on your system is listed in
the file %ProgramData%\OpenText\conf_dirs\80MONS.conf (Windows) or /
etc/opentext/conf_dirs/80MONS.conf (UNIX/Linux).
ECM_CACHE_SERVER_CONF
Installation folder of Cache Server; the folder used on your system is listed in the
file %ProgramData%\OpenText\conf_dirs\30AS.conf (Windows) or /etc/
opentext/conf_dirs/30AS.conf (UNIX/Linux).
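On UNIX/Linux, these variables can be resolved in a script by reading the conf_dirs files listed above. A sketch, assuming each conf_dirs file contains just the folder path:

```shell
#!/bin/sh
# Resolve a configuration folder from its conf_dirs file (UNIX/Linux paths
# as documented above; on Windows the files live under %ProgramData%).
conf_dir() {
  # $1 = conf_dirs file, e.g. /etc/opentext/conf_dirs/10AS.conf
  cat "$1" 2>/dev/null
}

# Example:
# ECM_ARCHIVE_SERVER_CONF=$(conf_dir /etc/opentext/conf_dirs/10AS.conf)
```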
37.2 Priorities
Although some parameters can be defined in more than one place, the parameter
with the highest priority will have precedence over the same parameter with a lower
priority. The priorities are listed here.
The listed default values are values included in the program codes. Some of these
values are not set in the setup files during the installation process. The default values
will be used if configuration parameters are missing or have no value in the registry
or in the setup files.
Installation type
• Read-only variable
• Variable name: INSTALL_TYPE
• Description: Type of installation/configuration: INSTALL or UPGRADE
Installed version
• Read-only variable
• Variable name: INSTALL_VERS
• Description: Version as found in the version.txt file of the corresponding
package.
LOG_SECU are always set and cannot be disabled. In the case of LOG_FATAL
and LOG_ERROR, a message is output to the system logger (i.e. to syslog in
UNIX/Linux or to the event logger in Windows) in addition to that entered in
the log file, provided the USE_EVENT_LOG parameter is not set to 0. The levels
LOG_WARNING to LOG_ENTRY are normally not set, although there is no harm
in setting LOG_WARNING and LOG_RESULT. A log level can be switched on
using LOG_WARNING=on.
Archive Server
• http ("http")
• https ("https")
38.1.1.1 SYS_CLEANUP_PROTOCOL
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
38.1.1.2 Local_backup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
38.1.1.3 Delete_Empty_Volumes
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
38.1.1.4 SYS_EXPIRE_ALERTS
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
38.1.1.5 SYS_CLEANUP_ADMAUDIT
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
38.1.2 Buffers
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
38.1.3 Archives
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
38.1.3.1 Security
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
38.1.3.2 Settings
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
• @AD=y ("on")
• @AD=n ("off")
Blobs (ADMS_ARCH_BLOBS)
• @B=y ("on")
• @B=n ("off")
Compression (ADMS_ARCH_CMP)
• Variable name: AS.ADMS.ADMS_ARCH_CMP
• Description:
This variable specifies the default value for the archive property “Compression” assigned to newly created archives. This setting can be changed later in the archive properties.
• Type: Enum
• Predefined Value: @Cmp=y
• Allowed Value:
• @Cmp=y ("on")
• @Cmp=n ("off")
Encryption (ADMS_ARCH_ENC)
• Variable name: AS.ADMS.ADMS_ARCH_ENC
• Description:
This variable specifies the default value for the archive property “Encryption” assigned to newly created archives. This setting can be changed later in the archive properties.
• Type: Enum
• Predefined Value: @E=n
• Allowed Value:
• @E=y ("on")
• @E=n ("off")
Hold (ADMS_ARCH_HOLD)
• Variable name: AS.ADMS.ADMS_ARCH_HOLD
• Description:
This variable specifies the default value for the archive property “Hold”
assigned to newly created archives.
• Type: Enum
• Predefined Value: @HLD=n
• Allowed Value:
• @HLD=y ("on")
• @HLD=n ("off")
• @SI=n ("off")
• @TSV=s ("Strict")
• @TSV=r ("Relaxed")
• @TSV=n ("No verification")
38.1.3.3 Retention
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
This variable specifies the default “retention mode” for newly created archives. This setting can be changed later in the archive properties.
• Type: Enum
• Predefined Value: @Mode=ncmpl
• Allowed Value:
• @Mode=cmpl ("Compliance")
• @Mode=ncmpl ("Noncompliance")
• @Mode= ("None")
38.1.4 Pools
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
• on
• off
• on
• off
• on
• off
• http ("http")
• https ("https")
38.1.6 Certificates
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
38.1.7 Notifications
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
This value must be equal to the port of the application server which runs on
the same machine.
• Type: Integer (min: 0, max: 65535)
• Predefined Value: 8080
38.1.9 Directories
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
• Description:
The column separator for accounting files.
• Type: String
• Predefined Value: TAB
Max. size of a HDSK volume for which full backups are started
(HDSK_MAX_FULL_BKUP_SIZE)
• 0
• 1
The number of megabytes a cache volume must keep free. Must not be less than 16. Can be increased if the operating system has problems when the volume becomes too full.
• Type: Integer
• Predefined Value: 16
38.2.5.1 Compression
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
38.2.5.2 Blobs
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
38.2.5.3 Encryption
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
• on
• off
• 128 ("128")
• 192 ("192")
• 256 ("256")
The DS handles the (old) SIGIA4 timestamp format and the (new) IETF (RFC
3161) format. The "+" means send HTTP header, "-" means no HTTP header.
• Type: Enum
• Predefined Value: SHA256
• Allowed Value:
• off
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\TstpHttp.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/
TstpHttp.Setup
The filename of the file containing the private key used to sign timestamp
requests, in the config/setup directory. See TS_SIGN.
• Type: Path
• on
• off
• Description:
The name of the pool into which timestamps are archived
• Type: String
• Predefined Value: ATS_POOL
• off
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\SiaType.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/
SiaType.Setup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\SiaName.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/
SiaName.Setup
List of component names that are NOT stored using SIA (SIA_NAMES)
• Variable name: AS.SiaName.SIA_NAMES
• Type: Structure, consisting of subvariables - see below for details
• Sub variables:
Component name
• Variable name: compname
• Type: String
• Predefined Value: compname=REFERENCES
• Predefined Value: compname=REFERENCES2
• Predefined Value: compname=REFERENCES3
• Predefined Value: compname=INFO.TXT
• Predefined Value: compname=DATA.XML
• Predefined Value: compname=META_DOCUMENT
• Predefined Value: compname=META_DOCUMENT_INDEX
38.2.6 Directories
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.Setup
Intended size of ISO image for HSM systems, e.g. EMC (EMC_SIZE)
Min. size of blank volumes (MB, used e.g. for EMC) (CDMINBLANK)
This specifies the minimum size of the image to be copied by dsCD to an ISO
image, in percent of the capacity of the medium. If the available data is less
than this size, dsCD terminates with exit code 2, without burning the image.
• Type: Integer
• Predefined Value: 70
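As a sketch, the rule above amounts to a simple percentage check. The function and argument names below are illustrative, not part of dsCD itself:

```python
def check_burn_threshold(available_mb, medium_capacity_mb, min_percent=70):
    """Sketch of dsCD's minimum-image-size rule: the ISO image is only
    burned when the available data reaches min_percent of the medium's
    capacity; otherwise dsCD terminates with exit code 2."""
    if available_mb < medium_capacity_mb * min_percent / 100:
        return 2  # not enough data: exit code 2, image not burned
    return 0      # enough data: proceed with burning
```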
• off
• on
• on
• off
• on
• off
Temp directory for compressed files, used by dsWorm, dsHdsk and dsGs (COMPR_DIR)
Time (secs) after which NFS write requests time out (NFS_WRTIMO)
38.2.10 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.Setup
If 0, users are never disabled. Otherwise, users are disabled if they fail to
log on more than DS_MAX_BAD_PASSWD times within
DS_BAD_PASSWD_ELAPS seconds.
• Type: Integer
• Predefined Value: 0
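The lockout rule above can be illustrated with a small sketch. The function, its defaults, and the list-of-timestamps representation are assumptions for illustration only, not the server's implementation:

```python
import time

def is_locked_out(failed_attempts, max_bad=3, elapse=300, now=None):
    """Return True if the user exceeded max_bad failed logins within
    the last `elapse` seconds. max_bad == 0 disables lockout entirely,
    mirroring DS_MAX_BAD_PASSWD = 0 ("users are never disabled")."""
    if max_bad == 0:
        return False
    now = time.time() if now is None else now
    # keep only failures inside the DS_BAD_PASSWD_ELAPS window
    recent = [t for t in failed_attempts if now - t <= elapse]
    return len(recent) > max_bad
```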
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.Setup
• Type: String
• Predefined Value: CDROM,localhost,0,/views_hs
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.Setup
• 2: NO_DSADMIN
• 4: NO_HTTP
• 8: NO_RPC
• 16: NO_DELETE
• 32: NO_DELETE_NO_ERROR
• Type: Integer (min: 0, max: 255)
• Predefined Value: 8
• Type: Enum
• Allowed Value:
• Oracle ("Oracle")
• MS SQL Server ("MS SQL Server")
• SAP HANA ("SAP HANA")
• PostgreSQL ("PostgreSQL")
• Predefined Value: 5
• Description:
Time in seconds; a message that occurs several times is not mapped to a
notification again until it has not occurred for <DEFAULT_NOTS_REOCC>
seconds. If the class of the message is SRV, the identification key for this
message is class-comp-msgno-hostname-msgtext; otherwise, the
identification key is class-comp-msgno.
• Type: Integer (min: 0)
• Predefined Value: 30
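A minimal sketch of this suppression logic follows. The function names and the dictionary used for bookkeeping are hypothetical; the actual Notification Server implementation is internal:

```python
def notification_key(msg_class, comp, msgno, hostname, msgtext):
    """Build the identification key used to suppress re-occurring
    messages: SRV-class messages also include hostname and text."""
    if msg_class == "SRV":
        return f"{msg_class}-{comp}-{msgno}-{hostname}-{msgtext}"
    return f"{msg_class}-{comp}-{msgno}"

def should_notify(last_seen, key, now, reocc=30):
    """Map the message to a notification only if it has not occurred
    for at least `reocc` seconds (DEFAULT_NOTS_REOCC)."""
    prev = last_seen.get(key)
    last_seen[key] = now
    return prev is None or now - prev >= reocc
```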
Background: During startup, ADMS takes some time before it is ready to
answer connection requests from the Notification Server (notifSrvr).
• Type: Integer (min: 0)
• Predefined Value: 200
Note: This parameter is effective only if the trace level for the
component scsi is at least 1!
• Type: Enum
• Predefined Value: 0
• Allowed Value:
• 0 ("0: no logging")
• 1 ("1: logging SCSI errors")
• 2 ("2: tracing of nearly all SCSI commands")
Number of saved logging file versions after jbd restart (includes logfile of current jbd run) (logfile.cyclicNo)
• Variable name: AS.STORM.logfile.cyclicNo
• Description:
Specifies the number of logfiles to create. For every start of JBD a new
logfile is created; old logfiles are renamed from *.log to .000, .001, and so on.
This number applies to the tracefile, the lastwords file, and the error file.
The minimum value is 0, which means no cyclic logfiles: if a logfile already
exists, messages are appended; otherwise the file is created in the usual way.
(If the value is set below 0, then 1 is taken.) A value of 1 creates exactly one
logfile (.log), which is erased and recreated on every start of JBD.
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
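Assuming the renaming semantics described above, the rotation decision can be sketched as a pure function. The function name and the list-of-filenames model are illustrative only:

```python
def rotation_plan(existing, cyclic_no, log_name="jbd.log"):
    """Return (renames, truncate) for one JBD start, per the assumed
    logfile.cyclicNo semantics:
      cyclic_no <= 0 -> append to the existing .log, no cycling;
      cyclic_no == 1 -> one .log, erased and recreated on every start;
      cyclic_no  > 1 -> shift the old .log to the next free .000/.001/..."""
    base = log_name[:-4]  # strip the ".log" suffix
    if cyclic_no <= 0:
        return [], False
    if cyclic_no == 1:
        return [], True
    renames = []
    if log_name in existing:
        n = 0
        while f"{base}.{n:03d}" in existing:  # find next free suffix
            n += 1
        renames.append((log_name, f"{base}.{n:03d}"))
    return renames, False
```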
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
[ID]
• Variable name: ID
• Type: String
Trace level
• Variable name: trace
• Type: Integer (min: 0, max: 4)
• Predefined Value: 1
• Predefined Value:
• ID=ixwuser
• trace=1
• lwords=4
• Predefined Value:
• ID=ixwmedia
• trace=1
• lwords=4
• Predefined Value:
• ID=ixwinout
• trace=1
• lwords=4
• Predefined Value:
• ID=ixwcache
• trace=1
• lwords=4
• Predefined Value:
• ID=cache
• trace=1
• lwords=4
• Predefined Value:
• ID=glow
• trace=1
• lwords=4
• Predefined Value:
• ID=io
• trace=1
• lwords=4
• Predefined Value:
• ID=hal
• trace=1
• lwords=4
• Predefined Value:
• ID=serial
• trace=1
• lwords=4
• Predefined Value:
• ID=scsi
• trace=1
• lwords=4
• Predefined Value:
• ID=doscsi
• trace=1
• lwords=4
• Predefined Value:
• ID=journal
• trace=1
• lwords=4
• Predefined Value:
• ID=voldb
• trace=1
• lwords=4
• Predefined Value:
• ID=file
• trace=1
• lwords=4
• Predefined Value:
• ID=config
• trace=1
• lwords=4
• Predefined Value:
• ID=utils
• trace=1
• lwords=4
• Predefined Value:
• ID=backup
• trace=1
• lwords=4
• Predefined Value:
• ID=rfs
• trace=1
• lwords=4
• Predefined Value:
• ID=jbd
• trace=1
• lwords=4
• Predefined Value:
• ID=devctl
• trace=1
• lwords=4
• Predefined Value:
• ID=fsifs
• trace=1
• lwords=4
• Predefined Value:
• ID=dyn
• trace=1
• lwords=4
• Predefined Value:
• ID=nots
• trace=0
• lwords=4
• Predefined Value:
• ID=watch
• trace=0
• lwords=4
• 0
• 1 ("Send to NOTS disabled")
• Predefined Value: 0
• Allowed Value:
• 0 ("No preserve")
• 1 ("Preserve")
Note: All sub variables of the device variable with a name starting with devfile_*
are not stored at the storage location specified above, but in a file according to the
following pattern:
%IXOS_ARCHIVE_ROOT%\config\storm\devices\<device>.dev (Windows) or
$IXOS_ARCHIVE_ROOT/config/storm/devices/<device>.dev (UNIX)
where <device> means the string specified in the device name sub variable.
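A small helper sketch that builds the .dev path from the pattern above; the function name is hypothetical:

```python
def device_file_path(device, root, windows=False):
    """Path of the .dev file holding the devfile_* sub variables,
    following the documented pattern under IXOS_ARCHIVE_ROOT."""
    parts = [root, "config", "storm", "devices", device + ".dev"]
    sep = "\\" if windows else "/"
    return sep.join(parts)
```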
Devices (devices.list)
• Variable name: AS.STORM.devices.list
• Type: Structure, consisting of subvariables - see below for details
• Sub variables:
ID
• Variable name: ID
• Description:
ID
• Type: String
ID of (cluster) node
• Variable name: ID
• Type: String
Alternate Path
• automatic
• manual
Comment
Comment Text
Device Name
Robot
Drives
Format
• Variable name: format
Raw access
• Variable name: raw
• Protection: Read-only variable
• Type: Enum
• Allowed Value:
• 0
• 1 ("enabled")
[HOST]
• Variable name: HOST
• Type: String
• Predefined Value: HOST=localhost
• Predefined Value: HOST=<empty>
• Predefined Value: HOST=<empty>
ID
• Variable name: ID
• Description:
ID
• Type: String
Limit of space usage (in MB) for backup (space must be free in defined
path)
• Variable name: size
• Description:
The maximum allowed size of the file in megabytes (MB).
• Type: Integer (min: 100, max: 100000)
• Predefined Value: 1024
• Predefined Value:
• ID=dest1
• path=<empty>
• size=1024
Accept also non-ISO9660 formats (e.g. more than 64 KB directories) (ixworm.isoFinNonStandard)
• 0
• 1 ("enabled")
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
Accept also non-ISO9660 formats (e.g. more than 64 KB directories) (ixworm.isoFinNonStandard)
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_NAME_PATH/hashname
• size=35
• mode=mapped
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_FILE_PATH/hashfile
• size=35
• mode=mapped
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Accept also non-ISO9660 formats (e.g. more than 64 KB directories) (ixworm.isoFinNonStandard)
• Description:
As the WORM file system fills up, more and more rehashes are needed for a
new entry. This value sets the limit after which a warning is generated. The
log level of the warning message is MAX(9 + (value of this parameter −
<number of rehashes>), 0), but the warning is only issued if the number of
rehashes is greater than rehashWarning.
• Type: Integer (min: 1, max: 100)
• Predefined Value: 40
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
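The warning formula can be expressed directly; this sketch simply restates the documented MAX(...) rule, with an illustrative function name:

```python
def rehash_log_level(rehash_warning, rehashes):
    """Log level of the rehash warning per the documented formula
    MAX(9 + (rehash_warning - rehashes), 0). The warning is issued
    only when the number of rehashes exceeds rehashWarning."""
    if rehashes <= rehash_warning:
        return None  # below the limit: no warning at all
    return max(9 + (rehash_warning - rehashes), 0)
```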
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_NAME_PATH/hashname
• size=200
• mode=mapped
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_FILE_PATH/hashfile
• size=200
• mode=mapped
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_INODE_PATH/inodes1
• size=600
• mode=file
• Predefined Value:
• ID=chunk2
• path=REPLACE_WITH_INODE_PATH/inodes2
• size=600
• mode=file
• Predefined Value:
• ID=chunk3
• path=REPLACE_WITH_INODE_PATH/inodes3
• size=600
• mode=file
• Predefined Value:
• ID=chunk4
• path=REPLACE_WITH_INODE_PATH/inodes4
• size=600
• mode=file
• Predefined Value:
• ID=chunk5
• path=REPLACE_WITH_INODE_PATH/inodes5
• size=600
• mode=file
Accept also non-ISO9660 formats (e.g. more than 64 KB directories) (ixworm.isoFinNonStandard)
• Description:
Directory path for the temporary files stored on disk while the files are
written by the clients. The path should never point to a network-attached
disk. Note: There must be enough space to hold, in the worst case,
"maxOpenDatafiles" × <max. file size of one WORM file>.
• Type: Path
• Predefined Value: <empty>
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
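The worst-case space requirement stated in the note above is a simple product; the function name below is illustrative:

```python
def required_temp_space_mb(max_open_datafiles, max_worm_file_size_mb):
    """Worst-case disk space needed in the temp directory: every
    concurrently open data file may grow to the maximum WORM
    file size before being flushed to the WORM file system."""
    return max_open_datafiles * max_worm_file_size_mb
```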
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• mode=file
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_FILE_PATH/hashfile
• size=700
• mode=file
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_INODE_PATH/inodes1
• size=800
• mode=file
• Predefined Value:
• ID=chunk2
• path=REPLACE_WITH_INODE_PATH/inodes2
• size=800
• mode=file
• Predefined Value:
• ID=chunk3
• path=REPLACE_WITH_INODE_PATH/inodes3
• size=800
• mode=file
• Predefined Value:
• ID=chunk4
• path=REPLACE_WITH_INODE_PATH/inodes4
• size=800
• mode=file
• Predefined Value:
• ID=chunk5
• path=REPLACE_WITH_INODE_PATH/inodes5
• size=800
• mode=file
• Predefined Value:
• ID=chunk6
• path=REPLACE_WITH_INODE_PATH/inodes6
• size=800
• mode=file
• Predefined Value:
• ID=chunk7
• path=REPLACE_WITH_INODE_PATH/inodes7
• size=800
• mode=file
• Predefined Value:
• ID=chunk8
• path=REPLACE_WITH_INODE_PATH/inodes8
• size=800
• mode=file
• Predefined Value:
• ID=chunk9
• path=REPLACE_WITH_INODE_PATH/inodes9
• size=800
• mode=file
• Predefined Value:
• ID=chunk10
• path=REPLACE_WITH_INODE_PATH/inodes10
• size=800
• mode=file
• Predefined Value:
• ID=chunk11
• path=REPLACE_WITH_INODE_PATH/inodes11
• size=800
• mode=file
• Predefined Value:
• ID=chunk12
• path=REPLACE_WITH_INODE_PATH/inodes12
• size=800
• mode=file
• Predefined Value:
• ID=chunk13
• path=REPLACE_WITH_INODE_PATH/inodes13
• size=800
• mode=file
• Allowed Value:
• none ("same as in TS request")
• MD5 ("MD5")
• SHA1 ("SHA1")
• RMD160 ("RipeMD160")
• SHA256 ("SHA256")
• SHA512 ("SHA512")
• Description:
Because the internal clock of a computer has limited precision, this setting
lets you define a timeout period in hours after which the service refuses to
timestamp incoming requests. The timeout counter is reset every time you
transmit the signing key. A timeout setting of "0" disables this feature and
leaves the service running indefinitely.
• Type: Integer (min: 0)
• Predefined Value: 168
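A sketch of this timeout check, assuming UTC timestamps; the function and argument names are illustrative, not the service's actual API:

```python
import datetime

def service_expired(last_key_transmission, timeout_hours=168, now=None):
    """The timestamp service refuses new requests once timeout_hours
    have passed since the signing key was last transmitted;
    a value of 0 disables the timeout entirely."""
    if timeout_hours == 0:
        return False
    if now is None:
        now = datetime.datetime.now(datetime.timezone.utc)
    return now - last_key_transmission > datetime.timedelta(hours=timeout_hours)
```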
• on
• off
• Description:
Allow usage of a component's ArchiSig timestamp's hash value for checksum
verification if it is a SHA-1 digest.
• Type: Flag
• Predefined Value: on
• Allowed Value:
• on
• off
• on
• off
• Description:
If true, use the “offline” state instead of “onTape” if a request accesses a
component that is stored on tape. This is for “old” only; the default is “false”.
“Internal” setting only, do not modify unless otherwise instructed!
• Type: Flag
• Predefined Value: false
• Allowed Value:
• true
• false
Time interval during which a pending update request may be delayed further
(JDS_ADM_REFRESH_MAXIMUM_DELAY)
• Variable name: AS.AS.JDS_ADM_REFRESH_MAXIMUM_DELAY
• Protection: Read-only variable
• Description:
After “JDS_ADM_REFRESH_MAXIMUM_DELAY” seconds a pending
update request will not be delayed further by successive update requests.
• Type: Integer (min: 0, max: 10)
• Predefined Value: 8
• true
• false
• true
• false
• on
• off
• Description:
Case-sensitivity flag for original email message names (default=on).
“Internal” setting only, do not modify unless otherwise instructed!
• Type: String
• on
• off
• Type: String
• Predefined Value: com.opentext.ecm.lea.filter.email.composer.EmailComposerFilter
• Description:
Attachments in EML files smaller than this value will not be extracted or
decomposed.
• Type: Integer
• Predefined Value: 1024
38.9.4.1 Database
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
38.9.4.2 Command
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
38.9.4.3 Audit
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
• true
• false
38.9.4.4 OTDSconnection
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
Name of the OTDS query field: domain name (user or group) (OTDS_QUERY_NAME)
• Variable name: AS.AS.OTDS_QUERY_NAME
• Protection: Read-only variable
• Description:
This setting specifies the name of the field containing the Active Directory
attribute for the domain name, user or group.
• Type: String
• Predefined Value: oTExternalID4
This setting specifies the name of the field containing the Active Directory
attribute 'objectSid'.
• Type: String
• Predefined Value: oTExternalSID
• lazy ("lazy")
• strict ("strict")
38.9.4.5 AllowedUsers
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
admuser1 (ADS_AllowedUsers_admuser1)
• Variable name: AS.AS.ADS_AllowedUsers_admuser1
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
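The “<UMS>/<USERGROUP>/<NAME>” pattern above can be validated with a small sketch; the regular expression encodes the documented rule, and the function name is hypothetical:

```python
import re

# <UMS>/<USERGROUP>/<NAME>: UMS is OTDS or DS, USERGROUP is USER or GROUP
ALLOWED_USERS_RE = re.compile(r"^(OTDS|DS)/(USER|GROUP)/(.+)$")

def parse_allowed_user(value):
    """Parse an ADS_AllowedUsers_* value; return the (ums, kind, name)
    tuple on success, or None if the value does not match the pattern."""
    m = ALLOWED_USERS_RE.match(value)
    return m.groups() if m else None
```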
admuser2 (ADS_AllowedUsers_admuser2)
• Variable name: AS.AS.ADS_AllowedUsers_admuser2
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
admuser3 (ADS_AllowedUsers_admuser3)
• Variable name: AS.AS.ADS_AllowedUsers_admuser3
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
admuser4 (ADS_AllowedUsers_admuser4)
• Variable name: AS.AS.ADS_AllowedUsers_admuser4
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
admuser5 (ADS_AllowedUsers_admuser5)
• Variable name: AS.AS.ADS_AllowedUsers_admuser5
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
aradmins (ADS_AllowedUsers_aradmins)
• Variable name: AS.AS.ADS_AllowedUsers_aradmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: DS/GROUP/aradmins
dsadmin (ADS_AllowedUsers_dsadmin)
• Variable name: AS.AS.ADS_AllowedUsers_dsadmin
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
ldadmins (ADS_AllowedUsers_ldadmins)
• Variable name: AS.AS.ADS_AllowedUsers_ldadmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: DS/GROUP/ldadmins
ldagents (ADS_AllowedUsers_ldagents)
• Variable name: AS.AS.ADS_AllowedUsers_ldagents
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: DS/GROUP/ldagents
otadsadmins (ADS_AllowedUsers_otadsadmins)
• Variable name: AS.AS.ADS_AllowedUsers_otadsadmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: OTDS/GROUP/otadsadmins@otds.admin
otasadmins (ADS_AllowedUsers_otasadmins)
• Variable name: AS.AS.ADS_AllowedUsers_otasadmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: OTDS/GROUP/otasadmins@otds.admin
otldadmins (ADS_AllowedUsers_otldadmins)
• Variable name: AS.AS.ADS_AllowedUsers_otldadmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: OTDS/GROUP/otldadmins@otds.admin
otldagents (ADS_AllowedUsers_otldagents)
• Variable name: AS.AS.ADS_AllowedUsers_otldagents
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: OTDS/GROUP/otldagents@otds.admin
38.9.4.6 Policy
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
38.9.4.6.1 Assignments
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
38.9.4.7 Reports
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
38.9.4.8 SolutionRegistry
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
• true
• false
This setting specifies the directory where all elastic index and config data is
stored. If this variable is empty, the default directory is $ECM_VAR_DIR/es.
• Type: Path
• Predefined Value: <empty>
38.9.7 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
Administration (LOG_ADMIN)
• Variable name: AS.AS.LOG_ADMIN
• Description:
This setting specifies the log level for the “Administration” category. See also
key “LOG_ADMIN_GROUP”. There are 4 distinct settings; each adds
additional logging from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.Setup
Administration (LOG_ADMIN_GROUP)
• Variable name: AS.AS.LOG_ADMIN_GROUP
• Protection: Read-only variable
• Description:
This setting specifies a dedicated log category for “Administration” by
listing all related Java packages. “Internal” setting only, do not modify
unless otherwise instructed!
• Type: String
• Predefined Value: com.opentext.ecm.admin, com.opentext.ecm.api,
com.opentext.ecm.archiveadmin, com.opentext.ecm.container,
com.opentext.ecm.exceptions, com.opentext.ecm.rcs.script,
com.opentext.ecm.services.administration, com.opentext.ecm.services.adminroot,
com.opentext.ecm.services.archiveadmin, com.opentext.ecm.services.asm,
com.opentext.ecm.services.leaauthentication, com.opentext.ecm.services.leaauthorization,
com.opentext.ecm.services.leanotifications, com.opentext.ecm.services.leausergroup,
com.opentext.ecm.api.srws.impl, com.opentext.ecm.asm.webclient,
com.opentext.ecm.asm.bizadmin, com.opentext.ecm.asm.bizconfig,
com.opentext.ecm.asm.bizutil, com.opentext.ecm.services.tenantmgmt,
com.opentext.ecm.as.cmis.client
This setting specifies a dedicated log category for the “Notification Client”
by listing all related Java packages. “Internal” setting only, do not modify
unless otherwise instructed!
• Type: String
• Predefined Value: ixos.w3nots,ixos.notifx
• true ("enable")
• false ("disable")
OTDS attribute for history of CMIS repositories of user's personal email archives
(CMIS_REPOSITORY_HISTORY_OTDS_ATTR)
• Variable name: AS.AS.CMIS_REPOSITORY_HISTORY_OTDS_ATTR
• Description:
OTDS attribute to store the history of CMIS repository IDs of the user's
personal email archives.
• Type: String
• Predefined Value: oTACEmailCmisRepositoryIdHistory
• Description:
When looking up the OTDS user from an email address: look only for users
who have access to this OTDS resource (restrict), or look for any OTDS user
(do not restrict)?
• Type: Flag
• Predefined Value: true
• Allowed Value:
• true ("restrict")
• false ("don't restrict")
• on
• off
<num> with num=1,2,3,... The host identifier should not be longer than 5
characters. Allowed characters are [A-Za-z0-9].
• Type: String
• Predefined Value: <empty>
Time after which an export expires and can be deleted by cleanup task
(BIZ_EXPORT_EXPIRATION)
• Variable name: AS.AS.BIZ_EXPORT_EXPIRATION
• Description:
This setting specifies the time after which an export expires and can be
deleted by the cleanup task.
• Type: Integer (min: 60, max: 10080)
• Predefined Value: 1440
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: Day
• Allowed Value:
• Day ("Day")
• Month ("Month")
• Year ("Year")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
See also “DS_DEFAULT_CONTENTTYPE” and
“DS_REPLACE_ILLEGAL_CONTENTTYPE”. Setting this key in an “Archive
Server” scenario has NO effect. “Internal” setting only, do not modify
unless otherwise instructed!
• Type: Flag
• Allowed Value:
• true
• false
• Type: String
• Type: Integer
Number of check phases before a full update is executed. (frontend server only)
(REINIT_AFTER_N_PERIODS)
• For Archive Server scenario this port is not evaluated and should point to
the local port.
• For Archive Cache Server scenario this port should point to the port of
the backend Archive Server.
This setting specifies the SSL port number of the backend server, depending
on the scenario. It is set during installation.
• For Archive Server scenario this port is not evaluated and should point to
the local SSL port.
• For Archive Cache Server scenario this port should point to the SSL port
of the backend Archive Server.
Time period after which an offline backend server will be probed again
(ARCHIVEOFFLINETIME)
• Variable name: AS.ICS.ARCHIVEOFFLINETIME
• Protection: Read-only variable
• Description:
If any backend server is no longer available, this setting specifies the time
period [ms] before it is probed again. Setting this key in an “Archive Server”
scenario has NO effect. “Internal” setting only, do not modify unless otherwise
instructed!
• Type: Integer
Time that passes before the reinitializing thread checks for any administrative
changes (frontend server only) (LAST_REINIT_PERIOD)
• Variable name: AS.ICS.LAST_REINIT_PERIOD
• Protection: Read-only variable
• Description:
This setting specifies the time period [ms] that passes before a background
thread checks for any administrative changes on the backend server. Setting
this key in an “Archive Server” scenario has NO effect. “Internal” setting
only, do not modify unless otherwise instructed!
• Type: Integer
• Predefined Value: 10000
• Allowed Value:
• true
• false
• Throw exception
• Prefer AL semantic
• Prefer HTTP semantic
• Type: Enum
• Predefined Value: HTTP
• Allowed Value:
• true
• false
• false
• true
• false
38.12.4 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ICS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ICS.Setup
Authentication (LOG_AUTH)
• Variable name: AS.ICS.LOG_AUTH
• Protection: Read-only variable
• Description:
This setting specifies the log level for the “Authentication” category. See
also key: “LOG_AUTH_GROUP”. There are 4 distinct settings; each level adds
additional logging, from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
Legacy (LOG_LEGACY)
• Variable name: AS.ICS.LOG_LEGACY
• Description:
This setting specifies the log level for the “Legacy Code” category. See also
key: “LOG_LEGACY_GROUP”. There are 4 distinct settings; each level adds
additional logging, from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ICS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ICS.Setup
Authentication (LOG_AUTH_GROUP)
Communication port for the Archive Link interface of the Archive Server
(AL_PORT)
• Variable name: FSA.FSA.AL_PORT
• Description:
This setting specifies the communication port for the Archive Link interface
of the Archive Server. Only used in the Cost-Saving scenario (Create Shortcut
option in Business Administration).
• Type: Integer (min: 1, max: 65535)
• Predefined Value: 8090
Communication protocol for the Archive Link interface of the Archive Server
(AL_PROTOCOL)
• Variable name: FSA.FSA.AL_PROTOCOL
• Description:
This setting specifies the communication protocol for the Archive Link
interface of the Archive Server. Only used in the Cost-Saving scenario
(Create Shortcut option in Business Administration).
• Type: Enum
• Predefined Value: https
• Allowed Value:
• https ("HTTPS")
• http ("HTTP")
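Taken together, AL_PROTOCOL and AL_PORT determine the base address under which the ArchiveLink interface is reached. A small illustration using the predefined values; the host name is a hypothetical placeholder and the exact URL path is not documented here.

```python
al_protocol = "https"         # FSA.FSA.AL_PROTOCOL, predefined value
al_port = 8090                # FSA.FSA.AL_PORT, predefined value
host = "archive.example.com"  # hypothetical Archive Server host name

base_url = f"{al_protocol}://{host}:{al_port}"
print(base_url)  # https://archive.example.com:8090
```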
Expire time (in minutes) for entries in FSA user cache (LRU) (FSA_USER_LRU_TTL)
• Variable name: FSA.FSA.FSA_USER_LRU_TTL
• Protection: Read-only variable
• Description:
“Internal” setting only, do not modify unless otherwise instructed!
• Type: Integer (min: 1, max: 600)
• Predefined Value: 30
Name of the directory where the shortcut icons are stored (FSA_ICON_DIR)
• Variable name: FSA.FSA.FSA_ICON_DIR
• Protection: Read-only variable
• Description:
“Internal” setting only, do not modify unless otherwise instructed!
• Type: String
• Predefined Value: _otx_fsa_icons_
This setting specifies the amount of disk space [MB] per volume which the
underlying cache administration tries to keep free. “Internal” setting
only, do not modify unless otherwise instructed!
• Type: Integer (min: 0)
External server name (in environments with multiple host names) (MY_HOST_NAME)
• Variable name: AS.ACS.MY_HOST_NAME
• Description:
When working with different networks, domains, or hostnames, the external
server name can be specified by this setting. “Internal” setting only, do not
modify unless otherwise instructed!
• Type: String
Maximum number of worker threads to retrieve content (must not be smaller than
Minimum) (CS_MAX_WTHREADS)
• Variable name: AS.ACS.CS_MAX_WTHREADS
• Protection: Read-only variable
• Description:
This setting specifies the maximum number of worker threads the ACS
internally uses to retrieve content from the backend. “Internal” setting only,
do not modify unless otherwise instructed!
• Type: Integer (min: 1, max: 255)
• Description:
This setting specifies the second possible volume to be used for write-through
caching. The volume is specified as an absolute pathname.
Recommendation: use a separate path that is used only by the Archive Cache
Server, ideally located on a separate disk or partition.
• Type: Path
• Type: Enum
• Predefined Value: http
• Allowed Value:
• https ("https")
• http ("http")
Proxy ID (BIZ_PROXYID)
• Variable name: AS.ACS.BIZ_PROXYID
• Description:
This ID uniquely identifies the Proxy installation.
• Type: String
• Description:
This setting specifies the maximum size [MB] used for the corresponding
volume.
Recommendation: ensure that there is always sufficient space available for
this volume.
• Type: Integer (min: 20)
• true
• false
41.1.1 Scheduler
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ACS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ACS.Setup
41.1.2 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ACS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ACS.Setup
Administration (LOG_ADMIN)
• Variable name: AS.ACS.LOG_ADMIN
• Description:
This setting specifies the log level for the “Administration” category. See
also key: “LOG_ADMIN_GROUP”. There are 4 distinct settings; each level adds
additional logging, from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
Scheduler (LOG_SCHED)
• Variable name: AS.ACS.LOG_SCHED
• Description:
This setting specifies the log level for the “Scheduler” category. There are 4
distinct settings; each level adds additional logging, from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ACS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ACS.Setup
Administration (LOG_ADMIN_GROUP)
• Variable name: AS.ACS.LOG_ADMIN_GROUP
• Protection: Read-only variable
• Description:
This setting specifies a separate log category for “Administration” by
listing all related Java packages. “Internal” setting only, do not modify
unless otherwise instructed!
• Type: String
• Predefined Value: com.opentext.ecm.asm.bizadmin,com.opentext.ecm.asm.ca,
com.opentext.ecm.as.cmis.client,com.opentext.ecm.admin,com.opentext.ecm.persistence
This setting specifies a separate log category for the “CMIS Interface” by
listing all related java packages. “Internal” setting only, do not modify unless
instructed otherwise!
• Type: String
• Predefined Value: com.opentext.ecm.as.cmis.proxy,com.opentext.ecm.asm.cmis
Scheduler (LOG_SCHED_GROUP)
• Variable name: AS.ACS.LOG_SCHED_GROUP
• Protection: Read-only variable
• Description:
This setting specifies a separate log category for the “Scheduler” by listing
all related Java packages. “Internal” setting only, do not modify unless
otherwise instructed!
• Type: String
• Predefined Value: com.opentext.ecm.admin.schedule,com.opentext.ecm.scheduling
Time after which an export expires and can be deleted by the cleanup task
(BIZ_EXPORT_EXPIRATION)
This setting specifies the time after which an export expires and can be
deleted by the cleanup task.
• Type: Integer (min: 60, max: 10080)
• Predefined Value: 1440
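BIZ_EXPORT_EXPIRATION is given in minutes; the bounds and default above translate to more familiar units as the following quick check shows.

```python
# Bounds and default of BIZ_EXPORT_EXPIRATION, in minutes.
minimum, default, maximum = 60, 1440, 10080

print(minimum / 60)       # 1.0  -> exports live at least 1 hour
print(default / 60)       # 24.0 -> default expiry after 1 day
print(maximum / 60 / 24)  # 7.0  -> at most one week
```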
type is replaced by any set default content type. See also key:
“DS_DEFAULT_CONTENTTYPE,
DS_REPLACE_ILLEGAL_CONTENTTYPE” Setting this key in an “Archive
Server” scenario has NO effect. “Internal” setting only, do not modify unless
otherwise instructed!
• Type: Flag
• Allowed Value:
• true
• false
• false
This setting specifies the default value of the URL parameter “ixUser”, which
is used by internal requests. Setting this key in an “Archive Server” scenario
has NO effect.
• Type: String
• Description:
“Internal” setting only, do not modify unless otherwise instructed!
• Type: Integer
Number of check phases before a full update is executed. (frontend server only)
(REINIT_AFTER_N_PERIODS)
• Variable name: AS.ICS.REINIT_AFTER_N_PERIODS
• Protection: Read-only variable
• Description:
This setting indicates how many check phases must pass before a full
re-initialization is triggered. See also key: “LAST_REINIT_PERIOD”. Setting
this key in an “Archive Server” scenario has NO effect. “Internal” setting
only, do not modify unless otherwise instructed!
• Type: Integer (min: 2)
• Predefined Value: 30
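The two keys work together: LAST_REINIT_PERIOD is the pause between background checks, and REINIT_AFTER_N_PERIODS counts how many such checks pass before a full update. With the predefined values documented in this section, the resulting full-update interval can be computed directly (the key names are reused below only as plain variables).

```python
last_reinit_period_ms = 10_000   # LAST_REINIT_PERIOD, predefined value [ms]
reinit_after_n_periods = 30      # REINIT_AFTER_N_PERIODS, predefined value

# Interval between full re-initializations, in seconds.
full_update_interval_s = last_reinit_period_ms * reinit_after_n_periods / 1000
print(full_update_interval_s)  # 300.0, i.e. a full update every 5 minutes
```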
• Description:
This setting specifies the SSL port number of the backend server, depending
on the scenario. It is set during installation.
• For Archive Server scenario this port is not evaluated and should point to
the local SSL port.
• For Archive Cache Server scenario this port should point to the SSL port
of the backend Archive Server.
• true
• false
• true
• false
• true
• false
Time period after which an offline backend server will be probed again
(ARCHIVEOFFLINETIME)
Time to pass before the reinitializing thread checks for administrative changes
(frontend server only) (LAST_REINIT_PERIOD)
If true, a frontend server writes a warning to the log file in case it connects
to a backend server older than V9.6.1. Setting this key in an “Archive Server”
scenario has NO effect.
• Type: Flag
• Predefined Value: true
• Allowed Value:
• true
• false
• true
• false
• Allowed Value:
• true
• false
• false
• true
• false
41.2.4 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ICS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ICS.Setup
Authentication (LOG_AUTH)
• Variable name: AS.ICS.LOG_AUTH
• Protection: Read-only variable
• Description:
This setting specifies the log level for the “Authentication” category. See
also key: “LOG_AUTH_GROUP”. There are 4 distinct settings; each level adds
additional logging, from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
Legacy (LOG_LEGACY)
• Variable name: AS.ICS.LOG_LEGACY
• Description:
This setting specifies the log level for the “Legacy Code” category. See also
key: “LOG_LEGACY_GROUP”. There are 4 distinct settings; each level adds
additional logging, from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ICS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ICS.Setup
Authentication (LOG_AUTH_GROUP)
Document or folder objects can have an access control list, which controls access
to the object. An ACL is a list of access control entries (ACEs). An ACE grants one
or more permissions to a principal. A principal is a user, group, role, or
something similar.
ACL
See Access control list (ACL).
One AFP file usually contains multiple, similar business documents, which share
common resources. Hence, AFP is a very storage-efficient format.
AFP
See Advanced Function Presentation (AFP).
Annotation
Application Layer
Archive Box
If enabled for a File Share data source, all folders and documents below the
specified path are archived and replaced by a single folder shortcut.
This option is intended for documents, and optionally folders, that need to be
archived but are no longer in daily use. Thus, the required disk space on the file
server, including the total number of files, can be reduced. This is in contrast to
the shortcut scenario where every file is replaced by an individual link. (Extended
Archive Center feature)
Separate machine on which documents are stored temporarily. This reduces
the network traffic in the WAN.
Archive Center
See Application Layer.
Archive ID
Archive mode
Specifies the different scenarios for the scan client (such as late archiving with
barcode, preindexing).
Web-based administration tool for monitoring the state of the processes, storage
areas, OpenText Document Pipeline and database space of the Archive Center.
Archive Spawner
Service program that starts and terminates the processes of the archive system.
A timestamp provider signs documents by adding the time and signing the
cryptographic checksum of the document. To ensure evidence of documents, use
an external, qualified timestamp provider like Timeproof or AuthentiDate.
OpenText Archive Timestamp Server is a timestamp provider for demonstration
or testing.
See Also Time Stamping Authority (TSA).
ArchiveLink
See SAP ArchiveLink.
BLOB
When archiving BLOBs (binary large objects), small documents are gathered in a
meta document (the BLOB) in the disk buffer and are written to the storage
medium together to improve performance. If a document is stored in a BLOB, it
can be destroyed only when all documents of this BLOB are deleted.
BLOBs are not supported in single-file storage scenarios and should not be used
together with retention periods.
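The gathering of small documents into a meta document can be pictured as packing them into one container with an offset index. The sketch below uses invented names and is not the actual on-disk BLOB format; it only illustrates why individual documents stay retrievable while the BLOB can be destroyed only as a whole.

```python
import io

def pack_blob(documents):
    """Pack small documents into one blob; return (blob_bytes, index).

    index maps doc_id -> (offset, length), so each document can still be
    read individually from the shared container.
    """
    buf = io.BytesIO()
    index = {}
    for doc_id, data in documents.items():
        index[doc_id] = (buf.tell(), len(data))
        buf.write(data)
    return buf.getvalue(), index

def read_from_blob(blob, index, doc_id):
    """Retrieve one document from the packed blob via its index entry."""
    offset, length = index[doc_id]
    return blob[offset:offset + length]
```

Writing the packed container once, instead of many tiny files, is what improves write performance on the storage medium.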
Buffer
Burn buffer
A special burn buffer is required for ISO pools in addition to a disk buffer. The
burn buffer is required to physically write an ISO image. When the specified
amount of data has accumulated in the disk buffer, the data is prepared and
transferred to the burn buffer in the special format of an ISO image. From the
burn buffer, the image is transferred to the storage medium in a single,
continuous, uninterruptible process referred to as “burning” an ISO image. The
burn buffer is transparent for the administration.
Cache
CMIS
See Content Management Interoperability Services (CMIS).
Collection
Primary time standard by which the world regulates clocks and time. It is one of
several closely related successors to Greenwich Mean Time (GMT). For most
purposes, UTC is synonymous with GMT.
Data source
Specifies the origin and properties of the documents that are archived by a
collection. (Extended Archive Center feature)
Device
Short for a storage device in the Archive Center environment. A device is a
physical unit that contains at least storage media, but can also contain additional
Digital signature
Disk buffer
See Buffer.
DocID
See Document ID (DocID).
DocTools
Document ID (DocID)
DP
See Document Pipeline (DP).
DPDIR
The directory in which the documents are stored that are currently being
processed by a document pipeline.
DS
See Document Service (DS).
Enterprise Scan
Exchange directory
The directory which is used for exchange of data to be retrieved or archived. This
directory is dedicated to the exchange between the leading application, the
Document Pipeline, and Archive Center.
FS pool
The FS (file system interface) pool points to mounted hard disk volumes of an
HSM, NAS, or SAN system over the network. FS pools support single file storage.
They require a disk buffer.
GMT
Greenwich Mean Time; former global time standard. For most purposes, GMT is
synonymous with UTC.
See Also Coordinated Universal Time (UTC).
Governikus
See Also TR-Esor.
HDSK pool
In an HDSK (hard disk) pool, documents are stored directly to the storage, which
can be a local file system directory or a local SAN system. HDSK pools support
single file storage. It is the only pool type that works without a buffer. No WORM
functionality is available.
Hold
Logical archives can be put on hold, which means that their documents and
components cannot be changed or deleted. Adding further documents to the
archive is still possible.
Hot standby
ISO image
An ISO image is a container file containing documents and their file system
structure according to ISO 9660. It is written at once and fills one volume.
ISO pool
Job
Known server
A known server is an Archive Center whose archives and disk buffers are known
to another Archive Center. Making servers known to each other provides access
to all documents archived in all known servers. Read-write access is provided to
other known servers. Read-only access is provided to replicated archives. When a
request is made to view a document that is archived on another, known server,
the queried Archive Center can display the requested document.
Late Archiving
In the Late Archiving with Barcode scenario, paper documents are passed through
the office and are not archived until all document-related work has been
completed. If documents are archived in this way, indexing by barcode, patch
code, or another indexing method is used to join the documents to the
corresponding business entries in SAP. Documents are identified by a barcode or
patch code on their first page.
See Also Enterprise Scan.
Log file
Log level
Adjustable diagnostic level of detail on which the log files are generated.
Logical archive
Logical area on the Archive Center in which documents are stored. The Archive
Center can contain many logical archives. Each logical archive can be configured
to represent a different archiving strategy appropriate to the types of documents
archived exclusively there. An archive can consist of one or more pools. Each pool
is assigned its own exclusive set of volumes which make up the actual storage
capacity of that archive.
Media
Short for “long-term storage media” in the Archive Center environment. A
medium is a physical object: hard disks and hard disk storage systems with or
without WORM feature.
Obtains status information about archives, pools, hard disk and database space
on the Archive Center. MONS is the configuration parameter name for the
Monitor Server.
MONS
See Monitor Server (MONS).
MTA documents
Meta (MTA) documents, also known as document lists, are one comprehensive
file containing several individual documents of the same file format. If indexing
information is provided for the meta document (META_DOCUMENT_INDEX
component), the individual documents can be searched for and retrieved quickly
and easily.
Notes
Pool
A pool is a logical unit, a set of volumes of the same type that are written in the
same way, using the same storage concept. Pools are assigned to logical archives.
RC
See Read Component (RC).
Part of the Document Service that provides documents by reading them from the
archive.
Remote Standby
Archive server setup scenario including two (or more) associated Archive
Centers. Archived data is replicated periodically from one server to the other in
order to increase security against data loss. Moreover, network load due to
document display actions can be reduced since replicated data can be accessed
directly on the replication server.
Replication
Retention
Using the Application Layer, retention periods can also be assigned to documents
using rules.
SAP ArchiveLink
A standardized interface, mainly used to connect an SAP system with the archive
system.
Scan station
Workstation for high volume scanning on which the Enterprise Scan client is
installed and to which a scanner is connected. Incoming documents are scanned
here and then transferred to Archive Center.
SecKey
With SecKeys, you can protect the connections between a client and Archive
Center. A SecKey is an additional parameter in the URL of the archive access. It
contains a digital signature and a signature time and date. The client application
creates a signature for the relevant parameters in the URL and the expiration
time, and signs it with a private key. Archive Center verifies the signature with
the public key, and accepts a request only if the signature is valid and the
SecKey's expiration time has not been reached.
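The sign-then-verify-with-expiry flow behind SecKeys can be sketched as follows. This is a simplified illustration only: real SecKeys use an asymmetric signature with a private/public key pair, whereas this sketch substitutes a symmetric HMAC, and the parameter names ("docId", "seckey", "expires") are invented for the example.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"demo-key"  # stand-in for the real private/public key pair

def sign_url(params, ttl_s=60):
    """Add an expiration time and a signature over the URL parameters."""
    params = dict(params, expires=str(int(time.time()) + ttl_s))
    msg = urlencode(sorted(params.items())).encode()
    params["seckey"] = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return params

def verify_url(params):
    """Accept only requests with a valid signature that have not expired."""
    supplied = params.pop("seckey", "")
    msg = urlencode(sorted(params.items())).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(supplied, expected) and \
        int(params["expires"]) >= time.time()
```

Because the signature covers both the parameters and the expiration time, tampering with either invalidates the request, which is the property the SecKey mechanism relies on.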
SIA
See Single Instance Archiving (SIA).
Single Instance Archiving means that requests to archive the same component do
not result in an additional copy of the component on Archive Center. Instead, the
component is archived only once and then referenced by subsequent instances.
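Single Instance Archiving can be illustrated with content hashing: identical components map to the same digest, so the bytes are stored once and later archive requests only add a reference. The class and method names below are hypothetical, not the product's internal API.

```python
import hashlib

class SingleInstanceStore:
    """Store each distinct component only once, keyed by content hash."""

    def __init__(self):
        self._blobs = {}  # content hash -> bytes, one copy per distinct content
        self._refs = {}   # component id -> content hash

    def archive(self, component_id, data):
        digest = hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(digest, data)  # stored once, referenced after
        self._refs[component_id] = digest

    def retrieve(self, component_id):
        return self._blobs[self._refs[component_id]]

    def stored_copies(self):
        return len(self._blobs)
```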
Slot
Spawner
See Archive Spawner.
Tenant
A tenant consists of a defined user group for a customer. Tenants are strictly
separated from one another.
In Archive Server a tenant is defined by a user group with an associated,
dedicated policy. (Extended Archive Center feature)
TR-Esor
TSA
See Time Stamping Authority (TSA).
UTC
See Coordinated Universal Time (UTC).
VI pool
The VI (vendor interface) pool is connected to the storage system via the API of
the storage vendor. VI pools support single file storage. They require a disk buffer.
Sometimes also referred to as GS (Generalized Store) scenario.
Volume
• A volume is a memory area of a storage medium that contains documents.
Depending on the device type, a device can contain many volumes (for
example, real and virtual jukeboxes), or is treated as one volume (for example,
storage systems without virtual jukeboxes). Volumes are logically attached
(assigned or linked) to pools.
• Volume is a technical collective term with different meanings in STORM and
Document Service (DS). A DS volume is a virtual container of volumes with
identical documents (after the complete backup is written). A STORM volume
is a virtual container of all identical copies of a volume. For ISO volumes,
there is no difference between DS and STORM volumes.
WC
See Write Component (WC).
Windows Viewer
WORM
WORM means Write Once Read Multiple. A WORM disk supports incremental
writing. On storage systems, a WORM flag is set to prevent changes in
documents.
Component of the Document Service that carries out all possible modifications. It is
used to archive incoming documents (store them in the buffer), modify and delete
existing documents, set, modify and delete attributes, and manage pools and
volumes.
Write job