Manual Part C
- Administration -
Release 3.4
Compatible with
SAP NetWeaver BW 7.0, 7.3 and 7.31
April 9, 2013
PBS CBW NLS - Administration - 2
Email: info@pbs-software.com
Internet: www.pbs-software.com
PBS archive add ons® is a registered trademark of PBS Software GmbH, Bensheim.
SAP, the SAP Logo, R/3, R/3 Enterprise, SAP ArchiveLink, SAP ERP, SAP
BW, SAP NetWeaver, SAP NetWeaver BW, ABAP/4 are registered trademarks of
SAP AG, Walldorf/Baden.
The following symbols are used in this manual: Recommendation, Warning, Example, Consultation required.
PBS CBW NLS (Basis Component of all PBS CBW NLS Products)
The PBS CBW NLS manual is modular and comprises the following partial manuals:
• Part C: Administration
• Part E: Implementation*
* in preparation
This manual describes the administration required for the products PBS CBW NLS and PBS CBW NLS IQ.
If you have questions or encounter problems when installing the software, please contact the Service Hotline of PBS Software GmbH:
Email: hotline@pbs-software.com
Release Compatibility
The PBS CBW NLS discussed in this manual runs on the basis programs of SAP AG, 69190 Walldorf/Baden, Germany: SAP NetWeaver BW releases 7.0 (from SP 12), 7.01, 7.02, 7.3, and 7.31.
Table of Contents
1 Introduction ....................................................................... 11
1.1 Supplied Menus and Transactions ..................................................... 11
1.2 Basic Information on the PBS CBW NLS ........................................... 11
1.3 Basic Information on PSA Archiving ................................................... 12
1.4 Basic Information on PBS CBW NLS IQ ............................................. 13
5 Modeling ............................................................................ 56
5.1 Defining the Indices and Index Attributes ........................................... 56
5.2 Techniques for the Index Determination ............................................. 62
5.3 Defining Aggregates ........................................................................... 65
5.4 Aggregate Modeling ............................................................................ 68
1 Introduction
The menu for the PBS CBW NLS can be started by including the area menu /PBS/CBW0 in the corresponding role. Alternatively, you can add only transaction /PBS/CBW to the desired role, restricting yourself to the CBW Administration Cockpit. The programs described in Chapters 3 to 9 can be executed via the CBW Administration Cockpit, which is started with transaction /PBS/CBW. The programs described in Chapters 11 to 14 are executed via the administration interface of PSA archiving, which is started with transaction /PBS/PSA. The CBW Data Export Interface, which is used to extract data from the database and the archive, is covered in Chapter 15.
The PBS CBW NLS is an SAP-ADK-based software solution for SAP NetWeaver BW that allows transparent access to archived InfoCube and DataStore data via standard BW queries. The archived data can be analyzed separately or integrated with data from the BW database. When the CBW and the SAP data archiving process for BW are used together, rapid growth of the BW database can be curbed and controlled. This results in a significant reduction of the required disk space and maintenance effort while still guaranteeing the availability of all information.
The PBS CBW NLS contains software components for defining and generating indices and aggregates of archived InfoCube and DataStore object data, as well as for the subsequent selection of this data. Integrated nearline services enable the use of the PBS CBW NLS as a nearline storage. In addition, the PBS archive add on CBW method used in previous releases (SAP BW 3.x), which is based on access to archive data via VirtualProviders, is still fully supported.
All software components are written in the programming language ABAP and are
implemented using the SAP transport infrastructure. All programs are defined in the
PBS namespace (/PBS/), so that naming conflicts with SAP or user-defined
programs can be avoided. All software components are available additively in the
BW system, which means that SAP programs are not modified. Generated objects
are stored in the BW generation namespace /B19/, in order to avoid conflicts with
the customer namespace.
The CBW Administration Cockpit, the central transaction for modeling and administration, is used to model aggregates and indices. As with the archive data, the built aggregates and indices are stored in SAP ADK format in the file system, outside the BW database.
The software is supplied as an SAP add-on and installed using transaction SAINT.
The product can be used immediately after installation.
The administrative task of the PBS CBW NLS is to model and construct archive
aggregates and indices for each InfoProvider.
Important Note:
The current version of the PBS CBW NLS does not support InfoCubes with
non-cumulative key figures.
Within SAP, NearlineProviders are used to select nearline data using queries.
However, in the current release (SAP NetWeaver BI 7.31), these
NearlineProviders are not yet able to support non-cumulative key figures. Up to
now, SAP has not issued a statement as to when this functionality will be
available.
In standard SAP BW, no archiving is planned for the data of the “Persistent Staging Area” (PSA), as the information temporarily held there is in general no longer required after the data transfer to DataStore or InfoCube structures and is deleted from the PSA tables. For many users, however, PSA data retention is desirable in order to be able to repeat or supplement such a transfer to the DataStore or InfoCube structures, for example after changes. As the PSA table data is needed only in exceptional cases for this purpose, retaining it in the database does not make sense. We therefore offer both ADK-based and NLS-request-based data archiving for the PSA tables, so that these exceptional cases can still be supplied with the required PSA data. Using the PSA archiving function of the CBW, ADK/NLS archiving objects can be defined and generated to suit the required PSA data. The generated objects are managed on a transaction basis in a user-friendly interface. From here, the PSA data required for a new DSO/InfoCube transfer can be identified in the archive files and scheduled for reloading. As an alternative to reloading, a VirtualProvider that procures its data from the archive can be generated for the transfer structure. The archived data can thus be updated to the respective data targets without reloading.
When the PBS CBW NLS is used, rapid growth of the PSA tables can be curbed and controlled. This results in a significant reduction of the required disk space and maintenance effort while still guaranteeing the availability of all information.
The administrative task of PSA archiving is to generate the archiving objects and to manage them in transaction SARA or via the request-based interface.
The optionally available PBS CBW NLS IQ is a nearline storage solution for SAP NetWeaver BW 7. This solution moves the InfoCube, DSO, and PSA data to be archived to the Sybase IQ analytics server. With its column-based storage, the Sybase IQ database offers compression factors of five to eight. In addition to compression, its main advantage is highly optimized analysis, which allows extremely fast queries even for large data volumes. The administration effort involved is minimal. Sybase IQ is worthwhile for customers with a large data volume.
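As a rough illustration of what such a compression factor means in practice (the 1,000 GB source volume is an assumed example, not a sizing statement from this manual):

```python
def compressed_size_gb(source_gb, factor):
    """Estimate the nearline footprint for a given compression factor."""
    return source_gb / factor

# A hypothetical 1,000 GB InfoCube at the factors quoted above:
best = compressed_size_gb(1000, 8)   # most optimistic factor
worst = compressed_size_gb(1000, 5)  # most conservative factor
print(f"{best:.0f}-{worst:.0f} GB")  # 125-200 GB
```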
2.1 Architecture
From SAP NetWeaver 7.0, data can be offloaded to a Nearline Storage (NLS) system in addition to ADK-based archiving in the BW system. The PBS CBW NLS, which requires ADK-based archiving, uses the NLS interface to manage and select the archived data (see Diagram 1). Indices and aggregates can also be set up to increase performance. These are set up either during the archiving run (presumably available from Q1/2009) or in a separate run from the CBW Administration Cockpit. Like the archive data, indices and aggregates are stored outside the BI database in the form of ADK files.
[Diagram 1: CBW NLS services with indices and aggregates]
Archive modeling no longer takes place within the InfoProvider definition, but using
a data archiving process (DAP), which represents a separate BW object. In the
current SAP Release, the data archiving process supports archiving of InfoCubes
and DataStore objects. In later releases, SAP plans archiving of further BW objects
such as master data.
[Diagram 2: BW query reads database and archive via the NearlineProvider and the PBS NLS Service for ADK (InfoCube / DataStore object)]
NearlineProviders, which are linked to the original InfoProviders, are used internally in the SAP system for query access to the archive data via the NLS interface, making custom-defined VirtualProviders and MultiProviders unnecessary (see Diagram 2). The selection of nearline data is controlled via the characteristics of the individual queries.
2.2 Procedure
The individual steps for setting up an archive are shown in Diagram 3. The initial
setup also contains the modeling of the archive to be generated. Step 1 only needs
to be performed once, while steps 2 and 3 need to be performed for each
InfoProvider.
Diagram 3: Initial setup
1. Create link to CBW Nearline Service
2. Create data archiving process (ADK + NLS)
3. Modeling of indices and aggregates (opt.)
4. Execute data archiving (ADK + NLS)
5. Setup of indices and aggregates (opt.)
Actual data archiving takes place either using the classic method via ADK Archive
Administration or via InfoProvider Data Administration, as explained in detail in
chapters 2.5 and 2.6. Archiving is object-related, with a separate data archiving
process and a separate archiving object being defined for each InfoCube or
DataStore object. The archived data is written in sequential files outside the
database by the SAP archiving program.
Directly after archiving, the reorganized data can only be evaluated sequentially. The load relief on the BW database can be measured using, for example, the BW Database Analyzer.
Indices and aggregates are set up using the CBW Administration Cockpit. This
step can also be triggered automatically by archiving in a later expansion stage.
Before CBW can be used as a nearline storage, a connection to the CBW Nearline Service must first be set up. This requires a new entry in table RSDANLCON, made via table maintenance (transaction SM31), as shown in Diagram 4. The customer can choose any name for the nearline connection, such as CBW. The name of the connection class is a fixed default: /PBS/CL_CBW_CONNECTION. Once the entry has been made in table RSDANLCON, the connection to the CBW Nearline Service is established.
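For illustration, the entry could look as follows (the field labels are assumptions for this sketch; only the connection name and the connection class are specified above):

```text
Table RSDANLCON (maintained via transaction SM31)
  Nearline Connection : CBW                       <- any name chosen by the customer
  Connection Class    : /PBS/CL_CBW_CONNECTION    <- fixed default
```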
From SAP NetWeaver 7.0, the archiving method is no longer modeled within the InfoProviders but is defined using a separate BW object: the data archiving process, which must be created for, and is linked to, each InfoProvider. Exactly one data archiving process exists per InfoProvider.
In the maintenance interface, data archiving process modeling is split across five tab pages. On the first tab page, “General Settings”, you choose the nearline connection defined for the CBW Nearline Service (see chapter 2.3). As the CBW relies on ADK-based archiving, ensure that the flag “ADK-based archiving” is set.
In the tab page “Selection Profile“ you define the characteristics that form the
criteria for data selection. In the case of time slice archiving, the primary
characteristic is first defined and can be extended by further characteristics.
The tab page “Semantic Group” enables you to sort the data selected for archiving before it is written to the archive. The sorted data is grouped into data objects and written to the archive. For subsequent data access per index, it makes sense to ensure that the data objects have a certain minimum size (approx. 1,000 data records). In practice, it is sufficient to leave the semantic group empty; in this case storage is unsorted and the system splits up the objects according to fixed rules.
The tab page “ADK“ contains all settings that are required for ADK-based
archiving. You can find a detailed explanation of the individual parameters in PSA
archiving in Chapters 0 to 0. Only the most important parameters are therefore
listed below.
The settings for the size of an NLS data package are made on the tab page “NLS”. As ADK and NLS are both utilized in this method, the size of an ADK archive file corresponds to the size of an NLS data package. No further settings are therefore necessary here.
ADK Archive Administration is well known from earlier releases. It is called up either via the InfoProvider context menu in the Data Warehousing Workbench (see Diagram 11) or directly with transaction SARA. The system then displays the usual access screen where you can parameterize and start the write and delete runs, as shown in Diagram 12.
Write and delete runs are executed in batch via jobs. When you parameterize the
write run, the system first displays a screen for scheduling (see Diagram 13),
where you need to maintain the selection parameters of the write run in a report
variant (see Diagram 14). You can then start the delete run or schedule its
execution for a later point in time.
After the write run has been completed successfully, the delete run is scheduled
(see Diagram 15). You can make a selection from a list of incomplete archiving
runs (see Diagram 16) for all of which a delete run has not yet been performed. It is
thus possible to execute several write runs first and then complete archiving of
these runs with a single common delete run.
If you press the “Management” button, the system displays an overview of all archiving runs of the InfoProvider (see Diagram 17). However, as this overview does not provide sufficient information, it is more sensible to use Request Administration in InfoProvider Administration for this purpose (for details, see chapter 2.6). The data displayed there for the archiving request is considerably more detailed. The archiving runs can be assigned directly to the respective archiving requests using the 1:1 relationship between the request SID and the run number.
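This 1:1 relationship can be pictured as a simple two-way lookup (the SID/run pairs below are invented for illustration; in the system the mapping is maintained by InfoProvider Administration, not by hand):

```python
# Hypothetical pairs of (request SID -> archiving run number):
request_to_run = {302: 77, 303: 78, 304: 79}
run_to_request = {run: sid for sid, run in request_to_run.items()}

# Because the relationship is 1:1, the lookup works in both directions:
assert request_to_run[302] == 77
assert run_to_request[77] == 302
```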
All existing archiving runs of this InfoProvider are listed in the tab page “Archiving“
(see Diagram 19), together with details. Both the total status and the statuses of
the individual stages “Copy“, ”Verify“ and ”Delete“ are listed. Further details are the
selection condition, the document number of the respective ADK archiving run and
further information on the individual data packages.
In order to start an archiving run, you first need to generate an archiving request.
This is started using the button “Archiving Request“ (red marking in Diagram 19).
The selection conditions and the process control must be set in the following
screen, shown in Diagram 20.
The respective (time) characteristics for the selection conditions have been defined in the data archiving process. Available for selection are a relative time restriction of the time characteristic for time slices and, additionally, absolute selection criteria for all selected partitioning characteristics.
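A relative time restriction can be sketched as follows (the two-year residence time and the calendar-year logic are assumptions for the example, not product defaults):

```python
from datetime import date

def time_slice_cutoff(today, residence_years):
    """Latest fiscal year that is old enough to be archived,
    given a relative residence time in whole years (illustrative)."""
    return today.year - residence_years

# With a residence time of 2 years, a run started in 2013 would
# select all records up to and including fiscal year 2011:
cutoff = time_slice_cutoff(date(2013, 4, 9), 2)
print(cutoff)  # 2011
```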
The data archiving process is split up into several phases. In process control, you
have the option of letting the run only proceed up to a certain phase. This means
that parts of the archiving process can be executed separately. For example, if you
want to run the write run and the delete run in separate processes, as is usual in
previous archiving techniques, you can do so using the process control function. In
ADK-based archiving, as in CBW, write and delete processes must be performed
separately, and from Support Package 12 this is the only selectable option.
The selection screens for separate write and delete runs are shown in Diagram 20
and Diagram 21.
The write run is parameterized such that it runs up to status “40“ of the process
control (“Write phase successfully finished“) and then terminates. If you double-
click on the request line in Administration, you can start or schedule the
subsequent delete run. This can take place up to the completion of archiving
(status ”70“ or ”Delete phase confirmed and request completed“).
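The split into separate write and delete phases can be sketched as a minimal status model (only statuses 40 and 70 are taken from the text above; the function is purely illustrative, not the actual process control):

```python
# The two target statuses named above; intermediate statuses are omitted.
WRITE_DONE = 40   # "Write phase successfully finished"
COMPLETED = 70    # "Delete phase confirmed and request completed"

def run_request(stop_at):
    """Advance an archiving request up to the requested process-control
    status and report which phases were executed (sketch only)."""
    phases = ["write"]
    if stop_at >= COMPLETED:
        phases.append("delete")
    return phases

assert run_request(WRITE_DONE) == ["write"]           # separate write run
assert run_request(COMPLETED) == ["write", "delete"]  # run through to completion
```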
After archiving has been performed successfully, the status “green“ is displayed in
Request Administration, as you can see from Diagram 22.
Data archiving via process chains is completely integrated from SP13 for ADK-
based archiving. A description of this functionality will be given here in the near
future.
After archiving, query access to the offloaded data is not automatic; it must first be activated in the query properties. The query properties are adapted in the Query Monitor, which can be called up with transaction RSRT (see Diagram 23). The property “Read Near-Line Storage As Well” must be activated; this step must be performed individually for each query.
Diagram 25: Data Transfer Process for the Access to Archive Data
From SAP NetWeaver 7.0, the update of data from one InfoProvider to another InfoProvider is carried out using a data transfer process (DTP). In order to select data from the archive, you must choose the archive as the extraction source, as shown in Diagram 25. In this case, the choice of extraction modes is restricted to the mode “Full”, i.e. an extraction of the whole archive. It is, however, possible to set optional filters.
You can make possible restrictions in selection by pressing the button “Filter“. As
shown in Diagram 26, the system displays a screen with selection options for
characteristics of the InfoCube and key fields of the DataStore object. If necessary,
you can supplement the list with further characteristics/fields.
After the extraction parameters have been defined as described, the target
InfoProviders need to be selected and the DTP then activated and executed as
usual.
You can call up the reload function in the SARA administration interface by choosing the menu path “Goto” → “Reload” (see Diagram 27).
The system then displays the reload run selection screen (see Diagram 28) in
which you can choose the usual job parameters and the respective archive run,
and then create a variant. When you create the variant, you simply need to ensure
that the process control function is set to “Reload“ (see Diagram 29).
When you select the archiving run for reloading, you will see both the number of
the archiving run and the archiving date, as shown in Diagram 30. Further
information identifying the run is contained in InfoProvider Administration, where
the run number is also entered in the list of archiving requests (right column in the
list in Diagram 22). Run 77, selected as an example in Diagram 30, has request SID 302 and contains all data records from fiscal year 1998.
After the archiving run has been selected, parameterization is complete and the job
can either be executed immediately or scheduled for execution.
Once the reload run has been completed successfully, the status of the run is
displayed in InfoProvider Administration (see Diagram 31). In ADK Archive
Administration management, it is currently (SP17) not possible to see whether the
run has been reloaded. InfoProvider Administration is thus the reference for the
status of all archiving runs/requests.
3.1 Architecture
In addition to the use of the PBS CBW NLS as a nearline storage solution you can
carry out ADK-based archiving without using the nearline service. This case
corresponds to the PBS archive add on CBW method for SAP BW 3.x in which
aggregates and indices are always built separately from the archiving run. The
archive data can then be selected via VirtualProviders. As with the archive data,
the indices and aggregates are also stored here as ADK files outside the BW
database. Modeling and building of aggregates and indices are carried out centrally
via the CBW Administration Cockpit, which is also used for the generation of
VirtualProviders and other BW objects.
Diagram 32: Data archiving and index/aggregate setup without nearline service
The advantages of this method are the compatibility with 3.x data archiving and the
complete functionality even if a low support package level is used. If 3.x data
archiving is also active for InfoProviders, NetWeaver 7 data archiving cannot be
used. Since this means that nearline services cannot be used either, this method
must be applied.
The indices and aggregates are built using the CBW Administration Cockpit. In the method without nearline service, VirtualProviders are used for archive data selection (see Diagram 33). For selection via VirtualProviders, the CBW contains its own interface, the CBW-ADK interface, which also supports the use of aggregates and indices.
[Diagram 33: Archive access via MultiProvider and VirtualProvider (InfoCube / DataStore object)]
3.2 Procedure
The actual data archiving is carried out either using the classic method with ADK
archive administration or via InfoProvider data administration (details see chapters
2.5 and 2.6). 3.x data archiving can only be carried out via the ADK archive
administration. Archiving is object-orientated, with a separate data archiving
process and a separate archiving object being defined per InfoCube or DataStore
object. The archived data is written in sequential files outside the database by the
SAP archiving program.
The reorganized data can only be evaluated sequentially. The load reduction on the BW database can be measured using, for example, the BW Database Analyzer.
Before constructing the indices which are required for direct access, they must first
be defined individually for each InfoProvider. First, the characteristics for which an
index is to be built are selected using the CBW Administration Cockpit. Afterwards,
the characteristics and key figures which are to be included in the index structure
as attribute fields should be selected.
First setup
1. SAP standard BW data archiving process
2. Modeling of indices and aggregates
3. Setup of indices and aggregates
After the BW object data has been archived, the CBW modeling must be carried out for each InfoProvider. Apart from creating table structures and archiving objects in the Data Dictionary, this also includes generating the programs for index and aggregate generation, activation, and administration. BW objects such as InfoProviders and queries must also be generated. These actions are required only for the initial setup and after changes to the index structure, and are performed entirely in the CBW Administration Cockpit (see Chapter 4.2 for details).
From SAP NetWeaver 7.0, the archiving method is no longer modeled within the InfoProviders but is defined using a separate BW object: the data archiving process, which must be created for, and is linked to, each InfoProvider. Exactly one data archiving process exists per InfoProvider.
Diagram 35: General settings of the DAP for ADK without Nearline
In the maintenance interface, data archiving process modeling is split up into five
tab pages. In the first tab page “General Settings“ you set the flag ”ADK-based
archiving“.
Modeling is likewise carried out in the Data Warehousing Workbench, which is started with transaction RSA1. After selecting the InfoProvider, double-click to go to InfoProvider modeling. You can call 3.x archiving modeling via the menu path “Extras” → “3.x archiving”.
Only an overview of the modeling of the 3.x data archiving is given below. The
most important point is the choice of the selection parameters (see Diagram 36). In
addition to this, there are further archiving parameters, of which the cluster key is
described in more detail.
The cluster key, called “semantic group” in the DAP, determines the structure and sequence of the archived data within the archive files. It can consist of one or more characteristics by which the data records of the InfoProvider to be archived are sorted and written to the archive file.
In practice, it has been found that the modeling of the archive index (see chapter 5.1) or of archive aggregates (see chapter 5.3) has more bearing on data access performance than the choice of the cluster key. It is therefore usually sufficient not to enter any cluster key characteristic (see Diagram 37).
If the cluster key should, however, consist of one or more characteristics then the
following points must be observed with regard to later performance:
• The average number of data records per data object should be between
100 and 10,000. If the number of records in an object is too large, this will
directly influence the average access times, as all previous records of a
data object must be read when the object is accessed. Data objects with
an average of more than 10,000 records should be avoided if possible. On
the other hand, you should not create data objects that contain very few
data records as this increases the administrative overhead, which can lead
to a very poor compression rate.
• It is best to use characteristics in the cluster key which are also used as
selection criteria in the queries of the corresponding InfoProvider.
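The sizing guideline from the first bullet point can be checked with simple arithmetic (the record and object counts below are assumed example values):

```python
def records_per_object(total_records, data_objects):
    """Average number of archived records per data object."""
    return total_records / data_objects

def cluster_key_ok(avg, low=100, high=10_000):
    """Apply the rule of thumb from the bullet list above."""
    return low <= avg <= high

avg = records_per_object(5_000_000, 2_000)  # 2500 records per object
assert cluster_key_ok(avg)                  # within the 100-10,000 window
assert not cluster_key_ok(records_per_object(5_000_000, 100))  # 50,000: too coarse
```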
4.1 Overview
All activities within the PBS CBW NLS can be started from a central interface, the
CBW Administration Cockpit (see Diagram 38).
The activities consist of the modeling of archive indices and possible archive
aggregates, as well as the setup and administration of these. This requires
generating a suitable infrastructure. For this purpose, structures and archiving
objects must be created and object-specific programs generated. Activities
required for later access, such as creating PBS InfoProvider, copying queries and
adapting workbooks can be performed centrally from this interface.
Start the CBW Administration Cockpit in transaction /PBS/CBW. The menu that
appears after you call up the transaction is divided into four areas:
• Object selection
Here, the archiving object and the corresponding InfoProvider, for whose
archived data the index will be created, are selected. All actions in the
other areas “Build”, “Administration” and “Tools” refer to the selected
archiving object.
• Modeling
This menu option is used to define the indices and build the infrastructure
(DDIC objects, programs, InfoProviders), required to set up the indices as
well as for the subsequent access to the archived data. The traffic lights
display the current status of the respective item. However, as they only
perform short tests, the submenu item “Diagnosis” should be used for a
complete test.
• Administration
It is here that the index is generated and then activated. Reports for index
administration as well as a browser with a technical view are also available.
If customers use their own tools for job scheduling and job control, the
report names can be determined using the menu options.
• Tools
This area contains further archiving object-dependent functions such as a
generic SAP archive file browser or the modification tool for workbooks.
Maintain Indices
A separate index can be set up and activated for all characteristics of the
InfoProvider. Here, the desired indices are determined by means of a
selection list. All characteristics and key figures of the InfoProvider can be
used as index attributes. These additional fields can be added by means of
a second selection list to the index structure (see chapter 5.1 for details).
Maintain Aggregates
Generate InfoProviders
This item can only be used for the setup without nearline service. It
generates the VirtualProvider as well as the MultiProvider, which are used
for accessing the archived data. The original InfoProvider is used as a
template. Both InfoCubes as well as DataStore objects are supported. For
detailed adjustments the generated InfoProviders can be maintained via
the Business Workbench (transaction RSA1). For further details see
Chapter 4.2.
Copy Queries
This item can only be used for the setup without nearline service. With this
option the queries of the original InfoProvider (InfoCube or DataStore
object) are copied into the VirtualProvider as well as into the MultiProvider.
It is possible to select the queries to be copied as well as to explicitly
specify the names of the copies. See Chapter 4.2 for further details.
The area “Administration” consists of the items listed below, which are started using the Start button. Alternatively, you can call up CBW job scheduling via the corresponding button, which can be used to maintain a variant and to schedule the respective run as a background job.
Generate Index
This item starts the report to generate the indices. Optionally, the
generated indices can be activated and consolidated automatically. For a
detailed description see Chapter 6.
Activate Index
This activates the indices that were generated in the previous step. During
the activation, pointer information of the indices grouped into clusters is
written into an administration table. This information is required for
subsequent access to the indices. The disk space requirement in the
database is only about 0.1 % of the index stock. For a detailed description
see chapters 6.3 and 6.5, where the concept of main and long-term
archives is also explained in more detail.
This function deletes PBS index generation runs completely (see also
Chapter 6.8).
Consolidate Index
The report for the index consolidation is started, which reorganizes the
index stock of the main archive. Separate index runs are merged so that
uniformly sorted indices are produced (see chapter 6.4 for details).
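Conceptually, this consolidation is a merge of individually sorted index runs into one uniformly sorted stock, as in the following sketch (the index keys and file names are invented for illustration):

```python
import heapq

# Three separately generated index runs, each already sorted on the index key:
run_1 = [(1001, "file_A"), (1007, "file_A")]
run_2 = [(1003, "file_B"), (1005, "file_B")]
run_3 = [(1002, "file_C")]

# Consolidation produces a single, uniformly sorted index stock:
consolidated = list(heapq.merge(run_1, run_2, run_3))
print([key for key, _ in consolidated])  # [1001, 1002, 1003, 1005, 1007]
```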
Index Browser
This function provides you with a technical overview of the created indices.
In particular, it is possible to display the statistics of the current as well as
all previous index generation runs (see also chapter 6.8).
This file browser provides you with a technical overview of the SAP original
archive file independent of its structure. In particular, if the internal
structure of the archive file is unknown you can obtain valuable information
by using the file browser, such as information about the number of data
records per data object.
Modify Workbooks
The index used for each query call is stored in a special statistics table. This function shows the contents of this table, with all existing indices listed and sorted according to the frequency with which they are used.
Like the data display of InfoCube data (transaction RSA1, menu item
“InfoCube Data Display“), archived data can be displayed here via the
VirtualProvider.
Archive Overview
This item displays all indices with their respective generation runs as well
as the assigned SAP archiving sessions and all possible archive aggregate
runs in a clearly laid out tree structure.
This file browser provides a technical overview of any SAP original archive
file, independent of its structure. In particular, if the internal structure of the
archive file is unknown you can obtain valuable information using the file
browser, such as the number of records per data object.
Transport search
This function produces a list of all CBW transports that have been imported
into the BW system. As an option you can limit the function to certain
source systems or user names.
• Check the ADK files using Table ADMI_RUN. During the check, all
existing index files belonging to a certain InfoProvider are checked.
If the option “SAP check object” is also selected and an archiving object is specified, only SAP archive files are checked.
Modify Workbooks
After the successful import of the transport into the destination system (using tp), this function applies the Customizing.
Using this button you can call up the archive administration (transaction SARA) for the PBS index, to which a separate SAP archiving object is assigned. Here you can, for example, maintain the Customizing for the archiving object, such as the logical file name, file path, etc.
Using this button you get to the object-independent global settings, which
allow you to define the ADK customizing of index and aggregate files as well
as the sort area size (see chapter 8.1). This function is described in detail in
chapter 6.1.
Note:
With the nearline method it may also be necessary to generate a VirtualProvider. Since SAP NetWeaver 7 does not yet contain nearline support for MultiProviders, there is no way of accessing nearline data via MultiProvider queries in the standard system. You can use the CBW Administration Cockpit to generate a VirtualProvider for accessing nearline data. This VirtualProvider can be integrated into the MultiProvider manually or via the cockpit.
When you select the button “Generate InfoProviders” the system displays a selection
screen as shown in Diagram 40. Here, the names for the VirtualProvider and
MultiProvider suggested by the system can be replaced by names that you define. In the
case of the MultiProvider, three actions can be selected. With the first option, a
MultiProvider is generated that contains the original InfoProvider and the VirtualProvider.
If a MultiProvider already exists into which the VirtualProvider is to be included, its name must be made known to the system via the item ”Use existing MultiProvider“.
The MultiProvider listed in the "Edit InfoProvider" dialog is not displayed in this view. After confirming the dialog and applying the settings, the VirtualProvider is integrated into the selected MultiProviders, and the assignment of the characteristics and IDs ("identification") is carried out according to the settings of the original InfoProvider.
In addition, the item ”Do not enter a MultiProvider“ is available if a MultiProvider is not used or if the administration of the MultiProviders is to be performed separately from PBS CBW NLS.
The "Update" button allows the characteristics and key figures of the VirtualProvider to be identified subsequently in all MultiProviders selected under "Use existing MultiProviders". If necessary, the MultiProviders are then activated automatically.
In the next step, the queries are copied from the original InfoProvider (InfoCube or
DataStore object) into the VirtualProvider as well as into the MultiProvider. By pressing
the button “Copy BW queries into InfoProviders” you access a user interface as shown in
Diagram 42. Within the interface all queries (technical name and description) that are
assigned to the InfoProvider are listed. As the technical names of the queries in the target
InfoProvider must differ from the names of the source InfoProvider to avoid name
conflicts, you can use automatic naming assignment or change the name manually. By
pressing the button “Generate names” the names for all marked queries are assigned by
the system without conflicts. Double-clicking a query displays an input window as shown in Diagram 44, in which you can change the technical name of the query for the target InfoProvider.
Diagram 42: Screen for Copying the Queries from the Original InfoProvider to
the VirtualProvider and the MultiProvider
Once all query names are maintained correctly (manually and/or automatically), you must first define the target InfoProvider. After selecting either the VirtualProvider or the MultiProvider as the destination, you can start the actual copy process by pressing the button “Copy”.
After the names have been defined and the queries copied to both destination InfoProviders, the setup of the infrastructure for the index setup and for access to the archived data is complete.
5 Modeling
The modeling of indices and aggregates is described in this chapter. Both types of
modeling are optional in the nearline method and are intended to optimize access
performance. In the classic method, however, it is mandatory to execute the index
modeling even if you do not want to define an index.
The initial screen is located in the "Modeling" area of the CBW Administration
Cockpit (see Diagram 45). The traffic lights show the modeling status. If a traffic light is not displayed, the modeling in question is optional and has not yet taken place.
The connection between the index, index structure and archived data is explained
in the following section with an example. Diagram 46 displays the structure of an
ADK file with extracted data from R/3 Cost Accounting. The characteristic fields
(blue) are followed by the key figures (white). Indices must now be created for the
three characteristics. Each index structure consists of the corresponding
characteristic field as key index field in the first position as well as the pointer to the
corresponding data record in the archive. The characteristic field “Vendor” and the
key figure “Amount” have been added to the index structure as additional
attributes. You can thus perform a complete selection at index level for queries
that, for example, contain the characteristics “Cost center” and “Vendor” in the
WHERE condition. This means that only the required data records are selected
during the subsequent archive access.
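The idea of resolving a WHERE condition entirely at index level can be sketched as a hypothetical Python model. The field names, file names and record positions below are invented for illustration and are not PBS structures.

```python
# Illustrative sketch: an index entry holds the key characteristic first,
# the pointer into the ADK archive, and optional additional attributes
# such as "vendor" and "amount" (all names invented).
index_cost_center = [
    {"cost_center": "CC100", "pointer": ("FILE01", 17), "vendor": "V001", "amount": 250.0},
    {"cost_center": "CC100", "pointer": ("FILE01", 42), "vendor": "V002", "amount": 80.0},
    {"cost_center": "CC200", "pointer": ("FILE02", 3),  "vendor": "V001", "amount": 99.0},
]

def select_at_index_level(index, cost_center, vendor):
    """Resolve a WHERE condition on cost center AND vendor entirely at index
    level; only the matching pointers are then read from the archive."""
    return [e["pointer"] for e in index
            if e["cost_center"] == cost_center and e["vendor"] == vendor]

# Only one of the three archive records has to be read afterwards.
assert select_at_index_level(index_cost_center, "CC100", "V001") == [("FILE01", 17)]
```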
When you access index modeling, you reach an interface as shown in Diagram 47.
Here the index definition is divided into two areas. In the "Index" area you define for which
characteristics of the archive structure indices are defined. Then the index structure is
defined. This structure is the same for all indices and only differs in the first field. In
addition further characteristics and key figure fields can be added to the structure to make
selection of the archive data via the index as complete as possible for more complex
WHERE conditions. If all necessary characteristic and key figure fields are already
contained in the index, you may not even need to access the archive data. These
additional fields are selected in the "Attributes" area.
Using the pushbutton in the "Index" area you can go to a selection screen in which all characteristic fields of the InfoProvider are listed, as shown in Diagram 48. Each characteristic can be selected as an index field. To do this, copy it to the left-hand section using the arrow keys or the mouse.
Six additional buttons (in the case of DataStore objects, seven) that can be used to
activate or deactivate certain field groups provide additional support during index
definition:
Query analysis: only select characteristics that are used in the queries
of the InfoProvider (see below for details)
“Select indicator“ – using this, the threshold value for copying the
characteristic values from the query analysis can be varied
Particular attention should be paid to the button because selecting it executes an extensive query analysis. For example, only those fields that are used by the selected queries can be selected as indices (see also chapter 5.2).
Note:
Index modeling and index generation runs are mandatory in the classic method
without a nearline connection. However it is possible to leave the characteristics
list for the index definition blank. In this case, indices are not built during the
index generation run and instead the selected archives are only made known to
the CBW. Additionally, internal statistics are generated for sequential access.
In the second part of index modeling, the index attributes must be defined. Using the button you can go to a selection screen, as shown in Diagram 49, in which all characteristic and key figure fields of the InfoProvider are listed. All fields to be included in the index structure as additional attributes must be copied to the left part.
In this case, all fields are deactivated (button ), which means that no additional attributes are contained in the index structure. The space required for this solution is minimal; however, the access time for the queries is at its maximum.
All fields are activated (button ), which means that every additional attribute is contained in the index structure. The space required for this solution is at its maximum; however, the access time for the queries is minimal.
As each characteristic and each key figure is contained in the index
structure, access to the archived data is not required. The space required
for the indices can be roughly calculated from the space required by the
data archive multiplied by the number of indices.
All characteristic fields that are used in the queries of the InfoProvider are
activated (button ). The space required for this solution is medium.
Ideally, access to the archived data is no longer required. However, this
only applies for queries that are used for selecting the additional attributes
and are not changed subsequently. For more details see chapter 5.2.
The buttons and are available for further support, and can be used
to activate all characteristics as well as key fields (for DataStore objects).
The button enables the list of selected index attributes to be copied from
another InfoProvider.
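The rough space calculation mentioned above (space of the data archive multiplied by the number of indices) can be illustrated with invented figures:

```python
# Rough estimate from the rule of thumb above (all attributes in the index):
# index space ~ data archive size x number of indices. Figures are invented.
archive_size_gb = 50      # assumed size of the data archive
number_of_indices = 3     # assumed number of defined indices
index_space_gb = archive_size_gb * number_of_indices
assert index_space_gb == 150
```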
The reports for the generation, activation and administration of the index are then generated; in this way an optimal adaptation to the InfoProvider is achieved. The generated report names lie within the dedicated namespace /B19/. This generation namespace is reserved for objects generated from the PBS namespace within SAP NetWeaver BW.
After you start query analysis, the system displays a selection screen with all
available queries of the original InfoProvider and the MultiProvider (see Diagram
50). Query selection can be changed using the following buttons:
The names of the selected queries are taken from the role that has
been selected
After selecting the queries to be examined, you can choose between four examination methods, which can be selected individually. The method “Analyse field usage“, which is set by default, examines the queries according to the frequency of the characteristics used. In combination with the method “Only restricted characteristics“, in which only the restricted characteristics are taken into account, the system displays a list of all restricted characteristics in the selected queries, sorted by frequency.
The two remaining methods are intended as a supplement. The method “Analyse
field granularity“ examines the characteristics contained in the list according to
the size of the value set. A characteristic with a large value set, such as material, is
suitable as an index, while a characteristic with a small value set, such as the
calendar year, is less suitable. The result of this method is incorporated into the evaluation of the individual characteristics. The method “Evaluate index usage“ uses the internal query statistics of the PBS CBW NLS and also incorporates their results into the overall evaluation. However, this method only makes sense after the CBW has been in use for a certain period of time, during which statistics can accumulate.
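The interplay of field usage and field granularity can be sketched as a toy scoring function. The formula and all figures below are invented for illustration; the real analysis is internal to PBS CBW NLS.

```python
# Hypothetical sketch of the two analysis ideas: characteristics that are
# used (restricted) often in queries score high ("Analyse field usage"),
# and a large value set (e.g. material) makes a better index than a small
# one (e.g. calendar year) ("Analyse field granularity").
def relevance(frequency, total_queries, distinct_values, max_distinct):
    usage = frequency / total_queries             # share of queries using the field
    granularity = distinct_values / max_distinct  # relative size of the value set
    return round(100 * (usage + granularity) / 2)

# material: restricted in 9 of 10 queries, 50,000 distinct values
# calendar year: restricted in 9 of 10 queries, only 10 distinct values
assert relevance(9, 10, 50_000, 50_000) > relevance(9, 10, 10, 50_000)
```

Under this toy scoring, material would clearly outrank the calendar year as an index candidate, matching the guidance in the text.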
After query analysis is completed, the selection screen of the indices contains three
additional columns (see Diagram 51): ”Occurrence“, ”Restricted“ and ”Relevance“.
The first column “Occurrence“ shows the degree of usage of the individual
characteristics, while the second column shows whether the characteristics are
restricted in the queries or not. The third column “Relevance“ displays an overall evaluation into which the results of all methods have been incorporated.
For the transfer of characteristics into the selection screen, a default relevance threshold of 70% is used, which can be modified using the button .
The query analysis can also be used to determine index attributes. Apart from the options, the selection screen is identical to that of the index determination (see Diagram 52).
Diagram 52: Selection screen for the query analysis for index attributes
The selection of the queries to be examined determines which characteristics and key figure fields are included in the index structure. The option "Add compound characteristics" extends the list by the InfoObjects to which the selected characteristics are compounded. This option is active by default.
After having executed the function the selection list of the index attributes (see
Diagram 49) contains all characteristics and key figure fields that are included in
the selected queries.
When executing queries on large archived data stocks, the runtimes of the queries can be very long. As with the aggregates in the SAP BW standard, it is also possible in PBS CBW NLS to define aggregates, which are used to build summarized data stocks from the archive. Like the indices, the summarized data is built outside the database and written to the file system in ADK format. If an external archive system is available, the aggregates can be stored in this system using ArchiveLink.
Aggregates of archived data largely correspond in their definition to the aggregates
in the database, but differ in their internal structure. Standard aggregates are
stored in the database in an InfoCube structure, while archive aggregates are
stored in the form of archive files in the file system of the application server.
In the example, two aggregates, A1 and A2, are set up for this archived data. The first aggregate, A1, exists in two instances (A1.1 and A1.2), which differ in the sorting of their characteristics.
If a query on the archive is started after aggregate setup, the system first checks whether all required characteristics and key figures are contained in the respective aggregate. All suitable aggregates are then compared with the possible index accesses with regard to their sorting (see also the aggregate instances A1.1 and A1.2) and their summarization, and the best access method is determined.
The selected access method is stored in the CBW statistics and can be queried
using the function “Display query statistics“ if necessary.
The differences between the archive aggregates of the PBS CBW NLS and the
aggregates in the SAP-BW database are listed below:
The design of the maintenance screen for modeling archive aggregates is based
on aggregate maintenance in the Administrator Workbench (transaction RSA1), in
order to provide the Administrator with a familiar environment in modeling and
administration tasks (see Diagram 55).
The left window displays a tree structure that contains the time dimensions and all
characteristic dimensions of the VirtualProvider generated by CBW. You can
include characteristics in the aggregate definition by selecting an InfoObject or a
dimension and dragging and dropping it to an aggregate in the right window.
The upper right window shows all aggregate definitions and detailed information
such as the status of the generation and the time of the last call. Context menus
are available for all tree levels via the right mouse button. Please see Chapter 6.6 for a more detailed description of the functions.
The message window is located below the right window. Notes on the course of an
action are displayed here. All messages can be removed by calling up the menu
item Goto -> Delete message list.
Diagram 56 shows the aggregate definitions (right) and the characteristic list of the
VirtualProvider (left) in further detail.
New aggregates can be created and existing ones managed using the application toolbar at the upper edge of the screen. The individual buttons are described below:
Using a pop-up menu at aggregate level (see Diagram 57), parts of the
aggregate definition can be changed and activation and setup can be started. The
menu is structured as follows:
Display data
If an archive aggregate has already been set up, you branch to the
technical display.
Remove aggregate
When the aggregate is activated the Data Dictionary objects and the
programs are updated. The aggregate setup is then started, if desired.
Setup is carried out for all archives that have not yet been processed.
Synchronize aggregate
Aggregates for archives that have not yet been processed are set up and
merged and summarized with the existing aggregate data. The DDIC
objects are not updated.
Using a pop-up menu at characteristic level (see Diagram 58) several items of
the aggregate definition with regard to the selected characteristic can be changed.
The following items can be selected:
All characteristics
Fixed value
Only one value is selected from the value set of the characteristic with this
function. The aggregate is thus set up using only this value.
Use as attribute
Remove component(s)
The first option ”... from SAP aggregates“ uses the SAP aggregate as a template
and adapts the definition accordingly when copying, if necessary. Please note that
no major characteristics are defined in the proposed aggregates. Before the
aggregate can be set up, these must be defined.
The second option ”... from query“ enables the user to obtain aggregates using a
query analysis. The available queries can be freely selected.
The third option ”... via analysis“ incorporates the field granularity (value set of a characteristic), freely selectable queries, and the CBW access statistics, which are updated by the CBW when queries are called, into the determination. As the granularity analysis can take some time, it can be started in the background.
The archive indices are built internally in two steps as shown in Diagram 60. First
the indices are created by the generation program, and then they are activated.
This separation has the advantage that you have a clearly defined time from which
the indices can be used. If any indices already exist, they can be merged with
newly generated indices beforehand, if necessary (see Diagram 61).
[Diagram 60: Generate PBS Archive Index → Activate PBS Archive Index]
The index generation program imports the SAP archive files and generates the
selected indices. The indices are stored in SAP ADK format in a separate archiving
object, generated by the CBW Cockpit.
The archiving objects that are generated by the CBW use the naming convention
below:
The physical directory or the files in which this data is located can be determined from the logical path that was assigned to the respective archiving object in transaction AOBJ. The logical standard path of the SAP archiving object is used as the default setting. However, we recommend defining a separate logical path in accordance with the Customizing for PBS archiving objects.
[Diagram 61: BW archive files (ADK format) in the file system/archive server → Generate PBS Archive Index → Merge PBS Archive Index]
A special feature is the definition of the parameter for the sort area:
Internal SORT: This is an SAP internal sort which reads the profile
parameters DIR_EXTRACT and DIR_SORTTMP to define
the directories for temporary files.
All settings in the field "SORT path (external)" apply to both the indices and the aggregates if the external SORT is used. The default setting for the indices is the internal SORT, for the aggregates the external SORT. In both cases the directory is defined via the profile parameter DIR_SORTTMP by default.
The maintenance interface is displayed in diagram 61. The table is empty in the
default setting, meaning there are no downtimes.
Two cases have to be distinguished when entering the start and stop times:
Stop time > start time: Stop time on the same day as start time
Stop time < start time: Stop time on the following day
The example illustrated in diagram 61 takes the weekly downtime into account: it starts on Friday evening at 20:00:00 and ends on Saturday morning at 04:00:00.
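The two cases above can be sketched as a small check. This is an illustration only; for simplicity the weekday is ignored and only the time-of-day rule is modeled.

```python
from datetime import time

# Sketch of the start/stop rule: if stop > start the window lies within one
# day; if stop < start it wraps past midnight into the following day.
def in_downtime(now, start, stop):
    if stop > start:
        return start <= now < stop
    return now >= start or now < stop  # window wraps over midnight

# Weekly downtime from the example: 20:00:00 until 04:00:00 the next morning.
assert in_downtime(time(22, 0), time(20, 0), time(4, 0)) is True
assert in_downtime(time(3, 59), time(20, 0), time(4, 0)) is True
assert in_downtime(time(12, 0), time(20, 0), time(4, 0)) is False
```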
Via the pushbutton you can branch to a dialog in which enhanced options for query access are provided.
No aggregation during query runtime: This option prevents the execution of the internal aggregation and leaves it to the OLAP processor.
CBW creates various InfoObjects that are used, for example, during the generation of VirtualProviders. By default, the system chooses the InfoObject names. If you want to use your own names or already existing InfoObjects, you can maintain them via the pushbutton .
The index generation is started in the CBW Administration Cockpit under the item
’Generate index’. The selection screen of the report called up for this purpose,
/B19/CBW_<ArchObj>_LOAD, is shown in Diagram 64.
The restrictions in the selection screen and their consequences are explained
below.
If the select button ‘Select manually’ is active, a window is opened when the request screen is confirmed. This window displays a list of the available SAP archives created on or after the date you have entered (see Diagram 66).
Diagram 66: Proposal List of the ADK Archives to be Set Up for the Indices
(method for 3.x archiving)
Now select the archives (for example, for the archiving object BWCZCCA_~1) that you want to include in your archive index. In automatic mode, all requests or SAP archives of the corresponding InfoProvider that were created on or after the date you entered are selected, without the list being displayed. SAP archives from this period that have already been loaded into the CBW are not selected.
Synchronize aggregates
If this checkbox is flagged, all active aggregates are set to the same status as the
indices after index generation. This means that the corresponding aggregate
generation runs are performed for all SAP archives that are still missing.
Alternatively, aggregate generation can also be started in the modeling and
administration screens for aggregates (see Chapters 5.3 and 6.6).
Activate index
Sequential processing
If the number of archive files or the size of an individual archive file is very large, the system resources are often not sufficient to build all indices in a single generation run. In this case the setup has to be executed in several steps. With the setting ‘Files per run’ you define the number of archive files to be processed per generation run. With the setting ‘Temporary sort area size’ the number of archive files per run is instead determined at runtime, using the size of the sort area available on the selected application server as a reference. This setting is useful if the disk space for the sort area is limited. In addition, with this setting a large archive file can be processed in several steps, whereby the interval limits of each partial step are determined at runtime.
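The two splitting settings can be sketched as follows. File names and sizes (in MB) are invented; the first function mirrors the ‘Files per run’ idea, the second the size-based ‘Temporary sort area size’ idea.

```python
# Hypothetical sketch of splitting the selected archive files into several
# generation runs (names and MB sizes invented).
def split_by_count(files, files_per_run):
    """Fixed number of archive files per generation run ('Files per run')."""
    return [files[i:i + files_per_run] for i in range(0, len(files), files_per_run)]

def split_by_size(files, sort_area_mb):
    """Fill each run until the sort area size would be exceeded
    ('Temporary sort area size')."""
    runs, current, used = [], [], 0
    for name, size in files:
        if current and used + size > sort_area_mb:
            runs.append(current)
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        runs.append(current)
    return runs

files = [("F1", 400), ("F2", 300), ("F3", 500), ("F4", 200)]
assert split_by_count([f for f, _ in files], 3) == [["F1", "F2", "F3"], ["F4"]]
assert split_by_size(files, 800) == [["F1", "F2"], ["F3", "F4"]]
```

Note that the real size-based variant can additionally split a single oversized file into partial steps, which this sketch does not model.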
The following three options describe the manner in which the index generation is
performed. The option ’Perform sequential index generation with
consolidation’ allows you to merge the generated indices with the existing stock
at the end of index generation. You thus achieve optimum performance during later
access. Using the option ’Perform consolidation after processed runs’ you can
define the number of index generation runs after which a consolidation is
performed automatically. Using the option ’Do not perform consolidation’ you
can postpone the consolidation to a later point in time. For this purpose, one of the first two options can be used in a later run, or you can start the consolidation directly in the CBW Administration Cockpit via the item ’Consolidate index’.
Note:
The size of the sort area provided in the setting ’Temporary sort area size’
applies both for the temporary extract file as well as for the temporary sort file that
are created during the index generation run. The directory for the extract file is
defined using the parameter DIR_EXTRACT. The directory for the sort file is
defined using the parameter DIR_SORTTMP. Both parameters can be displayed
and changed in the system profile in transaction RZ10. Both parameters often
point to the same directory, so the partition in which the directory is located must provide approximately twice the free disk space of the value specified in the setting ’Temporary sort area size’.
Parallel processing
For very large systems we recommend not executing the index generation in one process but in several parallel processes in order to reduce the runtime. In this case the selected archive files are distributed evenly among the processes. If you press the button ‘Applicat. server’ a selection screen is displayed in which you can define the application servers together with the number of applicable processes per server. After closing the screen, the total number of processes in the line is adjusted automatically. The option ‘Perform consolidation after parallel index generation’ enables you to merge the indices generated in the individual runs with the existing stock; you thus achieve optimum performance during later access.
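The even distribution of the selected archive files among the parallel processes can be sketched as a simple round-robin. This is an illustration only; the actual split logic is internal to CBW.

```python
# Sketch: distribute the selected archive files evenly over the parallel
# processes (round-robin; file names invented).
def distribute(files, processes):
    buckets = [[] for _ in range(processes)]
    for i, f in enumerate(files):
        buckets[i % processes].append(f)
    return buckets

buckets = distribute(["F1", "F2", "F3", "F4", "F5"], 2)
# Each of the two processes receives roughly half of the files.
assert buckets == [["F1", "F3", "F5"], ["F2", "F4"]]
```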
Once all required entries have been made in the selection screen of the index generation report, the selection screen can be saved as a selection variant for setting up the index archive, and the report can be started as a background process.
Using the alternative button in the CBW Administration Cockpit, you can call up
CBW job scheduling, with which the variant can be maintained and the job
scheduled (see Diagram 67).
After successful generation of the index archive, the index generation report issues
archive selection statistics (see Diagram 68).
If the checkbox “Activate index“ was not flagged when the index archive was set
up, activation can still be carried out at any time via menu item ’Activate index’ in
the CBW Administration Cockpit. Like the index setup (see Diagram 67) you can
reach CBW job scheduling using the button at activation, where you can create
variants and schedule jobs.
If you set this parameter, the most recent generation run is automatically used for creating the administration indices. Particularly in the introductory and test phase it can be useful not to delete older generation runs but to keep them for a while. With the parameter “Select run manually” you can activate an earlier run and set up the corresponding administration indices at any time. Diagram 70 shows the selection list displayed after starting in manual mode.
Diagram 70: Selection List for Setting Up the Administration Indices for a
Chosen Generation Run
After successful setup of the administration indices a log is displayed (see Diagram
71).
This parameter enables you to delete the current indices without having to activate
the indices for another generation run at the same time.
Each generation run creates a separate, internally sorted index archive. With a large number of runs there are therefore several index archives, each of which has to be searched individually on access. The consolidation function combines all individual index archives into one single, continuously sorted index archive. You thus achieve optimum performance during later access. The prerequisite for a consolidation run is at least two index archives.
Consolidation is normally started from the index generation (see the section on index generation). However, it can also be started manually via the menu item ’Consolidate index’.
Here too, you can reach CBW job scheduling by pressing the alternative button
in order to create variants and schedule jobs.
This number defines the maximum number of index archives that are merged simultaneously in one internal process. If the number of index archives to be merged exceeds this limit, several merge runs are carried out internally. The number may be between 2 and 90; in practice, the default setting of 90 should be retained.
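The effect of this limit can be illustrated with a small calculation. This is a sketch only; the pass counts follow from the stated 2..90 limit, with more archives requiring additional internal merge runs.

```python
import math

# Sketch of the limit described above: at most 'max_merge' index archives
# (between 2 and 90, default 90) are merged per internal pass; with more
# archives, several passes run until a single archive remains.
def merge_passes(archives, max_merge=90):
    passes = 0
    while archives > 1:
        archives = math.ceil(archives / max_merge)
        passes += 1
    return passes

assert merge_passes(90) == 1    # fits into one pass
assert merge_passes(200) == 2   # 200 -> 3 intermediate archives -> 1
```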
If you set this parameter, the data from the previous partial runs is automatically
deleted after successful conclusion of the consolidation run. In the introductory
phase, it is often useful not to set this parameter and to retain the previous runs for
security purposes. Older runs can also be deleted separately at a later point in time
via the menu item ’Delete PBS index run’ in the CBW Administration Cockpit.
Consolidate aggregates
With this parameter, all active aggregates are also consolidated at the end of the
index consolidation, if required. This means that the respective aggregate
consolidation runs are performed for all aggregates for which the data stock is
distributed to at least two runs. Alternatively, aggregate consolidation can also be
started in the modeling and administration screen for aggregates (see Chapter
6.6).
Important Note:
Whether and which archives should be transformed into long-term archives can only be decided when 3.x data archiving is used. If you use data archiving processes (DAPs), the respective index or aggregate stock is transformed automatically into a long-term archive per archiving request (= data archiving run). This procedure was introduced to keep the individual requests separate from each other with respect to the indices and aggregates as well. It supports the SAP function that enables archiving requests created via DAPs to be reloaded at any time.
The long-term archiving concept consists of dividing the index stock from one single main archive into several independent "runs", each of which is, however, consolidated in itself. In this way, often unacceptably long consolidation runtimes (such as more than 10 hours) are avoided.
For this, it is possible in the index administration to transform the currently active
CBW (main) archive using the index activation program (see Chapter 6.3) into a
long-term archive and to start with the setup of a new main archive in the next
index generation run.
This enables the runtime to be reduced to a minimum again. The previous CBW
index archive is now active as a long-term archive and is no longer changed.
The current main archive and all active long-term archives are always considered during read access. As the CBW index archive is stored in ADK format, it is also possible to move older, less frequently accessed long-term archives into an external storage system (Content Repository), such as an optical archive.
If you set this parameter, the current main archive is deactivated and activated as a long-term archive. The main archive must, however, be consolidated and must not consist of several partial archives (partial runs).
These two parameters enable you to activate or deactivate CBW index archives that have been converted into long-term archives. An overview of all available archives is displayed when you press the button “Select. list”. You can mark the index archives to be activated or deactivated by clicking the corresponding checkbox (see Diagram 74).
When setting up aggregates, two cases must be distinguished – the initial setup
with an existing index stock, and the synchronization of the aggregates after an
executed index generation run. The synchronization is described in detail in
Chapter 6.7. The main focus of this chapter is the initial setup of aggregates with
explanations on the status information.
The initial setup of an aggregate can either be accomplished by pressing the button
in the application toolbar or by the context menu that appears in the right
window when you select the aggregates (see Diagram 75). The respective
generation run is then started directly after the selected aggregate is activated.
The aggregate generation run can take some time, which is why it is started
automatically in the background. You can track the progress of the generation in
the job overview (button ). Once this is completed, you can update the view with
the button .
The parts of the menu that are used for setup and synchronization of an aggregate
are explained again below.
Synchronize aggregate
Aggregates for archives that have not yet been processed are built and can
be merged and summarized with existing aggregate data. The table
structures are not updated.
After the generation run, the status line of the built aggregate in the modeling
screen looks as displayed in Diagram 76.
• Column 14, Number of calls: statistics of how often this aggregate has been used
• Column 18, Last changed by: name of the user who last changed the aggregate
definition
The prerequisite for using an aggregate is a green status of all four traffic lights in
the status line of the aggregate. In particular, the synchronization with the index
archive (column 7) and the switched-on status (column 5) are two important points.
The degree of usage can be determined via the number of calls (column 14) and
the time of the last call (column 15). The query statistics (see Chapter 7.1) are
available in the Tools area of the CBW Administration Cockpit for a complete
overview.
In periodic operation, the indices are generated via an index generation run after
the SAP data archiving. However, after the generation run the archive aggregates
are no longer synchronous, so that they can no longer be used. This can be easily
checked in aggregate modeling via the traffic light in column 7 of the status line,
which displays the synchronous status.
Automatic synchronization is the easy method for rolling up the aggregates. The
checkbox “Synchronize aggregates“ must be flagged in the selection screen of the
index generation run, as shown in Diagram 79. This means that a synchronization
run is started after the index generation, which synchronizes all aggregates
simultaneously. Please note that this method has a high resource requirement and
a significantly higher runtime than the manual method, in which the aggregates are
built individually. On the other hand, the administrative effort is considerably less
with the automatic method.
Further functions are available for managing the index and aggregate data in the
CBW Administration Cockpit, area Administration (see also Chapter 4, Diagram
38).
Index Browser
After start, the system displays a selection screen as shown in Diagram 80.
• Index:
Display of the indices with drilldown up to the archived data record
• Log tables:
Display of the log of the selected runs
• Statistics:
Display of the archive and index statistics generated during the index
generation run. The statistics show the value ranges for each
characteristic in the index or archived data record.
After execution, the system displays a list of the data objects (clusters) in the
archive. You can double-click on one of the lines to call up the actual indices (log
tables,…). Drilldown to the corresponding archived data record is then also
possible with another double-click.
If you enter the parameter “Number of clusters“ you can restrict the list of selected
index clusters to a certain number. 200 lines are selected in the default setting. In
the case of very large archives, it is useful to enter a start value for the cluster
values, which is enabled with the parameter ”Start at key...“.
After start, the system displays a selection screen, as shown in Diagram 81.
Diagram 81 shows three active index archives – the main archive and two long-
term archives. If, for example, you only want to rebuild the main archive, you select
the respective line and delete it using the delete button.
Important Note:
Please be very careful when you use the delete function, as a deletion
operation cannot be reversed. The respective index archive is
irretrievably deleted.
7 Tools
This chapter describes the most important functions in the areas “Tools“ and
”Object-Independent Tools“ of the CBW Administration Cockpit. The function for
modifying the workbooks is described in Chapter 9.
7.1 Monitoring/Statistics
At each query call, the index used is stored in a special statistics table. The
function “Display query statistics“ lists the contents of this table, in which all existing
indices are displayed sorted according to frequency of usage (see Diagram 82).
The statistics are used for support in performance optimization, to show which
indices and aggregates are suitable for the selection of archived data. Indices or
aggregates that are not used for a long period of time can thus be removed from
the stock with a clear conscience. In addition, sequential accesses can be
recognized and accelerated by adding an index or an aggregate (see also Chapter
8.3).
In the CBW Administration Cockpit, area “Tools“, there are two methods of
displaying the archived data – the “SAP Archive File Browser“ for a technical
overview of individual archive files and the “Data Display VirtualProvider“ for a
more logical selection with restriction according to characteristics.
The SAP Archive File Browser enables you to view the contents of one or more
data objects of an archive file. The archiving object is entered as a default setting.
Alternatively, you can also call up objects in the area “Object-Independent Tools“,
where you can select any archiving object.
After you access the function, the system displays a selection screen as shown in
Diagram 83. In the parameter “Object area“ you can restrict the selection of the
individual data objects to an interval. A data object is a grouping of data records for
an individual object, which is compressed in itself. The grouping rule is specified
via the cluster key (see also Chapter 3.4). As the number of data records in a data
object can greatly vary, you should only select a very small interval at the
beginning.
The system then displays a list of available archive files, followed by information on
the selected intervals (see Diagram 84). After you select a file by double-clicking,
the system displays a detailed list of the data records together with all
characteristics and key figures.
After you start the function the system displays a selection screen, as shown in
Diagram 85. The list of parameters consists of characteristics, grouped by
dimensions, and optional settings for selection and display. The field selection for
output of individual requirements can also be adapted.
After execution, the system displays a list of the selected data records in ALV
format, which can be exported, for example, for further examination.
The function “Archive overview“ enables you to display the main archive and
existing long-term archives together with all corresponding index generation runs
and the respective archive runs in a tree structure. This also contains possible
aggregates whose generation runs are separated by main archive and long-term
archives. SAP archives, for which no indices have yet been built, are also
displayed. This enables you to easily see which archive runs are available for direct
access and which are still missing.
Diagram 86 shows this type of tree structure with a main archive, two long-term
archives and three aggregates.
Diagram 86: Overview of the Archive Runs with Index and Aggregates
Since the PBS CBW NLS is installed as an add-on in the BW system, its status can
be easily determined via the SAP system data (component information) (see
Diagram 87). Sometimes, however, it is better to search for patches that were
imported in the form of transport requests into the system.
Transport search is available to manage and check the history of the installed
transport requests of the PBS archive add on CBW and PBS CBW NLS, which can
be found in the area ”Object-Independent Tools“. Alternatively, you can also start
transaction /PBS/CBW_Z9CP directly to call up transport search.
In the overview (see Diagram 88) all installed PBS products are listed by transport
number and sorted in descending order. You can restrict the transports to be
displayed as required by making suitable entries in the filter fields of the display
table. The column "Transp. no." contains the number of the PBS
transport request. "System", "User", "Date" and "Description" contain further
information for identifying a particular transport precisely. The following parameters
are available for entering restrictions:
We recommend setting the parameters System and User to "*" in order to display
all the CBW transports delivered by PBS.
Once complete index modeling, including the setup of the infrastructure, has been
performed in the development system, all settings and generated objects must be
transported to the test and production systems. For this purpose, the following
functions are available in the area "Object-Independent Tools":
When you start the export function, the system displays a selection screen, as
shown in Diagram 89. One or more InfoProviders must be selected as an
obligatory entry.
At export, all the index modeling settings as well as, optionally, the InfoProviders
generated by CBW are then placed in a SAP transport request and, if desired,
released.
After the transport has been imported into the target system the import function
must be started in this system in order to accept the modeling settings in the
Customizing tables of the CBW. Diagram 90 shows the selection screen after start.
The actual import is carried out when you press the button "Start import" (or F8).
The import can also be performed in test mode (button "Start import (test mode)"
or F9).
8 Administrative Topics
This chapter covers all topics concerned with administrative activities, such as role
maintenance, resource requirements, performance tuning, etc.
The PBS CBW NLS uses indices and aggregates for optimized selection of archive
data. Before using them they must be generated with reports that are usually
started in the background. The resources that are needed for the generation runs
consist of the required disk space for the data storage and the temporary disk
space for the internal sorting as well as the CPU and RAM load of the application
server on which the batch run will be running. The CPU load is comparable with
that of an archiving run, whereas the RAM load is somewhat higher due to the
sorting used (about 100 MB at most). In contrast to an archiving
run the load for the database server is negligible because the archive and not the
database is used as a source for the index and aggregate build. The required disk
space for the build is described in detail below.
The indices and aggregates of the PBS CBW NLS are set up in database-independent
file systems. The system administrator’s task is to reserve free
storage capacity in order to set up the archive. The disk space capacity for the
indices to be provided depends on the number of records which should be stored
in the archive, the number of indices per data record, and the number of index
attributes. The disk space of aggregates depends on the aggregation level of the
respective aggregate. For example, an aggregate with an aggregation level of 50
needs about 1/50 of the archive space. The average disk space of aggregates is
rather small in comparison with the archive or the indices.
When using a minimum index structure you have to assume an occupancy of about
5 to 10 bytes per index. When using index attributes, for example in a maximum
index structure, no concrete recommendation can be made as this depends on the
size of the archive structure and the number of index attributes used. An
approximate upper limit is the size of the archive structure itself divided by a
minimum compression rate of 5. In practice, however, compression rates of 10 to
20 are quite usual.
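As a rough illustration of the figures above, the following Python sketch estimates the disk space occupied by the index stock. It is illustrative only: the 10 bytes per index is the upper guideline value from the text, and the record and index counts are assumed example values, not product defaults.

```python
# Rough estimate of the disk space occupied by the CBW index stock.
# Uses the guideline from the text: about 5 to 10 bytes per index
# for a minimum index structure. All input values are assumptions.

def index_disk_space_mb(records, indices_per_record, bytes_per_index=10):
    """Estimated index disk space in MB (1 MB = 1,000,000 bytes)."""
    return records * indices_per_record * bytes_per_index / 1_000_000

# Example: 10 million archived records, 5 indices per record,
# upper guideline value of 10 bytes per index.
print(f"{index_disk_space_mb(10_000_000, 5):.0f} MB")  # 500 MB
```

For index attributes or a maximum index structure, no such simple formula applies; there the archive structure size divided by the compression rate (5 to 20) gives the upper bound described in the text.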
You have to reserve additional temporary disk space for extract as well as for
sorting during an index generation run. In practice, it has been observed that
approximately 200 MB disk space per index should be reserved for extract and
sorting with an index generation run for an archive file of 10 MB.
With the setting “Temporary sort area size“ the available disk space is specified
by the user and the index generation program splits the run automatically into
several partial runs, as required. In practice, you should reserve about 1 to 2 GB
for the temporary sort area. A more precise estimation of the temporary disk space
actually required is only necessary if the setting “Files per run“ is used in the
index generation run. With this setting, the disk space must be sufficiently large in
order to set up indices for a complete archive file. The setting “Parallel
processing“, however, has the largest requirement, as the temporary disk space
for all selected archive files within one run must be available there.
The following example describes how the required disk space can be determined
for extract and sort. The InfoProvider should contain 20 characteristics and 10 key
figures, corresponding to an archive structure with approximately 30 fields and 500
bytes in length. 10 MB archive data corresponds to about 200,000 archived data
records, due to a very good compression rate at archiving of more than 10:1.
Extract files are NOT compressed. Their size can be determined as follows:
The length of the index structure can vary, depending on the definition, between a
minimum structure of about 120 bytes per index and a maximum structure, which
corresponds approximately to the archive structure. In the example, 5 indices are
to be created per data record. This corresponds to approximately 5*300 bytes = 1.5
KB for the extract in the case of an average index structure, which means that with
200,000 data records an extract file of approximately 300 MB would be created.
The temporary sort file, which is created by the SORT itself, should need about 400
MB in addition. In our example, this corresponds to a total demand of 700 MB for
extract and sort.
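The arithmetic of this example can be reproduced in a short Python sketch. The values are exactly those assumed in the example above, and 1 MB is taken as 1,000,000 bytes:

```python
# Worked example from the text: 200,000 archived data records,
# 5 indices per record, an average index structure of 300 bytes,
# plus a temporary sort file of about 400 MB.

MB = 1_000_000

records = 200_000          # archived data records (~10 MB archive file)
indices_per_record = 5     # indices created per data record
index_struct_bytes = 300   # average index structure length in bytes

# Extract files are not compressed: 5 * 300 bytes = 1.5 KB per record.
extract_bytes = records * indices_per_record * index_struct_bytes
sort_bytes = 400 * MB      # temporary sort file created by the SORT

print(f"Extract file: ~{extract_bytes / MB:.0f} MB")                  # ~300 MB
print(f"Total demand: ~{(extract_bytes + sort_bytes) / MB:.0f} MB")   # ~700 MB
```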
In an extreme case, an archive file of 100 MB or more requires a very high disk
space capacity. To avoid such large disk space requirements, you should choose
the setting “Temporary sort area size“.
In larger SAP systems, the database and the applications normally run on different
servers, known as the database and application servers. It is often recommended
to use several application servers for performance reasons. However, as the PBS
CBW NLS is set up in file systems independent of the database, all application
servers must have direct access to these file systems. It is the system
administrator’s responsibility to organize the application servers such that the file
systems of the PBS CBW NLS can be always reached with the same path. In Unix,
NFS (Network File System) offers a practical solution. The example below of a
three-level client/server architecture shows how this can be done. The system
consists of a database server and an application server (and several presentation
servers), with the CBW created on the database server. In this case, proceed as
follows:
The file systems containing the CBW must first be released on the database server
using the command share. The following command (syntax of Sun/Solaris)
releases the file system /pbs/cbw:
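The command itself is not reproduced above; as an illustration, a typical share command for this file system might look as follows (a sketch based on standard Sun/Solaris syntax, not a command taken from the manual):

```sh
# Share (export) the CBW file system via NFS on the database server.
# Sun/Solaris syntax; the export options may differ at your site.
share -F nfs -o rw /pbs/cbw
```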
The released file systems can then be mounted on the file system of the
application server. A prerequisite for this is that the directories used as mount point
exist. The following example of a command mounts the released file system
/pbs/cbw of the database server sun20 at the mount point /archive/cbw:
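The mount command itself is not reproduced above; a typical Sun/Solaris form would be the following (a sketch based on standard syntax; host name and paths are those from the example):

```sh
# Mount the file system /pbs/cbw shared by the database server sun20
# at the mount point /archive/cbw on the application server.
# Sun/Solaris syntax; the mount point directory must already exist.
mount -F nfs sun20:/pbs/cbw /archive/cbw
```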
Please note that in the case of several application servers, the same mount point
must be used on each of them, as only one single path can be assigned (in this
example, /archive/cbw).
Note:
The server containing the data of the PBS CBW NLS must also be an NFS server
so that the application servers can access the NFS server as NFS clients.
Under Windows NT/2000/2003 – if NFS is not installed there – the local drive of
the database server on which the PBS archive add on is contained must be
shared. The "Sharing" area in the properties of the respective drive is available for
this purpose. After successful sharing, the drive can be mapped on the application
server, for example, by using the Explorer ("Tools → Map Network Drive").
Several factors are decisive for the size of the index files: the ADK technology
itself, the number of selected indices, and the number of index attributes. The use
of ADK technology to store the
index data offers the advantage of compression by approximately a factor of 10.
However, one of the most important factors is the number of selected indices, as
the size of the index files is directly proportional to this number. This means that
ten indices per data record also need 10 times the space for the index files. The
number of index attributes however, does not have a direct influence on the space
requirement as fields are often still blank and this therefore results in a very good
compression.
The aggregate files are usually much smaller than the archive files, as their size is
inversely proportional to the degree of summarization. The space requirements of
highly summarized aggregates can therefore be disregarded.
In general, this means that, for an optimum performance and low disk
requirements, you should only create a few indices that are used by as many
queries as possible, and in addition build aggregates for frequently used queries.
The administrator of the PBS CBW NLS should have authorization for using all the
transactions listed in Table 3. All activities for setting up and managing the CBW
can be performed using the three most important transactions /PBS/CBW,
/PBS/PSA and SARA. Archive modeling from the Data Warehousing Workbench is
not included here and requires further authorizations.
Transaction Description
There are two methods of starting the tool via the CBW Administration Cockpit.
In the first case, select an archiving object in the overview of the Object-dependent
tools (initial view). Under the area Tools you can branch directly to workbook
modification. Users are then offered all workbooks that contain queries for the
selected InfoProvider.
The other possibility is to branch to the Object-independent tools. If you start the
workbook modification program from there, all workbooks are displayed for which a
conversion is possible.
The actual modification of the workbook is carried out via an MS Windows program
(PBSWBMOD.EXE) which is delivered with the PBS CBW NLS. This must be
copied to an appropriate directory on the (Windows) front-end before being
executed. Correct installation of MS Excel is also essential.
As MS Windows front-end functions are used for the conversion, processing in the
background is not possible.
In the area Settings, options are defined that are essential for successful execution
of the program.
The Temporary path contains files that are only needed for a short time to process the
workbook. By default, the temp directory of the front-end is used; however, it can be
changed, for example, to D:\Temp.
The workbook is modified via an MS Windows program which is part of the delivery of the
PBS CBW NLS. This program must be copied first to an appropriate directory on the
Windows front-end, and the Path to the PBS modification tool should refer to it. In
addition, please note that MS Excel must be installed on the front-end. The system
supports the user when selecting appropriate directories by displaying a selection dialog
via the input help (function key F4).
Before modifying workbooks the tool creates a backup of its Excel components in
PBS-specific tables, which makes it possible to restore the original workbook. If a backup
exists, a new modification is not possible as it would overwrite the original
workbook. A new modification should only be made via the option "Delete existing
backup if necessary" if the backup is no longer needed. Please also refer to the notes in
Chapter 9.3 “Notes on Conversion”.
If a problem occurs the conversion can be terminated with the option “Cancel on error“.
This option is only significant if several workbooks are to be converted.
The selection in the area Mode enables you to modify (convert) the selected workbooks
or to restore automatically created backups. Depending on the selection, the lists
displayed are updated.
Actual workbook selection is carried out in the area Workbooks. The Title, Creator
(responsible person), Creation date and (technical) ID are used for identification and
default sorting is in ascending order of the title. If you hold down the CTRL key, you can
make multiple selections, which can then be shifted between the two list fields using the
arrow keys or via drag and drop.
The area Errors contains a list of errors that occurred dependent on the workbook.
If conversion is successful, the area contains a corresponding message.
Before the actual conversion can be executed, the queries of the original workbook
should be analyzed, and it should be determined whether corresponding queries
exist in the PBS MultiProvider. Only queries for which this applies can be converted
automatically. It is thus absolutely essential that the queries have been copied
using the tools provided in the CBW Administration Cockpit, as the required
information can only be set up in this way.
If a modified workbook is overwritten (for example via the SAP Business Explorer),
it can no longer be restored automatically. In this case you can retrieve the original
workbook by immediately restoring the Excel component via the PBS modification
tool, loading it (for example via the SAP BEx Browser), and storing it without
updating. If this is not possible, the workbook should be recreated. It is therefore
recommended to back up an original workbook before performing the modification.
To make this easier for customers, the “Conversion Tool“ has been integrated into
the PBS CBW NLS. This enables simple ABAP/4 programs to be adapted to the
archive by pressing a button. The tool searches for the select commands and
adapts these to the access using a PBS function module.
The limit of the "Conversion Tool" lies in whether a select structure can be
transformed into a loop structure.
In all cases, it is always necessary to check the parts in the program that have
been converted during and after conversion. In addition, not every select command
can be adapted to the archive.
The conversion tool for the PBS CBW NLS is restricted to the conversion of selects
to the active table of DataStore objects. Selects to the fact table of InfoCubes are
not supported.
An ABAP/4 program can be converted to the PBS CBW NLS in the following steps:
Conversion using the Conversion Tool is described here using the example
program /PBS/CBW_UTIL_CONV_DEMO.
In the initial screen, you first need to enter a source program. The dummy program
“/PBS/CBW_CONV_DUMMY“ is always the target program. The functions Change,
Display, Copy and Execute are available for the source and target program.
If you do not want to convert a specified source program to the dummy program
but to a different target program, you can only do so by using the copy function of
the source program. When you copy, the system requests the target program
name and takes over the name entered.
Conversion Status
The first table column displays the conversion status, which can have the following
values:
• The select was converted automatically: the Conversion Tool found all the
information required for the conversion.
• The select could not be converted: possible reasons are that the table is not
supported by the PBS CBW NLS or that it is a fact table.
• External program call: an external program call was found in the source
program (include, external PERFORM or CALL FUNCTION).
SeqNo.
Sequential number of the select command: starting with 1, the select commands
found are numbered sequentially.
Line
Perf
Performance of the access to the PBS CBW NLS – the following statuses are
distinguished:
F11: You can save the result of the conversion and the name of the target
program using function key F11.
This display enables you to compare the source and target programs directly. The
cursor is positioned on the current line number of the select command.
10.6 Particularities
CLIENT SPECIFIED
Access to the PBS CBW NLS is always client-specific. Please remove this clause.
PACKAGE SIZE n
The Conversion Tool terminates immediately with the following error message:
Source program not check-free or unknown SQL statement in line 40.
Please remove the clause “Package size n“ and the corresponding ENDSELECT
command from the source program.
UP TO n ROWS
GROUP BY
Aggregate functions
Aggregate functions such as MAX, MIN, SUM, COUNT, AVG are not supported.
LIKE
Program not check-free as loop does not support the statement ”Like“.
Distinct
The select addition DISTINCT is supported by the Conversion Tool by deleting all
duplicate entries from the internal table that is created.
Please note that the table is sorted and that the deletion of the duplicates always
refers to the primary key of the internal table.
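The behavior described here – sorting the internal table and then removing adjacent duplicates with respect to the primary key – can be illustrated with a small Python analogy. The actual Conversion Tool generates ABAP; the field names used here are invented for the example.

```python
# Python analogy of how the Conversion Tool emulates SELECT DISTINCT:
# sort the result table, then drop adjacent rows whose key fields match.
# Rows with equal keys collapse even if other fields differ, because
# the deduplication refers only to the primary key.

def distinct_by_key(rows, key_fields):
    """Sort rows by key_fields and remove adjacent duplicates."""
    keyfn = lambda row: tuple(row[f] for f in key_fields)
    result = []
    for row in sorted(rows, key=keyfn):
        if not result or keyfn(result[-1]) != keyfn(row):
            result.append(row)
    return result

rows = [
    {"matnr": "100", "amount": 5},
    {"matnr": "200", "amount": 7},
    {"matnr": "100", "amount": 8},  # same key "100": removed as duplicate
]
print(distinct_by_key(rows, ["matnr"]))  # keeps one row per matnr
```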
BYPASSING BUFFER
SY-SUBRC
Correction of the system variable SY-SUBRC is only integrated into the target
program with a SELECT ... INTO TABLE statement.
If direct value entries and select options are found in the WHERE clause of the
SELECT command for the key field, the select option is transferred. The direct
value entry is then checked within the loop.
JOIN/Subqueries
The CBW Conversion Tool does not support JOIN statements and subqueries.
The Conversion Tool of the PBS CBW NLS supports the conversion of selects with
dynamic components in the form
SELECT * FROM TABNAME WHERE (COND)
or
SELECT * FROM (TABNAME) WHERE (COND).
Any select instruction can of course be used instead of the * (for restrictions see
Chapter 10.6).
WHERE clauses with static and dynamic components in the form
WHERE STAT_COND AND (DYN_COND)
are also supported.
11.1 Architecture
The PBS CBW NLS also contains software components for archiving PSA data. In
addition to the archiving and deletion programs, software components for
generating the archiving object and for the data selection of archived PSA data are
also included. All software components are written in the programming language
ABAP and are delivered using the SAP transport infrastructure. All programs
are in the /PBS/ namespace, which means that naming conflicts with SAP or
custom programs are avoided. The generated programs are in the namespace
/B19/. All software components are available additively in the BW system, which
means that no SAP programs are modified. Archiving, deletion and reloading can
be executed in four ways:
• Write program
• Delete program
• Read program
• Reload program
[Diagram: the archiving object connects the PSA with the archive files
(ADK/Sybase IQ); read access is possible via a VirtualProvider or an InfoCube]
You can call up the PSA archiving using the transaction /PBS/PSA. The following
administrative activities are performed here:
From SAP NetWeaver 7.0 it is also possible to move data out of the BW system to
nearline storage (NLS) in addition to the ADK-based archiving. For this, the PBS
CBW NLS IQ uses the analytics server Sybase IQ as a high-performance nearline
storage.
PSA archiving generates archiving objects with all corresponding programs, such
as write and deletion programs and other required objects. All characteristics for
the archiving objects are set in the central program interface of the PSA archiving
function.
First, the type of archiving must be determined for the definition of the archiving
object. You can choose between ADK, NLS, or ADK+NLS. According to the
selection the necessary tabs to define the archiving method are shown or hidden. If
you decide to use ADK-based archiving (without NLS), it is possible to use the
request-based archiving interface instead of the transaction SARA (see chapter
13). This program interface is used as standard for NLS-based archiving. A
precondition for NLS-based archiving is the connection of a Sybase IQ database
server to the SAP NetWeaver BW System.
The structure definition must not be changed in transaction AOBJ under any
circumstances.
If you use the ADK-based data archiving, please also see the chapter ’Data
Archiving/Customizing’ in the SAP Library: http://help.sap.com/saphelp_nw2004s/
helpdata/en/8d/3e4ea8462a11d189000000e8323d3a/content.htm.
The DataSources to be archived are selected here. Different PSA tables can be
assigned to each DataSource. Any number of DataSources can be assigned to one
archiving object. If various PSA versions exist for one DataSource, all PSA
versions are automatically copied to the archive structure. If a new version of the
transfer structure is available at a later point in time, you can make a comparison
by pressing the button ’Refresh’. Optionally, a VirtualProvider can be generated for
each DataSource (see Chapter 11.4).
The Customizing for the archiving object is defined here in the same way as when
using transaction AOBJ.
File Structure
Storage
• Content Repository
The Repository is a SAP Knowledge Provider administration unit for mapping
physical storage media. Various different physical storage media can be
addressed (database, archive, etc.), which represent different types of Content
Repositories.
• Automatic Start
Indicates that archived data is automatically transferred to a connected storage
system after successful processing.
Deletion Jobs
• Event
Name of the event with which the job is linked. An event is a signal that
indicates that a predefined status has been reached in the system. The
background processing system receives events and starts the background
job(s) that are linked with the event. You can specify predefined events
from the SAP applications or events that you have defined yourself.
• Parameters
If an event requires a parameter, enter the desired parameter in this field. A
parameter qualifies an event. With the event SAP_END_OF_JOB, for example,
the job name and job number are output as parameters.
Via the SAP nearline storage connection it is possible to store the data to be
archived in the database system Sybase IQ using CBW NLS IQ. A precondition is
the connection of a Sybase IQ database server to the SAP NetWeaver BW system
(for details, see Chapter 17).
Attention: Since InfoObjects that are defined as pure attribute fields cannot be
transferred to the dimensions of an InfoProvider, they cannot be read via the
VirtualProvider either.
Since the nearline connection (Diagram 99) in the target system might differ from
the connection in the source system, the corresponding tables in the database
cannot be created in the transport postprocessing program. After the import, use
the transaction /PBS/PSA02 to define the nearline connection in the target system.
When doing this, the tables are created in the database.
Please see also the schematic process of data archiving in the SAP Library:
http://help.sap.com/saphelp_nw2004s/helpdata/en/8d/3e4cee462a11d189000000e8323d3a/content.htm
For data archiving purposes, a variant for the archive write program must be
created for each archiving session. The selection parameters are explained below.
• Request
The request number is optional. If necessary, it can be determined in
transaction RSA1.
• Test Run
If the deletion program is started automatically, the parameter controls whether
the program should run in 'Test' status.
• Generate Archive
This parameter controls whether an archive file is generated. If no archive
file is generated, the system only creates the statistics that an actual
archiving run would produce. No entries are created in archive administration,
and the delete program is not called either.
The write program does not archive the PSA data via the PSA structure
/BIC/B0000######, but via a copy of this structure. The copied structure
/B19/B0000###### is in the PBS generation namespace. The B19 structure
therefore appears in the write and delete logs in place of the BIC structure. Using
the B19 structure, it is possible to read archived data if (for whatever reason) the
original PSA table no longer exists. In addition, the field information of the structure
is stored in the archive. If the B19 structure is lost, it can be generated
automatically using the program /PBS/PSA_CREATE_B19_FROM_ARCH.
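The naming convention described above can be sketched as follows (the /BIC/ and /B19/ namespace prefixes are taken from the text; the helper function and the concrete table name are purely illustrative and not part of the PBS software):

```python
def b19_structure_name(psa_table: str) -> str:
    """Map a PSA structure name in the /BIC/ namespace to its copy in
    the PBS generation namespace /B19/, as described in the text."""
    prefix = "/BIC/"
    if not psa_table.startswith(prefix):
        raise ValueError("expected a /BIC/ PSA structure name")
    return "/B19/" + psa_table[len(prefix):]

# Hypothetical example name (the '######' part is a generated number):
print(b19_structure_name("/BIC/B0000123456"))  # → /B19/B0000123456
```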
The delete program reads the archived data from the archive file and deletes it
from the database. This procedure guarantees that only data in the database that
has been successfully stored in the archive file is deleted.
The delete program does not delete the requests in the Administration Workbench
(Diagram 104) automatically. This must be performed using the tools in the
Administration Workbench, but is not absolutely essential. If the PSA data is
reloaded later, the request numbers are regenerated if they have been deleted in
transaction RSA1. In this context, check whether the display of the requests in the
Administration Workbench is correspondingly restricted.
In this context, please refer again to Chapter 11.4. If you generate a VirtualProvider,
you can avoid having to reload data to the PSA table.
All existing archiving runs are listed here with their details. In addition to the
overall status, the statuses of the individual steps "Copy", "Verify", and "Delete"
are listed. Further details include the DataSource, PSA table, PSA request, last
job, and the ADK document number (if available).
Depending on the archiving method ADK, NLS or ADK+NLS (chapter 11.3), the
PSA data is written to ADK files and/or to the Sybase IQ database. You can display
the data via Sybase IQ by selecting the hyperlink in the column "PSA tables".
You first have to generate an archive request to start an archiving run (button
"Archiving request" in Diagram 106). The data archiving processing is divided into
different phases (Diagram 107). Via the process flow control it is possible to stop
the run at a specific phase. Thus parts of the archiving can be executed separately.
If, for example, you want to execute the write run and the delete run in separate
processes, as was usually done with the ADK archiving technique, you can use
the process flow control to do this.
Finally, archiving is realized using the program /PBS/PSA_NLS. This program can
also be started directly via the transaction /PBS/PSA_REQ (Diagram 108). You can
create variants here and start them via job scheduling.
• Request
The request number is optional. If necessary, it can be determined via
transaction RSA1 (Diagram 104).
• Target Status
Archiving can be executed step by step.
In this respect we refer again to chapter 11.4. You can avoid reloading in the PSA
table by generating a VirtualProvider.
To reload a request, click on the button in the corresponding request line (Diagram
106). This calls the dialog that is displayed in Diagram 110. During the execution a
reload request is generated for the archiving request.
You can call up the archive read program by choosing the menu item ’Goto/Read
archive’ [F5] in transaction /PBS/PSA. Using this program, you can select archive
files specifically, for example, via the request date (see Diagram 111) and further
restrict the data via dynamic selections (Diagram 112). A targeted search for
specific data records is thus possible (Diagram 113).
You can call up the overview of archiving sessions by choosing the menu item
’Goto/Archiving sessions’ [F6] in PSA archiving. The display corresponds to a large
extent to the administration in transaction SARA, but has been adapted in certain
points and the display function has been enhanced.
During archiving of the PSA tables, a list of contents is created for each archive file.
It is thus possible to restrict the archive files you are searching for via a selection
screen. Cross-archiving object selection is also possible.
In the overview of archiving sessions, the request number, request date, and
number of data records per request are additionally displayed compared with
transaction SARA. You can call up the archive read program directly by double-
clicking on the request number (Chapter 14.1). Default values are assigned to the
selection criteria of the read program (see Diagram 116).
The Data Export Interface of the PBS CBW NLS contains software components for
generating programs that extract InfoProvider data from the BW database and
archives.
The CBW Data Export Interface generates extraction programs using a multi-level
procedure. The Report Generator is a central tool in this procedure. This Generator
creates a temporary intermediate report from the Data Export Interface report
template from which the actual extraction program is then generated in a second
step.
The report template contains the code of both the intermediate and the final
report. The report generator reads the template line by line, checks whether each
line belongs to the temporary report or the destination report, replaces code
passages where necessary using a substitution table created in advance, and
writes the instructions to an internal table. Using subroutine pool technology, the
intermediate source code is then executed and the extraction program is created.
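The line-by-line substitution step can be sketched as follows. This is only an illustration of the mechanism described above: the '&NAME&' placeholder syntax, the function name, and the sample template line are invented for this sketch and are not the actual PBS template format.

```python
def generate_report(template_lines, substitutions):
    """Sketch of the report generator's substitution step: read the
    template line by line and replace code passages using a
    substitution table prepared in advance."""
    out = []
    for line in template_lines:
        for placeholder, code in substitutions.items():
            line = line.replace(placeholder, code)
        out.append(line)
    return out

# Hypothetical template line and substitution table:
src = generate_report(["SELECT * FROM &TABLE& INTO TABLE lt_data."],
                      {"&TABLE&": "/BIC/B0000123456"})
print(src[0])  # → SELECT * FROM /BIC/B0000123456 INTO TABLE lt_data.
```

In the real tool, the resulting source code is then executed via subroutine pool technology to create the extraction program.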
[Diagram: the PBS CBWDEI report template and substitution table produce a temporary report, which in turn generates the extraction program; the program reads the BW database and SAP archive files and writes the data, together with an XML description file, to a data medium (CD, DVD, etc.)]
Diagram 117: Conceptual Design for the Data Export via CBW Data Export Interface
The data basis for the extract comprises the database of the BW system and any
existing archive data. Extracted data can either be output on the screen or stored
in a sequential file in the file system of the SAP BW system. In the latter case,
SAP AIS is used as the data format.
The extracted data is stored for Z3 access in the SAP standard format AIS (Audit
Information System). The data format is defined as follows:
Header lines (one per field of the extract):
1  Field name
2  Description
3  Field description
4  Data type (C, N, P, I, D, T, F): C = text; N = numeric text, left-filled
   with zeros; P = packed (number of digits = field length * 2 - 1) - this is
   the SAP internal type, and the format [-]digits[,|.digits] is used in the
   data records (example: type P, length 7, 2 decimal places results in
   -12345678901.23); I = integer; D = date in format DD.MM.YYYY; T = time in
   format HH:MM:SS; F = floating point number
5  Field length
6  Number of decimal places
7  Currency code or quantity indicator (F, W, M, E): F = currency amount,
   W = currency unit, M = quantity, E = unit of measure. A currency unit
   field always follows a currency amount field; a unit of measure field
   always follows a quantity field
8  Special field types (R, C, P, M): R = rank (statistics), C = counter
   (statistics), P = percentage (statistics), M = mean value (statistics)
Data lines (line 9 onwards): data records
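Reading the eight header lines for one field into a simple record can be sketched as follows. The line meanings follow the format description above; the class name, the helper function, and the sample values are illustrative assumptions, not part of the AIS specification:

```python
from dataclasses import dataclass

@dataclass
class AisField:
    name: str         # line 1: field name
    description: str  # line 2: description
    field_desc: str   # line 3: field description
    data_type: str    # line 4: C, N, P, I, D, T or F
    length: int       # line 5: field length
    decimals: int     # line 6: number of decimal places
    cur_qty: str      # line 7: F, W, M, E or blank
    special: str      # line 8: R, C, P, M or blank

    @property
    def packed_digits(self) -> int:
        # For type P: number of digits = field length * 2 - 1
        return self.length * 2 - 1

def parse_header(lines) -> AisField:
    """Build an AisField from the eight header lines of one field."""
    return AisField(lines[0], lines[1], lines[2], lines[3],
                    int(lines[4]), int(lines[5]), lines[6], lines[7])

# Hypothetical currency amount field of type P, length 7, 2 decimals:
fld = parse_header(["AMOUNT", "Amount", "Amount in doc. currency",
                    "P", "7", "2", "F", ""])
print(fld.packed_digits)  # → 13 digits, matching the example above
```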
It is an official SAP AG interface format and has been agreed between SAP AG
and the following suppliers of audit software for external and internal auditors:
• (IDEA) http://www.caseware.com
15.1.3 Security
When the data is exported to a sequential file, it leaves the SAP context.
Consequently, the SAP authorization concept does not apply. Protect the data from
unauthorized access.
The PBS CBW Data Export Interface is called up in transaction /PBS/BWEI. The
subfunctions of the menu tree, which is split up into two parts, generate extraction
programs (technical name /PBS/BWEI_Y01) and create XML description files for
Z3 data extracts (technical name /PBS/BWEI_Y02), which are described in
Chapters 15.2.1 and 15.2.2.
Diagram 118: Initial Menu PBS CBW Data Export Interface (transaction code
/PBS/BWEI)
All the functions implemented at the present time can be started from the central
initial screen. The screen is divided into five subareas, in which extraction
programs for the specified InfoProvider types can be created. The following data
sources are currently supported:
• InfoCubes
• DataStore objects
• Master data
• Texts
• Other tables
Diagram 119: Initial Screen PBS CBW Data Export Interface – Generate Data
Extracts
The functionality of the PBS CBW Data Export Interface in the context of the data
sources is described below. As the functional scope of InfoCubes and DataStore
objects as well as master data and texts is very similar, the explanations of the
data sources will be summarized:
An InfoProvider can be specified either via direct entry or by using the F4 Help.
The description, report name and report status are information fields that refer to
the data source and any existing standard report to extract data from the BW
database and the archives.
The description contains the info text maintained in the system for the selected
InfoProvider. A report name is only displayed if a standard extraction program
has already been created in the system. The same applies for the status of the
report, which is shown graphically by the traffic light control function.
In the current version, the report status only refers to the status of the
standard extraction program. This report is the report whose name has been
automatically assigned by the export interface.
a) The InfoProvider report does not exist yet. Using the template, a
temporary intermediate report can be generated which in turn dynamically
generates the extraction program and then executes it.
b) The InfoProvider report exists, but is older than the date of the
template: as described under 'a)', a new extraction program is created and
generated. The old report is then overwritten.
c) The InfoProvider report exists and is more recent than the template
date. The extraction program is executed directly. Optionally, it is possible to
force generation of the extraction program by pressing the key .
By pressing the button "Choose fields for selection", you can individually
maintain the selection fields of the report to be generated. As you can see from
the diagram, you also enter a report name in the modal dialog box:
Please note that a report of the same name can be overwritten both in
standard and user-defined extraction programs. If you want to prevent
overwriting, you should rename the existing report before generating a new
extraction program with the same name.
a) Data from SAP BW archive: If one or more archiving runs exist for a
specified InfoProvider, the extraction program can read data both from the
BW database as well as data from the archive files and include it in the
extract.
o The "PBS 1-2-3 parameter" determines the scope of the data
sources: data from both the database and the archive is taken
into account during selection.
d) Output option: Output can either be on the screen or in a file. When
exporting to a file, the mandatory parameters "File name" and "Directory" must
be maintained. The target directory must exist on the server of the SAP BW
system.
Please note that the data leaves the SAP context at the point of export
to the file system of the application server. Consequently, the SAP
authorization concept is ineffective for a Z3 data extract. Protect your data
from unauthorized access.
The data source of master data and texts can be named either directly or
indirectly using the integrated F4 Help, as in the InfoCubes and DataStore object
functions. The description, report name and report status are information fields.
The description contains the info text maintained in the system for the
specified data source.
A report name is only displayed if the extraction program has already been
created. The same applies for the graphical display of the report status.
Diagram 123: Example of a master data source. As can be seen from the
status of the traffic light, a standard extraction program does
not yet exist.
You create the report by entering the name of the data source and then
pressing the button "Generate/execute report".
In the same way as for InfoCubes and DataStore objects, the following cases
exist:
a) The extraction program for the data source does not yet exist. Using
the template, a temporary intermediate report is generated, which in turn
dynamically generates and executes the destination report.
b) The extraction program for the data source exists, but is older than the
date of the template: as described under 'a)', a new extraction program is
created and executed. The old destination report is then overwritten.
c) The extraction program for the data source exists and is more recent than
the date of the template. The destination report is executed directly.
b) Output option: Output can either be on the screen or in a file. When
exporting to a file, the mandatory parameters "File name" and "Directory" must
be maintained. The target directory must exist on the server of the SAP BW
system.
If the file name of the extract corresponds to an existing file in the destination
path of the server, the system asks in a confirmation prompt whether the
destination file should be overwritten.
Please note that the data leaves the SAP context at the point of export to the
file system of the application server. Consequently, the SAP authorization
concept is ineffective for a Z3 data extract. Protect your data from unauthorized
access.
• Other Tables
The desired table name is maintained either by direct input or by using the F4
Help, in the same way as for the data sources described above.
The description, report name and report status are information fields that refer to
the specified table and any existing standard report for extracting data from the
BW database and the archive data of the selected archiving object.
The description contains the info text for the selected table maintained in the
system. A report name is only displayed if a standard extraction program has
already been created. The same applies for the status of the report, which is
shown graphically by a traffic light control function.
In the current version, the report status only refers to the status of the
standard extraction program. This report is the report whose name is
automatically assigned by the export interface.
Diagram 125: Example of a Report for a Table Extraction with Archive Data
of the specified Archiving Object
The example shows those archiving objects defined for Table SBOOK that
can be found using the F4 Help and selected by double-click:
You create a standard report by entering the table name, and, optionally, by
entering a respective archiving object. By then pressing the button
"Generate/execute report" you create the desired extraction program.
a) The table report does not exist yet. Using the template, a temporary
intermediate report can be generated which in turn dynamically generates the
extraction program and then executes it.
b) The table report exists, but is older than the date of the template: as
described under 'a)', a new extraction program is created and generated. The
old report is then overwritten.
c) The table report exists and is more recent than the template date. The
extraction program is executed directly. If you change the archiving object,
you must first force generation of the extraction program by pressing the
button .
By pressing the button "Choose fields for selection", you can individually
maintain the selection fields of the report to be generated. You also enter a
report name in the modal dialog box.
Please note that a report of the same name can be overwritten both in
standard and user-defined extraction programs. If you want to prevent
overwriting, you should rename the existing report before generating a new
extraction program with the same name.
e) Data from SAP BW archive: If one or more archiving runs exist for a
specified table and the selected archiving object, the extraction program can
read data both from the BW database as well as data from the archive files
and include it in the extract.
o The "PBS 1-2-3 parameter" determines the scope of the data
sources: data from both the database and the archive should be
taken into account during selection.
h) Output option: Output can either be on the screen or in a file. When
exporting to a file, the mandatory parameters "File name" and "Directory" must
be maintained. The target directory must exist on the server of the SAP BW
system.
Please note that the data leaves the SAP context at the point of export
to the file system of the application server. Consequently, the SAP
authorization concept is ineffective for a Z3 data extract. Protect your data
from unauthorized access.
You call up the sub-functionality for generating XML description files for Z3 data
extracts in transaction /PBS/BWEI_Y02. Alternatively, you can select the second
entry in the menu tree of the PBS CBW Data Export Interface:
Important fields of the input screen for creating a description file are
described below using an example:
If the name of the description file corresponds to an existing file in the destination
path of the server, the system asks in a confirmation prompt whether the destination file
should be overwritten.
Estimated file size = (number of items × line width of the layout) + 2000 bytes for the SAP AIS header record
The file size of the program is limited to 600 MB to enable the data to be
transferred to a CD. If the data volume to be selected is greater, subsequent files
are created with name extensions Vxxxx (xxxx = sequential number).
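The size estimate and the 600 MB split can be sketched as follows. The formula and the Vxxxx naming come from the text; the helper functions are illustrative, and the exact shape of the suffix (here ".V0001") is our assumption, since the text only specifies "Vxxxx" with a sequential number:

```python
MAX_FILE_BYTES = 600 * 1024 * 1024  # 600 MB limit so extracts fit on a CD
HEADER_BYTES = 2000                  # SAP AIS header record

def estimated_size(items: int, line_width: int) -> int:
    # (number of items x line width of the layout) + 2000-byte header
    return items * line_width + HEADER_BYTES

def extract_file_names(base: str, items: int, line_width: int):
    """Return the file names of an extract, splitting into sequential
    Vxxxx files when the estimated size exceeds the 600 MB limit."""
    size = estimated_size(items, line_width)
    n_files = max(1, -(-size // MAX_FILE_BYTES))  # ceiling division
    if n_files == 1:
        return [base]
    return [base] + ["%s.V%04d" % (base, i) for i in range(1, n_files)]
```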
Once the generation of the extract is completed, the files from the Unix/Windows
NT server must be transferred to the PC environment in binary mode (ftp> bin).
The PBS CBW Data Export Interface can also be used for AS/400 systems. Transfer
must therefore be performed as follows:
From To Mode
Various options are available in the PC analysis program for checking the factual
consistency of the extracted data.
The CBW Data Export Viewer provides the option to make data extracts that were
generated for a tax audit using the CBW Data Export Interface available again in
the SAP system for evaluation purposes.
This can be useful if you would like to check the data before handing it over to the
auditor. In order to evaluate the dataset, it must be available in SAP AIS (Audit
Information System) format. In addition to the data from the Data Export Interface,
views from SAP DART and data from the SAP Audit Information System can also
be imported.
A separate evaluation program is generated in the SAP system – based on the file
and structure information of the dataset – for each dataset that is to be imported via
the Data Export Viewer. Because all information for the generation is only taken
from the AIS file, the current customizing settings of the existing SAP system do
not play a role. This means that it is also possible to process datasets from third-
party systems as long as the datasets are available in SAP AIS format.
The functions of the Data Export Viewer can be accessed via the CBW menu
under Data Export Interface > Export Viewer.
The check-in path contains the AIS files that should be used for evaluation
purposes in the SAP system. The contents of the check-in path must be
accessible from the SAP system. If the path is known, it can be specified
directly. Otherwise it can be located using the function "Find".
Here, the AIS files of the check-in path are provided for selection. The field
structures of the individual datasets can be selected specifically for display
via the function button in the column “Structure”. The files are selected
individually or collectively for further processing using the function key
“Select files”. In the column “Comment”, additional information can be
maintained for each file. This is then displayed on the selection screen
when generating the evaluation program. The files for the generation of the
evaluation programs are marked via the function key 'Adopt settings'.
3. Integrate authorization check
Executable programs are stored in the SAP database for the selected
extract data using this function.
5. Administration
All view programs that have been created are made available to the
administrator here for test purposes. The program can be started via the
function “Execute data view”.
On the selection screen, selection criteria are generated from the fields that are
contained in the extract data. The criteria can be used to restrict the scope of the
selection.
17.1 Overview
From SAP NetWeaver 7.0, it is possible to move data to a Nearline storage (NLS) system
in addition to using ADK-based archiving in the BW system. The PBS CBW NLS IQ uses
the analytics server Sybase IQ as high-performance Nearline storage. Sybase IQ
supports the operating system environments listed in Table 4.
* on request
17.2 Architecture
The data is moved to Sybase IQ during data archiving, and is deleted from the
database after successful verification. When data is selected, for example via
queries, it is read from the analytics server and provided via the NLS interface.
[Diagram: CBW NLS architecture - BI data is moved from the database via the data archiving process (DAP) and the CBW NLS services to the analytics server Sybase IQ; storage in an ADK archive is optional and recommended by PBS]
Archive modeling of InfoCubes and DataStore objects takes place using a data
archiving process (DAP) that is created with transaction RSA1. A decision is made
here on whether the data should be stored in the Nearline database only or
whether the archived data should also be stored in ADK files. For safety reasons,
we recommend storage in ADK.
[Diagram 134: A BW query on an InfoCube / DataStore object reads the database and the archive via the adjoined NearlineProvider and the PBS NLS service for ADK / IQ]
NearlineProviders, which are adjoined with the original InfoProviders, are used for
query access to the archive data via the NLS interface internally in the SAP
System, thus rendering the use of custom-defined VirtualProviders and
MultiProviders unnecessary (see Diagram 134). The selection of Nearline data is
controlled via the characteristics of the individual queries.
17.3 Procedure
The individual steps for moving the data to IQ Nearline storage are shown in
Diagram 135.
Connection
1. Register the exchange directories
2. Set up RFC connection to NLS IQ Interface
3. Create connection to CBW Nearline Service
Initial setup
1. Create data archiving process
2. Execute data archiving process
The complete data archiving takes place via the InfoProvider data administration as
explained in detail in chapter 2.6. Archiving is object-related, with a separate data
archiving process and a separate archiving object being defined for each InfoCube
or DataStore object. The archived data is transferred to the CBW NLS IQ interface
by the SAP archiving program. This interface transfers the data to the Sybase IQ
database. If the option "ADK-based archiving" is selected during creation of the
data archiving process, the data is also written to sequential files outside the
database.
The reorganized data can be evaluated directly from the Sybase IQ database only
after archiving is complete, that is, after the deletion run. A precondition is that the option
"Nearline storage should be read" is selected in the query characteristics. The load
relief on the BW database can be measured using the BW Database Analyzer, for
example.
Using CBW NLS IQ, archived data can be saved in the database system Sybase IQ
via the SAP Nearline storage connection. This enables the response time to be
improved further. The precondition is that a fully installed Sybase database
server is connected to the SAP NetWeaver BW system via the network.
The Customizing in the SAP NetWeaver BW system is described below.
A TCP/IP connection is created with transaction SM59 (see Diagram 136). Any
name can be selected. The name of the JAVA program that was started on the
Sybase IQ server is entered as the registered server program. We recommend that
you add an identification of the database instance and server to the name (in the
example PBSNLSIQ_3330@titan: database 3330 on server titan). The standard
gateway options are selected for "Start Type of External Program" and "CPIC
timeout". You enter the application server as the gateway host and the value
sapgw<system number> as the gateway service, where <system number> is the
system number from SAP Logon.
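The SM59 settings above can be summarized as a checklist. The server program name, the placeholder for the gateway host, and the system number are the examples from the text; the dictionary layout and variable names are ours, not an SAP or PBS format:

```python
# Illustrative summary of the SM59 TCP/IP destination described above.
sap_system_number = "33"  # example value, taken from SAP Logon

rfc_destination = {
    "connection_type": "T",                        # TCP/IP connection
    "registered_server_program": "PBSNLSIQ_3330@titan",
    "gateway_host": "<application server>",        # the BW application server
    "gateway_service": "sapgw" + sap_system_number,
}
print(rfc_destination["gateway_service"])  # → sapgw33
```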
You can select any name for the RFC destination; the default value is
"PBSNLSIQ".
To transfer the data correctly to the Nearline database during the archiving
process, a temporary directory on the application server and a directory on the
Sybase IQ server are required if no ADK files are written. If the ADK switch is
selected, only the directory on the Sybase server is necessary.
If you choose the method of transferring the data via a temporary exchange
directory, both the directory in the BW system and the directory on the Sybase IQ
server have to be made known to the system (Diagram 137) via the NLS monitor
(transaction /PBS/NLSA_MONITOR; see also Chapter 18). The corresponding entries are made
in the tab "Exchange directory". Please note that the directory has to be entered as
a logical path on the BI application server. This logical path name is defined using
the transaction FILE (Diagram 138). The directory name of the physical path on the
Sybase database server must end with a forward slash for UNIX and a backslash
for Windows.
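The trailing-separator rule for the physical path on the Sybase database server can be checked with a small helper. Only the slash/backslash rule is from the text; the function itself and the sample paths are illustrative:

```python
def ensure_trailing_separator(path: str, os_family: str) -> str:
    """The physical path on the Sybase database server must end with a
    forward slash on UNIX and a backslash on Windows (see above)."""
    sep = "/" if os_family == "unix" else "\\"
    return path if path.endswith(sep) else path + sep

# Hypothetical exchange directories:
print(ensure_trailing_separator("/sybase/exchange", "unix"))      # → /sybase/exchange/
print(ensure_trailing_separator("D:\\sybase\\exchange", "windows"))
```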
The paths must be created with the respective authorizations in the servers.
If you choose the method to process the data in parallel, only the directory on the
Sybase IQ and no exchange directory in the BW system is required. The number of
additional batch or dialog processes available should be entered in the field “No. of
parallel processes” (generally batch processes as the processing is normally
performed in the background).
You must select one of the two methods; in other words, either a directory in the
BW system or the number of parallel processes has to be specified.
If no directory has been indicated while at the same time the number of parallel
processes equals 0, the archiving procedure terminates.
If the number of parallel processes specified is greater than 0, the data is always
processed in parallel processes, independently of whether an exchange directory
in the BW system is specified.
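The selection rule above can be sketched as a small validation routine. The precedence of parallel processes over the exchange directory and the termination case are from the text; the function and its return values are illustrative:

```python
def check_transfer_method(exchange_dir: str, parallel_procs: int) -> str:
    """Sketch of the rule above: either an exchange directory in the BW
    system or a number of parallel processes > 0 must be specified.
    Parallel processing always wins when the count is greater than 0."""
    if parallel_procs > 0:
        return "parallel"
    if exchange_dir:
        return "exchange_directory"
    # No directory and 0 parallel processes: archiving would terminate.
    raise ValueError("no transfer method specified")
```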
The data is saved on the Sybase database under a database user that is assigned to the
respective BW system. The user name is composed of the string "PBS" and the SAP
NetWeaver BW system ID. If the system ID of the SAP NetWeaver BW system is BW7,
for example, the database user PBSBW7 must exist on the Sybase database. The
database user can be created with the tool scjview in the Sybase IQ database.
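The naming rule for the database user can be sketched as follows (the 'PBS' prefix rule and the BW7 example are from the text; the helper function is illustrative):

```python
def nearline_db_user(bw_system_id: str) -> str:
    """Database user on Sybase IQ: the string 'PBS' followed by the
    SAP NetWeaver BW system ID (rule from the text above)."""
    return "PBS" + bw_system_id.upper()

print(nearline_db_user("BW7"))  # → PBSBW7
```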
If these tasks have been executed, a database archiving process can be created and
archiving can be started.
An entry in the table RSDANLCON must be created so that the Nearline connection can
be recognized by the SAP archiving programs. This is carried out via a maintenance
dialog with transaction SM31 as of BW support package 14. Before this, the transaction
SE16 must be used. Any connection name can be selected (for example: CBW_IQ),
whereas the class name is defined with /PBS/CL_NLSA_CONNECTION. The RFC
connection name that was maintained with SM59 (in the example
PBSNLSIQ_3331@TITAN) is entered under Destination. The table entry is shown in
Diagram 140.
The connection to the CBW Nearline service is set up after the entry has been added to
table RSDANLCON.
The Nearline connection that was just created can now be used when you create a data
archiving process with transaction RSA1. You have to decide whether the data is written
to the Nearline database (Sybase IQ) only or whether the ADK files should also be
created. Both methods (with or without ADK) are possible but an ADK archiving is
preferable for safety reasons.
The creation of a data archiving process is described in detail in Chapter 2.4. The
execution of the archiving process is explained in Chapter 2.6.
Diagram 141: General settings of the DAP in NLS without ADK
If queries contain master data attributes or hierarchies the runtime can be reduced
substantially if master data and hierarchies are saved as a copy in the nearline database.
The cube data can be linked by nearline database means (table joins) if a current copy is
available. Otherwise the attributes and hierarchy data have to be read individually in the
SAP system which takes much more time. Moreover, it puts much more load on the
interface when the number of transferred data records is increased.
The situation without the current snapshot is displayed in Diagram 142. Characteristics,
key figures, and basic characteristics are transferred via the interface.
[Diagram 142: characteristics, key figures, and basic characteristics are transferred via the interface; master data attributes, hierarchy nodes, master data, and the hierarchy structure have to be resolved in the SAP system]
The correct result is already compiled for copied master data and hierarchies in the
nearline storage. Only the required data is transferred via the interface (Diagram 143).
[Diagram: an SQL command joins the InfoProvider data with the master data and hierarchy structure in the nearline database; only characteristics, key figures, master data attributes, and hierarchy nodes are transferred via the interface]
Diagram 143: A copy of the master data and hierarchies is in the nearline storage Sybase IQ
To use this functionality, the InfoProvider that was saved in the nearline storage must
additionally be read via a VirtualProvider.
If you select the pushbutton "Snapshot master data/hierarchies", you navigate to the
administration transaction for loading master data and hierarchies in the CBW Cockpit
(transaction /PBS/CBW).
The tab "Master data" (Diagram 144) displays an overview of the characteristics that
have already been loaded or that are defined for loading. The hierarchies that have been
loaded or defined for loading are displayed in the same way on the tab "Hierarchies".
The list can be extended using the pushbuttons "New characteristic" and "New hierarchy".
Doing this creates a Data Snapshot Process (DSP), which means that the tables
necessary to hold the data are created in the nearline database.
A DSP can be removed again by right-clicking it and selecting "Delete DSP" from the
context menu. Since the tables are deleted from the nearline database during this
process, the data is lost as well. If you only want to delete the data and keep the tables,
select "Delete nearline data".
After the DSP has been created, master data or hierarchies can be copied to the nearline
database via the option "Load nearline data". Because of the longer runtimes involved,
this procedure is carried out in the background. The background job can be monitored
using the button "Job".
After the job has completed, the fields "NL request" and "Last load run" are updated in
the list, and the traffic light in the column "Current" turns green. If the master data or
hierarchies have changed compared to the loaded copy, the traffic light turns yellow.
The loaded copy is only used in queries if the traffic light is green.
Using "Display nearline data" you can branch to the transaction /PBS/NLSA_SE16MD,
which is described in chapter 17.9.3.
The column "DSP active" shows whether the snapshot data that was built should be used
during the access. The usage can be activated (green LED) or deactivated (yellow
LED) by selecting the button.
If several sets of master data are to be updated within one job, you can use the button
"Selection mode individual/multiple proc." to switch to multiple processing mode. It is
then possible to select several characteristics by holding the CTRL key. If you select the
pushbutton "Generate snapshots for several objects", the selected characteristics are
updated in the background.
The copied master data or hierarchies are displayed as an ALV list (in the same way as
in the SAP transaction SE16) using the PBS NLS Browser, transaction
/PBS/NLSA_SE16MD. By selecting the individual fields you can define the field
composition of the list, and you can restrict the result using "From value" and "To value"
(Diagram 145).
Depending on the setting of the button "time-depend.", either the time-independent part
of the master data (P table) or the time-dependent part (Q table) is displayed. With "Only
calculate runtime" the list display is suppressed and an evaluation of the runtime is shown
instead. The evaluation can also be called via the button "Selection statistics" in the
displayed list.
Access to an external system
If you want to read data in an SAP system that was transferred to the nearline database
from another system, for example, if a test system should access data that a productive
system saved in the NLS DB, you can define the other system as an external system and
then enable access to the data that was outsourced to the NLS database.
As Diagram 146 shows, archived data and transaction data snapshots can be read from
the external system, provided that the corresponding data archiving process (DAP) for
archived data, and for transaction data snapshots the corresponding data snapshot
process (DSP), is available in both the current and the external system.
The master data is always read from the current system and never from the
external system.
18.1 Overview
The NLS monitor displays the details for the nearline connection that was set up
using transaction SM59 (see chapter 17.7). If several nearline connections exist, a
selection screen is displayed when the NLS monitor is called (Diagram 147). The
required nearline connection to the Sybase IQ database can be selected in this
screen. Via the pushbutton or the menu option "NLS Monitor -> Selection NLS
database" you can switch between the different NLS connections (if available).
The status of the NLS database and the NLS interface is displayed using traffic
light symbols in the upper part of the NLS monitor overview screen (Diagram 148).
If either the database or the interface is not active, a yellow or red traffic light is
displayed instead of a green one. This means that you can see at a glance if the
NLS database and NLS interface are available.
The nearline connection that has been selected is displayed again in the area
'Nearline connection'.
This tab (Diagram 149) provides an overview of the technical details of the Sybase
IQ database instance and allows a direct insight into Sybase IQ from the BW
system.
All further available details that are not in the overview can be displayed via the
pushbuttons "DB Connections", "DB Options" and "DB Details".
The details for the NLS interface are also displayed in the NLS monitor in the same
way as the details for the NLS database (Diagram 150).
This tab displays the data archiving processes that have already been executed.
The upper half of the screen shows a list of the data archiving processes; you can
restrict the scope of the list by InfoProvider (field "InfoCube/DSO") and by the
status of the archiving process.
In the lower half of the screen, further details are displayed after a double-click on a
list line or via the button "Archiving Session Detail". They contain information on the
tables created in the NLS database and their indices, as well as the number of table
entries and the space requirement on the NLS database Sybase IQ. In particular,
the data compression ratio is displayed; it represents the table size on Sybase IQ
relative to the raw data size from the SAP system (field "Compression to %").
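Read this way, "Compression to %" states how much of the original raw data volume remains after compression. A small hedged sketch of this arithmetic (the exact formula used by the monitor is an assumption):

```python
def compression_to_percent(raw_size_bytes, iq_size_bytes):
    """Table size on Sybase IQ relative to the raw data size from the
    SAP system, expressed in percent (assumed interpretation of the
    field "Compression to %")."""
    return 100.0 * iq_size_bytes / raw_size_bytes

# 10 GB of raw SAP data stored in 1.5 GB on Sybase IQ
ratio = compression_to_percent(10 * 1024**3, 1.5 * 1024**3)  # 15.0
```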
You can navigate directly to the PBS Data Browser via the pushbutton "Data
Browser" (transaction /PBS/NLSA_SE16N).
It is possible to load data from ADK files directly into Sybase IQ using the PBS
archive add on CBW NLS IQ. These load runs are displayed here in exactly the
same way as when displaying via the tab "DAP Nearline".
Using the button "Data Browser" you can navigate to the PBS NLS Browser –
transaction /PBS/NLSA_SE16A.
It is also possible to navigate from here to the PBS NLS Browser via the
pushbutton "Data Browser".
The buttons "Load master data", "Load hierarchies", and "Manage transaction data
snapshots" (or "Load snapshots") take you directly to the respective administration
transaction that can be used to create and manage the snapshots.
Diagram 152 shows the tab “Snapshot hierarchy” as an example.
The traffic light symbol in the column "Current" indicates whether the data that was
loaded into Sybase IQ is current, meaning synchronized with the data located in
the online database.
This tab provides details about PSA tables that have been archived on a request
basis. Here, you can also call the corresponding administration transaction.
Details for data snapshot processes of arbitrary tables are displayed here.
So-called query plans are generated during the selection of data from Sybase IQ.
They show exactly how the data was accessed. They are used for analysis
purposes and can be displayed via the button "Display query plan" or a double-click
on an entry in the overview list (Diagram 153), and saved as a local file.
The Index Advisor (Diagram 154) is another function that Sybase IQ provides to
analyze and monitor accesses. It determines and displays which additional indices
might improve access to the data. An index that the Index Advisor suggests can be
created in Sybase IQ directly from the NLS monitor using the button "Create index".
If in doubt, contact PBS before doing so.
This tab (Diagram 155) complements the Index Advisor and indicates which indices
are actually used or not used during the data selection. If necessary, unused
indices can be deleted from IQ again.
Analogous to the tab described in the previous section, this tab lists which of the
tables loaded in the NLS database are actually used and which are not.
Like the NLS database, the NLS interface also provides tools for analysis and
monitoring.
In particular, entries for all activities of the NLS interface are written to log files
that are listed on the tab "Logging". A log file can be displayed in a separate
window, searched, and, if necessary, saved as a local file via a double-click or via
the button "Display log file".
The currently selected log level is shown in the field "current log level". If required
(for example, for a more detailed analysis), the log level can be changed via the
button "Change log level".
In addition, the existence of the exchange directories can be verified, and the
contents and the remaining free space of the directory on the application server can
be checked and monitored ("File system Snapshot").
Two additional functions are provided for the exchange directory on Sybase IQ:
When loading data into Sybase IQ, the data is first written to a file in the specified
directory and then loaded into the nearline database. The maximum size of this file
is defined in the field "Maximum file size in MB". In principle, the larger the file, the
faster the load process, although the available space in the exchange directory has
to be taken into account when defining the file size.
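The effect of the maximum file size can be illustrated with a bit of arithmetic: the larger each intermediate file, the fewer load files (and load cycles) a given data volume requires. The sketch below is illustrative only and does not reflect the internal file handling of the NLS interface:

```python
def load_files_needed(num_records, record_size_bytes, max_file_size_mb):
    """Number of intermediate load files needed for a data volume when
    each file in the exchange directory may grow to at most the value of
    "Maximum file size in MB". Illustrative arithmetic only."""
    max_bytes = max_file_size_mb * 1024 * 1024
    records_per_file = max(1, max_bytes // record_size_bytes)
    # ceiling division: the last file may be only partially filled
    return -(-num_records // records_per_file)
```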
If you activate the option "Direct loading/NFS Mount", the data is written to the file
in the exchange directory directly from the SAP application server instead of via the
NLS interface. The precondition for this is an NFS mount from the SAP application
server to the server on which Sybase IQ is installed. The advantage of direct
loading is that the load process is accelerated even further; in addition, the load on
the RFC connection, which might be a bottleneck during loading, is reduced.
The settings on this tab are used for load distribution and for downtime protection
when reading and writing NLS data.
The prerequisite for this functionality is that several interface instances have been
set up that communicate with different SAP application servers via several
gateways, and that RFC connections to these interface instances have been
created as described in chapter 17.4 of this manual or in chapter 3.3.8 “Setup of an
RFC connection in the SAP BW system” of manual part B-1: Installation PBS CBW
NLS IQ.
The connection method determines whether and in which way a load balancing
should be performed. The following three connection methods are available
(Diagram 162):
1 – Off
2 – Random
3 – Hosts
The RFC connections and the application servers whose gateways are used for the
connection to the NLS database are entered in the table "Connections for Load
Balancing".
The logical destinations are added to the list of connections analogously to the
maintenance of the external systems described below (see diagram 28f). An F4
help is available for selecting the logical destination; it offers the TCP/IP
connections created via transaction SM59. The name of the server whose gateway
is used for the data transfer should be entered in the field "Application server".
1 – Off
If you choose the option "1 – Off", no load balancing takes place: the NLS data is
written and read solely via the logical destination shown in the field “Logical
destination” in the header area of the NLS monitor (see Diagram 148). The table
“Connections for Load Balancing” can remain empty.
2 – Random
With option "2 – Random", the data transfer is distributed randomly across all
logical destinations, that is, the system's own logical destination mentioned above
and all additionally entered destinations (Diagram 164).
3 – Hosts
With option "3 – Hosts", in contrast, only those RFC connections are used for
reading and writing NLS data that are assigned to the current application server
(Diagram 165). The specification of the application server is mandatory because it
is required to determine the gateway. In this way, you can control via which
gateway the data transfer is performed.
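The three connection methods can be sketched as a simple destination-selection routine. This is a minimal illustration under stated assumptions: the actual selection logic of the NLS interface is internal, and the fallback to the own destination when no entry matches the current application server is an assumption:

```python
import random

def pick_destination(method, own_destination, balancing_entries, current_app_server):
    """Sketch of choosing a logical destination for the connection methods
    "1 - Off", "2 - Random" and "3 - Hosts". balancing_entries is a list of
    (logical_destination, application_server) pairs from the table
    "Connections for Load Balancing". Illustrative only."""
    if method == "1 - Off":
        # no load balancing: always the system's own logical destination
        return own_destination
    if method == "2 - Random":
        # random choice over the own destination and all entered ones
        candidates = [own_destination] + [dest for dest, _ in balancing_entries]
        return random.choice(candidates)
    if method == "3 - Hosts":
        # only destinations assigned to the current application server;
        # falling back to the own destination is an assumption of this sketch
        candidates = [dest for dest, server in balancing_entries
                      if server == current_app_server]
        return random.choice(candidates) if candidates else own_destination
    raise ValueError("unknown connection method: " + method)
```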
18.2.16 Customizing
Several Customizing options are available to make working with the NLS monitor
more convenient.
When several NLS database connections are operated, it may be desirable for
different users to monitor different NLS database connections with the NLS monitor.
A preferred connection can be entered per user in the list "Preselection NLS
database connection", together with a validity end date. When the NLS monitor is
started, the selection screen for the NLS connection (see Diagram 147) then no
longer appears; instead, the NLS monitor is called directly for the entered
connection. Even if you define a preferred connection, you can display the data of
any other NLS database connection (if available) via the menu option NLS monitor
-> Selection NLS database or the button "Select NLS database (F9)".
For runtime reasons, the current status of the snapshots of master data,
hierarchies, and InfoCubes/DSOs is not displayed by default on the respective tabs
(see chapter 18.2.5). If you wish to display the column “Current” in the overview list
as shown in Diagram 152, it can be activated by setting a checkmark in the area
“Current status of snapshot” on the Customizing tab.
As of release SAP NetWeaver 7.3, nearline requests can be deleted in the SAP
standard, provided that this option is also supported by the supplier of the nearline
interface. PBS supports the deletion of nearline requests once this functionality has
been activated. The necessary settings for the activation are made on the
Customizing tab in the NLS monitor.
Diagram 167: Additional Customizing Option from Rel. SAP NetWeaver 7.3
If the deletion of NLS requests has been activated as described in Diagram 167,
the option “Delete Near-Line Request” is shown in the maintenance dialog for
archiving requests (Diagram 168).
Correspondingly, the option is not displayed if the deletion has not been enabled
(Diagram 169).
This Customizing option is only available in the NLS monitor as of release SAP
NetWeaver 7.3.
This tab is only displayed if the NLS monitor is called for a pure read connection
(see also chapter 17.10, page 184).
For a pure read connection, the NLS interface is operated in read-only mode
(Diagram 170). Setting up the read-only mode is not covered in this manual;
corresponding notes can be found in manual part A: Installation.
Diagram 171: Operation of the NLS interface in read-only mode, value of the
parameter "perm. read-only Mode" is "On"
A pure read connection is advisable if you want to read data in an SAP system that
was transferred to the nearline database from another system, for example, if you
want to access data that was transferred from the production system to the NLS
DB (see also chapter 17.10, page 184).
As a first step, you have to make the name of the external SAP system known.
You can then additionally define for individual InfoProviders whether the data
should be read from the external or the original system.
New entries can be added using the pushbutton "Append Row" (Diagram 174).
Corresponding pushbuttons are also available to delete rows and save the settings.
The system ID of the external SAP system is entered in the field "SAP system ID".
If snapshots of master data and hierarchies are created in the logon system (see
chapter 17.9) and are to be used to accelerate the query accesses, the NLS
database connection that was used to copy the data to the nearline database must
be entered in the field "Write/Read connection". An F4 help is available for this
field.
Please note that the target of the pure read connection and of the write/read
connection must be the same, meaning that the archived data/transaction data
snapshots and the master data/hierarchy snapshots must be in the same database.
If no master data and hierarchy snapshots are created and the master data and
hierarchies of the logon system should be read, the field "Write/Read connection"
should remain empty.
If the data of the external system should generally be read, the checkbox
"preferred" should be marked. Several external systems can be added, but only
one of them can be marked as preferred.
In addition, exceptions to this can be defined in the list "Specify access to external
system (optional)": for individual InfoProviders you can define whether the data
should be selected from the logon system or from the external system. For this, the
respective InfoProvider is entered in the list (Diagram 176).
Conversely, if a system was marked as preferred in the list "Connection to external
system", the data of individual InfoProviders can still be selected from the logon
system by marking the field "Original system" for that InfoProvider in the
specification list (Diagram 178). In this way, data from different systems can be
selected for test purposes.
Diagram 178: Data of the external system B71 is selected (including master
data/hierarchy snapshots); for the InfoProviders ZCCA_C11D and ZCCA_C11N,
data of the original (logon) system is read
The displayed list shows which query or transaction accessed which InfoProvider.
19 NLS Watchdog
Another tool for monitoring the NLS database Sybase IQ is the NLS Watchdog. It is
called via the transaction code /PBS/NLSA_WD or by executing the program
/PBS/NLSA_WATCHDOG.
The NLS Watchdog monitors the availability and specific attributes of the NLS
database. In the case of an incorrect status, an email is sent automatically to
selected recipients or an entry is written to a log file.
A precondition for using the NLS Watchdog is the use of SAPconnect. Please refer
to the corresponding SAP documentation for all details regarding the integration
and configuration of SAPconnect, since these topics are not covered in this manual.
19.1 Overview
Using the NLS Watchdog you can monitor, on the one hand, the availability of the
NLS interface and the NLS database as well as the fill level of the NLS database
and, on the other hand, the current status of the snapshots loaded into the NLS
database (Diagram 180).
It is mandatory to enter the name of the nearline connection. The check of the
further parameters – availability of the connections for load balancing, fill level of
the nearline DB, and current status of snapshots – is optional; if these parameters
are not filled, the corresponding check is not carried out. The program should be
scheduled as a periodic batch job.
As soon as the NLS Watchdog detects an incorrect status, meaning that the
nearline connection or one or more connections for load balancing are not
available, the specified fill level of the nearline database has been reached, or
snapshots are not current, an NLS alert message is sent to the selected mail
recipients or an entry is written to the log file.
In the following section, the individual parameters and their meaning are explained.
This is a nearline connection that was maintained via the transaction SM59 and
entered in table RSDANLCON. Via the F4 help you can select from the available
nearline connections.
The NLS Watchdog checks both the availability of the NLS interface and the
availability of the NLS database.
If the NLS interface is not active, no further checks can be carried out, and an NLS
alert message is sent or an entry is created in the log file.
If the interface is active, the availability of the NLS database is checked next. If the
database is not active, no further checks can be carried out either; in this case, an
NLS alert message is also sent or an entry is created in the log file.
The fill level of the NLS database and the current status of the snapshots can only
be checked if both the NLS interface and the NLS database are active.
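The check cascade described above (interface first, then database, then the optional checks) can be sketched as follows. This is an illustration of the described order only, not the actual watchdog implementation; the message texts are assumptions:

```python
def run_watchdog_checks(interface_active, database_active, optional_checks):
    """Sketch of the check cascade: if the NLS interface is down, no
    further checks run; if the interface is up but the database is down,
    the optional checks are skipped as well. optional_checks is a list of
    (name, check_fn) pairs where check_fn returns an error message or
    None. Illustrative only; message texts are assumed."""
    if not interface_active:
        return ["NLS interface not active"]
    if not database_active:
        return ["NLS database not active"]
    alerts = []
    for name, check in optional_checks:
        message = check()
        if message is not None:
            alerts.append(name + ": " + message)
    return alerts
```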
If RFC connections that should be used for load balancing (see 18.2.15) have been
entered on the tab “Load Balancing“ in the NLS monitor, their availability can
optionally be monitored via the NLS Watchdog. To do this, activate the checkbox
“Check connections for Load Balancing”. The RFC connections to be monitored are
selected via the button “Selection”.
The NLS Watchdog determines the current fill level of the nearline database
(parameter "Main IQ Blocks Used" of Sybase IQ) and compares it with the value in
the field "Warning from %". If the fill level has reached or exceeded the specified
threshold value, an NLS alert message is sent or an entry is written to the log file.
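The threshold comparison itself is straightforward; a minimal sketch (the alert wording is an assumption, and the comparison is inclusive because the text says "reached or exceeded"):

```python
def fill_level_alert(main_iq_blocks_used_pct, warning_from_pct):
    """Return an alert text if the current fill level of the nearline
    database (Sybase IQ parameter "Main IQ Blocks Used") has reached or
    exceeded the threshold in the field "Warning from %", else None.
    The message wording is illustrative."""
    if main_iq_blocks_used_pct >= warning_from_pct:
        return ("NLS database fill level %.1f %% has reached the warning "
                "threshold of %.1f %%"
                % (main_iq_blocks_used_pct, warning_from_pct))
    return None
```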
If these parameters are marked, the NLS Watchdog checks whether the data that
has been loaded into Sybase IQ with the so-called data snapshot processes is
up-to-date, meaning synchronous with the data in the online database. The current
status of master data, hierarchy, and InfoProvider snapshots can be checked
independently of one another.
If the program detects a non-current snapshot, an NLS alert message containing
the name of the non-synchronous snapshot is sent, or a corresponding entry is
created in the log file.
The list of possible mail recipients can be maintained in this area of the selection
screen. For this, you can call the maintenance dialog of table /PBS/NLSA_AREC
via the pushbutton "Maintenance".
The NLS Watchdog only accepts external email addresses; it is not possible to
send internal messages to SAP users within an SAP system.
The external mail addresses of SAP users can be determined via the F4 help of the
field "E-Mail Address" if the addresses are entered in the user master (transaction
SU01).
The list of possible addressees is displayed when you click the button "Selection".
The selected addresses are transferred to the program as mail recipients via
"Transfer". The pushbutton "Remove" deletes all selected entries from the recipient
list.
Diagram 189: List of selected Mail Recipients with the Option to send a Test Mail
If mail recipients were selected, they are also displayed in the selection screen.
The value in the field "Send again after (hhmmss)" defines the minimum period of
time after which the same NLS alert message is sent again.
The sender of an NLS alert message is the user who executed the NLS Watchdog.
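This resend suppression can be sketched as follows. The hhmmss decoding and the inclusive comparison are assumed interpretations of the field:

```python
def should_resend(last_sent_epoch, now_epoch, send_again_after):
    """Decide whether the same NLS alert may be sent again.
    send_again_after is the value of "Send again after (hhmmss)" as a
    six-character string, e.g. "003000" for 30 minutes. Illustrative
    interpretation of the field only."""
    hours = int(send_again_after[0:2])
    minutes = int(send_again_after[2:4])
    seconds = int(send_again_after[4:6])
    min_interval = hours * 3600 + minutes * 60 + seconds
    return (now_epoch - last_sent_epoch) >= min_interval
```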
Diagram 191: Maintain File Name and Storage Path for Log File
As an alternative or in addition to the NLS alert messages sent by email, the NLS
Watchdog can write error messages to a log file.
The log file is created and updated if the field “Create/Write log file” has been
selected and a logical path has been entered in the field “Logical file path”. It is not
possible to enter and use a physical path directly.
Any name can be entered in the field “File name”, but the length of the file name is
limited to 32 characters. If the field remains empty, the name
"WATCHDOG_LOGFILE.TXT" is used as the default value.
The field “Language” determines the language in which the messages are written
to the log file. In order to avoid umlauts or special characters, it is recommended to
always issue the messages in English (default value).
In order to create and update files, the corresponding access authorizations to the
file path have to be ensured.
If the field “Display log file” is selected, the complete content of the log file is listed
after the program has been executed.
The log file is always set up as a text file. Each message is written in a new line.
The messages are created according to the following pattern:
Each check has its own error code. The following error codes can appear:
The OK message for error codes 04, 05, and 06 is not issued until all snapshots of
the respective category are up-to-date.
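The exact line pattern of the NLS Watchdog is shown in Diagram 192 and is not reproduced in this extract. As an illustration only, appending one message per line to a plain text log could look like the following sketch; the timestamp/code/message layout is an assumption of this sketch, not the documented pattern:

```python
import datetime

def append_log_line(path, error_code, message):
    """Append one watchdog message to the text log file, one message per
    line. The actual line pattern of the NLS Watchdog is not reproduced
    here; the timestamp/code/message layout is an assumed example.
    Messages are kept in English, so plain ASCII suffices."""
    stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(path, "a", encoding="ascii") as logfile:
        logfile.write("%s  %s  %s\n" % (stamp, error_code, message))
```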
Diagram 192: Extract of a Log File with Error Messages and OK Messages
The log file cannot be deleted via the NLS Watchdog; it has to be deleted using
operating system tools.