
SAP NetWeaver for Retail:

Successful project conclusion for XI - POS


Data Management Volume Test

The project "Volume Test XI / POS Data Management" has been completed
successfully. The project was carried out in cooperation with Hewlett-Packard. The
objective was to measure the performance of an integrated Point of Sales
process, exemplary of the requirements of a large retail enterprise, using SAP's
current solution range for retail customers.

The overall scenario included SAP Exchange Infrastructure, SAP Business
Information Warehouse with SAP POS Data Management (consisting of the POS
Inbound Processing Engine (PIPE) and POS Analytics), and SAP for Retail. The
POS sales were made available in an XML format closely derived from the
guidelines of ARTS (Association for Retail Technology Standards). For the first
time, a retail-specific NetWeaver scenario was subjected to a precise performance
check in the form of a stress test. From the current SAP NetWeaver '04 release,
Application Server 6.40 and Exchange Infrastructure 3.0 were used, along with
Business Information Warehouse 3.5.

The Technical Retail Consulting unit of the EMEA Retail hub was responsible for
the project. The IBU Retail & Wholesale, which is responsible for the
development of SAP POS DM, and XI Development actively supported the project
execution and have incorporated the results in their further development
activities.

In this document we would like to explain the project as follows:
1. Objectives of XI / POS DM Volume test
2. Description of the overall scenario
3. Hardware used
4. Results
5. Lessons learned / conclusion




Final_Project_Report_XI_POS_Data_Managment.doc Page 2 of 11
1 Objectives of the XI / POS DM volume test

The project had the following primary goals:

- Mapping large processing volumes at the Point of Sales with acceptable
  hardware costs
- Verification and refinement of the sizing already formulated for the
  solution in a concrete system environment
- Support for ROI / TCO considerations for better positioning of the solution
  in the market
- Volume test of an initial ready-to-run retail-specific solution in the SAP
  NetWeaver environment
- Support of standardization efforts in retail (mapping the ARTS XML
  standard)
- Creation of a prototype for the use of SAP XI as a POS converter in the
  retail environment



Fig.: XI / POS DM scenario with solution components






The focus of the stress test was on the Exchange Infrastructure (XI) as a POS
converter and on POS Data Management, the solution for the POS business
process in retail. The sales data from the POS was made available in the form of
an XML file whose format was closely aligned with the ARTS standard
(Association for Retail Technology Standards). During the stress test, content
was developed for XI.

SAP is currently working on the certification of the ARTS content in XI. This
certification will be performed by the ARTS certification board.



Fig.: XI Retail POS Log Input Data



In the POS Data Management solution (POS DM), the provision of data to the
Business Information Warehouse and the aggregated provision of data to the
SAP Retail System were the primary considerations.

During the stress test it was demonstrated that even a very large amount of
receipt-specific sales data - in the form of XML messages - can be processed
quickly on the servers provided by the hardware partner HP. For XI 3.0 and BW
3.5 (as the basis for POS DM), two ItaniumII servers from HP were used in
various testing combinations. The retail system was installed on an industry-
standard PA-RISC server.


The volume test is not to be considered a benchmark. The point of the project
was not to achieve the best possible throughput on the highest-performance
hardware that a partner has on the market, but to demonstrate that excellent
throughput can be achieved on cost-efficient hardware. For the first time, SAP XI
was used as a POS converter, which is certainly a typical application for an
Enterprise Application Integration (EAI) tool.

2 Description of the overall scenario

The goal during the definition of the test scenario was to re-create a realistic
environment as closely as possible. The guideline was the "most common case"
principle: for example, the articles to be processed were divided into two
categories that appear in the POS data with the frequency of their real-world
occurrence. The article groups fast sellers and slow sellers were mapped and
appeared in the POS data with different probabilities. This data allocation
deliberately avoided a distribution of the test data that would be optimal in
terms of performance (above all regarding data buffering), in order to ensure
realistic test cases. Data from up to 2,000 different stores was processed, and
the size of the XML messages was varied during the testing. An average of
approx. 10 sales items per receipt was assumed, which in practice is actually
somewhat high, depending on the characteristics of the business field - normally
the receipts of a retail enterprise average fewer items.
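The skewed test-data distribution described above can be sketched roughly as follows. The 80/20 split and the pool sizes are illustrative assumptions for this sketch, not figures from the test:

```python
import random

# Hypothetical sketch of the test-data skew: a small pool of fast sellers
# supplies most receipt lines, a long tail of slow sellers the rest.
FAST_SELLERS = [f"EAN-F{i:04d}" for i in range(200)]     # few articles, sold often
SLOW_SELLERS = [f"EAN-S{i:05d}" for i in range(20_000)]  # many articles, sold rarely

def generate_receipt(store_id, items_per_receipt=10):
    """Build one simulated receipt with the skewed article distribution."""
    lines = []
    for _ in range(items_per_receipt):
        # Assumed skew: 80 % of lines draw from the small fast-seller pool,
        # so the same articles recur frequently and data buffering is not
        # artificially favoured or penalised.
        pool = FAST_SELLERS if random.random() < 0.8 else SLOW_SELLERS
        lines.append({"ean": random.choice(pool), "quantity": 1})
    return {"store": store_id, "items": lines}

receipt = generate_receipt(store_id=42)
```

Because the fast-seller pool is small, repeated draws hit the same articles often, which mirrors the buffering behaviour the test wanted to keep realistic.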

After the generation of the XML messages (through the simulation of a POS cash
register system that supports the ARTS standard), the receipt-specific sales data
was stored in the transaction database of the POS DM (PIPE) via XI. The XML
files of the sales data included information about the articles sold, their EAN
numbers and quantities, and the payment method (cash, credit card, etc.). The
technical transfer of the data to POS DM was done either with a Remote Function
Call / BAPI call (RFC adapter of the J2EE Engine) or by addressing POS Data
Management as a web service via the HTTP protocol (ABAP proxy). Addressing it
as a web service was not included in the standard solution at the time of the
test but was built as a prototype as part of the project. POS DM Development
plans to make this functionality available in the next release.

During the downstream processing of the cash register data on the basis of the
data saved in the receipt database (or transaction database), a check of the
master data takes place. For example, the existence of the article or the EAN
number sent by the cash register system is verified. This master data check is
necessary so that only consistent data is transferred to the target system (e.g.
SAP BW or SAP Retail). The POS DM solution was therefore used as the
consolidation layer for the operational POS data. The POS DM standard also
offers user exits (BAdIs) that can be used to implement project-specific checks
(e.g. business rules for the Sales Audit business process).
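The consolidation step described above can be sketched as follows. The in-memory dict stands in for the real master-data lookup, and the `extra_checks` hook mimics the role of the project-specific BAdI exits; all names and structures here are illustrative assumptions:

```python
# Assumed master-data table: EAN -> article description.
MASTER_DATA = {"4012345678901": "Article A", "4098765432109": "Article B"}

def validate_lines(lines, extra_checks=()):
    """Split receipt lines into consistent and rejected ones."""
    ok, rejected = [], []
    for line in lines:
        if line["ean"] not in MASTER_DATA:          # article/EAN must exist
            rejected.append((line, "unknown EAN"))
        elif not all(check(line) for check in extra_checks):
            rejected.append((line, "business rule failed"))
        else:
            ok.append(line)                          # consistent -> forward to BW/Retail
    return ok, rejected

# A project-specific rule plugged in like a BAdI: quantities must be positive.
ok, bad = validate_lines(
    [{"ean": "4012345678901", "quantity": 2}, {"ean": "0000", "quantity": 1}],
    extra_checks=[lambda l: l["quantity"] > 0],
)
```

Only the lines in `ok` would be passed on to the target systems, which is exactly the consolidation-layer role the text assigns to POS DM.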


The data provision to SAP BW is also done on the basis of the transaction data,
with different data retention levels (monthly, weekly, receipt-specific) in data
staging. During the stress test, various alternatives for the retention of receipt-
specific data (BW cubes or ODS objects) were checked. The content delivered for
BW as part of the POS DM solution was also validated. Along with the supply of
POS data to SAP BW, individual reporting requirements that use receipt-specific
information as the basis for data mining (e.g. shopping basket analysis) were
also investigated.

The SAP Retail System is also supplied with POS data from the transaction
database of PIPE. The sales data from POS are only needed on a compressed
level (day/store/article) in SAP Retail, however. This data is then used as the
consumption values as the basis for downstream ERP processes such as
planning, replenishment, and the posting of the appropriate material/financial
documents in the ERP system. In the SAP Retail system, multi-step
replenishment was assumed to be the replenishment scenario. In this test,
however, only the orderly transfer and processing of the transaction data was
checked, not the performance of the downstream processes, since there are
already results in these areas (HPR Benchmark, customer stress tests, etc.).

In addition to the processing of POS inbound data, the cash register systems are
also supplied with master data from the ERP system. This data is usually made
available by the ERP system in SAP interface format (IDoc). The conversion of
the IDoc data into the appropriate XML format of an ARTS standard was a test
case during this stress test.
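The outbound direction described above (IDoc master data converted into ARTS-style XML for the cash registers) might look roughly like this. The segment fields and tag names are invented for illustration; the real IDoc types and the ARTS schema are considerably richer:

```python
import xml.etree.ElementTree as ET

def idoc_to_arts_xml(segments):
    """Convert flat IDoc-like segments into an ARTS-flavoured XML document."""
    root = ET.Element("ARTSItemMaster")  # hypothetical root element
    for seg in segments:
        item = ET.SubElement(root, "Item")
        ET.SubElement(item, "EAN").text = seg["ean"]
        ET.SubElement(item, "Description").text = seg["descr"]
        ET.SubElement(item, "Price").text = seg["price"]
    return ET.tostring(root, encoding="unicode")

xml_out = idoc_to_arts_xml(
    [{"ean": "4012345678901", "descr": "Milk 1l", "price": "0.89"}]
)
```

In the tested scenario this mapping runs inside XI; the sketch only shows the structural idea of flattened segments becoming nested XML.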


3 Hardware used

The allocation of the hardware resources in the various test cases is not a
recommendation for actual projects; it simply reflects the logical configuration of
the hardware for the respective test cases. In practice, there are other
considerations for the sizing of the hardware (e.g. use of POS Data
Management's BW for operational reporting, customer-specific validation coding,
XI used not only as a cash register converter, etc.). The use of the hardware
varied within the test cases and test series; for example, one server would be
the database server in one test and an application server in another. For all
results of the stress test, the specific arrangement of the hardware resources is
explained, and the results should be seen in this context.

The following hardware components were used:



Fig.: Hardware components









4 Results

As a representative sample of all stress-test results, the following describes
test cases for XI and POS DM. A comprehensive depiction of the results
(particularly for the BW processing and the POS outbound) is included in the
project's final presentation, which can be presented as needed (e.g. as part of
customer workshops).
4.1 Posting the POS Data as XML messages (ARTS Standard) via XI in
the transaction database of POS Data Management

This test case involves the processing of the POS data as XML messages (ARTS
Standard) via the Exchange Infrastructure (as POS converter) and the storage of
the data in the transaction database of POS Data Management (PIPE).

During the project the throughput was optimized to a record 59.12 million
receipt items processed per hour. In this test case, an 8-way ItaniumII server
with 1.5 GHz CPUs was used for the Exchange Infrastructure. For POS DM, a
server of the same model with only 4 CPUs was used (see above for the exact
hardware configuration). The fact that only 4 CPUs were used for the POS DM
server (compared to the 8 CPUs for XI) reflects that the main burden during
processing falls on XI. In an actual POS DM project requiring the processing of
nearly 60 million uncompressed sales per hour, SAP would naturally expect a
much higher-performing server for POS DM. The 4-way server was fully sufficient
for the upload of POS data during the test case, however.

XML files of 7 MB in size were processed, which proved to be the optimum
message size for this test case. Such an XML file contains 1,400 receipts with 10
articles each. A series of tests with differing XML file sizes provided important
insight into the throughput that can be achieved even with much smaller
messages. In total, XML messages of the same size were processed in the test
runs for approx. 2,000 stores.
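A back-of-the-envelope check ties these figures together: one 7 MB message holds 1,400 receipts with 10 items each, so the record throughput of 59.12 million receipt items per hour corresponds to roughly 4,200 messages, or about 29 GB of XML, per hour:

```python
# Figures taken from the test description above.
ITEMS_PER_MESSAGE = 1_400 * 10                 # receipts per file x items per receipt
ITEMS_PER_HOUR = 59_120_000                    # record throughput of the test

messages_per_hour = ITEMS_PER_HOUR / ITEMS_PER_MESSAGE
xml_volume_gb = messages_per_hour * 7 / 1024   # 7 MB per message

print(round(messages_per_hour))                # approx. 4223 messages per hour
print(round(xml_volume_gb, 1))                 # approx. 28.9 GB of XML per hour
```

The XML volume also explains why network bandwidth, discussed below, becomes a relevant factor at this throughput.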

During the test runs, the CPU and network capacity of both the XI and the POS
DM servers was fully utilized but not overloaded.

The 100 Mbit/s bandwidth used in the tests was not fully utilized. In practice,
a bandwidth of 1 Gbit/s between servers could be expected. As a corresponding
test showed, the throughput achieved in the tests would have risen further with
an increase in the bandwidth.




The improvements in the run-time in this test case can be traced to the following
optimizations, all of which, however, relate to the specific test case and can be
applied to other XI scenarios only to a limited degree (and not to POS scenarios):

- Optimization of the parameters of the entire XI landscape, especially the
  J2EE server, gateway, and qRFC queue administration
- Optimization of the file adapter configuration
- Fine-tuning of the Java VM parameters regarding garbage collection
  (J2EE server Java processes)
- Optimization of the OS settings for this specific application, together with
  HP's Unix development; these parameter settings were incorporated into
  the HP-UX standard (e.g. mprotect)
- Optimization of the XI mapping
- Transfer of data from XI to POS Data Management by addressing the
  ABAP proxy (via HTTP) instead of via the J2EE RFC server, which proved
  to be the better-performing method; SAP Development is currently
  working on providing this option for POS Data Management
- Empirical determination of the optimum message size for this particular
  test scenario and hardware landscape, which proved to be 7 MB per
  ARTS XML file; this value should be verified in an actual project
  environment as part of a test series. The results of the stress test can
  serve as a rough orientation for processing in a "trickle feed"
  (near-real-time processing during the day)


Detailed information on the test case configuration:

Hardware configuration:
SAP XI
o 8 * 1.5 GHz ItaniumII
o 64 GB RAM
o Oracle 9.2.0.4
o HP-UX B.11.23
SAP BW (POS Data Management (PIPE))
o 4 * 1.5 GHz ItaniumII
o 32 GB RAM
o Oracle 9.2.0.4
o HP-UX B.11.23


System configuration:
File adapter 10-fold
qRFC (eo_inbound_parallel) 20-fold (20 queues)
http connections 20-fold (HTTP/1.0)
Mapping (5 J2EE Server nodes @ 3 connections) 15-fold
qRFC queues on the PIPE 10-fold

System utilization:
XI server CPU utilization (8-way 1.5 GHz): average load approx. 95 %
POS DM server CPU utilization (4-way 1.5 GHz): average load approx. 80 %
Memory usage: XI server (approx. 20 GB of 64 GB available)
Network load: XI server: without compression: average 95 %



4.2 POS Data Management (PIPE) - SAP Retail

Regarding POS DM, the following test cases were addressed in the project:

- Importing the transaction data into the transaction database (PIPE)
- Compressing the transaction data and transferring it as intermediate
  documents (IDocs) for forwarding to the ERP system (SAP Retail). For the
  sales (receipt) data, the following interface formats were used:

  o Compressed sales (aggregation of identical articles across the selected
    receipt items)
    - IDoc message WPUUMS
  o Compressed payment methods (aggregation per receipt)
    - IDoc message WPUTAB
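The WPUUMS-style aggregation can be sketched as a simple group-by over receipt items, summarising per store, day, and article before handing the result to the ERP system. The field names are simplified assumptions, not the real IDoc segment fields:

```python
from collections import defaultdict

def compress_sales(receipt_items):
    """Aggregate receipt items per store/day/article (WPUUMS-style sketch)."""
    totals = defaultdict(lambda: {"quantity": 0, "amount": 0.0})
    for item in receipt_items:
        key = (item["store"], item["date"], item["ean"])
        totals[key]["quantity"] += item["quantity"]
        totals[key]["amount"] += item["amount"]
    return totals

compressed = compress_sales([
    {"store": "S1", "date": "2004-11-02", "ean": "4012345678901",
     "quantity": 1, "amount": 0.89},
    {"store": "S1", "date": "2004-11-02", "ean": "4012345678901",
     "quantity": 2, "amount": 1.78},
])
# The two receipt lines collapse into one compressed record.
```

This is the reduction that turns receipt-level volume into the much smaller day/store/article level that SAP Retail actually needs.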

During the compression of receipt data, a verification check of the master data
takes place simultaneously. The compression and master data verification are
critical steps in the processing. In practice, the processing of a backlog (multiple
days of POS data processed at once) or the upload of historical data is
conceivable on top of the already high daily volumes, which would place even
greater requirements on this functionality.

As part of the stress test, the compression methods offered in the standard
were investigated and optimized in cooperation with Development. The scalability
and performance of the compression were improved. This was only possible
due to close and efficient cooperation with POS DM development. The
improvements were quickly incorporated into the standard so that they could
also have a positive effect on the Ramp-Up of the solution taking place at the
same time.


For this test case, an optimum message size of approx. 500 articles in the
compressed IDoc was assumed. This size was determined empirically in the SAP
Retail system and keeps the large RFC communication burden and central
memory usage in proper proportion during the follow-on processes in ERP (e. g.
multi-step replenishment).

For this test case, two ItaniumII-based servers with 12 CPUs in total were used.
The compression was done with 30 dialog work processes running in parallel. In
one hour, 67 million uncompressed receipt items were compressed into approx.
20 million compressed POS records and made available to the ERP system in the
form of IDocs. Along with the provision of compressed sales, the compression of
payment method information was also measured, and substantial throughput
was achieved here as well. Based on these test results, it is advisable to
compress the sales and payment method data in one step, because this requires
only 86% of the runtime of separate processing. The reason for this is the fixed
amount of time required for the validation of the data.
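The 86 % figure follows directly from the fixed validation cost: processing sales and payment methods separately runs the validation pass twice, while the combined run needs it only once. As a small worked check (the 100-unit baseline is an arbitrary illustration):

```python
# If two separate runs take 100 time units in total, the combined run was
# measured at 86 % of that, so the duplicated validation pass accounts
# for the remaining 14 % of the separate runtime.
separate_runtime = 100.0                      # two separate runs, arbitrary units
combined_runtime = 0.86 * separate_runtime    # measured ratio from the test
duplicated_validation = separate_runtime - combined_runtime

print(round(duplicated_validation))           # 14
```

The larger the fixed validation share, the more a combined run saves, which is why the report recommends one-step processing.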

Detailed information on the test case configuration:

Hardware configuration:
Application Server PIPE
o 8 * 1.5 GHz ItaniumII
o 64 GB RAM
o Oracle 9.2.0.4
o HP-UX B.11.23
Database Server POS DM (PIPE)
o 4 * 1.5 GHz ItaniumII
o 32 GB RAM
o Oracle 9.2.0.4
o HP-UX B.11.23

System configuration:
Test with 12 CPUs (two servers)
30 dialog processes

Basic data and test result:
700 stores (@ 1000 receipts @ 10 items), each with 1 payment type
15 payment types total
5 different header combinations per store
Joint processing task 13 + 14
Test result: 67.02 mil. receipt items per hour
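A quick sanity check of the basic data and result above: the data set holds 700 stores x 1,000 receipts x 10 items = 7 million receipt items, so at the measured rate of 67.02 million items per hour one full run through the data set takes just over six minutes:

```python
# Figures taken from the basic data and test result above.
total_items = 700 * 1_000 * 10        # stores x receipts x items per receipt
items_per_hour = 67_020_000           # measured compression throughput

run_minutes = total_items / items_per_hour * 60
print(round(run_minutes, 1))          # approx. 6.3 minutes per run
```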



5 Lessons learned / Conclusion

The processes of the POS scenarios tested here proved themselves to be highly
scalable. The test results show that very high data volumes can be processed
successfully both with XI and POS Data Management on cost-efficient servers.

In particular, SAP XI was shown to be a sensible alternative to conventional POS
converters for the conversion of POS inbound data. It should be considered that
the medium-term intention is to deliver content for the ARTS Standard on XI,
which should reduce the implementation costs of the solution in projects where
the ARTS standards are used. The supply of master data and other information
to the cash register systems was also tested and satisfactory performance was
achieved here as well. However, this functionality was not the focus of the
testing since much higher volumes are normally to be found in POS inbound. For
the upload of POS sales via POS DM, the message size was also varied in a series
of tests in order to simulate a trickle-feed (timely processing of POS data more
than once during the day). The result was that these trickle-feed scenarios are
feasible, particularly when a combination of the ideal trickle-feed (an XML
message for each receipt) and batch-oriented processing of POS data is used.
Actual project requirements can now be better qualified on the basis of these test
results.

The functionality of POS DM proved itself to be high-performing and scalable
during this testing project. Even very high demands regarding the upload of POS
transaction data and its compression can be met with acceptable hardware
costs. For SAP BW, the content delivered in the standard was optimized for
performance based on the results of the test. In BW as well, the processing
specific to POS DM proved to be high-performing and scalable. Various design
alternatives regarding the provision of receipt-specific data were compared. The
actual design depends greatly on the requirements of the individual project and
should be validated with a stress test.
