
WRITE OPTIMIZED DSO

http://sapedw.blogspot.in/2011/12/write-optimized-dso.html


The objective of a write-optimized DSO is to save data as efficiently as possible so that it can be processed further without activation, without the additional effort of generating SIDs, and without aggregation or data-record-based delta handling. It is a staging DataStore used for faster uploads. The write-optimized DSO is designed primarily as the initial staging area for source-system data, from which the data can be transferred on to a standard DSO or an InfoCube. Data is saved in the write-optimized DataStore object quickly and in its most granular form: document headers and items are extracted with a DataSource, stored in the DataStore, and then immediately written to the further data targets in the architected data mart layer for optimized multidimensional analysis.

The key benefit of a write-optimized DataStore object is that loaded data is immediately available for further processing in the active version, so you save activation time across the landscape. The system does not generate SIDs for write-optimized DataStore objects, which makes the upload faster. Reporting on these DataStore objects is also possible; however, SAP recommends using the write-optimized DataStore as an EDW inbound layer and updating the data into further targets such as standard DataStore objects or InfoCubes.

Functionality of a write-optimized DSO:

- Only an active data table (DSO key: request ID, data package number, and record number); there is no change log table and no activation queue.
- The size of the DataStore is maintainable.
- The technical key is unique: every record receives a new technical key, so only inserts are performed.
- Data is stored at request level, as in a PSA table.
- No SID generation: reporting is possible (but performance is not optimized) and BEx reporting is switched off; the object can still be included in an InfoSet or MultiProvider.
- Improved performance during data load.
- Fully integrated in the data flow: it can be used as a data source and a data target, and exports into InfoProviders via request delta.
- Can be included in a process chain without an activation step.
- Partitioned on request ID (automatic).
- Allows parallel loads.

The system generates a unique technical key for the write-optimized DataStore object. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD). Only new data records are loaded against this key.
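The insert-only behaviour of this technical key can be pictured with a short sketch. The Python snippet below is only a conceptual illustration, not SAP code: the field names 0REQUEST, 0DATAPAKID and 0RECORD come from the text above, but the classes, method names and sample data are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class TechnicalKey:
    """Technical key of a write-optimized DSO: request GUID,
    data package number and record number (0REQUEST, 0DATAPAKID, 0RECORD)."""
    request: str
    datapakid: int
    record: int

class WriteOptimizedDSO:
    """Minimal model: a single active table keyed only by the technical key."""
    def __init__(self):
        self.active_table = {}   # TechnicalKey -> data record (dict of fields)

    def load_request(self, request_id, packages):
        """Insert every incoming record under a brand-new technical key.
        Records are never aggregated or overwritten, so two records with
        the same logical key (e.g. the same order number) both survive."""
        for pak_no, package in enumerate(packages, start=1):
            for rec_no, record in enumerate(package, start=1):
                key = TechnicalKey(request_id, pak_no, rec_no)
                self.active_table[key] = record   # insert only, no overwrite

# Two loads delivering the same order twice: both versions are kept.
dso = WriteOptimizedDSO()
dso.load_request("REQ_001", [[{"order": "123456", "item": 7, "qty": 10}]])
dso.load_request("REQ_002", [[{"order": "123456", "item": 7, "qty": 12}]])
print(len(dso.active_table))   # 2 -> no aggregation took place

Because every incoming record gets a brand-new key, a second delivery of the same order does not overwrite the first one; consolidation only happens later, in the standard DSO or InfoCube that this object feeds.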

Semantic keys can be defined; they become the standard keys in the further target DataStore. The purpose of the semantic key is to identify erroneous or duplicate incoming records: all subsequent data records with the same semantic key are written to the error stack together with the incorrect data records, and they are not updated to the data targets. A maximum of 16 key fields and 749 data fields are permitted. Semantic keys protect data quality, but they do not appear at the database level.

To process erroneous or duplicate records, you must define a semantic group in the DTP (data transfer process); it is used to define the key for this evaluation. If you can assume that there are no incoming duplicates or error records, there is no need to define a semantic group; it is not mandatory. The semantic key determines which records are detained during processing. For example, if you define order number and item as the key and one erroneous record arrives with order number 123456, item 7, then any other record received in the same request or in subsequent requests with order number 123456, item 7 is also detained. The same applies to duplicate records.
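This detention rule can be sketched roughly as follows. It is a minimal illustration of the semantic-key grouping idea, assuming a hypothetical error-stack structure; the function and field names are not SAP APIs.

# Detain records that share a semantic key with an erroneous record.
# Semantic key here: (order number, item) -- as in the example above.

def split_by_semantic_key(records, is_valid, detained_keys):
    """Route records either to the data target or to the error stack.
    'detained_keys' carries keys already detained in earlier requests,
    so later requests with the same key are held back as well."""
    to_target, error_stack = [], []
    for rec in records:
        key = (rec["order"], rec["item"])            # semantic key
        if key in detained_keys or not is_valid(rec):
            detained_keys.add(key)                   # detain the whole group
            error_stack.append(rec)
        else:
            to_target.append(rec)
    return to_target, error_stack

detained = set()
request_1 = [
    {"order": "123456", "item": 7, "qty": -1},   # erroneous (negative qty)
    {"order": "123456", "item": 8, "qty": 5},
]
request_2 = [
    {"order": "123456", "item": 7, "qty": 3},    # follows a detained key
]
ok1, err1 = split_by_semantic_key(request_1, lambda r: r["qty"] >= 0, detained)
ok2, err2 = split_by_semantic_key(request_2, lambda r: r["qty"] >= 0, detained)
print(len(ok1), len(err1), len(ok2), len(err2))  # 1 1 0 1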
Delta administration: Data loaded into a write-optimized DataStore object is available immediately for further processing; the activation step that was previously necessary is no longer required. Note that the loaded data is not aggregated: if two data records with the same logical key are extracted from the source, both records are saved in the DataStore object, because their technical keys differ. The record mode (0RECORDMODE) responsible for aggregation is retained, so aggregation of the data can still take place later in standard DataStore objects. A write-optimized DataStore object does not support image-based delta; it supports request-level delta, and you get a brand-new delta request for each data load. Because write-optimized DataStore objects have no change log, the system does not create a delta in the sense of a before image and an after image. When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted. To capture before- and after-image deltas, you must post the latest requests into further targets such as standard DataStore objects or InfoCubes.
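Request-level delta can be sketched in the same spirit: each target only receives the requests it has not yet posted. The bookkeeping below is a simplified, hypothetical stand-in for what the system does internally; none of the names are SAP APIs.

# Request-level delta: only requests not yet posted to a given target
# are delivered on the next update.

class RequestDelta:
    def __init__(self):
        self.requests = {}        # request_id -> list of records
        self.posted = {}          # target name -> set of posted request_ids

    def add_request(self, request_id, records):
        self.requests[request_id] = records

    def delta_for(self, target):
        """Return all records from requests this target has not seen yet."""
        seen = self.posted.setdefault(target, set())
        new_ids = [rid for rid in self.requests if rid not in seen]
        seen.update(new_ids)
        return [rec for rid in new_ids for rec in self.requests[rid]]

dso = RequestDelta()
dso.add_request("REQ_001", [{"order": "123456", "item": 7}])
print(len(dso.delta_for("STANDARD_DSO")))   # 1 -> first delta carries REQ_001
dso.add_request("REQ_002", [{"order": "123456", "item": 8}])
print(len(dso.delta_for("STANDARD_DSO")))   # 1 -> only the new request REQ_002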

Reporting on write-optimized DataStore data: For performance reasons, SID values are not created for the characteristics that are loaded. The data is still available for BEx queries, but compared with standard DataStore objects you can expect somewhat worse query performance, because the SID values have to be determined at reporting time. It is therefore recommended to use write-optimized DataStore objects as a consolidation layer and to update the data into standard DataStore objects or InfoCubes. From the OLAP/BEx query perspective there is no big difference between a write-optimized DataStore and a standard DataStore: the technical key is not visible for reporting, so the look and feel is just like a regular DataStore. If you want to use a write-optimized DataStore object in BEx queries, it is recommended that it have a semantic key and that you run a check to ensure the data is unique; in that case the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.

In a nutshell, a write-optimized DSO is not meant for reporting; it is a staging DataStore used for faster uploads. Direct reporting on this object is possible without activation, but for performance reasons you should access it through an InfoSet or MultiProvider.

Posted 8th December 2011 by Ashwin
