
Guide to SAP for Beginners

Navigating in SAP

Toolbar




Screen Icons


SAP Log on







o BI - Business Intelligence (Reporting and Analysis)




o OLAP: Online Analytical Processing (SAP BI)
o OLTP: Online Transaction Processing (SAP SD, MM, FICO, ABAP, HR)
o Basics:
o BI is a data warehousing tool
o ETL: Extraction > Transformation > Loading
o BI is used by middle-level and high-level management


PSA (Persistent Staging Area): Inbound storage area where loaded data can be checked and errors
corrected before further processing.
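The ETL flow with the PSA as an error-correction stage can be sketched as follows (a minimal Python sketch with invented records; this is illustrative only, not SAP code):

```python
def extract(source):
    """Extraction: pull raw records from the source system."""
    return list(source)

def correct_in_psa(psa_records):
    """PSA step: drop/fix obviously bad records before they reach the target."""
    return [r for r in psa_records if r.get("amount") is not None]

def transform(records):
    """Transformation: normalize the currency field to upper case."""
    return [{**r, "currency": r["currency"].upper()} for r in records]

def load(records, target):
    """Loading: append cleaned records to the data target."""
    target.extend(records)
    return target

source = [
    {"material": "E620", "amount": 400, "currency": "inr"},
    {"material": "E621", "amount": None, "currency": "usd"},  # faulty record
]
target = []
psa = extract(source)              # Extraction
cleaned = correct_in_psa(psa)      # error correction in the PSA
load(transform(cleaned), target)   # Transformation > Loading
print(target)  # only the corrected record arrives in the target
```

Only the record that survives the PSA check is transformed and loaded; the faulty one can be corrected and re-processed later.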

Chapter 2: Info Objects (SDN)

Info objects are the fields in the BI system. They are divided into two types:
1. Characteristics: Used to describe key figures
Ex: Material, Customer

The characteristics are divided into three types. They are:
a. Time Characteristics
b. Unit Characteristics
c. Technical Characteristics

a. Time Characteristics include day, month, quarter, half-year, and year. They are
generated by the system.
Note: Info objects are of two types:
i. System generated (prefix 0)
ii. Customer generated (prefix Z)

b. Unit Characteristics include currency and unit (0Currency, 0Unit).

Material   Amount   0Currency   Quantity (0Unit)
E620       400      Rs          10
E621       500      $           12

They are always assigned to key figures of type amount and quantity (as shown in the above example).
c. Technical Characteristics include 0requestID, 0changeID, 0recordID.

2. Key Figures: Used for calculation purpose
Ex: Amount, Quantity


The key figures are divided into two types. They are:
a. Cumulative key figures
b. Non-cumulative key figures

a. Cumulative key figures are used when the values in the key figure field need to be added.

b. Non-cumulative key figures are used in MM- and HR-related reports.

Plant   Material   Stock Value   Date
4002    Pencil     500           28/04/2012
4002    Pencil     600           29/04/2012

Records in the 'Stock Value' field are not added; the snapshot on the latest date applies.
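The difference can be sketched in Python (using the stock data above): a cumulative key figure is summed across records, while a non-cumulative key figure takes the latest snapshot instead.

```python
records = [
    {"plant": "4002", "material": "Pencil", "stock_value": 500, "date": "2012-04-28"},
    {"plant": "4002", "material": "Pencil", "stock_value": 600, "date": "2012-04-29"},
]

# Cumulative aggregation: values are added (wrong for a stock snapshot!)
cumulative = sum(r["stock_value"] for r in records)

# Non-cumulative aggregation: take the value of the latest record
latest = max(records, key=lambda r: r["date"])["stock_value"]

print(cumulative, latest)  # 1100 600
```

Summing the two snapshots would report a stock of 1100 pencils, while the correct non-cumulative answer is the latest snapshot, 600.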

Steps to create info objects of type characteristics and key figures:

Part 1:
1. Go to RSA1
2. Go to 'Info Object' selection
3. Right click and, from the context menu, select 'Create Info Area'
4. Give the technical name (Always unique)
5. Give description
6. Click on Continue

Part 2:
1. Right click on Info Area > Select create 'Info Object Catalog'
2. Give technical name
3. Give description
4. Select info object type 'Characteristic'
5. Click on Activate button

Part 3:
1. Right click on Info area > Select create 'Info Object Catalog'
2. Give technical name and description
3. Select info object type 'Key Figure'
4. Click on Activate button

Part 4:
1. Right click on Info Object Catalog for characteristics
2. Select create Info Object
3. Give technical name (length between 3 and 8 characters)
4. Give description
5. Click on Continue
6. Give mandatory options in the 'General' tab page (like data type, length, ...)
7. Click on Activate button

Example of a cumulative key figure (the Amount values are added up):

Material   Amount
E621       100
E622       200
E623       300
Total:     600

Part 5:
1. Right click on the Info Object Catalog for key figures
2. Select create Info Object
3. Give technical name (length between 3 and 8 characters)
4. Give description
5. Click on Continue
6. For key figures of type 'Amount' and 'Quantity' we have to assign a unit characteristic
(0Currency/0Unit)
7. Click on Activate button

There are two types of data in SAP (ERP). They are:
1. Master Data
2. Transaction Data

1. Master Data: It is always assigned to a characteristic. From the SAP BI point of view, master
data doesn't change frequently.

Note: Master Data is always assigned to a characteristic. A characteristic is called master data
characteristic if it has attributes, text and hierarchies.


i. Attributes: These are info objects which explain a characteristic in detail. They are divided into
two types:
a. Navigational attributes
b. Display attributes

Steps to create Attributes (type characteristic):

Part 1:
1. Go to Info object of type characteristic
2. Go to 'Display/Change'
3. In the 'Master data text' tab page, check the 'With Master Data' checkbox
4. Go to the Attribute tab page
5. Give technical name of attribute
6. Click Enter
7. Give description
8. Give data type, length
9. Click on continue
10. Activate the info object

Part 2:
1. If the info object already exists in the system, copy its technical name
2. Go to the 'Attribute' tab page of the characteristic
3. Paste the technical name of the info object
4. Click on Activate button

Note: A key figure can be an attribute of a characteristic, and it can only be a display attribute.

Steps to enable Texts:
1. Right click on the info object, select Change, go to the 'Master Data/Text' tab page, and select the
'Text' checkbox.





Example of drilling down from company code to sales organization to division:

Company Code   Amount
India          2000
USA            2500

Company Code   Sales Org        Amount
India          Hyderabad        2000
India          Bangalore        2000
USA            New York         2500
USA            Washington D.C   2500

Company Code   Sales Org        Division          Amount
India          Hyderabad        Ameerpet          1000
India          Hyderabad        Begumpet          1000
India          Bangalore        Electronic City   1000
India          Bangalore        Silk Board        1000
USA            New York         7th Street        1250
USA            New York         9th Lane          1250
USA            Washington D.C   8th Street        1250
USA            Washington D.C   10th Street       1250
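The drill-down shown in these tables is just an aggregation along the hierarchy levels. A minimal Python sketch with its own small, consistent set of invented figures (real BW hierarchies are stored in the hierarchy tables of the InfoObject, not in plain dictionaries):

```python
from collections import defaultdict

# Leaf-level amounts: (company code, sales org, division) -> amount
leaf_amounts = {
    ("India", "Hyderabad", "Ameerpet"): 1000,
    ("India", "Hyderabad", "Begumpet"): 1000,
    ("India", "Bangalore", "Electronic City"): 1000,
    ("India", "Bangalore", "Silk Board"): 1000,
}

# Roll division-level amounts up to sales org, then to company code
by_sales_org = defaultdict(int)
by_company = defaultdict(int)
for (company, sales_org, _division), amount in leaf_amounts.items():
    by_sales_org[(company, sales_org)] += amount
    by_company[company] += amount

print(by_sales_org[("India", "Hyderabad")])  # 2000
print(by_company["India"])                   # 4000
```

Each hierarchy level is just a grouping key; the report sums the leaves under whichever node the user drills into.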

Navigational Attribute: We can drill down using a navigational attribute. It acts as a
characteristic in the report.
Display Attribute: We cannot drill down using a display attribute.

Note:
1. Attribute Only: If you mark the characteristic as exclusively an attribute, it can only be used
as a display attribute, not as a navigational attribute.
2. Such a characteristic cannot be transferred into an InfoCube directly.

Steps to change attributes from navigational to display:

1. Go to the 'Attribute' tab page; in the column 'Navigation On/Off', click the pencil icon.
2. When changing from display to navigational, give a description and click on the Activate button.

Steps to create attribute (type Key Figure):
1. Go to info object, go to 'Attribute' tab page
2. Give technical name
3. Click on Enter, Select radio button 'Create attribute as key figure'
4. Click on Continue
5. Give description and data type
6. Click on continue
7. Click on activate button

Tab Pages of a Characteristic:
1. General tab page:
- Data Element: Naming convention of the data element (technical name of the info object); it is
like a field at database level.
- Data Type: Here we have CHAR (1-60), string, numeric (1-60), date (8), time (6).
- Lower Case Letters: If the characteristic contains lower case letters, select the lower case
allowed option.
- SID Table: Surrogate ID (master data ID) table.
2. Business Explorer tab page: The selections made on the Business Explorer tab page are by
default displayed at report level.
3. Master Data/Text tab page: The info object has the following tables:
P -> Time-independent display attributes
Q -> Time-dependent display attributes
X -> Time-independent navigational attributes
Y -> Time-dependent navigational attributes
Text: If we select this option, we can have texts for the characteristic.
Hierarchy: To enable hierarchies, we have to select the hierarchies option.
Attribute: Here we assign the attributes for the characteristic.
ii. Text: The same report can be displayed in different languages in different countries. This is possible
because of the 'Text' functionality.
iii. Hierarchy
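The language-dependent behaviour of texts (item ii above) can be sketched like this. BW text tables carry a language key, so the same characteristic value can show a different description per logon language (the material data below is invented):

```python
# Toy text table: (characteristic value, language) -> description
text_table = {
    ("E620", "EN"): "Pencil",
    ("E620", "DE"): "Bleistift",
}

def text_for(material, language):
    # Fall back to the technical key if no text is maintained for the language
    return text_table.get((material, language), material)

print(text_for("E620", "DE"))  # Bleistift
print(text_for("E620", "FR"))  # E620  (no French text maintained)
```

The report layer only swaps the description shown; the underlying key value stays the same in every language.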

Chapter 3: Extended Star Schema (SCN)



o The fact table consists of DIM IDs and key figures.
o Every InfoCube has two types of tables:
a. Fact table
b. Dimension tables
o An InfoCube consists of one fact table (E and F tables), surrounded by multiple dimension tables.
o The maximum number of dimension tables in an InfoCube is 16 and the minimum is 4.
o There are 3 system-generated dimension tables:
a. Data Package dimension (technical dimension)
b. Time dimension
c. Unit dimension
o The maximum number of key figures in an InfoCube is 233.
o The maximum number of characteristics in an InfoCube is 248.

Advantages of Extended Star Schema:
o Faster loading of data/ faster access to reports
o Sharing of master data
o Easy loading of time dependent objects

Classical Star Schema:

o In the classical star schema, characteristic values are stored directly in the dimension (DIM) tables.
o For every dimension table, a DIM ID is generated and stored in the fact table.

Differences between Classical Star Schema and Extended Star Schema:
o In Classic star schema, dimension and master data table are same. But in Extend star schema,
dimension and master data table are different. (Master data resides outside the Info cube and
dimension table, inside Info cube).
o In Classic star schema we can analyze only 16 angles (perspectives) whereas in extended star
schema we can analyze in 16*248 angles. Plus the performance is faster to that extent.
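A toy model of the extended star schema lookup path (all data invented): the fact table stores DIM IDs, the dimension table stores SIDs, and the SID table points to master data kept outside the cube, which is how several cubes can share one master data table.

```python
fact_table = [{"dim_id": 1, "amount": 400}]
dimension_table = {1: {"material_sid": 10}}          # DIM ID -> SIDs
sid_table = {10: "E620"}                             # SID -> characteristic value
master_data = {"E620": {"material_name": "Pencil"}}  # shared across cubes

def resolve(fact_row):
    """Follow fact -> dimension -> SID -> master data, as a query would."""
    sid = dimension_table[fact_row["dim_id"]]["material_sid"]
    material = sid_table[sid]
    return {"material": material, **master_data[material],
            "amount": fact_row["amount"]}

print(resolve(fact_table[0]))
# {'material': 'E620', 'material_name': 'Pencil', 'amount': 400}
```

The extra hops (DIM ID, then SID) are the "multistep navigation" overhead mentioned later, paid in exchange for shared master data.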



Create an InfoCube

In BW, Customer ID, Material Number, Sales Representative ID, Unit of Measure, and Transaction Date
are called characteristics. Customer Name and Customer Address are attributes of Customer ID,
although they are characteristics as well. Per Unit Sales Price, Quantity Sold, and Sales Revenue are
referred to as key figures. Characteristics and key figures are collectively termed InfoObjects.

A key figure can be an attribute of a characteristic. For instance, Per Unit Sales Price can be an attribute
of Material Number. In our examples, Per Unit Sales Price is a fact table key figure. In the real world, such
decisions are made during the data warehouse design phase. InfoCube Design provides some guidelines
for making such decisions.

InfoObjects are analogous to bricks. We use these objects to build InfoCubes. An InfoCube comprises the
fact table and its associated dimension tables in a star schema.

In this chapter, we will demonstrate how to create an InfoCube that implements the star schema from
the figure. We start by creating an InfoArea. An InfoArea is analogous to a construction site, on which we
build InfoCubes.

Creating an InfoArea

In BW, InfoAreas are the branches and nodes of a tree structure. InfoCubes are listed under the branches
and nodes. The relationship of InfoAreas to InfoCubes in BW resembles the relationship of directories to
files in an operating system. Let's create an InfoArea first, before constructing the InfoCube.

Work Instructions
Step 1. After logging on to the BW system, run transaction RSA1, or double-click Administrator
Workbench.

Step 2. In the new window, click Data targets under Modelling in the left panel. In the right panel,
right-click InfoObjects and select Create InfoArea.

Note
In BW, InfoCubes and ODS Objects are collectively called data targets.

Step 3. Enter a name and a description for the InfoArea, and then click Continue.


Result
The InfoArea has been created.

Creating InfoObject Catalogs

Before we can create an InfoCube, we must have InfoObjects. Before we can create InfoObjects,
however, we must have InfoObject Catalogs. Because characteristics and key figures are different types
of objects, we organize them within their own separate folders, which are called InfoObject Catalogs.
Like InfoCubes, InfoObject Catalogs are listed under InfoAreas.

Having created an InfoArea, let's now create InfoObject Catalogs to hold characteristics and key figures.

Work Instructions
Step 1. Click InfoObjects under Modelling in the left panel. In the right panel, right-click InfoArea
demo, and select Create InfoObject catalog.
Step 2. Enter a name and a description for the InfoObject Catalog, select the option Char., and then
click Create to create the InfoObject Catalog.
Step 3. In the new window, click Check to check the InfoObject Catalog. If it is valid, click Activate to
activate the InfoObject Catalog. Once the activation process is finished, the status message InfoObject
catalog IOC_DEMO_CH activated appears at the bottom of the screen.

Result
Click Back to return to the previous screen. The newly created InfoObject Catalog will be displayed.

Following the same procedure, we create an InfoObject Catalog to hold key figures. This time, make sure
that the option Key figure is selected.

Creating InfoObjects - Characteristics

Now we are ready to create characteristics.

Work Instructions
Step 1. Right-click InfoObject Catalog demo: characteristics, and then select Create InfoObject.
Step 2. Enter a name and a description, and then click Continue.
Step 3. Select CHAR as the Data Type, enter 15 for the field Length, and then click the tab Attributes.
Step 4. Enter the attribute name IO_MATNM, and then click Create to create the attribute.

Note: Notice that IO_MATNM is underlined. In BW, the underline works like a hyperlink. After IO_MATNM
is created, when you click IO_MATNM, the hyperlink will lead you to IO_MATNM's detail definition window.
Step 5. Select the option Create attribute as characteristic, and then click Continue.


Step 6. Select CHAR as the DataType,and then enter 30 for the field Length. Notice that the option
Exclusively attribute is selected by default. Click to continue.

Note: If Exclusively attribute is selected,the attribute IO_MATNM can be used only as adisplay
attribute,not as a navigational attribute. "InfoCube Design Alternative I Time Dependent Navigational
Attributes," discusses an example of the navigation attributes.

Selecting Exclusively attribute allows you to select Lowercase letters. If the option Lowercase letters is
selected,the attribute can accept lowercase letters in data to be loaded.

If the option Lowercase letters is selected,no master data tables,text tables,or another level of attributes
underneath are allowed. "BW Star Schema," describes master data tables and text tables,and explains
how they relate to a characteristic.
Step 7. Click Check to check the characteristic. If it is valid, click Activate to activate the characteristic.

Step 8. A window is displayed asking whether you want to activate dependent InfoObjects. In our
example, the dependent InfoObject is IO_MATNM.

Confirm to activate IO_MAT and IO_MATNM.

Result
You have now created the characteristic IO_MAT and its attribute IO_MATNM.

Note: Saving an InfoObject means saving its properties, or meta-data. You have not yet created its
physical database objects, such as tables.

Activating an InfoObject will create the relevant database objects. After activating IO_MAT, the names of
the newly created master data table and text table are displayed under the Master data/texts tab. The
name of the master data table is /BIC/PIO_MAT, and the name of the text table is /BIC/TIO_MAT.

Notice the prefix /BIC/ in the database object names. BW prefixes /BI0/ to the names of database objects
of Business Content objects, and it prefixes /BIC/ to the names of database objects of customer-created
BW objects.
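The naming rule just described can be captured in a small helper (a simplified sketch: /BIC/ for customer objects, /BI0/ for Business Content objects whose technical names start with 0; P = master data table, T = text table; real BW naming has more cases than this):

```python
def table_name(infoobject, table_type):
    """Build a BW-style table name for an InfoObject.

    table_type: 'P' for the master data table, 'T' for the text table.
    """
    if infoobject.startswith("0"):
        # Business Content object: /BI0/ prefix, leading 0 dropped
        return f"/BI0/{table_type}{infoobject[1:]}"
    # Customer-created object: /BIC/ prefix
    return f"/BIC/{table_type}{infoobject}"

print(table_name("IO_MAT", "P"))     # /BIC/PIO_MAT  (master data table)
print(table_name("IO_MAT", "T"))     # /BIC/TIO_MAT  (text table)
print(table_name("0MATERIAL", "P"))  # /BI0/PMATERIAL
```

The two /BIC/ names match the tables generated for IO_MAT in the steps above.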

Repeat the preceding steps to create the other characteristics listed.

CHARACTERISTICS

The column "Assigned to" specifies the characteristic to which an attribute is assigned. For
example, IO_MATNM is an attribute of IO_MAT.

The Material Description in the table will be treated as IO_MAT's text, as shown in "Creating
InfoPackages to Load Characteristic Data." We do not need to create a characteristic for it.

IO_SREG and IO_SOFF are created as independent characteristics, instead of as IO_SREP's attributes.
Section 3.6, "Entering the Master Data, Text, and Hierarchy Manually," explains how to link IO_SOFF and
IO_SREG to IO_SREP via a sales organization hierarchy. "InfoCube Design Alternative I - Time-Dependent
Navigational Attributes" discusses a new InfoCube design in which IO_SOFF and IO_SREG are
IO_SREP's attributes.

BW provides characteristics for units of measure and time. We do not need to create them. From
Administrator Workbench, we can verify that the characteristics in the table have been created by clicking
InfoArea demo, and then clicking InfoObject Catalog demo: characteristics.


Creating InfoObjects - Key Figures

Next, we start to create the key figures.

Work Instructions
Step 1. Right-click InfoObject Catalog demo: key figures, and then select Create InfoObject.
Step 2. Enter a name and a description, and then click Continue.
Step 3. Select Amount in the block Type/data type, select USD as the Fixed currency in the block
Currency/unit of measure, and then click Check to check the key figure. If it is valid, click Activate to
activate the key figure.

Result
You have created the key figure IO_PRC. A status message All InfoObject(s) activated will appear at the
bottom of the screen.

Repeat the preceding steps to create the other key figures listed.

KEY FIGURES

From Administrator Workbench, we can verify that the key figures in the table have been created by
clicking InfoArea demo, and then clicking InfoObject Catalog demo: key figures.

Having created the necessary InfoObjects, we now continue to create the InfoCube.

Creating an InfoCube

The following steps demonstrate how to create an InfoCube, i.e., the fact table and associated dimension
tables, for the sales data shown in the table.

Work Instructions
Step 1. Select Data targets under Modelling in the left panel. In the right panel, right-click InfoArea demo
and then select Create InfoCube.

Step 2. Enter a name and a description, select the option Basic Cube in the block InfoCube type, and then
click Create to create the InfoCube.

Note: An InfoCube can be a basic cube, a multi-cube, an SAP remote cube, or a general remote cube. A
basic cube has a fact table and associated dimension tables, and it contains data. We are building a basic
cube.

A multi-cube is a union of multiple basic cubes and/or remote cubes to allow cross-subject analysis. It
does not contain data. See "Aggregates and Multi-Cubes" for an example.

A remote cube does not contain data; instead, the data reside in the source system. A remote cube is
analogous to a channel, allowing users to access the data using BEx. As a consequence, querying the
data leads to poor performance.

If the source system is an SAP system, we need to select the option SAP RemoteCube. Otherwise, we
need to select the option Gen. Remote Cube. This book will not discuss remote cubes.

Step 3. Select IO_CUST, IO_MAT, and IO_SREP from the Template table, and move them to the Structure
table by clicking the transfer arrow.

Next, click the Dimensions button to create dimensions and assign these characteristics to the
dimensions.

Step 4. Click Create, and then enter a description for the dimension.

Note: BW automatically assigns technical names to each dimension with the format <InfoCube
name><Number starting from 1>.

The fixed dimensions <InfoCube name><P|T|U> are reserved for Data Packet, Time, and Unit. The section
"Data Load Requests" discusses the Data Packet dimension.

A dimension uses a key column in the fact table. In most databases, a table can have a maximum of 16
key columns. Therefore, BW mandates that an InfoCube can have a maximum of 16 dimensions: three
are reserved for Data Packet, Time, and Unit; the remaining 13 are left for us to use.
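The dimension limit just described can be expressed as a small validation sketch (illustrative only; BW enforces this in the Workbench, and all names below are invented):

```python
FIXED_DIMENSIONS = ["Data Packet", "Time", "Unit"]  # reserved by BW
MAX_DIMENSIONS = 16  # one key column per dimension in the fact table

def validate_dimensions(custom_dimensions):
    """Check a proposed list of customer dimensions against the BW limits."""
    if len(custom_dimensions) < 1:
        raise ValueError("an InfoCube needs at least one customer dimension")
    total = len(FIXED_DIMENSIONS) + len(custom_dimensions)
    if total > MAX_DIMENSIONS:
        raise ValueError(f"{total} dimensions exceed the maximum of {MAX_DIMENSIONS}")
    return total

print(validate_dimensions(["Customer", "Material", "Sales Rep"]))  # 6
```

Three customer dimensions plus the three fixed ones give six in total, well under the limit of 16; a fourteenth customer dimension would be rejected.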

Repeat the same procedure to create two other dimensions. Next, click the Assign tab to assign the
characteristics to the dimensions.

Step 5. Select a characteristic in the Characteristics and assigned dimension block, select the dimension
to which the characteristic will be assigned in the Dimensions block, and then click Assign to assign the
characteristic to the dimension.
Step 6. After assigning all three characteristics to their dimensions, click Continue.
Step 7. Select the Time characteristics tab, select 0CALDAY from the Template table, and move it to the
Structure table by clicking the transfer arrow.
Step 8. Select the Key figures tab, select IO_PRC, IO_QUAN, and IO_REV from the Template table and
move them to the Structure table by clicking the transfer arrow.
Next, click Check to check the InfoCube. If it is valid, click Activate to activate the InfoCube.

Result:
You have created the InfoCube IC_DEMOBC. A status message InfoCube IC_DEMOBC activated will
appear at the bottom of the screen.

Summary

In this chapter, we created an InfoCube. To display its data model, you can right-click InfoCube demo:
Basic Cube, then select Display data model.

The data model appears in the right panel of the screen.

Note:
IO_SREG and IO_SOFF are not listed under IO_SREP as attributes; rather, they have been created as
independent characteristics. "Entering the Master Data, Text, and Hierarchy Manually" describes how to
link IO_SOFF and IO_SREG to IO_SREP via a sales organization hierarchy. "InfoCube Design
Alternative I - Time-Dependent Navigational Attributes" discusses a new InfoCube design in which
IO_SOFF and IO_SREG are IO_SREP's attributes.


InfoCube
An InfoCube is structured as an (extended) star schema, where a fact table is surrounded by different
dimension tables that are linked via DIM IDs. Data-wise, the cube holds aggregated data.
An InfoCube contains a maximum of 16 dimensions (3 SAP-defined and 13 customer-defined) and a
minimum of 4 (3 SAP-defined and 1 customer-defined), with a maximum of 233 key figures and 248
characteristics.
The following InfoCube types exist in BI:
. InfoCubes
. VirtualProviders
There are two subtypes of InfoCubes: Standard and Real-Time. Although both have an extended star schema
design, Real-Time InfoCubes (previously called Transactional InfoCubes) are optimized for direct update and do
not need to use the ETL process. Real-Time InfoCubes are almost exclusively used in the BI Integrated Planning
tool set. All BI InfoCubes consist of a set of relational tables arranged together in a star schema.

Star Schema
In the star schema model, the fact table is surrounded by dimension tables. The fact table is usually very
large, containing millions to billions of records, while the dimension tables are comparatively small,
containing a few thousand to a few million records. In practice, the fact table holds transaction data and
the dimension tables hold master data.
The dimension tables are specific to a fact table, meaning they are not shared across other fact tables.
When another fact table needs the same product dimension data, a further dimension table specific to
that fact table is needed.
This situation creates data management problems such as master data redundancy, because the very
same product is duplicated in several dimension tables instead of being shared from one single master
data table. This problem is solved in the extended star schema.





Extended star schema
In the Extended Star Schema (the BW star schema model), the dimension table does not contain master
data; instead, master data is stored externally in the master data tables (texts, attributes, hierarchies).
The characteristic in the dimension table points to the relevant master data through the SID table. The
SID table points to the characteristic's attributes, texts, and hierarchies.
This multistep navigation adds extra overhead when executing a query. However, the benefit of this
model is that all fact tables (InfoCubes) share common master data tables between several InfoCubes.
Moreover, the SID table concept allows users to implement multi-language and multi-hierarchy OLAP
environments, and it also supports slowly changing dimensions.





InfoArea
In BW, InfoAreas are the branches and nodes of a tree structure. InfoProviders are listed under the branches and
nodes. The relationship of InfoAreas to InfoProviders in BW is similar to the relationship of directories to files in an
operating system.

Steps to create an InfoArea:
Step 1: After logging in to BW system, run transaction RSA1.
Step 2: In the new window, click InfoProvider tab under Modeling in the left panel. In the right panel, right click on
InfoProvider and select Create InfoArea.


Step 3: Enter a name and a description for the InfoArea, and then click Continue.





InfoObjects
InfoObjects are the smallest pieces in the SAP BW puzzle. They are used to describe business information
and processes. Typical examples of InfoObjects are: Customer Name, Region, Currency, Revenue, and
Fiscal Year.

There are five types of SAP BW InfoObjects: Key figures, Characteristics, Unit characteristics, Time
characteristics, and Technical characteristics.
The following picture illustrates the different InfoObject types and their examples.

Key figures
Key figures describe the numeric information that is reported on in a query. The most popular types of key
figures are:
Quantity - numeric values with associated unit of measure;
Amount - numeric values with associated currency;
Date - enable date computation;
Time - enable time computations;
Number;
Integer.
Characteristics
Characteristics describe business objects in BW such as products, customers, and employees, and
attributes like color, material, and company code. They let us set selection criteria that determine which
data are displayed.
Unit characteristics
Unit characteristics give meaning to key figure values by storing currencies or units of measure
(e.g., currency unit, value unit).
Time characteristics
Time characteristics describe the time reference of business events. They build the time dimension, an
obligatory part of an InfoCube. The complete time characteristics (clearly assigned to a point in time)
provided by SAP are: calendar day (0CALDAY), calendar week (0CALWEEK), calendar month
(0CALMONTH), calendar quarter (0CALQUARTER), calendar year (0CALYEAR), fiscal year
(0FISCYEAR), and fiscal period (0FISCPER). The incomplete time characteristics are: 0CALMONTH2,
0CALQUART1, 0HALFYEAR1, 0WEEKDAY1, and 0FISCPER3.
Technical characteristics
Technical characteristics serve administrative purposes (e.g., storing the request ID and change ID).
InfoObject catalogs
SAP BW InfoObjects are stored in InfoObject catalogs, with key figures and characteristics (all types)
kept separately. Usually there are two InfoObject catalogs (one for key figures, one for characteristics)
defined for every business context in an SAP BW implementation.

Detailed information on a particular InfoObject can be found in the Modeling area of the Data Warehousing
Workbench (TCode: RSA1 -> Modeling -> InfoObjects).



Data Store Objects
Since a DataStore object is designed like a table, it contains key fields (document number and item, for
example) and data fields. Data fields can not only be key figures but also character fields (order status,
customer, or time, for example). You can use a delta update to update DataStore object data into
connected InfoCubes or into additional DataStore objects or master data tables (attributes or texts) in the
same system or in different systems. In contrast to the multidimensional data storage of InfoCubes, data
in DataStore objects is stored in flat, transparent database tables. Fact and dimension tables are not
created.

With DataStore objects, you can not only update key figures cumulatively, as with InfoCubes, but also
overwrite data fields. This is especially important for transaction-level documents that change in the
source system. Here, document changes involve not only numerical fields, such as order quantities, but
also non-numerical ones such as ship-to parties, delivery dates, and statuses. Since the OLTP system
overwrites these records when changes occur, DataStore objects must often be modeled to overwrite the
corresponding fields and update to the current value in BI.
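The overwrite behaviour just described can be sketched like this: a DataStore object is keyed (here by invented document number and item fields), so a changed document replaces the earlier record instead of being added to it.

```python
# Toy flat DSO: key fields -> data fields, like a flat transparent table
dso = {}

def update(record):
    """Overwrite semantics: the record with the same key replaces the old one."""
    key = (record["doc_number"], record["item"])
    dso[key] = {"status": record["status"], "quantity": record["quantity"]}

update({"doc_number": "4711", "item": 10, "status": "OPEN", "quantity": 5})
update({"doc_number": "4711", "item": 10, "status": "SHIPPED", "quantity": 7})

print(dso[("4711", 10)])
# {'status': 'SHIPPED', 'quantity': 7} - overwritten, not summed
```

In an InfoCube the two loads would have added the quantities (5 + 7 = 12); the DSO instead keeps the current document state.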

DataStore Object Types
SAP BI distinguishes between three DataStore object types: Standard, Write-Optimized, and Direct
Update. These three flavors of DataStore objects are shown in the following figure.

1. The Standard DataStore Object consists of three tables (activation queue, active data table, and
change log). It is completely integrated in the staging process. In other words, data can be loaded into
and out of the DataStore Objects during the staging process. Using a change log means that all changes
are also written and are available as delta uploads for connected data targets.

Architecture and Functions of Standard DataStore Objects

Standard DataStore objects consist of three tables:
Active Data table
This is where the current status of the data is stored. This table contains a semantic (business-related)
key that can be defined by the modeler (order number, item, or schedule line, for example). It is very
important that the key be correctly defined by the modeler, as a match on the key initiates special delta
processing during the activation phase (discussed later). Also, reporting via the BEx uses this table.
Change Log table
During the activation run, changes are stored in the change log. Here, you can find the complete history
of the changes, since the content of the change log is not automatically deleted. The connected targets
are updated from the change log if they are supplied with data from the DataStore object in the delta
method. The change log is a PSA table and can also be maintained in the PSA tree of the Data
Warehousing Workbench. The change log has a technical key consisting of a request, data package, and
data record number.
Activation Queue table
During the DTP, the records are first written to this table. This step is necessary due to the complex logic
that is then required by the activation process.
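The interplay of the three tables can be sketched as follows (a much simplified Python model with invented records; the real change log stores full before- and after-images per request, and key figures in before-images carry reversed signs):

```python
active = {}       # semantic key -> current record (Active Data table)
change_log = []   # history of changes, feeds delta updates to targets

def activate(activation_queue):
    """Move records from the activation queue into the active table,
    writing before/after images to the change log."""
    for rec in activation_queue:
        key = rec["order"]
        if key in active:
            change_log.append(("before", active[key]))  # image being replaced
        change_log.append(("after", rec))
        active[key] = rec
    activation_queue.clear()  # queue is emptied after activation

activate([{"order": "A1", "qty": 5}])   # first load
activate([{"order": "A1", "qty": 8}])   # changed document, same key

print(active["A1"]["qty"])  # 8
print(len(change_log))      # 3: one after-image, then a before/after pair
```

Connected targets that read the change log in the delta method see both the old and new images, which lets them correct previously loaded values.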




Schema for a Standard DataStore Object

2. The Write-Optimized DataStore object is a newer kind of DataStore object. It is targeted at the
warehouse level of the architecture and has the advantage of quicker loads.
3. A Direct Update DataStore object (previously the 3.x transactional ODS) has only the table with active
data. This means it is not as easily integrated in the staging process. Instead, this DataStore object type
is filled using APIs and can be read via a BAPI.


MultiProvider
A MultiProvider is a special InfoProvider that combines data from several InfoProviders, providing it for
reporting. The MultiProvider itself (like InfoSets and VirtualProviders) does not contain any data. Its data
comes exclusively from the InfoProviders on which it is based. A MultiProvider can be made up of various
combinations of the following InfoProviders:
. InfoCubes
. DataStore objects
. InfoObjects
. InfoSets
. Aggregation levels (slices of an InfoCube to support BI Integrated Planning)

Use
A BEx query can only be written against a single InfoProvider. To a query, a MultiProvider appears as a
single InfoProvider, but through it multiple providers can be accessed indirectly.
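The union behaviour can be sketched like this: the MultiProvider holds no data of its own, and a query against it simply reads from the underlying providers (hypothetical in-memory providers below):

```python
# Two invented underlying InfoProviders
cube_actuals = [{"region": "EU", "amount": 100}]
dso_plan = [{"region": "EU", "amount": 120}]

def multiprovider_query(providers):
    """Union of all records from the underlying InfoProviders;
    the 'MultiProvider' itself stores nothing."""
    rows = []
    for provider in providers:
        rows.extend(provider)
    return rows

result = multiprovider_query([cube_actuals, dso_plan])
print(len(result))  # 2 - one row from each provider
```

The query sees one combined result set, even though actuals and plan data live in different providers.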





Difference between ODS vs CUBE

The main difference between the ODS Object and the PSA and InfoCube is that the ODS Object allows
existing data to be changed. Whereas an InfoCube principally allows inserts, and only allows deletion on
the basis of requests, data in an ODS Object can be changed during the staging process.

This enables an ODS Object to be used as a consolidation Object in a Data Warehouse. PSA data
change is only supported by manual change or by customer programs and not by the staging mechanism.

Unlike ODS Objects, InfoCubes have a mandatory time dimension that allows you to look at particular
relationships in relation to time periods. For example, you can look at how relationships have changed
over a certain time period.

An ODS Object is principally used for analyzing the status of data at a certain point in time. This allows
you to see what relationships are currently like. Exceptionally you can also track history in ODS Objects
by adding a date to the key fields of the ODS Object.

It is generally true to say that it is not always necessary to implement ODS Objects in every scenario.
Rather, it depends on the requirements of each scenario. You should only use ODS if the requirements of
your scenario fit one of the three usage possibilities outlined above (Inbound ODS, Consistent ODS,
Application-related ODS). An ODS Object placed in the data flow
to an InfoCube without having a function does nothing except hinder loading performance.



Infocube, DSO, Multiprovider
Info Cube
Info Cube is structured as Star Schema (extended) where a fact table is surrounded by different dim
table that are linked with DIM'ids. And the data wise, you will have aggregated data in the cubes.
Infocube contains maximum 16(3 are sap defines and 13 are customer defined) dimensions and
minimum 4(3 Sap defined and 1 customer defined) dimensions with maximum 233 key figures and 248
characteristic.
The following InfoCube types exist in BI:
. InfoCubes
. VirtualProviders

There are two subtypes of InfoCubes: Standard, and Real-Time. Although both have an extended star schema
design, Real-Time InfoCubes (previously called Transactional InfoCubes) are optimized for direct update, and do
not need to use the ETL process. Real-Time InfoCubes are almost exclusively used in the BI Integrated Planning
tool set. All BI InfoCubes consists of a quantity of relational tables arranged together in a star schema.

Star Schema
In the star schema model, the fact table is surrounded by dimension tables. The fact table is usually very
large (millions to billions of records), while the dimension tables are comparatively small (a few thousand
to a few million records). In practice, the fact table holds transaction data and the dimension tables hold
master data.
The dimension tables are specific to one fact table; they are not shared across other fact tables. When
another fact table needs the same product dimension data, for example, a separate dimension table
specific to that new fact table has to be created.
This situation creates data management problems such as master data redundancy, because the very
same product is duplicated in several dimension tables instead of being shared from one single master
data table. The extended star schema solves this problem.




Extended star schema
In the extended star schema, the BW star schema model, the dimension tables do not contain master
data; it is stored externally in the master data tables (texts, attributes, hierarchies).
A characteristic in a dimension table points to the relevant master data through its SID table, and the
SID table points to the characteristic's attributes, texts, and hierarchies.
This multi-step navigation adds extra overhead when executing a query. The benefit of this model,
however, is that all fact tables (InfoCubes) share common master data tables. The SID table concept
also allows users to implement multi-language and multi-hierarchy OLAP environments, and it supports
slowly changing dimensions.
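The SID lookup described above can be sketched as a small, illustrative data model (all table contents and field names here are invented for the example, not SAP code):

```python
# Illustrative sketch of the extended star schema: the fact table references
# dimension tables via DIM IDs, the dimension table holds SIDs, and the SID
# table resolves the characteristic value against shared master data.

# Shared master data table, keyed by SID (surrogate ID)
material_master = {
    1: {"material": "E620", "material_group": "PENS"},
    2: {"material": "E621", "material_group": "PENS"},
}

# SID table: characteristic value -> surrogate ID
material_sid = {"E620": 1, "E621": 2}

# Dimension table: DIM ID -> SIDs of the characteristics in the dimension
dim_material = {10: {"material_sid": 1}, 11: {"material_sid": 2}}

# Fact table: DIM IDs plus key figures
fact = [
    {"dim_material": 10, "amount": 400},
    {"dim_material": 11, "amount": 500},
]

def amount_for_material(material):
    """Resolve material -> SID -> DIM IDs -> fact rows, summing the key figure."""
    sid = material_sid[material]
    dim_ids = {d for d, row in dim_material.items() if row["material_sid"] == sid}
    return sum(r["amount"] for r in fact if r["dim_material"] in dim_ids)
```

Because the master data sits in its own tables, a second cube could reuse `material_master` and `material_sid` unchanged, which is exactly the redundancy problem the extended star schema removes.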



MultiProvider
A MultiProvider is a special InfoProvider that combines data from several InfoProviders and makes it
available for reporting. Like InfoSets and VirtualProviders, the MultiProvider itself does not contain any
data; its data comes exclusively from the InfoProviders on which it is based. A MultiProvider can be made
up of various combinations of the following InfoProviders:
. InfoCubes
. DataStore objects
. InfoObjects
. InfoSets
. Aggregation levels (slices of an InfoCube to support BI Integrated Planning)

Use
A BEx query can only be written against a single InfoProvider. To a query, a MultiProvider is a single
InfoProvider, but through it multiple providers can be accessed indirectly.
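The union behaviour of a MultiProvider can be sketched as follows (provider names and rows are invented for illustration; the real union happens inside the OLAP engine):

```python
# A MultiProvider holds no data of its own; at query time it forms a union of
# the rows delivered by its underlying InfoProviders.

cube_rows = [{"material": "E620", "amount": 400}]   # rows from an InfoCube
dso_rows = [{"material": "E621", "amount": 500}]    # rows from a DataStore object

def multiprovider_read(providers):
    """Union of all underlying providers' result sets."""
    result = []
    for rows in providers:
        result.extend(rows)
    return result

rows = multiprovider_read([cube_rows, dso_rows])
```

A single query against the MultiProvider thus sees rows from both providers, even though it is formally written against only one InfoProvider.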




DataStore object
Since a DataStore object is designed like a table, it contains key fields (document number and item, for
example) and data fields. Data fields can not only be key figures but also character fields (order status,
customer, or time, for example). You can use a delta update to update DataStore object data into
connected InfoCubes or into additional DataStore objects or master data tables (attributes or texts) in the
same system or in different systems. In contrast to multidimensional DataStores for InfoCubes, data in
DataStore objects is stored in flat, transparent database tables. Fact and dimension tables are not
created.

With DataStore objects, you can not only update key figures cumulatively, as with InfoCubes, but also
overwrite data fields. This is especially important for transaction-level documents that change in the
source system. Document changes involve not only numerical fields, such as order quantities, but also
non-numerical ones such as ship-to party, delivery date, and status. Since the OLTP system overwrites
these records when changes occur, DataStore objects must often be modeled to overwrite the
corresponding fields and update to the current value in BI.
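The difference between the two update modes can be sketched like this (field names and values are illustrative, not SAP code):

```python
# Sketch of the two update modes: an InfoCube can only add key figures
# cumulatively, while a DSO can overwrite the whole record for the same key.

def cube_update(store, key, qty):
    # Cumulative: quantities for the same key are added up
    store[key] = store.get(key, 0) + qty

def dso_update(store, key, record):
    # Overwrite: the latest document version replaces the stored one
    store[key] = record

cube, dso = {}, {}
cube_update(cube, "order-1", 10)
cube_update(cube, "order-1", 2)   # a document change is simply added

dso_update(dso, "order-1", {"qty": 10, "status": "open"})
dso_update(dso, "order-1", {"qty": 12, "status": "shipped"})  # replaces the record
```

The cube ends up with an aggregated quantity, while the DSO holds the current document status, which is what overwriting non-numerical fields like "status" requires.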

DataStore Object Types
SAP BI distinguishes between three DataStore object types: Standard, Write Optimized, and Direct
Update. These three flavors of DataStore Objects are shown in the following figure.

1. The Standard DataStore Object consists of three tables (activation queue, active data table, and
change log). It is completely integrated in the staging process. In other words, data can be loaded into
and out of the DataStore Objects during the staging process. Using a change log means that all changes
are also written and are available as delta uploads for connected data targets.

Architecture and Functions of Standard DataStore Objects

Standard DataStore objects consist of three tables:
Active Data table
This is where the current status of the data is stored. This table contains a semantic (business-related)
key that can be defined by the modeler (order number, item, or schedule line, for example). It is very
important that the key be defined correctly, as a match on the key triggers the special delta processing
during the activation phase (discussed later). Reporting via BEx also uses this table.
Change Log table
During the activation run, changes are stored in the change log. Here, you can find the complete history
of the changes, since the content of the change log is not automatically deleted. The connected targets
are updated from the change log if they are supplied with data from the DataStore object in the delta
method. The change log is a PSA table and can also be maintained in the PSA tree of the Data
Warehousing Workbench. The change log has a technical key consisting of a request, data package, and
data record number.
Activation Queue table
During the DTP, the records are first written to this table. This step is necessary due to the complex logic
that is then required by the activation process.
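A minimal sketch of the activation run across the three tables, assuming a simplified record layout (this is illustrative Python, not the actual activation logic):

```python
# Sketch of standard-DSO activation: records move from the activation queue to
# the active data table; a before/after image pair is written to the change log
# so that connected targets can be updated by delta.

def activate(queue, active, change_log):
    for key, new in queue:
        old = active.get(key)
        if old is not None:
            # Before image reverses the previous key-figure value for delta targets
            change_log.append({"key": key, "amount": -old["amount"], "type": "before"})
        change_log.append({"key": key, "amount": new["amount"], "type": "after"})
        active[key] = new
    queue.clear()  # the activation queue is emptied after activation

active, log = {}, []
queue = [("4711", {"amount": 100})]
activate(queue, active, log)          # first load: only an after image
queue = [("4711", {"amount": 120})]
activate(queue, active, log)          # change: before image -100, after image 120
```

Summing the change-log amounts reproduces the current active value, which is why a delta-fed InfoCube stays consistent with the DSO.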




Schema for a Standard DataStore Object

2. The Write-Optimized DataStore Object is a newer kind of DataStore Object. It is targeted at the
warehouse level of the architecture and has the advantage of quicker loads.
3. A Direct Update DataStore Object (previously the 3.x transactional ODS) has only the active data
table. This means it is not as easily integrated in the staging process; instead, this DataStore object type
is filled using APIs and can be read via a BAPI.

Attributes

Attributes are InfoObjects that already exist and are assigned logically to the new characteristic.

Navigational Attributes

A navigational attribute is an attribute of a characteristic that is treated much like a characteristic in the
Query Designer: you can perform drilldowns, filters, and so on with it when designing a query.

Imp Note:
While creating the InfoObject, on the Attributes tab page, the navigational attributes have to be switched on.
While designing the cube, the navigational attributes must be checkmarked to make use of them.
Features / Advantages
A navigational attribute acts like a characteristic in reporting; all navigation functions in the OLAP
processor are possible.
Filters, drilldowns, and variables are possible in reporting.
It always holds the present truth (current master data), since the data is fetched from the master data
tables and not from the InfoCube; historical values are not available through navigational attributes.
Disadvantages:
Leads to lower query performance.
In the enhanced star schema of an InfoCube, navigation attributes lie one join further out than
characteristics. This means that a query with a navigation attribute has to run an additional join.
If a navigation attribute is used in an aggregate, the aggregate has to be adjusted using a change
run as soon as new values are loaded for the navigation attribute.
http://help.sap.com/saphelp_nw04s/helpdata/EN/80/1a63e7e07211d2acb80000e829fbfe/frameset.htm
Transitive Attributes

A navigational attribute of a navigational attribute is called a transitive attribute. In other words, if a
navigational attribute itself has further navigational attributes (as its own attributes), those are called
transitive attributes.
For example, consider a characteristic Material with Plant as its navigational attribute; Plant in turn has
the navigational attribute Material Group. Material Group is then a transitive attribute, and a drilldown is
needed on both Plant and Material Group.
To drill down on both, we need both Material and Plant in the InfoCube (to fetch the data through
navigational attributes we need the master data tables, hence we need to checkmark/select both of them
in the cube).
http://help.sap.com/saphelp_nw04s/helpdata/EN/6f/c7553bb1c0b562e10000000a11402f/frameset.htm
If the cube contains both Material and Plant:
The dimension table containing Material and Plant holds the DIM ID, the SID of Material, and the SID of
Plant. Since both SIDs exist, each navigational attribute is referenced correctly.
If the cube contains only Material:
The dimension table holds the DIM ID and the SID of Material only. Since the SID for the first-level
navigational attribute (Plant) does not exist, the navigational attribute is not referenced correctly.
Exclusive Attributes / Attribute Only / Display Attribute
If you set the Attribute Only indicator (General tab page for characteristics, Additional Properties tab
page for key figures) when creating a characteristic, it can only be used as a display attribute for another
characteristic and not as a navigational attribute.
Features:
It cannot be included in InfoCubes.
It can be used in DSOs, InfoSets, and characteristics used as InfoProviders. In these InfoProviders, the
characteristic is not visible during read access (at runtime).
This means it is not available in the query; if the InfoProvider is used as the source of a transformation
or DTP, the characteristic is not visible.
It is only for display in a query and cannot be used for drilldowns in reporting.
Exclusive attributes:

If you choose "exclusively attribute" for a key figure, the created key figure can only be used as an
attribute for another characteristic; it cannot be used as a dedicated key figure in an InfoCube.
While creating a key figure: Additional Properties tab page, checkbox "Attribute Only".
http://help.sap.com/saphelp_nw04s/helpdata/en/a0/eddc370be9d977e10000009b38f8cf/frameset.htm

Info Set (Join)

An InfoSet is a virtual provider. InfoSets allow you to analyze data from several InfoProviders by using
combinations of master-data-bearing characteristics, InfoCubes, and DataStore objects. The system
collects information from the tables of the relevant InfoProviders. If you are joining large sets of data from
master data or from DSO objects, SAP recommends that you use an InfoSet: this improves performance
because fewer temporary tables are required and the join is executed in the database itself.
There are 4 types of joins:
1) Left outer join
2) Right outer join
3) Temporal join (based on a date field)
4) Equal join
Inner Join:

With an inner join, there must be an entry in all the tables used in the view.

Outer Join:

With an outer join, you can join the tables even if there is no entry in some of the tables used in the view.
Inner join between Table 1 and Table 2, where column D in both tables forms the join condition:

Table 1              Table 2
A  B  C  D           D  E  F  G  H
a1 b1 c1 1           1  e1 f1 g1 h1
a2 b2 c2 1           3  e2 f2 g2 h2
a3 b3 c3 2           4  e3 f3 g3 h3
a4 b4 c4 3

Inner join result (only rows with a matching D value in both tables):
A  B  C  D  D  E  F  G  H
a1 b1 c1 1  1  e1 f1 g1 h1
a2 b2 c2 1  1  e1 f1 g1 h1
a4 b4 c4 3  3  e2 f2 g2 h2

Left outer join result (all rows of Table 1; unmatched rows filled with NULL):
A  B  C  D  D    E    F    G    H
a1 b1 c1 1  1    e1   f1   g1   h1
a2 b2 c2 1  1    e1   f1   g1   h1
a3 b3 c3 2  NULL NULL NULL NULL NULL
a4 b4 c4 3  3    e2   f2   g2   h2
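The two join types can also be sketched in code on the same sample tables (a plain-Python illustration of the semantics, not how the database executes the join):

```python
# Table 1 rows: (A, B, C, D); Table 2 rows: (D, E, F, G, H). Join on column D.
table1 = [("a1", "b1", "c1", 1), ("a2", "b2", "c2", 1),
          ("a3", "b3", "c3", 2), ("a4", "b4", "c4", 3)]
table2 = [(1, "e1", "f1", "g1", "h1"), (3, "e2", "f2", "g2", "h2"),
          (4, "e3", "f3", "g3", "h3")]

def inner_join(t1, t2):
    # Only rows whose D value exists in both tables
    return [r1 + r2 for r1 in t1 for r2 in t2 if r1[3] == r2[0]]

def left_outer_join(t1, t2):
    # Every row of the left table; unmatched rows are padded with None (NULL)
    out = []
    for r1 in t1:
        matches = [r2 for r2 in t2 if r1[3] == r2[0]]
        if matches:
            out.extend(r1 + r2 for r2 in matches)
        else:
            out.append(r1 + (None,) * 5)
    return out
```

The inner join drops the a3 row (no D = 2 in Table 2), while the left outer join keeps it with NULL values, matching the tables above.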

What makes the difference between an inner join and a left outer join?

An inner join returns only the matching records from both tables. A left outer join returns all records of
the left table: those matching the right table as well as the non-matching ones.

The data that can be selected with a view depends primarily on whether the view implements an inner
join or an outer join. With an inner join, you only get the records of the cross-product for which there is an
entry in all tables used in the view. With an outer join, records are also selected for which there is no
entry in some of the tables used in the view.

The set of hits determined by an inner join can therefore be a subset of the hits determined with an outer
join.

Database views implement an inner join: the database only provides those records for which there is an
entry in all the tables used in the view. Help views and maintenance views, however, implement an
outer join.

Temporal Join

A temporal join is a join containing at least one time-dependent characteristic. For example, a join
contains the following time-dependent InfoObjects (in addition to other objects that are not
time-dependent):

InfoObjects in the join          Valid from    Valid to
Cost center (0COSTCENTER)        01.01.2009    31.05.2009
Profit center (0PROFIT_CTR)      01.03.2009    30.09.2009

Where the two time intervals overlap, i.e. the validity area that the InfoObjects have in common, is
known as the valid time interval of the temporal join:

Temporal join                    Valid from    Valid to
Valid time interval              01.03.2009    31.05.2009

You define an InfoSet on the characteristic 0PROFIT_CTR, which contains the responsible person
(RESP) as a time-dependent attribute, and on the characteristic 0COSTCENTER, which also contains
the responsible person as a time-dependent attribute.

These characteristics contain the following records:


Profit Center Responsible Person DATEFROM* DATETO*
BI A 01.01.2009 30.06.2009
BI B 01.07.2009 31.12.9999

Cost Center Profit Center Responsible Person DATEFROM* DATETO*
4711 BI X 01.01.2009 31.05.2009
4711 BI Y 01.06.2009 31.12.2009
4711 BI Z 01.01.2010 31.12.9999

If both characteristics are used in a join and connected via the profit center, not all six possible
combinations of the above records are valid. Instead, only the following four:


PROFITC RESP Cost Center Profit Center Responsible Person
BI A 4711 BI X (01.01.2009-31.05.2009)
BI A 4711 BI Y (01.06.2009-30.06.2009)
BI B 4711 BI Y (01.07.2009-31.12.2009)
BI B 4711 BI Z (01.01.2010-31.12.9999)
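The interval rule behind a temporal join can be sketched as follows (dates are illustrative; only record pairs with a non-empty overlap survive the join):

```python
# Sketch of the temporal-join rule: two time-dependent records may only be
# combined where their validity intervals overlap.
from datetime import date

def overlap(a_from, a_to, b_from, b_to):
    """Common validity interval, or None if the intervals do not overlap."""
    start, end = max(a_from, b_from), min(a_to, b_to)
    return (start, end) if start <= end else None

# Cost center valid 01.01.2009-31.05.2009, profit center 01.03.2009-30.09.2009
iv = overlap(date(2009, 1, 1), date(2009, 5, 31),
             date(2009, 3, 1), date(2009, 9, 30))
```

Applying `overlap` to every cost-center / profit-center record pair and keeping only the non-`None` results yields exactly the valid combinations of a temporal join.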

Equal Join

A join condition determines the combination of records from the individual objects that are included in the
resulting set. Before an InfoSet can be activated, the join conditions have to be defined in such a way (as
equal join conditions) that all the available objects are connected to one another, either directly or
indirectly.

An equal join is only possible between fields with the same values; the technical requirement is that both
fields have the same data type and length.

Definitions of some Objects in BI/BW
Info Area:

An element for grouping meta-objects in the BI system.

Each InfoProvider is assigned to an InfoArea; the resulting hierarchy is displayed in the Data
Warehousing Workbench. In addition to their properties as InfoProviders, InfoObjects can also be
assigned to different InfoAreas.

An InfoArea is a top-level object that contains the data models in it. In general, InfoAreas are used to
organize InfoCubes and InfoObjects: each InfoCube is assigned to an InfoArea, and through an
InfoObject catalog each InfoObject is assigned to an InfoArea as well.

An InfoArea contains all the objects used to evaluate a business process.


Info Object Catalogs:

An InfoObject catalog is a collection of InfoObjects grouped according to application-specific criteria.

There are two types of InfoObject catalogs: characteristic and key figure.

InfoObjects (characteristics and key figures) are the basic data model of the SAP Business Information
Warehouse (BW/BI). InfoObjects are stored in folders called InfoObject catalogs, and the InfoObject
catalogs are in turn stored in InfoAreas.

An InfoObject catalog is assigned to an InfoArea, and an InfoObject can be included in several
InfoObject catalogs.


Info Objects:

Business evaluation objects are known in BI/BW as InfoObjects. They are divided into characteristics (for
example, customer), key figures (for example, revenue), units (for example, currency or amount unit),
time characteristics (for example, fiscal year), and technical characteristics (for example, request
number).

InfoObjects are the smallest information units in BI/BW. They structure the information needed to create
the data targets.

An InfoObject with attributes or texts can be either a pure data target or an InfoProvider (if it is being
reported on).



Application Component

Application Components are used to organize DataSources. They are analogous to InfoAreas.


DataSource
A DataSource is not only a structure in which source system fields are logically grouped together, but also
an object that contains ETTL-related information.
Four types of DataSources exist:
DataSources for transaction data
DataSources for characteristic attributes
DataSources for characteristic texts
DataSources for characteristic hierarchies

If the source system is R/3, replicating DataSources from a source system will create identical DataSource
structures in the BI/BW system.

Info Package:
An InfoPackage specifies when and how to load data from a given source system. BW generates a
30-character technical name starting with ZPAK for each InfoPackage.



PSA
The Persistent Staging Area is a data staging area in BW. It allows us to check data in an intermediate
location before it is sent on to its destinations in BW.
The PSA stores data in its original source system format. In this way, it gives us a chance to examine and
analyse the data before we process it to further destinations. It is essentially a temporary storage area;
retention is based on the client's data specifications and settings.
SID
SID (Surrogate-ID) translates a potentially long key for an InfoObject into a short four-byte integer, which
saves I/O and memory during OLAP.
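The SID assignment can be sketched as a simple lookup-or-assign table (illustrative only; the key value formats are invented):

```python
# Sketch of surrogate-ID assignment: each distinct characteristic value gets a
# small integer SID the first time it is seen; later lookups reuse it, so joins
# and OLAP operations work on compact integers instead of long keys.

sid_table = {}

def get_sid(value):
    """Return the SID for a value, assigning the next integer if it is new."""
    if value not in sid_table:
        sid_table[value] = len(sid_table) + 1
    return sid_table[value]

get_sid("CUSTOMER-000000000000004711")  # long key gets a small SID
get_sid("CUSTOMER-000000000000000815")
```

Repeated lookups of the same value always return the same SID, which is what makes the SID stable enough to store in dimension tables.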

Star schema
A star schema is a technique used in data warehouse database design to facilitate data retrieval for
online analytical processing (OLAP).






Business Content
Business Content is a complete set of BI/BW objects developed by SAP to support the
OLAP tasks. It contains roles, workbooks, queries, InfoCubes, key figures,
characteristics, transformations, and extractors for SAP R/3 and other mySAP
solutions.



Compound attribute
A compound attribute differentiates a characteristic to make the characteristic uniquely
identifiable. For example, if the same characteristic data from different source systems
mean different things, then we can add the compound attribute 0SOURSYSTEM (source
system ID) to the characteristic; 0SOURSYSTEM is provided with the Business Content.



Data packet size
For the same amount of data, the data packet size determines how many work processes
will be used in data loading: the smaller the data packet size, the more work processes
are needed.
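The packet arithmetic is simple (record counts here are illustrative):

```python
# Sketch of the packet-size trade-off: for a fixed number of records, a smaller
# packet size means more packets, and hence more work processes are consumed.
import math

def packets_needed(total_records, packet_size):
    # Each packet carries at most packet_size records
    return math.ceil(total_records / packet_size)

small = packets_needed(100_000, 10_000)  # small packets -> more packets
large = packets_needed(100_000, 50_000)  # large packets -> fewer packets
```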



Data Warehouse
Data Warehouse is a dedicated reporting and analysis environment based on the star
schema database design technique and requiring special attention to the data ETTL
process.



Delta update
The Delta update option in the InfoPackage definition requests BI/BW to load only the
data that have been accumulated since the last update. Before a delta update occurs, the
delta process must be initialized.

Equal Join

A field X in Table 1 is joined with the identical field X in Table 2.


Surrogate ID - Master Data Tables
A standard InfoCube consists of a fact table surrounded by dimension tables. SID tables link these
dimension tables to the master data tables.

The SID is a surrogate ID generated by the system. The SID tables are created when we create a master
data InfoObject. In the SAP BI extended star schema, a distinction is made between two self-contained
areas: the InfoCube on the one hand, and the master data tables with their connecting SID tables on the
other.

The master data does not reside in the extended star schema itself, but in separate tables which are
shared across all the star schemas in SAP BI.

A unique numeric ID, the SID, is generated, which connects the dimension tables of the InfoCube to the
master data tables. The dimension tables contain the DIM IDs and the SIDs of the characteristic
InfoObjects; via the SID table, the master data (attributes and texts of the InfoObject) is accessed.

List of Technical Tables

F - Fact table (uncompressed) - contains the cube's data request by request (B-tree index)
E - Fact table (compressed) - contains compressed data without request IDs (request ID is zero) (bitmap index)
M - View of master data table - /BI0/MMATERIAL
P - Time-independent master data table - /BI0/PMATERIAL
Q - Time-dependent master data table - /BI0/QMATERIAL

H - Hierarchy table - /BI0/HMATERIAL
J - Hierarchy interval table - /BI0/JMATERIAL
K - Hierarchy SID table - /BI0/KMATERIAL
I - SID hierarchy structure - /BI0/IMATERIAL
S - SID table - /BI0/SMATERIAL
X - Time-independent SID table for attributes - /BI0/XMATERIAL
Y - Time-dependent SID table for attributes - /BI0/YMATERIAL

T - Text table - /BI0/TMATERIAL
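A small helper that reproduces the naming pattern above for a standard (0-namespace) InfoObject (this is only a sketch of the convention; it does not query any system):

```python
# Builds the technical table name for a standard InfoObject from the single-
# letter table type listed above, e.g. S -> /BI0/SMATERIAL.

PREFIXES = {
    "S": "SID table",
    "P": "Time-independent master data",
    "Q": "Time-dependent master data",
    "T": "Text table",
    "X": "Time-independent SID attribute table",
    "Y": "Time-dependent SID attribute table",
    "H": "Hierarchy table",
    "M": "View of master data table",
}

def table_name(infoobject, kind):
    """E.g. table_name('MATERIAL', 'S') -> '/BI0/SMATERIAL'."""
    if kind not in PREFIXES:
        raise ValueError(f"unknown table type {kind!r}")
    return f"/BI0/{kind}{infoobject}"
```

Customer-defined objects use the /BIC/ namespace instead of /BI0/, so the same pattern applies with a different prefix.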


SID -- Master data Tables


Surrogate keys are automatically generated uniform keys that uniquely identify specific real-world key
values.

SIDs are the connectivity link between the DIM IDs and the master data tables.


Let us take the example of the Material master data tables and understand the various connections with
the SID table.





Compounding InfoObject
In compounding, a field or another object is attached to an InfoObject. A characteristic is compounded
when its definition is incomplete without the definition of another characteristic InfoObject.

For example, the InfoObject Location (0PP_LOCAT) has to be assigned the compounding InfoObject
Plant (0PLANT).

Here Plant (0PLANT) is the superior InfoObject of Location (0PP_LOCAT).


The InfoObject 0PLANT has to be installed/created/activated first, followed by
Location (0PP_LOCAT).


While creating the InfoObject, we assign the superior object on the Compounding tab page of the
InfoObject.

The compounding InfoObject acts as part of the compound primary key of the master data table.


When a compounded Info object is included in an Info cube, all corresponding info
objects are added to the Info cube.


If Location(0PP_LOCAT) is to be included in the Info Cube , Plant (0Plant) is
automatically added to the Info Cube.




When a Compounded Info object is included in the DSO , all corresponding Objects are
added to the DSO Key fields/Data Fields.


If Location(0PP_LOCAT) is to be included in the DSO , Plant (0Plant) is automatically
added to the DSO.


If an InfoObject is defined as an attribute, it cannot be included as a compounding object.

The total length of the compounded InfoObjects cannot exceed 60 characters.

An InfoObject defined with the "attribute only" setting cannot be included in a compounding object.

In BEx report output, the compounded InfoObject appears as 0PLANT/0PP_LOCAT.
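The compound primary key at the master data table can be sketched like this (plant and location values are invented):

```python
# Sketch of compounding: the master data of 0PP_LOCAT alone is ambiguous, so
# Plant is part of the key -- the table is keyed by (plant, location).

location_master = {}

def add_location(plant, location, description):
    # Compound primary key: the same location code may exist in two plants
    location_master[(plant, location)] = description

add_location("1000", "DOCK1", "Loading dock, plant 1000")
add_location("2000", "DOCK1", "Loading dock, plant 2000")
```

Without the compound key, the second `DOCK1` would overwrite the first; with it, both records coexist, which is exactly why the superior InfoObject must accompany the compounded one in cubes and DSOs.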


SAP BI Terminology
Info Area
An Info Area is like a folder in Windows. It is used to organize InfoCubes, InfoObjects,
MultiProviders, and InfoSets in SAP BW.
InfoObject Catalog
Similar to an InfoArea, an InfoObject catalog is used to organize InfoObjects based on their type, so
we have InfoObject catalogs of type Characteristic and Key Figure.
Info Objects
The InfoObject is the basic unit in SAP BI, used to create any structure in SAP BI.
Each field in the source system is referred to as an InfoObject in SAP BI.
We have 5 types of InfoObjects: Characteristic, Key Figure, Time Characteristic, Unit
Characteristic, and Technical Characteristic.
Data Source
A DataSource defines the transfer structure.
The transfer structure indicates which fields are transferred from the source system, and in what
sequence.
We have 4 types of DataSources:
o Attr: used to load master data attributes
o Text: used to load text data
o Hier: used to load hierarchy data
o Transaction data: used to load transaction data to an InfoCube or ODS.
Source System
A source system is an application from which SAP BW extracts data.
We use a source system connection to connect different OLTP applications to SAP BI.
We have different adapters / connectors available:
o SAP Connection Automatic
o SAP Connection Manually
o My Self Connection
o Flat file Interface
o DB connect
o External Systems with BAPI
Info Package
An InfoPackage is used to schedule the loading process.
An InfoPackage is specific to a DataSource.
All the properties we see in the InfoPackage depend on the properties of the DataSource.

BI/BW Tips


BI Tip # 1

Struggling because there is no sample data for your newly developed InfoCube? Why not try this?

Try the ABAP program CUBE_SAMPLE_CREATE. It allows you to enter the required sample data directly
into your cube without using any flat files, source system configuration, etc. Records are added to the
cube in one request using the APO interface, without a monitor log.

Try exploring even further with all the options available there.

Needless to say, try it on a sandbox or development system before attempting it in a production environment.


BI Tip # 2
To check whether your query has already been migrated to the SAP BI7 version:

Check the table RSZCOMPDIR: enter your query's technical name in the field COMPID and execute.

If the field VERSION in the table RSZCOMPDIR has a value less than 100, the query is still in the 3.x
version; if it is 100 or more, it has already been migrated.


BI Tip # 3
A couple of interesting tricks -

RSDG_MPRO_ACTIVATE is a program to activate MultiProviders directly in the production system. If
there are any inactive MultiProviders due to a transport or some other reason, this will activate the
MultiProviders without affecting reporting.
Needless to say, try it on a sandbox or development system before attempting it in a production environment.


BI Tip # 4
Worried about data loss while changing an extract structure? Why not try this?

Run the report RMCSBWCC before making changes to an LO extract structure or while importing any
changes done to the extract structure.

This report checks whether any client in the system contains data in the V3 update (extraction queue)
for that application (specific to the extract structure provided as input). If there is data in the V3 update,
you need to start the update for that application first. Without doing this you will not be able to change the
extract structure, and if you import the changes anyway, you may end up losing data.


BI Tip # 5
RSICCONT is a table used to delete a request from a data target, including a DSO or cube. Facing a
problem deleting a request from a DSO or cube while loading data? Try this.

Needless to say, try it on a sandbox or development system before attempting it in a production environment.


BI Tip # 6
Most of you are aware of the RSZDELETE transaction, but many face the issue of how to delete queries
en masse on one InfoProvider, or several at once. The answer is in the same transaction.
For example, you need to delete 10 out of 20 queries based on an InfoCube. Normally, developers tend
to delete them one by one, but you can delete multiple queries as well.
InfoCube: ZSD_C03
Total queries: 25
To delete: 15

In RSZDELETE:
Type = REP
InfoCube = ZSD_C03

Execute.
You get a list of all queries; select the ones that need to be deleted.

PS: This is an extremely DANGEROUS transaction. Please use it responsibly.


BI Tip #7
Replicate a Single DataSource

Use the function module RSAOS_METADATA_UPLOAD to replicate a single DataSource: put the
logical system in the field I_LOGSYS, put the OLTP DataSource name in the field I_SOURC, and execute.

Trust you shall check this in a sandbox / development system first.


BI Tip #8
Load Monitoring Table
Did you explore the table RSSTATMANPART?

Do you struggle every day with data load status monitoring and reporting? Did you ever think of
checking the table RSSTATMANPART?

Want to check the Manage tabs of multiple targets or InfoProviders together? Use the table
RSSTATMANPART to obtain the same information. This data will tell you whether the load to the target
was successful, and also whether the target is reportable.

I happened to come across a couple of interesting pieces of content covering this on SDN, shared here
for your reference.
This one is ABAP code for status monitoring:
http://wiki.sdn.sap.com/wiki/display/BI/Current+Day+Data+Load+Monitor+Program

The other is a link to a forum thread:
https://forums.sdn.sap.com/thread.jspa?threadID=1232358

Needless to say, try it on a sandbox / IDES system before going forward.

Happy Learning.


Display Cube Dimension Percentage Used
To display, per dimension, the percentage used with regard to the number of entries in the fact table(s),
the function module RSDEW_INFOCUBE_DESIGNS can be used.
Enter the name of the cube for the input parameter I_INFOCUBE and execute the function module.
The export parameter E_T_TABLSIZE will provide the desired result.




Infoproviders (Physical and Virtual)
Data Store Objects

A DataStore object is used to store consolidated and cleansed data
(transaction data or master data) on a document (atomic) level.
Although DataStore objects can store master data, and there are
valid reasons to do so, they primarily store detailed transaction data.
They can be used to support detailed operational reporting, or can
be part of the warehouse, where they can be used to hold years of
"potentially needed" data.

One major difference between DataStore objects and InfoCubes is that
DataStore objects have the option to overwrite records, whereas InfoCubes
do not. This is a huge difference.

In contrast to the multidimensional data storage of InfoCubes, data in
DataStore objects is stored in flat, transparent database tables. Fact and
dimension tables are not created.

With DataStore objects, we can not only update key figures cumulatively,
as with InfoCubes, but also overwrite data fields. This is especially
important for transaction-level documents that change in the source
system. Document changes involve not only numerical fields, such
as order quantities, but also non-numerical ones such as ship-to party,
delivery date, and status. Since the OLTP system overwrites these
records when changes occur, DataStore objects must often be modeled
to overwrite the corresponding fields and update to the current value
in BI.

The Standard DataStore object consists of three tables (activation queue,
active data table, and change log). It is completely integrated in the
staging process: data can be loaded into and out of the DataStore
object during staging. Using a change log means that all changes are
also written and are available as delta uploads for connected data targets.

Write-Optimized is a newer kind of DataStore object. It is targeted at
the warehouse level of the architecture and has the advantage of
quicker loads.

A Direct Update DataStore object has only the active data table.
This means it is not as easily integrated in the staging process.
Instead, this DataStore object type is filled using APIs and can be
read via a BAPI.

The following interfaces are delivered for this purpose:

BAPI: BAPI_ODSO_READ_DATA_UC
Function modules:
RSDRI_ODSO_INSERT
RSDRI_ODSO_INSERT_RFC
RSDRI_ODSO_MODIFY
RSDRI_ODSO_MODIFY_RFC
RSDRI_ODSO_UPDATE
RSDRI_ODSO_UPDATE_RFC
RSDRI_ODSO_DELETE_RFC

Direct Update DS Object usage in the APD:
The APD (Analysis Process Designer) is a robust tool set for advanced analysts. It allows
analysts to manipulate and mine data for specific analysis goals.

Direct Update DS Object usage in SEM Business Consolidation (BCS):
During the consolidation of two legal entities, accounting entries are
made to direct update DS objects to reflect the elimination of
internal transactions.

The number of DataStore objects that must be implemented depends
on the complexity of the scenario to be implemented. Furthermore,
a DataStore object can also form the end of a staging process. In other words,
an InfoCube does not necessarily have to be updated from the DataStore
object.

Integrating a New Infocube Into an Existing Data Flow:

1. Create a transformation between the original source and the new
target objects.
2. Create both a full and delta DTP.
3. Manually execute the full DTP.
4. Create a new process chain to execute the delta DTP.
5. Integrate the new chain into your existing nightly process chains.

Infoproviders are all objects that provide information to queries.
Infoproviders are broken down into two groupings: those that store
the data persistently (in database tables) and those that do not
store the data in BI, but rather collect it when the query is
executed. The former grouping of infoproviders is sometimes called
data targets. The ones that do not store data persistently in BI
include Infosets, Virtual Providers, and Multiproviders.

Virtual providers are special. Like all providers, they feed
information to queries. However, a virtual provider represents a logical
view; unlike Infocubes, no data is physically stored in BI. The data is
taken from the source systems only after a query has been executed.
There are three types of Virtual Providers, and each type is
distinguished by the way in which it retrieves data:

Based on DTP for direct access
Based on a BAPI
Based on a function module

Direct Access, a Definition:

A BI tool set that allows queries to be executed on temporary Virtual
Providers that are tied directly to the source system.

Direct access is appropriate when:
We require up-to-date data from an SAP source system
Only a small quantity of data is transferred (good query design)
Only a small number of users work with queries on the data
set at any one time.

There are differences between analytical reporting and operational
reporting. For example, an analysis of why accounts receivable is
growing 5% a year would be a BI report. On the other hand, a list of
unpaid invoices to support dunning a customer for what they owe would
be an OLTP-based report.

This theory of separation of duties was completely valid when BI systems
were first developed, but now the line is blurred. It becomes even more
so with the introduction of Real-Time Data Acquisition (RDA). RDA is a
SAP NetWeaver 2004s tool set that supports some limited operational
reporting needs inside BI.

With RDA, data is transferred into BI at regular intervals during the
day and is then updated in the Datastore objects, which are directly
available for reporting. Background processes (daemons) in the BI system
initiate the Infopackages and data transfer processes assigned to them
(to update the PSA data into Datastore objects).

Real-Time Data Acquisition (RDA)

RDA is a framework for analyzing information from various sources in
real time as soon as the data becomes available in the source system.

Lower time scale than for scheduled/batch data acquisition
Stream oriented
Almost immediate availability for reporting (less than 1 minute)

RDA is used in tactical decision making.

Using a Webservice Push:

A web service push can write the data directly from the source to the
PSA. The data transfer is not controlled by BI. An infopackage (for full
upload) is required only to specify request-related settings for RDA; it
is never executed, as the data is pushed into the BI PSA by a web service.

Using the BI Service API: If the data originates in an SAP source
system, the BI Service API is used. Many of the steps are the same as
with normal delta extractions, such as the requirement for an
infopackage to initialize the delta.

With RDA, it is the delta loads that are special. If the DataSource
allows for RDA (a checkbox in RSA2), we can choose to utilize it in
this way. This involves the creation of a specific RDA data transfer process.

The RDA process focuses on obtaining data very frequently from your
source system. Due to the limitations discussed above, often you only
get to decide whether the feed to your targets will be a normal,
periodically scheduled infopackage or RDA.

Infoproviders exist for plan and actual data of cost center
transactions. This separate plan vs. actual design supports BI
Integrated Planning with one dedicated cube, and supports loading
actual data from the SAP source system. Your users now have
requirements for plan vs. actual comparison reports. We want to
investigate a Multiprovider to solve this need.

Virtual Provider (Remote Cube)
VirtualProviders represent a logical view. Unlike InfoCubes, data is not stored physically in
BI. The data is taken from the source system only when a query is executed. You use this
VirtualProvider if you want to display data from non-BI data sources in BI without having to
copy the data set into the BI structures. The data can be local or remote.
Following are the three types of Virtual Provider.
. Based on DTP for Direct Access
The Direct Access (DTP-filled) Virtual-Provider allows you to define queries
With direct access to transaction or master data in other source systems.
. Based on a BAPI
The BAPI-based option allows reporting using data from non-SAP systems.
The external system transfers the requested data to the OLAP processor
via the BAPI.
. Based on a Function Module
Function-Module-Based Virtual-Provider supplies a clean interface to
allow your custom code to be the source data. It is a very flexible way to
populate a Virtual-Provider.


These options are shown in the creation GUI for Virtual-Providers


Aggregates
In an aggregate, the data set of an infocube is saved redundantly and persistently in a consolidated
form in the database.

USE: The objective of using aggregates is to improve reporting performance.
Aggregates make it possible to access infocube data quickly in reporting. Aggregates serve in a
similar way to database indexes to improve performance.
The BW OLAP processor selects an appropriate aggregate during a query run in the navigation
step. If no appropriate aggregate exists, the BW OLAP processor retrieves data from the infocube.
Aggregates are multidimensional data structures, similar to infocubes, containing an aggregated
subset of information in summarized form. An aggregate is also called a "baby cube" of an
infocube. An aggregate stores the data of an infocube redundantly and in consolidated form, and
is registered in the aggregate directory table RSDDAGGRDIR.
Aggregates are used primarily for one reason: to improve reporting performance. When queries
run faster, they take less processing time and resources, and the end user gets the response back
more quickly.

Life Cycle of Aggregates:
1. Aggregates are defined by the administrator against an existing infocube.
2. Aggregates are updated when loading data into infocubes, using the same update rules as the
basic infocube.
3. During data loading, data is aggregated to the specified level of infocube dimension (characteristic).
4. During querying, the OLAP processor dynamically determines whether an aggregate exists to
satisfy the query.

Aggregates have 3 names:
1. A system-defined 25-character unique name.
2. A 6-digit integer number.
3. A user-defined description.
Relationship between aggregates, queries, and infocubes:
1. Infocube 1:N aggregates, i.e.,
one infocube can maintain more than one aggregate.
2. Query step 1:1 aggregate, i.e.,
one aggregate is used per query (navigation) step.
When do you use aggregates?
It is recommended to use aggregates:
1. If an infocube contains a lot of data.
2. If attributes are used in queries often.
3. If the execution and navigation of a query leads to delays for a group of queries.
4. If you want to speed up queries reporting on characteristic hierarchies by aggregating data to a
specific hierarchy level.

Aggregation Level:
An aggregation level indicates the degree of detail to which the data of the infocube is compressed.
There are 3 aggregation levels.

1. All characteristic values (*): The data is grouped by all the values of the characteristic or navigation
attributes.
2.Hierarchy Level(H): The data is grouped by Hierarchy node.
3.Fixed value(F): The data is filled according to a constant or single value.

Important Information about Aggregates:
An aggregate holds transaction data.
An infocube can have several aggregates.
Each query (navigation) step can use only one aggregate.
Aggregates must be adjusted after changes to the MD attributes or hierarchies (change run process).
Aggregates are built against infocubes only, not against ODS objects.
Aggregates are useful for key figures with SUM, MIN, MAX properties.
Aggregates are selected by the OLAP processor during query processing.
Aggregates can be built with display attributes.
Aggregates are maintained in the table RSDDAGGRDIR.
Switch OFF aggregates during data loading to improve loading performance.
Apply the rollup process to fill the aggregates with the infocube data.
Switch ON aggregates after rollup to improve reporting performance.
Maintenance of aggregates includes:
Create new aggregate
Activate and roll up
Deactivate
Delete
Copy with template
Aggregate tree
Pre-analysis of the aggregate filling
Switching off an aggregate results in no data loss; the structure remains as it is, but the aggregate
(baby cube) won't be filled with data.
Deactivating an aggregate results in data loss for the aggregate, but the structure of the aggregate
remains as it is.
Deleting an aggregate results in data loss as well as structure loss.
Copy with template allows you to create new aggregates using an existing aggregate as a template.
To check the data in an aggregate (baby cube), place the cursor on the specific aggregate, then select

Goto menu > Aggregate Data.
The technical name of an aggregate is a 6-digit integer number, e.g. 100426.
Its dimension tables are /BIC/D100426I, /BIC/D100426P, /BIC/D100426U.
The fact table of an aggregate maintains an additional key figure 0FACTCOUNT (a counter for the
occurrence of records).

BI Table Types (MD, SID, DIM, etc)

Attribute tables:
Attribute tbl for Time Independent attributes:
/BI*/P<characteristic_name>
stored with characteristic values
Attribute tbl for Time Dependent attributes:
/BI*/Q<characteristic_name>
Fields DATETO & DATEFROM are included in time dependent attribute tbl.
stored with characteristic values
Dimension tables:
Dimension tbls (i.e. DIM tables): /BI*/D<Cube_name><dim.no.>
stores the DIMID, the pointer between fact tbl & master data tbl
data is inserted during upload of transact.data (data is never changed, only inserted)
Examples:
/bic/D(cube name)P is the package dimension of a content cube
/bic/D(cube name)U is the unit dimension of a content cube
/bic/D(cube name)T is the time dimension of a content cube
/bic/D(cube name)I is the user defined dimension of a content cube
External Hierarchy tables:
/BI*/I*, /BI*/J*, /BI*/H*, /BI*/K*
/BI0/0P...
are tables that occur in the course of an optimized preprocessing that contains many
tables.
bic/H(object name) hierarchy data of object
For more information see Note 514907.
Fact tables:
In SAP BW, there are two fact tables for including transaction data for Basis
InfoCubes: the F and the E fact tables.
/bic/F(cube name) is the F-fact table of a content cube
/bic/E(cube name) is the E-fact table of a content cube
The Fact tbl is the central tbl of the InfoCube. Here key figures (e.g. sales volume) &
pointers to the dimension tbls are stored (dim tbls, in turn, point to the SID tbls).
If you upload data into an InfoCube, it is always written into the F-fact table.
If you compress the data, the data is shifted from the F-fact table to the E-fact table.
The F-fact tables for aggregates are always empty, since aggregates are
compressed automatically
After a change run, the F-fact table can also have entries, as when you use the
functionality 'do not compress requests for aggregates'.
E-fact tbl is optimized for Reading => good for Queries
F-fact tbl is optimized for Writing => good for Loads
see Note 631668
Master Data tables
/BI0/P<char_name>
/bic/M(object name) master data of object
Master data tables are independent of any InfoCube
Master data & master data details (attributes, texts & hierarchies) are stored.
Master data table stores all time independent attributes (display & navigational
attributes)
Navigational attributes tables:
SID Attribute table for time independent navigational attributes:
/BI*/X<characteristic_name>
SID Attribute tbl for time dependent navigational attributes:
/BI*/Y<characteristic_name>
Nav. attribs can be used for navigation purposes (filtering, drill down).
The attribs are not stored as char values but as SIDs (master data IDs).
P table:
P-table only gets filled if you load master data explicitly.
As soon as the SID table is populated, the P tbl is populated as well
SID table:
SID tbl: /BI*/S<characteristic>
stores the char value (eg customer number C95) & the SID. The SID is the pointer
that is used to link the master data tbls & the dimension tbls. The SID is generated during
the upload (uniqueness is guaranteed by a number range obj).
Data is inserted during the upload of master data or of transactional data
S table gets filled whenever transaction data gets loaded. That means if any new value appears for
that object in the transaction data, the SID table gets filled.
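As an illustration of how the S table links a characteristic value to its SID, the lookup can be sketched as follows. This assumes the standard S-table layout (characteristic key field plus SID field) and uses the sample material E620 from earlier in this guide; verify the table structure in SE11 for your system.

```abap
* Hedged sketch: read the SID for a material value from the S table of
* 0MATERIAL. Field names follow the standard S-table layout
* (characteristic key + SID); verify them in SE11 before use.
DATA l_sid TYPE rssid.

SELECT SINGLE sid
  FROM /bi0/smaterial
  INTO l_sid
  WHERE material = 'E620'.   " sample material value from this guide

IF sy-subrc = 0.
  " l_sid now holds the surrogate ID that the dimension tables point to
ENDIF.
```

This SID, not the material number itself, is what the dimension tables store, which is why master data can change without rewriting the fact table.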
Text table:
Text tbl: /BI*/T<characteristic>
stores the text for the chars
data is inserted & changed during the upload of text data attribs for the InfoObject
stored either language dependent or independent



M - View of master data table

Q - Time Dependent master data table

H - Hierarchy table

K - Hierarchy SID table

I - SID Hierarchy structure

J - Hierarchy interval table

S - SID table

Y - Time Dependent SID table

T - Text Table

F - Fact Table - Direct data for cube ( B-Tree Index )

E - Fact Table - Compress cube ( Bitmap Index )

What are the Setup Tables?
The Setup Tables are the objects from which the Business Warehouse system is going to extract data for
Full loads and Initialization on LO DataSources.

When a Full extraction or a Delta Initialization is going to be performed, the Setup Tables need to be filled
with all the historical data to be loaded to the Business Warehouse system. This ensures the extractors
won't need to access the always busy application tables (like EKKO or VBAK).

Their content can be deleted at any time without affecting the R/3 transaction data. In fact, after the
full/init load has been performed, the information contained in the Setup Tables is likely already out
of date, so deleting their content does no harm. You may think of them as a "PSA" on the R/3 side.

If the load performed was an Initialization, the next records to be extracted will be sent to the Delta
Queues.

RSRV
Analysis and Repair of BW Objects:

This transaction contains a collection of reports to check the consistency of the metadata and the
data in the system, and offers repair options for most inconsistencies.
These reports should be run periodically as a preventive maintenance measure to detect any data
corruption.
The RSRV transaction is used as a testing tool.

Naming Convention in SAP BW

SAP BW has a naming convention related to its objects.

SAP BW prefixes /BI0/ to the names of Business Content database objects. It prefixes /BIC/ to
the database objects created by users.

If a user creates characteristics type info object ZPRODUCT and activates it, information will be
stored in following:

Data element: /BIC/OIZPRODUCT
SID table: /BIC/SZPRODUCT
Master data table: /BIC/PZPRODUCT
Text table: /BIC/TZPRODUCT
View: /BIC/MZPRODUCT
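Given these generated names, reading the active master data of the hypothetical ZPRODUCT object follows directly from the convention. A sketch (the table and its fields exist only once ZPRODUCT has been created and activated as above):

```abap
* Hedged sketch: read active, time-independent master data records of the
* hypothetical InfoObject ZPRODUCT from its generated P table.
DATA lt_product TYPE STANDARD TABLE OF /bic/pzproduct.

SELECT * FROM /bic/pzproduct
  INTO TABLE lt_product
  WHERE objvers = 'A'.       " 'A' = active version only
```

The same pattern applies to any characteristic: swap in the P, Q, T, or M table name generated for that object.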

When an info cube ZSALES is created and activated, information will be stored in the following:

Fact table view: /BIC/VZSALESF
Transparent fact table: /BIC/FZSALES
Dimension tables: /BIC/DZSALES1 to /BIC/DZSALESN, where N is the number of dimensions;
/BIC/DZSALESP, /BIC/DZSALEST, /BIC/DZSALESU for Data Packet, Time & Unit (maximum
16 dimensions).
Classes of Data
There are 3 classes of data in SAP BW.

1. Master Data: describes business entities (e.g. customer, material).
2. Transaction Data: describes business events.
3. Configuration Data: describes system settings.

Master Data is further classified into 3 types:
1. Attribute Data: describes properties of a master data value.
2. Text Data: descriptions of master data values (language dependent or independent).
3. Hierarchy Data: parent/child relationships between master data values.

Transaction Data is further divided into 2 types:
1. Document Data
1. Header Data
2. Item Data
3. Schedule line Data
2. Summary Level Data

If a hierarchy is used on an info object ZDATE, the following tables will be created:

Hierarchy table: /BIC/HZDATE
Hierarchy SID table: /BIC/KZDATE
SID hierarchy structure: /BIC/IZDATE
Hierarchy interval table: /BIC/JZDATE

Routine Lesson 1

Scenario: The DataSource does not have division, and we need to derive it from material, which
does exist in the DataSource. Populate the cube with the division.
Solution:
Division is not delivered by the DataSource, so it must be derived from material using the
/BI0/PMATERIAL table. WA_TH_MATERIAL is an internal (hashed) table typed on the
structure T_MATERIAL, and WA_MATERIAL is a work area of the same type. T_MATERIAL
has material and division as its two fields. In the end routine, the internal table is read into
WA_MATERIAL using the material of the current result record as the key.
Start routine: use a SELECT statement to load the internal table.
CODE SNIPPET:
IF wa_th_material[] IS INITIAL.
* Load division by material
  SELECT material division
    INTO TABLE wa_th_material
    FROM /bi0/pmaterial
    WHERE objvers = 'A'.
ENDIF.
End routine: read the internal table populated in the start routine into a work area using a key.
If a record is found, assign the division found to the end routine field.
CODE SNIPPET:
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
  READ TABLE wa_th_material
    INTO wa_material
    WITH TABLE KEY material = <result_fields>-material.
  IF sy-subrc = 0.
    <result_fields>-division = wa_material-division.
  ENDIF.
ENDLOOP.
DATA DEFINITION:
TYPES:
  BEGIN OF t_material,
    material TYPE /bi0/oimaterial,
    division TYPE /bi0/oidivision,
  END OF t_material.
DATA: wa_th_material TYPE HASHED TABLE OF t_material
        WITH UNIQUE KEY material,
      wa_material    TYPE t_material.


Routine Lesson 2

Scenario: The cube needs a customer number, and the DataSource does not provide it. The
DataSource does, however, contain a country code such as DE or FR. Based on the country code,
a particular customer number is assigned, e.g. DE01J45 for DE and FR023J4 for FR. This
customer number needs to be populated in the cube.
SOLUTION:
In this scenario, the transformation from the DSO to the cube is modified: the start routine
loads a constant from a custom table, using a field of that table as the selection key. The end
routine then uses a CASE statement on the country code and fills the RESULT_FIELDS
accordingly.
START ROUTINE:
Code snippet:
SELECT SINGLE low FROM zbw_constant_tab
  INTO g_de_billto
  WHERE vnam = 'JV_DE_BILLTO'.
END ROUTINE:
Code snippet:
CASE <result_fields>-/bic/zjvsource.
  WHEN 'DE'.
    <result_fields>-ship_to    = g_de_billto.
    <result_fields>-sold_to    = g_de_billto.
    <result_fields>-billtoprty = g_de_billto.
    <result_fields>-payer      = g_de_billto.
ENDCASE.

Routine lesson 3

Scenario: An info object in the cube has to be updated with a constant value, and this info
object does not come from the datasource. Update the info object in the cube with a
constant value.
Solution: Go to the DSO and add the info object that is not being sourced from the
datasource. In the transformation, right-click on the info object and choose RULE
DETAILS. Then choose the rule type Constant and enter the value.
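If the "constant" ever needs to depend on logic, the same effect can be achieved with a small field routine instead of the Constant rule type. A sketch (the method name is the generated routine stub of a transformation field routine, RESULT is its standard export parameter, and '01' is a placeholder value):

```abap
* Hedged sketch: a transformation field routine that returns a fixed
* value for the target info object, as an alternative to the Constant
* rule type. '01' is a placeholder; replace it with your own value.
METHOD compute_zconst.   " hypothetical generated routine name
  RESULT = '01'.         " constant value assigned to the target field
ENDMETHOD.
```

For a truly fixed value the Constant rule type above is simpler; a routine is only worth it when the value may later become conditional.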

BW InfoProvider Design Specifications




InfoProvider Identification

InfoProvider Name: InfoCube: 0IC_C03 Material Movements
Standard/Custom: Standard
Business Content

Std.Business Content
w/ Modifications
Custom
Module(s): CO FI HR MM PM PP SD PS
Other (specify):


Document History
Created By: Rodrick Gary Date Created: 04/03/2006
Approved by: Date Approved:
Change History (To track changes to Request spec. after the specifications have been approved)
Date
Modified
Modified
by
Brief Description of Change Approved
by
Date
Approved



1. Overview

This InfoCube allows you to evaluate STOCK inventory. This InfoCube contains valuated stock,
consignment stock, stock in transit, blocked stock, and inspection stock. Also, this InfoCube contains the
issue quantity and receipt quantity of valuated stock, inspection stock, consignment stock, blocked stock,
and stock in transit.

The InfoSources and DataSources are:
InfoSource Description DataSource
2LIS_03_BX Material Stocks 2LIS_03_BX
2LIS_03_BF Material Movements From Inventory Mgmt 2LIS_03_BF
2LIS_03_UM Revaluations In Inventory Management 2LIS_03_UM

The InfoSource 2LIS_03_BX allows you to transfer material stocks from an SAP R/3 system to SAP BW.
The InfoSource allows you to set up stocks for stock InfoCubes.
The InfoSource 2LIS_03_BF delivers the data for material movements from MM Inventory Management
(MM-IM).
The InfoSource 2LIS_03_UM delivers the data for revaluations from MM Inventory Management (MM-IM).

Standard reports such as the following are available:
Stock Overview
Stock In Transit
Inventory Aging
Vendor Consignment Stock
Consignment Stock at Customer


RICE Number Description

<insert report names here >









2. Data Flow



3. Business Content Installation
3.1. R/3: Data Transfer to the SAP Business Information Warehouse
3.1.1. Classification:
Standard Business Content X
Standard Business Content w/ Modifications __ Refer to: __________
Custom __ Refer to: __________

3.1.2. In each of the designated source systems, transfer the following DataSource(s):
DataSource: 2LIS_03_BX
DataSource: 2LIS_03_BF
DataSource: 2LIS_03_UM
Source System:
F Box (Development Client TBD) X
G Box (Development Client TBD) X
E Box (Development Client TBD) X
Note: This assignment has been activated per standard business content. No modifications have
been made.

Use R/3 Menu Path:
Data Transfer to the SAP Business Information Warehouse > Business Content DataSources >
Transfer Business Content DataSources

3.2. BW: Set up InfoSource / DataSource Assignment
3.2.1. Classification:
Standard Business Content
Standard Business Content w/ Modifications Refer to: __________
Custom Refer to: __________

3.2.2. Modeling: Source Systems: Replicate the DataSources of the following Application Components:
Application Component(s): Inventory Management
3.2.3. In each of the designated source systems, assign the following DataSource - InfoSource relationships:
DataSource InfoSource
2LIS_03_BX 2LIS_03_BX
2LIS_03_BF 2LIS_03_BF
2LIS_03_UM 2LIS_03_UM

Source System:
F Box (Development Client TBD) X
G Box (Development Client TBD) X
E Box (Development Client TBD) X

3.2.4. Install the Transfer Rules and Communication Structure.
Note: This assignment has been activated per standard business content. No modifications have
been made.

3.3. Activate InfoCube: 0IC_C03
3.3.1. Classification:
Standard Business Content X
Standard Business Content w/ Modifications __ Refer to: __________
Custom __ Refer to: __________

3.3.2. From the Business Content area of the Administrator Workbench:
3.3.2.1. Set Grouping to:
Only Necessary Objects __
In Data Flow Before X
In Data Flow Afterwards __
In Data Flow Before & Afterwards __

3.3.2.2. Set Collection Mode to:
Collect Automatically X
Start Manual Collection __ Refer to: __________

3.3.2.3. Insert the following object(s) for Collection:
Object: 0IC_C03
3.3.2.4. Install
Note: This object has been activated per standard business content. No modifications have been
made.

4. Dimensional Model (Include InfoProvider, master data, related ODS, and related aggregated
cubes)
4.1. Material Stocks/Movements (as of 3.0B) ( 0IC_C03 )
Note: This InfoCube is set to be activated per standard business content.













Compound Key
Navigational Attribute




Navigational Attributes
InfoObject_NAV
Name
for NAV
NAV
turned
on
(yes=x)
Standard
Business
Content
Custom Reference Section
0PLANT__0COUNTRY
Country
Of Plant
X X

0MATERIAL__0DIVISION Division X X

0MATERIAL__0MATL_CAT
Material
Category
X X

0MATERIAL__0MATL_GROUP
Material
Group
X X

0MATERIAL__0MATL_TYPE
Material
Type
X X





5. DataSource to InfoSource Mappings

5.1. DataSource 2LIS_03_BX to Infosource 2LIS_03_BX

Note: This DataSource assignment has been activated per standard business content. No
modifications have been made.

InfoObject
(InfoSource)
Description
Data Element
in R/3
Field in
Transfer
Structure
Transfer
Routine
Standard
Business
Content
Custom Reference
0BASE_UOM


Base Unit

MEINS BASME N/A X

0BATCH

Batch

CHARG_D CHARG N/A X

0BWAPPLNM

Application
comp.

RSAPPLNM BWAPPLNM N/A X

0CPPVLC

BW:
Purchase
Value

MCBW_GEO BWGEO N/A X

0CPQUABU

BW:
Amount in
BUnitM

MCBW_MNG BWMNG N/A X

0CPSTLC

Sales Val.
Loc Curr.

MCBW_GVP BWGVP N/A X

0CPSVLC

BW: Sales
Value LC

MCBW_GVO BWGVO N/A X

0INDSPECSTK

Valn of
Spec. Stock

KZBWS KZBWS N/A X

0LOC_CURRCY
Local
currency

HWAER HWAER N/A X

0MATERIAL

Material

MATNR MATNR N/A X

0PLANT

Plant

WERKS_D WERKS N/A X

0PSTNG_DATE

Posting
date

BUDAT BUDAT N/A X

0SOLD_TO

Sold-to
party

WEMPF WEMPF N/A X

0STOCKCAT
Stock
Category
BSTTYP BSTTYP N/A X

0STOCKTYPE Stock type MCBW_BAUS BSTAUS N/A X

0STOR_LOC
Storage
location
LGORT_D LGORT N/A X

0VAL_TYPE
Valuation
type
BWTAR_D BWTAR N/A X

0VENDOR Vendor LIFNR ELIFN N/A X



5.2. DataSource 2LIS_03_BF to Infosource 2LIS_03_BF

Note: This DataSource assignment has been activated per standard business content. No
modifications have been made.
InfoObject
(InfoSource)
Description
Data Element
in R/3
Field in Transfer
Structure
Transfe
r
Routine
Standar
d
Busines
s
Content
Custo
m
Reference
0STORNO Reversal indicator STORNO STORNO N/A X

0RT_PROMO Promotion WAKTION AKTNR N/A X

0VAL_CLASS Valuation class BKLAS BKLAS N/A X

0DOC_DATE Document Date BLDAT BLDAT N/A X

0STOCKTYPE Stock type BSTAUS BSTAUS N/A X

0STOCKCAT Stock Category BSTTYP BSTTYP N/A X

0PSTNG_DATE Posting date BUDAT BUDAT N/A X

0COMP_CODE Company code BUKRS BUKRS N/A X

0BWAPPLNM Application comp.
BWAPPLN
M
RSAPPLNM
N/A X

0MOVETYPE Movement Type BWART BWART N/A X

0STOCKRELEV
BW: Stock
Relevance
BWBREL MCBW_BREL
N/A X

0CPPVLC
BW: Purchase
Value
BWGEO MCBW_GEO N/A X

0CPSVLC
BW: Sales Value
LC
BWGVO MCBW_GVO N/A X

0CPSTLC
Sales Val. Loc
Curr.
BWGVP MCBW_GVP
N/A X

0CPQUABU
BW: Amount in
BUnitM
BWMNG MCBW_MNG
N/A X

0VAL_TYPE Valuation type BWTAR BWTAR_D N/A X

0PROCESSKEY
BW: Transaction
Key
BWVORG MCW_BWVOR
G
N/A X

0BATCH Batch CHARG CHARG_D N/A X

0MATMREA Reason for Mvt. GRUND MB_GRUND N/A X

0BUS_AREA Business area GSBER GSBER N/A X

0COSTCENTER Cost Center KOSTL KOSTL N/A X

0SOLD_TO Sold-to party WEMPF WEMPF N/A X

0WHSE_NUM
Warehouse
number
LGNUM LGNUM N/A X

0STOR_LOC Storage location LGORT LGORT_D N/A X

0STRGE_BIN Storage bin LGPLA LGPLA N/A X

0STRGE_TYPE Storage type LGTYP LGTYP N/A X

0VENDOR Vendor LIFNR ELIFN N/A X

0MATERIAL Material MATNR MATNR N/A X

0DOC_NUM
BW: Document
Number
KDAUF KDAUF N/A X

0BASE_UOM Base Unit MEINS MEINS N/A X

0DOC_YEAR
BW: Document
Year
MJAHR MJAHR N/A X

0PROFIT_CTR Profit Center PRCTR PRCTR N/A X

0DCINDIC Debit/Credit SHKZG SHKZG N/A X

0LOC_CURRCY Local currency WAERS HWAER N/A X

0PLANT Plant WERKS WERKS_D N/A X

0FISCVARNT Fiscal Year Variant NOPOS MC_NOPOS N/A X

0CPNOITEMS BW: Number PERIV PERIV N/A X

0CO_AREA Controlling area KOKRS KOKRS N/A X

0DOC_ITEM
BW: Document
Line No ZEILE MBLPO
N/A X

0VALUE_LC Amt. in local curr. DMBTR MC_DMBTR N/A X

0COORDER Order AUFNR AUFNR N/A X

0QUANT_B Qty in Base UoM MENGE MC_MENG N/A X

0MOVE_PLANT Receiving Plant UMWRK UMWRK N/A X

0RECORDMOD
E Update Mode ROCANCEL ROCANCEL
N/A X

0RT_RMAPIDA
RMA
Phys.Invent.Date

N/A X

0BWCOUNTER Counter BWCOUNTER MCBW_COUNTER N/A X

0INDSPECSTK
Valn of Spec.
Stock KZBWS KZBWS
N/A X

5.3. DataSource 2LIS_03_UM to Infosource 2LIS_03_UM

Note: This DataSource assignment has been activated per standard business content. No
modifications have been made.
InfoObject
(InfoSource)
Description
Data Element in
R/3
Field in Transfer
Structure
Transfer
Routine
Standard
Business
Content
Custom Reference
0STORNO
Reversal
indicator
STORNO STORNO
N/A X

0RT_PROMO Promotion WAKTION AKTNR N/A X

0VAL_CLASS
Valuation
class
BKLAS BKLAS
N/A X

0DOC_DATE
Document
Date
BLDAT BLDAT
N/A X

0COMP_CODE
Company
code BUKRS BUKRS
N/A X

0BWAPPLNM
Application
comp.
BWAPPLNM RSAPPLNM
N/A X

0MOVETYPE
Movement
Type
BWART BWART
N/A X

0CPPVLC
BW:
Purchase
Value BWGEO MCBW_GEO
N/A X

0CPQUABU
BW:
Amount in
BUnitM
BWMNG MCBW_MNG
N/A X

0PROCESSKEY
BW:
Transaction
Key
BWVORG MCW_BWVORG
N/A X

0FISCYEAR Fiscal year N/A X

0BUS_AREA
Business
area GSBER GSBER
N/A X

0SOLD_TO
Sold-to
party WEMPF WEMPF
N/A X

0DCINDIC Debit/Credit SHKZG SHKZG N/A X

0FISCVARNT
Fiscal Year
Variant NOPOS MC_NOPOS
N/A X

0CPNOITEMS
BW:
Number PERIV PERIV
N/A X

0LOC_CURRCY
Local
currency WAERS HWAER
N/A X

0BASE_UOM Base Unit MEINS MEINS N/A X

0COSTCENTER Cost Center KOSTL KOSTL N/A X

0CO_AREA
Controlling
area KOKRS KOKRS
N/A X

0DOC_NUM
BW:
Document
Number KDAUF KDAUF
N/A X

0MATERIAL Material MATNR MATNR N/A X

0PSTNG_DATE
Posting
date
BUDAT BUDAT
N/A X

0VENDOR Vendor LIFNR ELIFN N/A X

0PLANT Plant WERKS WERKS_D N/A X

0QUANT_B
Qty in Base
UoM MENGE MC_MENG
N/A X

0VALUE_LC
Amt. in local
curr. DMBTR MC_DMBTR
N/A X

0RECORDMODE
Update
Mode ROCANCEL ROCANCEL
N/A X

0STOCKCAT
Stock
Category ROCANCEL ROCANCEL
N/A X

0STOCKTYPE Stock type BSTAUS BSTAUS N/A X

0INDSPECSTK
Valn of
Spec. Stock KZBWS KZBWS
N/A X




6. Update Rules to Data Target

6.1. InfoSource 2LIS_03_BF to Data Target Z0IC_ODS1

Note: This Data Target mapping has been activated per standard business content. No
modifications have been made.

6.2. InfoSource 2LIS_03_BX to Data Target Z0IC_ODS2

Note: This Data Target mapping has been activated per standard business content. No
modifications have been made.

6.3. InfoSource 2LIS_03_UM to Data Target Z0IC_ODS3

Note: This Data Target mapping has been activated per standard business content. No
modifications have been made.

6.4. Data Target(ODS) Z0IC_ODS1 to Data Target(IC) 0IC_C03

Note: This Data Target mapping has been activated per standard business content. No
modifications have been made.

6.5. Data Target(ODS) Z0IC_ODS2 to Data Target(IC) 0IC_C03

Note: This Data Target mapping has been activated per standard business content. No
modifications have been made.

6.6. Data Target(ODS) Z0IC_ODS3 to Data Target(IC) 0IC_C03

Note: This Data Target mapping has been activated per standard business content. No
modifications have been made.




7. CUSTOM DATA MAPPINGS
N/A
8. CUSTOM TABLE DEFINITIONS
N/A

9. Security Design (Authorization Objects)

TBD


10. Additional Design Specifications
There are no special design specifications for this cube.
A. Appendix
The following sections will explain in detail any customization that needs to be performed in the
respective areas

I. Infoobjects
This section contains configuration settings required for custom / customized info objects.
II. DataSources
This section contains configuration settings required for custom / customized data sources.
III. InfoSources
This section contains configuration settings required for custom / customized info objects.
a. Transfer Rules
This section contains configuration settings required for custom / customized transfer rules.
b. Communication Structures
This section contains configuration settings required for custom / customized communication structures.
IV. ODS
This section contains configuration settings required for custom / customized ODS.




Options Active?
BEx Reporting

ODS Object Type Standard

Unique Data Records

Check table for InfoObject

Set quality status to 'OK' automatically

Activate ODS object data automatically

Update data targets from ODS object automatically



Mapping table columns: InfoObject (Data Target), Description, InfoObject (InfoSource), Update Routine, Standard Business Content, Key Field, Custom Reference Section.
All entries: N/A.


V. InfoCube
This section contains configuration settings required for custom / customized Info Cube.

VI. Update Rules
This section contains configuration settings required for custom / customized Update Rules.

a. Into ODS
This section contains configuration settings required for custom / customized update rules into ODS.

b. Into InfoCube

INFOSET
Defining InfoSet:
An InfoSet is a semantic layer over the data sources. It is not itself a data
target; it describes data sources that are usually defined as joins between
ODS objects or InfoCubes and characteristics with master data.

What is a Join?
A join links data sources on their common fields. A time-dependent join, or
temporal join, is a join that contains an InfoObject that is a time-dependent
characteristic.
InfoSets are two-dimensional queries that we build upon
ODS objects/InfoCubes.
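The two join types an InfoSet can use (asked about in the questions of this section) can be illustrated outside BW with a minimal Python sketch. The material texts and sales rows here are invented for illustration, not real BW tables:

```python
# Hypothetical master data (characteristic with texts) and an ODS-like table.
materials = {"M1": "Pencil", "M2": "Pen", "M3": "Eraser"}  # material -> text
sales = [("M1", 100), ("M2", 250)]                         # (material, amount)

# Inner join: only materials that appear in BOTH data sources survive.
inner = [(m, materials[m], amt) for (m, amt) in sales if m in materials]

# Left outer join (materials as the left table): every material is kept,
# with None where no matching sales record exists.
left_outer = [(m, txt, next((amt for (sm, amt) in sales if sm == m), None))
              for (m, txt) in materials.items()]

print(inner)       # M1 and M2 only
print(left_outer)  # M1, M2 and M3 (M3 with amount None)
```

In an InfoSet the same choice decides whether master data records without transaction data show up in the report.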

Use of InfoSets:
InfoSets allow you to report on several InfoProviders by using
combinations of master-data-bearing characteristics and ODS
objects.

InfoSets are good for simple reporting needs with low data
volumes and conservative performance expectations.
InfoSets are best suited for snapshot-type reporting.
InfoSets are often used in combination with Virtual
InfoProviders for data reconciliation purposes.

So what are Classic InfoSets and InfoSets?
Important points to remember:
In SAP releases below 3.0 they are called Classic InfoSets;
from release 3.0 onwards they are called InfoSets.

A Classic InfoSet gives you a view of a data set that you report on
using an InfoSet query.
An InfoSet is a BW-specific view of data.
Classic InfoSets are not BW repository objects but SAP Web
Application Server objects.
An InfoSet query can be used to carry out tabular (flat) reporting
on InfoSets.

FEW QUESTIONS:
What is Inner join & Left outer join in InfoSet?
What are classic InfoSets?
What are InfoSets?
Differences between Classic InfoSet and InfoSet?

AGGREGATES AND MULTICUBES
AGGREGATES
Aggregates can be thought of as small "baby cubes".
An aggregate is a materialized, aggregated view of the data in
an InfoCube. In an aggregate, the data set of an InfoCube is
saved redundantly and persistently in a consolidated form in
the database, mainly to improve reporting
performance.

The SAP BW OLAP processor retrieves data from an appropriate
aggregate during a query navigation step. If no
appropriate aggregate exists, the OLAP processor retrieves the
data from the original InfoCube instead.

Aggregates are information stored in a DWH in a
summarized form.
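The idea of a materialized, summarized view can be sketched in a few lines of Python. The fact rows and characteristics below are hypothetical, and the real aggregate is of course a database table, not a dict:

```python
from collections import defaultdict

# Fact rows of a hypothetical sales InfoCube: (region, material, amount).
fact = [("N", "M1", 100), ("N", "M2", 50), ("S", "M1", 70), ("S", "M1", 30)]

def build_aggregate(rows, group_by):
    """Materialize a summarized view of the fact rows, grouped by one
    characteristic (column index 0 = region, 1 = material)."""
    agg = defaultdict(int)
    for row in rows:
        agg[row[group_by]] += row[2]  # sum the key figure
    return dict(agg)

# Persisted once; a query by region now reads 2 rows instead of 4.
region_aggregate = build_aggregate(fact, 0)
print(region_aggregate)  # {'N': 150, 'S': 100}
```

The OLAP processor's job is essentially to notice that a query groups only by region and to read the small pre-summed table instead of the full fact table.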

LIFE CYCLE OF AGGREGATES:
Aggregates are defined by the DBA against an existing InfoCube
and are updated when loading data into the InfoCube from an
InfoSource, using the same update rules as the InfoCube.

AGGREGATES HAVE 3 NAMES:
1. A system defined 25 digit unique name.
2. A 6 digit integer number.
3. A user defined description.

When do you choose Aggregates?
It is recommended to use aggregates in following situations:
1. If an InfoCube contains lot of data.
2. If attributes are used in queries often.
3. If the execution and the navigation of a query data leads to
delays with a group of queries.
4. If you want to speed up the execution time and the
navigation of a specific query.
5. If you want to speed up reporting with characteristics
hierarchies by aggregating the data into a specific hierarchy
level.

AGGREGATION LEVEL:
An aggregation level indicates the degree of details to which
the data of the InfoCube is compressed.

There are three levels of aggregation:
1. ALL CHARACTERISTICS (*)
2. HIERARCHY LEVEL (H)
3. FIXED VALUE (F)

1. ALL CHARACTERISTICS (*)
Data is grouped by all the values of characteristics (or)
navigational attributes.
2. HIERARCHY LEVEL (H)
Data is grouped at Hierarchy level.
3. FIXED VALUE (F)
The data is filled according to a single value.

IMPORTANT NOTES ON AGGREGATES:
An aggregate holds transaction data.
An InfoCube can maintain more than one aggregate.
Aggregates are built against InfoCubes only, not against
ODS objects.
Aggregates are used for key figures with aggregation
property SUM, MAX, or AVG, and not on display
attributes.
Aggregates must be adjusted after changes in the
master data or hierarchies.

NOTE: We use the function Attribute Change Run to update the
aggregates with the modified master data attributes and
Hierarchies.
Aggregates will be maintained in a table RSDDAGGRDIR

WHAT ARE THE KEY POINTS TO IMPROVE PERFORMANCE
DURING LOADING & REPORTING:
During data loading, switch off (deactivate) aggregates to improve loading
performance.
During reporting, keep aggregates switched on: queries that hit an
aggregate read far fewer records, which improves reporting performance.

MULTICUBE:
Defining a MultiCube:
A MultiCube is a union of basic cubes. The MultiCube itself does
not contain any data; the data reside in the basic cubes.
To a user, a MultiCube resembles a basic cube. When creating a
query, the user can select characteristics and key figures from
different basic cubes.


Why do we use a MultiCube?
Most users need to access only a subset of the information (data) in
an InfoCube.

Example: Among 4 regions North, South, East & West, East
region users are not allowed to access other regions' data. In
that case, we could create a new InfoCube that contains only
East region sales data, a subset of the original InfoCube. During
query execution the data set to process is then smaller, and thus
performance is increased.

What does SAP recommend, Aggregate or MultiCube?
SAP recommends the MultiCube. Why?
Because queries can be created on InfoCubes but not directly on
aggregates.

Suppose an InfoCube contains huge SALES and DELIVERY data from
4 regions. If we run a query on sales and delivery of the East
region only, it still searches the whole InfoCube, which is large and may
increase the query runtime, consequently degrading response time.
Also, when we do cross-subject analysis from purchase to
inventory, to sales, to delivery and billing, a single InfoCube
becomes so large that we cannot manage it, and query
performance is degraded.
To overcome this problem, BW offers the MultiCube technique:
instead of building one large InfoCube that contains both sales
and delivery data, we keep separate basic cubes and combine them.

The MultiCube contains no data, but simply links the basic
cubes together.
On a MultiCube we can create queries just as we do on a cube.
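A minimal Python sketch of the "union of basic cubes" idea. The two cubes and their key figures are hypothetical; the point is that the MultiCube function stores nothing itself:

```python
# Two hypothetical basic cubes; rows carry their own key figures.
sales_cube = [{"region": "East", "sales_qty": 10}]
delivery_cube = [{"region": "East", "delivered_qty": 8}]

def multicube_query(*basic_cubes):
    """The 'MultiCube': no data of its own, just a union of the
    underlying basic cubes, evaluated at query time."""
    rows = []
    for cube in basic_cubes:
        rows.extend(cube)
    return rows

result = multicube_query(sales_cube, delivery_cube)
print(result)  # both sales and delivery rows, read from their own cubes
```

A real query would additionally group the union by the shared characteristics (here, region), but the storage principle is the same.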





FEW QUESTIONS:
What are aggregates?
What are MultiCubes?
Differences between Aggregates & MultiCubes?
Can we create queries on aggregates or MultiCubes? How?
What are aggregation levels?
Aggregates have 3 names. What are they?
In what situations would you use or recommend aggregates?
Why does SAP recommend the MultiCube?
In which table are aggregates maintained?
Are aggregates used for key figures or display attributes?
Did you create Aggregates & MultiCubes? Explain the
scenario.
SAP BIW INFOCUBE REALTIME
CONCEPTS
CONTENTS
1. INFOCUBE-INTRODUCTION:
2. INFOCUBE - STRUCTURE
3. INFOCUBE TYPES
3.1 Basic Cube: 2 Types
3.1.1 Standard InfoCube
3.1.2 Transactional InfoCube
3.2 Remote Cubes: 3 Types
3.2.1 SAP Remote Cube
3.2.2 General Remote Cube
3.2.3 Remote Cube With Services
4. INFOCUBE TABLES- F,E,P,T,U,N
5. INFOCUBE-TOOLS
5.1 PARTITIONING
5.2 ADVANTAGES OF PARTITIONING:
5.3 CLASSIFICATION OR TYPES OF PARTITIONING
5.3.1 PHYSICAL PARTITIONING/TABLE/LOW LEVEL
5.3.2 LOGICAL PARTITIONING/HIGH LEVEL PARTITIONING
5.3.3 EXAMPLES ON PARTITIONING USING 0CALMONTH & 0FISCYEAR
5.3.3.1 ERRORS ON PARTITIONING
5.3.4 REPARTITIONING
5.3.4.1 REPARTITIONING TYPES
5.3.5 Repartitioning - Limitations- errors
5.3.6 EXAMPLES ON PARTITIONING USING 0CALMONTH & 0FISCYEAR
5.4 COMPRESSION OR COLLAPSE
5.5 INDEX/INDICES
5.6 RECONSTRUCTION
5.6.1 ERRORS ON RECONSTRUCTION
5.6.2 Key Points to remember while going for reconstruction
5.6.3 Why Errors Occur in Reconstruction?
5.7 STEPS FOR RECONSTRUCTION
5.8 ROLLUP
5.9 LINE ITEM DIMENSION/DEGENERATE DIMENSION
5.9.1 LINE ITEM DIMENSION ADVANTAGES
5.9.2 LINE ITEM DIMENSION DISADVANTAGES
5.10 HIGH CARDINALITY
6. INFOCUBE DESIGN ALTERNATIVES
6.1 ALTERNATIVE I : TIME DEPENDENT NAVIGATIONAL ATTRIBUTES
6.2 ALTERNATIVE II : DIMENSION CHARACTERISTICS
6.3 ALTERNATIVE III : TIME DEPENDENT ENTIRE HIERARCHIES
6.4 OTHER ALTERNATIVES:
6.4.1 COMPOUND ATTRIBUTE
6.4.2 LINE ITEM DIMENSION
7. FEW QUESTIONS ON INFOCUBES

1. INFOCUBE-INTRODUCTION:
The central objects upon which the reports and analyses in BW are based are called InfoCubes, and they
can be seen as InfoProviders. An InfoCube is a multidimensional data structure and a set of relational tables that
contain InfoObjects.

2. INFOCUBE - STRUCTURE
The structure of an InfoCube is the Extended Star Schema (ESS)/snowflake schema, which contains
1 Fact Table with key figures
n Dimension Tables with characteristics
n Surrogate ID (SID) tables, which link the master data tables & hierarchy tables
n Master Data Tables; these are time dependent and can be shared by multiple InfoCubes. A master data
table contains attributes that are used for presenting and navigating reports in the SAP (BW) system.

3. INFOCUBE TYPES:

Basic Cubes reside in the same database.
Remote Cubes reside on a remote system:
- an SAP RemoteCube resides on another R/3 system and uses SAPI
- a General RemoteCube resides on a non-SAP system and uses BAPI
- a RemoteCube with Services resides on an SAP or non-SAP system

3.1. BASIC CUBE: 2 TYPES
These are physically available in the same BW system in which they are specified, i.e. where their
metadata exists.
3.1.1. STANDARD INFOCUBE (FREQUENTLY USED)
Standard InfoCubes are common and are optimized for read access. They have update rules that enable
transformation of the source data, and loads can be scheduled.

3.1.2. TRANSACTIONAL INFOCUBE
Transactional InfoCubes are not frequently used; they are used only by certain applications such as
SEM & APO. Data is written directly into such cubes, bypassing update rules.

3.2. REMOTE CUBES: 3 TYPES
Remote cubes reside on a remote system; only their metadata lives in the BW system, so they are
considered virtual cubes. These are the remote cube types:

3.2.1. SAP REMOTE CUBE
The cube resides on another SAP R/3 system & communication is via the service API (SAPI).

3.2.2. GENERAL REMOTE CUBE
The cube resides on a non-SAP source system & communication is via BAPI.

3.2.3. REMOTE CUBE WITH SERVICES
The cube resides on any remote system, i.e. SAP or non-SAP, & is accessed via a user-defined function
module.

4. INFOCUBE TABLES - F, E, P, T, U, N
Transaction code: LISTSCHEMA
LISTSCHEMA > enter the name of the InfoCube (e.g. 0SD_C03) & execute. Upon execution the primary (fact)
table is displayed as an unexpanded node. Expand the node to see the tables under it.
These are the tables we can see under the expanded node:


5. INFOCUBE-TOOLS
5.1. PARTITIONING
Partitioning is the method of dividing a table into multiple smaller, independent or
related segments (either column wise or row wise) based on the available fields, which enables
quick access to the intended field values in the table.
To partition a data set, at least one of the two partitioning criteria 0CALMONTH & 0FISCPER must be
present in the InfoCube.

5.2. ADVANTAGES OF PARTITIONING:
Partitioning allows you to perform parallel reads of
multiple partitions, speeding up the query execution process.
By partitioning an InfoCube, reporting performance is enhanced because it is easier to search in
smaller tables, and maintenance becomes much easier.
Old data can be quickly removed by dropping a partition.
You can set up partitioning in InfoCube maintenance under Extras > Partitioning.

5.3. CLASSIFICATION OR TYPES OF PARTITIONING
5.3.1. PHYSICAL PARTITIONING / TABLE / LOW LEVEL
Physical partitioning, also called table or low-level partitioning, is restricted to time characteristics
and is done at the database level, only if the underlying database supports it.
Ex: Oracle, Informix, IBM DB2/390
A common way of partitioning is to create ranges. An InfoCube can be partitioned on a time slice using
time characteristics such as:
FISCAL YEAR (0FISCYEAR)
FISCAL YEAR VARIANT (0FISCVARNT)
FISCAL YEAR/PERIOD (0FISCPER)
POSTING PERIOD (0FISCPER3)
With physical partitioning, old data can be quickly removed by dropping a partition.
Note: No partitioning in BI 7.0, except DB2 (as it supports it).

5.3.2. LOGICAL PARTITIONING / HIGH-LEVEL PARTITIONING
Logical partitioning is done at the MultiCube (several InfoCubes joined into a MultiCube) or
MultiProvider level, i.e. the data target level. In this case related data are separated & joined into a MultiCube.
Here you are not restricted to time characteristics; you can also partition on plan & actual data,
regions, business areas, etc.
Advantages:
The MultiCube uses parallel sub-queries, which ultimately improves query performance.
Logical partitioning does not consume any additional database space.
When a sub-query hits a constituent InfoProvider, a reduced set of data is read from the smaller InfoCube
instead of one large target, even in the absence of a MultiProvider.
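Logical partitioning can be sketched in Python: records are routed into one small "cube" per partitioning value (here, year — the records and values are invented), so a sub-query only scans its own slice:

```python
# Logically partition hypothetical sales records into one cube per year.
records = [("2002", 100), ("2002", 50), ("2003", 75)]

cubes = {}  # year -> rows of that year's small "InfoCube"
for year, amount in records:
    cubes.setdefault(year, []).append((year, amount))

# A sub-query for 2003 now scans 1 record instead of all 3;
# a MultiCube over the yearly cubes would run such sub-queries in parallel.
hits_2003 = cubes.get("2003", [])
print(hits_2003)
```

The same routing idea applies to partitioning by region or by plan/actual version.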

5.3.3. EXAMPLES ON PARTITIONING USING 0CALMONTH & 0FISCYEAR
THERE ARE TWO PARTITIONING CRITERIA:
calendar month (0CALMONTH)
fiscal year/period (0FISCPER)
A data set can be partitioned using only one of the above two criteria at a time.
In order to partition, at least one of the two InfoObjects must be contained in the InfoCube.
If you want to partition an InfoCube using the fiscal year/period (0FISCPER) characteristic, you have to
set the fiscal year variant characteristic to constant.
After activating the InfoCube, the fact table is created on the database with a number of partitions
corresponding to the value range.
You can set the value range yourself.
Partitioning InfoCubes using Characteristic 0CALMONTH:
Choose the partitioning criterion 0CALMONTH and give the value range
From = 01.1998
To = 12.2003
So how many partitions are created after partitioning?
6 years * 12 months + 2 = 74 partitions are created:
2 partitions for values that lie outside the range, meaning < 01.1998 or > 12.2003.
You can also determine the maximum number of partitions created on the database for the fact table
of the InfoCube.
Suppose you choose 30 as the maximum number of partitions.
Resulting from the value range:
6 years * 12 calendar months + 2 marginal partitions (up to 01.1998, from 12.2003) = 74 single values.
Since 74 exceeds the maximum, the system groups three months at a time together in a partition:
4 quarterly partitions = 1 year.
So, 6 years * 4 partitions/year + 2 marginal partitions = 26 partitions are created on the database.
The performance gain is only obtained for the partitioned InfoCube if the time dimension of the InfoCube is
consistent. This means that, with partitioning via 0CALMONTH, all values of the 0CAL* characteristics of a
data record in the time dimension must fit each other.
Note: You can only change the value range when the InfoCube does not contain any data.
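The partition arithmetic above can be replayed with a small Python helper. This is a simplification: the real system chooses the grouping granularity itself, while this sketch hardcodes quarterly grouping whenever the single-month count exceeds the maximum:

```python
def partitions(years, max_partitions=None):
    """Number of 0CALMONTH partitions for a value range covering `years`
    full years, plus 2 marginal partitions for values outside the range.
    If a maximum is given and exceeded, group 3 months per partition."""
    single = years * 12 + 2
    if max_partitions is None or single <= max_partitions:
        return single
    return years * 4 + 2  # 4 quarterly partitions per year + 2 marginal

print(partitions(6))      # 74  (range 01.1998 - 12.2003)
print(partitions(6, 30))  # 26  (quarterly grouping under a limit of 30)
```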

PARTITIONING INFOCUBES USING THE CHARACTERISTIC 0FISCPER
The mandatory step here: set the value of the 0FISCVARNT characteristic to constant.

5.3.4. STEPS FOR PARTITIONING AN INFOCUBE USING 0CALDAY & 0FISCPER:
Administrator Workbench
> InfoSet maintenance
> double-click the InfoCube
> Edit InfoCube
> Characteristics screen
> Time Characteristics tab
> Extras
> IC-Specific Properties of InfoObject
> Structure-Specific Properties dialog box
> Specify constant for the characteristic 0FISCVARNT
> Continue
> In the dialog box enter the required details

5.3.5. Partition Errors:
F fact tables of a partitioned InfoCube can have partitions that are empty, or partitions that do not have a
corresponding entry in the related package dimension.
Solution 1: the report SAP_PARTITIONS_INFO_GET_DB4 helps you to analyze these problems. The
empty partitions of the F fact table are reported. In addition, the system issues an information message
if there is no corresponding entry for a partition in the package dimension table (an orphaned partition).
When the affected InfoCube was compressed, a database error occurred in DROP PARTITION, after the actual
compression. However, this error was not reported to the application. The logs in the area of
compression do not display any error messages. The error is not reported in the developer trace
(transaction SM50), the system log (transaction SM21) or the job overview (transaction
SM37) either.
The application thinks that the data in the InfoCube is correct, but the data of the affected requests or
partitions is not displayed in reporting because they do not have a corresponding entry in the package
dimension.
Solution 2: use the report SAP_DROP_FPARTITIONS to remove the orphaned or empty partitions
from the affected F fact tables, as described in note 1306747, to ensure that the database limit of 255
partitions per database table is not reached unnecessarily.

5.3.6. REPARTITIONING:
Repartitioning is partitioning applied to a cube that is already partitioned and has loaded data.
Actual & plan data versions come in here: the InfoCube already holds actual data loaded according to
the original partitioning plan. If we repartition, little data may remain in some partitions due to data
archiving over a period of time.
You can access repartitioning in the Data Warehousing Workbench via Administration > context menu
of your InfoCube.
5.3.6.1. REPARTITIONING - 3 TYPES:
A) Complete repartitioning,
B) Adding partitions to an E fact table that is already partitioned, and
C) Merging empty or almost empty partitions of an E fact table that is already partitioned.

5.3.7. REPARTITIONING - LIMITATIONS - ERRORS:
SQL Server 2005 partitioning limit issue: an error appears in SM21 every minute once the limit on the
number of partitions per SQL Server 2005 table (i.e. 1000) is reached.

5.4. COMPRESSION OR COLLAPSE:
Compression reduces the number of records by combining records with the same key that have been
loaded in separate requests.
Compression is critical, as the compressed data can no longer be deleted from the InfoCube using its
request IDs. You must be certain that the data loaded into the InfoCube is correct.
A user-defined partition affects only the compressed E fact table.
By default the F fact table contains the data.
By default SAP allocates a request ID for each posting made.
Using the request ID, we can delete/select data.
The E fact table is compressed & the F fact table is uncompressed.
When compressed, data from the F fact table is transferred to the E fact table and all request IDs are
lost / deleted / set to null.
After compression, the space used by the E fact table is comparably less than that of the F fact table.
The F fact table (uncompressed) uses BITMAP indexes.
The E fact table (compressed) uses B-TREE indexes.
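Compression semantics can be sketched in Python: rows sharing the same key are combined and the request ID is dropped, which is exactly why request-based deletion is no longer possible afterwards. The rows below are invented for illustration:

```python
from collections import defaultdict

# Uncompressed F-table rows: (request_id, material, quantity).
f_table = [(1, "M1", 10), (2, "M1", 5), (2, "M2", 7)]

def compress(rows):
    """Combine rows with the same key (here just the material),
    dropping the request ID in the process."""
    e_table = defaultdict(int)
    for _request_id, material, qty in rows:
        e_table[material] += qty
    return dict(e_table)

print(compress(f_table))  # {'M1': 15, 'M2': 7} - no request IDs left
```

Three F-table rows become two E-table rows; deleting "request 2" from the result is no longer possible because that information is gone.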

5.5. INDEX/INDICES
PRIMARY INDEX
The primary index is created automatically when the table is created in the database.
SECONDARY INDEX (both bitmap & B-tree are secondary indices)
Bitmap indexes are created by default on each dimension column of a fact table,
& B-tree indices on ABAP tables.

5.6. RECONSTRUCTION:
Reconstruction is the process by which you load data into the same cube/ODS or a different cube/ODS
from the PSA. The main purpose: if requests have been deleted (for example after
compression/collapse), we do not need to go back to the source system or flat files to collect them
again; we get them from the PSA.
Reconstruction of a cube is a common requirement and is needed when:
1) The structure of the cube changes: deletion of characteristics/key figures, or new characteristics/key
figures that can be derived from existing ones
2) Update rules change
3) Master data was missing and the request was manually turned green - once the master data has been
maintained and loaded, the request(s) should be reconstructed.

5.6.1. KEY POINTS TO REMEMBER WHILE GOING FOR RECONSTRUCTION:
Reconstruction must occur during posting-free periods.
Users must be locked.
Terminate all scheduled jobs that affect the application.
Deactivate the start of the RMBWV3nn update report.

5.6.2. WHY DO ERRORS OCCUR IN RECONSTRUCTION?
Errors occur due to document postings made during the reconstruction run, which display incorrect
values in BW because the logic of the before and after images no longer matches.

5.6.3. STEPS FOR RECONSTRUCTION
Transaction Codes:
LBWE : LO DATA EXTRACTION: CUSTOMIZING COCKPIT
LBWG : DELETE CONTENTS OF SETUP TABLES
LBWQ : DELTA QUEUED
SM13 : UPDATE REQUESTS/RECORDS
SMQ1 : CLEAR EXTRACTOR QUEUES
RSA7 : BW DELTA QUEUE MONITOR
SE38/SA38 : DELETE UPDATE LOG

STEPS:
1. Mandatory - user locks.
2. Mandatory - the reconstruction (setup) tables for the application must be empty:
enter transaction LBWG with application = 11 for SD sales documents.
3. Depending on the selected update method, check the queues below:
SM13 - serialized or un-serialized V3 update
LBWQ - delta queued
Start updating the data from the Customizing Cockpit (transaction LBWE), or
start the corresponding application-specific update report RMBWV3nn (nn = application number) directly
in transaction SE38/SA38.
4. Enter RSA7 & clear the delta queue if it still contains data.
5. Load the delta data from R/3 to BW.
6. Start the reconstruction (setup) run for the desired application.
If you are carrying out a complete reconstruction, delete the contents of the corresponding data targets
in your BW (cubes and ODS objects).
7. Use an init request (delta initialization with data transfer) or a full upload to load the data from the
reconstruction into BW.
8. Run the RMBWV3nn update report again.

5.6.4. ERRORS ON RECONSTRUCTION:
Below you can see various errors on reconstruction. I have read the SAP Help website and SCN and
condensed them into a simple write-up to make the concepts easy to understand.
ERROR 1: When I completed reconstruction, repeated documents appear. Why?
Solution: The reconstruction programs write data additively into the setup tables.
If a document is entered twice by the reconstruction, it also appears twice in the setup table.
Therefore, the reconstruction tables may contain the same data from your current reconstruction and from
previous reconstruction runs (for example, tests). If this data is loaded into BW, you will usually see
multiplied values in the queries (exception: key figures in an ODS object whose update mode is overwrite).

ERROR 2: Incorrect data in BW for individual documents posted during the reconstruction run. Why?
Solution: Documents were posted during the reconstruction.
Documents created during the reconstruction run then exist in the reconstruction tables as well as in the
update queues. This results in duplicate data in BW.
Example: Document 4711, quantity 15
Data in the PSA:
ROCANCEL  DOCUMENT  QUANTITY
          4711      15        delta, new record
          4711      15        reconstruction
Query result: 4711 30
Documents that are changed during the reconstruction run display incorrect values in BW because the
logic of the before and after images no longer matches.
Example: Document 4712, quantity 10, is changed to 12.
Data in the PSA:
ROCANCEL  DOCUMENT  QUANTITY
X         4712      10-       delta, before image
          4712      12        delta, after image
          4712      12        reconstruction
Query result: 4712 14
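The arithmetic of the second example can be replayed in Python: the before image (-10) and after image (+12) of the delta net out to the change, but the reconstruction row adds the document's value a second time. The rows mirror the PSA table above:

```python
# PSA rows for document 4712: (ROCANCEL flag, quantity).
# 'X' marks a before image; the '10-' in the PSA means -10.
psa = [("X", -10),  # delta, before image (reverses the original value 10)
       ("",  12),   # delta, after image (the changed value)
       ("",  12)]   # reconstruction row, written during the setup run

query_result = sum(qty for _flag, qty in psa)
print(query_result)  # 14 instead of the correct 12
```

Without the duplicated reconstruction row, the two delta rows net to +2 and, applied to the already-loaded value 10, yield the correct 12.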

ERROR 3: After you perform the reconstruction and restart the update, you find duplicate documents in
BW.
Solution: The reconstruction ignores the data in the update queues. A newly created document sits in
the update queue awaiting transfer into the delta queue. However, the reconstruction also processes
this document because its data is already in the document tables. Therefore, the delta initialization or
full upload can load the same document once from the reconstruction and once with the first delta after
the reconstruction into BW.
The same can happen with the delta queue: the reconstruction also ignores data in the delta queues. An
updated document sitting in the delta queue awaiting transfer into BW is nevertheless processed by
the reconstruction because its data is already contained in the document tables, producing the same
duplication.

ERROR 4: Document data from the time of the delta initialization request is missing in BW.
Solution: The RMBWV3nn update report was not deactivated. As a result, data from the update queue
(LBWQ or SM13) could be read while the data of the initialization request was being uploaded. However, since
no delta queue yet existed in RSA7, there was no target for this data and it was lost.

5.7. ROLLUP
Rollup loads newly added requests into the existing aggregates of an InfoCube whenever new data is loaded.
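Rollup can be sketched as an incremental update of an existing aggregate: only the new request's rows are applied, rather than rebuilding the aggregate from the whole cube. Regions and amounts below are hypothetical:

```python
from collections import defaultdict

# Existing aggregate (region -> total) and a newly loaded request.
aggregate = defaultdict(int, {"N": 150, "S": 100})
new_request = [("N", 25), ("W", 40)]

def rollup(agg, request):
    """Apply only the new request's rows to the aggregate,
    instead of rebuilding it from the entire InfoCube."""
    for region, amount in request:
        agg[region] += amount
    return agg

rollup(aggregate, new_request)
print(dict(aggregate))  # {'N': 175, 'S': 100, 'W': 40}
```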

5.8. LINE ITEM DIMENSION / DEGENERATE DIMENSION
If the size of a dimension of a cube is more than 20% of the fact table, we define that dimension as a
line item dimension.
Ex: the sales document number dimension in a sales cube.
A sales cube has the sales document number, and usually that dimension's size and the fact table size
will be the same. But with the overhead of lookups for DIMIDs/SIDs, performance will be very slow.
By flagging it as a line item dimension, the system puts the SID in the fact table instead of the DIMID for
the sales document number.
This avoids one lookup into the dimension table; the dimension table is not created in this case. The
advantage is that you not only save space because the dimension table is not created, but a join is made
between two tables, fact & SID table (diagram 3), instead of three tables, fact, dimension & SID tables
(diagram 2).

The image below is for illustration purposes only (ESS - Extended Star Schema):

Dimension table: DIMID = primary key
Fact table: DIMID = foreign key
A dimension table links the fact table and a group of similar characteristics.
Each dimension table has one DIMID & up to 248 characteristics in each row.

5.8.1. LINE ITEM DIMENSION ADVANTAGES:
Saves space by not creating Dimension Table

5.8.2. LINE ITEM DIMENSION DISADVANTAGES:
Once a dimension is flagged as line item, you cannot add additional characteristics.
Only one characteristic is allowed per line item dimension, & for (F4) help the master data is displayed,
which takes more time.

5.9. HIGH CARDINALITY:
If a dimension exceeds 10% of the size of the fact table, you mark it as a high cardinality dimension. A
high cardinality dimension is one that has a very large number of potential occurrences.
When you flag a dimension as high cardinality, the database is adjusted accordingly:
a B-TREE index is used rather than a BITMAP index. In general, if the cardinality is expected to
exceed about one fifth of the fact table, it is advisable to check this flag.
NOTE: SAP converts from a BITMAP index to a B-TREE index when a dimension is flagged as high cardinality.

6. INFOCUBE DESIGN ALTERNATIVES:
Refer: SAP BW: A Step-by-Step Guide by Biao Fu & Henry Fu
InfoCube design alternatives help us handle changes in the InfoCube over time; the changing entity
may be an office, region, or sales representative assignment.
6.1. ALTERNATIVE I : TIME DEPENDENT NAVIGATIONAL ATTRIBUTES
6.2. ALTERNATIVE II : DIMENSION CHARACTERISTICS METHOD
6.3. ALTERNATIVE III : TIME DEPENDENT ENTIRE HIERARCHIES
6.4. OTHER ALTERNATIVE:
6.4.1. COMPOUND ATTRIBUTE
6.4.2. LINE ITEM DIMENSION

7. FEW QUESTIONS ON INFOCUBES
What are InfoCubes?
What is the structure of InfoCube?
What are InfoCube types?
Are the InfoCubes DataTargets? How?
What are virtual Cubes(Remote Cubes)?
How many cubes have you designed?
What are the advantages of InfoCube?
Which cube does SAP implement?
What are InfoCube tables?
What are SAP-defined Dimensions?
How many tables are formed when you activate the InfoCube structure?
What are the tools or utilities of an InfoCube?
What is meant by table partitioning of an InfoCube?
What is meant by Compression of an InfoCube?
Do you go for partitioning or Compression?
Advantages and Disadvantages of InfoCube partitioning?
What happens to the E fact table and F fact table if you partition an InfoCube?
Why do you go for partitioning?
What is Repartitioning?
What are the types of Repartitioning?
What is Compression? Why you go for Compression?
What is Reconstruction? Why you go for Reconstruction?
What are the mandatory steps for effective, error-free reconstruction?
What errors occur during Reconstruction?
What is Rollup of an InfoCube?
How can you measure the InfoCube size?
What is Line Item Dimension?
What is Degenerated Dimension?
What is High Cardinality?
How can you tell whether a cube has a LineItem Dimension or HighCardinality?
What are the InfoCube design alternatives?
Can you explain the alternative time dependent navigational attributes in InfoCube design?
Can you explain the alternative dimension characteristics in InfoCube design?
Can you explain the alternative time dependent entire hierarchies in InfoCube design?
What are the other techniques of InfoCube design alternatives?
What is Compound Attribute?
What is LineItem Dimension? Will it affect designing an InfoCube?
What is the maximum number of partitions you can create on an InfoCube?
What is LISTSCHEMA?
I want to see the tables of an InfoCube. How? Is there any Transaction Code?
When are the InfoCube tables created?
Are the tables created after activating or after saving the InfoCube structure?
Did you implement a RemoteCube? Explain the scenario.
Can you consider InfoCube as Star Schema or Extended Star Schema?
Is Repartitioning available in B.W 3.5 or B.I 7.0? Why?
On what basis you assign Characteristics to Dimensions?

Customer Exit Variables

- Characteristic values
- Hierarchies
- Hierarchy nodes
- Text
- Formula Elements
Variables: reusable objects


1. Characteristic values
a. Selecting Single Value Variable
b. Selecting Single Value Variable as Variable Value Range Limit
c. (Combination of several a. or b.) Selecting Variables with Several Single Values or Value
Ranges
2. Text
- Format: Technical name enclosed by ampersands (&)
3. Hierarchies
4. Hierarchy nodes
a. Variable hierarchy node with a fixed hierarchy
b. Variable hierarchy node with a variable hierarchy
5. Formula Elements
Variable Processing Types
1. User Entry/Default value
2. Replacement Path
a. Text variables and formula variables with the Replacement Path processing type are
replaced by a characteristic value.
b. Characteristic value variables with the Replacement Path processing type are replaced by
a query result.
3. Customer Exit
Determine values for variables using a function module exit (EXIT_SAPLRRS0_001).
Create a project in tcode CMOD, selecting SAP enhancement RSR00001 and assigning it
to the enhancement project. Activate the project.
Tcode SMOD: enter the name of the enhancement (RSR00001), choose Documentation, then Edit >
Display/Change.
4. SAP Exit
Delivered within SAP BW Business Content
5. Authorization
Data selection is carried out according to authorizations
Learn your BW in hard way

Posted by Martin Maruskin

Useful information about a BW system is spread across several tables in the system itself. That's the usual case.
A skilled BW person knows where to look in order to find the information that is needed. It is common to know
the basic tables with IOs (RSDIOBJ), reports (RSRREPDIR), cubes (RSDCUBE), DSOs (RSDODSO), process chains
(RSPCCHAIN), etc. A more technical BW person would also know where to find information about basic BW system
settings (tables RSADMINA, RSADMINS, RSBASIDOC, etc.).
Well, we all know that digging into the tables is not very convenient. Therefore I was wondering whether there
is a piece of functionality that could reveal those system secrets at least a bit. More or less by accident I
came across one interesting function module. Its name is RS_SYSTEM_INFORMATION and it basically provides what
I just described above. Let's have a look at it in more detail.
The module has one import parameter, which is optional. You do not need to use it, and you get (depending on
your system) the following output:



What we get is basically a set of different information in one place: BW backend server version and its
highest Support Package level, HTTP(S) prefixes, whether an SAP Portal is connected, whether information
broadcasting is available, server code page, BEx web runtime, RFC destinations (Portal), URL prefixes
(web reporting, Java-based BW-IP Modeler) and ports, workbooks, system category, etc. In addition there
is information on the BW frontend requirements from table RSFRONTENDINIT:




Useful Tables for InfoCubes
A listing of commonly used tables in SAP BI, to help understand how data is stored in the SAP BI
backend.


InfoCube
RSDCUBE Directory of InfoCubes
RSDCUBET Texts on InfoCubes
RSDCUBEIOBJ Objects per InfoCube (where-used list)
RSDDIME Directory of Dimensions
RSDDIMET Texts on Dimensions
RSDDIMEIOBJ InfoObjects for each Dimension (Where-Used List)
RSDCUBEMULTI InfoCubes involved in a MultiCube
RSDICMULTIIOBJ MultiProvider: Selection/Identification of InfoObjects
RSDICHAPRO Characteristic Properties Specific to an InfoCube
RSDIKYFPRO Flag Properties Specific to an InfoCube
RSDICVALIOBJ InfoObjects of the Stock Validity Table for the InfoCube
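As a quick illustration of how these directory tables can be read, the sketch below lists the active InfoCubes from RSDCUBE. The field names used (INFOCUBE, OBJVERS) are the commonly documented ones, so double-check them in SE11 on your release; OBJVERS = 'A' is the usual restriction to active object versions.

```abap
* Sketch: list active InfoCubes from the directory table RSDCUBE.
DATA: lt_cubes TYPE STANDARD TABLE OF rsdcube,
      ls_cube  TYPE rsdcube.

SELECT * FROM rsdcube
  INTO TABLE lt_cubes
  WHERE objvers = 'A'.          " active versions only

LOOP AT lt_cubes INTO ls_cube.
  WRITE: / ls_cube-infocube.    " technical name of the InfoCube
ENDLOOP.
```

The same pattern applies to the other directory tables listed here (RSDODSO for DSOs, RSRREPDIR for reports, and so on).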

Useful Tables for Aggregates


Aggregates
RSDDAGGRDIR Directory of Aggregates
RSDDAGGRCOMP Description of Aggregates
RSDDAGGRT Text on Aggregates
RSDDAGGLT Directory of the aggregates, texts

Useful Tables for DSO (DataStore Object)


ODS Object
RSDODSO Directory of all ODS Objects
RSDODSOT Texts of all ODS Objects
RSDODSOIOBJ InfoObjects of ODS Objects
RSDODSOATRNAV Navigation Attributes for ODS Object
RSDODSOTABL Directory of all ODS Object Tables


How to find out the total number of records in a MultiProvider?
In the case of a cube or a DSO you can go to the Manage screen, which shows how many records were
transferred and how many were added, but for a MultiProvider there is no such option. To find the total
number of records in a MultiProvider, follow these steps: go to the MultiProvider, navigate to Display
Data, enter your required selections, check the checkbox 'Output number of hits', and leave 'Maximum
number of hits' blank (all hits).



After this you will get a 1ROWCOUNT (number of records) column in the list output. Select that column
and choose summation; the result is the total number of records, which also helps you find duplicate
records in your MultiProvider.
