
SAP BW Interview Questions

BELOW IS A SET OF QUESTIONS FOR BW INTERVIEWS


BW Administration and Design

Basic Concepts
Q. What are the differences between OLAP and OLTP applications?

OLAP vs. OLTP:
a. Summarized data vs. detailed data
b. Read only vs. read/write
c. Not optimized for transaction processing vs. optimized for transactional applications
d. Lots of historical data vs. not much old data

Q. What is a star schema?
A fact table at the center and surrounded (linked) by dimension tables.

Q. What is a slowly changing dimension?
A dimension containing a characteristic whose value changes over time; for example, an employee's
job title changes over the years.

Q. What are the advantages of Extended star schema of BW vs. the star schema
a. use of generated keys (numeric) for faster access
b. external hierarchy
c. multi language support
d. master data common to all cubes
e. slowly changing dimensions supported
f. aggregates in its own tables for faster access

Q. What is the namespace for BW?
All SAP-delivered objects start with 0 and customer objects with A-Z; all SAP tables begin with /BI0/
and customer tables with /BIC/; all generated objects start with 1-8 (like the export
data source, which starts with 8); the prefix 9A is used in APO.

Q. What is an info object?
Business objects like customer, product, etc.; they are divided into characteristics
and key figures. Characteristics are evaluation objects like customer, and key
figures are measurable objects like sales quantity; characteristics also
include special objects like unit and time.

Q. What are the data types supported by characteristics?
NUMC, CHAR (up to 60), DATS and TIMS

Q. What is an external hierarchy?
Presentation hierarchies stored in their own tables (hierarchy tables) for
characteristic values.

Q. What are time dependent text / attribute of characteristics?
If the text (for example, the name of a product or person) or an attribute
(for example, a job title) changes over time, it must be marked as
time dependent.

Q. Can you create your own time characteristics?
No

Q. What are the types of attributes?
Display only and navigational; display-only attributes are only for display and
no analysis can be done on them; navigational attributes behave like regular
characteristics. For example, assume we have a customer characteristic
with country as a navigational attribute; you can then analyze the data by
customer and country.

Q. What is Alpha conversion?
Alpha conversion is used to store data consistently by prefixing numeric
values with 0s; for example, if you define material as NUMC length 6,
then the number 1 is stored as 000001 but displayed as 1; this removes
inconsistencies between 01 vs. 001.
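As a rough illustration, the same padding/unpadding can be reproduced with the standard
function modules CONVERSION_EXIT_ALPHA_INPUT and CONVERSION_EXIT_ALPHA_OUTPUT; the
6-character field below is just the example from above, not a fixed requirement:

  DATA: lv_internal(6) TYPE c,
        lv_display(6)  TYPE c.

* Internal format: pad with leading zeros -> '000001'
  CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
    EXPORTING
      input  = '1'
    IMPORTING
      output = lv_internal.

* Display format: strip the leading zeros again -> '1'
  CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
    EXPORTING
      input  = lv_internal
    IMPORTING
      output = lv_display.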

Q. What is the alpha check execution program?
This is used to check consistency for BW 2.x before upgrading the system to
3.x; the transaction is RSMDCNVEXIT

Q. What is the attributes only flag?
If the flag is set, no master data is stored; the info object is only used as an attribute for other
characteristics; for example, comments on an AR document.

Q. What is compounding?
This defines the superior info object which must be combined to define an
object; for example when you define cost center then controlling area is the
compounding (superior) object.

Q. What are the BEx options for characteristics, like F4 help for query definition and
execution?
This defines how the data is displayed in the query definition screen or when the query
is executed; the options are: from the data displayed, from the master data table (all
data) and from dimension data. For example, assume that you have 100
products in all and 10 products in a cube; in BEx you display the query for 2
products; the options for product will then display different data:
a. selective data only - will display 2 products
b. dimension data - will display 10 products
c. from master data - will display all 100 products

Q. What are the data types allowed for key figures?
Amount, number, integer, date and time.

Q. What is the difference between amount/quantity and number?
Amount and quantity always come with units; for example, sales is an amount and inventory
is a quantity.

Q. What are the aggregation options for key figures?
If you are defining prices then you may want to set no aggregation or you can
define max, min, sum; you can also define exception aggregation like first, last,
etc; this is helpful in getting headcount; for example if you define a monthly
inventory count key figure you want the count as of last day of the month.

Q. What is the maximum number of key figures you can have in an info cube?
233

Q. What is the maximum number of characteristics you can have per dimension?
248

Q. What are the nine decision points of data warehousing?
a. Identify fact table
b. Identify dimension tables
c. Define attributes of entities
d. Define the granularity of the fact table (level of detail)
e. Pre calculated key figures
f. Slowly changing dimensions
g. Aggregates
h. How long data will be kept
i. How often data is extracted

Q. How many dimensions in a cube
Total 16, out of which 3 are predefined: time, unit and request (data packet); the customer is left
with 13 dimensions.

Q. What is a SID table and advantages
The SID table (Surrogate ID table) is the interface between master data and the
dimension tables; advantages:
a. uses numeric IDs as indexes for faster access
b. master data is independent of info cubes
c. language support
d. slowly changing dimension support

Q. What are the other tables created for master data?
a. P table - Time independent master data attributes
b. Q table - Time dependent master data attributes
c. M view - Combines P and Q
d. X table - Interface between master data SIDS and time independent
navigational attributes SIDS ( P is linked to the X table)
e. Y table - Interface between master data SIDS and time dependent
navigational attributes SIDS (Q is linked to the Y table)

Q. What is the transfer routine of the info object?
It is like a start routine; it is independent of the data source and valid for all
source systems; you can use it to define global data and global checks.

Q. What is the DIM ID?
DIM IDs link the dimension tables to the fact table.

Q. What is table partition?
SAP uses fact table partitioning to improve performance; you can partition
only on 0CALMONTH or 0FISCPER.

Q. How many extra partitions are created and why?
Usually 2 extra partitions are created to accommodate data before the begin date
and after the end date

Q. Can you partition a cube which has data already?
No; the cube must be empty to do this; one work around is to make a copy of
the cube A to cube B; export data from A to B using export data source; empty
cube A; create partition on A; re-import data from B; delete cube B

Q. What is the transaction for Administrator work bench?
RSA1

Q. What is a source system?
Any system that is sending data to BW like R/3, flat file, oracle database or
external systems.

Q. What is a data source?
The source which is sending data to a particular info source on BW; for
example we have a 0CUSTOMER_ATTR data source to supply attributes to
0CUSTOMER from R/3

Q. What is an info source?
Group of logically related objects; for example the 0CUSTOMER info source
will contain data related to customer and attributes like customer number,
address, phone no, etc

Q. What are the types of info source?
Transactional, attributes, text and hierarchy

Q. What is communication structure?
Is an independent structure created from info source; it is independent of the
source system/data source

Q. What are transfer rules?
The transformation rules for data from source system to info
source/communication structure

Q. What is global transfer rule?
This is a transfer routine (ABAP) defined at the info object level; this is
common for all source systems.

Q. What are the options available in the transfer rules?
Assign an info object, assign a constant, an ABAP routine or a formula (from version
3.x); examples are:
a. Assign info object - direct transfer; no transformation
b. Constant - for example, if you are loading data from a specified country in a
flat file, you can make the country a constant and assign the value
c. ABAP routine - for example, if you want to do some complex string
manipulation; assume that you are getting a flat file from a legacy system and
the cost center is embedded in a field and you have to massage the data to get it; in
this case use ABAP code (an ABAP routine sketch is shown a little further below)
d. For simple calculations use a formula; for example, to convert all
lower case characters to upper case, use the TOUPPER formula

Q. Give some important formula available
Concatenate, substring, condense, left/right (n characters), l_trim, r_trim,
replace, date routines like DATECONV, date_week, add_to_date, date_diff, and
logical functions like IF and AND.

Q. When you do the ABAP code for transfer rule, what are the important variables
you use?
a. RESULT - this receives the result of the ABAP code
b. RETURNCODE - set this to 0 if everything is OK; otherwise this record is
skipped
c. ABORT - set this to a value other than 0 to abort the entire package
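A minimal sketch of how a BW 3.x-style transfer routine typically uses these variables; the
field COSTCENTER in the transfer structure is an assumed, illustrative name:

* Move the source field into the target info object (illustrative field name)
  RESULT = TRAN_STRUCTURE-costcenter.

  IF RESULT IS INITIAL.
*   Skip only this record
    RETURNCODE = 4.
  ELSE.
    RETURNCODE = 0.
  ENDIF.

* Leave ABORT at 0 so the whole data package is not cancelled
  ABORT = 0.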

Q. What is the process of replication?
This copies data source structures from R/3 to BW

Q. What is the update rule?
Update rule defines the transformation of data from the communication
structure to the data targets; this is independent of the source systems/data
sources

Q. What are the options in update rules?
a. one to one move of info objects
b. constant
c. lookup for master data attributes
d. formula
e. routine (ABAP)
f. initial value

Q. What are the special conversions for time in update rules?
Time dimensions are automatically converted; for example if the cube contains
calendar month and your transfer structure contains date, the date to calendar
month is converted automatically.

Q. What is the time distribution option in update rule?
This is to distribute data according to time; for example if the source contains
calendar week and the target contains calendar day, the data is split for each
calendar day. Here you can select either the normal calendar or the factory
calendar.

Q. What is the return table option in update rules for key figures?
Usually the update rule sends one record to the data target; using this option you
can send multiple records. For example, if we are getting the total telephone
expenses for a cost center, you can use this to return the telephone expenses for
each employee (by dividing the total expenses by the number of employees in
the cost center) and creating a cost record for each employee in the ABAP code, as sketched below.
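A self-contained sketch of that splitting logic (the structure, field names and the number of
employees are assumptions; in the real generated routine you would append to the
RESULT_TABLE parameter rather than a local table):

  TYPES: BEGIN OF ty_target,
           costcenter(10) TYPE c,
           expense        TYPE p DECIMALS 2,
         END OF ty_target.

  DATA: lt_result TYPE STANDARD TABLE OF ty_target,
        ls_result TYPE ty_target,
        lv_emps   TYPE i VALUE 4.       "employees in the cost center (assumed)

* Split the total telephone expense of the cost center equally per employee
  ls_result-costcenter = 'CC1000'.
  ls_result-expense    = 1000 / lv_emps.
  DO lv_emps TIMES.
    APPEND ls_result TO lt_result.      "real routine: APPEND ... TO RESULT_TABLE
  ENDDO.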

Q. What is the start routine?
The first step in the update process is to call the start routine; use it to fill global
variables that are then used in the update routines (see the sketch below).
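A hedged sketch of a start routine filling a global lookup table once per data package; the
table ZCOSTCENTER_ATTR and its fields are purely illustrative, not shipped objects:

* Global part (declared once, visible to all update routines)
  DATA: BEGIN OF gs_cc,
          costcenter(10) TYPE c,
          company(4)     TYPE c,
        END OF gs_cc.
  DATA: gt_cc LIKE STANDARD TABLE OF gs_cc.

* Start routine body: read the lookup data once for the whole data package
  SELECT costcenter company
    FROM zcostcenter_attr
    INTO TABLE gt_cc.

  IF sy-subrc <> 0.
    ABORT = 4.   "cancel the package if the lookup data is missing
  ENDIF.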

Q. How would you optimize the dimensions?
Use as many dimensions as possible for performance improvement; for example
assume that you have 100 products and 200 customers; if you make one
dimension for both, the size of the dimension will be 20,000; if you make
individual dimensions then the total number of rows will be 300. Even if you
put more than one characteristic per dimension, do the math considering worst
case scenario and decide which characteristics may be combined in a
dimension.

Q. What is the conversion routine for units and currencies in the update rule?
Using this option you can write ABAP code for unit/currency conversion; if you
enable this flag, the unit of the key figure appears in the ABAP code as an
additional parameter; for example, you can use this to convert a quantity in
pounds to kilograms (see the sketch below).
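A small sketch of the pounds-to-kilograms case (the variable names are assumptions, not the
generated routine parameters; 1 lb = 0.453592 kg):

  CONSTANTS lc_lb_to_kg TYPE p DECIMALS 6 VALUE '0.453592'.

  DATA: lv_quantity TYPE p DECIMALS 3,
        lv_unit(3)  TYPE c.

  lv_quantity = '12.500'.
  lv_unit     = 'LB'.

* Convert pounds to kilograms and adjust the unit accordingly
  IF lv_unit = 'LB'.
    lv_quantity = lv_quantity * lc_lb_to_kg.
    lv_unit     = 'KG'.
  ENDIF.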

Q. How do you add an entry in the monitor log from the update rules?
This is added to the internal table MONITOR; the following fields describe the
MONITOR structure:
a. MONITOR-MSGID -> message ID (message class)
b. MONITOR-MSGTY -> message type
c. MONITOR-MSGNO -> message number
d. MONITOR-MSGV1 -> monitor message variable 1
e. MONITOR-MSGV2 -> monitor message variable 2
f. Append the entry to the MONITOR table; it will then show up in the monitor
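A hedged sketch of such an entry inside an update routine; the message class ZBW and the
message texts are illustrative assumptions, not shipped values:

* Add an informational entry to the data load monitor
  CLEAR MONITOR.
  MONITOR-msgid = 'ZBW'.     "message class (assumed custom class)
  MONITOR-msgty = 'I'.       "I = information, W = warning, E = error
  MONITOR-msgno = '001'.
  MONITOR-msgv1 = 'Expense split across'.
  MONITOR-msgv2 = '4 employees'.
  APPEND MONITOR.            "MONITOR is a table with header line in the routine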

Q. What is a data mart?
The BW system can be a source for another BW system or for itself; the
ODS objects/cubes/info providers which provide data to another system are called data
marts.

Q. What is the myself data mart?
The BW system feeding data to itself is called the myself data mart; it is
created automatically and uses ALE for the data transfer.

Q. How do you create a data mart?
a. Right click and create the export data source for the ODS/cube
b. In the target system replicate the data source
c. Create transfer rules and update rules
d. Create info package to load

Q. Can you make multi providers and master data as data marts?
Yes

Q. What are the benefits of data marts?
a. Simple to use
b. Hub and spoke usage
c. Distributed data
d. Performance improvement in some cases

Q. What are events and how you use it in BW?
Events are background signals to tell the system that a certain status has been
reached; you can use events in batch jobs. For example, after you load data into the
cube you can trigger an event which starts another job to run the reporting
agent. Use SM62 to create and maintain events.

Q. What is an event chain?
This is a group of events which complete independently of one another; use this
to check the successful status of multiple events; for example you can trigger a
chain event if all loads are successful.

Q. How do you create event chains?
AWB -> Tools -> Event collector

Q. What is PSA?
Persistent Staging Area - it is based on the transfer structure and is source system
dependent.

Q. What are the options available for updates to data targets?
a. PSA and data targets in parallel - improves performance
b. PSA and data target in sequence
c. PSA only - you have to manually load data to data targets
d. Data targets only - No PSA

Q. Why, if one request fails, are all the subsequent requests turned red?
This is to avoid inconsistency and to make sure that only verified data enters
the system.

Q. What are the two fact tables?
There are two fact tables for each info cube; it is the E table and the F table;

Q. What is compression or collapse?
This is the process by which we delete the request IDs; this saves space. All the
regular requests are stored in the F table; when you compress, the request ID is
deleted and the data is moved from the F table to the E table; this saves space and improves
performance, but the disadvantage is that you cannot delete compressed
requests individually.

Q. What is reconstruction?
This is the process by which you reload already loaded requests into the same cube or into a
different cube.

Q. What is a remote cube?
A remote cube is a logical cube where the data is read from an external
source at query time; usually it is used to report real-time data from an R/3 system instead of
drilling down from BW to R/3.

Q. What is virtual info cube with services?
In this case a user defined function module is used as data source

Q. What are the restrictions/recommendations for using remote cube?
These are used for small volume of data with few users; no master data allowed

Q. Give examples of data sources that support remote cubes?
0FI_AP_3 - vendor line items, 0FI_AR_3 - customer line items

Q. What is a multi provider?
Using multi provider you can access data from different data sources like cubes,
ODS, infosets, master data

Q. What are the added features in 3.x for multi provider?
Prior to 3.x only MultiCubes were available; you could not combine an ODS and a
cube, for example.

Q. What is an info set?
An info provider giving data by joining data from different sources like ODS,
master data, etc

Q. What is the difference between multi provider and infoset?
Multi provider is a Union whereas infoset is a Join (intersection)

Q. Can you create an info set with info cube?
No; only ODS and master data are allowed

Q. What is a line item (or degenerate) dimension?
If the size of a dimension of a cube is larger than normal (more than about 20% of
the fact table), you define that dimension as a line item dimension. For example, if
you store the sales document number in one dimension of a sales cube, usually the
dimension size and the fact table size will be the same; when you add the
overhead of lookups for DIM IDs/SIDs, performance will be very slow. By
flagging it as a line item dimension, the system puts the SID in the fact table
instead of a DIM ID for the sales document number; this avoids one lookup into the
dimension table (the dimension table is not created in this case).

Q. What are the limitations of line item dimension?
Only one characteristic is allowed per line item dimension.

Q. What is a transactional info cube?
These cubes are used for both read and write; standard cubes are optimized for
reading. The transactional cubes are used in SEM.

Q. What is the cache monitoring transaction?
RSRCACHE

Q. What are the profile parameters for cache?
rsdb/esm/buffersize_kb (max size of the cache) and rsdb/esm/max_objects (max
number of entries in the cache)

Q. Can you disable cache?
Yes, either globally or using the query debug tool RSRT.

Q. What does the program RSMDCNVEXIT check?
a. all characteristics with conversion exit ALPHA, NUMC and GJAHR
b. all characteristics which are compounded to the above

Q. Can you restart the conversion?
Yes

Q. When should you do the alpha conversion?
If you are upgrading you must do it before PREPARE phase of upgrade

Q. Can you make an info object as info provider and why?
Yes; when you want to report on characteristics or master data, you can make
them an info provider. For example, you can make 0CUSTOMER an info
provider and do BEx reporting on 0CUSTOMER; right-click on the info area and
select "Insert characteristic as data target".

Q. What are the control parameters for data transfer?
This defines the maximum size of a packet, the maximum number of records per packet, the
number of parallel processes, etc.

Q. What is number range object?
This defines the characteristic attributes; for example the object MATERIALNR
defines the attributes of material master like the length, etc

Q. How do you set up the permitted characters?
Using transaction RSKC.

Q. What is aggregate realignment run maintenance?
Defines the percentage of change above which a realignment run causes a
reconstruction of the aggregates (rather than a delta adjustment).

Q. What is update mode for master data?
Defines whether master data records (with automatically generated SIDs) are created for
values that do not yet exist in master data when you load transaction data.

Q. What is the ODS object settings?
Defines the number of parallel processes during activation, the minimum number of data records
per process and the wait time.

Q. What are the settings for flat files?
Defines the thousands separator, the decimal point, the field separator (default is ;) and the
field delimiter.

Q. Which transaction defines the background user in source system?
RSCUSTV3

Non Cumulative Key Figures

Q. What are non cumulative key figures?
These kinds of key figures are not summed up over time (unlike sales, etc.); examples are
head count and inventory amount; they are always in relation to a point in time. For
example, we ask how many employees we had as of last quarter; we don't
add up the head count over the periods.
Give an example: the content key figure 0TOTALSTCK (Quantity Total Stock)
is a non-cumulative key figure. It has exception aggregation "Last value", with
receipt quantity total stock as inflow and issue quantity total stock as outflow.

Q. What is standard and exception aggregation?
Standard aggregation -> specifies how a key figure is compressed using all
characteristics except time; exception aggregation -> specifies how key figure is
compressed using time characteristics.

Q. What is inflow and outflow?
These are non cumulative changes used to get the right quantity

Q. What is a Marker?
Non cumulatives are stored using a Marker for the current period.

Q. What is a time reference characteristic?
A time characteristic which determines all other time characteristics:
0CALDAY, 0CALMONTH, 0CALWEEK, 0FISCPER

Q. Give example data sources supporting this?
2LIS_03_BF and 2LIS_03_UM

Q. What is the opening balance?
When you start loading inventory data from R/3 you start at a certain point in
time; this is what is called the opening balance. Assume that you have inventory
since Jan 2002, you are loading data in Jan 2003, and the opening balance for
the product is 200; the data before Jan 2003 is historic data; any data loaded
after Jan 2003 is a delta load.

Q. What is No Marker Update?
If you choose this option when compressing a non-cumulative cube, the reference
point (marker) is not updated, but the requests are still moved to request 0 (the usual
compression); you must do this when compressing historical data; for example, use
this option to compress the data before Jan 2003.

Q. What are the steps to load non cumulative cube?
a. initialize the opening balance in R/3 (S278)
b. activate the extract structure MC03BF0 for data source 2LIS_03_BF
c. set up the historical material documents in R/3
d. load the opening balance using data source 2LIS_40_S278
e. load the historical movements and compress without marker update
f. set up the V3 update
g. load deltas using 2LIS_03_BF

Q. How is the query result calculated?
Qty = reference point in time qty - non-compressed delta qtys - deltas for the backward
qty

Q. What is a validity determining characteristic?
A characteristic that determines the validity period of the non-cumulative cube; for example,
plants opening and closing at different points in time.

Q. What are the dos and don'ts?
a. use few validity objects
b. compress the cube ASAP

Authorizations

Q. What is an authorization object?
It defines the fields used for authorization checks.

Q. What is the role maintenance transaction?
PFCG

Q. What is a role?
Usually defines the responsibility of a user with the proper menu and authorizations
- for example, receiving clerk.

Q. Give some examples of the roles delivered with SAP BW?
All the BW roles start with S_RS; S_RS_ROPAD - production system
administrator; S_RS_RREPU - BEx user

Q. What are the different authorization approaches available in BW?
a. Info cube-based approach - use this in conjunction with info areas to limit access
b. Query name-based approach - many customers use this to limit access; for example, Z queries are
read only, Y queries are read/write, FI* query names for FI use, etc.
c. Dataset approach - limitation by characteristics and key figures; you can use reporting
authorizations for this.

Q. What are the two object classes of BW authorization?
BW Warehouse authorization - SAP standard; BW Reporting - Not delivered by SAP - user has to create

Q. How many fields you can assign to authorization object?
10

Q. What are the values for ACTVT?
Create, change and display

Q. Give some examples of standard authorization objects delivered for BW?
a. S_RS_IOMAD - Master data
b. S_RS_ADMWB - AWB objects
c. S_RS_ODSO - ODS objects
d. S_RS_TOOLS - Bex tools
e. S_RS_ICUBE - info cube
f. S_RS_HIER - hierarchy
g. S_RS_COMP, S_RS_COMP1 - reporting authorization
h. S_RS_FOLD - folders
i. S_RS_IOBJ - info object
j. S_RS_ISOUR - info source (transaction data)
k. S_RS_ISRCM - info source (master data)
l. S_GUI - GUI activities (workbooks)
m. S_BDS_DS - document set (for workbooks)
n. S_USER_AGR - role check for saving workbook in a role
o. S_USER_TCD - transaction in roles for saving workbook in a role

Q. What is a reporting object?
This is used in BW reporting to check authorizations via the OLAP processor.

Q. Give a step-by-step approach to create an authorization object; let us assume that we want to restrict
the report by cost center.
a. mark the info object as authorization relevant (flag) and activate it; in this example 0COSTCENTER
b. create an authorization object using transaction RSSM
c. assign the object to one or more info providers
d. create role(s) with different values for cost centers; for example, you can create a role called
IT Manager and assign all IT cost centers
e. assign the role to users
f. create a query; create a variable within the query for 0COSTCENTER of type Authorization and
include it in the query; if the IT manager runs the query it shows only the cost centers assigned to him/her.

Q. How do you implement structural authorization in BW?
a. create a profile using transaction OOSP
b. assign users to the profile using transaction OOSB
c. update table T77UU
d. run the program RHBAUS00
e. activate the data source 0HR_PA_2 and related components in BW
f. load the ODS from R/3
g. activate the target info objects as authorization relevant
h. run the function module RSSB to generate the BW authorizations.

Q. What are the new BW 3.x authorizations?
S_RS_COMP1 checks authorization depending on the owner of the query; S_RS_FOLD controls the info
area view of BEx elements (to suppress it); S_RS_ISET is for info sets; S_GUI has a new activity code 60 for upload.

Q. What is the use of ":" as an authorization value?
a. it enables queries that do not contain an authorization-relevant object that is checked in the info cube
b. it allows summary data to be displayed if the user does not have access to detailed data; for example,
if you create 2 authorizations for one user, one with sales org * and customers : and a second with sales
org 1000 and customers *, the user sees all customers for sales org 1000 and only a summarized report
for the other sales orgs.

Q. What is $ as an authorization value?
You use $ followed by a variable name (the values are populated in a user exit for BEx); this avoids having
too many roles.

Q. What is info object 0TCTAUTHH?
This is used in hierarchy authorization.

What is the t-code to see the log of the transport connection?
In RSA1 -> Transport Connection you can collect the queries and the role and then transport them
(releasing the transport in SE10 and importing it in STMS):
1. RSA1
2. Transport Connection (button on the left bar menu)
3. SAP Transport -> Object Types (button on the left bar menu)
4. Find Query Elements -> Query
5. Find your query
6. Group the necessary objects
7. Transport the objects (car icon)
8. Release the transport (transaction SE10)
9. Import the transport (transaction STMS)

Or go directly to SE01.

LO - MM inventory data source: what is the significance of the marker?
The marker is like a checkpoint when you upload data from the inventory data sources:
2LIS_03_BX supplies the current stock and 2LIS_03_BF the movements. After uploading the data from
BX you should release the request in the cube, i.e. compress it; then load the data from the other data
source, BF, and set this updated data to "no marker update". The marker is used as a checkpoint; if you
don't do this you get data mismatches at BEx level because the reference point becomes inconsistent.
(2LIS_03_BF Goods Movements from Inventory Management - uncheck the "no marker update" flag)
(2LIS_03_BX Stock Initialization for Inventory Management - select the "no marker update" checkbox)
(2LIS_03_UM Revaluations - uncheck the "no marker update" flag); these settings are made in the
"Collapse" (compression) section of the cube administration.

How can you navigate to see the error IDocs?
Check the IDocs in the source system: go to BD87, give your user ID and date, and execute; you will
find the red-status IDocs; select the erroneous IDoc, right-click and select "Manual process".
You need to reprocess these IDocs which are red. For this you can take the help of your ALE IDoc team
or Basis team, or you can push them manually; just search in the BD87 screen to reprocess them.
Also, try to find out why these IDocs got stuck there.

How can you decide whether query performance is slow or fast?
You can check that in transaction RSRT.
Execute the query in RSRT and after that follow the steps below:
go to SE16, in the resulting screen give the table name RSDDSTAT for BW 3.x or RSDDSTAT_DM
for BI 7.0 and press Enter; you can view all the details about the query, like the time taken to execute it
and the timestamps.

Why do we have to construct setup tables?
The R/3 database structure for accounting is much simpler than the logistics structure.
Once you post in a ledger, that is done; you can correct it, but that just creates another posting.
BI can get information directly out of this (relatively) simple database structure.
In LO, you can have an order with multiple deliveries to more than one delivery address, and the payer
can also be different.
When one item (order line) changes, this can be reflected on the order, supply, delivery, invoice, etc.
Therefore a special record structure is built for logistics reports, and this structure is now used for BI.
In order to have this special structure filled with your starting position, you must run a setup; from that
moment on, R/3 will keep filling this LO database.
If you did not run the setup, BI would start with data only from the moment you started filling LO (with
the Logistics Cockpit).


How can you eliminate the duplicate records in TD, MD?
Try to check the system logs through SM21 for the same.


What is the use of the marker in MM?
The marker update is just like a checkpoint,
i.e. it gives a snapshot of the stock on a particular date, namely when the marker was last updated.
Because we are using a non-cumulative key figure, it would take a lot of time to calculate the current
stock, for example at report time; to overcome this we use the marker update.
Marker updates do not summarize the data. In inventory management scenarios, we have to calculate
opening stock and closing stock on a daily basis. In order to facilitate this, we set a marker which adds
and subtracts the values for each record.
In the absence of the marker update, the data would simply be added up and would not provide correct values.

web template
You get information on where the web template details are stored from the following tables :
RSZWOBJ Storage of the Web Objects
RSZWOBJTXT Texts for Templates/Items/ Views
RSZWOBJXREF Structure of the BW Objects in a Template

RSZWTEMPLATE Header Table for BW HTML Templates
You can check these tables and search for your web template entry. However, if I understand your
question correctly, you will have to open the template in the WAD and then make the corrections in it
to correct it.

What is a dashboard?
A dashboard can be created using the Web Application Designer (WAD) or the Visual Composer (VC). A
dashboard is a collection of reports, views, links etc. in a single view; for example, iGoogle is a
dashboard.

A dashboard is a graphical reporting interface which displays KPIs (Key Performance Indicators) as
charts and graphs; a dashboard is a performance management system.

When we want to see how all of an organization's measures are performing from a helicopter view, we need
a report that quickly shows the trends in a graphical display. These reports are called dashboard
reports. We could still report these measures individually, but by keeping all measures on a
single page we create a single access point for users to view all the information available to them.
This saves a lot of precious time, gives clarity on the decisions that need to be taken, and helps
users understand the trend of the measures within the business flow.
Dashboards can be built with Visual Composer and the WAD.
To create your dashboard in BW:

(1) Create all BEx queries with the required variants and tune them properly.
(2) Differentiate table queries and graph queries.
(3) Choose the graph type that meets your requirement.
(4) Draw the layout of how the dashboard page should look.
(5) Create a web template that has a navigational block / selection information.
(6) Keep the navigational block fields common across the measures.
(7) Include the relevant web items in the web template.
(8) Deploy the URL/iView to users through the portal/intranet.

The steps to be followed in the creation of Dashboard using WAD are summarized as below:

1) Open a new web template in the WAD.
2) Define the tabular layout as per the requirements so as to embed the necessary web items.
3) Place the appropriate web items in the appropriate tabular grids.
4) Assign queries to the web items (a query assigned to a web item is called a data provider).
5) Care should be taken to ensure that the navigation block's selection parameters are common across all
the BEx queries of the affected data providers.
6) The properties of the individual web items are to be set as per the requirements; they can be modified
in the Properties window or in the HTML code.
7) The URL produced when this web template is executed should be used in the portal/intranet.


What do we do in the Business Blueprint stage?
SAP has defined a business blueprint phase to help extract pertinent information about your company
that is necessary for implementation. These blueprints are in the form of questionnaires that are designed
to probe for information that uncovers how your company does business. As such, they also serve to
document the implementation. Each business blueprint document essentially outlines your future business
processes and business requirements. The kinds of questions asked are germane to the particular business
function, as seen in the following sample questions: 1) What information do you capture on a purchase
order? 2) What information is required to complete a purchase order?
Accelerated SAP question and answer database: the question and answer database (QADB) is a simple
although aging tool designed to facilitate the creation and maintenance of your business blueprint.
This database stores the questions and the answers and serves as the heart of your blueprint. Customers
are provided with a customer input template for each application that collects the data. The question and
answer format is standard across applications to facilitate easier use by the project team.
Issues database: another tool used in the blueprinting phase is the issues database. This database stores
any open concerns and pending issues that relate to the implementation. Centrally storing this information
assists in gathering and then managing issues to resolution, so that important matters do not fall through
the cracks. You can then track the issues in the database, assign them to team members, and update the
database accordingly.


How do we gather the requirements for an implementation project?
One of the biggest and most important challenges in any implementation is gathering and understanding
the end user and process team functional requirements. These functional requirements represent the scope
of analysis needs and expectations (both now and in the future) of the end user. They typically involve
all of the following:
- Business reasons for the project and business questions answered by the implementation
- Critical success factors for the implementation
- Source systems that are involved and the scope of information needed from each
- Intended audience and stakeholders and their analysis needs
- Any major transformation that is needed in order to provide the information
- Security requirements to prevent unauthorized use
This process involves one seemingly simple task: find out exactly what the end users' analysis
requirements are, both now and in the future, and build the BW system to these requirements. Although
simple in concept, in practice gathering and reaching a clear understanding and agreement on a complete
set of BW functional requirements is not always so simple.

How do we decide what cubes have to be created?
It depends on your project requirements. Customized cubes are not mandatory for all projects; only if
your business requirement differs from the given scenario (BI Content cubes) do we opt for customized
cubes. Normally your BW customization, or the creation of new info providers, depends on your source
system. If your source system is something other than R/3, then you should go with customization of all
your objects. If your source system is R/3 and your users are using only R/3 standard business scenarios
like SD, MM or FI etc., then you don't need to create any info providers or enhance anything in the
existing BW Business Content. But 99% of the time this is not possible, because they will surely have
added their own business scenarios or new enhancements. For example, in my first project we
implemented BW for Solution Manager; there we activated all the business content in CRM, but the
source system had new scenarios for message escalation, ageing calculation etc. According to their
business scenario we couldn't use the standard business content; we took only the existing info objects
and created new info objects which were not in the business content, and after that we created custom
data sources, info providers and reports.

Who makes the technical and functional specifications?
Technical specification: here we mention all the BW objects (info objects, data sources, info sources
and info providers), then we describe the data flow and the behaviour of the data load (either delta or
full); we can also state the duration of the cube activation or creation. Purely technical BW things are
in this document; it is not an end user document.
Functional specification: here we describe the business requirements. That means we state which
businesses we are implementing, like SD, MM and FI etc., and then we give the KPIs and the details of
the deliverable reports to the users. This document involves both functional consultants and business
users, and it is applicable for end users also.



Give me one example of a functional specification and explain what information we get from
it.
Functional specs are the requirements of the business user; technical specs translate these requirements
in a technical fashion. Let's say the functional spec says:
1. The user should be able to enter the key date, fiscal year and fiscal version.
2. The Company variable should default to USA, but if the user wants to change it, they can check the
drop-down list and choose other countries.
3. The calculations or formulas for the report will be displayed with a precision of one decimal point.
4. The report should return values for 12 months of data depending on the fiscal year that the user
enters, or it should display quarterly values.
Functional specs are also called software requirements. From this the technical spec follows, to resolve
each of the line items listed above:
1. To give the option of key date, fiscal year and fiscal version, certain info objects should be available
in the system. If they are available, should we create any variables for them so that they can be used as
user entry variables? To create any variables, what is the approach, where do you do it, what are the
technical names of the objects you will use, and what will be the technical names of the objects you
create as a result of this report?
2. The same explanation goes for the rest: how do you set up the variable?
3. What changes in properties will you make to get the precision?
4. How will you get the 12 months of data? What will be the technical and display name of the report,
who will be authorized to run this report, etc. are clearly specified in the technical specs.


What is customization? How do we do it in LO?

How to do basic LO extraction from SAP R/3 to BW:
1. Go to transaction RSA3 and see if any data is available for your DataSource. If data is there in RSA3,
then go to transaction LBWG (delete setup data) and delete the data by entering the application name.
2. Go to transaction SBIW -> Settings for Application-Specific DataSources -> Logistics -> Managing
Extract Structures -> Initialization -> Filling in the Setup Table -> Application-Specific Setup of
Statistical Data -> perform setup (for the relevant application).
3. In OLI*** (for example OLI7BW for the statistical setup of old documents: orders) give the name of
the run and execute. Now all the available records from R/3 will be loaded into the setup tables.
4. Go to transaction RSA3 and check the data.
5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is
serialized V3 update.
6. Go to the BW system, create an InfoPackage and under the update tab select "Initialize delta process",
then schedule the package. Now all the data available in the setup tables is loaded into the data target.
7. Now for the delta records, go to LBWE in R/3 and change the update mode for the corresponding
DataSource to direct/queued delta. By doing this, records bypass SM13 and go directly to RSA7. Go to
transaction RSA7; there you can see a green light, and once new records are added you can immediately
see them in RSA7.


Tickets and Authorization in SAP Business Warehouse: What are tickets? Give an example.
Tickets are the tracking tool by which the user tracks the work we do. They can be change requests, data
loads or whatever. They are of types critical or moderate; critical can mean "needs to be solved in one
day or half a day", depending on the client. After solving it, the ticket is closed by informing the client
that the issue is solved. Tickets are raised at the time of a support project; these may be any issues,
problems etc. If the support person faces any issue, he will ask/request the operator to raise a ticket.
The operator raises a ticket and assigns it to the respective person. Critical means the most complicated
issues; it depends on how you measure this. The concept of a ticket varies from contract to contract
between companies. Generally, a ticket raised by the client can be classified based on priority, like high
priority, low priority and so on. If a ticket is of high priority it has to be resolved ASAP; if the ticket is of
low priority it must be considered only after attending to the high priority tickets.
The typical tickets in production support work could be:
1. Loading any of the missing master data attributes/texts.
2. Creating ad hoc hierarchies.
3. Validating the data in cubes/ODS.
4. If any of the loads runs into errors, resolving it.
5. Adding/removing fields in any of the master data/ODS/cubes.
6. Data source enhancement.
7. Creating ad hoc reports.
How these are handled:
1. Loading any of the missing master data attributes/texts - done by scheduling the InfoPackages for the
attributes/texts mentioned by the client.
2. Creating ad hoc hierarchies - create hierarchies in RSA1 for the info object.
3. Validating the data in cubes/ODS - by using the validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors - analyze the error and take suitable action.
5. Adding/removing fields in any of the master data/ODS/cubes - depends upon the requirement.
6. Data source enhancement.
7. Creating ad hoc reports - create new reports based on the requirements of the client.

Attribute change run.
Generally, the attribute change run is used when there is any change in the master data; it is used for
realignment of the master data. The attribute change run is nothing but adjusting the master data after it
has been loaded from time to time, so that the SIDs are changed, generated or adjusted and you do not
have problems when loading the transaction data into the data targets. In detail: the hierarchy/attribute
change run, which activates hierarchy and attribute changes and adjusts the corresponding aggregates, is
divided into 4 phases:
1. Finding all affected aggregates.
2. Setting up all affected aggregates again and writing the result into the new aggregate table.
3. Activating the attributes and hierarchies.
4. Renaming the new aggregate table. While renaming, it is not possible to execute queries. In some
databases which cannot rename the indexes, the indexes are also created in this phase.

Different types of delta updates?
Delta loads bring any new or changed records after the last upload. This method is used for better
loading in less time. Most of the standard SAP data sources come delta-enabled, but some are not; in this
case you can do a full load to the ODS and then do a delta from the ODS to the cube. If you create generic
DataSources, then you have the option of creating a delta on calendar day, timestamp or a numeric
pointer field (this can be a document number, etc.). You will be able to see the delta changes coming into
the delta queue through RSA7 on the R/3 side. To do a delta, you first have to initialize the delta on the
BW side and then set up the delta. The delta mechanism is the same for both master data and transaction
data loads.
There are three delta update modes:
Direct delta: with this update mode, the extraction data is transferred with each document posting
directly into the BW delta queue. In doing so, each document posting with delta extraction is posted for
exactly one LUW in the respective BW delta queues.
Queued delta: with this update mode, the extraction data for the affected application is collected in an
extraction queue and can be transferred as usual with the V3 update by means of an updating collective
run into the BW delta queue. In doing so, up to 10,000 delta extractions of documents for an LUW are
compressed per DataSource into the BW delta queue, depending on the application.
Non-serialized V3 update: with this update mode, the extraction data for the application considered is
written as before into the update tables with the help of a V3 update module. It is kept there until the data
is selected through an updating collective run and processed. However, in contrast to the current default
setting (serialized V3 update), the data in the updating collective run is read without regard to sequence
from the update tables and transferred to the BW delta queue.



An SAP BW functional consultant is responsible for the following. Key responsibilities include:
- Maintain project plans
- Manage all project activities, many of which are executed by resources not directly managed by the
project leader (central BW development team, source system developers, business key users)
- Liaise with key users to agree reporting requirements and report designs
- Translate requirements into design specifications (report specs, data mapping/translation, functional specs)
- Write and execute test plans and scripts
- Coordinate and manage business/user testing
- Deliver training to key users
- Coordinate and manage productionization and rollout activities
- Track CIP (continuous improvement) requests; work with users to prioritize, plan and manage CIP
An SAP BW technical consultant is responsible for:
- SAP BW extraction using standard data extractors and available development tools for SAP and
non-SAP data sources
- SAP ABAP programming with BW
- Data modeling, star schema, master data, ODS and cube design in BW
- Data loading processes and procedures (performance tuning)
- Query and report development using BEx Analyzer and Query Designer
- Web report development using the Web Application Designer

Production support
In production support there are two kinds of jobs which you will mostly be doing: 1. looking into data
load errors, and 2. solving the tickets raised by the users. Data loading involves monitoring process
chains and solving the errors related to data loads; other than this you will also be doing some
enhancements to the existing cubes and master data, but that is done on request. Users raise a ticket
when they face a problem with a query, like a report showing wrong values or incorrect data, or when the
system response is slow or the query runtime is high. Normally the production support activities include:
scheduling, R/3 job monitoring, BW job monitoring, taking corrective action for failed data loads, and
working on tickets with small changes in reports or in AWB objects. The activities in typical production
support would be as follows: 1. Data loading - could be using process chains or manual loads.
2. Resolving urgent user issues - helpline activities. 3. Modifying BW reports as per the needs of the user.
4. Creating aggregates in the production system. 5. Regression testing when a version/patch upgrade is
done. 6. Creating ad hoc hierarchies. The daily activities in production are: 1. monitoring data load
failures through RSMO, 2. monitoring process chains (daily/weekly/monthly), 3. performing
hierarchy/attribute change runs, 4. checking aggregate rollup.

How to convert a BEx query global structure to a local structure (steps involved)
You use a local structure when you want to add structure elements that are unique to a specific query;
changing the global structure changes the structure for all the queries that use it. That is the reason you
go for a local structure. Coming to the navigation part: in the BEx Analyzer, from the SAP Business
Explorer toolbar, choose the open query icon (the icon that looks like a folder). On the SAP BEx Open
dialog box, choose Queries, select the desired InfoCube and choose New. On the "Define the query"
screen: in the left frame, expand the Structure node; drag and drop the desired structure into either the
Rows or Columns frame; select the global structure; right-click and choose "Remove reference". A local
structure is created. Remember that you cannot revert the changes made to the global structure in this
regard; you would have to delete the local structure and then drag and drop the global structure into the
query definition again. When you try to save a global structure, a dialog box prompts you to confirm the
changes to all queries; that is how you identify a global structure.

What is the use of "Define cell" in BEx, and where is it useful?
Cells in BEx: when you define selection criteria and formulas for structural components and there are
two structural components in a query, generic cell definitions are created at the intersections of the
structural components that determine the values to be presented in the cells. Cell-specific definitions
allow you to define explicit formulas, along with the implicit cell definitions, and selection conditions for
cells; in this way you can override the implicitly created cell values. This function allows you to design
much more detailed queries. In addition, you can define cells that have no direct relationship to the
structural components; these cells are not displayed and serve as containers for help selections or help
formulas. You need two structures to enable the cell editor in BEx: in every query you have one structure
for key figures, and you have to create another structure with selections or formulas inside. Having two
structures, the cross between them results in a fixed reporting area of n rows * m columns; the cross of
any row with any column can be defined as a formula in the cell editor. This is useful when you want a
cell to have a different behaviour from the general one described in your query definition. For example,
imagine you have the following, where % is the formula kfB / kfA * 100:

     kfA  kfB  %
chA   6    4   66%
chB  10    2   20%
chC   8    4   50%

Now you want the % for row chC to be the sum of the % for chA and the % for chB. In the cell editor
you can write a formula specifically for that cell as the sum of the two cells above it, chC/% = chA/% + chB/%, giving:

     kfA  kfB  %
chA   6    4   66%
chB  10    2   20%
chC   8    4   86%

What is 0RECORDMODE?
A. It is an info object; 0RECORDMODE is used to identify the delta images in BW and is used in DSOs;
it is automatically added when you activate a DSO in BW. Similarly, R/3 has a field, 0CANCEL, which
holds the delta images in R/3; whenever you extract data from R/3 using LO or generic extraction, this
field is mapped to 0RECORDMODE in BW. This is how BW identifies the delta images.


What is the difference between filter & Restricted Key Figures? Examples & Steps in BI?
Filter restriction applies to entire query. RKF is restriction applied on a keyfigure.Suppose for example,
you want to analyse data only after 2006...showing sales in 2007,2008 against Materials..You have got a
keyfigure called Sales in your cube
Now you will put global restriction at query level by putting Fiscyear> 2006 in the Filter.This will make
only data which have fiscyear>2006 available for query to process or show.
Now to meet your requirement. ..like belowMaterial Sales in 2007 Sales in 2008M1 200 300M2 400
700You need to create two RKF's.Sales in 2007 is one RKF which is defined on keyfigure Sales restricted
by Fiscyear = 2007Similarly,Sales in 2008 is one RKF which is defined on Keyfigure Sales restricted by
Fiscyear = 2008Now i think u understood the differenceFilter will make the restriction on query
level..Like in above case putting filter Fiscyear>2006 willmake data from cube for
yeaers2001,2002,2003, 2004,2005 ,2006 unavailable to the query for showing up.So query is only left
with data to be shown from 2007 and 2008.Within that data.....you can design your RKF to show only
2007 or something like that...


How do you create conditions and exceptions in BI 7.0? I know how in the BW 3.5 version.
From a query name or description you cannot judge whether the query has an exception.
There are two ways of finding exceptions in a query: 1. Execute the queries one by one; the ones
showing background colours from exception reporting have exceptions. 2. Open the queries in the
BEx Query Designer; if you find an Exceptions tab to the right of the Filter and Rows/Columns tabs, the
query has an exception.


The FI business flow related to BW: case studies or scenarios
FI flow: basically there are 5 major topics/areas in FI.
1. GL Accounting - related tables are SKA1 and SKB1 (master data); BSIS and BSAS are the transaction
data.
2. Accounts Receivable - related to customers; all the SD-related data, when transferred to FI, creates
these. Related tables: BSID and BSAD.
3. Accounts Payable - related to vendors; all the MM-related document data, when transferred to FI,
creates these. Related tables: BSIK and BSAK. All the above six tables' data is present in BKPF and BSEG;
you can link these tables with the help of BELNR and GJAHR and with dates also.
4. Special Purpose Ledger, which is rarely used.
5. Asset Management. In CO there are Profit Center Accounting and Cost Center Accounting.

Interview Questions
Q) I want to delete a BEx query that is in Production system through request. Is anyone aware about it?
A) Have you tried the RSZDELETE transaction?

Q) What are the five ASAP Methodologies?
A: Project Preparation, Business Blueprint, Realization, Final Preparation & Go-Live and Support.
1. Project Preparation: In this phase, decision makers define clear project objectives and an efficient
decision making process (i.e. Discussions with the client, like what are his needs and requirements etc.).
Project managers will be involved in this phase (I guess). A Project Charter is issued and an
implementation strategy is outlined in this phase.
2. Business Blueprint: It is a detailed documentation of your company's requirements. (i.e. what are the
objects we need to develop are modified depending on the client's requirements).
3. Realization: In this only, the implementation of the project takes place (development of objects etc) and
we are involved in the project from here only.
4. Final Preparation: Final preparation before going live i.e. testing, conducting pre-go-live, end user
training etc. End user training is given that is in the client site you train them how to work with the new
environment, as they are new to the technology.
5. Go-Live & support: The project has gone live and it is into production. The Project team will be
supporting the end users.

Q) What is the landscape of R/3 and what is the landscape of BW?
A) Landscape of BW: you have the development system, testing system and production system.
Development system: all the implementation work is done in this system (i.e., analysis of objects,
development, modification etc.) and from here the objects are transported to the testing system; but
before transporting, an initial test known as unit testing (testing of objects) is done in the development
system. Testing/Quality system: quality checks are done in this system and integration testing is done.
Production system: all the extraction takes place in this system.

Q). Difference between infocube and ODS?
A: An InfoCube is structured as an (extended) star schema where a fact table is surrounded by different
dimension tables that are linked with DIM IDs. Data-wise, you have aggregated data in the cubes; there is
no overwrite functionality. An ODS is a flat structure (flat table) with no star schema concept, holding
granular data (detailed level), with overwrite functionality.
Also check the following link:
http://sapbibobj.blogspot.com/2010/09/differeces-between-dso-and-infocube.html

Q) What is ODS?
http://sapbibobj.blogspot.com/2010/10/data-store-objects.html

Q) What is InfoSet?
A) An InfoSet is a special view of a dataset, such as logical database, table join, table, and sequential file,
and is used by SAP Query as a source data. InfoSets determine the tables or fields in these tables that can
be referenced by a report. In most cases, InfoSets are based on logical databases. SAP Query includes a
component for maintaining InfoSets. When you create an InfoSet, a DataSource in an application system
is selected. Navigating in a BW to an InfoSet Query, using one or more ODS objects or InfoObjects. You
can also drill-through to BEx queries and InfoSet Queries from a second BW system that is connected as
a data mart. _The InfoSet Query functions allow you to report using flat data tables (master data
reporting). Choose InfoObjects or ODS objects as data sources. These can be connected using joins. You
define the data sources in an InfoSet. An InfoSet can contain data from one or more tables that are
connected to one another by key fields. The data sources specified in the InfoSet form the basis of the
InfoSet Query.


Q) What does InfoCube contains?
A) Each InfoCube has one fact table and a maximum of 16 dimensions (13 user-defined + 3 system-defined: time, unit & data packet).

Q). Differences between STAR Schema & Extended Schema?
A) In a STAR SCHEMA, a FACT table is at the center, surrounded by dimension tables, and the dimension tables contain the master data. In the Extended schema the dimension tables do not contain master data; instead, it is stored in master data tables divided into attributes, texts & hierarchies. These master data and dimension tables are linked with each other via SID keys. Master data tables are independent of the InfoCube and are reusable in other InfoCubes.

Q) What does FACT Table contain?
A FactTable consists of KeyFigures. Each Fact Table can contain a maximum of 233 key figures.
Dimension can contain up to 248 freely available characteristics.

Q) How many dimensions are in a CUBE?
A) 16 dimensions. (13 user defined & 3 system pre-defined [time, unit & data packet])

Q) What does SID Table contain?
SID keys linked with dimension table & master data tables (attributes, texts, hierarchies)

Q) What does ATTRIBUTE Table contain?
Master attribute data
Q) What does TEXT Table contain?
Master text data, short text, long text, medium text & language key if it is language dependent

Q) What does Hierarchy table contain?
Master hierarchy data

Q) How would we delete the data in ODS?
A) By request IDs, Selective deletion & change log entry deletion.

Q) How would we delete the data in change log table of ODS?
A) Context menu of the ODS -> Manage -> Environment -> Delete change log entries.

Q) Difference between display attributes and navigational attributes?
A: A display attribute is one which is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) in order to drill down.

Q) What are the extra fields does PSA contain?
A) Four: Request ID, Data packet number, Partition value and Record number.

Q) Partitioning possible for ODS?
A) No, It's possible only for Cube.

Q) Why partitioning?
A) For performance tuning.

Q) Different types of Attributes?
A) Navigational attribute, Display attributes, Time dependent attributes, Compounding attributes,
Transitive attributes, Currency attributes.



Q. CAN YOU ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. An ODS is nothing but a flat table.

Q) Why we delete the setup tables (LBWG) & fill them (OLI*BW)?
A) Initially we don't delete the setup tables, but when we change the extract structure we do. Changing the extract structure means there are newly added fields that were not there before, so to get the required data (and to avoid redundancy) we delete and then refill the setup tables; this refreshes the statistical data. The extraction setup reads the dataset that you want to process (such as customer orders, with tables like VBAK and VBAP) and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one creates or modifies application data, at least until the setup tables have been filled.

Q) Different types of INFOCUBES.
http://sapbibobj.blogspot.com/2010/10/infocube_19.html

Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
A) The different variable types are: Texts, Formulas, Hierarchies, Hierarchy nodes & Characteristic values.
The variable processing types are: Manual entry / default value, Replacement path, SAP exit, Customer exit, Authorization.

Q) WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.

Q) CAN A CHARACTERISTIC INFOOBJECT BE AN INFOPROVIDER?
Of course

Q) What types of partitioning are there for BW?
There are two partitioning performance aspects for BW (Cube & PSA):
a) Query data retrieval performance improvement: partitioning by (say) date range improves data retrieval by making best use of database [data range] execution plans and indexes (of, say, the Oracle database engine).
b) Transactional load partitioning improvement: partitioning based on expected load volumes and data element sizes improves data loading into the PSA and Cubes by InfoPackages (e.g. without timeouts).

Q) What are Process Chains?
A) TCode RSPC. A process chain is a sequence of processes scheduled in the background and waiting to be triggered by a specific event; in other words, process chains are nothing but grouped processes. The process variant (start variant) is where the process chain knows where to start; there must be exactly one start variant in each process chain, in which we specify when the chain should start, by giving date and time or by starting immediately. Some of these processes trigger an event of their own that in turn triggers other processes.
Ex: Start chain -> Delete BCube indexes -> Load data from the source system to PSA -> Load data from PSA to DataTarget ODS -> Load data from ODS to BCube -> Create indexes for BCube after loading data -> Create database statistics -> Roll up data into the aggregate -> Restart chain from the beginning.

Q) What are Process Types & Process variant?
A) Process types are General services, Load Process & subsequent processing, Data Target
Administration, Reporting agent & Other BW services. Process variant (start variant) is the place the
process type knows when & where to start.

Q) Types of Updates?
A) Full Update, Init Delta Update & Delta Update.

Q) For what we use HIDE fields, SELECT fields & CANCELLATION fields?
A) Selection fields -- the only purpose is that when we check this column, the field will appear in the InfoPackage data selection tab. Hide fields -- these fields are not transferred to the BW transfer structure. Cancellation -- it reverses posted documents by multiplying the customer-defined key figures by -1, thereby nullifying the value; in effect this is a reverse posting.

Q) How can I compare data in R/3 with data in a BW Cube after the daily delta loads?
Are there any standard procedures for checking them or matching the number of records?
A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the number of records
extracted. Then go to BW Monitor to check the number of records in the PSA and check to see if it is the
same & also in the monitor header tab. A) RSA3 is a simple extractor checker program that allows you to
rule out extract problems in R/3. It is simple to use, but only really tells you if the extractor works. Since
records that get updated into Cubes/ODS structures are controlled by Update Rules, you will not be able
to determine what is in the Cube compared to what is in the R/3 environment. You will need to compare
records on a 1:1 basis against records in R/3 transactions for the functional area in question. I would
recommend enlisting the help of the end user community to assist since they presumably know the data.
To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute and you will see the
record count, you can also go to display that data. You are not modifying anything so what you do in
RSA3 has no effect on data quality afterwards. However, it will not tell you how many records should be
expected in BW for a given load. You have that information in the monitor RSMO during and after data
loads. From RSMO for a given load you can determine how many records were passed through the
transfer rules from R/3, how many targets were updated, and how many records passed through the
Update Rules. It also gives you error messages from the PSA.

Q) X & Y Tables?
A) X-table = a table linking material SIDs with SIDs for time-independent navigation attributes. Y-table = a table linking material SIDs with SIDs for time-dependent navigation attributes.
There are four types of SID tables:
X - time-independent navigational attribute SID tables
Y - time-dependent navigational attribute SID tables
H - hierarchy SID tables
I - hierarchy structure SID tables

Q) How to know in which table (SAP BW) contains Technical Name / Description and creation data of a
particular Reports. Reports that are created using BEx Analyzer.
A) While there is no single such table in BW, if you open a particular query and press the Properties button you will see all the details you need. You will also find information about technical names and descriptions of queries in the following tables: directory of all reports (table RSRREPDIR) and directory of the reporting component elements (table RSZELTDIR); for workbooks and their connections to queries check the where-used list for reports in workbooks (table RSRWORKBOOK) and the titles of Excel workbooks in the InfoCatalog (table RSRWBINDEXT).

Q) What is a LUW in the delta queue?
A) A LUW from the point of view of the delta queue can be an individual document, a group of
documents from a collective run or a whole data packet of an application extractor.

Q) Why does the number in the 'Total' column in the overview screen of Transaction RSA7 differ from
the number of data records that is displayed when you call the detail view?
A) The number on the overview screen corresponds to the total of LUWs (see also first question) that
were written to the qRFC queue and that have not yet been confirmed. The detail screen displays the
records contained in the LUWs. Both, the records belonging to the previous delta request and the records
that do not meet the selection conditions of the preceding delta init requests are filtered out. Thus, only
the records that are ready for the next delta request are displayed on the detail screen. In the detail screen
of Transaction RSA7, a possibly existing customer exit is not taken into account.

Q) Why does Transaction RSA7 still display LUWs on the overview screen after successful delta
loading?
A) Only when a new delta has been requested does the source system learn that the previous delta was
successfully loaded to the BW System. Then, the LUWs of the previous delta may be confirmed (and also
deleted). In the meantime, the LUWs must be kept for a possible delta request repetition. In particular, the
number on the overview screen does not change when the first delta was loaded to the BW System.

Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has also been loaded
successfully?
It is most likely that this is a DataSource that does not send delta data to the BW System via the delta
queue but directly via the extractor (delta for master data using ALE change pointers). Such a DataSource
should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.

Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading procedure
from the delta queue?
A) The impact is limited. If performance problems are related to the loading process from the delta queue,
then refer to the application-specific notes (for example in the CO-PA area, in the logistics cockpit area
and so on). Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as effective for
the delta queue as for a full update. Please note, however, that LUWs are not split during data loading for
consistency reasons. This means that when very large LUWs are written to the DeltaQueue, the actual
package size may differ considerably from the MAXSIZE and MAXLINES parameters.

Q) What is the purpose of function 'Delete data and meta data in a queue' in RSA7?
What exactly is deleted?
A) You should act with extreme caution when you use the deletion function in the delta queue. It is
comparable to deleting an InitDelta in the BW System and should preferably be executed there. You do
not only delete all data of this DataSource for the affected BW System, but also lose the entire
information concerning the delta initialization. Then you can only request new deltas after another delta
initialization. When you delete the data, the LUWs kept in the qRFC queue for the corresponding target
system are confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no
more references to the LUWs. The deletion function is for example intended for a case where the BW
System, from which the delta initialization was originally executed, no longer exists or can no longer be
accessed.

Q) What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used
for selection on the initial screen of the qRFC monitor. This is made up of the prefix 'BW', the client and
the short name of the DataSource. For DataSources whose name are 19 characters long or shorter, the
short name corresponds to the name of the DataSource. For DataSources whose name is longer than 19
characters (for delta-capable DataSources only possible as of PlugIn 2001.1) the short name is assigned in
table ROOSSHORTN. In the qRFC monitor you cannot distinguish between repeatable and new LUWs.
Moreover, the data of a LUW is displayed in an unstructured manner there.

Q) I loaded several delta inits with various selections. For which one is the delta loaded?
A) For delta, all selections made via delta inits are summed up. This means, a delta for the 'total' of all
delta initializations is loaded.

Q) How many selections for delta inits are possible in the system?
A) With simple selections (intervals without complicated join conditions or single values), you can make
up to about 100 delta inits. It should not be more. With complicated selection conditions, it should be only
up to 10-20 delta inits. Reason: With many selection conditions that are joined in a complicated way, too
many 'where' lines are generated in the generated ABAP source code that may exceed the memory limit.

Q) I intend to copy the source system, i.e. make a client copy. What will happen with delta? Should I
initialize again after that?
A) Before you copy a source client or source system, make sure that your deltas have been fetched from
the DeltaQueue into BW and that no delta is pending. After the client copy, an inconsistency might occur
between BW delta tables and the OLTP delta tables as described in Note 405943. After the client copy,
Table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the
system copy, the table will contain the entries with the old logical system name that are no longer useful
for further delta loading from the new logical system. The delta must be initialized in any case since delta
depends on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs
in BW when editing or creating an InfoPackage, you should expect that the delta has to be initialized
after the copy.

Q) Despite of the delta request being started after completion of the collective run (V3 update), it does not
contain all documents. Only another delta request loads the missing documents into BW. What is the
cause for this "splitting"?
A) The collective run submits the open V2 documents for processing to the task handler, which processes
them in one or several parallel update processes in an asynchronous way. For this reason, plan a
sufficiently large "safety time window" between the end of the collective run in the source system and the
start of the delta request in BW. An alternative solution where this problem does not occur is described in
Note 505700.

Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some entries have the
status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which
values in the field 'Status' mean what and which values are correct and which are alarming? Are the
statuses BW-specific or generally valid in qRFC?
A) Table TRFCQOUT and ARFCSSTATE: Status READ means that the record was read once either in a
delta request or in a repetition of the delta request. However, this does not mean that the record has
successfully reached the BW yet. The status READY in the TRFCQOUT and RECORDED in the
ARFCSSTATE means that the record has been written into the DeltaQueue and will be loaded into the
BW with the next delta request or a repetition of a delta. In any case only the statuses READ, READY
and RECORDED in both tables are considered to be valid. The status EXECUTED in TRFCQOUT can
occur temporarily. It is set before starting a DeltaExtraction for all records with status READ present at
that time. The records with status EXECUTED are usually deleted from the queue in packages within a
delta request directly after setting the status before extracting a new delta. If you see such records, it
means that either a process which is confirming and deleting records which have been loaded into the BW
is successfully running at the moment, or, if the records remain in the table for a longer period of time
with status EXECUTED, it is likely that there are problems with deleting the records which have already
been successfully loaded into the BW. In this state, no more deltas are loaded into the BW. Every
other status is an indicator for an error or an inconsistency. NOSEND in SMQ1 means nothing (see note
378903). The value 'U' in field 'NOSEND' of table TRFCQOUT is discomforting.

Q) How and where can I control whether a repeat delta is requested?
A) Via the status of the last delta in the BW Request Monitor. If the request is RED, the next load will be
of type 'Repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red
manually. For the contents of the repeat see Question 14. Delta requests set to red despite of data being
already updated lead to duplicate records in a subsequent repeat, if they have not been deleted from the
data targets concerned before.

Q) There is one ODS and 4 InfoCubes. We send data at the same time to all cubes and one cube got a lock error. How can you rectify the error?
A) Go to TCode SM66 and see which process holds the lock, note its PID, then go to TCode SM12 and release the lock. Such lock errors can occur when loads are scheduled in parallel.

Q) In BW we need to write abap routines. I wish to know when and what type of abap routines we got to
write. Also, are these routines written in update rules? I will be glad, if this is clarified with real-time
scenarios and few examples?
A) We write our routines in the start routines in the update rules or in the transfer structure (you can choose between writing them in the start routine or directly behind the individual characteristics). In the transfer structure you just click on the yellow triangle behind a characteristic and choose "routine"; in the update rules you can choose "start routine" or click on the triangle with the green square behind an individual characteristic. Usually we only use a start routine when the logic does not concern one single characteristic (for example when you have to read the same table for 4 characteristics). We used ABAP routines, for example: to convert a value to uppercase (transfer structure); to convert values coming from a third-party tool with different keys into the same keys as our SAP system uses (transfer structure); to select only a part of the data from an InfoSource when updating the InfoCube (start routine); etc. A hedged sketch is shown below.
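A minimal sketch of two such routines, assuming 3.x-style transfer/update rules: TRAN_STRUCTURE and RESULT are the generated parameters of a transfer-structure field routine, DATA_PACKAGE is the internal table provided by a generated update-rule start routine, and the fields /BIC/ZMATERIAL and COMP_CODE are hypothetical placeholders for fields of your own structures.

* Transfer-rule field routine: convert the incoming value to uppercase.
  TRANSLATE tran_structure-/bic/zmaterial TO UPPER CASE.  " hypothetical field
  result = tran_structure-/bic/zmaterial.

* Update-rule start routine: keep only records of one (hypothetical) company code.
  DELETE data_package WHERE comp_code <> '1000'.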

Q) Difference between Calculated Key Figure & Formula? A) See question 11 in the "Interview Questions and Answers for SAP BI" section below for the detailed answer.

Q) Variables in Reporting?
A) Characteristic values, Text, Hierarchies, Hierarchy nodes & Formula elements.

Q) Variable processing types in Reporting?
A) Manual, Replacement path, SAP Exit, Authorizations, Customer Exit
Q) Why do we use the RSR00001 enhancement?
A) For enhancing the customer exit in reporting (BEx variables with processing type customer exit).

Q) We need to find the table in which query and variable assignments are stored. We must read this table
in a user-exit to get which variables are used in a query.
A) Check out tables RSZELTDIR and RSZCOMPDIR for the query BEx elements (this information came from a previous posting). The variable table is RSZGLOBV; for the query, get the query ID from table RSRREPDIR (field RSRREPDIR-COMPUID) and use this ID in the tables starting with RSZEL*. In the user exit, the variable is identified via I_VNAM (e.g. 'ZFISPER1', or 'ZVC_FY1' for a characteristic variable), and the processing steps are: Step 1 - before selection, Step 2 - after selection, Step 3 - all variables processed at the same time.

Q) What is an aggregate?
A) Aggregates are small or "baby" cubes, a subset of an InfoCube. Flat aggregate -- when an aggregate has fewer than 16 characteristics (i.e. at most 15, including the package dimension), the system generates it as a flat aggregate to increase performance. Roll-up -- when data is loaded into the cube again, we have to roll up to make that data available in the aggregate.

Q) How can we stop loading data into infocube?
A) First you have to find the job name for this load from the monitor screen; it is shown in the Header tab of the monitor. Then go to SM37 (job monitoring) in R/3, select this job and delete it from the menu (there are several options - check them out in SM37). You also have to delete the request in BW. Cancellation is not advisable for a delta load.


Interview Questions and Answers for SAP BI
1. How to use Virtual K.F/Char. ?
Ans : A virtual characteristic or key figure gets its value assigned at query runtime and must not be loaded with data in the data target; therefore, no change to existing update rules is needed.
The implementation can be divided into the following areas:
1. Creation of the InfoObject [key figure / characteristic] and attaching the InfoObject to the InfoProvider.
2. Implementation of BADI RSR_OLAP_BADI (set a filter on the InfoProvider while defining the BADI implementation).
3. Adding the InfoObject to the query.
2. Query Performance Tips :
Ans :
i. Don't show too much data in the initial view of the report output
ii. Limit the level of hierarchies in the initial view
iii. Always use mandatory variables
iv. Utilize filters based on InfoProviders
v. Suppress result rows if not needed
vi. Eliminate or reduce 'NOT' logic in query selections
3. DataStoreObjects :
Ans : In a standard DSO a maximum of 16 key fields can be created.

4. Types Of DataStoreObjects :
Standard DSO
Write-Optimized DSO - consists only of the table of active data; there is no change log and requests do not need activation.
Direct DSO - The DataStore object for direct update differs from the standard DataStore object in
terms of how the data is processed. In a standard DataStore object, data is stored in different versions
(active, delta, modified), whereas a DataStore object for direct update contains data in a single version.
Therefore, data is stored in precisely the same form in which it was written to the DataStore object for
direct update by the application. In the BI system, you can use a DataStore object for direct update as a
data target for an analysis process
Overview (Type / Structure / Data Supply / SID Generation / Example):
o Standard DataStore Object: consists of three tables (activation queue, table of active data, change log); data supply from the data transfer process; SID generation: yes. Example: the operational scenario for standard DataStore objects.
o Write-Optimized DataStore Object: consists of the table of active data only; data supply from the data transfer process; SID generation: no. Example: a plausible scenario is the exclusive saving of new, unique data records, for example in the posting process for documents in retail; write-optimized DataStore objects can also be used as the EDW layer for saving data.
o DataStore Object for Direct Update: consists of the table of active data only; data supply via the analysis process designer (APD); SID generation: no.
5. Line Item Dimension : If the dimension table size (number of rows) exceeds 20% of the fact table size, the dimension should be flagged as a Line Item Dimension. This means that the system does not create a dimension table; instead, the SID table of the characteristic takes the role of the dimension table. Removing the dimension table has the following advantages:
When loading transaction data, no dimension IDs are generated for the entries in the dimension table. This number range operation can compromise performance, precisely in the case where a degenerated dimension is involved.
A table having a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler, and in many cases the database optimizer can choose better execution plans.
Nevertheless, it also has a disadvantage: a dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
Scenario : the 0IS_DOCID (Document Identification) InfoObject was used as a line item dimension.
6. High cardinality: This means that the dimension is expected to have a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform; different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the fact table entries. If you are unsure, do not flag a dimension as having high cardinality.
7. Different Types Of InfoCubes :
Standard InfoCube (with physical data store)
VirtualProvider (without physical data store)
Based on a data transfer process and a DataSource with 3.x InfoSource: A VirtualProvider
that allows the definition of queries with direct access to transaction data in other SAP source
systems.
Based on a BAPI: A VirtualProvider whose data is not processed in the BI system, but
externally. The data is read from an external system for reporting using a BAPI.
Based on a function module: A VirtualProvider without its own physical data store in the BI
system. A user-defined function module is used as a data source.
8. Real Time Cube:
Real-time-enabled InfoCubes can be distinguished from standard InfoCubes by their ability to support
parallel write accesses, whereas standard InfoCubes are technically optimized for read accesses. Real-
time InfoCubes are used when creating planning data. The data is written to the InfoCube by several users
at the same time. Standard InfoCubes are not suitable for this. They should be used if you only need read
access (such as for reading reference data).
Real-time-enabled InfoCubes can be filled with data using two different methods: Using the BW-BPS
transaction for creating planning data and using BW Staging. You have the option to switch the real-time
InfoCube between these two methods. From the context menu for your real-time InfoCube in the
InfoProvider tree, choose Switch Real-Time InfoCube. If Real-Time InfoCube Can Be Planned, Data
Loading not Allowed is selected by default, the Cube is filled using BW-BPS functions. If you change
this setting to Real-Time InfoCube Can Be Loaded with Data; Planning Not Allowed, you can then fill
the Cube using BW Staging.
For real-time InfoCubes, a reduced read performance is compensated for by the option to read in
parallel (transactionally) and an improved write performance.
9. Remodeling :
Remodeling is a new feature available as of NW04s BI 7.0 which enables you to change the structure of an already loaded InfoCube without disturbing the data. This feature does not yet support remodeling of DSOs and InfoObjects.
Using remodeling, a characteristic can simply be deleted, or added/replaced with a constant value, with the value of another InfoObject (in the same dimension), with the value of an attribute of another InfoObject (in the same dimension), or with a value derived using a Customer Exit.
Similarly, a key figure can be deleted, replaced with a constant value, or a new key figure can be added and populated using a constant value or a Customer Exit.
This article describes how to add a new characteristic to an InfoCube using the remodeling feature and how to populate it using a Customer Exit.
Note the following before you start the remodeling process:
Back up the existing data.
During the remodeling process the InfoCube is locked against any changes or data loads, so make sure you stall all data loads for this InfoCube until the process finishes.
If you are adding or replacing a key figure, compress the cube first to avoid inconsistencies, unless all the records in the InfoCube are unique.

Note the following after you finish the remodeling process and start daily loads and querying this InfoCube:
All the objects dependent on the InfoCube, such as transformations and MultiProviders, will have to be re-activated.
If aggregates exist, they need to be reconstructed.
Adjust queries based on this InfoCube to accommodate the changes made.
If a new field was added using remodeling, don't forget to map it in the transformation rules for future data loads.

The code is written in SE24 by creating a new class. The interface for the class should be
IF_RSCNV_EXIT and code is written in the Method IF_RSCNV_EXIT~EXIT.
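A minimal skeleton of such a class, assuming (as stated above) that EXIT is the method of interface IF_RSCNV_EXIT; the class name ZCL_REMODEL_FILL is hypothetical, and the actual importing/changing parameters of the EXIT method should be taken from the interface definition in SE24.

CLASS zcl_remodel_fill DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_rscnv_exit.
ENDCLASS.

CLASS zcl_remodel_fill IMPLEMENTATION.
  METHOD if_rscnv_exit~exit.
    " Derive the value of the new characteristic here, for example by reading
    " another characteristic of the record handed over by the remodeling run.
  ENDMETHOD.
ENDCLASS.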
10.Difference between With Export and Without Export Migration of 3.x datasource :
With Export - Allows you to revert back to 3.x DataSource, Transfer Rules, etc...when you choose
this option (Recommended) Without Export - Does not allow you to ever revert back to the
3.x DataSource.
11. Difference between Calculated Key Figure and Formula :
The replacement of formula variables with the processing type Replacement Path acts differently in
calculated key figures and formulas:
If you use a formula variable with Replacement from the Value of an Attribute in a calculated key
figure, then the system automatically adds the drilldown according to the reference characteristic for the
attribute. The system then evaluates the variables for each characteristic value for the reference
characteristic. Afterwards, the calculated key figure is calculated and, subsequently, all of the other
operations are executed, meaning all additional, calculated key figures, aggregations, and formulas. The
system only calculates the operators, which are assembled in the calculated key figure itself, before the
aggregation using the reference characteristic.
If you use a formula variable with Replacement from the Value of an Attribute in a formula element,
then the variable is only calculated if the reference characteristic is uniquely specified in the respective
row, column, or in the filter.
12. Constant Selection :
In the Query Designer, you use selections (e.g. Characteristic restriction in Restricted Key Figure) to
determine the data you want to display at the report runtime. You can alter the selections at runtime using
navigation and filters. This allows you to further restrict the selections. The Constant Selection function
allows you to mark a selection in the Query Designer as constant. This means that navigation and filtering
have no effect on the selection at runtime.
13. Customer Exit for Query Variables :
The customer exit for variables is called three times maximally. These three steps are called I_STEP.
The first step (I_STEP = 1) is before the processing of the variable pop-up and gets called for every
variable of the processing type customer exit. You can use this step to fill your variable with default
values.
The second step (I_STEP = 2) is called after the processing of the variable pop-up. This step is
called only for those variables that are not marked as ready for input and are set to mandatory variable
entry.
The third step (I_STEP = 3) is called after all variable processing and gets called only once and not per
variable. Here you can validate the user entries.
Please note that you cannot overwrite the user input values into a variable with this customer
exit. You can only derive values for other variables or validate the user entries.
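As a hedged illustration only: the customer exit is typically implemented in include ZXRSRU01 behind function module EXIT_SAPLRRS0_001 (enhancement RSR00001). The variable name ZVAR_CURMONTH is a hypothetical example, and the line type used for E_T_RANGE (with fields SIGN, OPT, LOW, HIGH) should be verified in your system.

  DATA: l_s_range TYPE rsr_s_rangesid.   " verify the E_T_RANGE line type in your release

  CASE i_vnam.
    WHEN 'ZVAR_CURMONTH'.                " hypothetical customer-exit variable
      IF i_step = 1.                     " before the variable pop-up: propose a default
        CLEAR l_s_range.
        l_s_range-sign = 'I'.
        l_s_range-opt  = 'EQ'.
        l_s_range-low  = sy-datum(6).    " current calendar month YYYYMM
        APPEND l_s_range TO e_t_range.
      ENDIF.
  ENDCASE.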
14. How to create Generic Datasource using Function Module :
A structure is created first for the extract structure, which will contain all the DataSource fields. Then a function module is created by copying the FM RSAX_BIW_GET_DATA_SIMPLE, and the code is modified as per the requirement.
For delta functionality, if the base tables (from where the data will be fetched) contain date and time fields, include a dummy field (timestamp) in the extract structure created for the FM and use this field in the code (by splitting the timestamp into date and time).
Type-Pools : SBIWA, SRSC. A hedged sketch of the read loop is shown below.
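A condensed sketch of the core read loop of such a function module, conceptually following the copied template RSAX_BIW_GET_DATA_SIMPLE: the source table VBAK and the selection on AUART are placeholders, and the static variables (S_COUNTER_DATAPAKID, S_CURSOR, S_S_IF) as well as the full interface (I_REQUNR, I_T_SELECT, I_T_FIELDS, E_T_DATA, exception NO_MORE_DATA, ...) come from that template and should be verified there.

* First call of a request: open a database cursor for the requested fields.
  IF s_counter_datapakid = 0.
    OPEN CURSOR WITH HOLD s_cursor FOR
      SELECT (s_s_if-t_fields) FROM vbak       " placeholder source table
        WHERE auart = 'ZSCC'.                  " placeholder selection
  ENDIF.

* Every call: fetch the next data package; signal the end of data to the service API.
  FETCH NEXT CURSOR s_cursor
    APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
    PACKAGE SIZE s_s_if-maxsize.
  IF sy-subrc <> 0.
    CLOSE CURSOR s_cursor.
    RAISE no_more_data.
  ENDIF.
  s_counter_datapakid = s_counter_datapakid + 1.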
15. In which scenario you have used Generic DataSource ?
We had a requirement to send Contract (Sales Order) data from BI to MERICS (external system).
Selection criteria to extract data was
o Billing Plan Date(FPLT-AFDAT) < Current month date
o Billing Status(FPLT-FKSAF) = Not yet Processed
o Contract Type(VBAK-AUART) = Fixed Price (ZSCC)
o Item Category(VBAP-PSTYV) = ZSV2
We could have used DataSource 2LIS_11_VAITM and enhanced it for the FPLT fields, but the problem was that a status change in a billing plan would not be captured by DataSource 2LIS_11_VAITM.
Therefore we created a generic DataSource using a function module.
16. Generic DS Safety Interval Lower Limit and Upper Limit? What are the delta-specific fields? When to choose "New status for changed records" and when "Additive Delta"?
Safety Interval Upper Limit :
The upper limit for safety interval contains the difference between the current highest value at the time of
the delta or initial delta extraction and the data that has actually been read. If this value is initial, records
that are created during extraction cannot be extracted."
This would mean that if your extractor takes half an hour to run , then ideally your safety upper limit
should be half hour or more , this way records created during extraction are not missed.
For example : If you start extraction at 12:00:00 with no safety interval and then your extract runs for 15
minutes , the delta pointer will read 12:15:00 and subsequent delta will read records created / changed on
or after 12:15:00 - this would mean that all records created / changed during extraction are skipped.
Estimate the extraction time for your DataSource and then set the safety upper limit accordingly so that no records are skipped. If the delta is additive, you also need to be careful not to double your records: either extract records during periods of very low activity or use smaller safety limits to make sure data does not get duplicated.

Safety Interval Lower Limit : This field contains the value taken from the highest value of the previous
delta extraction to determine the lowest value of the time stamp for the next delta extraction.
For example: A time stamp is used to determine a delta. The extracted data is master data: The system
only transfers after-images that overwrite the status in the BW. Therefore, a record can be extracted into
the BW for such data without any problems.
Taking this into account, the current time stamp can always be used as the upper limit when extracting:
The lower limit of the next extraction is not seamlessly joined to the upper limit of the last extraction.
Instead, its value is the same as this upper limit minus a safety margin. This safety margin needs to be big
enough to contain all values in the extraction which already had a time stamp when the last extraction was
carried out but which were not read. Not surprisingly, records can be transferred twice. However, for the
reasons above, this is unavoidable.
1. If delta field is Date (Record Create Date or change date), then use Upper Limit of 1 day.
This will load Delta in BW as of yesterday. Leave Lower limit blank.
2. If delta field is Time Stamp, then use Upper Limit of equal to 1800 Seconds (30 minutes).
This will load Delta in BW as of 30 minutes old. Leave Lower limit blank.
3. If delta field is a Numeric Pointer i.e. generated record # like in GLPCA table, then use
Lower Limit. Use count 10-100. Leave upper limit blank. If value 10 is used then last 10
records will be loaded again. If a record is created when load was running, those records
may get lost. To prevent this situation, lower limit can be used to backup the starting
sequence number. This may result in some records being processed more than once;
therefore, be sure this DataSource is only feeding an ODS object.
Delta Specific Fields :
o TimeStamp - The field is a DEC15 field which always contains the time
stamp of the last change to a record in the local time format.
o Calendar Day - The field is a DATS8 field which always contains the
day of the last change.
o Numeric Pointer - The field contains another numerical pointer that
appears with each new record.
Additive Delta :
The key figures for extracted data are added up in BW. DataSources with this delta type can supply data
to ODS objects and InfoCubes.
New status for changed records :
Each record to be loaded delivers the new status for the key figures and characteristics. DataSources with
this delta type can write to ODS objects or master data tables.

17. How to fill the setup tables, and the related transactions?
The setup tables are deleted with transaction LBWG and refilled with the application-specific OLI*BW transactions (e.g. OLI7BW for sales orders), reachable via SBIW; see also the setup-table question earlier in this document.
18. Maximum characteristics and key figures allowed in an InfoCube?
Max. characteristics per dimension: 248
Max. key figures per InfoCube: 233
19. Different Types of DTP :
o Standard DTP - Standard DTP is used to update data from PSA to data targets ( Info cube, DSO
etc).
o Direct Access DTP - DTP for Direct Access is the only available option for VirtualProviders.
o Error DTP - An error DTP is used to update error records from the error stack to the corresponding data targets.
20. How to Create Optimized InfoCube ?
o Define lots of small dimensions rather than a few large dimensions.
o The size of the dimension tables should account for less than 10% of the fact table.
o If the size of the dimension table amounts to more than 10% of the fact table, mark the dimension as
a line item dimension.
21. Difference between DSO and Cube
Use:
o DSO: consolidation of data in the data warehouse layer; loading delta records that can subsequently be updated to InfoCubes or master data tables; operative analysis (when used in the operational data store).
o InfoCube: aggregation and performance optimization for multidimensional reporting; analytical and strategic data analysis.
Type of data:
o DSO: non-volatile data (when used in the data warehouse layer); volatile data (when used in the operational data store); transactional, document-type data (line items).
o InfoCube: non-volatile data; aggregated data, totals.
Type of data update:
o DSO: overwrite (in rare cases: addition).
o InfoCube: addition only.
Data structure:
o DSO: flat, relational database tables with semantic key fields.
o InfoCube: enhanced star schema (fact table and dimension tables).
Type of data analysis:
o DSO: reporting at a high level of granularity (flat reporting); the number of query records should be strictly limited by the choice of key fields; individual document display.
o InfoCube: multidimensional data analysis at a low level of granularity (OLAP analysis); use of InfoCube aggregates; drill-through to document level (stored in DataStore objects) possible using the report-report interface.
22. Give an example where a DSO is used for addition, not overwrite.
A typical case is a DataSource that delivers an additive delta (only the change to a key figure rather than its new status, as described in question 16 above): here the key figures in the DSO update are set to addition instead of overwrite so that the deltas sum up correctly.
23. Difference between 3.x and 7.0
1. In Infosets now you can include Infocubes as well.
2. The Remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.
3. The BI Accelerator (for now only for InfoCubes) helps in reducing query run time by almost a factor of 10 - 100. The BI Accelerator is a separate box and would cost more; vendors for these would be HP or IBM.
4. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP (Enterprise Portal) resource in your project for implementing the portal.
5. Search functionality has improved: you can search for any object, unlike in 3.5.
6. Transformations are in and routines are passe; you can always revert to the old transactions too.
7. The Data Warehousing Workbench replaces the Administrator Workbench.
8. Functional enhancements have been made for the DataStore object: New type of DataStore object
Enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12 The Data Source:There is a new object concept for the Data Source. Options for direct access to
data have been enhanced. From BI, remote activation of Data Sources is possible in SAP source systems.
13.There are functional changes to the Persistent Staging Area (PSA).
14.BI supports real-time data acquisition.
15. SAP BW is now known formally as BI (part of NetWeaver 2004s). It implements the Enterprise
Data Warehousing (EDW). The new features/ Major differences include:
a) Renamed ODS as DataStore.
b) Inclusion of the write-optimized DataStore object, which does not have a change log and whose requests do not need any activation
c) Unification of Transfer and Update rules
d) Introduction of "End Routine" and "Expert Routine"
e) Push of XML data into the BI system (into the PSA) without Service API or Delta Queue
f) Introduction of the BI Accelerator, which significantly improves performance
g) Load through the PSA has become a must; there is no option to bypass the PSA
16. Load through the PSA is now mandatory; you can't skip it, and there is also no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaced the Transfer and Update rules. In the Transformation you can now write Start, Expert and End Routines during data load. New features in BI 7 compared to earlier versions:
i. New data flow capabilities such as Data Transfer Process (DTP), Real time data Acquisition
(RDA).
ii. Enhanced and graphical transformation capabilities such as Drag and Relate options.
iii. One level of transformation, which replaces the Transfer Rules and Update Rules.
iv. Performance optimization includes the new BI Accelerator feature.
v. User management (includes new concept for analysis authorizations) for more flexible BI end user
authorizations.

24. What is the Extended Star Schema ?
25. What is Compression, RollUp , Attribute Change Run?
RollUp : You can automatically roll up and transfer into the aggregate requests in the InfoCube with
green traffic light status, that is, with saved data quality. The process terminates if no active, initially
filled aggregates exist in the system.
Compression : After rollup, the InfoCube content is automatically compressed. The system does this
by deleting the request IDs, which improves performance.
If aggregates exist, only requests that have already been rolled up are compressed. If no aggregates
exist, the system compresses all requests that have yet to be compressed.
First we need to do the aggregate roll-up before compression: when we roll up data load requests, we roll them up into all the aggregates of the InfoCube and then carry on with the compression of the cube. For performance and disk space reasons, it is recommended to roll up a request as soon as possible and then compress the InfoCube.
When you compress the cube, the "COMPRESS AFTER ROLLUP" option ensures that all the data is rolled up into aggregates before the compression is done.
Compression with zero elimination: zero elimination means that data rows in which all key figures = 0 are deleted.
26. What is a Change Run? How do you resolve an attribute change run that fails because of a locking problem?
27. What errors have you faced during the transport of objects?
28. What steps need to be followed when a process in a process chain fails and we need to set it to green to proceed further? (A hedged ABAP sketch of these steps follows the list.)
1. Right-click on the failed process and go to Display Messages. From the Chain tab get the VARIANT and INSTANCE values. In some cases INSTANCE is not available; in that case we take the Job Count number.
2. Go to table RSPCPROCESSLOG. Enter the VARIANT and INSTANCE and get LOGID, TYPE, BATCHDATE and BATCHTIME.
3. Execute program (SE38) RSPC_PROCESS_FINISH, providing LOGID, CHAIN, TYPE, VARIANT, INSTANCE, BATCHDATE, BATCHTIME and STATE = G.
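A hedged ABAP sketch of step 2; the field names of RSPCPROCESSLOG (VARIANTE, INSTANCE, LOG_ID, ...) are assumptions based on the steps above and should be verified in SE11 before use, and the two literal selection values are hypothetical.

  DATA: ls_log TYPE rspcprocesslog.

* Step 2: read the log entry of the failed process (VARIANT and INSTANCE
* come from "Display Messages").
  SELECT SINGLE * FROM rspcprocesslog INTO ls_log
    WHERE variante = 'ZPC_LOAD_SALES'
      AND instance = '4711'.

* Step 3: enter LS_LOG-LOG_ID, LS_LOG-TYPE, LS_LOG-BATCHDATE and LS_LOG-BATCHTIME,
* together with the chain name, variant, instance and STATE = 'G', on the
* selection screen of report RSPC_PROCESS_FINISH (SE38) to set the process green.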
29. What is Rule Group in Transformation. Give example.
A rule group is a group of transformation rules. It contains one transformation rule for each key field of
the target. A transformation can contain multiple rule groups.
Rule groups allow you to combine various rules. This means that for a characteristic, you can create
different rules for different key figures.
Few key points about Rule Groups:
o A transformation can contain multiple rule groups.
o A default rule group is created for every transformation called as Standard Group. This group
contains all the default rules.
o Standard rule group cannot be deleted. Only the additional created groups can be deleted.
Example :
Records in source system: Actual and Plan Amount are represented as separate fields.
CompanyCode Account FiscalYear/Period ActualAmount Plan Amount
1000 5010180001 01/2008 100 400
1000 5010180001 02/2008 200 450
1000 5010180001 03/2008 300 500
Records in business warehouse: Single KeyFigure represents both Actual and Plan Amount. They are
differentiated using the characteristic Version (Version = 010 represents Actual Amount and Version =
020 represents Plan Amount in this example).

CompanyCode Account FiscalYear/Period Version Amount
1000 5010180001 01/2008 010 100
1000 5010180001 02/2008 010 200
1000 5010180001 03/2008 010 300
1000 5010180001 01/2008 020 400
1000 5010180001 02/2008 020 450
1000 5010180001 03/2008 020 500
To achieve this, in the Standard rule group (target) we set the characteristic Version to the constant value 010 and use direct assignment from Actual Amount to Amount in the target field.
Another rule group (new rule group) is then created in which we set the characteristic Version to the constant value 020 and use direct assignment from Plan Amount to Amount in the target field.
30. Why can't we use a DSO to load inventory data?
ODS objects cannot contain any stock (non-cumulative) key figures (see Notes 752492 and 782314) and, among other things, they do not have a validity table, which would be necessary. Therefore, ODS objects cannot be used as non-cumulative InfoCubes; that is, they cannot calculate stocks in terms of BW technology.
31. Processing Type Replacement Path for Variable with Examples.
You use the Replacement Path to specify the value that automatically replaces the variable when you
execute the query or Web application.
The processing type Replacement Path can be used with characteristic value variables, text
variables andformula variables.
o Text and formula variables with the processing type Replacement Path are replaced by a
corresponding characteristic value.
o Characteristic value variables with the processing type Replacement Path, are replaced by the results
of a query.
Replacement with a characteristic value :
Replace Variable with
Key The variable value is replaced with the characteristic key.
External Characteristic Value Key The variable value is replaced with an external value of the
characteristic (external/internal conversion).
Name (Text) - The variable value is replaced with the name of the characteristic. Note that formula variables have to contain numbers in their names so that the formula variable represents a value after replacement.
Attribute Value The variable value is replaced with the value of an attribute. An additional field
appears for entering the attribute. When replacing the variable with an attribute value, you can create a
reference to the characteristic for which the variable is defined. Choose the attribute Reference to
Characteristic (Constant 1). By choosing this attribute, you can influence the aggregation behavior of
calculated key figures and obtain improved performance during calculation.
Hierarchy Attribute The variable value is replaced with a value of a hierarchy attribute. An
additional field appears for entering the hierarchy attribute. You need this setting for sign reversal with
hierarchy nodes
o Example: Replacement with Query You want to insert the result for the query Top 5 products as a
variable in the query Sales Calendar year / month.
1. Select the characteristic Product and from the context menu, choose New Variable. The Variable
Wizard appears.
2. Enter a variable name and a description.
3. Choose the processing type Replacement Path.
4. Choose Next. You reach the Replacement Path dialog step.
5. Enter the query Top 5 Products.
6. Choose Next. You reach the Save Variable dialog step
7. Choose Exit
You are now able to insert the variable into the query Sales Calendar year / month. This allows you to determine how the sales for these five top-selling products have developed month by month.
32. Pseudo Delta
This is different from the normal delta in that, if you look at the data load, it will say FULL LOAD instead of DELTA. But as a matter of fact, it is only pulling the records that were changed or created after the previous load. This can be achieved in multiple ways, such as logic in the InfoPackage routine, or selections identifying only the changed records, and so on.
In my past experience, we had code in the InfoPackage that looked at when the previous request was loaded and, using that date, calculated the month and loaded data for which CALMONTH is between the previously loaded date and today's date (since the data target is an ODS, even if there is a duplicate selection, overwriting will happen, thus not affecting data integrity). A hedged sketch of this date logic follows.
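A sketch of the date logic such an InfoPackage selection routine might use; the form interface of a 3.x InfoPackage routine (the generated selection table, return code etc.) is created by the system, so only the month calculation is shown, and reading the previous load date from the request administration (e.g. table RSREQDONE) is an assumption rather than the exact method described above.

  DATA: lv_last_load TYPE sy-datum,
        lv_from      TYPE /bi0/oicalmonth,
        lv_to        TYPE /bi0/oicalmonth.

* Placeholder: determine the date of the previous successful request here.
  lv_last_load = sy-datum - 30.

  lv_from = lv_last_load(6).   " YYYYMM of the previous load
  lv_to   = sy-datum(6).       " YYYYMM of today

* These values would then be written into the generated selection table of the
* routine as an interval on 0CALMONTH (SIGN = 'I', OPTION = 'BT').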
33. Flat Aggregates
If you have fewer than 16 characteristics in an aggregate (including the time, unit and data package dimensions), then the characteristic SIDs are stored as line items, meaning the E fact table of the aggregate (assuming the aggregate is compressed) will have up to 16 columns containing only the SIDs.
You do not have any further tables, such as dimension tables, for such an aggregate - hence the name FLAT, meaning that the aggregate is more or less a standard table with the necessary SIDs and nothing else.
Flat aggregates can be rolled up on the DB server (without loading data into the application server).
34. Master Data Load failure recovery Steps :
Issue :
A delta update for a master data DataSource was aborted. The data was sent to BW in this case but was not posted in the PSA. In addition, there are as yet no executed LUWs in the tRFC outbound of the source system. Therefore, there is no way of reading the data from a buffer and transferring it to the master data tables.
Solution :
Import the next PI or CRM patch into your source system and execute the RSA1BDCP report.
Alternatively, you can import the attached correction instructions into your system and create an
executable program in the customer namespace for this using transaction SE38, into which you copy the
source code of the correction instructions. Execute the report.
The report contains 3 parameters:
1. P_OS (DataSource): Name of the DataSource
2. P_RS (BIW system): logical name of the BW system
3. P_TIME (generation time stamp):The generation date and time of the first change pointer, which are
transferred into BW during the next upload, should be displayed as YYYYMMDDHHMMSS (for
example, 20010131193000 for January 31, 2001, 19:30:00).(e.g. 20010131193000 for 31.01.2001,
19:30:00).For this time stamp select the time stamp of the last successful delta request of this DataSource
in the corresponding BW system. After the report is executed, a dialog box appears with the number of
records that should have the 'unread' status. Check the plausibility of this number of records. It should be
larger than or the same as the number of records for the last, terminated request.
After you execute the report, change the status of the last (terminated) request in BW to 'green' and
request the data in 'delta' mode.

35. What is KPI ?
(1) Predefined calculations that render summarized and/or aggregated information, which is useful in
making strategic decisions.
(2) Also known as Performance Measure, Performance Metric measures. KPIs are put in place and
visible to an organization to indicate the level of progress and status of change efforts in an
organization. KPIs are industry-recognized measurements on which to base critical business decisions.
In SAP BW, Business Content KPIs have been developed based upon input from customers, partners,
and industry experts to ensure that they reflect best practices.

36. Performance Monitoring and Analysis tools in BW:
a) System Trace: Transaction ST01 lets you do various levels of system trace such as authorization
checks, SQL traces, table/buffer trace etc. It is a general Basis tool but can be leveraged for BW.
b) Workload Analysis: You use transaction code ST03
c) Database Performance Analysis: Transaction ST04 gives you all you need to know about what is happening at the database level.
d) Performance Analysis: Transaction ST05 enables you to do performance traces in different areas, namely SQL trace, enqueue trace, RFC trace and buffer trace.
e) BW Technical Content Analysis: SAP Standard Business Content 0BWTCT that needs to be
activated. It contains several InfoCubes, ODS Objects and MultiProviders and contains a variety of
performance related information.
f) BW Monitor: You can get to it independently of an InfoPackage by running transaction RSMO or
via an InfoPackage. An important feature of this tool is the ability to retrieve important IDoc information.
g) ABAP Runtime Analysis Tool: Use transaction SE30 to do a runtime analysis of a transaction,
program or function module. It is a very helpful tool if you know the program or routine that you suspect
is causing a performance bottleneck.
37. Runtime Error MESSAGE_TYPE_X when opening an info package in BW
You sometimes run into error message 'Runtime error MESSAGE_TYPE_X' when you try to open an
existing delta infopackage. It won't even let you create a new infopackage, it throws the same error. The
error occurs in the FUNCTION-POOL FORM RSM1_CHECK_FOR_DELTAUPD.
This error typically occurs when delta is not in sync between source system and BW system. It might
happen when you copy new environments or when you refresh you QA or DEV boxes from production.
Solution : Try to open an existing full InfoPackage if you have one; you will be able to open an existing full InfoPackage because it is not going to check delta consistency. After you open the InfoPackage, remove the delta initialization from it as follows: go to menu Scheduler -> Initialization options for source system -> select the entry -> click on the delete button.
After that you will be able to open the existing delta InfoPackage. You can re-initialize the delta and start using the InfoPackage.
Follow the steps in note 852443 if you do not have an existing full InfoPackage. There are many troubleshooting steps in this note; you can go through all of them, or do what I do and follow the steps below.
1. In table RSSDLINIT check for the record with the problematic datasource.
2. Get the request number (RNR) from the record.
3. Go to RSRQ transaction and enter the RNR number and say execute, It will show you the monitor
screen of actual delta init request.
4. Now change the status of the request to red.
That's it. Now you will be able to open your delta infopackage and run it. Of course you need to do
your delta init again as we made last delta init red. These steps have always worked for me, follow the
steps in the OSS note if this doesn't work for you.
38. KPIs for FI datasource :
Accounts Payable :
o DataSource 0FI_AP_4 (Vendors: Line Items with Delta Extrcation):
DataSource Field BI Info Object
DMSOL 0DEBIT_LC (Debit Amount in local currency)
DMHAB 0CREDIT_LC (Credit Amount in local currency)
DMSHB 0DEB_CRE_LC (Amount in Local currency with
+/- signs
WRSOL 0DEBIT_DC (Debit amount in Foreign currency)
WRHAB 0CREDIT_DC (Credit amount in Foreign
currency)
WRSHB 0DEB_CRE_DC (Foreign currency amount with
+/- signs
o DataSource 0FI_AP_6 (Vendor Sales Figures via Delta Extraction) :
DS Field BI IO
UM01S 0DEBIT (Total Debit Postings)
UM01H 0CREDIT (Total credit postings)
UM01K 0BALANCE (Cumulative Balance)
UM01U 0SALES (Sales for the Period)
Accounts Receivable :
o DataSource 0FI_AR_4 (Cu
stomers: Line Items with Delta Extraction)
DS Field BI IO
ZBD1T 0DSCT_DAYS1 (Days for Cash Discount 1)
ZBD2T 0DSCT_DAYS2 (Days for Second Cash
Discount)
ZBD3T 0NETTERMS (Deadline for Net Conditions)
DMSOL 0DEBIT_LC (Debit amount in local currency)
DMHAB 0CREDIT_LC (Credit amount in local currency)
DMSHB 0DEB_CRE_LC (Amount in local currency with
+/- signs
General Ledger :
o DataSource 0FI_GL_4
DS Field BI IO
WRBTR 0AMOUNT (Amount)
DMBTR 0VALUE_LC (Amount in local currency)
DMSOL 0DEBIT_LC (Debit amount in local currency)
DMHAB 0CREDIT_LC (Credit amount in local currency)
DMSHB 0DEB_CRE_LC (Amount in local currency with
+/- signs
WRSOL 0DEBIT_DC (Debit amount in Foreign currency)
WRHAB 0CREDIT_DC (Credit amount in Foreign
currency)
o DataSource 3FI_GL_0L_TT (Leading
Ledger (Totals))
DataSource 3FI_GL_Y1_TT (Non-leading ledger (Statutory) (Totals) - Y1)
DS Field BI IO
DEBIT 0DEBIT (Total Debit Postings)
CREDIT 0CREDIT (Total Credit postings)
BALANCE 0BALANCE (Cumulative Balance)
TURNOVER 0SALES (Sales for the Period)
QUANTITY 0QUANTITY (Quantity)
39. Example of Display Key Figure used in Master Data
In 0MATERIAL: the display key figures are 0HEIGHT (Height), 0LENGTH (Length), 0GROSS_WT
(Gross Weight) and 0GROSS_CONT (Gross Content).
40. What custom reports have you created in your project?
41. InfoCube Optimization:
o When designing an InfoCube, it is most important to keep the size of each dimension table as small
as possible.
o One should also try to minimise the number of dimensions.
o Both of these objectives can usually be met by building your dimensions with characteristics that are
related to each other in a 1:1 manner (for example each state is in one country) or only have a small
number of entries.
o Generally characteristics that have a large number of entries should be in a dimension by themselves,
which is flagged as a "line item" dimension.
o Characteristics that have a "many to many" relationship to each other should not be placed in the
same dimension otherwise the dimension table could be huge.
o It is generally recommended to give a characteristic its own dimension and flag it as a "line item"
dimension if the dimension table's size (number of rows) exceeds about 10% of the fact table's size
(for example, with a 10-million-row fact table, a dimension table above roughly 1 million rows is a
line-item candidate).
42. How do you handle Init without data transfer through DTP ?
Under the Execute tab of the DTP, select the processing mode "No Data Transfer; Delta Status in Source: Fetched".
Interview Questions & Answers
1. What is data integrity?
Data integrity is about the accuracy and consistency of data in the database; in the loading context it
usually means eliminating duplicate or inconsistent entries, i.e. no duplicate data.
2. What is the difference between SAP BW 3.0B and SAP BW 3.1C, 3.5?
The best answer here is Business Content. There is additional Business Content provided
with BW 3.1C that wasn't found in BW 3.0B. SAP has a pretty decent reference library on
their Web site that documents that additional objects found with 3.1C.
3. What is the difference between SAP BW 3.5 and 7.0?
SAP BW 7.0 is called SAP BI and is one of the components of SAP NetWeaver 2004s. There
are many differences between them in areas like extraction, EDW, reporting, analysis
administration and so forth. For a detailed description, please refer to the documentation
given on help.sap.com.
1. Update rules and transfer rules are no longer mandatory in the data flow.
2. Instead of update rules and transfer rules, a new concept called transformations was introduced.
3. A new DataStore type (write-optimized) was introduced in addition to the standard and transactional ones.
4. The ODS was renamed to DataStore object to meet global data warehousing standards.
There are also many more changes in the functionality of the BEx Query Designer, WAD, etc.
5. In InfoSets you can now include InfoCubes as well.
6. The remodeling transaction helps you add new key figures and characteristics and handles
historical data as well without much hassle. This facility is available only for InfoCubes.
7. The BI Accelerator (for now only for InfoCubes) helps reduce query runtime by almost a factor of
10 - 100. The BI Accelerator is a separate appliance and costs extra; vendors are HP and IBM.
8. Monitoring has been improved with a new portal-based cockpit, which means you need an EP
(Enterprise Portal) resource in your project to implement the portal.
9. Search functionality has improved: you can search for any object, unlike in 3.5.
10. Transformations are in and routines are passe! Yes, you can always revert to the old
transactions too.
4. What is index?
Indices/Indexes are used to locate needed records in a database table quickly. BW uses two
types of indices, B-tree indices for regular database tables and bitmap indices for fact tables
and aggregate tables.
5. What is KPIs (Key Performance Indicators)?
(1) Predefined calculations that render summarized and/or aggregated information, which is
useful in making strategic decisions.
(2) Also known as Performance Measure, Performance Metric measures. KPIs are put in
place and visible to an organization to indicate the level of progress and status of change
efforts in an organization.
KPIs are industry-recognized measurements on which to base critical business decisions.
In SAP BW, Business Content KPIs have been developed based upon input from customers,
partners, and industry experts to ensure that they reflect best practices.
6. What is the use of process chain?
The use of Process Chain is to automate the data load process.
They are used to automate all processes, including data loads and administrative tasks such as index
creation/deletion and cube compression, and they give highly controlled data loading.
7. Difference between Display Attribute and Navigational Attribute?
The basic difference between the two is that navigational attributes can be used to drilldown
in a Bex report whereas display attributes cannot be used so. A navigational attribute would
function more or less like a characteristic within a cube.
To enable these features of a navigational attribute, the attribute needs to be made
navigational in the cube apart from the master data info-object.
The only difference is that navigation attributes can be used for navigation in queries, like
filtering, drill-down etc.
You can also use hierarchies on navigational attributes, as it is possible for characteristics.
An extra feature is the possibility to "change your history" (see the relevant time scenarios): if a
navigational attribute changes for a characteristic, the change applies to all records in the past.
A disadvantage is a slowdown in query performance.
8. If there are duplicate data in Cubes, how would you fix it?
Delete the request ID, Fix data in PSA or ODS and re-load again from PSA / ODS.
9. What are the differences between ODS and Info Cube?
An ODS holds transaction-level data; it is just a flat table and is not based on the multidimensional
model. An ODS has three tables:
1. Active Data table (A table containing the active data)
2. Change log Table (Contains the change history for delta updating from the ODS Object
into other data targets, such as ODS Objects or InfoCubes for example.)
3. Activation Queue table (For saving ODS data records that are to be updated but that
have not yet been activated. The data is deleted after the records have been activated)
Whereas Cube holds aggregated data which is not as detailed as ODS. Cube is based on
multidimensional model.
An ODS is a flat structure. It is just one table that contains all data.
Most of the time you use an ODS for line item data. Then you aggregate this data to an info
cube
One major difference is the manner of data storage: in an ODS, data is stored in flat tables (ordinary
transparent tables), whereas a cube is composed of multiple tables arranged in a star schema and
joined by SIDs. The purpose of the cube is multidimensional reporting.
In an ODS we can delete or overwrite the loaded data, but in a cube only additive updates are possible,
no overwrite.
10. What is the use of change log table?
Change log is used for delta updates to the target; it stores all changes per request and
updates the target.
11. Difference between InfoSet and Multiprovider
a) The operation in a MultiProvider is a "union", whereas in an InfoSet it is either an "inner join" or
an "outer join".
b) You can add Info-cube, ODS, Info-object in Multiprovider whereas in an Infoset you can
only have ODS and Info-object.
c) An InfoSet is an InfoProvider that joins data from ODS objects and InfoObjects (with master
data). The join may be an outer join or an inner join, whereas a MultiProvider is created on all
types of Infoproviders - Cubes, ODS, Info-object. These InfoProviders are connected to one
another by a union operation.
d) A union operation is used to combine the data from these objects into a MultiProvider.
Here, the system constructs the union set of the data sets involved. In other words, all
values of these data sets are combined. As a comparison: InfoSets are created using joins.
These joins only combine values that appear in both tables. In contrast to a union, joins
form the intersection of the tables.
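To make the union-versus-join distinction concrete, here is a minimal ABAP sketch. This is not BW
code; the field values and table names are invented purely for illustration. The "union" keeps every
record from both providers, while the "inner join" keeps only the customers that appear in both.

REPORT z_union_vs_join_demo.
* Illustration only: a MultiProvider behaves like a union (all records),
* an InfoSet behaves like a join (only records with matching keys).
TYPES: BEGIN OF ty_rec,
         customer TYPE c LENGTH 10,
         amount   TYPE i,
       END OF ty_rec.

DATA: lt_cube      TYPE STANDARD TABLE OF ty_rec,
      lt_ods       TYPE STANDARD TABLE OF ty_rec,
      lt_union     TYPE STANDARD TABLE OF ty_rec,
      lt_join      TYPE STANDARD TABLE OF ty_rec,
      ls_cube      TYPE ty_rec,
      ls_ods       TYPE ty_rec,
      lv_union_cnt TYPE i,
      lv_join_cnt  TYPE i.

ls_cube-customer = 'C1'. ls_cube-amount = 100. APPEND ls_cube TO lt_cube.
ls_cube-customer = 'C2'. ls_cube-amount = 200. APPEND ls_cube TO lt_cube.
ls_ods-customer  = 'C2'. ls_ods-amount  = 50.  APPEND ls_ods  TO lt_ods.
ls_ods-customer  = 'C3'. ls_ods-amount  = 75.  APPEND ls_ods  TO lt_ods.

* Union (MultiProvider-like): all records from both sets -> C1, C2, C2, C3
APPEND LINES OF lt_cube TO lt_union.
APPEND LINES OF lt_ods  TO lt_union.

* Inner join (InfoSet-like): only customers present in both sets -> C2
LOOP AT lt_cube INTO ls_cube.
  LOOP AT lt_ods INTO ls_ods WHERE customer = ls_cube-customer.
    APPEND ls_cube TO lt_join.
  ENDLOOP.
ENDLOOP.

DESCRIBE TABLE lt_union LINES lv_union_cnt.
DESCRIBE TABLE lt_join  LINES lv_join_cnt.
WRITE: / 'Union rows:', lv_union_cnt, 'Join rows:', lv_join_cnt.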
12. What is the T-code for data archiving and what is its advantage?
SARA.
Advantage: it minimizes space usage and improves query and load performance.
13. What are the Data Loading Tuning from R/3 to BW, FF to BW?
a) If you have enhanced an extractor, check your code in user exit RSAP0001 for expensive SQL
statements and nested selects, and rectify them (see the sketch after this list).
b) Watch out for ABAP code in transfer and update rules; this might slow down performance.
c) If you have several extraction jobs running concurrently, there probably are not enough
system resources to dedicate to any single extraction job. Make sure you schedule these jobs
judiciously.
d) If you have multiple application servers, try to do load balancing by distributing the load
among different servers.
e) Build secondary indexes on the underlying tables of a DataSource to correspond to the fields in
the selection criteria of the DataSource (indexes on source tables).
f) Try to increase the number of parallel processes so that packages are extracted in parallel instead
of sequentially (use the "PSA and Data Target in parallel" option in the InfoPackage).
g) Buffer the SID number ranges if you load a lot of data at once.
h) Load master data before loading transaction data.
i) Use SAP Delivered extractors as much as possible.
j) If your source is not an SAP system but a flat file, make sure that this file is housed on
the application server and not on the client machine. Files stored in an ASCII format are
faster to load than those stored in a CSV format.
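Regarding point (a), a common pattern that slows down an extraction user exit is a SELECT inside a loop
over the data package. Below is a hedged sketch of the fix: one prefetch per package plus in-memory
reads. EXIT_SAPLRSAP_001 is the standard function exit for transaction data in enhancement RSAP0001
and MAKT is the standard material text table, but the structure of the package table here is invented;
in the real exit the package arrives in the changing parameter (C_T_DATA) with your extract structure.

REPORT z_exit_select_sketch.
* Hedged sketch: lt_pkg stands in for the user exit's data package.
TYPES: BEGIN OF ty_pkg,
         matnr TYPE makt-matnr,
         maktx TYPE makt-maktx,
       END OF ty_pkg.

DATA: lt_pkg  TYPE STANDARD TABLE OF ty_pkg,
      ls_pkg  TYPE ty_pkg,
      lt_makt TYPE SORTED TABLE OF makt WITH UNIQUE KEY matnr spras,
      ls_makt TYPE makt.

* Bad pattern: a SELECT SINGLE FROM makt inside the LOOP below means one
* database round trip per record of the data package.
* Better: one SELECT for the whole package, then in-memory reads.
IF lt_pkg IS NOT INITIAL.
  SELECT * FROM makt
         INTO TABLE lt_makt
         FOR ALL ENTRIES IN lt_pkg
         WHERE matnr = lt_pkg-matnr
           AND spras = sy-langu.
ENDIF.

LOOP AT lt_pkg INTO ls_pkg.
  READ TABLE lt_makt INTO ls_makt
       WITH TABLE KEY matnr = ls_pkg-matnr
                      spras = sy-langu.
  IF sy-subrc = 0.
    ls_pkg-maktx = ls_makt-maktx.
    MODIFY lt_pkg FROM ls_pkg.
  ENDIF.
ENDLOOP.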
14. Performance monitoring and analysis tools in BW
a) System Trace: Transaction ST01 lets you do various levels of system trace such as
authorization checks, SQL traces, table/buffer trace etc. It is a general Basis tool but can be
leveraged for BW.
b) Workload Analysis: You use transaction code ST03
c) Database Performance Analysis: Transaction ST04 gives you all that you need to know about what
is happening at the database level.
d) Performance Analysis: Transaction ST05 enables you to do performance traces in different areas,
namely SQL trace, enqueue trace, RFC trace and buffer trace.
e) BW Technical Content Analysis: SAP Standard Business Content 0BWTCT that needs
to be activated. It contains several InfoCubes, ODS Objects and MultiProviders and contains
a variety of performance related information.
f) BW Monitor: You can get to it independently of an InfoPackage by running transaction
RSMO or via an InfoPackage. An important feature of this tool is the ability to retrieve
important IDoc information.
g) ABAP Runtime Analysis Tool: Use transaction SE30 to do a runtime analysis of a
transaction, program or function module. It is a very helpful tool if you know the program or
routine that you suspect is causing a performance bottleneck.
15. Difference between Transfer Rules and Update Rules
a) Transfer Rules:
When we maintain the transfer structure and the communication structure, we use the transfer rules
to determine how the transfer structure fields are assigned to the communication structure
InfoObjects. We can arrange a 1:1 assignment, or fill InfoObjects using routines, formulas, or
constants.
Update rules:
Update rules specify how the data (key figures, time characteristics, characteristics) is
updated to data targets from the communication structure of an InfoSource. You are
therefore connecting an InfoSource with a data target.
b) Transfer rules are linked to InfoSource, update rules are linked to InfoProvider
(InfoCube, ODS).
i. Transfer rules are source system dependant whereas update rules are Data target
dependant.
ii. The number of transfer rules equals the number of source systems for a data target.
iii.Transfer rules is mainly for data cleansing and data formatting whereas in the update
rules you would write the business rules for your data target.
iv. Currency translations are possible in update rules.
c) Using transfer rules you can assign DataSource fields to corresponding InfoObjects of the
InfoSource. Transfer rules give you possibility to cleanse data before it is loaded into BW.
Update rules describe how the data is updated into the InfoProvider from the
communication structure of an InfoSource.
If you have several InfoCubes or ODS objects connected to one InfoSource you can for
example adjust data according to them using update rules.
Only in update rules: a. You can use return tables in update rules, which split an incoming data
package record into multiple records; this is not possible in transfer rules.
b. Currency conversion is not possible in transfer rules.
c. If you have a key figure that is calculated from base key figures, you do the calculation only in
the update rules (see the sketch below).
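As a small illustration of point (c), the body of a key-figure routine in BW 3.x update rules typically
looks like the sketch below. COMM_STRUCTURE, RESULT, RETURNCODE and ABORT are the standard variables the
generated routine frame provides; the quantity and price fields (and the derived amount) are invented
for the example, so treat this as a hedged sketch rather than the exact generated signature.

* Sketch of a calculated key figure inside a BW 3.x update rule routine.
* COMM_STRUCTURE / RESULT / RETURNCODE / ABORT come from the generated
* routine frame; the field names below are hypothetical.
  RESULT = COMM_STRUCTURE-quantity * COMM_STRUCTURE-price.

* RETURNCODE <> 0 skips the record, ABORT <> 0 cancels the whole package.
  RETURNCODE = 0.
  ABORT = 0.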
16. What is OSS?
OSS is the Online Service System run by SAP to support its customers.
You can access it by entering transaction OSS1, or by visiting service.sap.com and logging on with
your user name and password.
17. How to transport BW object?
Follow the steps.
i. RSA1 > Transport connection
ii. In the right window there is a category "All objects according to type".
iii. Select required object you want to transport.
iv. Expand that object type and double-click on "Select Objects"; you will get the list of objects,
select yours.
v. Continue.
vi. Go with the selection, select all your required objects you want to transport.
vii. There is icon Transport Object (Truck Symbol).
viii. Click that, it will create one request, note it down this request.
ix. Go to Transport Organizer (T.code SE01).
x. In the display tab, enter the Request, and then go with display.
xi. Check whether your transport request contains the required objects; if not, go with edit, and if
yes, "Release" the request.
That's it; your coordinator/Basis person will move this request to Quality or Production.
18. How to unlock objects in Transport Organizer?
To unlock a transport, go to SE03 --> Request/Task --> Unlock Objects.
Enter your request, select unlock and execute; this will unlock the request.
19. What is InfoPackage Group?
An InfoPackage group is a collection of InfoPackages.
20. Differences Between Infopackage Groups and Process chains
i. InfoPackage groups are used to group only InfoPackages, whereas process chains are used to
automate all processes.
ii. InfoPackage groups: used to group all relevant InfoPackages (automation of a group of
InfoPackages, for data loads only). It is possible to sequence the loads in order.
Process chains: used to automate all processes, including data loads and administrative tasks such
as index creation/deletion and cube compression, with highly controlled data loading.
iii. InfoPackage groups/event chains are older methods of scheduling/automation. Process chains are
newer and provide more capabilities: we can use ABAP programs and a lot of additional features such
as ODS activation and sending emails to users based on the success or failure of data loads.
21. What are the critical issues you faced and how did you solve it?
Find your own answer based on your experience..
22. What is Conversion Routine?
a) Conversion Routines are used to convert data types from internal format to
external/display format or vice versa.
b) These are function modules.
c) There are many such function modules; they follow the naming pattern
CONVERSION_EXIT_XXXX_INPUT and CONVERSION_EXIT_XXXX_OUTPUT.
example:
CONVERSION_EXIT_ALPHA_INPUT
CONVERSION_EXIT_ALPHA_OUTPUT
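A minimal sketch of calling the ALPHA conversion exit directly in ABAP; the function modules and their
INPUT/OUTPUT parameters are standard, the variable names are just examples:

REPORT z_alpha_demo.

DATA: lv_external TYPE c LENGTH 18 VALUE '1',
      lv_internal TYPE c LENGTH 18.

* Pad with leading zeros to the full field length -> '000000000000000001'
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = lv_external
  IMPORTING
    output = lv_internal.

* Strip the leading zeros again for display -> '1'
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
  EXPORTING
    input  = lv_internal
  IMPORTING
    output = lv_external.

WRITE: / 'Internal:', lv_internal, / 'External:', lv_external.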
23. Difference between Start Routine and Conversion Routine
In the start routine you can modify whole data packages during data loading, whereas a conversion
routine usually refers to a routine bound to an InfoObject (or data element) for conversion between
internal and display format. A sketch of a start routine is shown below.
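A typical use of a start routine is filtering whole records out of the incoming data package before any
field-level rules run. In a BW 3.x transfer/update rule start routine the package is available as the
internal table DATA_PACKAGE; the field /BIC/ZSTATUS used in this sketch is hypothetical.

* Sketch of a BW 3.x start routine body: drop records that should not be
* loaded before the individual transfer/update rules are applied.
* DATA_PACKAGE is provided by the generated routine frame.
  DELETE DATA_PACKAGE WHERE /bic/zstatus = 'D'.   " hypothetical field

* Optionally flag problems for the monitor:
* ABORT = 4.   " would cancel processing of this data package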
24. What is the use of setup tables in LO extraction?
The setup tables are used to store the historical data before it is updated to the target system. Once
you fill the setup tables with data, you do not need to go to the application tables again and again,
which in turn improves system performance.
25. R/3 to ODS delta update is good but ODS to Cube delta is broken. How to fix
it?
i. Check the monitor (RSMO) for the error explanation; based on the explanation we can determine
the reason.
ii. Check the timings of the delta loads (R/3 -> ODS -> cube) for conflicts after the ODS load.
iii. Check the mapping of the transfer/update rules.
iv. Failures in the RFC connection.
v. BW is not set up as a source system.
vi. Dumps (for many reasons: full tablespace, timeout, SQL errors...), or an IDoc is not received
correctly.
vii. There is an erroneous load before the last one, and so on.
26. What is short dump and how to rectify?
Short dump specifies that an ABAP runtime error has occurred and the error messages are
written to the R/3 database tables. You can view the short dump through transaction ST22.
You get short dumps because of runtime errors. A short dump may, for example, be caused by the
termination of a background job, which can happen for many reasons.
You can check short dumps in transaction ST22; you can enter the job's technical name and your user
ID, and it will show the corresponding dumps in the system, which you can then analyze. You can use
ST22 in both R/3 and BW.
OR To call an analysis method,
choose Tools --> ABAP Workbench --> Test --> Dump-Analysis from the SAP Easy Access
menu.
In the initial screen, you must specify whether you want to view today's dumps or yesterday's dumps.
If these selection criteria are too imprecise, you can enter more specific criteria. To do this, choose
Goto --> Select Short Dump.
You can display a list of all ABAP dumps by choosing Edit --> Display List. You can then
display and analyze a selected dump. To do this, choose Short Dump --> Dump Analysis.