
SAP BW Interview Questions with IBM/HP

1) How do we do the SD and MM configuration for BW?


You need to activate the DataSources in the R/3 system.
You need to maintain the logon information for the logical system. In SM59, choose the RFC
destination for the BW system and, under Logon & Security, maintain the user credentials.
Maintain the control parameters for data transfer.
Fill the setup tables via SBIW.
These are the basic prerequisites.
From an SD perspective, you as a BW consultant should first understand the basic SD process
flow on the R/3 side (search the forum for "SD process flow" and you'll find a wealth of
information on the flow, the tables and the transactions involved in SD).
Next you need to understand the process flow that has been implemented at the client's site:
how the SD data flows, what the integration points with other modules are, and how
the integration happens.
This knowledge is essential when modeling your BW design.

From a BW perspective, you first need to know all the SD extractors and what information they
bring. Then look at all the cubes and ODS objects for SD.

1. What is the t-code to see the log of a transport connection?


In RSA1 -> Transport Connection you can collect the queries and the role, and after this you
can transport them (release the transport in SE10 and import it in STMS):
1. RSA1
2. Transport Connection (button on the left bar menu)
3. SAP Transport -> Object Types (button on the left bar menu)
4. Find Query Elements -> Query
5. Find your query
6. Group the necessary objects
7. Transport the objects (car icon)
8. Release the transport (t-code SE10)
9. Import the transport (t-code STMS)

2. LO/MM inventory DataSources and the significance of the marker?


The marker is like a checkpoint when you upload data from the inventory DataSources.
2LIS_03_BX is the DataSource for the current stock and 2LIS_03_BF for the movement types.
After uploading the data from BX you should release the request in the cube, i.e. compress it.
Then load the data from the other DataSource, BF. The marker is used as a checkpoint; if you
don't handle it correctly you get a data mismatch at BEx level because the system gets
confused.
(2LIS_03_BX Stock Initialization for Inventory Management: compress WITH marker update, i.e.
leave the "no marker update" checkbox unchecked, so the marker holds the opening stock.)
(2LIS_03_BF Goods Movements from Inventory Management: compress the historical/init load
WITHOUT marker update, i.e. check the box, because those movements are already contained in
the BX snapshot; compress subsequent delta loads with marker update.)
(2LIS_03_UM Revaluations: handle like BF.) The checkbox is found in the "Collapse"
(compression) section of the cube administration.

3. How can you navigate to see the error IDocs?


Check the IDocs in the source system: go to BD87, enter your user ID and the date, and
execute. You can find the red-status IDocs there; select the erroneous IDoc, right-click and
choose "Process manually".

You need to reprocess the IDocs that are red. For this you can take the help of your ALE/IDoc
team or Basis team, or you can push them manually: search in the BD87 screen and reprocess
from there.
Also, try to find out why these IDocs got stuck.

4) Difference between V1, V2, V3 jobs in extraction?

V1 Update: whenever we create a transaction in R/3 (e.g. a sales order), the entries get
into the R/3 tables (VBAK, VBAP, ...); this takes place in the V1 update.
V2 Update: the V2 update starts a few seconds after the V1 update, and in this update the
values get into the statistical tables, from where we do the extraction into BW.
V3 Update: purely for BW extraction.
The answer below defines V1, V2 and V3 in more detail.

5. What are statistical update and document update?


Synchronous Updating (V1 Update)
The statistics update is made synchronously with the document update.
If problems occur during updating that result in the termination of the
statistics update, the original documents are NOT saved. The cause
of the termination should be investigated and the problem solved.
Subsequently, the documents can be entered again.
Radio button: V2 updating

Asynchronous Updating (V2 Update)
With this update type, the document update is made separately from the statistics update. A
termination of the statistics update has NO influence on the document update (see V1 Update).
Radio button: Updating in V3 update program

Asynchronous Updating (V3 Update)
With this update type, updating is made separately from the document update. The difference
between this update type and the V2 update lies, however, in the time schedule: if the V3
update is active, the update can be executed at a later time.

In contrast to V1 and V2 updates, no individual documents are updated. The V3 update is,
therefore, also described as a collective update.

6. Do you have any idea how to improve the performance of BW?

7) How can you decide whether query performance is slow or fast?


You can check that in transaction RSRT.
Execute the query in RSRT and then follow these steps:
go to SE16 and, in the resulting screen, enter the table name RSDDSTAT for BW 3.x (or
RSDDSTAT_DM for BI 7.0) and press Enter. You can view all the details about the query, such as
the time taken to execute it and the timestamps.
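
If you prefer to read the statistics programmatically rather than through SE16, here is a
minimal ABAP sketch, assuming the BW 3.x table RSDDSTAT named above:

* Minimal sketch: read recent BW query runtime statistics records.
* RSDDSTAT is the BW 3.x statistics table; on BI 7.0 read RSDDSTAT_DM
* instead.
DATA lt_stat TYPE STANDARD TABLE OF rsddstat.

SELECT * FROM rsddstat
  INTO TABLE lt_stat
  UP TO 100 ROWS.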

8) What is the statistical setup? What is the need for it and why?


Follow these steps to fill the setup tables:

1. Go to transaction code RSA3 and see if any data is available related to your DataSource. If
data is there in RSA3 then go to transaction code LBWG (Delete Setup data) and delete the
data by entering the application name.
2. Go to transaction SBIW --> Settings for Application Specific Datasource --> Logistics -->
Managing extract structures --> Initialization --> Filling the Setup table --> Application specific
setup of statistical data --> perform setup (relevant application)
3. In OLI*** (for example OLI7BW for Statistical setup for old documents : Orders) give the name
of the run and execute. Now all the available records from R/3 will be loaded to setup tables.
4. Go to transaction RSA3 and check the data.
5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is
serialized V3 update.
6. Go to the BW system and create an infopackage; under the update tab select "initialize delta
process" and schedule the package. All the data available in the setup tables is now loaded
into the data target.
7. Now, for the delta records, go to LBWE in R/3 and change the update mode for the
corresponding DataSource to direct/queued delta. By doing this, records bypass SM13 and go
directly to RSA7. Go to transaction RSA7; there you can see a green light, and once new
records are added you can immediately see them in RSA7.
8. Go to the BW system and create a new infopackage for delta loads. Double-click on the new
infopackage; under the update tab you can see the delta update radio button.
9. Now you can go to your data target and see the delta records.

9. Why do we have to construct setup tables?


The R/3 database structure for accounting is much simpler than the logistics structure.
Once you post to a ledger, that is done; you can correct it, but that just gives another
posting. BI can get information directly out of this (relatively) simple database structure.
In LO, you can have an order with multiple deliveries to more than one delivery address, and
the payer can also be different.
When one item (order line) changes, this can have repercussions on the order, supply,
delivery, invoice, etc.
Therefore a special record structure is built for logistics reports, and this structure is now
used for BI.
In order to have this special structure filled with your starting position, you must run a
setup; from that moment on, R/3 will keep filling this LO database.
If you didn't run the setup, BI would only have data from the moment you started filling LO
(with the Logistics Cockpit).

10. How can you eliminate duplicate records in TD (transaction data) and MD (master data)?


Try to check the system logs through SM21 for the cause of the duplicates.

11. What is the use of the marker in MM?


The marker update is just like a checkpoint:
it gives a snapshot of the stock on a particular date, i.e. when the marker was last updated.
Because we are using a non-cumulative key figure, it would take a lot of time to calculate the
current stock at report time, for example. To overcome this, we use the marker update.
Marker updates do not summarize the data. In inventory management scenarios, we have to
calculate opening stock and closing stock on a daily basis. To facilitate this, we set a
marker, against which the values of each movement record are added or subtracted.
In the absence of a marker update, the data would simply be added up and would not provide
the correct values.
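
As a simplified illustration (the numbers are invented): suppose the marker holds a stock of
100 units as of the last compression. If the subsequent 2LIS_03_BF deltas record a receipt of
+20 and an issue of -30, the query derives the current stock as 100 + 20 - 30 = 90, without
having to aggregate the entire movement history.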

12. Tell me about web templates?


You can find where the web template details are stored in the following tables:
RSZWOBJ       Storage of the web objects
RSZWOBJTXT    Texts for templates/items/views
RSZWOBJXREF   Structure of the BW objects in a template

RSZWTEMPLATE  Header table for BW HTML templates


You can check these tables and search for your web template entry. However, if I understand
your question correctly, you will have to open the template in the WAD and make the
corrections there to fix it.

13. What is a dashboard?
A dashboard can be created using the Web Application Designer (WAD) or the Visual Composer
(VC). A dashboard is just a collection of reports, views, links, etc. in a single view;
iGoogle, for example, is a dashboard.

A dashboard is a graphical reporting interface which displays KPIs (Key Performance
Indicators) as charts and graphs; in other words, it is a performance management system.

When we want a helicopter view of how all the organization's measures are performing, we need
a report that shows the trends in a graphical display quickly. These reports are called
dashboard reports. We could still report these measures individually, but by keeping all
measures on a single page we create a single access point for the users to view all the
information available to them. This saves a lot of precious time, gives clarity on the
decisions that need to be taken, and helps the users understand the measures' trends within
the business flow.
Dashboards can be built with Visual Composer and the WAD.
To create your dashboard in BW:

(1) Create all BEx queries with the required variants and tune them well.
(2) Differentiate table queries and graph queries.
(3) Choose the graph types that meet your requirements.
(4) Draw the layout of how the dashboard page should look.
(5) Create a web template that has a navigation block / selection information.
(6) Keep the navigation block fields common across the measures.
(7) Include the relevant web items in the web template.
(8) Deploy the URL/iView to users through the portal/intranet.

The steps to be followed in the creation of a dashboard using the WAD are summarized below:

1) Open a new web template in the WAD.


2) Define the tabular layout as per the requirements so as to embed the necessary web items.
3) Place the appropriate web items in the appropriate tabular grids.
4) Assign queries to the web items (a query assigned to a web item is called a data provider).
5) Take care to ensure that the navigation block's selection parameters are common across all
the BEx queries of the affected data providers.
6) Set the properties of the individual web items as per the requirements. They can be
modified in the Properties window or in the HTML code.
7) Use the URL produced when this web template is executed in the portal/intranet.

14. How can you solve data mismatch tickets between R/3 and BW?
Check the mapping for the field (e.g. 0STREET) in the transfer rules on the BW side. Check the
data in the PSA for the same field. If the PSA doesn't have the complete data either, check
the field in RSA3 in the source system.

15. What is thumb rule?

16) What is replacement path? Tell me one scenario.


http://www.sd-solutions.com/documents/SDS_BW_Replacement%20Path%20Variables.html

17. What is the difference between PSA & IDoc?

In BI 7, the PSA is the only transfer method used for data loads from the source system into
BW; the IDoc transfer method of BW 3.x is no longer used.
18) What do we do in the Business Blueprint stage?
SAP has defined a business blueprint phase to help extract pertinent information about your
company that is necessary for implementation. These blueprints are in the form of
questionnaires that are designed to probe for information that uncovers how your company
does business. As such, they also serve to document the implementation. Each business
blueprint document essentially outlines your future business processes and business
requirements. The kinds of questions asked are germane to the particular business function, as
seen in the following sample questions:
1) What information do you capture on a purchase order?
2) What information is required to complete a purchase order?
Accelerated SAP question and answer database: the question and answer database (QADB) is a
simple, although aging, tool designed to facilitate the creation and maintenance of your
business blueprint. This database stores the questions and the answers and serves as the heart
of your blueprint. Customers are provided with a customer input template for each application
that collects the data. The question and answer format is standard across applications to
facilitate easier use by the project team.
Issues database: another tool used in the blueprinting phase is the issues database. This
database stores any open concerns and pending issues that relate to the implementation.
Centrally storing this information assists in gathering and then managing issues to
resolution, so that important matters do not fall through the cracks. You can then track the
issues in the database, assign them to team members, and update the database accordingly.

19) How do we gather the requirements for an implementation project?


One of the biggest and most important challenges in any implementation is gathering and
understanding the end-user and process-team functional requirements. These functional
requirements represent the scope of analysis needs and expectations (both now and in the
future) of the end user. They typically involve all of the following: business reasons for the
project and business questions answered by the implementation; critical success factors for
the implementation; the source systems that are involved and the scope of information needed
from each; the intended audience and stakeholders and their analysis needs; any major
transformation that is needed in order to provide the information; and security requirements
to prevent unauthorized use. This process involves one seemingly simple task: find out exactly
what the end users' analysis requirements are, both now and in the future, and build the BW
system to these requirements. Although simple in concept, in practice gathering and reaching a
clear understanding of and agreement on a complete set of BW functional requirements is not
always so simple.

20) How do we decide what cubes have to be created?


It depends on your project requirements. Customized cubes are not mandatory for all projects;
only if your business requirement differs from the given scenarios (BI Content cubes) will you
opt for customized cubes. Normally your BW customization, or the creation of new info
providers, depends on your source system. If your source system is other than R/3, you should
go with customization of all your objects. If your source system is R/3 and your users are
using only standard R/3 business scenarios like SD, MM or FI, etc., then you don't need to
create any info providers or enhance anything in the existing BW Business Content.
But 99% of the time this is not the case, because clients will surely have added new business
scenarios or new enhancements. For example, in my first project we implemented BW for Solution
Manager. There we activated all the Business Content in CRM, but the source system had new
scenarios for message escalation, ageing calculation, etc. For their business scenario we
couldn't use standard Business Content, so we took only the existing info objects and created
new info objects which were not in the Business Content. After that we created custom data
sources, info providers, and reports.

21) Who makes the technical and functional specifications?


Technical specification: here we mention all the BW objects (info objects, data sources, info
sources and info providers). Then we describe the data flow and the behaviour of the data load
(either delta or full); we can also give the duration of cube activation or creation. Pure BW
technical details are in this document. This is not an end-user document.
Functional specification: here we describe the business requirements. That means we state
which businesses we are implementing, like SD, MM, FI, etc., and then we give the KPIs and the
deliverable report details to the users. This document is shared with both functional
consultants and business users, and it is applicable to end users as well.
22) Give me one example of a functional specification and explain what information we
will get from that?
Functional specs are the requirements of the business user. Technical specs translate these
requirements in a technical fashion. Let's say the functional spec says:
1. The user should be able to enter the key date, fiscal year and fiscal version.
2. The Company variable should default to USA, but if the user wants to change it, they can
check the drop-down list and choose other countries.
3. The calculations or formulas for the report will be displayed with a precision of one
decimal place.
4. The report should return 12 months of data depending on the fiscal year that the user
enters, or display quarterly values. Functional specs are also called software requirements.
Now the technical spec follows, to resolve each of the line items listed above:
1. To give the option of key date, fiscal year and fiscal version, certain info objects should
be available in the system. If they are available, should we create variables for them, so
that they are used as user-entry variables? To create any variables, what is the approach,
where do you do it, and what will be the technical names of the objects you'll create as a
result of this report?
2. The same explanation goes for the rest: how do you set up the variable?
3. What property changes will you make to get the precision?
4. How will you get the 12 months of data? What will be the technical and display names of the
report, who will be authorized to run it, etc., are all clearly specified in the technical
spec.

23) What is customization? How do we do it in LO?

How to do basic LO extraction for SAP R/3 to BW:
1. Go to transaction code RSA3 and see if any data is available related to your DataSource. If
data is there in RSA3, go to transaction code LBWG (Delete Setup Data) and delete the data by
entering the application name.
2. Go to transaction SBIW --> Settings for Application Specific DataSource --> Logistics -->
Managing extract structures --> Initialization --> Filling the Setup table --> Application
specific setup of statistical data --> perform setup (relevant application).
3. In OLI*** (for example OLI7BW for the statistical setup of old order documents) give the
name of the run and execute. Now all the available records from R/3 will be loaded into the
setup tables.
4. Go to transaction RSA3 and check the data.
5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is
serialized V3 update.
6. Go to the BW system and create an infopackage; under the update tab select "initialize
delta process" and schedule the package. All the data available in the setup tables is now
loaded into the data target.
7. Now, for the delta records, go to LBWE in R/3 and change the update mode for the
corresponding DataSource to direct/queued delta. By doing this, records bypass SM13 and go
directly to RSA7. Go to transaction RSA7; there you can see a green light, and once new
records are added you can immediately see them in RSA7.

24) When we use "Maintain DataSource", what do we do? What do we maintain?
Go to the BW system and create a new infopackage for delta loads. Double-click on the new
infopackage; under the update tab you can see the delta update radio button.

25) Tickets and authorization in SAP Business Warehouse. What are tickets? Give an example.
Tickets are the tracking tool by which the users track the work we do. A ticket can be a
change request, a data load, or whatever. They are of types such as critical or moderate; a
critical ticket may need to be solved within a day or half a day, depending on the client.
After solving it, the ticket is closed by informing the client that the issue is solved.
Tickets are raised during a support project and may concern any issues or problems. If a
support person faces an issue, he will request the operator to raise a ticket.
The operator will raise a ticket and assign it to the respective person. Critical means the
most complicated issues; it depends on how you measure this. The concept of a ticket varies
from contract to contract between companies. Generally, tickets raised by the client are
handled based on priority, like high priority, low priority, and so on. If a ticket is of high
priority it has to be resolved ASAP; if it is of low priority it is considered only after
attending to the high-priority tickets. The typical tickets in production support work could
be:
1. Loading any missing master data attributes/texts.
2. Creating ad hoc hierarchies.
3. Validating the data in cubes/ODS.
4. Resolving any loads that run into errors.
5. Adding/removing fields in any of the master data/ODS/cubes.
6. DataSource enhancement.
7. Creating ad hoc reports.
How they are handled:
1. Loading missing master data attributes/texts: done by scheduling the infopackages for the
attributes/texts mentioned by the client.
2. Creating ad hoc hierarchies: create hierarchies in RSA1 for the info object.
3. Validating the data in cubes/ODS: by using validation reports or by comparing BW data with
R/3.
4. If any of the loads runs into errors: analyze the error and take suitable action.
5. Adding/removing fields in master data/ODS/cubes: depends upon the requirement.
6. DataSource enhancement.
7. Creating ad hoc reports: create new reports based on the client's requirements.

26) Attribute change run.


The attribute change run is used when there is any change in the master data; it is used for
realignment of the master data. The attribute change run is nothing but adjusting the master
data after it has been loaded, from time to time, so that the SIDs are regenerated or adjusted
and you do not have problems when loading transaction data into the data targets. In detail,
the hierarchy/attribute change run, which activates hierarchy and attribute changes and
adjusts the corresponding aggregates, is divided into four phases:
1. Finding all affected aggregates.
2. Setting up all affected aggregates again and writing the result into the new aggregate
table.
3. Activating the attributes and hierarchies.
4. Renaming the new aggregate table. While renaming, it is not possible to execute queries. In
some databases, which cannot rename the indexes, the indexes are also created in this phase.
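
If you ever need to start the change run outside a process chain, one hedged option is to
submit the standard change-run program from ABAP; the program name is an assumption to verify
in your release:

* Sketch: start the hierarchy/attribute change run manually.
* RSDDS_AGGREGATES_MAINTAIN is the standard program behind the
* "Apply Hierarchy/Attribute Changes" function in RSA1 (assumption:
* available in your release; normally run as a process chain step,
* where the affected characteristics are supplied on its selections).
SUBMIT rsdds_aggregates_maintain AND RETURN.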
27) Different types of delta updates?
Delta loads bring only the new or changed records since the last upload. This method gives
better loading in less time. Most of the standard SAP DataSources come delta-enabled, but some
do not; in that case you can do a full load to the ODS and then a delta from the ODS to the
cube. If you create generic DataSources, you have the option of creating a delta on calendar
day, timestamp or numeric pointer fields (this can be a document number, etc.). You'll be able
to see the delta changes coming into the delta queue through RSA7 on the R/3 side. To do a
delta, you first have to initialize the delta on the BW side and then set up the delta. The
delta mechanism is the same for both master data and transaction data loads.

There are three delta update modes:
Direct delta: with this update mode, the extraction data is transferred with each document
posting directly into the BW delta queue. Each document posting with delta extraction is
posted for exactly one LUW in the respective BW delta queues.
Queued delta: with this update mode, the extraction data for the affected application is
collected in an extraction queue and can be transferred, as usual with the V3 update, by means
of an updating collective run into the BW delta queue. Up to 10,000 delta extractions of
documents per LUW are compressed for each DataSource into the BW delta queue, depending on the
application.
Non-serialized V3 update: with this update mode, the extraction data for the application in
question is written, as before, into the update tables with the help of a V3 update module. It
is kept there until the data is read and processed by an updating collective run. However, in
contrast to the previous default (serialized V3 update), the data in the updating collective
run is read without regard to sequence from the update tables and transferred to the BW delta
queue.
28) Function modules: 1) UNIT_CONVERSION_SIMPLE and 2) MD_CONVERT_MATERIAL_UNIT.
Explain how to use these, if possible with a well-explained example.
The conversion of units of measure is required to convert business measurements into other
units. Business measurements encompass physical measurements, which are either assigned to a
dimension or are nondimensional. Nondimensional measurements are understood as countable
measurements (palette, unit, ...). You differentiate between conversions for which you only
need to enter a source and target unit in order to perform the conversion, and conversions for
which specifying these values alone is not sufficient. For the latter, you have to enter a
conversion factor which is derived from a characteristic or a characteristic combination
(compound characteristic) and the corresponding properties.

1. Measurements of length: conversions within the same dimension ID (T006-DIMID), for example
length: 1 m = 100 cm (linear correlation). "Meter" and "Centimeter" both belong to dimension
ID LENGTH.
2. Measurements of number associated with measurements of weight: conversions involving
different dimension IDs, for example number and weight: 1 unit = 25 g (linear correlation).
"Unit" has dimension ID AAAADL and "Gram" has dimension ID MASS.

Example:
1 chocolate bar = 25 g
1 small carton  = 12 chocolate bars
1 large carton  = 20 small cartons
1 Europallet    = 40 large cartons

Quantity Conversion
<http://help.sap.com/saphelp_nw2004s/helpdata/en/27/b65c42b4e05542e10000000a1550b0/frameset.htm>

Use: quantity conversion allows you to convert key figures with units that have different
units of measure in the source system into a uniform unit of measure in the BI system.

Features: this function enables the conversion of updated data records from the source unit of
measure into a target unit of measure, or into different target units of measure if the
conversion is repeated. In terms of functionality, quantity conversion is structured similarly
to currency translation. In part it is based on the quantity conversion functionality in SAP
NetWeaver Application Server. Simple conversions can be performed between units of measure
that belong to the same dimension (such as meters to kilometers, kilograms to grams). You can
also perform InfoObject-specific conversions (for example, two palettes (PAL) of material 4711
were ordered and this order quantity has to be converted to the stock quantity carton (CAR)).
Quantity conversion is based on quantity conversion types. The business transaction rules of
the conversion are established in the quantity conversion type. The conversion type is a
combination of different parameters (conversion factors, source and target units of measure)
that determine how the conversion is performed. For more information, see Quantity Conversion
Types
<http://help.sap.com/saphelp_nw2004s/helpdata/en/1c/1b5d427609c153e10000000a1550b0/content.htm>.

Integration: the quantity conversion type is stored for future use and is available for
quantity conversions in the transformation rules for InfoCubes and in the Business Explorer.
In the transformation rules for InfoCubes you can specify, for each key figure or data field,
whether quantity conversion is performed during the update. In certain cases you can also run
quantity conversion in user-defined routines in the transformation rules. In the Business
Explorer you can establish a quantity conversion in the query definition, or translate
quantities at query runtime; translation is more limited there than in the query definition.

Quantity Conversion Types
<http://help.sap.com/saphelp_nw2004s/helpdata/en/1c/1b5d427609c153e10000000a1550b0/frameset.htm>

Definition: a quantity conversion type is a combination of different parameters that establish
how the conversion is performed.

Structure: the parameters that determine the conversion factors are the source and target unit
of measure and the option you choose for determining the conversion factors. The decisive
factor in defining a conversion type is the way in which you want conversion factors to be
determined. Entering source and target quantities is optional.

Conversion factors. The following options are available:
1. Using a reference InfoObject: the system tries to determine the conversion factors from the
reference InfoObject you have chosen, or from the associated quantity DataStore object. If you
want to convert 1000 grams into kilograms but the conversion factors are not defined in the
quantity DataStore object, the system cannot perform the conversion, even though this is a
very simple conversion.
2. Using central units of measure (T006): conversion can only take place if the source unit of
measure and target unit of measure belong to the same dimension (for example, meters to
kilometers, kilograms to grams, and so on).
3. Using the reference InfoObject if available, central units of measure (T006) if not: the
system tries to determine the conversion factors using the quantity DataStore object you have
defined. If the system finds conversion factors, it uses these to perform the calculation; if
it cannot determine conversion factors from the quantity DataStore object, it tries again
using the central units of measure.
4. Using central units of measure (T006) if available, the reference InfoObject if not: the
system tries to find the conversion factors in the central units of measure table. If the
system finds conversion factors, it uses these to perform the conversion; if it cannot, it
tries to find conversion factors that match the attributes of the data record by looking in
the quantity DataStore object.

The settings you make here affect performance, and the decision must be strictly based on the
data set. If you only want to perform conversions within the same dimension, option 2 is most
suitable. If you are performing InfoObject-specific conversions (for example,
material-specific conversions) between units that do not belong to the same dimension, option
1 is most suitable. In both cases, the system only accesses one database table, the one that
contains the conversion factors. With options 3 and 4, the system tries to determine
conversion factors at each stage: if conversion factors are not found in the basic table
(T006), the system searches again in the quantity DataStore object, or the reverse. The option
you choose should depend on how the conversions are spread. If the source and target units of
measure belong to the same dimension for 80% of the data records you want to convert, first
try to determine factors using the central units of measure (option 4), and accept that the
system will have to search the second table for the remaining 20%.

The "Conversion Factor from InfoObject" option (as with "Exchange Rate from InfoObject" in
currency translation types) is only available when you load data. The key figure you enter
here has to exist in the InfoProvider, and the attribute this key figure has in the data
record is taken as the conversion factor.

Source unit of measure: the source unit of measure is the unit of measure that you want to
convert. It is determined dynamically from the data record or from a specified InfoObject
(characteristic). In addition, you can specify a fixed source unit of measure or determine the
source unit of measure using a variable. When converting quantities in the Business Explorer,
the source unit of measure is always determined from the data record. During the data load
process, the source unit of measure can be determined either from the data record or using a
specified characteristic that bears master data. You can use a fixed source unit of measure in
planning functions; data records are converted that have the same unit key as the source unit
of measure. The values in input help correspond to the values in table T006 (units of
measure); you reach the maintenance for units of measure in SAP Customizing Implementation
Guide -> SAP NetWeaver -> General Settings -> Check Units of Measure. In reporting, you can
take the source unit of measure from a variable; the variables that have been defined for
InfoObject 0UNIT are used.

Target unit of measure. You have the following options for determining the target unit of
measure:
- You can enter a fixed target unit of measure in the quantity conversion type (for example,
'UNIT').
- You can specify an InfoObject in the quantity conversion type that is used to determine the
target unit of measure during the conversion. This is not the same as defining currency
attributes, where you determine a currency attribute on the Business Explorer tab page in
characteristic maintenance; with quantity conversion types you determine the InfoObject in the
quantity conversion type itself. Under "InfoObject for Determining Unit of Measure", all
InfoObjects are listed that have at least one attribute of type Unit. You have to select one
of these attributes as the corresponding quantity attribute.
- Alternatively, you can determine that the target unit of measure be determined during the
conversion. In the Query Designer, under the properties of the relevant key figure, you
specify either a fixed target unit of measure or a variable to determine it.
- Target quantity using InfoSet: this setting covers the same functionality as "InfoObject for
Determining Target Quantity". If the InfoObject that you want to use to determine the target
quantity is unique in the InfoSet (it only occurs once in the whole InfoSet), you can enter
the InfoObject under "InfoObject for Determining Target Quantity". You only have to enter the
InfoObject under "Target Quantity Using InfoSet" if you want to determine the target quantity
using an InfoObject that occurs more than once in the InfoSet. For example, if the InfoSet
contains InfoProviders A and B, and both A and B contain InfoObject X with a quantity
attribute, you have to specify exactly whether you want to use X from A or X from B to
determine the target quantity; field aliases are used in an InfoSet to ensure uniqueness. All
the active InfoSets in the system can be displayed using input help, and as soon as you have
selected an InfoSet, you can select an InfoObject; all the InfoObjects with quantity
attributes contained in the InfoSet can be displayed using input help.
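
As a minimal ABAP sketch of UNIT_CONVERSION_SIMPLE, which performs the dimension-based (T006)
conversion described above; the quantities and unit keys are invented for illustration.
MD_CONVERT_MATERIAL_UNIT covers the material-specific case (conversion factors from the
material master); check its interface in SE37 before use, as its parameters differ.

DATA: lv_qty_g  TYPE p DECIMALS 3 VALUE '1500.000',
      lv_qty_kg TYPE p DECIMALS 3.

* Convert 1500 g to kilograms via the central units of measure (T006).
CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
  EXPORTING
    input                = lv_qty_g
    unit_in              = 'G'
    unit_out             = 'KG'
  IMPORTING
    output               = lv_qty_kg
  EXCEPTIONS
    conversion_not_found = 1
    division_by_zero     = 2
    input_invalid        = 3
    output_invalid       = 4
    overflow             = 5
    type_invalid         = 6
    units_missing        = 7
    unit_in_not_found    = 8
    unit_out_not_found   = 9
    OTHERS               = 10.
IF sy-subrc = 0.
  WRITE: / lv_qty_g, 'G =', lv_qty_kg, 'KG'.   " expect 1.500 KG
ENDIF.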
29) An SAP BW functional consultant is responsible for the following key tasks:
Maintaining project plans. Managing all project activities, many of which are executed by
resources not directly managed by the project leader (central BW development team, source
system developers, business key users). Liaising with key users to agree reporting
requirements and report designs. Translating requirements into design specifications (report
specs, data mapping/translation, functional specs). Writing and executing test plans and
scripts.
Coordinating and managing business/user testing. Delivering training to key users.
Coordinating and managing productionization and rollout activities. Tracking CIP (continuous
improvement) requests and working with users to prioritize, plan and manage them.
An SAP BW technical consultant is responsible for: SAP BW extraction using standard data
extractors and the available development tools for SAP and non-SAP data sources; SAP ABAP
programming for BW; data modeling, star schema, master data, ODS and cube design in BW; data
loading processes and procedures (performance tuning); query and report development using BEx
Analyzer and Query Designer; web report development using the Web Application Designer.

29. Production support


In production support there are two kinds of jobs which you will mostly be doing: 1) looking
into data load errors, and 2) solving the tickets raised by the users. Data loading involves
monitoring process chains and solving errors related to data loads; other than this you will
also be doing some enhancements to the existing cubes and master data, but that is done on
request. A user will raise a ticket when they face any problem with a query, like a report
showing wrong values or incorrect data, or if the system response is slow or the query runtime
is high.
Normally the production support activities include: scheduling; R/3 job monitoring; BW job
monitoring; taking corrective action for failed data loads; and working on tickets with small
changes in reports or in AWB objects. The activities in typical production support would be as
follows: 1. data loading, via process chains or manual loads; 2. resolving urgent user issues
(helpline activities); 3. modifying BW reports as per the needs of the users; 4. creating
aggregates in the production system; 5. regression testing when a version/patch upgrade is
done; 6. creating ad hoc hierarchies. The daily activities in production are: 1. monitoring
data load failures through RSMO; 2. monitoring process chains (daily/weekly/monthly);
3. performing the hierarchy/attribute change run; 4. checking aggregate rollups.

30) How to convert a BEx query global structure to a local structure (steps involved)?
You use a local structure when you want to add structure elements that are unique to the
specific query. Changing the global structure changes the structure for all the queries that
use it; that is the reason you go for a local structure.
Coming to the navigation part: in the BEx Analyzer, from the SAP Business Explorer toolbar,
choose the Open Query icon (the icon that looks like a folder). On the SAP BEx Open dialog
box, choose Queries, select the desired InfoCube and choose New. On the Define the Query
screen, in the left frame, expand the Structure node. Drag and drop the desired structure into
either the Rows or Columns frame. Select the global structure, right-click and choose Remove
Reference. A local structure is created.
Remember that you cannot revert the changes made to the global structure in this way; you will
have to delete the local structure and then drag and drop the global structure into the query
definition again. When you try to save a global structure, a dialog box prompts you to confirm
changes to all queries; that is how you identify a global structure.

31) What is the use of Define Cell in BEx, and where is it useful?
Cells in BEx: when you define selection criteria and formulas for structural components, and
there are two structural components in a query, generic cell definitions are created at the
intersections of the structural components that determine the values to be presented in the
cells. Cell-specific definitions allow you to define explicit formulas, along with implicit
cell definitions and selection conditions for cells, and in this way to override implicitly
created cell values. This function allows you to design much more detailed queries. In
addition, you can define cells that have no direct relationship to the structural components;
these cells are not displayed and serve as containers for helper selections or helper
formulas.
You need two structures to enable the cell editor in BEx: in every query you have one
structure for key figures, and you have to build another structure with selections or formulas
inside. With two structures, their cross product results in a fixed reporting area of n rows *
m columns, and the intersection of any row with any column can be defined as a formula in the
cell editor. This is useful when you want a particular cell to behave differently from the
general behaviour described in your query definition. For example, imagine the following,
where % is the formula kfB/kfA*100:

      kfA  kfB  %
chA    6    4   66%
chB   10    2   20%
chC    8    4   50%

Suppose you want the % for row chC to be the sum of the % for chA and the % for chB. In the
cell editor you can write a formula specifically for that cell as the sum of the two cells
above it, chC/% = chA/% + chB/%, giving:

      kfA  kfB  %
chA    6    4   66%
chB   10    2   20%
chC    8    4   86%
Manager Round Review Questions.

32) What is SAP GUI and what is the use of it?


SAP Graphical User Interface:
SAP GUI is the GUI client in SAP R/3's three-tier architecture of database, application server
and client. It is software that runs on a Microsoft Windows, Apple Macintosh or Unix desktop,
and allows a user to access SAP functionality in SAP applications such as mySAP ERP and SAP
Business Information Warehouse (now called SAP Business Intelligence).
You need the SAP GUI to log on to and use the SAP systems. Check also
http://help.sap.com/saphelp_nw70/helpdata/en/4f/472e42e1ef5633e10000000a155106/frameset.htm

33) What is the RMS application?


SAP Records Management is a component of the SAP Web Application Server for the electronic
management of records; even paper-based information can be part of an electronic record in the
SAP RMS. Among the advantages of using SAP Records Management compared to other providers of
record-based solutions: Records Management is a solution for the electronic management of
records. The RMS divides the various business units logically, thereby making it possible to
provide particular groups of users with access to particular records, as needed within their
business processes.
Quick access to information is a key factor for performing business successfully, and Records
Management guarantees this quick access. In one record, all information objects of a business
transaction are grouped together in a transparent hierarchical structure. By converting paper
records to electronic records, an organisation can enjoy all the advantages of a paper-free
office: no storage costs for records, no cost-intensive copying procedures, and optimal
retrieval of information.
However, SAP Records Management provides more than just an electronic representation of the
conventional paper record.

34) Bug resolution for the RMS Application?


http://rmsitservices.co.uk/upgrade.pdf
35) Development tasks for RMS release work?
The main task isComplete life cycle development of SAP Authorization Roles . This includes
participating in the high level, low level, RMS's and technical development of the roles.

36) What is BP master data?


BP master data is nothing but business partner data used in CRM master tables.
To describe the BP master data tables and authorization objects:
The basic table is BUT000. Steps to view this table: go to transaction SE16, specify the table
you want to view (in this case BUT000), click on the Table Contents icon (or press Enter), and
you can find the entries by giving a selection or viewing the total number of entries.
You can't set an automatic code for BPs. However, you could use a formatted search to bring up
the next code, provided that the code you are using has a logical sequence. You can assign
this formatted search to the BP Code field and then the user can trigger it (Shift-F2) when
they are creating a new BP. If you want to have a separate range for each BP type, then the
user needs to set the BP type field before using the formatted search.
I've also included this kind of function in an add-on. In this case the query is still the
same, but the user leaves the BP Code field blank and the add-on populates it when the user
clicks the Add button.
Process flow:
1. Configure application components in SAP Solution Manager. In the Business Blueprint,
transactions can already be assigned to process steps from the reference model. You can also
assign transactions to any additional processes and steps you have defined, and thereby
specify how your business processes are to run in the SAP system. Furthermore, you can also
edit the Implementation Guide.
2. Use metadata (PI). You specify the necessary metadata for your integration requirements,
such as data types, message interfaces, mappings, and so on.
3. Configure the integration scenario and integration process (PI). You adapt the defined
integration scenarios and integration processes to your specific system landscape. In doing so
you specify, for example, collaboration profiles (communication party, service and
communication channel). You can use wizards for the configuration.
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a8ffd911-0b01-0010-679e-d47dade98cdd
Tools used for business processes: 1. BPM; 2. ARIS; etc.
Business Process Management with SAP NetWeaver and ARIS for SAP NetWeaver provides procedure
models, methods, technologies and reference content for modeling, configuring, executing and
monitoring these business processes.
Process modeling: a process model is an abstraction of a process and describes all the aspects
of the process: activities (steps that are executed within the process), roles (users or
systems that execute the activities) and artifacts (objects, such as business documents, that
are processed by the process).
Processes within a company can be modeled on multiple abstraction levels and from numerous
different viewpoints. To implement and utilize innovative processes and strategies
successfully, you must convert business process views into technical views and relate the two.
Typically, different individuals or departments within a company are responsible for modeling
processes from the business and technical perspectives. A deciding factor for the success of
business process modeling is, therefore, that all those involved have a common understanding
of the business processes and speak the same language.
Business Process Management in SAP NetWeaver provides a common methodology for all levels of
process modeling. This common methodology forms a common reference framework for all project
participants and links models across multiple abstraction levels: business process models
describe the process map and process architecture of a company, from value chain diagrams and
event-driven process chains right up to end-to-end processes; process configuration models
support the process-driven configuration and implementation of processes; process execution
models support service-based process execution.

37) Describe the BP master data authorization objects?


Authorization objects:
SAP R/3 authorization concept: fundamental to SAP R/3 security is the authorization concept.
To get an understanding of SAP R/3 security, one needs to thoroughly understand the
authorization concept. It allows the assignment of broad or finely defined
authorizations/permissions for system access. Several authorizations may be required to
perform a task such as creating a material master record. Depending on the design, these
authorizations can be limited to:
access to the transaction code (TCODE) to create a material master; access to a specific
material; authorization to work in a particular plant in the system.
Authorization object: authorization objects can best be described as locks that limit access
to SAP R/3 system objects, such as programs, TCODEs and data entry screens. Depending on the
SAP R/3 version, there are approximately 800 standard authorization objects.
There can be 10 fields in an authorization object, but all 10 fields are not used in all
objects. The most common field in an authorization object is the activity field; the
predefined activity codes reside in a table named TACT. Examples of activities are "01" create
or generate, "02" change, "03" read, "04" print or edit message, and "06" delete. The next
most common field is an organizational field, such as company code or plant.
Authorization objects are classified and cataloged in the system based upon functionality,
such as FI (financial accounting) or HR (human resources). These classifications are called
object classes.
Developers and programmers can create new authorization objects through the developers'
workbench, called the ABAP Workbench, in SAP R/3. ABAP/4 is a 4GL (fourth-generation
programming language) that was used to develop all SAP R/3 applications; it stands for
Advanced Business Application Programming language.
Authorizations: authorizations are the keys that can open the authorization objects, and they
contain the specific information for the field values. For instance, an authorization contains
a specific set of values for one or all the fields of a particular authorization object. If a
field is not restricted, the authorization has an asterisk (*) as the field value.
Also check the table AGR_TCODES.
An example of an authorization is as follows:

Field                  Value
ACTVT (Activity)       01
BUKRS (Company Code)   0010

This particular authorization grants users access to create, for company code 0010, the
specific object that is locked by the authorization object, such as a purchase order.
The following authorization grants total access to all activities for all company codes:

Field                  Value
ACTVT (Activity)       *
BUKRS (Company Code)   *
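
To make the runtime side concrete, this is a small ABAP sketch of how a program tests such an
authorization (F_BKPF_BUK is a standard FI authorization object that carries exactly the BUKRS
and ACTVT fields used above):

* Check whether the current user may create (activity 01)
* accounting documents in company code 0010.
AUTHORITY-CHECK OBJECT 'F_BKPF_BUK'
  ID 'BUKRS' FIELD '0010'
  ID 'ACTVT' FIELD '01'.
IF sy-subrc <> 0.
  MESSAGE 'No authorization for company code 0010' TYPE 'E'.
ENDIF.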

38) What is localization?

39) Workflow SAP GUI

40) What is 0Recordmode?


A. It is an info object. 0RECORDMODE is used to identify the delta images in BW and is used in
DSOs; it is automatically added when you activate a DSO in BW. Similarly, R/3 has the field
ROCANCEL, which holds the delta images on the R/3 side. Whenever you extract data from R/3
using LO, generic extraction, etc., the field ROCANCEL is mapped to 0RECORDMODE in BW. This is
how BW identifies the delta images.
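
For reference, these are the standard delta images that 0RECORDMODE can carry, written as ABAP
constants (the constant names are ours; the single-character values are SAP's standard record
modes):

* Standard 0RECORDMODE (ROCANCEL) delta image values.
CONSTANTS:
  c_after_image    TYPE c VALUE ' ',  " after image: new status of the record
  c_before_image   TYPE c VALUE 'X',  " before image: old status, signs reversed
  c_additive_image TYPE c VALUE 'A',  " additive image: only the difference
  c_new_image      TYPE c VALUE 'N',  " new image
  c_delete         TYPE c VALUE 'D',  " the record is to be deleted
  c_reverse_image  TYPE c VALUE 'R'.  " reverse image: cancels the record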

41) What is the difference between a filter & restricted key figures? Examples & steps in BI?
A filter restriction applies to the entire query; an RKF is a restriction applied to a key
figure. Suppose, for example, you want to analyse data only after 2006, showing sales in 2007
and 2008 against materials, and you have a key figure called Sales in your cube.
You put a global restriction at query level by putting Fiscal Year > 2006 in the filter. This
makes only data with fiscal year > 2006 available for the query to process or show.
Now, to meet a requirement like the following:

Material   Sales in 2007   Sales in 2008
M1         200             300
M2         400             700

you need to create two RKFs. "Sales in 2007" is one RKF, defined on the key figure Sales
restricted by Fiscal Year = 2007. Similarly, "Sales in 2008" is an RKF defined on the key
figure Sales restricted by Fiscal Year = 2008. Now I think you understand the difference: the
filter makes the restriction at query level. In the above case, the filter Fiscal Year > 2006
makes the data from the cube for the years 2001-2006 unavailable to the query, so the query is
only left with data from 2007 and 2008 to show. Within that data, you design your RKFs to show
only 2007, and so on.

42) How do you create conditions and exceptions in BI 7.0? I know it in the BW 3.5 version.
From a query name or description, you would not be able to judge whether the query has any
exceptions. There are two ways of finding exceptions on a query: 1. execute the queries one by
one; the ones whose background colour shows exception reporting have exceptions; 2. open the
queries in the BEx Query Designer: if you find an Exceptions tab at the right side of the
Filter and Rows/Columns tabs, the query has exceptions.

43) The FI business flow related to BW. Case studies or scenarios?


FI flow: basically there are five major topics/areas in FI:
1. GL Accounting: related tables are SKA1 and SKB1 for master data; BSIS and BSAS hold the
transaction data.
2. Accounts Receivable: related to customers. All the SD-related data, when transferred to FI,
creates these records. Related tables: BSID and BSAD.
3. Accounts Payable: related to vendors. All the MM-related document data, when transferred to
FI, creates these records. Related tables: BSIK and BSAK.
The data of all the above six tables is present in the BKPF and BSEG tables; you can link
these tables with the help of BELNR and GJAHR, and with dates also.
4. Special Purpose Ledger, which is rarely used.
5. Asset Management.
In CO there are Profit Center Accounting and Cost Center Accounting.

--
By: leela naveen

These are questions I faced. If you have any screenshots for any one of the questions, provide
them as well.
1. We have standard info objects given in SAP; why did you create Z info objects? Can you tell
me the business scenario?
2. We have standard info cubes given in SAP; why did you create Z info cubes? Can you tell me
the business scenario?
3. In a key figure, what is meant by cumulative value, non-cumulative value change, and
non-cumulative value in- and outflow?
4. When you create an info object it shows "reference" and "template"; what are they?
5. What is meant by a compounding attribute? Tell me the scenario.
6. I have 3 cubes for which I created a multiprovider, and I created a report on it, but I
didn't get data in the report. What happened?
7. I have 10 cubes and I created a multiprovider; I want only 1 cube's data. What do you do?
8. What is meant by safety upper limit and safety lower limit in all the deltas? Tell me one
by one for timestamp, calendar day and numeric pointer.
9. I have 80 queries; how can you find which query is taking so much time, and how can you
solve it?
10. At compression, all requests become zero; which data is compressed? Tell me in detail.
11. What is meant by a flat aggregate? Explain in detail.
12. I created a process chain; on the 1st day it takes 10 min, after the 1st week it takes 1
hour, and the next time it takes 1 day with the same loads. What happened, and how can you
reduce the loading time?
13. How can you know the cube size, in detail? Show me if you have screenshots.
14. Where can we find transport return codes?
15. I have a report that is taking so much time; how can I rectify that?
16. What is an offset? Can we create queries without offsets?
17. I said I have nearly 600 process chains; he asked me how I monitor them. I said I look in
RSPCM and BWCCMS; he asked whether there are any third-party tools to see this. Are there any
such tools? Tell me what they are.
18. How does the client access the reports?
19. I don't have master data; is it possible to load transaction data? If it is possible, are
there any other steps to do that?
20. What is a structure in reporting?
21. Based on which objects did you create the extended star schema?
22. What is a line item dimension? Tell me briefly.
23. What is high cardinality? Tell me briefly.
24. A process chain is running; I have to stop the process for 1 hour and then rerun the
process from where it stopped. Also: can I use aggregates on a multiprovider?
25. What are direct scheduling and a meta chain?
26. Which patch do you use presently? How can I know which patch it is?
27. How can we increase the data packet size?
28. Are hierarchies not there in BI? Why?
29. Is remodeling applied only to info cubes? Why not to DSO/ODS?
30. In jump queries, can we jump to any transaction, like RSA1, SM37, etc.? Is it possible or
not?
31. Why does ODS activation fail? What types of failures are there? What are the steps to
handle them?
32. I have a process chain running and the infopackage gets an error; is it possible not to
process the error of that infopackage and still run the dependent variants?
Give me any performance, loading or support issues.


Reporting errors, loading errors, process chain errors?
Hi,

Normally you already know BW, so you need to learn the extra features of BI 7.0; then you can
work out the answers yourself:

1. Types of DSO in BI 7?

2. Use of the write-optimized DSO and a scenario for using this DSO?

3. The remodeling concept in BI 7?

4. The BI Accelerator?

5. Types of DTP and the use of the error stack?

6. Authorizations can be handled through one t-code, RSECADMIN.

7. How will you do the migration of a DataSource?

Hope the above questions give you a complete picture of the new functionality in BI 7.0.

Regards
Ram.

Links:
http://forums.sdn.sap.com/thread.jspa?threadID=1560106
What is ODS?

It is an operational data store. The ODS is a BW architectural component that sits between the
PSA (Persistent Staging Area) and infocubes and that allows BEx (Business Explorer) reporting.
It is not based on the star schema and is used primarily for detailed reporting, rather than
for dimensional analysis. ODS objects do not aggregate data as infocubes do. Data is loaded
into an ODS object by inserting new records, updating existing records, or deleting old
records, as specified by the RECORDMODE value. *-- Viji

1. How much time does it take to extract 1 million records from an InfoCube?

2. How much time does it take to load (as opposed to extract, in the previous question) 1 million records into an InfoCube?

3. What are the four ASAP Methodologies?

4. How do you measure the size of infocube?

5. Difference between infocube and ODS?

6. Difference between display attributes and navigational attributes? *-- Kiran

1. Ans: It depends. If you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.

3. Ans:
Project plan
Requirements gathering
Gap Analysis
Project Realization

4. Ans:
In number of records.

5. Ans:
An InfoCube is structured as an (extended) star schema where a fact table is surrounded by different dimension tables which connect to SIDs. Data-wise, you will have aggregated data in the cubes.
An ODS is a flat structure (flat table) with no star schema concept and holds granular data (detailed level).

6. Ans:
A display attribute is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) in order to drill down.

*-- Ravi

Q1. SOME DATA IS UPLOADED TWICE INTO THE INFOCUBE. HOW TO CORRECT IT?
Ans: But how is that possible? If you loaded it manually twice, then you can delete it by request.

Q2. CAN YOU ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. An ODS is nothing but a table.

Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.

Q4. BRIEF THE DATAFLOW IN BW.
Data flows from the transactional system to the analytical system (BW). DataSources on the transactional system need to be replicated on the BW side and attached to an InfoSource and update rules respectively.

Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?

Q6. WHAT IS THE PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
Full and delta.

Q7. AS WE USE SBWNN, SBIW1 AND SBIW2 FOR DELTA UPDATE IN LIS, WHAT IS THE PROCEDURE IN LO-COCKPIT?
There is no LIS in the LO cockpit. We have DataSources that can be maintained (append fields). Refer to the white paper on LO-Cockpit extraction.

Q8. SIGNIFICANCE OF ODS.
It holds granular data.

Q9. WHERE IS THE PSA DATA STORED?
In PSA tables.

Q10. WHAT IS DATA SIZE?
The volume of data one data target holds (in number of records).

Q11. DIFFERENT TYPES OF INFOCUBES.
Basic, Virtual (remote, SAP remote and multi).

Q12. INFOSET QUERY.
Can be made of ODS objects and InfoObjects.

Q13. IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES ARE THERE?
In R/3 or in BW? 2 in R/3 and 2 in BW.

Q14. ROUTINES?
They exist in InfoObjects, transfer rules and update rules, and as start routines.

Q15. BRIEF SOME STRUCTURES USED IN BEX.
Rows and columns; you can create structures.

Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?


Variable with default entry
Replacement path
SAP exit
Customer exit
Authorization

Q17. HOW MANY LEVELS CAN YOU GO TO IN REPORTING?
You can drill down to any level you want using navigational attributes and jump targets.

Q18. WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.

Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
Refer to the documentation.

Q20. IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
Nope.

Q21. WHAT IS THE SIGNIFICANCE OF KPIs?
KPIs indicate the performance of a company. These are key figures.

Q22. AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?
After image (correct me if I am wrong).

Q23. REPORTING AND RESTRICTIONS.
Refer to the documentation.

Q24. TOOLS USED FOR PERFORMANCE TUNING.
ST* transactions, number ranges, deleting indexes before a load, etc.

Q25. PROCESS CHAINS: IF YOU ARE USING THEM, HOW WILL YOU SCHEDULE DATA DAILY?
There should be some tool to run the job daily (SM37 jobs).

Q26. AUTHORIZATIONS.
Profile Generator.

Q27. WEB REPORTING.
What are you expecting?

Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT BE AN INFOPROVIDER?
Of course.

Q29. PROCEDURES OF REPORTING ON MULTICUBES.
Refer to the help. What are you expecting? A MultiCube works on a union condition.

Q30. EXPLAIN TRANSPORTATION OF OBJECTS.
Dev ---> Q and Dev ---> P.

Based on modeling area:

1. What is data integrity?


Data integrity is about eliminating duplicate entries in the database; data integrity means no duplicate data.

2. What is the difference between SAP BW 3.0B and SAP BW 3.1C, 3.5?
The best answer here is Business Content. There is additional Business Content provided with BW 3.1C that wasn't found in BW 3.0B. SAP has a pretty decent reference library on their Web site that documents the additional objects found in 3.1C.

3. What is the difference between SAP BW 3.5 and 7.0?


SAP BW 7.0 is called SAP BI and is one of the components of SAP NetWeaver 2004s. There are many differences between them in areas like extraction, EDW, reporting, analysis and administration, and so forth. For a detailed description, please refer to the documentation given on help.sap.com.

1. No update rules or transfer rules (not mandatory in the data flow).

2. Instead of update rules and transfer rules, a new concept called transformations was introduced.
3. A new DSO type was introduced in addition to the standard and transactional ones.
4. The ODS is renamed DataStore to meet global data warehousing standards, and there are a lot more changes in the functionality of the BEx Query Designer, WAD, etc.
5. In InfoSets you can now include InfoCubes as well.
6. The remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This facility is available only for InfoCubes.
7. The BI Accelerator (for now only for InfoCubes) helps in reducing query run time by almost a factor of 10 to 100. The BI Accelerator is a separate box and costs extra; vendors for these are HP or IBM.
8. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP person on your project for implementing the portal! :)
9. Search functionality has improved! You can search any object, unlike in 3.5.
10. Transformations are in and routines are passe! Yes, you can always revert to the old transactions too.

4. What is index?
Indices/Indexes are used to locate needed records in a database table quickly. BW
uses two types of indices, B-tree indices for regular database tables and bitmap
indices for fact tables and aggregate tables.

5. What is KPIs (Key Performance Indicators)?


(1) Predefined calculations that render summarized and/or aggregated information,
which is useful in making strategic decisions.
(2) Also known as Performance Measure, Performance Metric measures. KPIs are put
in place and visible to an organization to indicate the level of progress and status of
change efforts in an organization.
KPIs are industry-recognized measurements on which to base critical business
decisions.
In SAP BW, Business Content KPIs have been developed based upon input from
customers, partners, and industry experts to ensure that they reflect best practices.

6. What is the use of process chain?

The use of a process chain is to automate the data load process.

It is used to automate all processes, including data loads and all administrative tasks like index creation/deletion, cube compression, etc.
It provides highly controlled data loading.

7. Difference between Display Attribute and Navigational Attribute?

The basic difference between the two is that navigational attributes can be used to drill down in a BEx report whereas display attributes cannot. A navigational attribute functions more or less like a characteristic within a cube. To enable these features of a navigational attribute, the attribute needs to be made navigational in the cube, apart from in the master data InfoObject.

The only difference is that navigational attributes can be used for navigation in queries, like filtering, drill-down, etc. You can also use hierarchies on navigational attributes, as is possible for characteristics.

But an extra feature is that there is a possibility to change your history (please look at the relevant time scenarios): if a navigational attribute changes for a characteristic, it is changed for all records in the past. A disadvantage is also a slowdown in performance.

8. If there are duplicate data in Cubes, how would you fix it?
Delete the request ID, fix the data in the PSA or ODS, and re-load again from the PSA/ODS.

9. What are the differences between ODS and Info Cube?


An ODS holds transactional-level data. It is just a flat table; it is not based on the multidimensional model. An ODS has three tables:
1. Active data table (a table containing the active data)
2. Change log table (contains the change history for delta updating from the ODS object into other data targets, such as ODS objects or InfoCubes)
3. Activation queue table (for saving ODS data records that are to be updated but have not yet been activated; the data is deleted after the records have been activated)

A cube, in contrast, holds aggregated data which is not as detailed as an ODS. A cube is based on the multidimensional model.

An ODS is a flat structure: just one table that contains all the data. Most of the time you use an ODS for line-item data and then aggregate this data into an InfoCube.

One major difference is the manner of data storage. In an ODS, data is stored in flat tables (by flat I mean ordinary transparent tables), whereas a cube is composed of multiple tables arranged in a star schema joined by SIDs. The purpose is to do multi-dimensional reporting.

In an ODS we can delete or overwrite the loaded data, but in a cube only adding is possible, with no overwrite.

10. What is the use of change log table?


Change log is used for delta updates to the target; it stores all changes per request
and updates the target.

11. Difference between InfoSet and Multiprovider

a) The operation in a MultiProvider is a union, whereas in an InfoSet it is either an inner join or an outer join.

b) You can add InfoCubes, ODS objects and InfoObjects to a MultiProvider, whereas in an InfoSet you can only have ODS objects and InfoObjects.

c) An InfoSet is an InfoProvider that joins data from ODS objects and InfoObjects (with master data). The join may be an outer join or an inner join, whereas a MultiProvider is created on all types of InfoProviders (cubes, ODS objects, InfoObjects), connected to one another by a union operation.

d) A union operation is used to combine the data from these objects into a MultiProvider: the system constructs the union set of the data sets involved; in other words, all values of these data sets are combined. As a comparison, InfoSets are created using joins. These joins only combine values that appear in both tables; in contrast to a union, joins form the intersection of the tables.

12. What is the T-code for data archival and what is its advantage?
SARA.
Advantage: it minimizes space and improves query performance and load performance.

13. What are the Data Loading Tuning from R/3 to BW, FF to BW?

a) If you have enhanced an extractor, check your code in user exit RSAP0001 for expensive SQL statements and nested selects, and rectify them.

b) Watch out for ABAP code in transfer and update rules; this might slow down performance.

c) If you have several extraction jobs running concurrently, there probably are not enough system resources to dedicate to any single extraction job. Make sure to schedule these jobs judiciously.

d) If you have multiple application servers, try to do load balancing by distributing the load among different servers.

e) Build secondary indexes on the underlying tables of a DataSource to correspond to the fields in the selection criteria of the DataSource (indexes on source tables).

f) Try to increase the number of parallel processes so that packages are extracted in parallel instead of sequentially (use the 'PSA and data target in parallel' option in the InfoPackage).

g) Buffer the SID number ranges if you load a lot of data at once.

h) Load master data before loading transaction data.

i) Use SAP-delivered extractors as much as possible.

j) If your source is not an SAP system but a flat file, make sure that this file is housed on the application server and not on the client machine. Files stored in ASCII format are faster to load than those stored in CSV format.

14. Performance monitoring and analysis tools in BW

a) System Trace: transaction ST01 lets you do various levels of system trace such as authorization checks, SQL traces, table/buffer traces, etc. It is a general Basis tool but can be leveraged for BW.

b) Workload Analysis: you use transaction code ST03.

c) Database Performance Analysis: transaction ST04 gives you all that you need to know about what's happening at the database level.

d) Performance Analysis: transaction ST05 enables you to do performance traces in different areas, namely SQL trace, enqueue trace, RFC trace and buffer trace.

e) BW Technical Content Analysis: SAP standard Business Content 0BWTCT needs to be activated. It contains several InfoCubes, ODS objects and MultiProviders and contains a variety of performance-related information.

f) BW Monitor: you can get to it independently of an InfoPackage by running transaction RSMO, or via an InfoPackage. An important feature of this tool is the ability to retrieve important IDoc information.

g) ABAP Runtime Analysis Tool: use transaction SE30 to do a runtime analysis of a transaction, program or function module. It is a very helpful tool if you know the program or routine that you suspect is causing a performance bottleneck.

15. Difference between Transfer Rules and Update Rules

a) Transfer rules:
When we maintain the transfer structure and the communication structure, we use the transfer rules to determine how we want the transfer structure fields to be assigned to the communication structure InfoObjects. We can arrange for a 1:1 assignment, and we can also fill InfoObjects using routines, formulas or constants.
Update rules:
Update rules specify how the data (key figures, time characteristics, characteristics) is updated into data targets from the communication structure of an InfoSource. You are therefore connecting an InfoSource with a data target.

b) Transfer rules are linked to an InfoSource; update rules are linked to an InfoProvider (InfoCube, ODS).

i. Transfer rules are source-system dependent whereas update rules are data-target dependent.

ii. The number of transfer rules equals the number of source systems for a data target.

iii. Transfer rules are mainly for data cleansing and data formatting, whereas in the update rules you write the business rules for your data target.

iv. Currency translations are possible in update rules.

c) Using transfer rules you can assign DataSource fields to the corresponding InfoObjects of the InfoSource. Transfer rules give you the possibility to cleanse data before it is loaded into BW.
Update rules describe how the data is updated into the InfoProvider from the communication structure of an InfoSource. If you have several InfoCubes or ODS objects connected to one InfoSource, you can, for example, adjust the data for each of them using update rules.

Only in update rules:
a. You can use return tables in update rules, which split an incoming data package record into multiple ones. This is not possible in transfer rules.
b. Currency conversion is not possible in transfer rules.
c. If you have a key figure that is calculated from base key figures, you would do the calculation only in the update rules.

16. What is OSS?


OSS is the Online Support System run by SAP to support its customers.
You can access it by entering transaction OSS1, or visit service.sap.com and access it by providing your user name and password.

17. How to transport BW object?


Follow the steps.

i. RSA1 > Transport Connection.
ii. In the right window there is a category 'All objects according to type'.
iii. Select the required object you want to transport.
iv. Expand that object; there is 'Select objects'; double-click on this and you will get the list of objects; select yours.
v. Continue.
vi. Go on with the selection; select all the required objects you want to transport.
vii. There is the icon Transport Object (truck symbol).
viii. Click it; it will create one request; note down this request.
ix. Go to the Transport Organizer (T-code SE01).
x. In the display tab, enter the request and then display it.
xi. Check whether your transport request contains the required objects. If not, edit it; if yes, 'Release' that request.

That's it; your coordinator/Basis person will move this request to Quality or Production.

18. How to unlock objects in Transport Organizer?

To unlock a transport, go to SE03 --> Request/Task --> Unlock Objects.

Enter your request, select unlock, and execute. This will unlock the request.

19. What is InfoPackage Group?


An InfoPackage group is a collection of InfoPackages.

20. Differences Between Infopackage Groups and Process chains


i. InfoPackage groups are used to group only InfoPackages, whereas process chains are used to automate all the processes.

ii. InfoPackage groups: used to group all relevant InfoPackages in a group (automation of a group of InfoPackages, only for data loads). It is possible to sequence the loads in order.
Process chains: used to automate all processes, including data loads and all administrative tasks like index creation/deletion, cube compression, etc. They give highly controlled data loading.

iii. InfoPackage groups/event chains are older methods of scheduling/automation. Process chains are newer and provide more capabilities. We can use ABAP programs and a lot of additional features like ODS activation and sending emails to users based on the success or failure of data loads.

21. What are the critical issues you faced and how did you solve it?

Find your own answer based on your experience.

22. What is Conversion Routine?

a) Conversion routines are used to convert data types from internal format to external/display format or vice versa.

b) These are function modules.

c) There are many such function modules; they are of the form CONVERSION_EXIT_XXXX_INPUT / CONVERSION_EXIT_XXXX_OUTPUT.

Example:

CONVERSION_EXIT_ALPHA_INPUT
CONVERSION_EXIT_ALPHA_OUTPUT
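
As a minimal ABAP sketch (variable names are illustrative), calling the ALPHA conversion exit, which pads a value with leading zeros on input and strips them again on output:

DATA: lv_display  TYPE c LENGTH 10 VALUE '4711',
      lv_internal TYPE c LENGTH 10.

* Display format '4711' -> internal format '0000004711'
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = lv_display
  IMPORTING
    output = lv_internal.

* CONVERSION_EXIT_ALPHA_OUTPUT would turn '0000004711' back into '4711'.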

23. Difference between Start Routine and Conversion Routine


In the start routine you can modify data packages during data loading. A conversion routine usually refers to routines bound to InfoObjects (or data elements) for conversion between internal and display formats.

24. What is the use of setup tables in LO extraction?

The use of setup tables is to store your historical data in them before updating it to the target system. Once you have filled the setup tables with the data, you need not go to the application tables again and again, which in turn improves your system performance.

25. R/3 to ODS delta update is good but ODS to Cube delta is broken. How
to fix it?
i. Check in the Monitor (RSMO) what the error explanation is. Based on the explanation, we can check the reason.

ii. Check the timings of the delta loads R/3 -> ODS -> CUBE, in case they conflict after the ODS load.

iii. Check the mapping of the transfer/update rules.

iv. Failures in the RFC connection.

v. BW is not set as a source system.

vi. Dumps (for a lot of reasons: full table space, time out, SQL errors...), or an IDoc was not received correctly.

vii. There is an erroneous load before the last one, and so on...

26. What is short dump and how to rectify?


A short dump specifies that an ABAP runtime error has occurred and the error messages are written to the R/3 database tables. You can view the short dump through transaction ST22.
You get short dumps because of runtime errors. A short dump can be due to the termination of a background job, which could have many reasons.

You can check short dumps in T-code ST22. You can give the job technical name and your user ID; it will show the status of the jobs in the system, and here you can even analyze the short dump. You can use ST22 in both R/3 and BW.

Or, to call an analysis method, choose Tools --> ABAP Workbench --> Test --> Dump Analysis from the SAP Easy Access menu.
In the initial screen, you must specify whether you want to view today's dump or the dump from yesterday. If these selection criteria are too imprecise, you can enter more specific criteria: to do this, choose Goto --> Select Short Dump.
You can display a list of all ABAP dumps by choosing Edit --> Display List. You can then display and analyze a selected dump: to do this, choose Short Dump --> Dump Analysis.

Based on BW Reporting Area


Based on R/3 Extraction

Based on BW Reporting Area

1. What are the Query Tuning you do when you use reporting?
a) Install BW Statistics and use aggregates for reporting.

b) Avoid using too many characteristics in rows and columns; instead, place them in the free characteristics and navigate/drill down later.

c) OLAP cache (change the cache with T-code RSCUSTV14): it is a technique that improves query performance by caching or storing data centrally and thereby making it accessible to various application servers. When the query is run for the first time, the results are saved to the cache, so that the next time a similar query is run, it does not have to read from the data target but from the cache.

d) Pre-calculated Web templates.

e) Use a small amount of data as the starting point and do the drill-down from there.

f) Instead of running the same query each time, save the query results in a workbook to give the same query results to different users. Each time you run the query it refreshes the data; the same data should not be fetched from the data targets repeatedly.

g) Complex and large reports should not be run online; rather, they should be scheduled to run during off-peak hours to avoid excessive contention for limited system resources. We should use the Reporting Agent to run them during off-peak hours in batch mode.

h) Queries against remote cubes should be avoided, as the data comes from different systems.

i) If you have the choice between using hierarchies and using characteristics or navigational attributes, you should choose characteristics or navigational attributes.

j) Create additional indexes.

k) Use compression on cubes, since the E tables are optimized for queries.

l) Turn off warning messages on queries.

2. What is BEX Download Scheduler?

The BEX Download Scheduler is an assistant that takes you through an automatic,
step-by-step process for downloading pre-calculated Web templates as HTML pages
from the BW server onto your PC.

3. Difference between Calculated key figure and Formula?

Formulas and calculated key figures are functionally the same.

A calculated key figure (CKF) is global, whereas a formula is local (for that query only).

A CKF has a technical name and a description, whereas a formula has only a description. A CKF is available across all queries on the same InfoProvider, whereas a formula is available only for that query.

While creating a CKF, certain functions are not available in the formula builder, whereas while creating a formula, all the functions are available in the formula builder.

4. What is difference between filter and restricted key figure?

A filter restricts the whole query result, whereas an RKF restricts only the selected key figure.

For example, let's assume we have 'company code' in the filter, restricted to '0040'. The query output will then be for '0040' only.

If you restrict a key figure with '0040' in an RKF, then only that key figure's data is restricted to '0040'.

Restricted key figures are (basic) key figures of the InfoProvider that are restricted (filtered) by one or more characteristic selections, unlike a filter, whose restrictions are valid for the entire query.

For a restricted key figure, only the key figure in question is restricted to its allocated characteristic value or characteristic value interval. Scenarios such as comparing a particular key figure for various time segments, or a plan/actual comparison for a key figure where the plan data is stored using a particular characteristic, can be realized using restricted key figures.

5. What is the use of Structure in BEX/Query?

A combination of characteristics and key figures is called a structure.

Structures are basically a grouping of key figures which can be created for an InfoCube and reused in any other queries for that cube.
Structures find their biggest use in financial reports. Take the example of a financial report which has about 20 normal key figures, 10 calculated key figures and another 10 restricted key figures. Now assume that someone asks for a new report with all of these as well as 5 more key figures. Normally you would have to create a new query and manually re-create all the complex key figures. However, if you had saved them as a structure, you would just have to drag and drop the structure into the query. And if there were a change in the calculation of one key figure, you would just have to change the key figure in the structure and not change all 10 reports which show that key figure.

We get a default structure for key figures; that is, most people use structures for key figures, and SAP has designed it that way.
Within a query definition you can use either no structures or a maximum of two structures. Of these, only one can be a key figure structure.

6. Difference between filter and condition in report

Filters act on characteristics; conditions act on key figures. You do not use key figures in the filter area: only characteristic values can be restricted there, whereas conditions are created on key figures.

7. Reporting Agent

Definition: The Reporting Agent is a tool used to schedule reporting functions in the
background.
The following functions are available:

Evaluating exceptions
Printing queries
Pre-calculating Web templates
Pre-calculating characteristic variables of type pre-calculated value sets.
Pre-calculation of queries for Crystal reports
Managing bookmarks

Use
You make settings for the specified reporting functions.
You assign the individual settings to scheduling packages for background
processing.
You schedule scheduling packages as a job or within a process chain.

8. RRI: Report-Report Interfacing is the terminology used to describe linking reports together. Report-Report Interfacing uses jump targets that are created using transaction code RSBBS (see Question #4). A query with RRI functionality can be identified by clicking on the Goto icon in the BEx Analyzer toolbar.

9. What are the restrictions on ODS reporting?
Active, retired and terminated employees can be separated using different ODS objects for detail reports.

10. Difference between ODS & Cube Reporting

An ODS is a two-dimensional format and is not good for analyzing the data in a multidimensional way. If you want flat reporting, go for ODS reporting.
A cube is a multidimensional format and you can analyze the data in different dimensions, so if your requirement is a multidimensional report, go for the cube.
Example: a list of purchase orders for a vendor is a two-dimensional report, whereas sales-organization-wise, sales-area-wise, customer-wise sales for the last quarter with a comparison to earlier quarters is a multi-dimensional report.

Two-dimensional reports are similar to reporting on a table. The ODS active table is a flat table like an R/3 table, and reporting is done on the active table of the ODS; the other tables are for handling the deltas.

The cube structure is a star schema structure; hence reports on cubes are multidimensional reports.

11. Why we need to use 0Recordmode in ODS?


0RECORDMODE is an InfoObject for loading data into an ODS. The value indicates how the data should be updated and with which image type.

The field 0RECORDMODE is needed for the delta load and is added by the system if a DataSource is delta-capable. In the ODS object the field is generated during the creation process.

Based on R/3 Extraction

1. Different kinds of extractors:


LO Cockpit extractors are SAP standard/pre-defined extractors (DataSources) for loading data into BW.

CO-PA is a customer-generated, application-specific DataSource. When we create a CO-PA DataSource we get different field selections. There are no BI cubes for CO-PA.

Generic extractors: we create generic extractors from tables/views, SAP Query/InfoSet queries and function modules.

2. What's the difference between the extraction structure and the table in a DataSource?

a) The extraction structure is just a technical definition; it does not hold any physical data on the database. The reason why you have it in addition to the table/view is that you can hide or deselect fields here, so that not the complete table needs to be transferred to BW.

b) In short: the extract structure defines the fields that will be extracted, and the table contains the records in that structure.

c) The table holds data, but the extract structure doesn't. The extract structure is formed based on the table, and here we have the option to select the fields that are required for extraction. So the extract structure tells us which fields are being used for extraction.

3. Define V3 update (serialized and unserialized), direct delta and queued delta.

a) Direct delta: when the number of document changes between two delta extractions is small, you go for direct delta. The recommended limit is 10,000, i.e. if the number of document changes (creating, changing and deleting) between two successive delta runs is within 10,000, direct delta is recommended. Here the number of LUWs is higher, as they are not clubbed into one LUW.

b) Queued delta is used if the number of document changes is high (more than 10,000). Here data is written into an extraction queue and from there it is moved to the delta queue. Here up to 10,000 document changes are cumulated into one LUW.

c) The unserialized V3 update method is used only when it is not important that the data is transferred to BW in exactly the same sequence in which it was generated in R/3.

d) Serialized V3 update: this is the conventional update method, in which the document data is collected in the sequence of attachment and transferred to BW by a batch job. The sequence of the transfer does not always match the sequence in which the data was created.

The basic difference is in the sequence of data transfer: in queued delta it is the same as the one in which the documents were created, whereas in the serialized V3 update it is not always the same.

4) Difference between costing-based and account-based CO-PA.

Account-based CO-PA is tied to G/L account postings. Costing-based CO-PA is derived from value fields. Account-based is more exact for tying out to the G/L; costing-based is not easy to balance to the G/L, is more analytical, and you should expect differences. Costing-based offers some added revaluation costing features.

Implementing costing-based CO-PA is much more work but also gives many more reporting possibilities, especially focused on margin analysis. Without paying extra attention to it while implementing costing-based CO-PA, you get account-based CO-PA with it, with the advantage of reconciled data.

Account-based CO-PA is for viewing at an abstract level, whereas costing-based is the detailed level; 90% of the time we go for costing-based only.
Account-based CO-PA is based on account numbers, whereas cost accounting is based on cost centers.

CO-PA tables: the account-based CO-PA tables are COEJ, COEP, COSS and COSP.

5. Give an example of business scenario you worked on

6. What does success mean to you?

Check these questions?

1. What are the extractor types?

Application-specific:
o BW Content: FI, HR, CO, SAP CRM, LO Cockpit
o Customer-generated extractors: LIS, FI-SL, CO-PA
Cross-application (generic extractors):
o DB view, InfoSet, function module

2. What are the steps involved in LO extraction?

The steps are:
o RSA5: select the DataSources
o LBWE: maintain DataSources and activate extract structures
o LBWG: delete setup tables
o OLI*BW: fill setup tables
o RSA3: check the extraction and the data in the setup tables
o LBWQ: check the extraction queue
o LBWF: log for LO extract structures
o RSA7: BW delta queue monitor

3. How to create a connection with LIS InfoStructures?
LBW0: Connecting LIS InfoStructures to BW.

4. What is the difference between ODS, InfoCube and MultiProvider?
ODS: Provides granular data, allows overwrite, and the data is in transparent tables; ideal for drilldown and RRI.
CUBE: Follows the star schema; we can only append data; ideal for primary reporting.
MultiProvider: Does not have physical data. It allows access to data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.

5. What are start routines, transfer routines and update routines?
Start routines: The start routine is run for each data package after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global data structures; these structures or tables can be accessed in the other routines. The entire data package, in the transfer structure format, is used as a parameter for the routine.
Transfer/update routines: These are defined at the InfoObject level, similar to the start routine, and are independent of the DataSource. We can use them to define global data and global checks.

6. What is the difference between a start routine and an update routine; when, how and why are they called?
A start routine can be used to access the InfoPackage, while update routines are used while updating the data targets.

7. What is the table that is used in start routines?
The table structure will always be the structure of an ODS or InfoCube. For example, if it is an ODS, then the active table structure will be the table.

8. Explain how you used start routines in your project.
Start routines are used for mass processing of records. In a start routine, all the records of the data package are available for processing, so we can process all these records together. In one scenario, we wanted to apply a size % to the forecast data. For example, if material M1 is forecast at, say, 100 in May, then after applying the size % (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted 4 records against the one single record coming in through the InfoPackage. This is achieved in the start routine, as sketched below.
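
A hedged ABAP sketch of that start routine logic (3.x style, where the start routine works on the DATA_PACKAGE table; the field names /BIC/ZSIZE and FORECAST_QTY and the hard-coded percentages are purely illustrative):

DATA: ls_in  LIKE LINE OF DATA_PACKAGE,
      ls_out LIKE LINE OF DATA_PACKAGE,
      lt_out LIKE DATA_PACKAGE OCCURS 0.

LOOP AT DATA_PACKAGE INTO ls_in.
* Generate one record per size, applying the size percentage
* to the forecast quantity (S 20%, M 40%, L 20%, XL 20%).
  ls_out = ls_in.
  ls_out-/bic/zsize   = 'S'.
  ls_out-forecast_qty = ls_in-forecast_qty * '0.20'.
  APPEND ls_out TO lt_out.
  ls_out-/bic/zsize   = 'M'.
  ls_out-forecast_qty = ls_in-forecast_qty * '0.40'.
  APPEND ls_out TO lt_out.
  ls_out-/bic/zsize   = 'L'.
  ls_out-forecast_qty = ls_in-forecast_qty * '0.20'.
  APPEND ls_out TO lt_out.
  ls_out-/bic/zsize   = 'XL'.
  ls_out-forecast_qty = ls_in-forecast_qty * '0.20'.
  APPEND ls_out TO lt_out.
ENDLOOP.

* Replace the incoming package with the split records.
DATA_PACKAGE[] = lt_out[].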

9. What are return tables?
When we want to return multiple records instead of a single value, we use the return table in an update routine. Example: if we have the total telephone expense for a cost center, using a return table we can get the expense per employee, as sketched below.
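
A hedged sketch of the routine body for that telephone-expense case (the system generates the actual form signature with COMM_STRUCTURE and RESULT_TABLE when the 'Return table' checkbox is set; the field names costcenter, employee and expense are illustrative):

DATA: ls_result LIKE LINE OF RESULT_TABLE,
      lv_share  TYPE p DECIMALS 2.

* Illustrative only: split the cost-center total evenly over
* 4 employees; a real routine would read the employee list
* from master data instead.
lv_share = COMM_STRUCTURE-expense / 4.

DO 4 TIMES.
  CLEAR ls_result.
  ls_result-costcenter = COMM_STRUCTURE-costcenter.
  ls_result-employee   = sy-index.
  ls_result-expense    = lv_share.
  APPEND ls_result TO RESULT_TABLE.
ENDDO.

RETURNCODE = 0.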

10. How do the start routine and return table synchronize with each other?
The return table is used to return values following the execution of the start routine.

11. What is the difference between V1, V2 and V3 updates?
V1 update: a synchronous update. Here the statistics update is carried out at the same time as the document update (in the application tables).
V2 update: an asynchronous update. The statistics update and the document update take place as different tasks.
V1 and V2 don't need scheduling.
Serialized V3 update: the V3 collective update must be scheduled as a job (via LBWE). Here, document data is collected in the order it was created and transferred into BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. The V3 update only processes the update data that was successfully processed with the V2 update.

12. What is compression?
It is a process used to delete the request IDs, and this saves space.

13. What is rollup?
This is used to load new data packages (requests) into the InfoCube aggregates. If we have not performed a rollup, the new InfoCube data will not be available while reporting on the aggregate.

14. What is table partitioning and what are the benefits of partitioning an InfoCube?
It is the method of dividing a table in a way that enables quick reference. SAP uses fact-table partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps run reports faster, as data is stored in the relevant partitions, and table maintenance becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.

15. How many extra partitions are created and why?
Two partitions are created: one for dates before the begin date and one for dates after the end date.

16. What are the options available in a transfer rule?
InfoObject
Constant
Routine
Formula

17. How would you optimize the dimensions?
We should define as many dimensions as possible, taking care that no single dimension crosses more than 20% of the fact table size.

18. What are conversion routines for units and currencies in the update rule?
Using this option we can write ABAP code for unit/currency conversion. If we enable this flag, the unit of the key figure appears in the ABAP code as an additional parameter. For example, we can convert units in pounds to kilos.
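
Outside the generated routine, the same kind of conversion can be sketched with the standard function module UNIT_CONVERSION_SIMPLE (a minimal, hedged example; the unit keys 'LB' and 'KG' and the variable names are illustrative):

DATA: lv_pounds TYPE p DECIMALS 3 VALUE '10.000',
      lv_kilos  TYPE p DECIMALS 3.

* Convert 10 pounds to kilograms using the T006 unit tables.
CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
  EXPORTING
    input    = lv_pounds
    unit_in  = 'LB'
    unit_out = 'KG'
  IMPORTING
    output   = lv_kilos
  EXCEPTIONS
    OTHERS   = 1.

IF sy-subrc <> 0.
* Handle conversion errors (for example, an unknown unit).
ENDIF.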

19. Can an InfoObject be an InfoProvider? How and why?
Yes, when we want to report on characteristics or master data. We have to right-click on the InfoArea and select 'Insert characteristic as data target'. For example, we can make 0CUSTOMER an InfoProvider and report on it.

20. What is the Open Hub Service?
The Open Hub Service enables us to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. We can ensure controlled distribution across several systems. The central object for exporting data is the InfoSpoke, in which we define the source and the target object for the data. BW thus becomes the hub of an enterprise data warehouse, and the distribution of data becomes transparent through central monitoring of the distribution status in the BW system.

21. How do you transform Open Hub data?
Using a BAdI we can transform Open Hub data according to the destination requirements.

22. What is ODS?
An Operational Data Store is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.

23. What are BW Statistics and what is their use?
They are a group of Business Content InfoCubes used to measure performance for query and load monitoring. They also show the usage of aggregates, the OLAP engine and warehouse management.

24. What are the steps to extract data from R/3?
Replicate DataSources
Assign InfoSources
Maintain the communication structure and transfer rules
Create an InfoPackage
Load data

25. What delta options are available when you load from a flat file?
The 3 options for delta management with flat files:
o Full upload
o New status for changed records (ODS object only)
o Additive delta (ODS object and InfoCube)
Q) Under which menu path is the Test Workbench to be found, including in earlier releases?

The menu path is: Tools - ABAP Workbench - Test - Test Workbench.

Q) I want to delete a BEx query that is in the Production system through a request. Is anyone aware how?

A) Have you tried the RSZDELETE transaction?

Q) Errors while monitoring process chains.

A) These occur during data loading. Apart from that, in process chains you add many process types; for example, after loading data into an InfoCube you roll up data into aggregates. This rollup of data into aggregates is a process type that you keep after the process type for loading data into the cube, and it might fail.

Another one: after you load data into an ODS, you activate the ODS data (another process type), and this might also fail.

Q) In the Monitor, under Details (Header/Status/Details), under Processing (data packet): Everything OK. Context menu of Data Package 1 (1 records): Everything OK ---> Simulate update. (Here we can debug update rules or transfer rules.)

SM50: Program/Mode -> Program -> Debugging, and debug this work process.

Q) PSA cleansing.

A) You know how to edit the PSA. I don't think you can delete single records; you have to delete the entire PSA data for a request.

Q) Can we make a DataSource support delta?

A) If this is a custom (user-defined) DataSource, you can make the DataSource delta-enabled. While creating the DataSource in RSO2, after entering the DataSource name and pressing Create, there is a button at the top of the next screen that says 'Generic Delta'. If you want more details about this, there is a chapter in the extraction book; you will find it in the last pages.

Generic delta services:

Supports delta extraction for generic extractors according to:

- Time stamp
- Calendar day
- Numeric pointer, such as document number & counter

Only one of these attributes can be set as a delta attribute.

Delta extraction is supported for all generic extractors, such as tables/views, SAP Query and function modules.

The delta queue (RSA7) allows you to monitor the current status of the delta attribute.

Q) Workbooks, as a general rule, should be transported with the role.

Here are a couple of scenarios:

1. If both the workbook and its role have been previously transported, then the role does not need to be part of the transport.

2. If the role exists in both dev and the target system but the workbook has never been transported, then you have a choice of transporting the role (recommended) or just the workbook. If only the workbook is transported, an additional step has to be taken after import: locate the workbook ID via table RSRWBINDEXT (in dev, and verify the same exists in the target system) and proceed to manually add it to the role in the target system via transaction code PFCG -- ALWAYS use Ctrl+C/Ctrl+V copy/paste for adding it manually!

3. If the role does not exist in the target system, you should transport both the role and the workbook. Keep in mind that a workbook is an object unto itself and has no dependencies on other objects. Thus, you do not receive an error message from the transport of 'just a workbook'; even though it may not be visible, it will exist (verified via table RSRWBINDEXT).

Overall, as a general rule, you should transport roles with workbooks.

Q) How much time does it take to extract 1 million (10 lakh) records into an InfoCube?

A) It depends. If you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.

Q) What are the five ASAP methodologies?

A: Project Preparation, Business Blueprint, Realization, Final Preparation and Go-Live & Support.

1. Project Preparation: in this phase, decision makers define clear project objectives and an efficient decision-making process (i.e. discussions with the client about needs, requirements, etc.). Project managers will be involved in this phase (I guess).

A project charter is issued and an implementation strategy is outlined in this phase.

2. Business Blueprint: a detailed documentation of your company's requirements (i.e. which objects we need to develop or modify depending on the client's requirements).

3. Realization: here the implementation of the project takes place (development of objects, etc.), and we are involved in the project from here on.

4. Final Preparation: final preparation before going live, i.e. testing, conducting pre-go-live checks, end-user training, etc.

End-user training is given at the client site; you train the users how to work with the new environment, as they are new to the technology.

5. Go-Live & Support: the project has gone live and is in production. The project team supports the end users.

Q) What is the landscape of R/3 and what is the landscape of BW?
The landscape of R/3: not sure.

The landscape of BW: you have the development system, the testing system and the production system.

Development system: all the implementation work is done in this system (i.e. analysis, development and modification of objects), and from here the objects are transported to the testing system; but before transporting, an initial test known as unit testing (testing of objects) is done in the development system.

Testing/Quality system: quality checks and integration testing are done in this system.

Production system: all the extraction takes place in this system.

Q) How do you measure the size of an InfoCube?

A: In number of records.

Q) Difference between an InfoCube and an ODS?

A: An InfoCube is structured as an (extended) star schema where a fact table is surrounded by different dimension tables that are linked with DIM IDs. Data-wise, you will have aggregated data in the cubes, with no overwrite functionality.
An ODS is a flat structure (flat table) with no star schema concept and holds granular data (detailed level), with overwrite functionality.

Flat-file DataSources do not support 0RECORDMODE in extraction.

The 0RECORDMODE values are: 'X' = before image, ' ' = after image, 'N' = new, 'A' = additive, 'D' = delete, 'R' = reverse.

Q) Difference between display attributes and navigational attributes?
A: A display attribute is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) to drill down.

Q) SOME DATA IS UPLOADED TWICE INTO THE INFOCUBE. HOW TO CORRECT IT?
A: But how is that possible? If you loaded it manually twice, then you can delete it by request ID.

Q) CAN YOU ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. An ODS is nothing but a table.

Q) CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
A) Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.

Q) BRIEF THE DATAFLOW IN BW.
A) Data flows from the transactional system to the analytical system (BW). DataSources on the transactional system need to be replicated on the BW side and attached to an InfoSource and update rules respectively.

Q) CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?

Q) WHAT IS THE PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
FULL and DELTA.

Q) AS WE USE SBWNN, SBIW1 AND SBIW2 FOR DELTA UPDATE IN LIS, WHAT IS THE PROCEDURE IN LO-COCKPIT?
There is no LIS in the LO cockpit. We have DataSources that can be maintained (append fields). Refer to the white paper on LO-Cockpit extraction.

Q) Why do we delete the setup tables (LBWG) and fill them (OLI*BW)?

A) Initially we don't delete the setup tables, but when we make a change to the extract structure we go for it. When we change the extract structure, there are newly added fields that were not there before; so to get the required data (i.e. to take the data that is required and avoid redundancy) we delete and then fill the setup tables.

This refreshes the statistical data. The extraction setup reads the dataset that you want to process (such as customer orders, with tables like VBAK and VBAP) and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one generates or modifies application data, at least until the tables can be set up.

Q) SIGNIFICANCE OF ODS?
It holds granular data (detailed level).

Q) WHERE IS THE PSA DATA STORED?
In PSA tables.

Q) WHAT IS DATA SIZE?
The volume of data one data target holds (in number of records).

Q) Different types of INFOCUBES.
Basic, Virtual (remote, SAP remote and multi).

A virtual cube is used, for example, if you consider railway reservations, where all the information has to be updated online. For designing the virtual cube you have to write the function module that links to the table; the virtual cube is like a structure, and whenever the table is updated the virtual cube fetches the data from the table and displays the report online. FYI, you can find more information at https://www.sdn.sap.com/sdn/index.sdn: search for 'Designing Virtual Cube' and you will get good material on designing the function module.

Q) INFOSET QUERY.
Can be made of ODS objects and characteristic InfoObjects with master data.

Q) IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES ARE THERE?
In R/3 or in BW? 2 in R/3 and 2 in BW.

Q) ROUTINES?
They exist in InfoObjects, transfer rules and update rules, and as start routines.

Q) BRIEF SOME STRUCTURES USED IN BEX.
Rows and columns; you can create structures.

Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
The different variables are for texts, formulas, hierarchies, hierarchy nodes and characteristic values.

The variable types are:
- Manual entry / default value
- Replacement path
- SAP exit
- Customer exit
- Authorization

Q) HOW MANY LEVELS CAN YOU GO TO IN REPORTING?
You can drill down to any level by using navigational attributes and jump targets.

Q) WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.

Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
Refer to the documentation.

Q) IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
No.

Q) WHAT IS THE SIGNIFICANCE OF KPIs?
KPIs indicate the performance of a company. These are key figures.

Q) AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?
After image (correct me if I am wrong).

Q) REPORTING AND RESTRICTIONS.
Refer to the documentation.

Q) TOOLS USED FOR PERFORMANCE TUNING.
ST22, number ranges, deleting indexes before a load, etc.

Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA DAILY?
There should be some tool to run the job daily (SM37 jobs).

Q) AUTHORIZATIONS.
Profile Generator.

Q) WEB REPORTING.
What are you expecting?

Q) CAN A CHARACTERISTIC INFOOBJECT BE AN INFOPROVIDER?
Of course.

Q) PROCEDURES OF REPORTING ON MULTICUBES.
Refer to the help. What are you expecting? A MultiCube works on a union condition.

Q) EXPLAIN TRANSPORTATION OF OBJECTS.
Dev ---> Q and Dev ---> P.

Q) What types of partitioning are there for BW?

There are two partitioning performance aspects for BW (cube and PSA):
a) Query data retrieval performance improvement: partitioning by (say) date range improves data retrieval by making best use of database [date range] execution plans and indexes (of, say, the Oracle database engine).
b) Transactional load partitioning improvement: partitioning based on expected load volumes and data element sizes improves data loading into the PSA and cubes by InfoPackages (e.g. without timeouts).

Q) How can I compare data in R/3 with data in a BW cube after the daily delta loads? Are there any standard procedures for checking them or matching the number of records?

A) You can go to R/3 transaction RSA3 and run the extractor. It will give you the number of records extracted. Then go to the BW Monitor to check the number of records in the PSA and check whether it is the same (also in the monitor header tab).

A) RSA3 is a simple extractor checker program that allows you to rule out extraction problems in R/3. It is simple to use, but only really tells you whether the extractor works. Since the records that get updated into cube/ODS structures are controlled by update rules, you will not be able to determine what is in the cube compared to what is in the R/3 environment. You will need to compare records on a 1:1 basis against records in R/3 transactions for the functional area in question. I would recommend enlisting the help of the end-user community to assist, since they presumably know the data.

To use RSA3, go to it and enter the extractor, e.g. 2LIS_02_HDR. Click Execute and you will see the record count; you can also display the data. You are not modifying anything, so what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you how many records should be expected in BW for a given load; you have that information in the monitor RSMO during and after data loads. From RSMO, for a given load, you can determine how many records passed through the transfer rules from R/3, how many targets were updated, and how many records passed through the update rules. It also gives you error messages from the PSA.

Q) Types of transfer rules?

A) Field-to-field mapping, constant, variable and routine.

Q) Types of update rules?

A) (Check box), return table.

Q) Transfer routine?

A) Routines which we write in transfer rules.

Q) Update routine?

A) Routines which we write in update rules.

Q) What is the difference between writing a routine in transfer rules and writing a routine in update rules?
A) If you are using the same InfoSource to update data in more than one data target, it is better to write it in the transfer rules, because you can assign one InfoSource to more than one data target, whereas whatever logic you write in the update rules is specific to one particular data target.

Q) Routine with return table.

A) Update rules generally have only one return value. However, you can create a routine on the 'key figure calculation' tab strip by choosing the checkbox 'Return table'. The corresponding key figure routine then no longer has a return value but a return table, and you can generate as many key figure values as you like from one data record.

Q) Start routines?

A) You can write start routines in both update rules and transfer rules. Suppose you want to restrict (delete) some records based on conditions before they get loaded into the data targets; you can specify this in the start routine of the update rules.

Example: a DELETE DATA_PACKAGE statement, i.e. it deletes records based on the condition, as sketched below.
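
A minimal sketch of such a start-routine deletion (3.x style; the field /BIC/ZSTATUS and the value 'D' are illustrative assumptions):

* Drop records that should not reach the data target at all.
DELETE DATA_PACKAGE WHERE /bic/zstatus = 'D'.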
Q) X & Y tables?

X-table: a table linking material SIDs with SIDs for time-independent navigation attributes.

Y-table: a table linking material SIDs with SIDs for time-dependent navigation attributes.

There are four types of SID tables:

X: time-independent navigational attribute SID tables

Y: time-dependent navigational attribute SID tables

H: hierarchy SID tables

I: hierarchy structure SID tables

Q) Filters & restricted key figures (real-time example)?

RKFs you can have for an SD cube: billed quantity, billing value and number of billing documents.

Q) Line-item dimension (give me a real-time example)?

Line-item dimension: an invoice number or document number is a real-time example.

Q) What does the number in the 'Total' column in transaction RSA7 mean?

A) The 'Total' column displays the number of LUWs that were written to the delta queue and have not yet been confirmed. The number includes the LUWs of the last delta request (for repetition of a delta request) and the LUWs for the next delta request. A LUW only disappears from the RSA7 display when it has been transferred to the BW system and a new delta request has been received from the BW system.

Q) How do I find out which table (in SAP BW) contains the technical name, description and creation date of a particular report (reports created using the BEx Analyzer)?

A) There is no single such table in BW; if you want to know such details while opening a particular query, press the Properties button and you will see all the details you wanted.

You will find information about technical names and descriptions of queries in the following tables: the directory of all reports (table RSRREPDIR) and the directory of the reporting component elements (table RSZELTDIR). For workbooks and their connections to queries, check the where-used list for reports in workbooks (table RSRWORKBOOK) and the titles of Excel workbooks in the InfoCatalog (table RSRWBINDEXT).
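
A hedged ABAP sketch of reading the report directory (the table name RSRREPDIR comes from the text above; the field names COMPID and INFOCUBE are assumptions to be verified in SE11):

DATA lt_reports TYPE STANDARD TABLE OF rsrrepdir.

* List all queries defined on a given InfoProvider
* ('ZSD_C01' is an illustrative InfoProvider name).
SELECT * FROM rsrrepdir
  INTO TABLE lt_reports
  WHERE infocube = 'ZSD_C01'.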
Q) What is a LUW in the delta queue?

A) A LUW from the point of view of the delta queue


can be
an individual document, a group of documents from a
collective run or a whole data packet of an
application
extractor.

Q) Why does the number in the 'Total' column in the


overview screen of Transaction RSA7 differ from the
number
of data records that is displayed when you call the
detail
view?

A) The number on the overview screen corresponds to


the
total of LUWs (see also first question) that were
written
to the qRFC queue and that have not yet been
confirmed. The
detail screen displays the records contained in the
LUWs.
Both, the records belonging to the previous delta
request
and the records that do not meet the selection
conditions
of the preceding delta init requests are filtered
out.
Thus, only the records that are ready for the next
delta
request are displayed on the detail screen. In the
detail
screen of Transaction RSA7, a possibly existing
customer
exit is not taken into account.
Q) Why does Transaction RSA7 still display LUWs on the overview screen after successful delta loading?

A) Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded into the BW System. Then the LUWs of the previous delta may be confirmed (and also deleted). In the meantime, the LUWs must be kept for a possible repetition of the delta request. In particular, the number on the overview screen does not change when the first delta is loaded into the BW System.

Q) Why are selections not taken into account when the delta queue is filled?

A) Filtering according to selections takes place when the system reads from the delta queue. This is necessary for performance reasons.

Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has also been loaded successfully?

A) It is most likely a DataSource that does not send delta data to the BW System via the delta queue but directly via the extractor (delta for master data using ALE change pointers). Such a DataSource should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.

Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading procedure from the delta queue?

A) The impact is limited. If performance problems are related to the loading process from the delta queue, refer to the application-specific notes (for example in the CO-PA area, in the Logistics Cockpit area, and so on).

Caution: As of Plug-In 2000.2 patch 3, the entries in table ROIDOCPRMS are as effective for the delta queue as for a full update. Note, however, that LUWs are not split during data loading for consistency reasons. This means that when very large LUWs are written to the delta queue, the actual package size may differ considerably from the MAXSIZE and MAXLINES parameters.

Q) Why does it take so long to display the data in the delta queue (for example, approximately 2 hours)?

A) With Plug-In 2001.1 the display was changed: the user can define the amount of data to be displayed, restrict it, selectively choose the number of a data record, distinguish between the 'actual' delta data and the data intended for repetition, and so on.
Q) What is the purpose of the function 'Delete data and meta data in a queue' in RSA7? What exactly is deleted?

A) You should act with extreme caution when you use the deletion function in the delta queue. It is comparable to deleting an init delta in the BW System and should preferably be executed there. You not only delete all data of this DataSource for the affected BW System, but you also lose the entire information concerning the delta initialization. You can then only request new deltas after another delta initialization.

When you delete the data, the LUWs kept in the qRFC queue for the corresponding target system are confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no more references to the LUWs.

The deletion function is, for example, intended for a case where the BW System from which the delta initialization was originally executed no longer exists or can no longer be accessed.

Q) Why does it take so long to delete from the delta queue (for example, half a day)?

A) Import Plug-In 2000.2 patch 3. With this patch the performance during deletion is considerably improved.

Q) Why is the delta queue not updated when you start the V3 update in the Logistics Cockpit area?

A) It is most likely that a delta initialization has not yet run or that the delta initialization was not successful. A successful delta initialization (the corresponding request must have QM status 'green' in the BW System) is a prerequisite for the application data being written to the delta queue.

Q) What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?

A) The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor. It is made up of the prefix 'BW', the client, and the short name of the DataSource. For DataSources whose names are 19 characters long or shorter, the short name corresponds to the name of the DataSource. For DataSources whose names are longer than 19 characters (for delta-capable DataSources this is only possible as of Plug-In 2001.1), the short name is assigned in table ROOSSHORTN.

In the qRFC monitor you cannot distinguish between repeatable and new LUWs. Moreover, the data of a LUW is displayed there in an unstructured manner.

Q) Why is there data in the delta queue although the V3 update was not started?

A) The data was posted in the background. In that case, the records are updated directly in the delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS). There is no duplicate transfer of records to the BW System. See Note 417189.

Q) Why does the 'Repeatable' button on the RSA7 data details screen show not only data loaded into BW during the last delta, but also newly added data, i.e. 'pure' delta records?

A) It was programmed so that a request in repeat mode fetches both the actually repeatable (old) data and the new data from the source system.

Q) I loaded several delta inits with various selections. For which one is the delta loaded?

A) For the delta, all selections made via delta inits are summed up. This means a delta for the 'total' of all delta initializations is loaded.

Q) How many selections for delta inits are possible in the system?

A) With simple selections (intervals without complicated join conditions or single values), you can make up to about 100 delta inits; it should not be more.

With complicated selection conditions, it should be only up to 10-20 delta inits.

Reason: with many selection conditions that are joined in a complicated way, too many 'where' lines are generated in the generated ABAP source code, which may exceed the memory limit.

Q) I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?

A) Before you copy a source client or source system, make sure that your deltas have been fetched from the delta queue into BW and that no delta is pending. After the client copy, an inconsistency might occur between the BW delta tables and the OLTP delta tables, as described in Note 405943. After the client copy, table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the system copy, the table will contain the entries with the old logical system name, which are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case since the delta depends on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you should expect that the delta will have to be initialized after the copy.

Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of processes?

A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing BW Support, or only if this is explicitly requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.

Q) Despite the delta request being started after completion of the collective run (V3 update), it does not contain all documents. Only another delta request loads the missing documents into BW. What is the cause of this 'splitting'?

A) The collective run submits the open V2 documents to the task handler for processing, which processes them in one or several parallel update processes in an asynchronous way. For this reason, plan a sufficiently large 'safety time window' between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution where this problem does not occur is described in Note 505700.

Q) Despite my deleting the delta init, LUWs are still written into the delta queue. Why?

A) In general, delta initializations and deletions of delta inits should always be carried out at a time when no posting takes place. Otherwise, buffer problems may occur: if a user started an internal mode at a time when the delta initialization was still active, he/she posts data into the queue even though the initialization has been deleted in the meantime. This is the case in your system.

Q) In SMQ1 (qRFC monitor) I have status 'NOSEND'. In table TRFCQOUT, some entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the 'Status' field mean what, which values are correct, and which are alarming? Are the statuses BW-specific or generally valid in qRFC?

A) Tables TRFCQOUT and ARFCSSTATE: the status READ means that the record was read once, either in a delta request or in a repetition of the delta request. However, this does not mean that the record has successfully reached BW yet. The status READY in TRFCQOUT and RECORDED in ARFCSSTATE mean that the record has been written into the delta queue and will be loaded into BW with the next delta request or a repetition of a delta. In any case, only the statuses READ, READY, and RECORDED in both tables are considered valid. The status EXECUTED in TRFCQOUT can occur temporarily; it is set before starting a delta extraction for all records with status READ present at that time. The records with status EXECUTED are usually deleted from the queue in packages within a delta request directly after the status is set, before a new delta is extracted. If you see such records, it means that either a process which is confirming and deleting records already loaded into BW is currently running successfully, or, if the records remain in the table with status EXECUTED for a longer period of time, that there are problems deleting the records which have already been successfully loaded into BW. In this state, no more deltas are loaded into BW. Every other status is an indicator of an error or an inconsistency. NOSEND in SMQ1 means nothing (see Note 378903).

The value 'U' in field 'NOSEND' of table TRFCQOUT is discomforting.

Q) The extract structure was changed when the delta queue was empty. Afterwards, new delta records were written to the delta queue. When the delta is loaded into the PSA, it shows that some fields were moved. The same result occurs when the contents of the delta queue are listed via the detail display. Why is the data displayed differently? What can be done?

A) Make sure that the change to the extract structure is also reflected in the database and that all servers are synchronized. We recommend resetting the buffers using Transaction $SYNC. If the extract structure change is not communicated synchronously to the server where the delta records are being created, the records are written with the old structure until the new structure has been generated. This may have disastrous consequences for the delta.

When the problem occurs, the delta needs to be re-initialized.

Q) How and where can I control whether a repeat delta is requested?

A) Via the status of the last delta in the BW Request Monitor. If the request is RED, the next load will be of type 'Repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. For the contents of the repeat, see Question 14. Delta requests set to red despite the data already being updated lead to duplicate records in a subsequent repeat if they have not been deleted from the data targets concerned beforehand.

Q) As of PI 2003.1, the Logistics Cockpit offers various types of update methods. Which update method is recommended in logistics? According to which criteria should the decision be made? How can I choose an update method in logistics?

A) See the recommendation in Note 505700.

Q) Are there particular recommendations regarding the data volume the delta queue may grow to without facing the danger of a read failure due to memory problems?

A) There is no strict limit (except for the restricted number range of the 24-digit QCOUNT counter in the LUW management table, which is of no practical importance, or the restrictions regarding the volume and number of records in a database table).

When estimating 'smooth' limits, both the number of LUWs and the average data volume per LUW are important. As a rule, we recommend bundling data (usually documents) already when writing to the delta queue, to keep the number of LUWs small (this can partly be set in the applications, e.g. in the Logistics Cockpit). The data volume of a single LUW should not be considerably larger than 10% of the memory available to the work process for data extraction (in a 32-bit architecture with a memory volume of about 1 GB per work process, 100 MB per LUW should not be exceeded). That limit is of rather small practical importance as well, since a comparable limit already applies when writing to the delta queue. If the limit is observed, correct reading is guaranteed in most cases.

If the number of LUWs cannot be reduced by bundling application transactions, you should at least make sure that the data is fetched from all connected BW systems as quickly as possible. But for other, BW-specific, reasons the frequency should not be higher than one delta request per hour.

To avoid memory problems, a program-internal limit ensures that never more than 1 million LUWs are read and fetched from the database per delta request. If this limit is reached within a request, the delta queue must be emptied by several successive delta requests. We recommend, however, not to let it reach that limit, but to trigger the fetching of data from the connected BW systems already when the number of LUWs reaches a 5-digit value.

Q) I would like to display the date the data was uploaded on the report. Usually, we load the transactional data nightly. Is there any easy way to include this information on the report for users, so that they know the validity of the report?

A) If I understand your requirement correctly, you want to display the date on which data was loaded into the data target from which the report is being executed. If so, configure your workbook to display the text elements in the report. This displays the 'relevance of data' field, which is the date on which the data load took place.

Q) Can we filter the fields in the transfer structure?

Q) Can we load data directly into an InfoObject without extraction? Is it possible?

A) Yes. We can copy from another InfoObject if it is the same. We can load data from the PSA if it is already in the PSA.

Q) How many days can we keep the data in the PSA if loads are scheduled daily, weekly, and monthly?

A) We can set the retention time (PSA deletion can be scheduled).

Q) How can you get the data from the client when working on offshore projects? Through which network?

A) VPN (Virtual Private Network). A VPN is a network through which we can connect to the client systems sitting offshore, via RAS (Remote Access Server).

Q) How can you analyze the project at first?

A)
* Prepare the project plan and environment.
* Define project management standards and procedures.
* Define implementation standards and procedures.
* Testing & go-live + support.

Q) There is one ODS and 4 InfoCubes. We send data at the same time to all cubes, and one cube got a lock error. How can you rectify the error?

A) Go to transaction SM66, see which process is locked, take its PID from there, then go to transaction SM12 and unlock it. Lock errors like this occur when scheduled loads collide.

Q) Can anybody tell me how to add a navigational attribute to the rows of a BEx report?

A) Expand the dimension in the left-hand panel (the InfoCube panel), select the navigational attribute, and drag and drop it into the rows panel.

Q) Is there any transaction code like SMPT or STMT?

A) In current systems (BW 3.0B and R/3 4.6B) these transaction codes don't exist.

Q) What is a transactional cube?

A) Transactional InfoCubes differ from standard InfoCubes in that the former have an improved write-access performance level. Standard InfoCubes are technically optimized for read-only access and for a comparatively small number of simultaneous accesses. The transactional InfoCube was developed to meet the demands of SAP Strategic Enterprise Management (SEM), meaning that data is written to the InfoCube (possibly by several users at the same time) and re-read as soon as possible. Standard basic cubes are not suitable for this.

Q) Is there any way to delete cube contents within update rules from an ODS data source? The reason for this would be to delete (or zero out) a cube record in an 'Open Order' cube if the open order quantity is 0. I've tried using 0RECORDMODE but that doesn't work. Also, would it be easier to write a program that runs after the load and deletes the records with a zero open quantity?

A) In the START routine of the update rules you can write ABAP code.

A) Yes, you can do it. Create a start routine in the update rules.

Strictly speaking, it is not 'deleting cube contents with update rules'; it is only possible to prevent some content from being updated into the InfoCube using the start routine. Loop over all the records and delete each record that meets the condition 'the open order quantity is 0'. You also have to think about before- and after-images in the case of a delta upload: in that case you may delete the change record while keeping the old one, so after the change the information would be wrong.
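
A minimal sketch of such a start routine (BW 3.x update rules), assuming an illustrative key figure field OPEN_QTY in the data package; it only shows the mechanics, and the before/after-image caveat above still applies:

* Remove records whose open order quantity is zero before they
* reach the InfoCube. DATA_PACKAGE has a header line in 3.x routines.
LOOP AT DATA_PACKAGE.
  IF DATA_PACKAGE-open_qty = 0.
    DELETE DATA_PACKAGE.   " deletes the current loop line
  ENDIF.
ENDLOOP.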

Q) I am not able to access a node in a hierarchy directly using variables for reports. When I use transaction RSZV, it gives a message that it doesn't exist in BW 3.0 and is embedded in BEx. Can anyone tell me the other options to get the same functionality in BEx?

A) Transaction RSZV is used only in versions earlier than 3.0B. From 3.0B onwards, it is possible in the Query Designer (BEx) itself: just right-click the InfoObject you want to use as a variable and proceed by selecting the variable type and processing type.

Q) I am wondering how I can get the values where, for example, if I run a report for the month range 01/2004 - 10/2004, the monthly value is actually divided by the number of months I selected. Which variable should I use?

Q) Why is it that every time I switch from InfoProvider to InfoObject, or from one item to another while modeling, I get the message 'Reading Data' or 'Constructing Workbench' and it runs for minutes? Is there any way to stop this?

Q) Can anyone give me information on how the BW delta works? I would also like to know about 'before image and after image'. I am currently in a BW project and have to write start routines for delta loads.

Q) I am very new to BW. I would like to clarify a doubt regarding the delta extractor. If I am correct, by using delta extractors the data that has already been scheduled will not be uploaded again. Say, for a specific scenario, Sales: I have uploaded all the sales orders created until yesterday into the cube. Now say I make changes to one of the open records that was already uploaded. What happens when I schedule it again? Will the same record be uploaded again with the changes, or will the changes be applied to the previous record?

A)

Q) In BW we need to write ABAP routines. I wish to know when and what type of ABAP routines we have to write. Also, are these routines written in update rules? I would be glad if this were clarified with real-time scenarios and a few examples.

A) Over here we write our routines in the start routines of the update rules or in the transfer structure (you can choose between writing them in the start routine or directly behind the individual characteristics). In the transfer structure you just click the yellow triangle behind a characteristic and choose 'routine'. In the update rules you can choose 'start routine' or click the triangle with the green square behind an individual characteristic. Usually we only use a start routine when it does not concern one single characteristic (for example, when you have to read the same table for 4 characteristics); see the sketch below.
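
A minimal start-routine sketch for that multi-characteristic lookup case, assuming an illustrative lookup table ZMAT_ATTR; the buffered table would be declared in the routine's global code area so the individual characteristic routines can read it:

* Buffer the lookup table once per data package; the characteristic
* routines then use READ TABLE instead of four database selects.
DATA: gt_attr TYPE STANDARD TABLE OF zmat_attr.

SELECT * FROM zmat_attr INTO TABLE gt_attr.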

We used ABAP routines, for example:

* To convert to uppercase (transfer structure).

* To convert values from a third-party tool with different keys into the same keys our SAP system uses (transfer structure).

* To select only a part of the data from an InfoSource when updating the InfoCube (start routine), etc. A transfer-routine sketch follows this list.
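
A minimal sketch of the uppercase case as a transfer routine behind a single characteristic (BW 3.x transfer rules); TRAN_STRUCTURE-sortl is an illustrative source field:

* RESULT is the value passed on to the communication structure.
RESULT = TRAN_STRUCTURE-sortl.
TRANSLATE RESULT TO UPPER CASE.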

Q) What is ODS?

A) An ODS object acts as a storage location for consolidated and cleansed transaction data (transaction data or master data, for example) at the document (atomic) level. This data can be evaluated using a BEx query.

Standard ODS object.

Transactional ODS object: the data is immediately available here for reporting.

A transactional ODS object differs from a standard ODS object in the way it prepares data. In a standard ODS object, data is stored in different versions (new/delta, active, change log/modified), whereas a transactional ODS object contains the data in a single version. Therefore, data is stored in precisely the same form in which it was written to the transactional ODS object by the application. In BW, you can use a transactional ODS object as a data target for an analysis process.

The transactional ODS object is also required by diverse applications, such as SAP Strategic Enterprise Management (SEM), as well as other external applications.

Transactional ODS objects allow data to be available quickly. The data from this kind of ODS object is accessed transactionally, that is, data is written to the ODS object (possibly by several users at the same time) and re-read as soon as possible.

It is not a replacement for the standard ODS object; rather, it is an additional function that can be used for special applications.

The transactional ODS object simply consists of a table for active data. It retrieves its data from external systems via fill or delete APIs. The loading process is not supported by the BW system. The advantage of the way it is structured is that data is easy to access: it is made available for reporting immediately after being loaded.

Q) What does an InfoCube contain?

A) Each InfoCube has one fact table and a maximum of 16 dimensions (13 user-defined plus 3 system-defined: time, unit, and data packet).

Q) What does the FACT table contain?

A) A fact table consists of key figures. Each fact table can contain a maximum of 233 key figures. A dimension can contain up to 248 freely available characteristics.

Q) How many dimensions are in a cube?

A) 16 dimensions: 13 user-defined and 3 system pre-defined (time, unit, and data packet).

Q) What does the SID table contain?

A) SID keys linked with the dimension tables and the master data tables (attributes, texts, hierarchies).

Q) What does the ATTRIBUTE table contain?

A) Master attribute data.

Q) What does the TEXT table contain?

A) Master text data: short text, medium text, long text, and the language key if it is language-dependent.

Q) What does the hierarchy table contain?

A) Master hierarchy data.

Q) What is the advantage of the extended star schema?

Q) Differences between the star schema & the extended star schema?

A) In the star schema, a FACT table sits in the center, surrounded by dimension tables, and the dimension tables contain master data. In the extended star schema, the dimension tables do not contain master data; instead, it is stored in master data tables divided into attributes, texts & hierarchies. These master data tables and the dimension tables are linked with each other via SID keys. Master data tables are independent of the InfoCube and reusable in other InfoCubes.

Q) Where in BW do you go to add a permitted character like \ or # so that BW will accept it? This is transaction data that loads fine into the PSA but not into the data target.

A) Check transaction SPRO: click the glasses button (SAP Reference IMG) => Business Information Warehouse => Global Settings => the second entry in the list (maintain permitted extra characters; also reachable directly via transaction RSKC).

Q) Do data packets exist even if you don't enter the master data (when created)?

Q) When are dimension IDs created?

A) When transaction data is loaded into the InfoCube.

Q) When are SIDs generated?

A) When master data is loaded into the master tables (attributes, texts, hierarchies).

Q) How would we delete the data in an ODS?

A) By request IDs, selective deletion & change log entry deletion.

Q) How would we delete the data in the change log table of an ODS?

A) Context menu of the ODS → Manage → Environment → Change log entries.

Q) What extra fields does the PSA contain?

A) Four: request ID, data packet number, partition value, and record number.

Q) Is partitioning possible for an ODS?

A) No, it's possible only for a cube (and for the PSA; see RSCUSTV6).

Q) Why partitioning?

A) For performance tuning.

Q) Have you ever tried to load data from 2 InfoPackages into one cube?

A) Yes.

Q) Different types of attributes?

A) Navigational attributes, display attributes, time-dependent attributes, compounding attributes, transitive attributes, currency attributes.

Q) Transitive attributes?

A) Navigational attributes that themselves have navigational attributes; these are called transitive attributes.

Q) Navigational attribute?

A) Used for drill-down reporting (RRI).

Q) Display attributes?

A) DISPLAY attributes can be shown in a report but are used only for display.

Q) How do you recognize whether an attribute is a display attribute?

A) In Edit Characteristics for the characteristic, on the General tab, it is checked as 'attribute only'.

Q) Compounding attribute?
A)

Q) Time dependent attributes?

A)

Q) Currency attributes?

A)

Q) Authorization-relevant object. Why is authorization needed?

A)

Q) How do we convert a master data InfoObject to a data target?

A) InfoArea → InfoProvider (context menu) → Insert characteristic data as data target.

Q) How do we load the data if a flat file consists of both master and transaction data?

A) Using the flexible update method while creating the InfoSource.

Q) Steps in LIS extraction?

A)

Q) Steps in LO extraction?

A) * Maintain extract structures. (R/3)

* Maintain DataSources. (R/3)

* Replicate DataSources in BW.

* Assign InfoSources.

* Maintain communication structures/transfer rules.

* Maintain InfoCubes & update rules.

* Activate extract structures. (R/3)

* Delete setup tables / set up extraction. (R/3)

* InfoPackage for the delta initialization.

* Set up periodic V3 update. (R/3)

* InfoPackage for delta uploads.

Q) Steps in flat file extraction?

A)

Q) Different delta types in LO?

A) Direct delta, queued delta, serialized V3 update, unserialized V3 update.

Direct delta: with every document posted in R/3, the extraction data is transferred directly into the BW delta queue. Each document posting with delta extraction becomes exactly one LUW in the corresponding delta queue.

Queued delta: the extraction data from the application is collected in an extraction queue instead of as update data, and can be transferred to the BW delta queue by an update collection run, as in the V3 update.

Q) What does the LO Cockpit contain?

A) * Maintaining extract structures.

* Maintaining DataSources.

* Activating updates.

* Controlling updates.

Q) RSA6 --- Maintain DataSources.

Q) RSA7 --- Delta queue (allows you to monitor the current status of the delta queue).

Q) RSA3 --- Extractor checker.

Q) LBW0 --- Transaction code for LIS.

Q) LBWG --- Delete setup tables in LO.

Q) OLI*BW --- Fill setup tables.

Q) LBWE --- Transaction code for Logistics extractors (LO Cockpit).

Q) RSO2 --- Maintain generic DataSources.

Q) MC21 --- Create a user-defined information structure for LIS (it corresponds to an InfoSource in SAP BW).

Q) MC24 --- Create updating rules for LIS information structures.

Q) PFCG --- Role maintenance; assign users to these roles.

Q) SE03 --- Changeability of the BW namespace.

Q) RSDCUBEM --- Display, change, or delete an InfoCube.

Q) RSD5 --- Data packet characteristics maintenance.

Q) RSDBC --- DB Connect.

Q) RSMO --- Monitoring of data loads.

Q) RSCUSTV6 --- Partitioning of the PSA.

Q) RSRT --- Query monitor.

Q) RSRV --- Analysis and repair of BW objects.

Q) RRMX --- BEx Analyzer.

Q) RSBBS --- Report-to-report interface (RRI).

Q) SPRO --- IMG (to make configuration settings in BW).

Q) RSDDV --- Maintain aggregates.

Q) RSKC --- Permitted character checker.

Q) ST22 --- Check short dumps.

Q) SM37 --- Schedule background jobs.

Q) RSBOH1 --- Open Hub Service: create InfoSpoke.

Q) RSMONMESS --- 'Messages for the monitor' table.

Q) ROOSOURCE --- Table to find out delta update methods.

Q) RODELTAM --- Table describing the delta modes of records (i.e. before image & after image).

Q) SMOD --- SAP enhancement definitions.

Q) CMOD --- Project management of SAP enhancements.

Q) SPAU --- Modification adjustment (program compare after upgrade).

Q) SE11 --- ABAP Dictionary.

Q) SE09 --- Transport Organizer (Workbench Organizer).

Q) SE10 --- Transport Organizer (extended view).

Q) SBIW --- Implementation guide for BW extraction in the source system.

Q) Statistical Update?

A)

Q) What are process chains?

A) The transaction code is RSPC. A process chain is a sequence of processes scheduled in the background and waiting to be triggered by a specific event; process chains are nothing but grouped processes. The process variant (start variant) is how the process chain knows where to start.

There must be exactly one start variant in each process chain. Here we specify when the process chain should start, by giving a date and time, or immediately.

Some of these processes trigger an event of their own that in turn triggers other processes.

Ex: Start chain → Delete basic cube indexes → Load data from the source system to the PSA → Load data from the PSA to the data target ODS → Load data from the ODS to the basic cube → Create indexes for the basic cube after loading data → Create database statistics → Roll up data into the aggregate → Restart chain from the beginning.

Q) What are process types & process variants?

A) Process types are: general services, load process & subsequent processing, data target administration, reporting agent & other BW services.

The process variant (start variant) is how the process type knows when & where to start.

Q) Difference between master data & transaction InfoPackages?

A) 5 tabs in a master data InfoPackage & 6 tabs in a transaction data InfoPackage; the extra tab in transaction data is DATA TARGETS.

Q) Types of updates?

A) Full update, init delta update & delta update.

Q) Is a full update possible while loading delta data from R/3?

A) InfoPackage → Scheduler → Repair Request flag (check it). This is only possible when we use the MM & SD modules.

Q) InfoPackage groups?

A)

Q) Explain the status of records in the active & change log tables of an ODS when records are modified in the source system?

A)

Q) Why does it take more time to load transaction data, even when loading transactions without master data (with the checkbox 'Always update data, even if no master data exists for the data' checked)?

A) Because while loading the data, the system has to create SID keys for the transaction data.

Q) What do we use HIDE fields, SELECT fields & CANCELLATION fields for?

A) Selection fields: when we check this column, the field will appear in the InfoPackage data selection tab.

Hide fields: these fields are not transferred to the BW transfer structure.

Cancellation: it reverses posted documents by multiplying the customer-defined key figures by -1, nullifying the value. For example, a billed quantity of 10 that is cancelled sends -10, netting the total to 0. This is reverse posting.

Q) Transporting.

A) When it comes to transporting for R/3 and BW, you should always transport all the R/3 objects first. Once you transport all the R/3 objects to the 2nd system, you have to replicate the DataSources into the 2nd BW system and then transport the BW objects.

First, you transport all the DataSources from the 1st R/3 system to the 2nd R/3 system. Second, you replicate the DataSources from the 2nd R/3 system into the 2nd BW system. Third, you transport all the BW objects from the 1st BW system to the 2nd BW system.

You have to send your extractors first to the corresponding R/3 QA box and replicate them to BW. Then you do the transport in BW.

Development, then testing, and then production.

Q) Functionality of init delta & delta updates?

A)

Q) What is a change run ID?

A)

Q) Currency conversions?

A)

Q) Difference between a calculated key figure & a formula?

A)

Q) When does a transfer structure contain more fields than the communication structure of an InfoSource?

A) If we use a routine to derive one field of the communication structure from several fields of the transfer structure, the transfer structure contains more fields.

A) The total number of InfoObjects in the communication structure & extract structure may also differ, since InfoObjects can be copied to the communication structure from all the extract structures.

Q) What is the PSA, the technical name of the PSA, and its uses?

A) The PSA (Persistent Staging Area) stores requests in the format of the transfer structure; its tables are generated with technical names of the form /BIC/B*. When we want to delete the data in an InfoProvider and re-load it, we can load directly from the PSA instead of extracting from R/3 again.

A) It is also used for cleansing purposes.

Q) Variables in reporting?

A) Variables for characteristic values, text, hierarchies, hierarchy nodes & formula elements.

Q) Variable processing types in reporting?

A) Manual entry, replacement path, SAP exit, authorizations, customer exit.

Q) Why do we use the RSR00001 enhancement?

A) For implementing the customer exit for BEx variables in reporting; see the sketch below.
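
A minimal sketch of such a customer-exit variable, assuming an illustrative variable name ZCURDATE; the ranges line follows the usual RRRANGESID layout (verify the exact interface in your release):

* Include ZXRSRU01 (project assigned to enhancement RSR00001,
* function module EXIT_SAPLRRS0_001).
DATA: l_s_range TYPE rrrangesid.

CASE i_vnam.
  WHEN 'ZCURDATE'.            " illustrative variable name
    IF i_step = 1.            " called before the variable popup
      l_s_range-sign = 'I'.
      l_s_range-opt  = 'EQ'.
      l_s_range-low  = sy-datum.
      APPEND l_s_range TO e_t_range.
    ENDIF.
ENDCASE.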

Q) What is the use of filters?

A) A filter restricts data.

Q) What is the use of conditions?

A) To retrieve data based on particular conditions like less than, greater than, less than or equal, etc.

Q) Difference between filters & conditions?

A)

Q) What is NODIM?

A) It strips the unit from a value so that values with different units can be combined; for example, it lets 5 l + 5 kg be computed as 10.
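
A sketch of how NODIM is used in a BEx formula, written in pseudo-formula form (the key figure names are illustrative placeholders):

NODIM( 'Billed Quantity' ) + NODIM( 'Net Weight' )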


Q) What are exceptions for? How can I get the PINK color?

A) Exceptions differentiate values with colors; by assigning the relevant color to an exception level you can get pink.

Q) Why is SAPLRSAP used?

A) We use these function modules for enhancing DataSources in R/3.

Q) What are workbooks & their uses?

A)

Q) Where are workbooks saved?

A) Workbooks are saved in the Favorites.

Q) Can Favorites be accessed by other users?

A) No, they need authorization.

Q) What is an InfoSet?

A) An InfoSet is a special view of a dataset, such as a logical database, table join, table, or sequential file, and is used by SAP Query as a data source. InfoSets determine the tables, or fields in these tables, that can be referenced by a report. In most cases, InfoSets are based on logical databases. SAP Query includes a component for maintaining InfoSets. When you create an InfoSet, a DataSource in an application system is selected.

In BW, you can navigate to an InfoSet Query using one or more ODS objects or InfoObjects. You can also drill through to BEx queries and InfoSet Queries from a second BW system that is connected as a data mart.

The InfoSet Query functions allow you to report using flat data tables (master data reporting). Choose InfoObjects or ODS objects as data sources; these can be connected using joins.

You define the data sources in an InfoSet. An InfoSet can contain data from one or more tables that are connected to one another by key fields. The data sources specified in the InfoSet form the basis of the InfoSet Query.

Q) LO update modes (V1, V2, V3)?

A) Synchronous update (V1 update): the statistics update is carried out at the same time as the document update, in the same task.

Asynchronous update (V2 update): the document update and the statistics update take place separately, in different tasks.

Collective update (V3 update): again, the document update is separate from the statistics update. However, in contrast to the V2 update, the V3 collective statistics update must be scheduled as a job.

Successfully scheduling the update ensures that all the necessary information structures are properly updated when new or existing documents are processed.

Scheduling intervals should be based on the amount of activity on a particular OLTP system. For example, a development system with a relatively low or no volume of new documents may only need to run the V3 update on a weekly basis. A full production environment with hundreds of transactions per hour may have to be updated every 15 to 30 minutes.

SAP standard background job scheduling functionality may be used to schedule the V3 updates successfully. You can verify that all V3 updates have completed successfully via transaction SM13, which takes you to the UPDATE RECORDS: MAIN MENU screen. At this screen, enter an asterisk as the user (for all users), flag the radio button 'All', and hit Enter. Any outstanding V3 updates will be listed. While a non-executed V3 update will not hinder your OLTP system, by administering the V3 update jobs properly your information structures will be current and overall performance will be improved.


What Is SPRO In BW Project?

1) What is SPRO?
2) How is it used in a BW project?
3) What is the difference between IDoc and PSA in the transfer methods?

1. SPRO is the transaction code for the Implementation Guide, where you can make configuration settings.
* Type SPRO in the transaction box and you will get the screen 'Customizing: Execute Project'.
* Click on the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to make the configuration settings: SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business Information Warehouse.

2. SPRO is used to configure the following settings:

* General settings like printer settings, fiscal year settings, ODS object settings, authorization settings, settings for displaying SAP documents, etc.
* Links to other systems: links between flat files and BW systems, between R/3 and BW and other data sources, and links between the BW system and Microsoft Analysis Services, Crystal Enterprise, etc.
* UD Connect settings: configuring BI Java Connectors, establishing the RFC destination from SAP BW to the J2EE engine, installation of availability monitoring for UD Connect.
* Automated processes: settings for batch processes, background processes, etc.
* Transport settings: settings for source system name changes after transport, and creating a destination for import post-processing.
* Reporting-relevant settings: BEx settings, general reporting settings.
* Settings for Business Content, which is already provided by SAP.

3. PSA (Persistent Staging Area): a holding area for raw data. It contains detailed requests in the format of the transfer structure. It is defined per DataSource and source system, and is source-system dependent.

IDocs (Intermediate DOCuments): data structures used as API working storage for applications that need to move data into or out of SAP systems.

Use of manual security profiles with SAP BW

-----Original Message-----
Subject: Use of manual security profiles with BW? (Business Information Warehouse)

Our company is currently on version 3.1H and will be moving to 4.6B in late summer 2000. Currently, all of our R/3 security profiles were created manually. We are also in the stage of developing and going live with the add-on Business Warehouse (BW) system. For consistency, we wish to use manual profiles within the BW system and later convert all of our manual security profiles (R/3 and BW) to generated ones.

Can anyone shed any light on this situation (success or problems with using manual security profiles with BW)?

Any feedback would be greatly appreciated.

Thank you,

-----Reply Message-----
Subject: Use of manual security profiles with BW? (Business Information Warehouse)

Hi,
You are going to have fun doing this upgrade. The 4.6B system is a completely different beast from the 3.1H system. You will probably find a lot of areas where you have to extend your manually created profiles to cover new authorization objects (but then you can have this at any level).

In 4.6B you really have to use the Profile Generator, but at least there is a utility that allows you to pick up your manually created profiles and have them converted to an activity group for you. This will give you a running start in this area, but you will still have a lot of work to do.

The fact that you did not use the Profile Generator at 3.1H will not matter, as it changed at 4.5 too and the old activity groups need the same type of conversion (we are going through that bit right now).
Hope this helps

-----End of Message-----

SAP Business Information Warehouse

-----Original Message-----
Subject: Business Information Warehouse

Ever heard about apples and oranges? SAP R/3 is an OLTP system, whereas BIW is an OLAP system. LIS reports cannot provide the functionality provided by BIW.

-----Reply Message-----
Subject: Business Information Warehouse

Hello,

The following information should give you more clarity on the subject: SAP R/3 LIS (Logistics Information System) consists of information structures (which are representations of reporting requirements). Whenever an event (goods receipt, invoice receipt, etc.) takes place in an SAP R/3 module, if it is relevant to the information structure, a corresponding entry is made in the information structures. Thus the information structures form the database part of the data warehouse. For reporting on the data (with OLAP features such as drill-down, ABC analysis, graphics, etc.), you can use SAP R/3 standard analysis (or flexible analysis), the Business Warehouse (which is Excel-based), or BusinessObjects (a third-party product that can interface with SAP R/3 information structures using BAPI calls).

In short, the information structures (which are part of SAP R/3 LIS) form the data basis for reporting with BW.

Regards

-----End of Message-----

SAP Data Warehouse


We have large amounts of historical sales data stored on our legacy system (i.e. multiple files with 1 million+ records). Today the users use custom-written programs and the Focus query tool to generate sales-type reports.

We want that existing legacy system to go away and need to find a home for the data and the functionality to access and report on it. What options does SAP offer for data warehousing? How does it affect the response of the SAP database server?

We are thinking of moving the data onto a scalable NT server with a large amount of disk (10 GB+) and using PC tools to access the data. In this environment, our production SAP machine would perform weekly data transfers to this historical sales reporting system. Has anybody implemented a similar solution, or have any ideas on a good attack method to solve this issue?

You may want to look at SAP's Business Information Warehouse. This is their answer to data warehousing. I saw a presentation on this last October at the SAP Technical Education Conference and it looked pretty slick.

BIW runs on its own server to relieve the main database from query and report processing. It accepts data from many different types of systems and has a detailed administration piece to determine data source and age. Although the Information System may be around for some time, it sounded like SAP is moving towards the Business Information Warehouse as a reporting solution.


The Three Layers of SAP BW

SAP BW has three layers:

* Business Explorer: as the top layer in the SAP BW architecture, the Business Explorer (BEx) serves as the reporting environment (presentation and analysis) for end users. It consists of the BEx Analyzer, BEx Browser, BEx Web, and BEx Map for analysis and reporting activities.

* Business Information Warehouse Server: the SAP BW server, as the middle layer, has two primary roles:

Data warehouse management and administration: these tasks are handled by the production data extractor (a set of programs for the extraction of data from R/3 OLTP applications such as logistics and controlling), the staging engine, and the Administrator Workbench.

Data storage and representation: these tasks are handled by the InfoCubes in conjunction with the data manager, the Metadata Repository, and the Operational Data Store (ODS).

* Source Systems: the source systems, as the bottom layer, serve as the data sources for raw business data. SAP BW supports various data sources: R/3 systems as of Release 3.1H (with Business Content), R/3 systems prior to Release 3.1H (SAP BW regards them as external systems), non-SAP or external systems, and mySAP.com components (such as mySAP SCM, mySAP SEM, mySAP CRM, or R/3 components) or another SAP BW system.
Tickets and Authorization in SAP Business Warehouse

What are tickets? Give an example.

The typical tickets in production support work could be:

1. Loading any missing master data attributes/texts.
2. Creating ad-hoc hierarchies.
3. Validating the data in cubes/ODS.
4. Resolving any loads that run into errors.
5. Adding/removing fields in any of the master data/ODS/cubes.
6. DataSource enhancement.
7. Creating ad-hoc reports.

1. Loading any missing master data attributes/texts: this is done by scheduling the InfoPackages for the attributes/texts mentioned by the client.
2. Creating ad-hoc hierarchies: create hierarchies in RSA1 for the InfoObject.
3. Validating the data in cubes/ODS: by using validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors, resolve it: analyze the error and take suitable action.
5. Adding/removing fields in any of the master data/ODS/cubes: depends upon the requirement.
6. DataSource enhancement.
7. Creating ad-hoc reports: create new reports based on the client's requirements.

Tickets are the tracking tool by which the user tracks the work we do. A ticket can be a change request, a data load issue, or whatever. They are of types critical or moderate. 'Critical' can mean 'needs to be solved within a day or half a day', depending on the client. After solving it, the ticket is closed by informing the client that the issue is resolved. Tickets are raised during a support project for any issues or problems. If the support person faces an issue, he will ask or request the operator to raise a ticket; the operator raises the ticket and assigns it to the respective person. 'Critical' means it is the most complicated kind of issue; how you measure this depends on the contract. The concept of a ticket varies from contract to contract between companies. Generally, tickets raised by the client are handled based on priority, e.g. high priority, low priority, and so on. If a ticket is high priority it has to be resolved ASAP. A low-priority ticket is considered only after attending to the high-priority tickets.

Checklist for a BPS support project - to start the checklist:

1) InfoCubes / ODS / data targets
2) planning areas
3) planning levels
4) planning packages
5) planning functions
6) planning layouts
7) global planning sequences
8) profiles
9) list of reports
10) process chains
11) enhancements in update routines
12) any ABAP programs to be run and their logic
13) major BPS development issues
14) major BPS production support issues and resolutions

Differences Between BW and BI Versions

List the differences between BW 3.5 and BI 7.0 versions.

Major differences between SAP BW 3.5 and SAP BI 7.0:

1. In InfoSets you can now include InfoCubes as well.
2. The remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.
3. The BI Accelerator (for now, only for InfoCubes) helps reduce query run time by a factor of roughly 10-100. The BI Accelerator is a separate box and costs extra; vendors for it are HP or IBM.
4. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP person on your project for implementing the portal.
5. Search functionality has improved: you can search any object, unlike in 3.5.
6. Transformations are in and routines are passé, yet you can always revert to the old transactions too.
7. The Data Warehousing Workbench replaces the Administrator Workbench.
8. Functional enhancements have been made to the DataStore object: a new type of DataStore object and enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added.
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12. The DataSource: there is a new object concept for the DataSource; options for direct access to data have been enhanced; from BI, remote activation of DataSources is possible in SAP source systems.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now known formally as BI (part of NetWeaver 2004s). It implements Enterprise Data Warehousing (EDW). The new features / major differences include:

a) ODS renamed to DataStore.
b) Inclusion of the write-optimized DataStore, which does not have a change log and whose requests do not need activation.
c) Unification of transfer and update rules.
d) Introduction of the 'end routine' and 'expert routine'.
e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
f) Introduction of the BI Accelerator, which significantly improves performance.
g) Loading through the PSA has become a must; it looks like we no longer have the option to bypass the PSA.

16. Loading through the PSA has become mandatory; you can't skip it, and there is also no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaces the transfer and update rules. Also, in the transformation we can now write a start routine, expert routine, and end routine during data load.

New features in BI 7 compared to earlier versions:
i. New data flow capabilities such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA).
ii. Enhanced and graphical transformation capabilities, such as drag-and-relate options.
iii. One level of transformation, replacing the transfer rules and update rules.
iv. Performance optimization including the new BI Accelerator feature.
v. User management (including the new concept of analysis authorizations) for more flexible BI end-user authorizations.


What Is Different Between ODS & IC


What Is Different Between ODS & IC

What is the differenct between IC & ODS? How to flat data load to IC &
ODS?

By: Vaishnav

An ODS is a data store where you can store data at a very granular level. It has overwrite capability. The data is stored in two-dimensional tables. A cube, in contrast, is based on multidimensional modeling, which facilitates reporting along different dimensions. The data is stored in an aggregated form, unlike in an ODS, and has no overwrite capability. Reporting and analysis can be done along multiple dimensions, unlike with an ODS.

ODS objects are used to consolidate data. Normally an ODS contains very detailed data; technically there is the option to overwrite or add single records. InfoCubes are optimized for reporting. There are options to improve performance, like aggregates and compression, and it is not possible to replace single records: all records sent to an InfoCube are added up.

The most important difference between an ODS and an InfoCube is the existence of key fields in the ODS. In an ODS you can have up to 16 InfoObjects as key fields; any other InfoObjects will either be added or overwritten. So if you have flat files and want to be able to upload them multiple times, you should not load them directly into the InfoCube; otherwise you need to delete the old request before uploading a new one. There is the disadvantage that if you delete rows in the flat file, the rows are not deleted in the ODS.

I also use ODS objects to upload control data for update or transfer routines. You can simply do a SELECT on the ODS active table /BIC/A<ODS name>00 to get the data.
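
A minimal sketch of such a lookup in a routine, assuming an illustrative ODS named ZCTRL (generated active table /BIC/AZCTRL00) with illustrative fields:

* Read one control record from the ODS active table.
DATA: ls_ctrl TYPE /bic/azctrl00.

SELECT SINGLE * FROM /bic/azctrl00 INTO ls_ctrl
  WHERE /bic/zckey = '001'.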

An ODS is used as an intermediate storage area of operational data for the data warehouse. An ODS contains highly granular data. ODS objects are based on flat tables, resulting in simple modeling of the ODS. We can cleanse, transform, merge, and sort data to build staging tables that can later be used to populate InfoCubes.

An InfoCube is a multidimensional data container used as a basis for analysis and reporting processing. The InfoCube consists of a fact table and its associated dimension tables in a star schema: the fact table appears in the middle, along with several surrounding dimension tables. The central fact table is usually very large, measured in gigabytes; it is the table from which you retrieve the interesting data. The size of the dimension tables amounts to only 1 to 5 percent of the size of the fact table. Common dimensions are unit & time, etc. There are different types of InfoCubes in BW, such as basic InfoCubes, remote InfoCubes, etc.

An ODS is a flat data container used for reporting and data cleansing/quality assurance purposes. ODS objects are not based on the star schema and are used primarily for detail reporting rather than for dimensional analysis.

An InfoCube has a fact table, which contains its facts (key figures), and a relation to dimension tables. This means that an InfoCube consists of more than one table, and these tables all relate to each other. This is also called the star schema, because the dimension tables all relate to the fact table, which is the central point. A dimension is, for example, the customer dimension, which contains all data that is important for the customer.

An ODS is a flat structure. It is just one table that contains all the data. Most of the time you use an ODS for line-item data; then you aggregate this data into an InfoCube.


Difference Between PSA, ALE IDoc, ODS

What is the difference between PSA and ALE IDoc? And how is data transferred using each of them?

The following update types are available in SAP BW:


1. PSA
2. ALE (data IDoc)

You determine the PSA or IDoc transfer method in the transfer rule
maintenance screen. The process for loading the data for both transfer
methods is triggered by a request IDoc to the source system. Info IDocs
are used in both transfer methods. Info IDocs are transferred
exclusively using ALE

A data IDoc consists of a control record, a data record, and a status


record The control record contains, for example, administrative
information such as the receiver, the sender, and the client. The
status record describes the status of the IDoc, for example,
"Processed". If you use the PSA for data extraction, you benefit from
increased flexiblity (treatment of incorrect data records). Since you
are storing the data temporarily in the PSA before updating it in to
the data targets, you can check the data and change it if necessary.
Unlike a data request with IDocs, the PSA gives you various options for
additional data updates into data targets:

InfoObject/Data Target Only - This option means that the PSA is not
used as a temporary store. You choose this update type if you do not
want to check the source system data for consistency and accuracy, or
you have already checked this yourself and are sure that you no longer
require this data since you are not going to change the structure of
the data target again.

PSA and InfoObject/Data Target in Parallel (Package by Package) - BW


receives the data from the source system, writes the data to the PSA
and at the same time starts the update into the relevant data targets.
Therefore, this method has the best performance.

The parallel update is described in detail in the following: A dialog


process is started by data package, in which the data of this package
is writtein into the PSA table. If the data is posted successfully into
the PSA table, the system releases a second, parallel dialog process
that writes the data to the data targets. In this dialog process the
transfer rules for the data records of the data package are applied,
that data is transferred to the communcation structure, and then
written to the data targets. The first dialog process (data posting
into the PSA) confirms in the source system that is it completed and
the source system sends a new data package to BW while the second
dialog process is still updating the data into the data targets.

The parallelism relates to the data packages, that is, the system
writes the data packages into the PSA table and into the data targets
in parallel. Caution: the maximum number of processes set in the
source system in customizing for the extractors does not restrict the
number of processes in BW. Therefore, BW can require many dialog
processes for the load process. Ensure that there are enough dialog
processes available in the BW system. If there are not enough
processes on the system side, errors occur. For this reason, this
method is the least recommended.

PSA and then into InfoObject/Data Targets (Package by Package) -
Updates data in series into the PSA table and into the data targets by
data package. The system starts one process that writes the data
packages into the PSA table. Once the data is posted successfully into
the PSA table, it is then written to the data targets in the same
dialog process. Updating in series gives you more control over the
overall data flow compared to the parallel data transfer, since there
is only one process per data package in BW. In the BW system the
maximum number of dialog processes required for each data request
corresponds to the setting that you made in customizing for the
extractors in the control parameter maintenance screen. In contrast to
the parallel update, the system confirms that the process is completed
only after the data has been updated into the PSA and also into the
data targets for the first data package.

Only PSA - The data is not posted further from the PSA table
immediately. It is useful to transfer the data only into the PSA table
if you want to check its accuracy and consistency and, if necessary,
modify the data. You then have the following options for updating data
from the PSA table:

Automatic update - To update the data automatically into the relevant
data target after all data packages are in the PSA table and have been
updated successfully there, choose Update Subsequently in Data Targets
on the Processing tab page in the scheduler when you schedule the
InfoPackage. *-- Sunil

What is the difference between PSA and ODS?

PSA: This is just an intermediate data container. It is NOT a data
target. Its main purpose is data quality maintenance. It holds the
original, unchanged data from the source system.

ODS: This is a data target. Reporting can be done on an ODS. ODS data
is overwritable. For DataSources for which delta is not enabled, an
ODS can be used to upload delta records to an InfoCube.

You can do reporting on an ODS; in the PSA you can't do reporting directly.

An ODS contains detail-level data. In the PSA, the requested data is
saved unchanged from the source system: request data is stored in the
transfer structure format in transparent, relational database tables
in the Business Information Warehouse. The data format remains
unchanged, meaning that no summarization or transformations take place.

An ODS has three tables - the active data table, the new data table
(activation queue), and the change log - which the PSA does not have.
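
As a rough sketch of where these tables live physically (the ODS name
ZSALES is hypothetical; the naming convention is the standard BW 3.x
one): the active data sits in /BIC/AZSALES00, the activation queue in
/BIC/AZSALES40, and the change log in a generated PSA-style table. The
active table can be inspected like any transparent table:

* Quick row count on a (hypothetical) ODS active table
REPORT zods_active_count.
DATA lv_count TYPE i.
SELECT COUNT(*) FROM /bic/azsales00 INTO lv_count.
WRITE: / 'Rows in active table:', lv_count.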


Difference Between BW Technical and Functional

In general, Functional means deriving the functional specification
from the business requirement document. This job is normally done
either by the business analyst or by a system analyst who has a very
good knowledge of the business. In some large organizations there will
be a business analyst as well as a system analyst.

Any business requirement, or need for new reports or queries,
originates with the business user. This requirement is recorded after
discussion by the business analyst. A system analyst analyses these
requirements and generates the functional specification document. In
the case of BW it could also be called the logical design, in data
modeling terms.

After review, this logical design is translated into a physical
design. This process defines all the required dimensions, key figures,
master data, etc.

Once this process is approved and signed off by the requester (the
users), it is converted into practically usable tasks using the SAP BW
software. This is called Technical. The whole process of creating an
InfoProvider, InfoObjects, InfoSources, source systems, etc. falls
under the Technical domain.

What role does a consultant play if the title is BW administrator?
What is his day-to-day activity, and which will be the main focus area
in which he should be proficient?

BW Administrator - the person who provides authorization access to
different roles and profiles depending upon the requirement.

For eg. There are two groups of people : Group A and Group B.

Group A - Manager

Group B - Developer

Now the authorization or access rights for the two groups are
different.

For this sort of activity, we need an administrator.

Tips by : Raja Muraly, Rekha

Which one is more in demand for SAP jobs, ABAP/4 or BW?

In terms of opportunities, a career in SAP BW sounds better.

ABAP knowledge will help you excel as a BW consultant, so taking the
training in ABAP will be worth it.

You can shift to BW coming from either an ABAP or a functional
consultant background. The advantage of the ABAP background is that
you will find it easier to understand the technical aspects of BW,
such as when you need to create additional transfer structures or
program conversion routines for the data being uploaded, as well as
being familiar with the source tables from SAP R/3.

The advantage of coming from a functional consultant background is the
knowledge of the business process. This is important when you're
modeling new InfoCubes: you should be familiar with what kind of
data/information your users need and how they want to view/group the
data together.

Daily Tasks in Support Role and InfoPackage Failures

1. Why are there frequent load failures during extraction, and how do
you analyse them?

If these failures are related to data, there might be data
inconsistency in the source system, even though you are handling it
properly in the transfer rules. You can monitor these issues in T-code
RSMO and in the PSA (failed records), and update from there.

If you are talking about the whole extraction process, there might be issues with work process
scheduling and IDoc transfer from the source system to the target system. These can be
re-initiated by cancelling that specific data load (usually by changing the request colour from
yellow to red in RSMO) and restarting the extraction.

2. Can anyone explain briefly about 0RECORDMODE in ODS?

0RECORDMODE is an SAP-delivered InfoObject that is added to an ODS object when it is
activated; it controls how the ODS is updated during delta loads. Its values describe the kind
of delta record being sent: '' (after image), 'N' (new), 'A' (additive), 'X' (before image,
which an ODS in overwrite mode skips), and 'D' and 'R' (delete and reverse), which remove
records.
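
As a minimal sketch of how a start routine can react to the record mode (BW 3.x update
rules, where the incoming package is the standard DATA_PACKAGE table; the field name
RECORDMODE is assumed to be part of its structure):

* Start routine sketch: discard records flagged for deletion
* before they reach the update rules
DELETE DATA_PACKAGE WHERE recordmode = 'D'.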

3. What is reconciliation in BW? What is the procedure for reconciliation?

Reconciliation is the process of comparing the data in the BW system with the source system
after it has been transferred. If the data comes from a single table, you can check it with
SE16. If the DataSource is a standard DataSource that draws from many tables, ask the R/3
consultant to run a report on the same selections, take the data into an Excel sheet, and
reconcile it with the data in BW. If you are familiar with the R/3 reports yourself, you need
not depend on the R/3 consultant (it is better to know which reports to run to check the
data).

4. What are the daily tasks we do in production support? How many times do we extract the
data, and at what times?

It depends. Data load timings are in the range of 30 minutes to 8 hours; the time depends on
the number of records and the kind of transfer rules you have provided. If the transfer rules
contain roundabout logic and the update rules calculate customized key figures, long runtimes
are to be expected.

Usually you work in RSMO to see which records are failing, and update them from the PSA.

5. What are some of the frequent failures and errors?

There is no single fixed reason for a load to fail; from an interview perspective, I would
answer it this way:

a) Loads can fail due to invalid characters
b) Because of a deadlock in the system
c) Because of a previous load failure, if the load is dependent on other loads
d) Because of erroneous records
e) Because of RFC connections

These are some of the reasons for load failures; a sketch for handling reason a) follows.
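
Besides maintaining the permitted characters in RSKC, a common fix for invalid characters is a
small transfer-routine snippet that blanks out anything BW will not accept. A minimal sketch
(the allowed-character list and field length are examples, not the full RSKC set):

* Replace disallowed characters in a text field with spaces
CONSTANTS lc_allowed TYPE c LENGTH 60 VALUE
  ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.
DATA: lv_text TYPE c LENGTH 60,
      lv_off  TYPE i.
DO 60 TIMES.
  lv_off = sy-index - 1.
  IF lv_text+lv_off(1) NA lc_allowed.   " character not in allowed set
    lv_text+lv_off(1) = ' '.
  ENDIF.
ENDDO.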



Questions Answers on SAP BW

What is the purpose of setup tables?

Setup tables are a kind of interface between the extractor and the application tables. The LO
extractor takes data from the setup tables during initialization and full upload, so hitting
the application tables for selection is avoided. As these tables are required only for full
and init loads, you can delete their data after loading (transaction LBWG) in order to avoid
duplicates. Setup tables are filled with data from the application tables: they sit on top of
the actual OLTP tables storing the transaction records, and are filled during the setup run.
It is normally good practice to delete the existing setup tables before executing setup runs,
so as to avoid duplicate records for the same selections.

We have a cube; what is the need to use an ODS? Why use an ODS when we already have the cube?

1) Remember a cube has aggregated data and an ODS has granular data.
2) In the update rules of an InfoCube you do not have an overwrite option, whereas for an ODS
the default is overwrite (illustrated in the sketch below).
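
The overwrite-versus-addition difference can be illustrated with plain ABAP internal tables
(a toy sketch, not actual BW update-rule code): COLLECT behaves like a cube, adding key
figures up per key, while MODIFY TABLE behaves like an ODS, overwriting the key figure:

* Toy illustration: additive (cube-like) vs. overwrite (ODS-like)
TYPES: BEGIN OF ty_rec,
         doc    TYPE c LENGTH 10,          " key part
         amount TYPE p LENGTH 8 DECIMALS 2,
       END OF ty_rec.
DATA: lt_cube TYPE STANDARD TABLE OF ty_rec,
      lt_ods  TYPE STANDARD TABLE OF ty_rec,
      ls_rec  TYPE ty_rec.

ls_rec-doc = '4711'. ls_rec-amount = '100.00'.
COLLECT ls_rec INTO lt_cube.     " cube holds 100.00
APPEND ls_rec TO lt_ods.         " ods holds 100.00

ls_rec-amount = '40.00'.
COLLECT ls_rec INTO lt_cube.     " cube now holds 140.00 (added up)
MODIFY TABLE lt_ods FROM ls_rec. " ods now holds 40.00 (overwritten)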

What is the importance of transaction RSKC? How is it useful in resolving issues with special
characters?

How do you handle double data loading in SAP BW?

What do you mean by SAP exit, user exit, customer exit?

What are some of the production support issues? (troubleshooting guide)

When do we go for Business Content extraction, and when for LO/CO-PA extraction?

What are a few of the InfoCube names in SD and MM that we use for extraction and loading into
BW?

How do you create indexes on ODS and fact tables?

What is the data load monitor (RSMO or RSMON)?

1A. RSKC.

Using this T-code, you can allow the BW system to accept special characters in the data coming
from source systems. The list of characters can be obtained after analysing the source
system's data, or can be confirmed with the client during the design-specs stage.

2A. Exits.

These exits are customized for handling data transfer in various scenarios
(e.g. replacement path in reports - a way to pass a variable to a BW report).
Some are developed by the BW/ABAP developer and inserted wherever required.

Some of these programs are already available as part of SAP Business Content; these are called
SAP exits. Depending on the requirement, we extend some exits and customize them (customer
exits), as in the sketch below.
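
A minimal sketch of a BEx customer-exit variable (enhancement RSR00001, include ZXRSRU01; the
variable name ZCURMON is hypothetical and is filled with the current calendar month):

* ZXRSRU01 sketch: fill variable ZCURMON with the current month
DATA ls_range TYPE rrrangesid.

CASE i_vnam.
  WHEN 'ZCURMON'.
    IF i_step = 2.                 " processed after the variable popup
      CLEAR ls_range.
      ls_range-sign = 'I'.
      ls_range-opt  = 'EQ'.
      ls_range-low  = sy-datum(6). " YYYYMM
      APPEND ls_range TO e_t_range.
    ENDIF.
ENDCASE.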

3A.

Production issues are different for each BW project; the most common ones can be obtained from
some of the previous answers (data load issues).

4A.

LIS extraction is the old-school approach and is not preferred for big BW systems; there you
can expect issues related to performance and data duplication in the setup tables.

LO extraction came with most of the advantages: you can extend the existing extract structures
and use customized DataSources. If you can fetch all required data elements using the
SAP-provided extract structures, you don't need to write custom extractions. You will get a
clear idea of this after analysing the source system's data fields against the required fields
in the target system's data target structure.

5A.

MM - 0PUR_C01 (Purchasing Data), 0PUR_C03 (Vendor Evaluation)
SD - 0SD_C01 (Customer), 0SD_C03 (Sales Overview), etc.

6A.

You can do this by choosing the "Manage Data Target" option and using the index buttons on the
"Performance" tab.

7A.

RSMO is used to monitor the data flow from the source system to the target system. You can
view the data by request, source system, time, request ID, etc.

What is KPI?

KPI stands for Key Performance Indicator. KPIs are values companies use to manage their
business, e.g. net profit.

In detail: a KPI is used to measure how well an organization or individual is accomplishing
its goals and objectives. Organizations and businesses typically outline a number of KPIs to
evaluate progress made in areas where performance is harder to measure.

For example, job performance, consumer satisfaction and public reputation can be determined
using a set of defined KPIs. Additionally, KPIs can be used to specify objective
organizational and individual goals such as sales, earnings, profits, market share and similar
objectives.

The KPIs selected must reflect the organization's goals, they must be key to its success, and
they must be measurable. Key performance indicators are usually long-term considerations for
an organization.
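
As a simple worked example (the numbers are hypothetical), a profitability KPI could be
defined in BW as a calculated key figure:

Net Profit Margin % = ( Net Profit / Revenue ) * 100

With Revenue = 2,000,000 and Net Profit = 150,000:

Net Profit Margin % = 150,000 / 2,000,000 * 100 = 7.5%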


Business Warehouse SAP Interview

1. How do you convert a BEx query global structure to a local
structure (steps involved)?

Steps to convert a BEx query global structure to a local structure:

You use a local structure when you want to add structure elements that
are unique to the specific query. Changing a global structure changes
the structure for all the queries that use it; that is the reason to
go for a local structure.
Coming to the navigation part: in the BEx Analyzer, from the SAP
Business Explorer toolbar, choose the Open Query icon (the icon that
looks like a folder). In the SAP BEx Open dialog box, choose Queries,
select the desired InfoCube, and choose New. On the Define the Query
screen, expand the Structure node in the left frame, then drag and
drop the desired structure into either the Rows or the Columns frame.
Select the global structure, right-click, and choose Remove Reference.
A local structure is created.

Remember that you cannot revert the changes made to a global structure
in this regard; you would have to delete the local structure and then
drag and drop the global structure into the query definition again.

When you try to save a global structure, a dialog box prompts you to
confirm changes to all queries; that is how you identify a global
structure.

2. I have an RKF and a CKF in a query. If the report gives an error,
which one should be checked first, the RKF or the CKF, and why? (This
was asked in one of the interviews.)

An RKF consists of a key figure restricted by certain characteristic
combinations; a CKF holds calculations which fully use various key
figures. They are not interdependent; you can have both at the same
time.

To my knowledge there is no documented limit on the number of RKFs and
CKFs; the only concern would be performance. Restricted and calculated
key figures would not be an issue, but the number of key figures that
you can have in a cube is limited to around 248.

Restricted key figures restrict the key figure values based on a
characteristic (remember it won't restrict the query, only the KF
values).

Ex: you can restrict the values to a particular month.

Now I create an RKF like this (ZRKF): restrict a funds key figure with
a period variable entered by the user.

This is defined globally and can be used in any of the queries on that
InfoProvider. In the columns, let's assume there are 3 company codes.
In a new selection, I drag in:

ZRKF
Company Code 1

Similarly I do for the other company codes.

This means I have created the RKF once and am using it in different
ways in different columns (restricting with other characteristics too).

In the properties I set the relevant target currency, so the value is
displayed after conversion from the native currency to the target
currency. The same applies to the other two columns with the remaining
company codes.

3. What is the use of Define Cell in BEx, and where is it useful?

Use of cells in BEx:

When you define selection criteria and formulas for structural
components, and there are two structural components in a query,
generic cell definitions are created at the intersections of the
structural components; these determine the values presented in the
cells.

Cell-specific definitions allow you to define explicit formulas,
alongside the implicit cell definitions, and selection conditions for
cells, and in this way to override the implicitly created cell values.
This function allows you to design much more detailed queries.

In addition, you can define cells that have no direct relationship to
the structural components. These cells are not displayed; they serve
as containers for helper selections or helper formulas.

You need two structures to enable the cell editor in BEx. In every
query you have one structure for key figures; then you have to build
another structure with selections or formulas inside.

With two structures, their cross product forms a fixed reporting area
of n rows * m columns, and the intersection of any row with any column
can be defined as a formula in the cell editor. This is useful when
you want a particular cell to behave differently from the general
behaviour described in your query definition.

For example, imagine you have the following, where % is the formula
kfB / kfA * 100:

      kfA   kfB    %
chA     6     4   66%
chB    10     2   20%
chC     8     4   50%

Now suppose you want the % in row chC to be the sum of the % for chA
and the % for chB. In the cell editor you can write a formula
specifically for that cell as the sum of the two cells above it,
chC/% = chA/% + chB/%, giving:

      kfA   kfB    %
chA     6     4   66%
chB    10     2   20%
chC     8     4   86%


SAP BW Interview Questions 2


1) What is a process chain? How many types are there, and how many do
we use in real-time scenarios? Can we define interdependent processes
with tasks like data loading, cube compression, index maintenance, and
master data & ODS activation, with the best possible performance and
data integrity?
2) What is data integrity, and how can we achieve it?
3) What is index maintenance, and what is the purpose of using it in
real time?
4) When and why do we use InfoCube compression in real time?
5) What is meant by data modelling, and what does the consultant do in
data modelling?
6) How can we enhance Business Content, and for what purpose do we
enhance it (given that we can simply activate Business Content)?
7) What is fine-tuning, how many kinds are there, and for what purpose
is tuning done in real time? Can tuning only be done via InfoCube
partitions and aggregates, or by other means?
8) What is meant by MultiProvider, and for what purpose do we use a
MultiProvider?
9) What are scheduled and monitored data loads, and what are they for?

Ans # 1: Process chains exist in the Administrator Workbench. Using
them we can automate ETL processes; they allow BW people to schedule
all activities and monitor them (T-code: RSPC).

PROCESS CHAIN - Before defining PROCESS CHAIN, let us define PROCESS
in a given process chain: a process is a procedure, either within SAP
or external to it, with a start and an end. It runs in the background.

A PROCESS CHAIN is a set of such processes linked together in a chain;
in other words, each process is dependent on the previous process, and
the dependencies are clearly defined in the process chain.
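
As a sketch, a typical daily load chain strings the processes together
roughly like this (process names as they appear in RSPC; the exact
order depends on the scenario):

Start Process
-> Delete Index (on the target InfoCube)
-> Execute InfoPackage (data load)
-> Activate ODS Object Data
-> Update ODS Data in Data Targets
-> Create Index
-> Construct Database Statistics
-> Roll Up of Aggregates / Compression of the InfoCube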

This is normally done in order to automate a job or task that has to
execute more than one process to complete. To cancel a hanging
extraction job of a process chain in the source system:

1. Check the source system for that particular PC.

2. Select the request ID (it will be in Header Tab) of PC

3. Go to SM37 of Source System.

4. Double Click on the Job.

5. You will navigate to a screen

6. There, click the "Job Details" button

7. A small Pop-up Window comes

8. In the pop-up screen, take note of: a) Executing Server, b) WP
Number/PID

9. Open a new SM37 session (/OSM37)

10. In it, click on the "Application Servers" button

11. You can see the different application servers.

11a. Go to the executing server (point 8a) and double-click

12. Go to the PID (point 8b)

13. On the far left you can see a check box

14. Tick the check box

15. On the menu bar you can see "Process"

16. Under "Process" you have the option "Cancel with Core"

17. Click on that option. * -- Ramkumar K

Ans # 2: Data integrity is about eliminating duplicate entries in the
database and achieving normalization.

Ans # 4: InfoCube compression condenses the cube by eliminating
duplicate request-level entries. Compressed InfoCubes require less
storage space and are faster for retrieval of information. The catch
is that once you compress, you can no longer delete those requests
individually from the InfoCube, so you are safe only as long as you
don't have any error in the loads or the modeling.

This compression can be done through a process chain and also manually.

Tips by: Anand

Ans#3: Indexing is a process where the data is stored sorted by an
index. E.g. a phone book: when we look up somebody's number, Prasad's
number is under "P" and Rajesh's number under "R". The phone book
approach is indexing; similarly, storing data by creating indexes is
called indexing.

Ans#5: Data modeling is a process where you collect the facts, the
attributes associated with the facts, the navigational attributes,
etc., and after you collect all these you decide which ones you will
be using. This collection is done by interviewing the end users, the
power users, the stakeholders, etc. It is generally done by the team
lead, the project manager, or sometimes a senior consultant (4-5 years
of experience). So if you are new, you don't have to worry about it,
but do remember that it is an important aspect of any data warehousing
solution, so make sure that you have read about data modeling before
attending any interview or starting to work.

Ans#6: We can enhance Business Content by adding fields to it. Since
BC is delivered by SAP, it may not contain all the InfoObjects,
InfoCubes, etc. that you want to use according to your company's data
model. E.g. you have a customer InfoCube (in BC), but your company
uses an attribute for, say, apartment number; then instead of
constructing a whole new InfoCube you can add the field to the
existing BC InfoCube and get going.

Ans#7: Tuning is the most important process in BW. Tuning is done to
increase efficiency: lowering the time for loading data into a cube,
lowering the time for accessing a query, lowering the time for doing a
drill-down, etc. Fine-tuning = lowering time for everything possible.
Tuning can be done by many means, not only partitions and aggregates;
for example, compression, etc.
Ans#8: A MultiProvider can combine various InfoProviders for reporting
purposes: you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS
objects, or an InfoCube, an ODS and master data, etc. You can refer to
help.sap.com for more information.

Ans#9: A scheduled data load means you have scheduled the loading of
data for a particular date and time; you can do this on the Scheduler
tab of the InfoPackage. Monitored means you are watching that
particular data load, or other loads, using transaction RSMON.

*****


SAP BW Interview Questions


What is ODS?
It is operational data store. ODS is a BW Architectural component that appears between PSA
( Persistant Staging Area ) and infocubes and that allows Bex ( Business Explorer ) reporting.
It is not based on the star schema and is used primarily for details reporting, rather than for
dimensional analysis. ODS objects do not aggregate data as infocubes do. Data are loaded into an
IDS object by inserting new records, updating existing records, or deleting old records as specified
byRECORDMODE value.

1. How much time does it take to extract 1 million records from an InfoCube?
2. How much time does it take to load (as opposed to extract, above) 1 million records into an
InfoCube?
3. What are the phases of the ASAP methodology?
4. How do you measure the size of an InfoCube?
5. What is the difference between an InfoCube and an ODS?
6. What is the difference between display attributes and navigational attributes?

1. Ans: This depends; if you have complex coding in the update rules it will take longer,
otherwise it will take less than 30 minutes.

3. Ans:
The ASAP roadmap phases are Project Preparation, Business Blueprint, Realization, Final
Preparation, and Go-Live & Support.

4. Ans:
In number of records.

5. Ans:
An InfoCube is structured as an (extended) star schema, where a fact table is surrounded by
dimension tables which connect to SIDs; data-wise, you have aggregated data in the cube. An
ODS is a flat structure (a flat table) with no star schema concept, holding granular
(detailed-level) data.

6. Ans:
A display attribute is used only for display purposes in a report, whereas a navigational
attribute is used for drilling down in a report. We don't need to maintain the navigational
attribute in the cube as a characteristic in order to drill down (that is the advantage).

*****

Q1. SOME DATA IS UPLOADED TWICE INTO THE INFOCUBE. HOW TO CORRECT IT?
Ans: But how is it possible? If you load it twice manually, then you can delete it by request.
[Use the delta upload method.]

Q2. CAN YOU ADD A NEW FIELD AT THE ODS LEVEL?

Sure you can; an ODS is nothing but a table.

Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?

Yes, of course. For example, for loading texts and hierarchies we use different DataSources
but the same InfoSource.

Q4. BRIEF THE DATA FLOW IN BW.

Data flows from the transactional system to the analytical system (BW). The DataSource on the
transactional system needs to be replicated on the BW side and attached to an InfoSource and
update rules respectively.

Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?
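
One common line of reasoning: transfer rules apply once per DataSource for all targets, while
update rules are specific to each data target, so a conversion into a particular target's
currency belongs in the update rules. A minimal update-routine sketch using the standard
function module CONVERT_TO_LOCAL_CURRENCY (the COMM_STRUCTURE field names AMOUNT and
DOC_CURRCY are hypothetical):

* Update routine sketch: convert the document amount into USD
DATA lv_usd TYPE p LENGTH 9 DECIMALS 2.

CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
  EXPORTING
    date             = sy-datum
    foreign_amount   = COMM_STRUCTURE-amount
    foreign_currency = COMM_STRUCTURE-doc_currcy
    local_currency   = 'USD'
  IMPORTING
    local_amount     = lv_usd
  EXCEPTIONS
    no_rate_found    = 1
    OTHERS           = 2.
IF sy-subrc <> 0.
  RESULT = COMM_STRUCTURE-amount.  " no rate maintained: keep original
ELSE.
  RESULT = lv_usd.
ENDIF.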

Q6. WHAT IS THE PROCEDURE TO UPDATE DATA INTO DATA TARGETS?

Full and delta.

Q7. AS WE USE Sbwnn, SBiw1, sbiw2 FOR DELTA UPDATE IN LIS, WHAT IS THE PROCEDURE IN THE
LO COCKPIT?
There is no LIS in the LO cockpit. We have DataSources which can be maintained (append
fields). Refer to the white paper on LO cockpit extractions.

Q8. SIGNIFICANCE OF ODS.

It holds granular data.

Q9. WHERE IS THE PSA DATA STORED?

In PSA tables.

Q10. WHAT IS DATA SIZE?

The volume of data one data target holds (in number of records).

Q11. DIFFERENT TYPES OF INFOCUBES.

Basic, transactional, and virtual InfoCubes (remote, SAP remote and multi).

Q12. INFOSET QUERY.

Can be made of ODS objects and characteristic InfoObjects.

Q13. IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES ARE THERE?
In R/3 or in BW? 2 in R/3 and 2 in BW.

Q14. ROUTINES?
They exist in the InfoObject, and as transfer routines, update routines and start routines.

Q15. BRIEF SOME STRUCTURES USED IN BEX.

Rows and columns; you can create structures.
Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
Variable with default entry
Replacement path
SAP exit
Customer exit
Authorization

Q17. HOW MANY LEVELS CAN YOU GO TO IN REPORTING?

You can drill down to any level you want using navigational attributes and jump targets.

Q18. WHAT ARE INDEXES?

Indexes are database indexes, which help in retrieving data quickly.

Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.

Refer to the documentation.

Q20. IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?

No.

Q21. WHAT IS THE SIGNIFICANCE OF KPIs?

KPIs indicate the performance of a company; they are key figures.

Q22. AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?

After image (correct me if I am wrong).

Q23. REPORTING AND RESTRICTIONS.

Refer to the documentation.

Q24. TOOLS USED FOR PERFORMANCE TUNING.

ST* transactions, number ranges, deleting indexes before the load (see the sketch below), etc.
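
Dropping the secondary indexes before a large load and rebuilding them afterwards is commonly
scripted with the standard index function modules; a sketch (the cube name ZSALESCUBE is
hypothetical, and the function module names should be verified in SE37 on your release):

* Drop secondary indexes before the load, rebuild afterwards
CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
  EXPORTING
    i_infocube = 'ZSALESCUBE'.

* ... execute the data load here ...

CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_CREATE'
  EXPORTING
    i_infocube = 'ZSALESCUBE'.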

Q25. PROCESS CHAINS: IF YOU ARE USING THEM, HOW WILL YOU SCHEDULE DATA LOADS DAILY?
The chain's start process can be scheduled daily; the resulting background jobs can be
monitored in SM37.

Q26. AUTHORIZATIONS.
Profile Generator [PFCG]

Q27. WEB REPORTING.

Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT BE AN INFOPROVIDER?
Of course.

Q29. PROCEDURE FOR REPORTING ON MULTICUBES.

Refer to the help. What are you expecting? A MultiCube works on a union condition.

Q30. EXPLAIN TRANSPORTATION OF OBJECTS.

Dev ---> Q and Dev ---> P

SAP BW FAQ
BW Query Performance
Question:
1. What kind of tools are available to monitor the overall Query
Performance?
o BW Statistics
o BW Workload Analysis in ST03N (Use Export Mode!)
o Content of Table RSDDSTAT
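
You can also peek at the raw statistics records directly (a sketch; BW statistics must be
switched on first, and the exact RSDDSTAT field list varies by release, so the generic SELECT
below avoids naming fields):

* Peek at recent BW query statistics records
DATA lt_stat TYPE STANDARD TABLE OF rsddstat.

SELECT * FROM rsddstat INTO TABLE lt_stat UP TO 100 ROWS.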

2. Do I have to do something to enable such tools?

o Yes, you need to turn on the BW statistics:
RSA1, choose Tools -> BW Statistics for InfoCubes
(choose OLAP and WHM for your relevant cubes)

3. What kind of tools are available to analyse a specific query in
detail?

o Transaction RSRT
o Transaction RSRTRACE

4. Do I have an overall query performance problem?

o Use ST03N -> BW System Load values to recognize the problem. Use the
numbers given in the table 'Reporting - InfoCubes: Share of total time
(s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a
high number for all InfoCubes.
o You need to run ST03N in expert mode to get these values.

5. What can I do if the database proportion is high for all queries?

Check:
o whether the database statistics strategy is set up properly for your
DB platform (above all for the BW-specific tables)
o whether the database parameter setup accords with SAP Notes and SAP
Services (EarlyWatch)
o whether buffers, I/O, CPU and memory on the database server are
exhausted
o whether cube compression is used regularly
o whether database partitioning is used (not available on all DB
platforms)

6. What can I do if the OLAP proportion is high for all queries?

Check:
o whether the CPUs on the application server are exhausted
o whether the SAP R/3 memory setup is done properly (use TX ST02 to
find bottlenecks)
o whether the read mode of the queries is unfavourable (RSRREPDIR,
RSDDSTAT, customizing default)

7. What can I do if the client proportion is high for all queries?

o Check whether most of your clients are connected via a WAN
connection and the amount of data transferred is rather high.

8. Where can I get specific runtime information for one query?

o Again, you can use ST03N -> BW System Load.
o Depending on the time frame you select, you get historical or
current data.
o To get to a specific query, you need to drill down using the
InfoCube name.
o Use the Aggregation Query view to get more runtime information about
a single query; use the All data tab to get to the details
(DB, OLAP and frontend time, plus selected/transferred records,
plus number of cells and formats).
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments.)

o High database runtime
o High OLAP runtime
o High frontend runtime

10. What can I do if a query has a high database runtime?

o Check whether an aggregate is suitable (use All data to get the
ratio of selected records to transferred records; a high number here
is an indicator that query performance could be improved by an
aggregate).
o Check whether the database statistics are up to date for the
cube/aggregate; use the TX RSRV output (use the database checks for
statistics and indexes).
o Check whether the read mode of the query is unfavourable -
recommended is (H).

11. What can I do if a query has a high OLAP runtime?

o Check whether a high number of cells is transferred to the OLAP
processor (use "All data" to get the value "No. of Cells").
o Use the RSRT technical information to check whether any extra OLAP
processing is necessary (stock query, exception aggregation,
calculation before aggregation, virtual characteristics/key figures,
attributes in calculated key figures, time-dependent currency
translation) together with a high number of records transferred.
o Check whether a user exit is involved in the OLAP runtime.
o Check whether large hierarchies are used and the entry hierarchy
level is as deep as possible; this limits the levels of the hierarchy
that must be processed. Use SE16 on the inclusion tables and use the
List of Values feature on the successor and predecessor columns to see
which entry level of the hierarchy is used.
o Check whether a proper index exists on the inclusion table.

12. What can I do if a query has a high frontend runtime?

o Check whether a very high number of cells and formatting
instructions is transferred to the frontend (use "All data" to get the
value "No. of Cells"), which causes high network and frontend
(processing) runtime.
o Check whether the frontend PCs are within the recommendations (RAM,
CPU MHz).
o Check whether the bandwidth of the WAN connection is sufficient.

Important Transaction Codes For BW


1 RSA1 Administrator Workbench
2 RSA11 Calling up AWB with the IC tree
3 RSA12 Calling up AWB with the IS tree
4 RSA13 Calling up AWB with the LG tree
5 RSA14 Calling up AWB with the IO tree
6 RSA15 Calling up AWB with the ODS tree
7 RSA2 OLTP Metadata Repository
8 RSA3 Extractor Checker
9 RSA5 Install Business Content
10 RSA6 Maintain DataSources
11 RSA7 BW Delta Queue Monitor
12 RSA8 DataSource Repository
13 RSA9 Transfer Application Components
14 RSD1 Characteristic Maintenance
15 RSD2 Maintenance of Key Figures
16 RSD3 Maintenance of Units
17 RSD4 Maintenance of Time Characteristics
18 RSBBS Maintain Query Jumps (RRI Interface)
19 RSDCUBE Start: InfoCube Editing
20 RSDCUBED Start: InfoCube Editing
21 RSDCUBEM Start: InfoCube Editing
22 RSDDV Maintaining Aggregates
23 RSDIOBC Start: InfoObject Catalog Editing
24 RSDIOBCD Start: InfoObject Catalog Editing
25 RSDIOBCM Start: InfoObject Catalog Editing
26 RSDL DB Connect - Test Program
27 RSDMD Master Data Maintenance w. Prev. Sel.
28 RSDMD_TEST Master Data Test
29 RSDMPRO Initial Screen: MultiProvider Proc.
30 RSDMPROD Initial Screen: MultiProvider Proc.
31 RSDMPROM Initial Screen: MultiProvider Proc.
32 RSDMWB Customer Behavior Modeling
33 RSDODS Initial Screen: ODS Object Processing
34 RSIMPCUR Load Exchange Rates from File
35 RSINPUT Manual Data Entry
36 RSIS1 Create InfoSource
37 RSIS2 Change InfoSource
38 RSIS3 Display InfoSource
39 RSISET Maintain InfoSets
40 RSKC Maintaining the Permitted Extra Chars
41 RSLGMP Maintain RSLOGSYSMAP
42 RSMO Data Load Monitor Start
43 RSMON BW Administrator Workbench
44 RSOR BW Metadata Repository
45 RSORBCT BI Business Content Transfer
46 RSORMDR BW Metadata Repository
47 RSPC Process Chain Maintenance
48 RSPC1 Process Chain Display
49 RSPCM Monitor Daily Process Chains
50 RSRCACHE OLAP: Cache Monitor
51 RSRT Start of the Report Monitor
52 RSRT1 Start of the Report Monitor
53 RSRT2 Start of the Report Monitor
54 RSRTRACE Set Trace Configuration
55 RSRTRACETEST Trace Tool Configuration
56 RSRV Analysis and Repair of BW Objects
57 SE03 Transport Organizer Tools
58 SE06 Set Up Transport Organizer
59 SE07 CTS Status Display
60 SE09 Transport Organizer
61 SE10 Transport Organizer
62 SE11 ABAP Dictionary
63 SE18 Business Add-Ins: Definitions
64 RSDS DataSource Repository
65 SE19 Business Add-Ins: Implementations
66 SE19_OLD Business Add-Ins: Implementations
67 SE21 Package Builder
68 SE24 Class Builder
69 SE80 Object Navigator
70 RSCUSTA Maintain BW Settings
71 RSCUSTA2 ODS Settings
72 RSCUSTV
73 RSSM Authorizations for Reporting
74 SM04 User List
75 SM12 Display and Delete Locks
76 SM21 Online System Log Analysis
77 SM37 Overview of Job Selection
78 SM50 Work Process Overview
79 SM51 List of SAP Systems
80 SM58 Asynchronous RFC Error Log
81 SM59 RFC Destinations (Display/Maintain)
82 LISTCUBE List Viewer for InfoCubes
83 LISTSCHEMA Show InfoCube Schema
84 WE02 Display IDoc
85 WE05 IDoc Lists
86 WE06 Active IDoc Monitoring
87 WE07 IDoc Statistics
88 WE08 Status File Interface
89 WE09 Search for IDoc in Database
90 WE10 Search for IDoc in Archive
91 WE11 Delete IDocs
92 WE12 Test Modified Inbound File
93 WE14 Test Outbound Processing
94 WE15 Test Outbound Processing from MC
95 WE16 Test Inbound File
96 WE17 Test Status File
97 WE18 Generate Status File
98 WE19 Test Tool
99 WE20 Partner Profiles
100 WE21 Port Definition
101 WE23 Verification of IDoc Processing
102 DB02 Tables and Indexes Monitor
103 DB14 Display DBA Operation Logs
104 DB16 Display DB Check Results
105 DB20 Update DB Statistics
106 KEB2 Display Detailed Info on CO-PA DataSource (R/3)
107 RSD5 Edit InfoObjects
108 SM66 Global Work Process Monitor
109 OS06 Local Operating System Activity
110 SMQ1 qRFC Monitor (Outbound Queue)

----------------------------------------
