From a BW perspective, you first need to know all the SD extractors and what information they bring. Next, look at all the cubes and ODS objects for SD.
You need to reprocess the IDocs that are in RED status. For this you can take the help of your ALE IDoc team or Basis team, or you can push them manually: just search for them in the BD87 screen and reprocess.
Also, try to find out why these IDocs are stuck there.
V1 Update: whenever we create a transaction in R/3 (e.g., a sales order), the entries get into the R/3 tables (VBAK, VBAP, ...); this takes place in the V1 update.
V2 Update: the V2 update starts a few seconds after the V1 update, and in this update the values get into the statistics tables, from which we do the extraction into BW.
V3 Update: it is purely for BW extraction.
But in the document below, V1, V2 and V3 are defined in a different way. Can you please explain to me in detail what exactly the V1, V2 and V3 updates mean?
6. Do you have any idea how to improve the performance of BW?
Asynchronous Updating (V2 Update)
With this update type, the document update is made separately from the statistics update. A
termination of the statistics update has NO influence on the document update (see V1 Update).
Radio button: Updating in V3 update program
Asynchronous Updating (V3 Update)
With this update type, updating is made separately from the document update. The difference between this update type and the V2 update lies, however, in the time schedule. If the V3 update is active, the update can be executed at a later time.
In contrast to V1 and V2 Updates, no single documents are updated. The V3 update is,
therefore, also described as a collective update.
1. Go to transaction code RSA3 and see if any data is available related to your DataSource. If
data is there in RSA3 then go to transaction code LBWG (Delete Setup data) and delete the
data by entering the application name.
2. Go to transaction SBIW --> Settings for Application Specific Datasource --> Logistics -->
Managing extract structures --> Initialization --> Filling the Setup table --> Application specific
setup of statistical data --> perform setup (relevant application)
3. In OLI*** (for example OLI7BW for Statistical setup for old documents : Orders) give the name
of the run and execute. Now all the available records from R/3 will be loaded to setup tables.
4. Go to transaction RSA3 and check the data.
5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is
serialized V3 update.
6. Go to the BW system, create an InfoPackage, and under the Update tab select the Initialize Delta Process option. Then schedule the package. All the data available in the setup tables is now loaded into the data target.
7. Now, for the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to Direct/Queued delta. By doing this, records bypass SM13 and go directly to RSA7. In transaction RSA7 you can see a green light; once new records are added, you can immediately see them in RSA7.
8. Go to the BW system and create a new InfoPackage for delta loads. Double-click on the new InfoPackage. Under the Update tab you can see the delta update radio button.
9. Now you can go to your data target and see the delta record.
13. What is a dashboard?
A dashboard can be created using the Web Application Designer (WAD) or the Visual Composer (VC). A dashboard is just a collection of reports, views, links, etc. in a single view; iGoogle, for example, is a dashboard.
When we look at how all of an organization's measures are performing from a helicopter view, we need a report that quickly shows the trends in a graphical display. These reports are called dashboard reports. We could still report these measures individually, but by keeping all measures on a single page, we create a single access point for users to view all the information available to them. This saves a lot of precious time, gives clarity on the decisions that need to be taken, and helps users understand how the measures trend with the business flow.
Dashboards: can be built with Visual Composer or WAD.
To create your dashboard in BW:
(1) Create all BEx queries with the required variants and tune them well.
(2) Differentiate table queries and graph queries.
(3) Choose the graph types that meet your requirement.
(4) Draw the layout of how the dashboard page should look.
(5) Create a web template that has a navigation block / selection information.
(6) Keep the navigation-block fields common across the measures.
(7) Include the relevant web items in the web template.
(8) Deploy the URL/iView to users through the portal/intranet.
These steps summarize the creation of a dashboard using WAD.
14. How can you solve data mismatch tickets between R/3 and BW?
Check the mapping on the BW side for 0STREET in the transfer rules. Check the data in the PSA for the same field. If the PSA also does not have complete data, check the field in RSA3 in the source system.
In BI 7, the PSA is used only for data loads from the source system into BW.
18) What do we do in the Business Blueprint stage?
SAP has defined a business blueprint phase to help extract the pertinent information about your company that is necessary for implementation. These blueprints take the form of questionnaires designed to probe for information that uncovers how your company does business. As such, they also serve to document the implementation. Each business blueprint document essentially outlines your future business processes and business requirements. The kinds of questions asked are germane to the particular business function, as seen in the following sample questions:
1) What information do you capture on a purchase order?
2) What information is required to complete a purchase order?
Accelerated SAP question and answer database: the question and answer database (QADB) is a simple, although aging, tool designed to facilitate the creation and maintenance of your business blueprint. This database stores the questions and the answers and serves as the heart of your blueprint. Customers are provided with a customer input template for each application that collects the data. The question and answer format is standard across applications to facilitate easier use by the project team.
Issues database: another tool used in the blueprinting phase is the issues database. This database stores any open concerns and pending issues that relate to the implementation. Centrally storing this information assists in gathering and then managing issues to resolution, so that important matters do not fall through the cracks. You can then track the issues in the database, assign them to team members, and update the database accordingly.
24) When we use Maintain Data Source, What we do? What we will maintain?
Go to the BW system and create a new InfoPackage for delta loads. Double-click on the new InfoPackage. Under the Update tab you can see the delta update radio button.
25) Tickets and authorization in SAP Business Warehouse. What are tickets? Give an example.
Tickets are the tracking tool by which the user tracks the work we do. A ticket can be a change request, a data load, or whatever. Tickets are of types such as critical or moderate; critical can mean it needs to be solved in a day or half a day, depending on the client. After solving it, the ticket is closed by informing the client that the issue is solved. Tickets are raised during a support project and may concern any issues or problems. If the support person faces an issue, he will request the operator to raise a ticket.
The operator raises a ticket and assigns it to the respective person. Critical means the most complicated issues; how it is measured depends on the client. The concept of a ticket varies from contract to contract between companies. Generally, tickets raised by the client are considered based on priority: high priority, low priority, and so on. A high-priority ticket has to be resolved ASAP; a low-priority ticket is considered only after attending to the high-priority tickets. The typical tickets in production support work could be:
1. Loading any missing master data attributes/texts.
2. Creating ad hoc hierarchies.
3. Validating the data in cubes/ODS.
4. Resolving any loads that run into errors.
5. Adding/removing fields in any of the master data/ODS/cubes.
6. DataSource enhancement.
7. Creating ad hoc reports.
1. Loading any missing master data attributes/texts: done by scheduling the InfoPackages for the attributes/texts mentioned by the client.
2. Creating ad hoc hierarchies: create hierarchies in RSA1 for the InfoObject.
3. Validating the data in cubes/ODS: use the validation reports or compare BW data with R/3.
4. If any of the loads runs into errors, resolve it: analyze the error and take suitable action.
5. Adding/removing fields in any of the master data/ODS/cubes: depends on the requirement.
6. DataSource enhancement.
7. Creating ad hoc reports: create new reports based on the client's requirement.
30) How do you convert a BEx query global structure to a local structure (steps involved)?
Use a local structure when you want to add structure elements that are unique to the specific query. Changing a global structure changes the structure for all the queries that use it; that is the reason to go for a local structure.
Navigation:
1. In the BEx Analyzer, from the SAP Business Explorer toolbar, choose the Open Query icon (the icon that looks like a folder).
2. In the SAP BEx Open dialog box, choose Queries, select the desired InfoCube, and choose New.
3. On the Define the Query screen, in the left frame, expand the Structure node.
4. Drag and drop the desired structure into either the Rows or Columns frame.
5. Select the global structure, right-click, and choose Remove Reference. A local structure is created.
Remember that you cannot revert the changes made to the global structure in this way; you will have to delete the local structure and then drag and drop the global structure into the query definition again.
When you try to save a global structure, a dialog box prompts you to confirm changes to all queries; that is how you identify a global structure.
31) What is the use of Define Cell in BEx, and where is it useful?
Cells in BEx: when you define selection criteria and formulas for structural components, and there are two structural components in a query, generic cell definitions are created at the intersections of the structural components; these determine the values presented in the cells. Cell-specific definitions allow you to define explicit formulas and selection conditions, alongside the implicit cell definitions, and in this way to override the implicitly created cell values. This function allows you to design much more detailed queries. In addition, you can define cells that have no direct relationship to the structural components; these cells are not displayed and serve as containers for helper selections or helper formulas.
You need two structures to enable the cell editor in BEx. Every query has one structure for key figures; you then create another structure with selections or formulas inside. With two structures, their cross product gives a fixed reporting area of n rows * m columns, and the cell at the crossing of any row with any column can be defined as a formula in the cell editor. This is useful when you want a particular cell to behave differently from the general behavior described in your query definition.
For example, imagine the following, where % is a formula kfB / kfA * 100:

      kfA  kfB  %
chA   6    4    66%
chB   10   2    20%
chC   8    4    50%

Now suppose you want the % for row chC to be the sum of the % for chA and the % for chB. In the cell editor you can write a formula specifically for that cell as the sum of the two cells above it, chC/% = chA/% + chB/%, which gives:

      kfA  kfB  %
chA   6    4    66%
chB   10   2    20%
chC   8    4    86%
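The cell-editor arithmetic above can be checked with a small sketch (plain Python, not BEx; the names chA/kfA etc. just mirror the example):

```python
# Two structures: characteristics in the rows, key figures in the columns.
data = {
    "chA": {"kfA": 6, "kfB": 4},
    "chB": {"kfA": 10, "kfB": 2},
    "chC": {"kfA": 8, "kfB": 4},
}

def pct(row):
    # General formula for the % column: kfB / kfA * 100,
    # truncated to a whole percent to match the example above.
    return int(row["kfB"] / row["kfA"] * 100)

# Implicit cell values from the general formula: chA 66, chB 20, chC 50.
result = {ch: pct(row) for ch, row in data.items()}

# Cell-specific override, as in the cell editor: chC/% = chA/% + chB/%.
result["chC"] = result["chA"] + result["chB"]   # 66 + 20 = 86
```

The override replaces only the one intersection; all other cells keep the general formula.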
Manager Round Review Questions.
41) What is the difference between a filter and restricted key figures? Examples and steps in BI?
A filter restriction applies to the entire query; an RKF is a restriction applied to a single key figure. Suppose, for example, you want to analyze data only after 2006, showing sales in 2007 and 2008 against materials, and you have a key figure called Sales in your cube.
You put a global restriction at query level by putting Fiscyear > 2006 in the filter. This makes only data with Fiscyear > 2006 available for the query to process or show.
Now, to meet a requirement like the one below:

Material   Sales in 2007   Sales in 2008
M1         200             300
M2         400             700

you need to create two RKFs. "Sales in 2007" is one RKF, defined on the key figure Sales restricted by Fiscyear = 2007; similarly, "Sales in 2008" is an RKF defined on Sales restricted by Fiscyear = 2008.
The filter makes the restriction at query level: in the above case, the filter Fiscyear > 2006 makes data from the cube for the years 2001 through 2006 unavailable to the query, so the query is left with data from 2007 and 2008 only. Within that data, you design your RKFs to show only 2007 or only 2008.
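The filter-versus-RKF behavior can be sketched outside BW (plain Python; the 2006 rows are invented here just to show what the filter removes):

```python
# Cube records: (material, fiscyear, sales).
rows = [
    ("M1", 2006, 150), ("M1", 2007, 200), ("M1", 2008, 300),
    ("M2", 2006, 500), ("M2", 2007, 400), ("M2", 2008, 700),
]

# Global filter: Fiscyear > 2006 removes rows for the ENTIRE query.
filtered = [r for r in rows if r[1] > 2006]

# RKF: restricts only the one key figure it is defined on.
def rkf_sales(material, year):
    return sum(s for m, y, s in filtered if m == material and y == year)

# Two RKF columns, "Sales in 2007" and "Sales in 2008".
report = {m: (rkf_sales(m, 2007), rkf_sales(m, 2008)) for m in ("M1", "M2")}
```

The filter shrinks the data set once for everything; each RKF then carves its own slice out of what the filter left.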
42) How do you create conditions and exceptions in BI 7.0? (I know how in BW 3.5.)
From a query's name or description you cannot judge whether the query has an exception. There are two ways of finding exceptions in a query:
1. Execute the queries one by one; the ones whose background color reflects exception reporting have exceptions.
2. Open the queries in the BEx Query Designer. If you find an Exception tab to the right of the Filter and Rows/Columns tabs, the query has an exception.
--
By: leela naveen
These are questions I faced. If you have any screenshots for any of the questions, please provide them as well.
1. We have standard InfoObjects given in SAP; why did you create Z InfoObjects? Can you tell me the business scenario?
2. We have standard InfoCubes given in SAP; why did you create Z InfoCubes? Can you tell me the business scenario?
3. For key figures, what is meant by cumulative value, non-cumulative value with change, and non-cumulative value with inflow and outflow?
4. When you create an InfoObject, it shows Reference and Template; what are they?
5. What is meant by a compounding attribute? Tell me the scenario.
6. I have 3 cubes for which I created a MultiProvider and a report, but I didn't get data in that report. What happened?
7. I have 10 cubes and created a MultiProvider, but I want only 1 cube's data. What do you do?
8. What is meant by the safety upper limit and safety lower limit in the deltas? Tell me one by one for timestamp, calendar day, and numeric pointer.
9. I have 80 queries; how can you find which query is taking so much time, and how do you solve it?
10. At compression, all requests become zero; which data is compressed? Tell me in detail.
11. What is meant by a flat aggregate? Explain in detail.
12. I created a process chain; on the 1st day it took 10 minutes, after the 1st week it took 1 hour, and the next time it took 1 day with the same loads. What happened, and how can you reduce the loading time?
13. How can you know the cube size? Show me in detail if you have screenshots.
14. Where can we find transport return codes?
15. I have a report that takes a long time; how can I rectify that?
16. What is an offset? Can we create queries without offsets?
17. I said I have nearly 600 process chains; he asked how I monitor them. I said I check RSPCM and BWCCMS; he asked whether there are any third-party tools to monitor them. If there are such tools, tell me what they are.
18. How does the client access the reports?
19. I don't have master data; is it possible to load transaction data? If it is possible, are there any other steps to do that?
20. What is a structure in reporting?
21. Based on which objects did you create the extended star schema?
22. What is a line item dimension? Tell me in brief.
23. What is high cardinality? Tell me in brief.
24. A process chain is running; I have to stop the process for 1 hour and then rerun it from where it stopped. How?
Can I use aggregates with a MultiProvider?
25. What is direct scheduling, and what is a meta chain?
26. Which patch are you using presently? How can I know which patch it is?
27. How can we increase the data packet size?
28. Are hierarchies not there in BI? Why?
29. Is remodeling applied only to InfoCubes? Why not DSO/ODS?
30. With jump queries, can we jump to any transaction, such as RSA1 or SM37? Is it possible or not?
31. Why does ODS activation fail? What types of failures are there? What are the steps to handle them?
32. I have a process chain running and an InfoPackage gets an error; can I skip processing the error of that InfoPackage and then run the dependent variants? Is it possible?
You already know BW, so you need to know the extra features of BI 7.0; then you can work out the answers yourself.
1. Types of DSO in BI 7?
2. Use of the write-optimized DSO and a scenario for using it?
3. The remodeling concept in BI 7?
4. The BI Accelerator?
I hope the above questions give you a complete picture of the new functionality in BI 7.0.
Regards,
Ram.
Links:
http://forums.sdn.sap.com/thread.jspa?threadID=1560106
What is ODS?
1. How much time does it take to extract 1 million records from an InfoCube?
2. How much time does it take to load (as opposed to extract) 1 million records into an InfoCube?
1. Ans: It depends; if you have complex coding in the update rules it will take longer, otherwise less than 30 minutes.
3. Ans:
Project plan
Requirements gathering
Gap Analysis
Project Realization
4. Ans:
In no of records
5. Ans:
An InfoCube is structured as an (extended) star schema, where a fact table is surrounded by different dimension tables that connect to SIDs. Data-wise, you have aggregated data in the cubes.
An ODS is a flat structure (a flat table) with no star schema concept, and it holds granular data (detailed level).
6. Ans:
A display attribute is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) to drill down.
*-- Ravi
Q7. As we use SBWNN, SBIW1, and SBIW2 for delta updates in LIS, what is the procedure in LO Cockpit?
There is no LIS in LO Cockpit. We have DataSources, which can be maintained (fields appended). Refer to the white paper on LO Cockpit extraction.
Q14. Routines?
Routines exist in InfoObjects, in transfer rules, and in update rules, and as start routines.
Q26. Authorizations?
Profile Generator.
2. What is the difference between SAP BW 3.0B and SAP BW 3.1C / 3.5?
The best answer here is Business Content. There is additional Business Content provided with BW 3.1C that wasn't found in BW 3.0B. SAP has a pretty decent reference library on its Web site that documents the additional objects found in 3.1C.
4. What is index?
Indices/Indexes are used to locate needed records in a database table quickly. BW
uses two types of indices, B-tree indices for regular database tables and bitmap
indices for fact tables and aggregate tables.
The basic difference between the two is that navigational attributes can be used to
drilldown in a Bex report whereas display attributes cannot be used so. A
navigational attribute would function more or less like a characteristic within a cube.
To enable these features of a navigational attribute, the attribute needs to be made
navigational in the cube apart from the master data info-object.
The only difference is that navigation attributes can be used for navigation in
queries, like filtering, drill-down etc.
You can also use hierarchies on navigational attributes, as it is possible for
characteristics.
An extra feature is the possibility to change your history (see the relevant time scenarios): if a navigational attribute changes for a characteristic, it changes for all records in the past.
A disadvantage is also a slowdown in performance.
8. If there are duplicate data in Cubes, how would you fix it?
Delete the request ID, Fix data in PSA or ODS and re-load again from PSA / ODS.
Whereas Cube holds aggregated data which is not as detailed as ODS. Cube is based
on multidimensional model.
An ODS is a flat structure. It is just one table that contains all data.
Most of the time you use an ODS for line item data. Then you aggregate this data to
an info cube
One major difference is the manner of data storage. In an ODS, data is stored in flat tables; by flat I mean ordinary transparent tables. A cube, by contrast, is composed of multiple tables arranged in a star schema joined by SIDs. The purpose is to do multidimensional reporting.
In an ODS we can delete/overwrite the data load, but in a cube only adding is possible, no overwrite.
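The overwrite-versus-add difference can be sketched in a few lines (plain Python, illustrative keys and values, not SAP code). Suppose a document item was loaded with a value of 100 and is then corrected to 120:

```python
# One document item, loaded once and then corrected to a new total of 120.
key = ("DOC1", "ITEM10")

# ODS-style storage: overwrite by key, so the latest value wins.
ods = {key: 100}
ods[key] = 120                      # the corrected total simply replaces 100

# Cube-style storage: additive only, values can only be accumulated.
cube = {key: 100}
cube[key] = cube.get(key, 0) + 20   # the delta must arrive as a DIFFERENCE (+20)
```

Both end at 120, but the cube only gets there if the delta is delivered as a difference; an ODS can accept the new total directly because it overwrites by key.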
c) An InfoSet is an InfoProvider that joins data from ODS objects and InfoObjects (with master data). The join may be an outer join or an inner join. A MultiProvider, by contrast, can be created on all types of InfoProviders: cubes, ODS objects, and InfoObjects. These InfoProviders are connected to one another by a union operation.
d) A union operation is used to combine the data from these objects into a
MultiProvider. Here, the system constructs the union set of the data sets involved. In
other words, all values of these data sets are combined. As a comparison: InfoSets
are created using joins. These joins only combine values that appear in both tables.
In contrast to a union, joins form the intersection of the tables.
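The union/join contrast can be shown with two tiny data sets (plain Python; the customer keys and key figures are invented for illustration):

```python
# Customers seen by each InfoProvider (key -> key figure).
provider_a = {"C1": 5, "C2": 3}       # e.g. orders per customer
provider_b = {"C1": 4, "C3": 2}       # e.g. deliveries per customer

# MultiProvider-style UNION: every customer from either provider appears.
union_keys = sorted(set(provider_a) | set(provider_b))    # C1, C2, C3

# InfoSet-style inner JOIN: only customers present in BOTH providers appear.
join_keys = sorted(set(provider_a) & set(provider_b))     # C1 only
```

The union keeps all values of all data sets; the join keeps only their intersection, exactly as described above.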
12. What is the T-code for data archiving, and what is its advantage?
SARA.
Advantage: minimizes space and improves query performance and load performance.
13. What is data loading tuning from R/3 to BW and from flat file to BW?
a) If you have enhanced an extractor, check your code in user exit RSAP0001 for expensive SQL statements and nested SELECTs, and rectify them.
b) Watch out for ABAP code in transfer and update rules; this might slow down performance.
c) If you have several extraction jobs running concurrently, there probably are not enough system resources to dedicate to any single extraction job. Make sure to schedule these jobs judiciously.
g) Buffer the SID number ranges if you load a lot of data at once.
j) If your source is not an SAP system but a flat file, make sure that this file is housed on the application server and not on the client machine. Files stored in ASCII format are faster to load than those stored in CSV format.
a) System trace: transaction ST01 lets you do various levels of system trace, such as authorization checks, SQL traces, and table/buffer traces. It is a general Basis tool but can be leveraged for BW.
c) Database performance analysis: transaction ST04 gives you all that you need to know about what's happening at the database level.
a) Transfer Rules:
When we maintain the transfer structure and the communication structure, we use the transfer rules to determine how the transfer structure fields are assigned to the communication structure InfoObjects. We can arrange a 1:1 assignment, or fill InfoObjects using routines, formulas, or constants.
Update rules:
Update rules specify how the data (key figures, time characteristics, characteristics)
is updated to data targets from the communication structure of an InfoSource. You
are therefore connecting an InfoSource with a data target.
b) Transfer rules are linked to InfoSource, update rules are linked to InfoProvider
(InfoCube, ODS).
i. Transfer rules are source system dependent, whereas update rules are data target dependent.
ii. The number of transfer rules equals the number of source systems for a data target.
iii. Transfer rules are mainly for data cleansing and data formatting, whereas in the update rules you write the business rules for your data target.
c) Using transfer rules, you can assign DataSource fields to the corresponding InfoObjects of the InfoSource. Transfer rules give you the possibility to cleanse data before it is loaded into BW.
Update rules describe how the data is updated into the InfoProvider from the
communication structure of an InfoSource.
If you have several InfoCubes or ODS objects connected to one InfoSource you can
for example adjust data according to them using update rules.
Only in update rules: a. You can use return tables in update rules, which split an incoming data-package record into multiple records. This is not possible in transfer rules.
b. Currency conversion is not possible in transfer rules.
c. If you have a key figure that is calculated from the base key figures, you do the calculation only in the update rules.
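The return-table idea, one incoming record expanding into several target records, can be sketched like this (plain Python, not an ABAP update routine; the material/region fields and split weights are invented for illustration):

```python
# Incoming data-package record whose amount must be split across two regions.
record = {"material": "M1", "amount": 900, "weights": {"NORTH": 1, "SOUTH": 2}}

def update_routine(rec):
    # Return-table style: one input record -> a list of output records.
    total = sum(rec["weights"].values())
    return [
        {"material": rec["material"], "region": region,
         "amount": rec["amount"] * w // total}
        for region, w in rec["weights"].items()
    ]

out = update_routine(record)   # NORTH gets 300, SOUTH gets 600
```

A transfer rule maps one record to one record; only an update rule with a return table can multiply records like this.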
That's it; your coordinator/Basis person will move this request to Quality or Production.
To unlock a transport, go to SE03 --> Request/Task --> Unlock Objects. Enter your request, select unlock, and execute. This will unlock the request.
ii. InfoPackage groups:
Used to group all relevant InfoPackages (automation of a group of InfoPackages, but only for data loads). It is possible to sequence the loads in order.
Process chains:
Used to automate all processes, including data loads and all administrative tasks like index creation/deletion, cube compression, etc. Highly controlled data loading.
21. What are the critical issues you faced and how did you solve it?
a) Conversion Routines are used to convert data types from internal format to
external/display format or vice versa.
example:
CONVERSION_EXIT_ALPHA_INPUT
CONVERSION_EXIT_ALPHA_OUTPUT
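The ALPHA conversion pair can be approximated in a few lines (plain Python, a hedged sketch of the behavior, not the actual ABAP function modules): on input, purely numeric values are padded with leading zeros to the field length; on output, the zeros are stripped again for display.

```python
def alpha_input(value: str, length: int = 10) -> str:
    # Internal format: numeric strings are right-aligned and padded
    # with leading zeros to the field length; others are kept as-is.
    v = value.strip()
    return v.zfill(length) if v.isdigit() else v

def alpha_output(value: str) -> str:
    # Display format: strip the leading zeros again.
    stripped = value.lstrip("0")
    return stripped if stripped else "0"
```

For example, alpha_input("4711") stores "0000004711" internally, and alpha_output turns it back into "4711" for display.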
Setup tables are used to store your historical data before it is updated to the target system. Once you fill the setup tables with data, you need not go to the application tables again and again, which in turn improves system performance.
25. The R/3 to ODS delta update is good, but the ODS to cube delta is broken. How do you fix it?
i. Check the monitor (RSMO) for the error explanation; based on the explanation, we can check the reason.
ii. Check the timings of the delta load from R/3 to ODS to cube, in case they conflict after the ODS load.
vi. A dump (for a lot of reasons: full tablespace, timeout, SQL errors, an IDoc not received correctly...).
vii. There is an errored load before the last one, and so on.
You can check short dumps in T-code ST22. You can give the job's technical name and your user ID; it shows the status of jobs in the system, and you can even analyze the short dump there. You can use ST22 in both R/3 and BW.
1. What query tuning do you do for reporting?
a) Install BW Statistics and use aggregates for reporting.
b) Avoid using too many characteristics in rows and columns; instead, place them in free characteristics and navigate/drill down later.
c) OLAP cache (change cache settings in T-code RSCUSTV14): a technique that improves query performance by caching or storing data centrally, thereby making it accessible to various application servers. When the query is run for the first time, the results are saved to the cache, so the next time a similar query is run it does not have to read from the data target but from the cache.
e) Use a small amount of data as the starting point and then drill down.
f) Instead of running the same query each time, save the query results in a workbook to give the same results to different users; each time you run the query it refreshes the data, and the same data should not be fetched from the data targets repeatedly.
g) Complex and large reports should not run online; rather, they should be scheduled to run during off-peak hours to avoid excessive contention for limited system resources. Use the Reporting Agent to run these in batch mode during off-peak hours.
h) Queries against remote cubes should be avoided, as the data comes from different systems.
k) Use compression on cubes, since the E tables are optimized for queries.
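The OLAP cache idea in point c) is plain memoization of query results; here is a hedged sketch (Python, illustrative names, not the actual BW cache implementation):

```python
cache = {}
reads = []                       # counts how often the data target is hit

def read_from_data_target(selection):
    reads.append(selection)      # stand-in for an expensive cube read
    return {"rows": 42}

def run_query(name, selection):
    # First run stores the result centrally; identical reruns hit the cache.
    key = (name, tuple(sorted(selection.items())))
    if key not in cache:
        cache[key] = read_from_data_target(selection)
    return cache[key]

first = run_query("Q1", {"year": 2008})    # reads the data target
second = run_query("Q1", {"year": 2008})   # served from the cache
```

The second run returns the same result without touching the data target, which is exactly the saving the OLAP cache provides across application servers.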
The BEX Download Scheduler is an assistant that takes you through an automatic,
step-by-step process for downloading pre-calculated Web templates as HTML pages
from the BW server onto your PC.
A CKF has a technical name and a description, whereas a formula has only a description.
A CKF is available across all queries on the same InfoProvider, whereas a formula is available only in that query.
While creating a CKF, certain functions are not available in the formula builder, whereas while creating a formula, all functions are available.
A filter restricts the whole query result, whereas an RKF restricts only the selected key figure.
For example: let's assume we have Company Code in the filter, restricted to '0040'; the query output will then be for '0040' only. If you restrict a key figure with '0040' in an RKF, then only that key figure's data is restricted to '0040'.
Restricted key figures are (basic) key figures of the InfoProvider that are restricted (filtered) by one or more characteristic selections, unlike a filter, whose restrictions are valid for the entire query.
For a restricted key figure, only the key figure in question is restricted to its allocated
characteristic value or characteristic value interval. Scenarios such as comparing a
particular key figure for various time segments, or
plan/actual comparison for a key figure if the plan data is stored using a particular
characteristic, can be realized using restricted key figures.
We get a default structure for key figures; that is, most people use structures for key figures, and SAP has designed it that way.
Within a query definition you can use either no structures or a maximum of two
structures. Of these, only one can be a key figure structure.
Filters act on Characteristics; Conditions act on Key Figures. You do not use KF in the
filter area. Only char values can be restricted in the filter area, whereas Conditions
are created to key figures.
7. Reporting Agent
Definition: The Reporting Agent is a tool used to schedule reporting functions in the
background.
The following functions are available:
Evaluating exceptions
Printing queries
Pre-calculating Web templates
Pre-calculating characteristic variables of type pre-calculated value sets.
Pre-calculation of queries for Crystal reports
Managing bookmarks
Use
You make settings for the specified reporting functions.
You assign the individual settings to scheduling packages for background
processing.
You schedule scheduling packages as a job or within a process chain.
9. What are the restrictions on ODS reporting? Active, retired, and terminated employees can be separated using different ODS objects for detail reports.
An ODS is a two-dimensional format, and it is not good for analyzing the data in a multidimensional way. If you want flat reporting, go for ODS reporting.
Cube is multidimensional format and you can analyze data in different dimensions,
so if your requirement is multidimensional report go for Cube.
Example: a list of purchase orders for a vendor is a two-dimensional report, whereas
last quarter's sales by sales organization, sales area and customer, compared with
earlier quarters, is a multidimensional report.
Two-dimensional reports are similar to reporting on a table. The ODS active table is a
flat table like an R/3 table. Reporting is done on the active table of the ODS; the
other tables are for handling the deltas.
Field 0RECORDMODE is needed for the delta load and is added by the system if a
DataSource is delta-capable. In the ODS object the field is generated during the
creation process.
Generic Extractor: We create generic extractors from table views, queries and
function modules / InfoSet Queries.
a) The extraction structure is just a technical definition; it does not hold any physical
data on the database. The reason you have it in addition to the table/view is
that you can hide (deselect) fields here, so that the complete table does not need to be
transferred to BW.
b) In short - The extract structure define the fields that will be extracted and the
table contains the records in that structure.
c) The table holds data, but the extract structure does not.
The extract structure is formed based on the table, and here we have the option to
select the fields required for extraction. So the extract structure tells which fields
are used for extraction.
b) Queued delta is used if the number of document changes is high (more than
10,000). Here data is written into an extraction queue and from there it is moved to
the delta queue. Up to 10,000 document changes are cumulated into one LUW.
c) The unserialized V3 update method is used only when it is not important that the
data is transferred to BW in exactly the same sequence in which it was generated in R/3.
d) Serialized V3 update: this is the conventional update method, in which the
document data is collected in the sequence of attachment and transferred to BW by a
batch job. Even so, the sequence of the transfer does not always match the sequence in
which the data was created.
Basic difference is in the sequence of data transfer. In Queued delta it is same as the
one in which documents are created whereas in serialized v3 update it is not always
the same.
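The cumulation idea behind queued delta can be sketched as follows. This is a Python illustration over a hypothetical queue; the real LUW handling happens inside the LO cockpit and the qRFC layer:

```python
# Sketch: cumulating queued-delta document changes into LUWs
# (illustration only; not how BW implements it internally).

MAX_PER_LUW = 10_000  # up to 10,000 document changes per LUW

def build_luws(doc_changes, max_per_luw=MAX_PER_LUW):
    """Group an extraction queue into LUWs of at most max_per_luw changes."""
    luws = []
    for i in range(0, len(doc_changes), max_per_luw):
        luws.append(doc_changes[i:i + max_per_luw])
    return luws

queue = [f"DOC{i:07d}" for i in range(25_000)]  # hypothetical change queue
luws = build_luws(queue)
print([len(luw) for luw in luws])  # [10000, 10000, 5000]
```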
Account-based COPA is tied to G/L account postings; costing-based COPA is derived from
value fields. Account-based is more exact for tying out to the G/L. Costing-based is not
easy to balance to the G/L, is more analytical, and differences are to be expected.
Costing-based also offers some added revaluation costing features.
Implementing costing-based COPA is much more work but also gives many more reporting
possibilities, especially for margin analysis. Even without paying attention to it
while implementing costing-based COPA, you get account-based COPA with it, with the
advantage of reconciled data.
Account-based COPA gives an abstract view, whereas costing-based COPA gives the
detailed level; 90% of the time we go for costing-based.
Account-based COPA is based on account numbers, whereas costing-based COPA is based
on value fields.
COPA Tables: Account base COPA tables are COEJ, COEP, COSS and COSP
Q) PSA Cleansing.
Time stamp
Calendar day
A: In no of records.
Q) SIGNIFICANCE of ODS?
It holds granular data (detailed level).
Q) ROUTINES?
Exist in the InfoObject, transfer routines, update
routines
and start routine
Q) AUTHORIZATIONS.
Profile generator
Q) WEB REPORTING.
What are you expecting??
Q) Transfer Routine?
Q) Update Routine?
Q) Start routines?
When you delete the data, the LUWs kept in the qRFC queue for the corresponding
target system are confirmed. Physical deletion only takes place in the qRFC
outbound queue if there are no more references to the LUWs.
Procedures
Define Implementation Standards and Procedures
A)
Q) What is ODS?
Q) Why partitioning?
A) Yes.
Q) Transitive Attributes?
Q) Navigational attribute?
Q) Display attributes?
Q) Compounding attribute?
A)
Q) Currency attributes?
A)
* Assign InfoSources.
A)
* Maintaining DataSources.
* Activating Updates.
* Controlling Updates.
Q) RSDBC - DB Connect
Q) SMOD - Definition
Q) Statistical Update?
A)
Q) Types of Updates?
A)
Q) Transporting.
A)
Q) Currency conversions?
A)
A) It Restricts Data.
A)
Q) What is NODIM?
A)
Q) What is InfoSet?
Q) LO's?
A)
1) What is spro?
2) How to use in bw project?
3) What is difference between idoc and psa in transfer methods?
1. SPRO is the transaction code for the Implementation Guide, where you can do
configuration settings.
* Type SPRO in the transaction box and you will get the Customizing screen:
Execute Project.
* Click the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to do the configuration settings:
SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business Information
Warehouse.
3. PSA (Persistent Staging Area): a holding area for raw data. It contains detailed
requests in the format of the transfer structure. It is defined per DataSource and
source system, and is source-system dependent.
IDocs (Intermediate Documents): data structures used as API working storage for
applications that need to move data into or out of SAP systems.
-----Original Message-----
Subject: Use of manual security profiles with BW? (Business Information Warehouse)
Our company is currently on version 3.1H and will be moving to 4.6B in late
summer 2000. Currently all of our R/3 security profiles were created
manually. We are also in the stage of developing and going live with the
add-on Business Warehouse (BW) system. For consistency, we wish
to use manual profiles within the BW system and later convert all of our
manual security profiles (R/3 and BW) to generated ones.
Is there anyone else that can shed any light on this situation? (Success
or problems with using manual security profiles with BW?)
Thank you,
-----Reply Message-----
Subject: Use of manual security profiles with BW? (Business Information Warehouse)
Hi ,
You are going to have fun doing this upgrade. The 4.6B system is a
completely different beast than the 3.1H system. You will probably find a
lot of areas where you have to extend your manually created profiles to
cover new authorisation objects (but then you can have this at any level).
In 4.6B you really have to use the Profile Generator, but at least there is
a utility that lets you pick up your manually created profile and have it
converted to an activity group for you. This will give you a running start
in this area, but you will still have a lot of work to do.
The fact that you did not use PG at 3.1H will not matter, as it changed at
4.5 too and the old activity groups need the same type of conversion (we
are going through that bit right now).
Hope this helps
-----End of Message-----
Posted by prasheel Reddy at 9:44 AM 0 comments
Labels: Introduction
-----Original Message-----
Subject: Business Information Warehouse
Ever heard about apples and oranges? SAP R/3 is an OLTP system, whereas BIW
is an OLAP system. LIS reports cannot provide the functionality provided
by BIW.
-----Reply Message-----
Subject: Business Information Warehouse
Hello,
The following information is for you to get more clarity on the subject:
SAP R/3 LIS (Logistics Information System) consists of infostructures (which
are representations of reporting requirements). Whenever an event (goods
receipt, invoice receipt, etc.) takes place in an SAP R/3 module, if it is relevant
to the infostructure, a corresponding entry is made in the infostructure.
Thus infostructures form the database part of the data warehouse. For
reporting on the data (with OLAP features such as drill-down, ABC analysis, graphics,
etc.), you can use SAP R/3 standard analysis (or flexible analysis),
Business Warehouse (which is Excel based) or Business Objects (a
third-party product that can interface with SAP R/3 infostructures using BAPI
calls).
In short, the infostructures (which are part of SAP R/3 LIS) form the data
basis for reporting with BW.
Regards
-----End of Message-----
We want that existing legacy system to go away and need to find a home for the data
and for the functionality to access and report on it. What options does SAP afford for
data warehousing? How does it affect the response of the SAP database server?
We are thinking of moving the data onto a scalable NT server with a large amount of disk
(10 GB+) and using PC tools to access it. In this environment, our production SAP
machine would perform weekly data transfers to this historical sales reporting system.
Has anybody implemented a similar solution or have any ideas on a good attack method to
solve this issue?
You may want to look at SAP's Business Information Warehouse. This is their answer to data
warehousing. I saw a presentation on this last October at the SAP Technical Education
Conference and it looked pretty slick.
BIW runs on its own server to relieve the main database from query and report processing. It
accepts data from many different types of systems and has a detailed administration piece to
determine data source and age. Although the Information System may be around for some time,
it sounded like SAP is moving towards the Business Information Warehouse as a reporting
solution.
5/12/08
Tickets and Authorization in SAP Business Warehouse
Tickets are the tracking tool by which the user tracks the work
we do. A ticket can be a change request, a data load issue, or whatever.
Tickets are typed as critical or moderate; critical can mean "needs to be solved
in a day or half a day", depending on the client. After solving the issue, the ticket
is closed by informing the client that it is resolved.
Tickets are raised during a support project and may concern any
issues or problems. If a support person faces an issue, he will
request the operator to raise a ticket. The operator raises
the ticket and assigns it to the respective person. "Critical" means the
most complicated issues; how you measure this depends on the contract.
The concept of a ticket varies from contract to contract
between companies. Generally, tickets raised by the client are
prioritized: high priority, low priority and
so on. A high-priority ticket has to be resolved ASAP; a
low-priority ticket is considered only after
attending to the high-priority tickets.
16. Loading through the PSA has become mandatory. You can't skip it, and there is also no IDoc
transfer method in BI 7.0. The DTP (Data Transfer Process) replaced the transfer and update rules.
Also, in the transformation we can now use a start routine, expert routine and end routine
during the data load.
New features in BI 7 compared to earlier versions:
i. New data flow capabilities such as Data Transfer Process (DTP), Real time data Acquisition
(RDA).
ii. Enhanced and Graphical transformation capabilities such as Drag and Relate options.
iii. One level of Transformation. This replaces the Transfer Rules and Update Rules
iv. Performance optimization includes new BI Accelerator feature.
v. User management (includes new concept for analysis authorizations) for more flexible BI end
user authorizations.
What is the difference between an InfoCube and an ODS? How do you load flat-file data
to an InfoCube and an ODS?
By: Vaishnav
An ODS is a data store where you can store data at a very granular level.
It has overwriting capability, and the data is stored in two-dimensional
tables. A cube, by contrast, is based on multidimensional modeling, which
facilitates reporting along different dimensions. Its data is stored in
aggregated form, unlike the ODS, and it has no overwriting capability.
Reporting and analysis can be done along multiple dimensions, unlike on an ODS.
ODS objects are used to consolidate data. Normally an ODS contains very detailed
data; technically there is the option to overwrite or add single
records. InfoCubes are optimized for reporting. There are options to
improve performance, such as aggregates and compression, and it is not
possible to replace single records: all records sent to an InfoCube will
be added up.
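The overwrite vs. add-up distinction can be sketched in a few lines of Python. This is an illustration only, with a hypothetical order document; it is not BW code:

```python
# Sketch: why an ODS can overwrite while an InfoCube only adds up
# (simplified; hypothetical document data keyed by document number).

ods = {}        # key: document number -> latest record (overwrite semantics)
cube_fact = []  # fact rows are only ever appended (additive semantics)

def load(doc_no, amount):
    ods[doc_no] = {"doc_no": doc_no, "amount": amount}      # overwrite
    cube_fact.append({"doc_no": doc_no, "amount": amount})  # add up

load("4711", 100)
load("4711", 150)  # the order was changed in the source system

print(ods["4711"]["amount"])               # 150 (current state)
print(sum(r["amount"] for r in cube_fact)) # 250 (100 + 150 added up)
```

This is exactly why non-delta-capable loads into a cube need care: without overwrite, a re-sent record inflates the total.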
An InfoCube has a fact table, which contains its facts (key figures)
and relations to dimension tables. This means that an InfoCube consists
of more than one table, and these tables all relate to each other. This is
also called the star schema, because the dimension tables all relate to
the fact table, which is the central point. A dimension is, for example,
the customer dimension, which contains all data that is relevant to
the customer.
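The star-schema idea can be sketched as a fact table joined to a dimension table. A minimal Python illustration with a hypothetical customer dimension (real InfoCubes add SID tables and more dimensions):

```python
# Sketch of a star schema: a central fact table holding key figures,
# joined to dimension tables via keys (hypothetical customer dimension).

dim_customer = {
    1: {"name": "ACME", "country": "DE"},
    2: {"name": "Globex", "country": "US"},
}

fact = [  # key figures plus dimension keys
    {"customer_id": 1, "revenue": 500},
    {"customer_id": 2, "revenue": 300},
    {"customer_id": 1, "revenue": 200},
]

# Reporting = join fact rows to the dimension and aggregate.
by_country = {}
for row in fact:
    country = dim_customer[row["customer_id"]]["country"]
    by_country[country] = by_country.get(country, 0) + row["revenue"]

print(by_country)  # {'DE': 700, 'US': 300}
```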
What is the difference between PSA and ALE IDoc? And how is data transferred
using each of them?
You determine the PSA or IDoc transfer method in the transfer rule
maintenance screen. For both transfer methods, the load process is triggered
by a request IDoc to the source system. Info IDocs are used in both transfer
methods, and Info IDocs are transferred exclusively using ALE.
InfoObject/Data Target Only - This option means that the PSA is not
used as a temporary store. You choose this update type if you do not
want to check the source system data for consistency and accuracy, or
you have already checked this yourself and are sure that you no longer
require this data since you are not going to change the structure of
the data target again.
The parallelism relates to the data packages; that is, the system
writes the data packages into the PSA table and into the data targets
in parallel. Caution: the maximum number of processes set in the source
system in Customizing for the extractors does not restrict the number
of processes in BW. Therefore, BW can require many dialog processes for
the load process. Ensure that there are enough dialog processes
available in the BW system. If there are not enough processes on the
system side, errors occur. For that reason, this method is the least
recommended.
Only PSA - The data is not posted further from the PSA table
immediately. It is useful to transfer the data only into the PSA table
if you want to check its accuracy and consistency and, if necessary,
modify the data. You then have the following options for updating data
from the PSA table:
ODS: This is a data target. Reporting can be done on an ODS, and ODS data
is overwritable. For DataSources that are not delta-enabled, an ODS
can be used to upload delta records to an InfoCube.
An ODS contains detail-level data. In the PSA, the requested data is saved
unchanged from the source system: request data is stored in the
transfer structure format in transparent, relational database tables in
the Business Information Warehouse. The data format remains unchanged,
meaning that no summarization or transformations take place.
An ODS has three tables (active, new data and change log); the PSA
does not.
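The interplay of the three ODS tables during activation can be sketched as follows. This is a minimal Python illustration with hypothetical values; the real mechanism writes before/after images flagged via 0RECORDMODE:

```python
# Sketch: the three ODS tables -- a newly loaded request is activated
# into the active table, writing before/after images to the change log
# (simplified; illustration only, not the actual BW implementation).

active = {"4711": 100}    # active (reportable) table: doc -> amount
change_log = []           # delta images for downstream data targets
new_data = {"4711": 150}  # freshly loaded request (new data table)

for key, value in new_data.items():
    if key in active:
        change_log.append((key, -active[key]))  # before image (reversal)
    change_log.append((key, value))             # after image
    active[key] = value                         # overwrite in active table

print(active)      # {'4711': 150}
print(change_log)  # [('4711', -100), ('4711', 150)]
```

Summing the change-log images (-100 + 150 = +50) yields exactly the delta an additive InfoCube needs, which is how an ODS makes a non-delta DataSource delta-capable downstream.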
For example, there are two groups of people: Group A and Group B.
Group A - Manager
Group B - Developer
The authorizations or access rights for the two groups are
different.
If you are talking about the whole extraction process, there might be issues with work
process scheduling and with IDoc transfer from the source system to the target system. These
can be re-initiated by canceling that specific data load (usually by changing the request
colour from yellow to red in RSMO) and restarting the extraction.
0RECORDMODE is an SAP-delivered object and is added to the ODS object on activation. Using
it, the ODS is updated during delta loads. It has three possible values (X, D, R): D and R
are for deleting and removing records, and X is for skipping records during the delta load.
Reconciliation is the process of comparing the data after it is transferred to the BW system
with the source system. To do reconciliation, you can check the data in SE16 if it comes
from a single table. If the DataSource is a standard DataSource drawing from many tables,
then ask the R/3 consultant to run a report on the same selections, get the data into an
Excel sheet, and reconcile it with the data in BW. If you are familiar with the R/3 reports,
you need not depend on the R/3 consultant (it is better to know which reports to run to
check the data).
4. What are the daily tasks we do in production support? How many times do we extract the
data, and at what times?
It depends. Data load timings are in the range of 30 minutes to 8 hours. The time depends
on the number of records and the kind of transfer rules you have provided. If the transfer
rules involve roundabout logic and the update rules calculate customized key figures, long
run times are to be expected.
Usually you work in RSMO, see which records are failing, and update from the PSA.
As for frequent failures and errors: there is no single fixed reason for a load to fail;
from an interview perspective I would answer it that way.
Setup tables are a kind of interface between the extractor and the application tables. The
LO extractor takes data from the setup tables during initialization and full upload, so
hitting the application tables for selection is avoided. As these tables are required only
for full and init loads, you can delete their data after loading in order to avoid duplicate
data. Setup tables are filled with data from the application tables; they sit on top of the
actual application tables (i.e. the OLTP tables storing transaction records) and are filled
during the setup run. Normally it is good practice to delete the existing setup tables
before executing the setup runs, so as to avoid duplicate records for the same selections.
We already have a cube; what is the need for an ODS? Why is an ODS necessary when we have
a cube?
1) Remember that a cube has aggregated data and an ODS has granular data.
2) In the update rules of an InfoCube you have no overwrite option, whereas for an ODS
the default is overwrite.
What is the importance of transaction RSKC? How is it useful in resolving issues with
special characters?
When do we go for Business Content extraction, and when for LO/COPA extraction?
What are some of the InfoCube names in SD and MM that we use for extraction and loading
into BW?
1A. RSKC.
Using this T-code, you can allow the BW system to accept special characters in the data
coming from source systems. The list of characters can be obtained by analyzing the source
system's data, or can be confirmed with the client during the design-specs stage.
2A. Exits.
These exits are customized for handling data transfer in various scenarios
(e.g. replacement path in reports, a way to pass a variable to a BW report).
Some can be developed by a BW/ABAP developer and inserted wherever required.
Some of these programs are already available as part of SAP Business Content; these are
called SAP exits. Depending on the requirement, we may need to extend some exits and
customize them.
3A.
Production issues are different for each BW project; the most common issues can be found
in some of the previous mails (data load issues).
LIS extraction is the old-school approach and is not preferred for big BW systems; here you
can expect issues with performance and data duplication in the setup tables.
LO extraction came with most of the advantages: using it, you can extend existing extract
structures and use customized DataSources.
If you can fetch all required data elements using the SAP-provided extract structures, you
don't need to write custom extractions. You get a clear idea of this after analyzing the
source system's data fields and the required fields in the target system's data target
structure.
5A.
6A.
You can do this by choosing the "Manage Data Target" option and clicking the buttons
available on the "Performance" tab.
7A.
RSMO is used to monitor the data flow from the source system to the target system. You can
view data by request, source system, time, request ID, etc. Just play with it.
What is KPI?
In detail:
Stands for Key Performance Indicators. A KPI is used to measure how well an organization or
individual is accomplishing its goals and objectives. Organizations and businesses typically
outline a number of KPIs to evaluate progress made in areas where performance is harder to
measure.
For example, job performance, consumer satisfaction and public reputation can be measured
using a set of defined KPIs. Additionally, KPIs can be used to set objective organizational
and individual goals such as sales, earnings, profits, market share and similar targets.
The KPIs selected must reflect the organization's goals, must be key to its success, and
must be measurable. Key performance indicators are usually long-term considerations for an
organization.
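As a toy illustration, a measurable KPI can be expressed as attainment against a target. The figures and the attainment formula here are hypothetical; any real KPI definition is organization-specific:

```python
# Sketch: a KPI as a measurable indicator tracked against a goal
# (hypothetical figures; illustration only).

def kpi_attainment(actual, target):
    """Return KPI attainment as a percentage of the target."""
    return round(100.0 * actual / target, 1)

# e.g. quarterly sales of 900,000 against a 1,200,000 goal
print(kpi_attainment(900_000, 1_200_000))  # 75.0
```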
In the BEx Analyzer, from the SAP Business Explorer toolbar, choose the
Open Query icon (the icon that looks like a folder). In the SAP BEx Open
dialog box, choose Queries, select the desired InfoCube and choose New. On
the Define the Query screen, in the left frame, expand the Structure
node and drag and drop the desired structure into either the Rows or
Columns frame. Select the global structure, right-click and choose
Remove Reference. A local structure is created.
Remember that you cannot revert the changes made to a global structure in
this way; you will have to delete the local structure and then drag and
drop the global structure into the query definition again.
When you try to save a global structure, a dialog box prompts you to
confirm changes to all queries; that is how you identify a global
structure.
2. I have an RKF and a CKF in a query; if the report gives an error, which one
should be checked first, the RKF or the CKF, and why? (Asked in an
interview.)
They are not interdependent on each other; you can have both at the same
time.
An RKF is defined globally and can be used in any of the queries on that
InfoProvider. In the columns, assume there are three company codes. In a new
selection I drag in
ZRKF
restricted to Company Code 1.
This means I have created the RKF once and am using it in different ways in
different columns (restricting with other characteristics too).
Cells in BEx: Use
You need two structures to enable the cell editor in BEx. In every query
you have one structure for key figures; you then have to create another
structure with selections or formulas inside.
With two structures, their cross results in a fixed reporting area of
n rows * m columns, and the cell at the intersection of any row and any
column can be given its own formula in the cell editor.
This is useful when you want a particular cell to behave differently from
the general behaviour described in your query definition.
For example, imagine the following, where % is the formula kfB / kfA * 100:
kfA kfB %
chA 6 4 66%
chB 10 2 20%
chC 8 4 50%
Now suppose you want the % for row chC to be the sum of the % for chA and the
% for chB. In the cell editor you can write a formula specifically for that
cell as the sum of the two cells above it, chC/% = chA/% + chB/%, giving:
kfA kfB %
chA 6 4 66%
chB 10 2 20%
chC 8 4 86%
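The cell-editor idea can be sketched as a computed formula column with one cell overridden by its own formula. A Python illustration using the figures from the table (percentages truncated, as shown there); not BEx code:

```python
# Sketch of the cell editor: a formula column computed per row, with one
# specific cell (chC/%) overridden by its own formula.

rows = {"chA": (6, 4), "chB": (10, 2), "chC": (8, 4)}  # (kfA, kfB)

# General formula % = kfB / kfA * 100, truncated as in the table:
# chA -> 66, chB -> 20, chC -> 50
pct = {ch: int(kfB / kfA * 100) for ch, (kfA, kfB) in rows.items()}

# Cell-specific formula from the cell editor: chC/% = chA/% + chB/%
pct["chC"] = pct["chA"] + pct["chB"]  # 66 + 20

print(pct)  # {'chA': 66, 'chB': 20, 'chC': 86}
```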
Ans #1: Process chains exist in the Administrator Workbench. Using them we can
automate ETL processes; they allow BW people to schedule all
activities and monitor them (transaction code: RSPC).
16. In the "process" you have the Option "Cancel with Core"
This compression can be done through Process Chain and also manually.
Ans #6: We can enhance Business Content by adding fields to it.
Since BC is delivered by SAP, it may not contain all the
InfoObjects, InfoCubes, etc. that you want to use in your
company's data model. E.g., you have a customer InfoCube in BC, but
your company uses an attribute for, say, apartment number; then instead of
constructing a whole new InfoCube you can add that field to the
existing BC InfoCube and get going.
Ans #9: A scheduled data load means you have scheduled the loading of data
for a particular date and time (you can do this on the Scheduler tab of the
InfoPackage), and monitored means you are monitoring that particular
data load, or other loads, using the monitor (transaction RSMO).
*****
1. How much time does it take to extract 1 million records from an InfoCube?
2. How much time does it take to load 1 million records into an InfoCube (as opposed to
extracting them, as in the previous question)?
3. What are the four ASAP Methodologies?
4. How do you measure the size of infocube?
5. Difference between infocube and ODS?
6. Difference between display attributes and navigational attributes?
1. Ans: This depends. If you have complex coding in the update rules it will take longer;
otherwise it will take less than 30 minutes.
3. Ans:
Project plan
Requirements gathering
Gap Analysis
Project Realization
4. Ans:
In no of records
5. Ans:
An InfoCube is structured as an (extended) star schema, in which a fact table is surrounded
by dimension tables that connect to SIDs. Data-wise, you have aggregated data in cubes.
An ODS is a flat structure (flat table) with no star schema concept, and it holds granular
(detail-level) data.
6. Ans:
A display attribute is used only for display purposes in the report, whereas a navigational
attribute is used for drilling down in the report. We don't need to maintain a navigational
attribute in the cube as a characteristic in order to drill down (that is the advantage).
*****
Q1. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
Ans: But how is that possible? If you loaded it manually twice, you can delete the data
by request. [Use the delta upload method.]
Q7. AS WE USE Sbwnn,SBiw1,sbiw2 for delta update in LIS THEN WHAT IS THE PROCEDURE
IN LO-COCKPIT?
There is no LIS in the LO cockpit. We have DataSources that can be maintained (fields
appended). Refer to the white paper on LO cockpit extraction.
Q13. IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
In R/3 or in BW? Two in R/3 and two in BW.
Q14. ROUTINES?
They exist in the InfoObject: transfer routines, update routines and the start routine.
Q25. PROCESS CHAINS: IF YOU ARE USING THEM, HOW WILL YOU SCHEDULE DATA LOADS
DAILY?
There should be a tool to run the job daily (SM37 jobs).
Q26. AUTHORIZATIONS.
Profile generator[PFCG]
SAP BW FAQ
BW Query Performance
Question:
1. What kind of tools are available to monitor the overall Query
Performance?
o BW Statistics
o BW Workload Analysis in ST03N (Use Export Mode!)
o Content of Table RSDDSTAT
----------------------------------------