
Tips for Optimizing the Performance of Web Intelligence Documents


created by Jonathan Brown on Oct 1, 2014 10:49 PM, last modified by Jonathan Brown on May 23, 2015 1:41 AM
Version 16

Contents


Document History
Introduction
Chapter 1 - Client Side Performance
TIP 1.1 - Use HTML Interface for Faster viewing/refreshing of Reports
TIP 1.2 - Upgrade to BI 4.1 SP03+ for single JAR file Applet Interface
TIP 1.3 - Ensure Online Certificate Revocation Checks aren't slowing down your Applet Interface
TIP 1.4 - Make sure JRE Client Side Caching is working
TIP 1.5 - Ensure you are not running into these known JRE Security Change issues
TIP 1.6 - Choose the right client - Webi Rich Client vs HTML vs Applet Interfaces
Chapter 2 - Process Best Practices
TIP 2.1 - Schedule reports to save time and resources
TIP 2.2 - Use the Retry options when Scheduling to Automate Retries
TIP 2.3 - Use Instance Limits to help reduce the # of Instances in your environment
TIP 2.4 - Platform Search Tweaking for Performance
Chapter 3 - Report Design Best Practices
TIP 3.1 - Steer Clear of Monster Webi Documents
TIP 3.2 - Utilize Report Linking Whenever Possible
TIP 3.3 - Avoid Autofit When not Required
TIP 3.4 - Utilize Query Filters instead of Report Filters whenever possible
TIP 3.5 - Avoid Charts with Many Data Points
TIP 3.6 - Limit use of Scope of Analysis
TIP 3.7 - Limit the # of Data Providers Used
TIP 3.8 - Don't accidentally Disable the Report Caching
TIP 3.9 - Test Using Query Drill for Drill Down Reports
TIP 3.10 - Mandatory Prompts vs Optional Prompts
Chapter 4 - Semantic Layer Best Practices
TIP 4.1 - Only Merge Dimensions that are needed
TIP 4.2 - Build Universes & Queries for the Business Needs of the Document
TIP 4.3 - Array Fetch Size Optimizations
TIP 4.4 - Ensure Query Stripping is Enabled
TIP 4.5 - Follow these Best Practices for Performance Optimizing SAP BW (BICS) Reports
TIP 4.6 - Using Index-Awareness for Better Performance
TIP 4.7 - Using Aggregate Awareness for Performance
TIP 4.8 - Utilizing JOIN_BY_SQL to avoid multiple queries
TIP 4.9 - Security Considerations for the Semantic Layer
Chapter 5 - Formula & Calculation Engine Tips
TIP 5.1 - Use Nested Sections with Conditions with caution
TIP 5.2 - Use IN instead of ForEach and ForAll when possible
TIP 5.3 - Use IF...THEN...ELSE instead of Where operator when possible
TIP 5.4 - Factorize (Reuse) Variables
Chapter 6 - Sizing for Performance

TIP 6.1 - Use these Resources to Help you Size your Environment
TIP 6.3 - Ensure the Adaptive Processing Server is Split and Sized Correctly
TIP 6.4 - Keep Location and Network in mind when Designing your Environment
TIP 6.5 - Use Local, Fast Storage for Cache and Temp Directories
TIP 6.6 - Ensure your CPU Speed is Adequate
TIP 6.7 - Use the BI Platform Support Tool for Sizing Reviews
Chapter 7 - Architectural Differences between XI 3.1 & BI 4.x
TIP 7.1 - 32-bit vs 64-bit - What does it mean?
TIP 7.2 - Hosted Services are more heavily used in BI 4.x
TIP 7.3 - Larger # of Processes involved in Process Workflows
Chapter 8 - Performance Based Improvements / Enhancements

Tips for Optimizing the Performance of Web Intelligence Documents
DRAFT DISCLAIMER - This document is a work in progress and will be released 1 chapter at a time. Please
follow, bookmark and subscribe to updates to ensure you are notified of the latest changes. It is also a living
document and we would love to hear your feedback and tips & tricks! Comment or private message
anything you would like to see added, changed or removed.
.

Document History
.
Date          Who             What
10-01-2014    Jonathan Brown  Created initial document structure and completed Chapter 1 - Client Side Performance
10-02-2014    Jonathan Brown  Made some minor updates to the formatting and some links
10-08-2014    Jonathan Brown  Started on Chapter 2 - Process Best Practices
10-09-2014    Jonathan Brown  Finished Chapter 2 - Fixed some formatting issues
10-15-2014    Jonathan Brown  Updated Introduction to discuss overlap with SCN DOC-58532 (http://scn.sap.com/docs/DOC-58532)
10-17-2014    Jonathan Brown  Started Chapter 3. Tips 3.1 - 3.4 added.
10-22-2014    Jonathan Brown  Added Tips 3.5 and 3.6
10-24-2014    Jonathan Brown  Added Tips 3.7 - 3.9 to complete Chapter 3.
10-31-2014    Jonathan Brown  Started Chapter 4.
11-07-2014    Jonathan Brown  Completed Chapter 4 and published the latest version of the doc.
11-14-2014    Jonathan Brown  Completed Chapter 5.
12-01-2014    Jonathan Brown  Modified list of functions that can turn off caching as per Matthew Shaw's suggestion in comments
12-09-2014    Jonathan Brown  Started Chapter 6.
12-15-2014    Jonathan Brown  Finished Chapter 6.
12-18-2014    Jonathan Brown  Added link to Ted Ueda's blog about sizing
02-19-2015    Jonathan Brown  Added Tip 4.9 on SL security impacts on performance
05-21-2015    Jonathan Brown  Added Tip 3.10 - Mandatory vs Optional Prompts -- Started Chapter 7

Introduction
.
This document will become a central repository for all things related to Web Intelligence and performance. It is a
living document and will grow over time as new tips, tricks and best practices are discovered. We
encourage suggestions and corrections on the content within and hope the community will collaborate on
this content to ensure accuracy.
.
Please feel free to bookmark this document and receive email notifications on updates. I would also love
to hear your feedback on the contents of this doc, so feel free to comment below, private message me, or
just like and rate the document to give me feedback.
.
I am the writer of this document, but the information contained within is a collection of tips from many sources.
The bulk of the material was gathered from within SAP Product Support and from the SAP Products &
Innovation / Development teams. Some of the content also came from shared knowledge on the SAP
Community Network and other similar websites.
.
The purpose of this document is to bring awareness to known issues, solutions, and best practices in
hopes of increasing the throughput of existing hardware, improving the end user/consumer experience, and
saving time and money on report design and consumption.
.
The origin of this idea was an Americas' SAP Users Group (ASUG) session presented in September 2014. That
presentation spawned this document as well as another high-level best practices document found here: Best
Practices for Web Intelligence Report Design
.
While the purpose of this document is to focus on the performance of Web Intelligence documents, the Best
Practices guide above covers high-level best practices across Web Intelligence in general. There is a lot of
overlap between this document and the Best Practices document referenced above, as they both spawn from
the same source presentation at the ASUG user conference.
.
The 2014 ASUG Session Presentations for Web Intelligence can be found here: 2014 ASUG SAP Analytics &
BusinessObjects Conference - Webi
.
.

Chapter 1 - Client Side Performance


.
Client side performance tips and tricks cover anything that is specific to the client machine. This includes
the HTML, Applet and Rich Client Interfaces as well as the Browser that the client uses to a certain degree.
.

TIP 1.1 - Use HTML Interface for Faster viewing/refreshing of Reports


.
The HTML Interface is a light-weight thin client viewer. It uses HTML to display and edit the Webi

Documents. Since it is a thin client application that requires little more than displaying and consuming
HTML, it is a great choice for those users that want fast document viewing and refreshing in their browser.
.
The HTML Interface has somewhat fewer features than the Applet Interface, so you will have
to weigh the benefits of performance against functionality.
.
Chapter 1.4 of the Webi User Guide covers the differences between the HTML, Applet and Rich Client
Interfaces. Review it to help you decide whether the HTML Interface will do everything you
need it to do.
.
Here is a screenshot example of what the feature comparison matrix looks like in the user guide
.
.

Below is a link to our Web Intelligence documentation page on our Support Portal. Go to the End User
Guides section to find the latest Webi User Guide documentation.
.
PORTAL - SAP BusinessObjects Web Intelligence 4.1 SAP Help Portal Page
.
Here is also a direct link to the BI 4.1 SP04 guide (the most current at the time of this writing):
.
GUIDE - BI 4.1 SP04 Web Intelligence User Guide - Direct Link
.

TIP 1.2 - Upgrade to BI 4.1 SP03+ for single JAR file Applet Interface

.
BI 4.x introduced a new architecture for the Applet Interface, aka the Java Report Panel/Java Viewer. In
previous versions, the applet was a single JAR file called ThinCadenza.jar.
.
BI 4.0 and earlier versions of BI 4.1 split this architecture out into over 60 jar files. This was originally done
for ease of maintenance and deployment, but Java security updates and restrictions that are now enforced
by default have made the performance of this architecture too slow in many cases.
.
BI 4.1 SP03 and above have reverted to a single .jar file deployment. This will often improve
performance on the client side due to a reduced number of security and validation checks that have to
happen on each .jar file.
.
The What's New Guide below talks about this change briefly. The change should be mostly invisible to end
users, except perhaps for the improved performance.
.
GUIDE - BI 4.1 What's New Guide - Section 4.5
.
This KBA also covers this issue in a limited fashion:
.

KBA - 1975294 - In Business Intelligence 4.1, when using the webi Rich internet applet, it takes a
long time to open
.

TIP 1.3 - Ensure Online Certificate Revocation Checks aren't slowing down
your Applet Interface

.
Online Certificate Revocation Checks are turned on by default in newer versions of the Java Runtime Environment
(JRE). These checks tell the client-side JRE to contact online servers to validate the certificates that the
applet jar files are signed with. On slower networks, this can add a lot of overhead.
.
Older versions of the JRE did not have this enabled by default so it wasn't an issue.
.
Since BI 4.x had 60+ jar files to load for the Applet, it could potentially take much longer to run these checks
across all of those files. On slower internet connections, this could equate to several minutes of delay!
I cover this in much more detail in the following Wiki and KBA:
.
WIKI - Tips for Fine Tuning Performance for the Webi Applet
.
KBA - 1904873 - Web Intelligence Rich Internet Applet loads slower after installing Java 7 Update
25 (JRE 7u25+) and above
.

TIP 1.4 - Make sure JRE Client Side Caching is working

.
When troubleshooting client side JRE performance issues, one of the first things you want to check is that JRE
Caching is enabled and WORKING. We have seen issues with performance when caching was either
disabled, configured incorrectly, or was just not working because of some sort of system or deployment
issue.
.
One example is a Citrix deployment. Since each user can potentially have a unique and dynamic "Users"
folder, the cache may not persist across sessions. Setting the cache to a common location that persists
across sessions may help in this type of scenario.
.
I cover a lot more on how to enable and validate the JRE cache in my Wiki below:
.
WIKI - Tips for Fine Tuning Performance for the Webi Applet
.

TIP 1.5 - Ensure you are not running into these known JRE Security Change
issues

.
A number of Java security updates and changes have caused issues with the Applet Interface. The known
issues are well documented and can be found on this Wiki:
.
WIKI - Web Intelligence and Oracle Java Runtime Engine Known Issues
.
This is divided into individual sections for the known issues on different XI 3.1 and BI 4.x versions.
.
Here are direct links for the BI 4.0 and BI 4.1 known issues pages
.
While these are not technically performance issues, they will slow down your end users and will cause delays
in viewing, modifying and refreshing documents and instances.

.
SAP only releases Patches/Support Packs every few months so when Oracle security changes come into play,
there can sometimes be a bit of a delay before we can have a patch out to resolve/address the change.
Keep this in mind when pushing the latest and greatest Oracle JRE updates out to your clients.
.

TIP 1.6 - Choose the right client - Webi Rich Client vs HTML vs Applet
Interfaces

.
Each of the Interfaces has a list of pros and cons. Choosing the right client interface for Web Intelligence is
about striking a balance between functionality, performance and convenience.
.
Chapter 1.4 of the Webi User Guide covers the differences between the interfaces. Reading and
understanding it should help you decide which interface to use. It's often not as cut and dried as
standardizing on only one interface, though. Some users may like the HTML Interface for viewing
documents but prefer the Rich Client for creating and editing them. It is really up to the user
which interface they use.
.
Use the Portal link below to find the latest Webi User Guide. Chapter 1.4 covers the interface differences.
.
PORTAL - SAP BusinessObjects Web Intelligence 4.1 SAP Help Portal Page
.
Here is also a direct link to the BI 4.1 SP04 guide (the most current at the time of this writing):
.
GUIDE - BI 4.1 SP04 Web Intelligence User Guide - Direct Link
.
As a general guideline, we recommend the following use cases for the interfaces:
.
Webi HTML Interface
- Best interface for report consumers who will mostly be running predesigned reports and doing only light modifications
- Utilizes the 64-bit backend servers but lacks some of the design capabilities of the Applet Interface

Webi Applet Interface
- Best interface for report designers and power users who will be creating, modifying and doing advanced analysis of documents and data
- Takes advantage of the 64-bit backend servers and can generally handle larger amounts of data/calculations, as it uses the backend servers to do the heavy lifting
- Since this is a web application, timeouts can occur when leaving the session idle or when carrying out long-running actions

Webi Rich Client Interface
- This standalone interface has almost all of the features and functionality of the Applet Interface, plus a few additional features of its own. It should be used by advanced designers and power users who want a stable design environment for larger documents
- Can be used with local data sources and some desktop-type data sources such as Excel and Access
- Can also be used in 3-tier mode, which takes advantage of the backend servers for data retrieval
.

Chapter 2 - Process Best Practices

.
When we talk about "Process" Best Practices, we are really talking about the workflows around how we utilize
Web Intelligence reports in our Business Processes.
.
This chapter will cover a number of Best Practices that will allow you to build good business processes and
workflows around your Web Intelligence documents.
.
Let's get started!
.

TIP 2.1 - Schedule reports to save time and resources

.
This may seem like a no-brainer but we see countless support incidents come through that could be avoided
with a simple process around when to schedule a document vs view it on-demand.
.
The Best Practices threshold for Scheduling is 5 minutes. If a report takes more than 5 minutes to refresh
and render, then that report should be scheduled.
.
Scheduling allows for a user or administrator to offload the processing of a document to a backend server so
they are not forced to sit and wait for the report to finally come up on their screen.
.
Benefits of Scheduling Documents
Provides lower user wait times when implemented correctly
Allows us to offload processing to non-peak times
Can help reduce sizing requirements for concurrent users
Reduces impact on Database during Peak times
Can combine Instances with Report Linking to produce smaller, faster documents
.
Studies have shown that today's end users are unlikely to wait more than about 5 seconds for a video
to load. For example, if you were on YouTube and clicked the Play button, would you wait minutes for that
video to load and start playing? Most of us would give up or try refreshing the video after
about 10-20 seconds.
.
This holds true for web application users too. If a report doesn't display within a minute or two, the consumer
is very likely to close the request and try again, or just give up altogether. The danger in submitting
the request again is that even more resources are used up on the backend servers.
Here's a workflow as an example:
.

1. UserA logs in to BI Launchpad and navigates to the "Monster Finance Report" document.
2. UserA views the document and clicks the refresh button to get the latest data.
3. After about 2 minutes, UserA is not sure what is going on. The report appears to still be refreshing, but being impatient, he suspects the refresh has "hung" and closes the viewer.
4. UserA decides to test his luck and submits the request again. This essentially creates a new request for the same data and potentially works against BOTH requests as they compete for resources on the BI servers and the database side.
5. After a few more minutes, UserA gives up and moves on. Meanwhile, he has no idea of the amount of resources and time he has wasted in the background.
.
In the above scenario, a few bad things happened:

- UserA never got his report and had a bad experience
- Backend resources were wasted without any usable results

.
Both of these could have been avoided by building processes around proper use of scheduling.
.
Here are some tips on how to best utilize scheduling:
.
1. Educate your users to schedule anything that takes over 5 minutes to run
2. Encourage users to schedule reports that they know they will need throughout the day to run in non-peak hours, before their day begins
3. Schedule documents to formats that you know your end users will want, such as Excel, text, or PDF. This can save time and resources during the day
4. Utilize Publications when multiple users have different requirements for the same documents
.
For more information on Scheduling Objects and Publications, see the below links
.
DOC - BI 4.1 SP4 BI Launchpad User Guide - Chapter 7 - Scheduling Objects
.
DOC - BI 4.1 SP4 BI Launchpad User Guide - Chapter 10-11 - Publications
.

TIP 2.2 - Use the Retry options when Scheduling to Automate Retries
.
Although this isn't really a true performance tip, I find that it is a best practice that goes hand in hand
with scheduling. It often amazes me how many people are not aware of the Retry functionality within the
Schedule (CMC only) dialog. This feature allows you to configure your scheduled instances to retry a set
number of times, waiting a set number of seconds between attempts, if a failure occurs.
.
Here is a screenshot of this option in BI 4.1
.

.
Where this tip DOES save you time is in hunting down and manually rescheduling reports that may have
failed due to database issues or resource issues on the BI Platform side. Intermittent failures are usually tied
to resources somewhere in the process flow so simply setting up retries a few minutes apart can help in
limiting the number of true failures we see in a busy environment.
.
This option can be set in the Default Settings/Recurrence section of the Schedule dialog or under
the Schedule/Recurrence section. The difference between the two is that the Default Settings option
sets the default retry values for any future schedules, while setting it under the Schedule section
only sets it for that particular schedule.
.
NOTE: This option is currently only available in the CMC, not through BI Launchpad.
.

TIP 2.3 - Use Instance Limits to help reduce the # of Instances in your
environment

.
This is another little-known feature that you can use to help improve the performance of your system. The
feature is called Instance Limits, and you can set it at a Folder or Object level.
.
The basic concept is that you can set limits on the # of instances a folder or object will keep. If the limit is
exceeded, the CMS will clean up the oldest instances to help reduce the amount of metadata and resources
that are stored in the CMS database and on the Filestore disk.
.
Here are the basic instructions on how to enable and set limits, as found in the CMC Help guide:
.
Setting limits enables you to automatically delete report instances in the BI platform. The limits you set on a
folder affect all objects in the folder. At the folder level, you can set limits for:

- The number of instances for each object, user, or user group
- The number of days that instances are retained for a user or a group
.
Steps to enable Instance Limits in the CMC
1. Go to the Folders management area of the CMC.
2. Locate and select the folder for which to set limits, and select Actions/Limits.
3. In the Limits dialog box, select the "Delete excess instances when there are more than N instances of an object" check box, and enter the maximum number of instances per object the folder can contain before instances are deleted. The default value is 100.
4. Click Update.
5. To limit the number of instances per user or group, click Add beside the "Delete excess instances for the following users/groups" option.
6. Select a user or a group, click > to add the user or group to the Selected users/groups list, and click OK.
7. For each user or group you added in step 6, in the "Maximum instance count per object per user" box, type the maximum number of instances you want to appear in the BI platform. The default value is 100.
8. To limit the age of instances per user or group, click Add beside the "Delete instances after N days for the following users/groups" option.
9. Select a user or a group, click > to add the user or group to the Selected users/groups list, and click OK.
10. For each user or group you added in step 9, in the "Maximum instance age in days" box, type the maximum age for instances before they are removed from the BI platform. The default value is 100.
11. Click Update.
.
Below is a screenshot of the dialog for your reference
.

.
.
Once you have enabled Instance Limits, you will have better control over the size of your CMS and
Input/Output FRS. A bloated CMS database and Filestore can contribute to a slower-running BI
system in general, so keeping a handle on this can help keep your system running at top speed.
.

TIP 2.4 - Platform Search Tweaking for Performance

.
Have you ever seen a bunch of resources (CPU/RAM) being used on your BI Platform server without any user
activity? If you have, this is most likely the Continuous Crawl feature of Platform Search doing a whole lot of
indexing.
.
What is Platform Search?
.
Platform Search enables you to search content within the BI platform repository. It refines the search results
by grouping them into categories and ranking them in order of their relevance.
.
There is no doubt that Platform Search is a great feature! It is just a factor that needs to be taken into
consideration when sizing an environment for Performance.
.
The below Administrators guide talks about this feature and how to configure it:
.
DOC - BI Platform Administrators Guide (BI 4.1 SP4) - Chapter 22 - Platform Search
.
When BI 4.0 first came out, support saw a lot of instances where customers were seeing performance
degradation and resource issues on their system AFTER migrating the bulk of their content over to the new
BI 4.0 system.
.
After an extensive investigation, we discovered that in most of these cases, the issue was the Indexing of
this "new" content that was added to the server.
So how does this affect performance? How can adding new content to a BI 4.x system cause Processing
Servers and other resources to spike up?
.

Behind the scenes, the Platform Search application detects that there is new content that needs to be
indexed and cataloged. This means that every new object (Webi Doc, Universe, Crystal Report, etc.)
needs to be analyzed, cataloged and indexed by the Search Service. To do this, the Platform Search Service,
found on an Adaptive Processing Server, will utilize Processing Servers (Webi, Crystal, etc.) to read the
report contents and generate an index that it can use to map search terms to the content. Really cool
functionality, but with large documents with lots of data, objects, keywords and so on, this can add a lot of
overhead to the system, especially if many new objects are added at once.
.
By default the indexer is configured to continuously crawl the system and index the metadata of the
objects. If you find this is taking up a lot of resources on your system, then you may want to use the
Schedule option to control when it runs. Running indexing outside of regular business hours or peak times
will give you the best performance.
.
Luckily, we can configure the frequency and verbosity level used by the indexer. These options are discussed
in Chapter 22 of the Administrators guide above.
.
In short, be sure to keep Platform Search on your radar in case you have unexplained resource consumption
on your server.
.
More Info:
.
KBA - 1640934 - How to safely use Platform Search Service in BI 4.0 without overloading the
server?
.
BLOG - What is the optimal configuration for Platform Search in BI 4.x? - By Simone Caneparo
.
.

Chapter 3 - Report Design Best Practices


.
This chapter will discuss some Report Design Best Practices that can help you optimize your report for
Performance. These tips should be considered whenever a new report is being designed. A lot of these can
also be applied to existing reports with little effort.
.
NEW - A compilation of Report Design Tips & Tricks, not necessarily related to performance, can also be
found in the below document by William Marcy. This is a great document and a must-see for anyone
striving to design better reports.
.
DOC - Webi 4.x Tricks - By William Marcy & various other contributors on SCN.
.
.

TIP 3.1 - Steer Clear of Monster Webi Documents

.
A "Monster Document" is a document that contains many large reports within in. A Web Intelligence
document can contain multiple Reports. When we are referring to Reports, we mean the tabs at the bottom
of a Webi document. We often use the term Report to mean a Webi Document, but it is important to
differentiate between the two. A document can contain multiple reports.
.
When creating a Document, we need to start with the actual Business Need for that document. We can do
this by asking the stakeholder questions like:
.

1. What is the primary purpose of this document?
2. What question(s) does this document have to answer?
3. How many different consumers will be utilizing this document?
4. Can this document be split into multiple documents that service smaller, more specific needs?

.
By asking questions like the above, we are drilling in on what the actual needs are and can use the answers
to these questions to help eliminate waste. If we build a Monster Document that accounts for every possible
scenario that a consumer may want to look at, then we are potentially wasting a lot of time for both the
document designer and the consumer. For example, if only 10-20% of a large document is actually utilized
by the consumer on a regular basis, then that means 80-90% of the document is waste.
.
Once we know the Business Needs of the consumer, we can design a focused document that eliminates
much of the waste.
.
Below are a few recommended best practices to keep in mind when building a document:
.
1. Avoid using a large number of Reports (tabs) within a Document
   i. 10 or fewer Reports is a reasonable number
   ii. Exceeding 20 Reports in a single document should be avoided
2. Create smaller documents for specific business needs to allow for faster runtime and analysis
   i. Utilize Report Linking to join smaller documents together (discussed more in TIP 3.2)
   ii. Aim to satisfy only 1-2 business needs per document
3. Provide only the data required for the business need(s) of the Document
   i. 50,000 rows of data per document is a reasonable number
   ii. Do not exceed 500,000 rows of data per document
4. Do not add additional Data Providers beyond the document's needs
   i. 5 data providers is a reasonable number
   ii. Do not exceed 15 data providers per document

.
There of course will be exceptions to the above recommendations but I urge you to investigate other ways of
designing your documents if you find your document is growing too large.
.
You will see the following benefits by creating smaller, reusable documents based only on the business needs
of the consumers:

- Reduce the time it takes to load the document initially in the viewer/interface
  Smaller documents load quicker in the viewers because the resources needed to transfer and initially process the document are much lower.
- Reduce the refresh time of the document
  The larger the document, the more time it takes to process during a refresh. Once the report engine receives the data from the data providers, it has to render the report and perform complex calculations based on the document design. Larger documents with many variables and large amounts of data can take much longer to render during a refresh.
- Reduce the system resources needed on both the client side and the server side
  The resources needed to work with a large document are much greater than those needed for smaller documents. By reducing the size of your documents, you are potentially reducing the overall system resources, such as CPU, RAM and disk space, that your system consumes on average. This can equate to better throughput on your existing hardware.
- Improve performance while modifying the document
  When modifying a large document, the client and server have to load the document structure and data into memory. As you add/change/move objects in the reports, client/server communication occurs, and updates require reprocessing of any objects involved. The more objects in a document, the longer each operation during a modify action can take.
- Improve performance for the consumer during ad hoc query and analysis
  Slicing, dicing, filtering and drilling actions perform quicker on smaller documents as well. This equates to faster response times for consumers as they navigate and do detailed analysis on the documents.
.
.

TIP 3.2 - Utilize Report Linking Whenever Possible


.
Report Linking is a great way to relate two documents together. It can be an alternative to drilling down
and allows the report designer better control over the size and performance of their documents. Report
Linking can be used to help reduce the size of single documents by allowing the designer to break
documents out into smaller chunks while still keeping them related to each other. This complements the
recommendation to steer clear of Monster Documents very nicely.
.
The concept of Report Linking is simple. You basically embed a hyperlink into a document that calls another
document. This hyperlink can use data from the source report to provide prompt values to the destination
report. Below is an example that explains the concept
.

- Sales_Summary is a summary report that summarizes the sales for all 100 sites of Company XYZ Inc.
- Sales_Summary has a hyperlink that allows a user to "drill into" a second report (Sales_Details) to get the sales details for any of the 100 sites.
- Sales_Summary is scheduled to run each night and takes ~20 minutes to complete.
- Users can view the latest instance of Sales_Summary, which takes only a few seconds to load.
- Users can drill down into site sales data for each of the 100 sites, which launches the Sales_Details report using Report Linking and a prompt value.
- The prompt value filters the Sales_Details report using a Query Filter so that it only displays the sales details for the one site that the user drilled into.
.
In the above scenario, we see many benefits:

1. The Sales_Summary report only contains the summary details, so it runs faster than if it contained both summary and detailed data
2. The Sales_Summary report is smaller and will load and navigate much quicker on its own
3. The user can drill down and get a much faster response time because the details report only contains the specific data they are interested in
.
The Web Intelligence User Guide covers this in more detail in Section 5.1.3 - Linking to another
document in the CMS
.
DOC - Web Intelligence User Guide - BI 4.1 SP04 Direct Link - Chapter 5 - Section 5.1.3
.

The easiest way to generate these Hyperlinks is using the Hyperlink Wizard. This Wizard is currently only
available in the HTML Interface. For manual creation of the hyperlinks, you will want to follow the
OpenDocument guidelines available in the below link:
.
DOC - Viewing Documents Using OpenDocument
.
Here is a screenshot of the Wizard and where the button is on the toolbar. It can be a little tricky to find if
you haven't used it before:
.

.
It is important to note that this can add a little more time to the planning and design phases of your
document creation process. Properly implemented, though, it can save your consumers a lot of waiting and
will reduce the backend resources needed to fulfill requests.
.
When configuring a hyperlink using OpenDocument or the HTML Hyperlink Wizard, you can choose whether
or not you want the report to refresh on open, or to open the latest instance. Our recommendation is to use
Latest Instance whenever possible. This allows you to schedule the load on your database and backend
processing server and will reduce the time it takes for the consumer to get their reports.
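.
For reference, an OpenDocument link that opens the latest instance and passes a prompt value might look like the sketch below. This is only an illustration, not the exact syntax for your version: the server name, port, document CUID and the "Site" prompt are placeholders, so verify the parameters against the OpenDocument guide above.

    http://<yourserver>:8080/BOE/OpenDocument/opendoc/openDocument.jsp
        ?sIDType=CUID                  (identify the target document by CUID)
        &iDocID=Abc123PlaceholderCUID  (placeholder CUID of the Sales_Details document)
        &sInstance=Last                (open the latest instance instead of refreshing)
        &lsSSite=Site042               (answers a prompt named "Site" - hypothetical)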
.

TIP 3.3 - Avoid Autofit When not Required

.
The Autofit functionality allows you to set a cell, table, cross-tab or chart to be resized automatically based
on the data. A cell for example, has the option to Autofit the Height and Width of the cell based on the data
size. The below screenshot shows this feature in the Applet Interface for a cell.
.

.
This is a great feature for the presentation of the report but it can cause some performance delays when
navigating through pages or generating a complete document.
.
NOTE: The default setting for a cell is to enable the Autofit height option. This could impact the
performance of your reports, so it is important to know how this can affect performance.
.
How does this affect performance of the report?
.
When autofit is enabled for objects on a report, the Processing Server has to evaluate the data used in every
instance of that object in order to determine the size of the object. This means that in order to skip to a
particular page of the report, the processing server would need to calculate the size for every object that
comes before that page. For example, if I have 100,000 rows of data in my report and I navigate to page
1000, then the processing server has to generate all of the pages leading up to page 1000 before it can
display that page. This is because the size of the objects on each page is dynamically linked to the rows of
data so it is impossible to determine what rows will be on page 1000 without first calculating the size of the
objects for each page preceding it.
.
In short, this option adds a lot more work to the page generation piece of the report rendering process. A
fixed size for height and width allows the processing server to determine how many objects fit on each page
and allows it to skip the generation process for pages that are not requested.
.
For another example: if I have 100,000 rows and have set my objects to a fixed width/height, then the
processing server knows that exactly 50 rows fit on each page. If I request page 1000, it knows
that the rows on that page will be rows 49,951 to 50,000. It can then display that page with just those rows
in it. Way quicker than having to generate 999 pages first!
.
.
As you can imagine, this mostly just affects reports that have many rows of data and have many pages. If
you have a report with only a few pages, it probably isn't worth looking at this option. For larger, longer
reports, it might be worth investigating.
.

TIP 3.4 - Utilize Query Filters instead of Report Filters whenever possible
.
A Query Filter is a filter that is added to the SQL Statement for a report. Query Filters limit the data that is
returned by the Database server itself by adding to the WHERE clause of the SQL Statement.
.
A Report Filter is a filter that is applied at the Report Level and is only used to limit the data displayed on the
report itself. All of the data fetched from the Database is still available behind the scenes, but the report
itself is only showing what is not filtered out.
.
There is a time and a place for both Query Filters and Report Filters, but understanding the differences
between them is a good way to ensure that you are not causing unnecessary delays in your report rendering
and refreshing. It is best to predefine Query Filters in your Semantic Layer design, but you can also add them
manually using the Query Panel within Web Intelligence itself.
.
Here is a screenshot of a Predefined Filter being added to a Query in Query Panel
.

.
And here is an example of a similar Query Filter being added manually
.

.
In both of the above cases, the WHERE clause of the SQL statement will be updated to filter on the year,
reducing the data returned to the report.
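.
To make the difference concrete, here is a simplified sketch of the SQL involved; the table and column names are hypothetical, not from any shipped universe:

    -- Query Filter: the restriction becomes part of the WHERE clause,
    -- so the database only returns the rows for the selected year
    SELECT   cal.year, SUM(sales.revenue)
    FROM     sales
    JOIN     cal ON sales.date_id = cal.date_id
    WHERE    cal.year = 2006
    GROUP BY cal.year

    -- Report Filter: no WHERE clause on year, so every year is fetched
    -- and stored in the microcube; Webi hides the unwanted rows afterwards
    SELECT   cal.year, SUM(sales.revenue)
    FROM     sales
    JOIN     cal ON sales.date_id = cal.date_id
    GROUP BY cal.year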
.
Alternatively, here is a screenshot of a Report Filter that does something similar
.

In this Report Filter example, the displayed data is filtered to the selected year, but the data contained in
the cube itself still contains ALL years. This can affect performance, so be sure to use Query Filters to limit
the data whenever possible. There are of course scenarios where Report Filters are the better choice for
slicing and dicing, but it is something to keep in mind when designing reports for performance.
.

TIP 3.5 - Avoid Charts with Many Data Points

.
BI 4.0 introduced a new charting engine that is called Common Visualization Object Model, or CVOM for
short. This is a versatile SDK that provides enhanced charting capabilities to Web Intelligence and other SAP
Products. Web Intelligence utilizes CVOM for creating the charts and visualizations found within the Web
Intelligence product. The CVOM service is hosted on an Adaptive Processing Server (APS) and is referred to as
the Visualization Service.
.
Out of the box, this service is already added to the default APS but depending on the usage of visualizations
in your deployment, you will likely want to split out the APS services according to the System Configuration
Wizard or APS Splitting Guidelines.
.
If we click the Edit Common Services option on the right-click menu of the APS in the CMC, we will see it
listed as the following:
.

.
The reason this is relevant for performance is that this service can become a bottleneck in situations where
the generation of charts takes a long time due to resource or sizing issues. It is important to ensure you are
sized correctly for this service so that it doesn't become a bottleneck. We discuss this in more detail
later on in the Sizing chapter.
.
When we spoke to the developers of the CVOM component and asked them for advice on fast-performing
visualizations, they gave us a tip based on their testing and development experience: avoid large charts
with many data points, and instead use multiple smaller charts with fewer data points within your reports.
.
The reason behind this is that the CVOM components can produce charts much quicker when they do not
have many data points to contend with. Some business needs may still require large charts, but whenever
possible, the recommendation is fewer data points per chart for better performance.

.
DOC - Webi User Guide - Chapter 4.3 - discusses Charting with Web Intelligence
.
.

TIP 3.6 - Limit use of Scope of Analysis

.
As quoted from the Webi User Guide:
.
"The scope of analysis for a query is extra data that you can retrieve from the database that is available to
offer more details on the results returned.
.
This extra data does not appear in the initial result report, but it remains available in the data cube, and you
can pull this data into the report to allow you to access more details at any time. This process of refining the
data to lower levels of detail is called drilling down on an object.
.
In the universe, the scope of analysis corresponds to the hierarchical levels below the object selected for a
query. For example, a scope of analysis of one level down for the object Year, would include the object
Quarter, which appears immediately under Year.
.
You can set this level when you build a query. It allows objects lower down the hierarchy to be included in the
query, without them appearing in the Result Objects pane. The hierarchies in a universe allow you to choose
your scope of analysis, and correspondingly the level of drill available. You can also create a custom scope of
analysis by selecting specific dimensions to be included in the scope."
.
Scope of Analysis is a great way to provide drill-down capabilities and "preload" the data cube with the data
needed for drilling in on dimensions. Where this can impact performance is with those extra objects being
added to the SQL statement behind the scenes. It is important to note that by adding objects to the scope of
analysis, you are essentially adding them to the query that will be run against the database. This can impact
the runtime of the query, so be sure to make this decision consciously.
.
As an alternative to Scope of Analysis, Report Linking can be utilized to achieve an "on-demand" type of
drilldown. This can offload the performance hit to only the times where this extra data is required. Since
some report consumers may not drill down on the extra data fetched, it may make sense to exclude it by
default and provide OpenDocument hyperlinks (report linking) to the consumers to drill down on the data as
needed.
.
Below is an example of using Scope of Analysis to add Quarter, Month and Week to the scope even though the
Result Objects only include Year:
.

.
What this essentially does is modify the query to include Quarter, Month and Week. This of course
returns more data and could take longer to run.
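.
As a rough sketch of the effect on the generated SQL (hypothetical table and column names):

    -- Result Objects only: Year
    SELECT   cal.year, SUM(sales.revenue)
    FROM     sales JOIN cal ON sales.date_id = cal.date_id
    GROUP BY cal.year                                        -- a handful of rows

    -- Same query with Quarter, Month and Week in the Scope of Analysis
    SELECT   cal.year, cal.quarter, cal.month, cal.week, SUM(sales.revenue)
    FROM     sales JOIN cal ON sales.date_id = cal.date_id
    GROUP BY cal.year, cal.quarter, cal.month, cal.week     -- many times more rows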
.
In short, you should ensure that Scope of Analysis is used consciously and that the majority of report
consumers will benefit from it. An alternative is Report Linking as discussed above.
.

TIP 3.7 - Limit the # of Data Providers Used

.
Best practice from the field is to limit the # of data providers to 15 or fewer for faster-performing reports. If
you need more than 15 data providers, you may want to consider a different way of
combining your data into a single source. Using a proper ETL tool and data warehouse is a better way to
achieve this, as it pushes the consolidation of data to a data warehouse server instead of the BI server or
client machine.
.
The current design of the Webi Processing Server is to run Data Providers in series. This means that each
data provider is run one after another and not in parallel as you might expect. So, the combined total
runtime of ALL of your data providers is how long the report will take to get the data.
.
Here is a representation of the combined time it might take for a report with multiple data providers:
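.
For example, with three data providers and made-up timings (purely to illustrate the serial behavior):

    DP1 (10s)  -->  DP2 (25s)  -->  DP3 (15s)
    Total data retrieval time = 10 + 25 + 15 = 50 seconds
    (the providers run one after the other, not concurrently)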
.

.
Another consideration for reports with a lot of data providers is that merging dimensions between multiple
sources adds overhead into the processing time. Keeping it simple will certainly result in a better performing
report.
.
.

TIP 3.8 - Don't accidentally Disable the Report Caching

.
Web Intelligence utilizes disk and memory caching to improve the performance of loading and processing
documents & universes. This can provide a faster initial load time for common reports and universes when
implemented correctly.
.
The good news is that caching is enabled by default so in most cases this will be happening automatically for
you and your users behind the scenes. There are a few situations where cache cannot be used though so we
wanted to make sure report designers were aware of these:
.
The following functions will force a document to bypass the cache:
.
CurrentDate()
CurrentTime()
CurrentUser()
GetDominantPreferredViewingLocale()
GetPreferredViewingLocale()
GetLocale()
GetContentLocale()
.
If you use these within your document, then cache will not be utilized. These functions are quite common so
it is important to be aware of the potential impact on caching they can have.
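.
For example, a header cell that uses =CurrentDate() to stamp the report will prevent the whole document from being served from cache. If the intent is really to show when the data was retrieved, a formula such as =LastExecutionDate() may be a better fit, since its value is fixed at refresh time; this substitution is a suggestion to evaluate rather than an official guideline, so test the caching behavior in your own environment.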
.
At the current time, caching is done at a document level and not an individual Report (tab) level. Therefore,
if these functions are used anywhere in the document, the cached copies will not be used for subsequent
requests.
.

TIP 3.9 - Test Using Query Drill for Drill Down Reports

.
What is Query Drill? As quoted from the Web Intelligence User Guide:
.
"When you activate query drill, you drill by modifying the underlying query (adding and removing
dimensions and query filters) in addition to applying drill filters.
.
You use query drill when your report contains aggregate measures calculated at the database level. It is
designed in particular to provide a drill mode adapted to databases such as Oracle 9i OLAP, which contain
aggregate functions which are not supported in Web Intelligence, or which cannot be accurately calculated
in the report during a drill session.
.
Query drill is also useful for reducing the amount of data stored locally during a drill session. Because query
drill reduces the scope of analysis when you drill up, it purges unnecessary data."
.
Performance gains come from reducing the amount of data that a Webi document stores locally
and from pushing some of the aggregation to the database server side.
.

Performance gains may or may not be realized by using this option but it is simple enough to test it out to
see if it will improve performance for you. To enable this option, go into the Document Properties and check
the "Use Query Drill" option. Below is a screenshot of the option:
.

TIP 3.10 - Mandatory Prompts vs Optional Prompts


.
This tip came to me while investigating a support incident. The customer I was working with noticed
that reports took significantly longer to refresh when his prompts were Optional vs Mandatory. We
were seeing a 30-second difference in even one of the simpler reports he had for testing. We
investigated this through the traces and noticed that the SQL Generation functions were executing
twice when Optional Prompts were involved, adding to the overhead of running the report.
.
This was happening in XI 3.1 SP7 on the customer's side, so it was with a legacy UNV universe. I could
replicate the issue internally with our simple eFashion universe, but since it executes very quickly, the
extra time was barely noticeable in my own testing. I collected my internal logs and BIAR file and
asked a developer for a quick review.
.
The developer confirmed that the delay came from the SQL Generation functions, as suspected. He
then did a code review to see why this was happening. His explanation was that Optional prompts
may or may not have values, and therefore the SQL generation can change after the prompt
dialog appears. For example, if an optional prompt value is not selected, then the WHERE clause will
omit that object. With Mandatory prompts, the SQL structure will always be the same before and after
prompts are selected, so the SQL does not need to be regenerated after a value is selected.
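.
To illustrate, here is a hedged sketch of the generated SQL for a hypothetical state prompt (table and column names are made up):

    -- Mandatory prompt: the SQL structure is fixed; only the value changes,
    -- so the statement can be generated once, before the prompt dialog
    SELECT   outlet.city, SUM(outlet.revenue)
    FROM     outlet
    WHERE    outlet.state = 'California'
    GROUP BY outlet.city

    -- Optional prompt left empty: the predicate is dropped entirely, so the
    -- statement has a different shape and must be regenerated after the dialog
    SELECT   outlet.city, SUM(outlet.revenue)
    FROM     outlet
    GROUP BY outlet.city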
.
So, in short, Optional vs Mandatory prompts can give different performance results, so this should be
considered before choosing one over the other. As with many of the other tips in this doc, this does not
mean that you should not use Optional prompts. They are useful and often necessary, but they are
a factor, and as long as you know this, you can optimize your report design.
.

Chapter 4 - Semantic Layer Best Practices


.
Most of the below Best Practices involve the Semantic Layer, or SL as we sometimes refer to it. These
Best Practices can help you design faster-running queries, which can result in faster-running Webi docs.
.

TIP 4.1 - Only Merge Dimensions that are needed

.
A Merged Dimension is a mechanism for synchronizing data from different Data Providers. For example, if
your document had 2 Data Providers and each of them has a "Product Name" dimension, you could merge
the two different dimensions into a single "Merged" dimension that would contain the complete list of
Product Names from each data provider.
.
Web Intelligence will automatically merge dimensions in BI 4.x by default, so you may want to evaluate if
there are performance gains you can achieve by reviewing the merged dimensions. If you do not want your
dimensions to be automatically merged, you can uncheck the "Auto-merge dimensions" property in the
Document Properties of your reports.
.
We have 2 Document Properties within a Webi document that can affect the merging of dimensions:
.
Auto-merge dimensions -- Automatically merges dimensions with the same name from the same universe.
.
Extend merged dimension values -- This option will automatically include merged dimension values for a
dimension even if the merged dimension object is not used in a table.
.
Merging dimensions has overhead associated with it that can impact the performance of your Webi
documents. If you do not need certain dimensions merged within your document, you can simply
choose to unmerge them. This removes the performance overhead associated with merging those
dimensions, and you can always merge them again later if needed.
.

In short, to squeeze a little extra performance out of your larger reports, it might be worth unmerging
dimensions that are not being used as merged dimensions.
.

TIP 4.2 - Build Universes & Queries for the Business Needs of the
Document

.
Like any successful project, the key to a successful Webi Document is good planning. This helps avoid
scope/feature creep when you build out the document. During the planning stage, it is important to
determine exactly what the business needs for your document are. Once you know the business needs, you
can build a lean document that only contains the information needed to fulfill those needs.
.
Just like the previous tip that talks about "Monster" documents, we also need to avoid "Monster"
queries/universes as well. The fact is, the larger a universe or query is, the worse the performance and
overhead resources will be. By focusing only on the business needs, we can minimize the size of our queries
and optimize the runtime of our documents.
.
As a real-life example, I have seen a report that was built off of a query that contained over 300 objects.
This report pulled back around 500,000 rows of data and took over 45 minutes to complete. On inspecting
the document, only about 1/4 of the objects were used in the document. When asked why they were using
a query that had over 300 objects in it, they didn't have an answer. If we do the math on this, 300 objects x
500,000 rows = 150 million cells. It was likely that this query was designed to account for ALL scenarios that
the query designer could think of, and NOT based on the business needs of the report consumer.
.
In Summary, it is important to know who will be utilizing the universes and what their needs will be. You then
want to build a lean universe, and supporting queries, that are optimized to suit those needs.
.

TIP 4.3 - Array Fetch Size Optimizations

.
The Array Fetch Size (AFS) is the maximum # of rows that will be fetched at a time when running a Web
Intelligence document. For example, if you run a query that returns 100,000 rows of data and you have an
Array Fetch Size of 100, it will take 1000 fetches of 100 rows per fetch (1000 x 100 = 100,000) to retrieve all
of those rows.
.
In newer versions of Web Intelligence, we automatically determine an optimal AFS based on
the size of the objects within your query. For most scenarios, this results in an optimized value that
returns the data with good performance. Sometimes, though, manually setting a higher value
can squeeze out a little more performance.
.
I did some quick testing on my side and here are the results that the Array Fetch Size had on my test server:
.

.
As you can see above, the time it took to run the same query varied based on the AFS value that was set. The
optimized value (which I believe was around 700 behind the scenes) took around 30 seconds. By overriding
this and setting my AFS to 1000, I was able to shave off another 12 seconds, taking it down to 18 seconds.
This is great for performance, but keep in mind that it means larger packets are sent over the network and
extra memory is needed to accommodate the larger fetches.
.
As mentioned, the optimized value is used by default for newly created connections/universes. To
override this and test your own values, you have to disable the AFS optimization using a universe parameter
called DISABLE_ARRAY_FETCH_SIZE_OPTIMIZATION. Setting this to "Yes" disables the optimization and
uses the Array Fetch Size value set on your connection.
.
More information on this can be found in the Information Design Tool or Universe Designer Guide referenced
below:
.
DOC - Information Design Tool User Guide (BI 4.1 SP4 - Direct Link)
.
DOC - Universe Design Tool User Guide
.

TIP 4.4 - Ensure Query Stripping is Enabled

.
Query Stripping is a feature that will remove unused objects from a query automatically to improve
performance and reduce the data contained in the cube. Query Stripping was originally only available for
BICS based connectivity to BEx queries but was introduced for Relational database connections starting in BI
4.1 SP3.
.
Query Stripping needs to be enabled for Relational Databases through three different options:
.
1. Enable "Allow query stripping" option at the Business Layer level in the Universe (UNX)

2. In the Document Properties of the Webi Document


.

.
3. In the Query Properties
.

It is best to double-check those 3 places when implementing Query Stripping. If it is unchecked at any level,
you may not be benefiting from the Query Stripping feature.
.
There is also a way to tell if it is working. With Query Stripping enabled, refresh your query and then go back
into the Query Panel and click the View SQL button. You should see that only the objects used in a block within
the report appear in the SQL. In this example, I am only using 3 of the 6 objects in my report, so the query only
selects those objects.
.

.
You can see above that the SQL has been stripped of any unused objects and should run quicker as a result.
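.
In other words, the effect is roughly the following, sketched here with a hypothetical six-object query where only three objects are used in a block:

    -- Query as designed (6 result objects)
    SELECT cust.region, cust.city, cust.segment,
           ord.year, ord.quantity, ord.revenue
    FROM   cust JOIN ord ON ord.cust_id = cust.cust_id

    -- Query actually executed with stripping on
    -- (only the 3 objects used in a block survive)
    SELECT cust.region, ord.year, ord.revenue
    FROM   cust JOIN ord ON ord.cust_id = cust.cust_id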
.
For BICS based documents, Query Stripping is enabled by default.
.
In summary, you will want to ensure your documents are utilizing Query Stripping to get better performance
when refreshing queries.
.

TIP 4.5 - Follow these Best Practices for Performance Optimizing SAP BW
(BICS) Reports
.
There is a lot of great information contained in the below document. It outlines many best practices for
reporting off of SAP BW using the BICS connectivity. Please review the below guide for more details on
optimizing the performance of BICS based Reports.
.
DOC - How to Performance Optimize SAP BusinessObjects Reports Based Upon SAP BW using
BICS Connectivity
.

TIP 4.6 - Using Index-Awareness for Better Performance


.


Index-Awareness is described in the Information Design Tool User guide in section 12.7 as:
.
"Index awareness is the ability to take advantage of the indexes on key columns to improve query
performance.
.
The objects in the business layer are based on database columns that are meaningful for querying data. For
example, a Customer object retrieves the value in the customer name column of the customer table. In
many databases, the customer table has a primary key (for example an integer) to uniquely identify each
customer. The key value is not meaningful for reporting, but it is important for database performance.
.
When you set up index awareness, you define which database columns are primary and foreign keys for the
dimensions and attributes in the business layer. The benefits of defining index awareness include the
following:
- Joining and filtering on key columns are faster than on non-key columns.
- Fewer joins are needed in a query, therefore fewer tables are requested. For example, in a star schema
database, if you build a query that involves filtering on a value in a dimension table, the query can apply the
filter directly on the fact table by using the dimension table foreign key.
- Uniqueness in filters and lists of values is taken into account. For example, if two customers have the same
name, the application retrieves only one customer unless it is aware that each customer has a separate
primary key."
.
Utilizing Index Awareness can help improve performance, as key columns will be utilized behind the scenes
in the queries to do faster lookups and joins on the database side.
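.
To illustrate the mechanism (the star schema below is hypothetical), a filter on a customer name normally
requires a join to the dimension table:
.
SELECT SUM(f.sales_amount)
FROM fact_sales f
JOIN dim_customer c ON f.customer_key = c.customer_key
WHERE c.customer_name = 'Acme Corp'
.
With index awareness, the key value behind 'Acme Corp' can be substituted and the join disappears:
.
SELECT SUM(f.sales_amount)
FROM fact_sales f
WHERE f.customer_key = 1042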
.
The Information Design Tool User Guide covers Index Awareness in Chapter 12:
.
DOC - Information Design Tool User Guide (BI 4.1 SP4) - Chapter 12
.
.
TIP 4.7 - Using Aggregate Awareness for Performance
.
Aggregate Awareness is described as the following in the IDT User Guide:
.
"Aggregate awareness is the ability of a relational universe to take advantage of database tables that
contain pre-aggregated data (aggregate tables). Setting up aggregate awareness accelerates queries by
processing fewer facts and aggregating fewer rows.
.
If an aggregate aware object is included in a query, at run time the query generator retrieves the data from
the table with the highest aggregation level that matches the level of detail in the query.
For example, in a data foundation there is a fact table for sales with detail on the transaction level, and an
aggregate table with sales summed by day. If a query asks for sales details, then the transaction table is
used. If a query asks for sales per day, then the aggregate table is used. Which table is used is transparent
to the user.
.
Setting up aggregate awareness in the universe has several steps. See the related topic for more
information"
.
Utilizing the database to pre-aggregate data can help speed up the performance of your Webi documents.
This is because the Webi Processing Server will not have to do the aggregations locally and will only have to
work with the aggregated data that is returned from the database side.
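.
In the universe, this is typically implemented with the @Aggregate_Aware function in an object's SELECT
definition, listing tables from the most to the least aggregated (the table names below are hypothetical):
.
Sales Revenue = @Aggregate_Aware(sum(agg_sales_by_day.revenue), sum(fact_transactions.revenue))
.
At run time, the query generator uses the first table in the list whose aggregation level is compatible with
the rest of the query.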
.
Use Aggregate Awareness whenever it makes sense.
.
TIP 4.8 - Utilizing JOIN_BY_SQL to avoid multiple queries
.
The JOIN_BY_SQL parameter determines how SQL generation handles multiple SQL statements. By
default, SQL statements are not combined, and in some scenarios performance gains can be realized by
allowing SQL generation to combine multiple statements into one.
.
The JOIN_BY_SQL parameter is found in the Information Design Tool in the Business Layer and/or Data
Foundation, and it defaults to "No". By changing this value to "Yes", you are instructing the SQL generation
process to use combined statements whenever possible. This can result in faster query execution, so it may
be worth testing this option on your universes/documents.
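.
As a rough illustration of what changes (the tables below are invented and the exact SQL generated depends
on your database and universe design), a document with two measures from different fact tables might
normally produce two statements that Web Intelligence synchronizes itself:
.
SELECT year, SUM(revenue) FROM fact_sales GROUP BY year
SELECT year, SUM(quantity) FROM fact_shipments GROUP BY year
.
With JOIN_BY_SQL set to "Yes", the generator can instead push the merge down to the database as one
combined statement:
.
SELECT s.year, s.rev, q.qty
FROM (SELECT year, SUM(revenue) AS rev FROM fact_sales GROUP BY year) s
FULL OUTER JOIN (SELECT year, SUM(quantity) AS qty FROM fact_shipments GROUP BY year) q
ON s.year = q.year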
.
TIP 4.9 - Security Considerations for the Semantic Layer
.
There is no doubt that security is a necessity when dealing with sensitive data. The purpose of this tip is to
prompt you to review your security model and implementation to ensure it is as lean as it can be.
Performance can definitely be impacted, sometimes quite severely, by the complexity of the security model
at both your Semantic Layer, and your BI Platform (Users and Groups) levels.
As an example, I recently worked on an incident where we were seeing roughly a 10-40% performance
difference when opening a Webi document with the built-in Administrator account vs another user account.
On closer examination, the user was a member of over 70 groups, and a good portion of the load time was
spent on rights aggregation and look-ups.
.
We also found that there were some inefficiencies in our code that could be optimized in future Support
Packages/Minor Releases. These should help improve performance for customers who may be unaware of
the performance impacts their complex security model may be having.
.
So, some actions you may want to consider for this tip are:
.
1. Review your business requirements and reduce/remove any unnecessary Data/Business Layer Security
Profiles at the universe level.
2. Consider using the Change State / "Hidden" option in the Business Layer for fields that you do not want
any users to see.
3. Consider using Access Levels in the Business Layer to control which users have access to objects.
4. Reduce the number of User Groups/Roles that your users are a part of.
5. Test performance with an Administrator user and compare it to a restricted user to gauge the impact on
performance.

.
Chapter 5 - Formula & Calculation Engine Tips
.
These tips share some insight from the product developers on how the backend calculation engine
handles calculations with regard to performance.
.
TIP 5.1 - Use Nested Sections with Conditions with caution
.
A nested section, or subsection as it is sometimes called, is a section within a section. For example, you
might have a Country section with a Region section inside it. Nested sections can add overhead to the
report rendering/processing time. This is especially true when you add conditions to the section such as
"Hide section when...". This doesn't mean that you should not use nested sections; they are certainly useful
for making a report look and feel the way you want it to. You should, however, consider the performance
impact before you heavily utilize nested sections within your documents.
.
Picture, for example, an eFashion based report with 4 levels of nested sections, each using the Format
Section options (the "Hide section when..." conditions mentioned above); it is this combination that can
affect performance when overused.
When using conditions within nested sections, the calculation engine needs to figure out which sections
are displayed. The more nested sections you have, the more overhead there is in figuring out which levels of
sections are actually visible. Again, this is very useful in most cases, but for reports with thousands of
dimension values in the sections and conditions associated with them, it can impact performance.
.
TIP 5.2 - Use IN instead of ForEach and ForAll when possible
.
This tip came directly from our developers who work on the calculation engine. Behind the scenes, the
code is much more efficient when processing the IN context operator than the ForEach or ForAll operators.
.
The following Document is available on our help portal. It covers using functions, formulas, calculations and
contexts within a Webi document in more detail:
.
DOC - Using functions, formulas, and calculations in Web Intelligence (BI 4.1 SP3)
.
Section 4.3.1.1 covers the "IN" context operator with examples of how it works. In short, the IN context
operator specifies dimensions explicitly in a context.
.
Section 4.3.1.2 and 4.3.1.3 cover the ForEach and ForAll context operators. In short, these two functions
allow you to modify the default context by including or excluding dimensions from the calculation context.
.
In a lot of cases, IN can be used to achieve the same results as the ForEach and ForAll operators, so if you
suspect these are contributing to performance issues, try changing your formulas to use IN instead.
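.
As a minimal sketch (assuming a block on [Year] in an eFashion-style document), these two variables return
the same value, the largest quarterly revenue within each year, but the IN version states the full context
explicitly and is cheaper for the engine to process:
.
v_Best_Quarter = Max([Sales Revenue] ForEach([Quarter]))
.
v_Best_Quarter = Max([Sales Revenue] In ([Year];[Quarter]))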
.
.
TIP 5.3 - Use IF...THEN...ELSE instead of Where operator when possible
.
In most cases, the IF/THEN/ELSE operators can be used instead of a Where operator. This is more efficient
from a calculation engine perspective, according to our developers. If you ever suspect that the Where
operator is causing performance degradation in your report, try swapping it for an IF statement if you
can.
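.
For example (a sketch using eFashion-style objects; for a simple additive aggregation like this, the two forms
return the same result):
.
v_Q1_Sales = Sum([Sales Revenue]) Where ([Quarter] = "Q1")
.
v_Q1_Sales = Sum(If([Quarter] = "Q1"; [Sales Revenue]; 0))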
.
The following document discusses these operators in more detail:
.
DOC - Using functions, formulas, and calculations in Web Intelligence (BI 4.1 SP3)
.
Section 6.2.4.14 covers the usage of the Where operator and provides examples.
.
Section 6.1.10.11 covers the IF...Then...Else functionality.
TIP 5.4 - Factorize (Reuse) Variables
.
Factorizing variables essentially means reusing them within other variables. By doing this, you reduce
the number of calculations that the engine needs to perform to produce the results.
.
Here is an example of what we mean by factorizing variables:
.
v_H1_Sales = Sum([Sales Revenue]) Where ([Quarter] InList("Q1";"Q2"))
.
v_H2_Sales = Sum([Sales Revenue]) Where ([Quarter] InList("Q3";"Q4"))
.
Now we reuse these two to get the year's sales (H1 + H2 revenue combined):
.
v_Year_Sales = v_H1_Sales + v_H2_Sales
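.
(Without factorizing, v_Year_Sales would have to repeat both Sum([Sales Revenue]) Where (...) expressions
inline, forcing the engine to evaluate each aggregation a second time.)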
.
By reusing variables, you are saving the time needed to recalculate values that have already been
calculated. The above is a simple example but applying the same logic to more complex calculations can
save you some real time on the calculation side.
.
.
Chapter 6 - Sizing for Performance
.
One of the keys to a faster performing report is proper sizing of the back-end components. More often than
not, we see systems that were originally sized correctly for "day one" usage but have since outgrown the
original sizing and are now experiencing performance issues due to resource and concurrency limits. It is
important to size your system for today's usage as well as the usage of the near future. It is equally
important to have checkpoints a few times a year to ensure you are not outgrowing your original sizing
estimates.
.
UPDATE: Ted Ueda has written a great blog that goes over some of these recommendations in greater
detail. Link below:
.
BLOG - Revisit the Sizing for your deployment of BI 4.x Web Intelligence Processing Servers!
.
.
The following tips will help you size your system for performance and may help you avoid some common
mistakes we have seen in support.
.
TIP 6.1 - Use these Resources to Help you Size your Environment
.
The BI Platform is not sized for performance right out of the box. Since every installation will have a different
set of content, users, nodes, rights, data, etc... it is difficult to do sizing without putting quite a bit of
preparation and thought into the exercise.
.
The following resources can help you do a sizing exercise on your environment:
.
DOC - Sizing and Deploying SAP BI 4 and SAP Lumira
.
DOC - SAP BusinessObjects BI4 Sizing Guide
.
XLS - SAP BI 4x Resource Usage Estimator
.
To complete a sizing exercise, you will want to use the above Sizing Guide and Resource Usage
Estimator. You will also need to know quite a bit about the hardware and the expected usage of the
system to do a proper sizing.
.
.
TIP 6.2 - Do not reuse XI 3.1 Sizing on a BI 4.x System (32bit vs 64bit)
.
A common mistake some administrators make is to reuse the sizing requirements from their XI 3.1
environment for their BI 4.x environment. BI 4.x is much more than a simple upgrade and contains quite a
few architectural changes that need to be considered in order to size an environment correctly. One of the
biggest changes was the adoption of 64-bit processes in BI 4.x. This overcomes one of the major sizing
constraints in XI 3.1, which was memory limits.
.
XI 3.1 was all 32-bit processes. On Windows systems, this meant that the most memory that could be
allocated per process was around 2 gigabytes. In most cases, this limited the scalability of a system,
especially where Web Intelligence was concerned. The Web Intelligence Processing Server
(WIReportServer.exe) would only be able to use up to 2 GB of memory before it would hang or crash. On a
Linux system this could be increased to about 4 GB, but that could still easily be reached with a few very
large report requests. For this reason, the recommendation was to have multiple Web Intelligence Processing
Servers (WIPS) on a single node. For example, if you had 32 GB of RAM, you might put 12 WIPS on that
machine so that a total of 24 GB of that RAM could be used (12 x 2 GB). There was still the risk of a single
WIPS exceeding the 2 GB ceiling, but with load balancing the odds of hitting that limit went way down.
.
In BI 4.x, the WIPS is now a 64-bit process, which means that the 2 GB limit is no longer an issue. In the
same example as above, you might reduce the number of WIPS from 12 to 2. With 2 x 64-bit servers, you
can use all of the RAM that is available on the machine and you still get fault tolerance and failover
capabilities. Technically you could have just 1 WIPS, but if that one were to crash for some reason, you
wouldn't have a second one to handle the failover.
.
There were also some major differences introduced in the Adaptive Processing Server. These are a big
factor and should be taken into account when sizing out a BI 4.x system. The next tip covers this in more
detail.
.
In short, be sure to redo your sizing for your BI 4.x system when upgrading.
.
TIP 6.3 - Ensure the Adaptive Processing Server is Split and Sized Correctly
.
When BI 4.0 first came out, a lot of the performance and stability related issues were narrowed down to
inadequate resource availability for the Adaptive Processing Server (APS) and its services. Out of the box, BI
4.x installs 1 APS with around 21+ services set up on it. These services provide different functions to the
BI Platform, ranging from Platform Search to Universe connectivity. If you do not split out and size
your APS correctly, you will definitely experience resource and performance issues at some point in your
deployment.
.
As far as Web Intelligence is concerned, there are 3 main services hosted on the APS that can
drastically affect the performance of a Web Intelligence document. These are:
.
DSL Bridge Service - Used for BICS (SAP BW direct access) and UNX (Universe) access
Visualization Service - Used to create the charts and visualizations within Web Intelligence documents
Data Federation Service - Used for Multi-Source Universes and Data Federation
.
The Platform Search Service can also affect Webi Performance as it utilizes Webi Processing Servers to
index the metadata of Webi Docs.
.
Shortly after the release of BI 4.0, SAP released an APS splitting guide that assists system administrators in
splitting the APS to accommodate their system usage. A link to this guide is found below. It covers a lot
more detail than I will go into here and is a must-read for anyone in charge of a BI 4.x deployment.
.
DOC - Best Practices for SAPBO BI 4.0 Adaptive Processing Servers
.
The document describes the architecture of the APS process and all of the different services that are
installed. There are recommendations on how you can group services together to pair less resource-intensive
services with ones that require more resources. This helps strike a balance between the # of APS
instances and performance.
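.
As a purely illustrative example of what a split might look like (the grouping below is hypothetical; the guide
above and your own sizing exercise should drive the actual layout):
.
APS.WebI -> DSL Bridge Service, Visualization Service
APS.Search -> Platform Search Service
APS.Core -> remaining lightweight platform services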
.
There is also a System Configuration Wizard available in the Central Management Console (CMC).
This wizard performs a simple T-shirt-size based split of the APS and can be used as a baseline for new
installs. SAP still recommends that you do a proper sizing exercise in addition to this, though.
.
.
TIP 6.4 - Keep Location and Network in mind when Designing your Environment
.
Network transfer rates can play a large part in the performance of a BI environment. It is important to know
where bottlenecks can occur and to ensure that the network is not going to slow down your performance.
.
It is important to have a fast, reliable network connection between the Web Intelligence Processing Server /
APS (DSL Bridge Service) and the Reporting Database. This is because the data retrieved from the Webi
Documents will have to be transferred from the Database server to the WIPS or APS process over the
network. In most cases, it is best to co-locate the Processing Servers with the Database in the same
network segment but if that is not possible, it is still important to ensure the network between the two is fast
and reliable.
.
If you suspect that network is causing performance bottlenecks for you, you should be able to use either BI
Platform, Network or Database traces to identify where the bottleneck is.
.
TIP 6.5 - Use Local, Fast Storage for Cache and Temp Directories
.
Some Administrators change the Cache and Temp directories for their Web Intelligence Processing Servers to
some sort of Network Attached Storage (NAS) device.
.
This is unnecessary in most cases, and unless that network storage is as fast as or faster than local hard
drive storage, it can become a bottleneck. Cache and Temp files for Web Intelligence are non-critical
components that do not need to be backed up or highly available. If a WIPS doesn't find a Cache/Temp file
it needs, it will simply recreate it. There is a slight performance hit in recreating a file, but with local
storage there is little chance of a file going missing.
.
With NAS, network issues can cause outages to the entire file system, or network traffic can reduce
performance. Local disk is much cheaper and is often quicker for Cache and Temp files.
.
TIP 6.6 - Ensure your CPU Speed is Adequate
.
The speed of the processors or cores available to a BI system can definitely contribute to the
performance of your workflows. I've seen a scenario where a customer was noticing much slower
performance in their Production environment vs their Quality Assurance environment. In digging into the
issue, the problem was determined to be the CPU speed. In Production, they had 128 cores running at 1200
MHz, which is great for concurrent requests that run on separate threads. QA only had 8 cores, but the CPU
was a 2.8 GHz processor. So, when doing a single workflow comparison, QA ran requests much quicker than
Production. Production could handle a high load of concurrent users, but individual requests ran quite a bit
slower.
.
Nowadays, most machines have a pretty fast processor in them so this might not be something that most
people will run into. Where I have seen this more frequently is when older UNIX machines are being used.
.
TIP 6.7 - Use the BI Platform Support Tool for Sizing Reviews
.
The BI Platform Support Tool (BIPST) is a great tool for gathering information about your BI 4.x system
landscape. If you haven't used this tool yet, I highly recommend you download it and play with it. Below is
the link to the BIPST portal:
.
WIKI - BI Platform Support Tool
.
The tool can be downloaded from the above link and there is a Webinar available that covers some of the
features and how to use them. The Wiki itself also gives you a good overview of the features of the tool.
.
For sizing reviews, this tool is invaluable as it gives you a really easy overview of the servers that you have
and their settings. It also gives you a good idea of the content and users in your environment, which you
can use when doing a sizing (or resizing) exercise.
.
.
Chapter 7 - Architectural Differences between XI 3.1 & BI 4.x
.
This section covers some of the main architectural differences between XI 3.1 and BI 4.x as they
pertain to Web Intelligence. Knowing these differences can help during upgrades and new installs. For
an architectural diagram of BI 4.x, please see the link below:
.
DOC - BI Platform 4.1 Architecture Diagram
.
I've divided this chapter into TIPs like the previous chapters, but they are really more like sections, as
they are intended for informational purposes only.
.
.
TIP 7.1 - 32-bit vs 64-bit - What does it mean?
.
One of the biggest differences between XI 3.1 and BI 4.x is that a number of the processes have been
upgraded to 64-bit applications. 64-bit processes can utilize far more memory than 32-bit processes,
which can greatly increase the throughput of a single process. In the 32-bit world, a process could
only address up to 4 GB of memory in total, and on Windows that was slashed in half by default. So in
the XI 3.1 days, the WIReportServer.exe (Web Intelligence Processing Server) could easily reach the
2 GB maximum on a Windows OS and would then become unstable.
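.
The arithmetic behind those numbers: a 32-bit pointer can address 2^32 bytes = 4 GB, and the default
user/kernel address-space split on 32-bit Windows left roughly 2 GB of that to the process. A 64-bit pointer
can address 2^64 bytes (about 16 exabytes), so in practice a 64-bit process is limited by the machine's
physical RAM rather than by its address space.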
.
This update to a 64-bit Processing Server for Webi also means that 64-bit database clients/drivers can now
be utilized. All of this, of course, requires that your operating system and hardware are 64-bit as well.
.
This change is especially relevant for the sizing of your BI 4.x system. In previous versions, we
recommended scaling Webi out on a single node to utilize more than 2 GB of the available
RAM. In BI 4.x this is not required, as a single Web Intelligence Processing Server (WIPS) can utilize
essentially all the available RAM on a machine in a single process.
.
TIP 7.2 - Hosted Services are more heavily used in BI 4.x
.
BI 4.x may look similar on the surface, but another major architectural change on the backend was the
shift to service-based process flows. These services are often referred to as Outproc or Hosted services,
as they are often hosted outside of the process that is utilizing them. As an example, the WIPS utilizes
the DSL Bridge Service, which is hosted on the Adaptive Processing Server, for Semantic Layer access
such as UNX Universe and BICS connectivity.
.
This heavier reliance on Hosted services means that there are more variables to consider when
investigating issues or sizing an environment. To name a few considerations:
.
The process flows contain more services, which could be spread across multiple nodes. This greatly
increases the complexity of process flows in some cases.
Sizing is more complex, as you have to consider the impact on multiple processes when estimating
the resources needed for a report.
Network can be a factor when troubleshooting bottlenecks in workflows.
.
In most cases, it is the Adaptive Servers (Job and Processing) that host these services, so the major
change surrounds the proper scale-out of those servers.
TIP 7.3 - Larger # of Processes involved in Process Workflows
Below is a list of processes that can be involved in certain Web Intelligence workflows in BI 4.x:
.
Web Intelligence Processing Server -> Processing and creation of Web Intelligence documents
Visualization Service (APS) -> Generating charts
DSL Bridge Service (APS) -> New Semantic Layer and BICS connections
Data Federation Service (APS) -> Multi-Source Universes
Connection Server (64-bit) -> 3-tier mode connections
Connection Server (32-bit) -> 3-tier mode connections
Secure Token Service (APS) -> SSO ticket sessions
WebI Monitoring Service (APS) -> Client monitoring
Web Application Server -> Page rendering
Central Management Service -> Authentication, server communication, rights, etc.
File Repository Service -> File retrieval / instance storage
Publication Service (APS) -> Web Intelligence publication processing
Adaptive Job Server -> Publications and scheduled jobs
.
I may have missed one or two, but you get the point. When we compare this list to XI 3.1, you can see the
contrast:
.
Web Intelligence Processing Server -> Processing and creation of Web Intelligence documents
Connection Server (32-bit) -> 3-tier mode connections
Web Application Server -> Page rendering
Central Management Service -> Authentication, server communication, rights, etc.
File Repository Service -> File retrieval / instance storage
Adaptive Job Server -> Publications and scheduled jobs

.
For this reason, it is important to know the BI 4.x Workflows fairly intimately. Below are some links to
Interactive Workflows that will help you learn and understand these changed workflows.