Version: 8.1.2
Document Number: 09470812
Various MicroStrategy products contain the copyrighted technology of third parties. This product may contain one or more of the following copyrighted technologies:
- Graph Generation Engine. Copyright 1998-2008 Three D Graphics, Inc. All rights reserved.
- Actuate Formula One. Copyright 1993-2008 Actuate Corporation. All rights reserved.
- XML parser. Copyright 2003-2008 Microsoft Corporation. All rights reserved.
- Xalan XSLT processor. Copyright 1999-2008 The Apache Software Foundation. All rights reserved.
- Xerces XML parser. Copyright 1999-2008 The Apache Software Foundation. All rights reserved.
- FOP XSL formatting objects. Copyright 2004-2008 The Apache Software Foundation. All rights reserved.
- Portions of Intelligence Server memory management. Copyright 1991-2008 Compuware Corporation. All rights reserved.
- International Components for Unicode. Copyright 1999-2008 Compaq Computer Corporation; Copyright 1999-2008 Hewlett-Packard Company; Copyright 1999-2008 IBM Corporation; Copyright 1999-2008 Hummingbird Communications Ltd.; Copyright 1999-2008 Silicon Graphics, Inc.; Copyright 1999-2008 Sun Microsystems, Inc.; Copyright 1999-2008 The Open Group. All rights reserved.
- Real Player and RealJukebox are included under license from Real Networks, Inc. Copyright 1999-2008. All rights reserved.
CONTENTS
Description of Guide
About this book
    How to find business scenarios and examples
    Prerequisites
    Who should use this guide
Resources
    Documentation
    Education
    Consulting
    International support
    Technical Support
Feedback
Introduction
Storing information in the data warehouse
Basic Intelligence Server operation
    Running Intelligence Server as an application or a service
    What happens when Intelligence Server starts?
    What happens when Intelligence Server stops?
    Using the Listener/Restarter to start Intelligence Server
Storing information in MicroStrategy metadata
Understanding projects and project sources
Communicating with databases
    Connecting to the MicroStrategy metadata
    Connecting to the data warehouse
    Caching database connections
    Benefiting from centralized database access control
    Updating VLDB properties for ODBC connections
Letting users see ETL source information
Processing jobs
    Intelligence Server job processing (common to all jobs)
    Processing report execution
    Processing object browsing
    Processing element browsing
    Processing Report Services document execution
    Processing HTML document execution
    Client-specific job processing
Security checklist before deploying the system

Introduction
The MicroStrategy user model
    About MicroStrategy users
    About MicroStrategy user groups
    Creating and importing users and groups
Authentication options
    Implementing standard authentication
    Implementing Windows authentication
    Implementing database warehouse authentication
    Implementing database warehouse and metadata (6.x) authentication
    Implementing anonymous authentication
Implementing LDAP authentication
    LDAP information flow
    Implementing LDAP authentication in MicroStrategy
    Defining LDAP search filters to verify and import users and groups at login
    Importing or linking LDAP users and groups in MicroStrategy
    Implementing different LDAP security types
    Synchronizing imported LDAP users and groups
    Troubleshooting LDAP authentication in MicroStrategy
Controlling access to data
    Controlling access to data at the database (RDBMS) level
    Controlling access to data at the application level
Controlling access to application functionality
    Privileges
    Permissions
    Access control lists and personalized drill paths in Web
    Access control lists and report execution
Merging users or groups
    How users and groups are merged
    Running the User Merge Wizard
Tying it all together: examples
    Simple setup: Standard authentication and default database login for the database instance
    Audit trail: Database authentication and linked warehouse login
    Security views: Windows authentication and linked warehouse login
    Connection maps: Standard authentication, connection maps, and partitioned fact tables
    Controlling access to a project

Introduction
Verifying, auditing, and updating licenses
    Verifying Named User licenses
    Verifying CPU licenses
    Using License Manager
    Auditing your system for the proper licenses
    Updating a license
Updating CPU affinity
    CPU affinity for Intelligence Server on Windows
    CPU affinity for Intelligence Server on UNIX/Linux
    CPU affinity for MicroStrategy Web

Introduction
Report caches
    Types of report caches
    Location of report caches
    Report cache file format
    Report cache index files
    Cache matching algorithm
    Disabling report caching
    Monitoring report caches
    Managing report caches
    Deleting or purging report caches
    Configuring report cache settings
    Summary table of report cache settings
Element caches
    Element caching terminology
    Location of element caches
    Cache matching algorithm
    Enabling or disabling element caching
    Limiting the number of elements displayed and cached at a time
    Caching algorithm
    Limiting the amount of memory available for element caches
    Limiting which attribute elements a user can see
    Limiting element caches by database connection
    Limiting element caches by database login
    Deleting or purging element caches
    Summary table of element cache settings
Object caches
    Cache matching algorithm
    Enabling or disabling object caching
    Limiting the amount of memory available for object caches
    Deleting or purging object caches
    Summary table of object caching settings
Managing report caches: History List
    History List file location
    History List backup frequency
    History Lists and caching
    History Lists in a clustered environment
    Accessing History Lists
    Using History Lists
    Archiving History List messages
    Managing History Lists
    Configuring History Lists

Introduction
Monitoring the current state of the system
    Managing job status: Job Monitor
    Managing project status, configuration, or security: Project Monitor
    Managing user connections to a project: User Connection Monitor
    Managing database instance connections: Database Connection Monitor
    Managing scheduled jobs and tasks: Schedule Monitor
    Managing report and History List caches: Cache Monitor
    Managing clustered Intelligence Servers: Cluster Monitor
    Managing system memory and resources: Windows Performance Monitor
Analyzing system usage with Enterprise Manager
    Understanding the Enterprise Manager architecture
    Installing and configuring Enterprise Manager
    Analyzing statistics with Enterprise Manager
    Reporting in Enterprise Manager

Introduction
Setting Project Modes
    Project and data warehouse maintenance example scenarios
Clustering multiple machines in the system
    Understanding the MicroStrategy clustered environment
    Clustering Intelligence Servers: Windows environment
    Clustering Intelligence Servers: UNIX/Linux environment
    Distributing projects across nodes in a cluster
    Updating caches in a clustered environment
    Connecting MicroStrategy Web to a cluster
    Monitoring the cluster and its nodes
    Shutting down a node
    Troubleshooting clustering
Tuning the system for best performance
    Tuning overview
    Managing user sessions
    Governing requests
    Managing job execution
    Governing results delivery
    Managing system resources
    Designing system architecture
    Designing reports
    Configuring Intelligence Server and projects
How Narrowcast Server affects Intelligence Server
    Application design considerations
    Web delivery (Send now and Scheduled e-mail)
    How Narrowcast Server connects to Intelligence Server

7. Automating tasks
Introduction
Scheduling jobs and administrative tasks
    Creating schedules
    Scheduling reports
    Scheduling administration tasks
    Managing scheduled requests
Automating processes using Command Manager
    Using Command Manager
    Command Manager script editor features
    Command Manager script syntax
    Executing a script in Command Manager
    Using Command Manager with OEM software
Using automated installation techniques
    Using a Response file to install the product
    Using a Response file to configure the product
    Running a silent installation

Introduction
The application life cycle
    Recommended scenario: Development -> test -> production
    Real-life scenario: New version from an application developer
Implementing the recommended application life cycle
Duplicating a project
    Overview of the Project Duplication Wizard
    Migrating a project to a new database platform
Updating projects with new objects
    Comparing Project Merge to Object Manager
    Locking projects
Copying objects between projects
    Copying objects
    What happens when you copy or move an object
    Resolving conflicts when copying objects
Merging projects to synchronize objects
    What happens when you merge projects
    Merging projects with the Project Merge Wizard
    Resolving conflicts when merging projects
Comparing and tracking projects
    Comparing objects between two projects
    Tracking your projects with the Search Export feature
Verifying reports and documents
    What is an integrity test?
    Creating an integrity test
    Executing an integrity test
    Viewing the results of a test
Deleting unused schema objects: managed objects
    Deleting managed objects one-by-one
    Deleting all unused managed objects

9. Troubleshooting
Introduction
Methodology for finding trouble spots
Finding trouble spots using diagnostics
    Configuring what is logged: Diagnostics and Performance Logging tool
    Reading log files
    Server state dumps
    Understanding memory depletions
    Working with exceptions
Troubleshooting authentication
    Windows authentication in MicroStrategy Web
    Failure to log in to server (three-tier) project source
    LDAP authentication
Fixing inconsistencies in the metadata
Troubleshooting date/time functions
Troubleshooting performance
    Project performance
Troubleshooting report results
    Graph report results
    Number of report result rows
    SQL/MDX string length
    Freeform SQL report error
    Document error in MicroStrategy Web
Failure to start MicroStrategy Intelligence Server
    Logon failure
    Failure to activate Intelligence Server
Troubleshooting clustered environments
    Troubleshooting connection problems in a clustered environment
    Troubleshooting caches in a clustered environment
    Troubleshooting node synchronization in a clustered environment
Troubleshooting Enterprise Manager
    Statistics logging
    Data loading
Resources for troubleshooting
    MicroStrategy readme and release notes
    MicroStrategy Knowledge Base
    Customer Forum
    Technical Support

Introduction
Supporting your system configuration
    Order of precedence
Accessing and working with VLDB properties
    Opening the VLDB Properties Editor
    Creating a VLDB settings report
    Viewing and changing VLDB properties
    Viewing and changing advanced VLDB properties
    Setting all VLDB properties to default
    Upgrading the VLDB options for a particular database type
    Modifying the VLDB properties for a warehouse database instance
Details for all VLDB properties
    Displaying, evaluating, and ordering data: Analytical Engine
    Limiting report rows, SQL size, and SQL time-out: Governing
    Retrieving data: Indexing
    Relating column data with SQL: Joins
    Modifying third-party cube sources in MicroStrategy: MDX
    Calculating data: Metrics
    Customizing SQL statements: Pre/Post Statements
    Optimizing queries
    Selecting and inserting data with SQL: Select/Insert
    Creating and supporting tables with SQL: Tables
Default VLDB settings for specific databases

Introduction
Access Control List permissions for an object
    Access control entries
Permissions for server governing and configuration
List of all privileges
    Web Reporter privileges
    Web Analyst privileges
    Web Professional privileges
    Web MMT Option privilege
    Common privileges
    MicroStrategy Office privileges
    MicroStrategy Mobile privileges
    Desktop Analyst privileges
    Desktop Designer privileges
    Architect privileges
    MicroStrategy Administrator privileges
    Integrity Manager privileges
    Administration privileges
    How user groups inherit privileges
Default user group privileges
    Group privileges for new projects
    LDAP group privileges in MicroStrategy
Privileges for special user roles
    Security filter role

C. Enterprise Manager Data Model and Object Definitions

Introduction
Enterprise Manager data model
Enterprise Manager statistics data dictionary
Enterprise Manager data warehouse tables
    Fact tables
    Lookup tables
    Transformation tables
    Indicator lookup tables
    Relationship tables
Enterprise Manager metadata tables
Enterprise Manager attributes
    Application Objects attributes
    Configuration Objects attributes
    Data Load attributes
    Date attributes
    Document Job attributes
    Indicators and Flags attributes
    Report Job attributes
    Schema Objects attributes
    Session attributes
    Time attributes

Introduction
Assigning privileges for editions of MicroStrategy Web products
Using the MicroStrategy Web Administrator page
    Controlling access to the Administrator page
Defining project defaults
    Loading and applying default values
Integrating Narrowcast Server with MicroStrategy Web products
Enabling users to install MicroStrategy Office from Web
Using additional security features for MicroStrategy Web products
    Using firewalls
    Working with SSL (HTTPS)
    Using cookies
    Using encryption
    Applying file-level security
    Sample MicroStrategy system
FAQs for configuring and tuning MicroStrategy Web products

Introduction
Cookies in MicroStrategy Web
    Session information
    Default user name
    Project information
    Current language
    GUI settings
    Personal autostyle information
    System autostyle information
    Connection information
    Available projects information
    Global user preferences
    Cached preferences
    Preferences
Cookies in MicroStrategy Web Universal
    Permanent cookies
    Temporary cookies
    Project cookies

About the Diagnostics and Performance Logging tool
Features you can trace and log

Introduction ................................................................................ 979 Executing a script with Command Manager Runtime................ 980 Syntax reference guide.............................................................. 982 Cache management statements................................................ 983 Alter report caching statement ............................................. 983 Alter object caching statement............................................. 984 Alter element caching statement.......................................... 985 List caching properties statement ........................................ 986 Purge caching statement ..................................................... 986 Delete report cache statement............................................. 987 Expire report cache statement ............................................. 987 List report cache properties statement................................. 988 List report caches statement................................................ 988 Invalidate report cache statement........................................ 989 Connection mapping management statements ......................... 989 Create connection map........................................................ 990 Alter connection map statement .......................................... 990 Delete connection map statement ....................................... 991 DBConnection management statements ................................... 992 Create DBConnection statement ......................................... 992 Alter DBConnection statement............................................. 993 Delete DBConnection statement.......................................... 994 List DBConnection properties statement ............................. 995 List DBConnections statement............................................. 995 DBInstance management statements........................................ 
996 Create DBInstance statement.............................................. 996 Alter DBInstance statement ................................................. 997 Delete DBInstance statement .............................................. 999 List DBInstance properties statement .................................. 999 List DBInstances statement ............................................... 1000
DBLogin management statements .......................................... 1000 Create DBLogin statement ................................................ 1000 Alter DBLogin statement .................................................... 1001 Delete DBLogin statement ................................................. 1002 List DBLogin properties statement..................................... 1002 List DBLogins statement .................................................... 1003 Event management statements ............................................... 1003 Trigger event statement ..................................................... 1003 Create event statement ..................................................... 1004 Alter event statement ......................................................... 1004 Delete event statement ...................................................... 1005 List events statement ......................................................... 1005 Miscellaneous statements ....................................................... 1005 Start service statement ...................................................... 1005 Stop service statement ...................................................... 1006 Project configuration management statements ....................... 1007 Alter project configuration statement ................................. 1007 Add DBInstance statement ................................................ 1009 Remove DBInstance statement ......................................... 1010 Project management statements ............................................. 1011 Resume project statement ................................................. 1011 Load project statement ...................................................... 1011 Unload project statement ................................................... 1012 Update project statement................................................... 
1012 Delete project statement .................................................... 1012 Schedule management statements ......................................... 1013 Create schedule statement ................................................ 1013 Alter schedule statement ................................................... 1015 Delete schedule statement ................................................ 1017 List schedule properties statement .................................... 1017 List schedules statement ................................................... 1018 Server configuration management statements ........................ 1018 Alter server configuration statement .................................. 1018 List server configuration properties statement ................... 1020 Apply run-time settings statement...................................... 1021 Purge statistics statement.................................................. 1021 Server management statements.............................................. 1022 Start server statement ....................................................... 1022 Stop server statement........................................................ 1023 Restart server statement.................................................... 1023 List server properties statement......................................... 1024
Description of Guide
This guide is the primary resource for system administrators learning the concepts and high-level steps for implementing, deploying, maintaining, tuning, and troubleshooting the MicroStrategy business intelligence system. It picks up where the MicroStrategy Installation and Configuration Guide leaves off (getting a basic system running) and offers a full discussion of the concepts that a system administrator must consider before the system is made widely available to users in the enterprise. The chapters provide conceptual information about the following topics.

Understanding the MicroStrategy System architecture
This chapter provides an overview of the architecture, how the MicroStrategy system interacts with various external components and systems, and how Intelligence Server connects to and uses the data warehouse. It describes what Intelligence Server is, what happens when it is started and stopped, and what MicroStrategy metadata is and what purposes it serves, as well as what a
MicroStrategy project is and what MicroStrategy objects are. It describes all aspects of connecting to databases, including database instances and database connections, and what a MicroStrategy server definition is and what it controls. It also describes general job processing flows within the MicroStrategy system, including report execution, object and element browsing, and HTML document execution.

Setting up user security
This chapter covers what users and groups are, what the different authentication modes are and how to implement them, how to control access to data at both the application and database levels, and how to control access to application functionality. The examples section shows how security features in both the MicroStrategy system and the database management systems can be used together.

A further chapter describes how to manage MicroStrategy Web and MicroStrategy Web Universal, what the Web-related privileges are for the product, and how to use the Administrator page, including how to set project defaults. It also describes additional security requirements or options you can use with MicroStrategy Web products, such as digital certificates, firewalls, secure sockets layers, and so on.

Managing your licenses
This chapter covers making the system available to users. This includes best practices for deploying the system; how to implement easy installations using SMS systems and silent installs; what License Manager is and how to use it; and setting up security in the MicroStrategy environment.

Increasing system efficiency: Caching
This chapter explains how you can make the system efficient and remove load from Intelligence Server by using the caching and History List features. It describes how caches work in the system, where they are stored, what the matching requirements are for using a cache, how to create pre-calculated data using aggregate tables,
and how to administer caches, including how to invalidate them. It also describes what the History List is, how it is used in both Web and Desktop, and how to administer it.

Monitoring and analyzing system usage
This chapter explains how you can use the monitors available in the system to see the state of the system at any time, past or present. It describes how Enterprise Manager can help do this by logging and monitoring statistics.

Maintaining and optimizing your system
This chapter explains how to keep the system running smoothly and at peak performance. It discusses the different modes to which a project can be set, and how these settings affect database load. It describes how clustering works in the system, what information is synchronized within a cluster, and how clusters interact with and affect scheduling and caching. It covers tuning, and how setting Intelligence Server governors, memory management settings, and job prioritization settings can protect the system from being overloaded. It briefly covers designing Narrowcast Server applications in a way that does not overload the system.

Automating tasks
This chapter covers the methods MicroStrategy provides to automate repeated tasks. It describes how scheduling reports and administration tasks can ease the load on the system during peak times by doing work ahead of time in off-peak hours, and how to create and use both time- and event-based schedules. It also describes how to use Command Manager to automate certain administration processes.

Managing your projects
This chapter covers MicroStrategy's recommended application development lifecycle and how to manage system objects through the lifecycle using specific MicroStrategy tools: Object Manager, Project Merge, Project Mover, and Integrity Manager.
Troubleshooting
This chapter provides a high-level methodology for finding trouble spots in the system and fixing them. It describes how to use the Diagnostics and Performance Logging tool, as well as Enterprise Manager, to help diagnose bottlenecks, memory depletion, exceptions, and authentication problems.
Prerequisites
Before working with this document, you should be familiar with the information in the MicroStrategy Installation and Configuration Guide.
Resources
Documentation
MicroStrategy provides both manuals and online help; these two information sources provide different types of information, as described below.
Manuals: MicroStrategy manuals provide:
Introductory information
Concepts
Checklists
Examples
High-level procedures to get started
Online help: MicroStrategy online help provides:
Detailed steps to perform procedures
Descriptions of each option on every software screen
Manuals
The following manuals are available from your MicroStrategy disk or the machine where MicroStrategy was installed. The procedure to access them is below. Adobe Acrobat Reader 4.0 or higher is required to view these documents. If you do not have Acrobat Reader installed on your computer, you can download it from www.adobe.com. The best place for all users to begin is with the MicroStrategy Basic Reporting Guide.
MicroStrategy Overview
Introduction to MicroStrategy: Evaluation Guide
Instructions for installing, configuring, and using the MicroStrategy Evaluation Edition of the software.

MicroStrategy Quick Start Guide
Overview of the installation and evaluation process, and additional resources.
MicroStrategy Project Design Guide
Information to create and modify MicroStrategy projects, and understand facts, attributes, hierarchies, transformations, advanced schemas, and project optimization.
MicroStrategy Basic Reporting Guide
Instructions to get started with MicroStrategy Desktop and MicroStrategy Web, and how to analyze data in a report. Includes the basics for creating reports, metrics, filters, and prompts.

MicroStrategy Advanced Reporting Guide
Instructions for advanced topics in the MicroStrategy system, building on information in the Basic Reporting Guide. Topics include reports, Freeform SQL reports, Query Builder reports, OLAP Cube reports, filters, metrics, Data Mining Services, custom groups, consolidations, and prompts.

MicroStrategy Report Services Document Creation Guide
Instructions to design and create Report Services documents, building on information in the Basic Reporting Guide and Advanced Reporting Guide.

MicroStrategy Office User Guide
Instructions for using MicroStrategy Office to work with MicroStrategy reports and documents in Microsoft Excel, PowerPoint, Word, and Outlook, to analyze, format, and distribute business data.

MicroStrategy Mobile User Guide
Instructions for using MicroStrategy Mobile to view and analyze data, and perform other business tasks with MicroStrategy reports and documents on a mobile device. Covers installation and configuration of MicroStrategy Mobile, and how a designer working in MicroStrategy Desktop or MicroStrategy Web can create effective reports and documents for use with MicroStrategy Mobile.
MicroStrategy System Administration Guide
Concepts and high-level steps to implement, deploy, maintain, tune, and troubleshoot a MicroStrategy business intelligence system.

MicroStrategy Functions Reference
Function syntax and formula components; instructions to use functions in metrics, filters, and attribute forms; examples of functions in business scenarios.

MicroStrategy OLAP Cube Reporting Guide
Information to integrate MicroStrategy with OLAP cube sources. You can integrate data from OLAP cube sources such as SAP BI, Microsoft Analysis Services, and Hyperion Essbase into your MicroStrategy projects and applications.

MicroStrategy Web Services Administration Guide
Concepts and tasks to install, configure, tune, and troubleshoot MicroStrategy Web Services.
MicroStrategy Narrowcast Server Upgrade Guide
Instructions to upgrade an existing Narrowcast Server.
1 From the Windows Start menu, choose Programs, then MicroStrategy, then Product Manuals. A Web page opens with a list of available manuals in PDF format.
2 Click the link for the desired manual.
3 Some documentation is provided in HTML Help format. When you select one of these guides, the File Download dialog box opens. Select Open this file from its current location, and click OK.
If bookmarks are not visible on the left side of an Acrobat (PDF) document, from the View menu choose Bookmarks and Page. These steps may vary slightly depending on your version of Adobe Acrobat Reader.
Online help
MicroStrategy provides several ways to access online help:
Help button: Use the Help button at the bottom of most software screens to see context-sensitive help.
Help menu: Select Contents and Index to see the main table of contents for the help system.
F1 key: Press F1 to see context-sensitive help addressing the function or task you are currently performing.
Documentation standards
MicroStrategy online help and PDF manuals (available both online and in printed format) use standards to help you identify certain types of content. The following table lists these standards.
These standards may differ depending on the language of this manual; some languages have rules that supersede the table below.
Type: bold
Indicates: Button names, check boxes, dialog boxes, options, lists, and menus that are the focus of actions or part of a list of such GUI elements and their definitions; also text to be entered by the user.
Example: Click Select

Type: Courier font
Indicates: Calculations, code samples, registry keys, path and file names, URLs, and messages displayed on the screen.
Examples: Sum(revenue)/number of months. and c:\filename, d:\foldername\filename

Type: UPPERCASE
Indicates: Keyboard command keys (such as ENTER) and shortcut keys (such as CTRL+V).
Example: To bold the selected text, press CTRL+B.

Type: +
Indicates: A keyboard command that calls for the use of more than one key (for example, SHIFT+F1).

A note icon indicates helpful information for specific situations.
A warning icon alerts you to important information such as potential security risks; these should be read before continuing.
Education
MicroStrategy Education Services provides a comprehensive curriculum and highly skilled education consultants. Many customers and partners from over 800 different organizations have benefited from MicroStrategy instruction.
Courses that can help you prepare for using this manual or that address some of the information in this manual include: MicroStrategy Intelligence Server Administration MicroStrategy Administrator
For the most up-to-date and detailed description of education offerings and course curricula, visit www.microstrategy.com/Education.
Consulting
MicroStrategy Consulting Services provides proven methods for delivering leading-edge technology solutions. Offerings include complex security architecture designs, performance and tuning, project and testing strategies and recommendations, strategic planning, and more. For a detailed description of consulting offerings, visit www.microstrategy.com/Consulting.
International support
MicroStrategy supports several locales. Support for a locale typically includes native database and operating system support; support for date formats, numeric formats, and currency symbols; and the availability of translated interfaces and documentation. The level of support is defined in terms of the components of a MicroStrategy Business Intelligence environment. A MicroStrategy Business Intelligence environment consists of the following components, collectively known as a configuration:
Warehouse, metadata, and statistics databases
MicroStrategy Intelligence Server
MicroStrategy Web or Web Universal Server
MicroStrategy Desktop client
Web browser
MicroStrategy is certified in homogeneous configurations (where all the components are in the same locale) in the following languages: English (US), French, German, Italian, Japanese, Korean, Portuguese (Brazilian), Spanish, Chinese (Simplified), Chinese (Traditional), Danish, and Swedish. MicroStrategy also provides limited support for heterogeneous configurations (where some of the components may be in different locales). Contact MicroStrategy Technical Support for more details. A translated user interface is available in each of the above languages. In addition, translated versions of the online help files and product documentation are available in several of the above languages.
Technical Support
If you have questions about a specific MicroStrategy product, you should:
1 Consult the product guides, online help, and readme files. Paths to access each are described above.
2 Consult the MicroStrategy Knowledge Base online at http://www.microstrategy.com/support/k_base/index.asp. A technical administrator in your organization may be able to help you resolve your issues immediately.
3 If the resources listed in the steps above do not provide a solution, contact MicroStrategy Technical Support directly. To ensure the most effective and productive relationship with MicroStrategy Technical Support, review the Policies and Procedures document posted at http://www.microstrategy.com/Support/Policies. Refer to the terms of your purchase agreement to determine the type of support available to you.
MicroStrategy Technical Support can be contacted by your company's Support Liaison. A Support Liaison is a person whom your company has designated as a point of contact with MicroStrategy's support personnel. All customer inquiries and case communications must come through these named individuals. Your company may designate two employees to serve as its Support Liaisons, and can request to change its Support Liaisons twice per year with prior written notice to MicroStrategy Technical Support. It is recommended that you designate Support Liaisons who have MicroStrategy Administrator privileges. This can eliminate security conflicts and improve case resolution time. When troubleshooting and researching issues, MicroStrategy Technical Support personnel may make recommendations that require administrative privileges within MicroStrategy, or that assume that the designated Support Liaison has a security level that permits them to fully manipulate the MicroStrategy projects and has access to potentially sensitive project data, such as security filter definitions.
You can also discuss the issue with other users by posting a question on the MicroStrategy Customer Forum at https://forums.microstrategy.com.
The following table shows where, when, and how to contact MicroStrategy Technical Support. If your Support Liaison is unable to reach MicroStrategy Technical Support by phone during the hours of operation, they can leave a voicemail message, send email or fax, or log a case using the Online Support Interface. The individual Technical Support Centers are closed on certain public holidays.
North America
E-mail: support@microstrategy.com
Web: https://support.microstrategy.com
Fax: (703) 848-8709
Phone: (703) 848-8700
Hours: 9:00 A.M. to 7:00 P.M. Eastern Time, Monday to Friday except holidays

Europe
E-mail: eurosupp@microstrategy.com
Web: https://support.microstrategy.com
Fax: +44 (0) 208 711 2525
The European Technical Support Centre is closed on certain public holidays. These holidays reflect the national public holidays in each country.
Phone:
United Kingdom: +44 (0) 208 396 0085
Poland: +48 22 321 8680
France: +33 17 099 4737
Germany: +49 22 16501 0609
Ireland: +353 1436 0916
Italy: +39 023626 9668
Spain: +34 91788 9852
Scandinavia and Finland: +46 8505 20421
The Netherlands: +31 20 794 8425
International distributors: +44 (0) 208 396 0080
Hours:
United Kingdom: 9:00 A.M. to 6:00 P.M. GMT, Monday to Friday except holidays
Mainland Europe: 9:00 A.M. to 6:00 P.M. CET, Monday to Friday except holidays
Asia Pacific
E-mail: apsupport@microstrategy.com
Web: https://support.microstrategy.com
Phone:
Korea: +82 2 560 6565 (Fax: +82 2 560 6555)
Japan: +81 3 3511 6720 (Fax: +81 3 3511 6740)
Asia Pacific (supporting all except Japan and Korea): +65 6303 8969 (Fax: +65 6303 8999)
Hours: 9:00 A.M. to 6:00 P.M. JST (Tokyo), Monday to Friday except holidays

Latin America
E-mail: latamsupport@microstrategy.com
Web: https://support.microstrategy.com
Phone:
LATAM (except Brazil and Argentina): +54 11 5222 9360 (Fax: +54 11 5222 9355)
Argentina: 0 800 444 MSTR (Fax: +54 11 5222 9355)
Brazil: +55 11 3054 1010 (Fax: +55 11 3044 4088)
Hours: 9:00 A.M. to 7:00 P.M. BST (Sao Paulo), Monday to Friday except holidays
Support Liaisons should contact the Technical Support Center from which they obtained their MicroStrategy software licenses or the Technical Support Center to which they have been designated.
If this is the Support Liaison's first call, they should also be prepared to provide the following:
Street address
Phone number
Fax number
Email address
To help the Technical Support representative resolve the problem promptly and effectively, be prepared to provide the following additional information:
Case number: Keep a record of the number assigned to each case logged with MicroStrategy Technical Support, and be ready to provide it when inquiring about an existing case.
Software version and product registration numbers of the MicroStrategy software products you are using.
Case description: What causes the condition to occur? Does the condition occur sporadically or each time a certain action is performed? Does the condition occur on all machines or just on one? When did the condition first occur? What events took place immediately prior to the first occurrence of the condition (for example, a major database load, a database move, or a software upgrade)? If there was an error message, what was its exact wording? What steps have you taken to isolate and resolve the issue? What were the results?
System configuration (the information needed depends on the nature of the problem; not all items listed below may be necessary):
Computer hardware specifications (processor speed, RAM, disk space, and so on)
Network protocol used
ODBC driver manufacturer and version
Database gateway software version
Browser manufacturer and version (for MicroStrategy Web-related problems)
Web server manufacturer and version (for MicroStrategy Web-related problems)
If the issue requires additional investigation or testing, the Support Liaison and the MicroStrategy Technical Support representative should agree on certain action items to be performed. The Support Liaison should perform any agreed-upon actions before contacting MicroStrategy Technical Support again regarding the issue. If the Technical Support representative is responsible for an action item, the Support Liaison may call MicroStrategy Technical Support at any time to inquire about the status of the issue.
Feedback
Please send any comments or suggestions about user documentation for MicroStrategy products to: documentationfeedback@microstrategy.com Send suggestions for product enhancements to: support@microstrategy.com When you provide feedback to us, please include the name and version of the products you are currently using. Your feedback is important to us as we prepare for future releases.
Introduction
This chapter summarizes the major components in the MicroStrategy system architecture. Read the discussion of BI architecture in Chapter 1 of the MicroStrategy Project Design Guide as context for this chapter, which builds on the concepts presented there and provides information specific to your role as a system administrator. The discussions in this chapter include:
2008 MicroStrategy, Inc.
Storing information in the data warehouse, page 2
Basic Intelligence Server operation, page 3
Storing information in MicroStrategy metadata, page 14
Understanding projects and project sources, page 16
Communicating with databases, page 17
Letting users see ETL source information, page 24
Processing jobs, page 27
Security checklist before deploying the system, page 47
Changes to Intelligence Server configuration options that cannot be made while Intelligence Server is running
Potential power outages due to storms or planned building maintenance
You can start and stop MicroStrategy Intelligence Server manually as a service using any of the following methods:
MicroStrategy Service Manager is a management application that can run in the background on the Intelligence Server machine. It is often the most convenient way to start and stop Intelligence Server. For instructions, see MicroStrategy Service Manager, page 5.
If you are already using MicroStrategy Desktop, you might want to start and stop Intelligence Server from within Desktop. For instructions, see MicroStrategy Desktop, page 6.
You can start and stop Intelligence Server as part of a Command Manager script. For details, see MicroStrategy Command Manager, page 7.
Finally, you can start and stop Intelligence Server using operating system commands. For instructions, see Command line, page 7.
To remotely start and stop the Intelligence Server service in Windows, you must be logged in to the remote machine as a Windows user with administrative privileges.
To open MicroStrategy Service Manager in Windows

In the system tray of the Windows task bar, double-click the MicroStrategy Service Manager icon.
If the icon is not present in the system tray, then from the Windows Start menu, point to Programs, then MicroStrategy, then Tools, then choose Service Manager. The MicroStrategy Service Manager dialog box opens.
To open MicroStrategy Service Manager in UNIX
In UNIX, Service Manager requires an X-Windows environment.
1 Browse to the directory specified as the home directory during MicroStrategy installation (the default is ~/MicroStrategy), then browse to bin.
2 Type ./mstrsvcmgr and press ENTER. The MicroStrategy Service Manager dialog box opens.
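The two steps above reduce to a single path. The helper below is only a sketch: it resolves and prints where the Service Manager binary should be, given the default home directory named in this guide; it does not launch the X-Windows tool itself.

```shell
#!/bin/sh
# Resolve the expected location of the Service Manager binary under a
# MicroStrategy home directory. Prints the path only; does not launch it.
svcmgr_path() {
    # Default home is ~/MicroStrategy, per the installation default above;
    # pass a different directory if you chose another home at install time.
    home="${1:-$HOME/MicroStrategy}"
    echo "$home/bin/mstrsvcmgr"
}

svcmgr_path /opt/mstr   # example with a custom home chosen at install time
```

Running the printed path (for example, `cd ~/MicroStrategy/bin && ./mstrsvcmgr`) performs step 2 above.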
MicroStrategy Desktop
You can start and stop MicroStrategy Intelligence Server from MicroStrategy Desktop.
1 In the Folder List, right-click the Administration icon. 2 Choose Start Server to start it or Stop Server to stop it.
Command line
You can start and stop MicroStrategy Intelligence Server from a command prompt, using the MicroStrategy Server Control Utility. This utility is invoked by the command mstrctl. By default the utility is located in C:\Program Files\Common Files\MicroStrategy\ in Windows, and in ~/MicroStrategy/bin in UNIX.
The syntax to start the service is:
mstrctl -s IntelligenceServer start --service
The syntax to stop the service is:
mstrctl -s IntelligenceServer stop
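The two command lines above can be wrapped in a small function for use in scripts. This is a sketch, not part of the product: the function only builds and prints the command (a dry run), and the instance name IntelligenceServer is taken from the syntax examples above; substitute your own server instance name.

```shell
#!/bin/sh
# Build the mstrctl command line for starting or stopping the
# Intelligence Server service. Dry run: the command is printed, not
# executed, so this is safe to try before wiring it into a real script.
mstrctl_cmd() {
    instance="$1"   # server instance name (for example, IntelligenceServer)
    action="$2"     # "start" or "stop"
    if [ "$action" = "start" ]; then
        # --service starts Intelligence Server as a service, per the syntax above
        echo "mstrctl -s $instance start --service"
    else
        echo "mstrctl -s $instance stop"
    fi
}

mstrctl_cmd IntelligenceServer start
mstrctl_cmd IntelligenceServer stop
```

To execute rather than print, replace the echo calls with the commands themselves (mstrctl must then be on the PATH, or invoked from its default location given above).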
1 On the Windows Start menu, point to Settings, then choose Control Panel. The Control Panel window opens. 2 Double-click Administrative Tools, and then double-click Services. The Services window opens. 3 From the Services list, select MicroStrategy Intelligence Server. The Properties dialog box for the Intelligence Server service opens. 4 You can do any of the following: To start the service, click Start. To stop the service, click Stop. To change the startup type, select a startup option from the drop-down list. Automatic means that the service starts when the computer starts. Manual means that you must start the service manually. Disabled means that you cannot start the service until you change the startup type to one of the other types. 5 When you are finished, click OK to close the Properties dialog box.
Executing this file from the command line displays the following administration menu in Windows, and a similar menu in UNIX.
To use these options, type the corresponding letter on the command line and press Enter. For example, to monitor users, type U and press Enter. The information is displayed.
When Intelligence Server starts, it:
Reads from the machine registry which server definition it is supposed to use, and connects to the specified metadata database
Loads configuration and schema information for each loaded project
Loads existing report cache files from automatic backup files into memory for each loaded project (up to the specified maximum RAM setting). This occurs only if report caching is enabled and the Load caches on startup feature is enabled.
Loads schedules
Loads OLAP cube schemas
You can set Intelligence Server to load OLAP cube schemas when it starts, rather than loading them upon running an OLAP cube report. For more details and steps to load OLAP cube schemas when Intelligence Server starts, see the Configuring and Connecting Intelligence Server chapter of the MicroStrategy Installation and Configuration Guide.
In the event of a system or power failure, Intelligence Server cannot capture its current state. The next time the server is started, it loads the state information, caches, and History Lists that were saved in the last automatic backup. (The automatic backup frequency is set using the Intelligence Server Configuration editor.) The server does not re-execute any job that was running until the person requesting the job logs in again.
11
The user who submitted a canceled job sees a message in the History List indicating that there was an error, and must resubmit the job. During an orderly shutdown, Intelligence Server also:
Closes database connections
Logs out connected users from the system
Removes itself from the cluster (if it was in a cluster); it will not rejoin the cluster automatically when restarted.
As noted earlier, if a system or power failure occurs, these actions cannot be performed. Instead, Intelligence Server recovers its state from the latest automatic backup.
1 From the Windows Start menu, point to MicroStrategy, then Tools, then choose Service Manager. The MicroStrategy Service Manager dialog box opens.
2 In the Server drop-down list, select the name of the machine on which MicroStrategy Intelligence Server is installed.
3 In the Service drop-down list, select MicroStrategy Intelligence Server.
4 Click Options. The Service Options dialog box opens.
5 Select Automatic as the Startup Type option.
6 Click OK.
You can also set this using the Services option in the Microsoft Windows Control Panel.
To restart the Intelligence Server service automatically (when it fails unexpectedly) using the Restarter
1 From the Windows Start menu, point to MicroStrategy, then Tools, then choose Service Manager. The MicroStrategy Service Manager dialog box opens.
2 In the Server drop-down list, select the machine on which the MicroStrategy Intelligence Server service is installed.
3 In the Service drop-down list, select MicroStrategy Intelligence Server.
4 Click Options. The Service Options dialog box opens.
5 On the Intelligence Server Options tab, select the Enabled check box for the Re-starter Option.
The MicroStrategy Listener service must also be running for the restarter feature to work.
To start the MicroStrategy Listener service
1 Use the MicroStrategy Service Manager as described above.
2 Select MicroStrategy Listener from the Service drop-down list.
3 Click Start.
In addition to the physical warehouse schema's implicit presence in the metadata, the actual types of objects stored in the metadata are:
Schema objects
Application objects
Configuration objects
Schema objects
Schema objects are objects created, usually by a project designer or architect, based on the logical and physical models. Facts, attributes, and hierarchies are examples of schema objects. These objects are developed in MicroStrategy Architect, which can be accessed from MicroStrategy Desktop.
Application objects
The report designer creates the application objects necessary to run reports. These application objects include reports, report templates, filters, metrics, prompts, and so on. These objects are built in MicroStrategy Desktop or Command Manager. The MicroStrategy Advanced Reporting Guide is devoted to explaining both schema and application objects.
Configuration objects
Administrative and connectivity-related objects, also called configuration objects, are managed in MicroStrategy Desktop (or Command Manager) by an administrator changing the Intelligence Server configuration or project configuration. Examples of configuration objects include users, groups, server definitions and so on. A server definition is an instance of MicroStrategy Intelligence Server and all of its configuration settings. Multiple server definitions can be stored in the metadata, but only one can be run at a time on a given machine. If you want
multiple machines to point to the same metadata, you should cluster them. For more information about clustering, including instructions on how to cluster Intelligence Servers, see Clustering multiple machines in the system, page 329. Pointing multiple Intelligence Servers to the same metadata without clustering may cause metadata inconsistencies. This configuration is not supported, and we strongly recommend that you do not configure your system in this way.
The server definition information includes:
Metadata connectivity information
Metadata DSN
Metadata ID and encrypted password
MicroStrategy administrator user name
MicroStrategy Intelligence Server configuration settings set in MicroStrategy Desktop
This System Administration Guide is devoted to explaining in detail how best to configure Intelligence Server and the rest of the MicroStrategy system configuration objects.
There are three types of project sources, defined based on the type of connection they represent:
Server connection, or three-tier, which specifies the Intelligence Server to connect to.
Direct connection, or two-tier, which bypasses Intelligence Server and allows MicroStrategy Desktop to connect directly to the MicroStrategy metadata and data warehouse. Note that this is primarily for project designers or very small project implementations that have few users. Because this type of connection bypasses Intelligence Server, important benefits such as caching and governing, which help protect the system from being overloaded, are not available.
6.x Project connection (also two-tier), which connects to a MicroStrategy version 6 project in read-only mode.
For more information on project sources, see the MicroStrategy Installation and Configuration Guide.
metadata by reading the server definition registry when it starts. However, this connection is only one segment of the connectivity picture. Consider these questions: How does a MicroStrategy Desktop user access the metadata? How does a user connect to MicroStrategy Intelligence Server? Where is the connection information stored?
The diagram below illustrates 3-tier metadata connectivity between the MicroStrategy metadata database (tier one), Intelligence Server (tier two), and MicroStrategy Desktop (tier three).
In a server (three-tier) environment, MicroStrategy Desktop metadata connectivity is established through the project source. For steps to create a project source, see the MicroStrategy Installation and Configuration Guide.
You can also create and edit a project source using the Project Source Manager in MicroStrategy Desktop. When you use the Project Source Manager, you must specify the MicroStrategy Intelligence Server machine to which to connect. It is through this connection that MicroStrategy Desktop users retrieve metadata information. The MicroStrategy Desktop connection information is stored in the MicroStrategy Desktop machine registry.
Creating a database instance: A MicroStrategy object created in Desktop that represents a connection to the data warehouse. A database instance specifies warehouse connection information such as the data warehouse DSN, login ID and password, and other data warehouse-specific information.
Creating a database connection: Specifies the DSN and database login used to access the data warehouse. A database instance designates one database connection as the default connection for MicroStrategy users.
Creating a database login: Specifies the user ID and password used to access the data warehouse. The database login overrides any login information stored in the DSN.
User connection mapping: The process of mapping MicroStrategy users to database connections and database logins. To execute reports, MicroStrategy users must be mapped to a database connection and login.
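The relationship among these objects can be pictured with a minimal sketch. This is illustrative only: the function and field names below are invented for the example and are not MicroStrategy APIs. It shows a user either receiving an explicitly mapped connection and login or falling back to the database instance's default connection.

```python
def resolve_warehouse_login(user, connection_map, default_connection):
    """Return the database connection (DSN + login) a MicroStrategy user
    should use: the mapped one if present, else the instance default."""
    return connection_map.get(user, default_connection)

# The default connection designated by the database instance.
default = {"dsn": "WH_DSN", "login": "mstr_default"}

# Explicit user connection mappings (user -> connection and login).
mapping = {"jsmith": {"dsn": "WH_DSN", "login": "finance_ro"}}

print(resolve_warehouse_login("jsmith", mapping, default)["login"])  # finance_ro
print(resolve_warehouse_login("guest", mapping, default)["login"])   # mstr_default
```

A mapped user reaches the warehouse with their own database login, while everyone else shares the instance's default connection.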
For procedures to connect to the data warehouse, see the MicroStrategy Installation and Configuration Guide.
A cached connection is used for a job if the following criteria are satisfied:
The connection string for the cached connection matches the connection string that will be used for the job.
The driver mode (multiprocess versus multithreaded) for the cached connection matches the driver mode that will be used for the job.
Intelligence Server does not cache any connections that have pre- or post-SQL statements associated with them, because these options might drastically alter the state of the connection.
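The reuse criteria above can be summarized in a small sketch. This is an illustration of the rules as stated, not MicroStrategy code; the class and function names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class CachedConnection:
    connection_string: str
    multiprocess: bool       # driver mode: multiprocess vs. multithreaded
    has_pre_post_sql: bool   # such connections are never cached/reused

def can_reuse(cached: CachedConnection,
              job_conn_string: str,
              job_multiprocess: bool) -> bool:
    """Return True if the cached connection may serve the new job."""
    if cached.has_pre_post_sql:
        return False  # pre-/post-SQL may have altered the connection state
    return (cached.connection_string == job_conn_string
            and cached.multiprocess == job_multiprocess)

conn = CachedConnection("DSN=WH;UID=load", multiprocess=True, has_pre_post_sql=False)
print(can_reuse(conn, "DSN=WH;UID=load", True))   # True: string and mode match
print(can_reuse(conn, "DSN=WH;UID=etl", True))    # False: different string
```

Both the connection string and the driver mode must match; a connection with pre- or post-SQL is rejected outright.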
Ease of administration/monitoring: Since all database connectivity is handled by Intelligence Server, keeping track of all connections to all databases in the system is easy.
Prioritized access to databases: You can set access priority based on user, project, estimated job cost, or any combination of these.
Multiprocess execution: The ability to run in multiprocess mode means that if one process fails, such as a lost or hung database access thread, the others are not affected.
Database optimizations: Using VLDB properties, Intelligence Server is able to take advantage of the unique performance optimizations that different database servers offer.
For more information about all of the VLDB properties, see SQL Generation and Data Processing: VLDB Properties, page 617.
You may need to manually upgrade the database types if you chose not to run the update metadata process after installing a new release.
1 In the Database Instance editor, click the General tab.
2 Select Upgrade. The Upgrade Database Type dialog box opens.
For more detailed information about manually upgrading VLDB properties, functions, and SQL syntax for your database server, see the online help.
The readme file for each MicroStrategy release lists all DBMSs that are supported or certified for use with MicroStrategy. In some cases, MicroStrategy no longer updates certain DBMS objects as newer versions are released. While we do not normally remove these, in one case we merged Oracle 8i R2 and Oracle 8i R3 into Oracle 8i R2/R3 for both Standard and Enterprise editions (Oracle 8i R3 is no longer being updated). You may need to select the merged version as part of your database instance if you are using a version of Oracle 8i. This will become apparent if date/time functions stop working, particularly in Enterprise Manager.
how MicroStrategy metrics and attributes map to original data sources. This helps users answer questions such as: How recent is my data? Where does the data for this report come from? Is my analysis significant? Have enough successful records been transferred from original sources to make my queries meaningful?
More specifically, the Intelligence Server is able to retrieve information related to each metric, attribute, and fact in the MicroStrategy project. MicroStrategy Desktop users can access the information through the Report, Metric, Attribute, and Fact editors. Each MicroStrategy object can be traced back to the data warehouse table-column pairs on which the object is defined. For example, a metric can be traced back to fact columns in the database based on the metric definition. A single object may trace back to multiple table-column combinations. Using the ETL repository, the Intelligence Server can then trace back to the original data sources for each of these table-column combinations. For each table-column combination, the Intelligence Server retrieves the following:
The original data source
The source tables and columns
The transformation rules used in transferring data from the source to the target (both a text description and the transformation expression itself)
The number of successful and failed rows transferred during the most recent transformation
The start and finish time of the most recent transformation
In MicroStrategy, this functionality is available for Informatica Powermart repositories, version 5.0.
The following execution flow occurs when a user requests ETL Information:
1 The user accesses ETL Information from the Report, Metric, Attribute, or Fact editor (the user must have the View ETL Information privilege).
2 Intelligence Server queries the ETL repository for information about all table-column pairs related to the selected metric, attribute, or fact.
3 This query is made via the MX2 API provided by Informatica, which is available with Powermart 5.0. The MX2 client component must be installed on the Intelligence Server machine to enable the ETL integration functionality. Intelligence Server automatically detects the presence of this component. See your Informatica documentation for more information about MX2.
4 Intelligence Server presents the ETL information back to the user via MicroStrategy Desktop.
The Intelligence Server can also cache the ETL Information to avoid unnecessary queries to the ETL repository. This caching may be enabled optionally for each project. The Intelligence Server loads all ETL data relevant to a project from the ETL repository into memory with the first request. Further requests are then served by Intelligence Server without re-querying the ETL repository. The cache can be refreshed manually using Desktop (using the Project Configuration editor: Project definition category and the ETL configuration subcategory) or using the MicroStrategy SDK.
To implement the ETL functionality for a project
1 Informatica's MX2 client component must be installed on the Intelligence Server machine. (It is not required on the Desktop machine.)
2 For the specific project, access the Project Configuration editor: Project definition category and, in the ETL Configuration subcategory, specify the ETL settings.
3 Grant users the View ETL Information privilege.
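The caching behavior described above (load everything on the first request, serve later requests from memory, refresh manually) can be sketched as follows. This is a minimal illustration, not MicroStrategy code: the class, method, and data names are invented for the example.

```python
class EtlInfoCache:
    """Illustrative per-project cache of ETL lineage data.

    All ETL data for a project is loaded from the repository on the first
    request; later requests are served from memory until a manual refresh."""

    def __init__(self, query_repository):
        self._query = query_repository   # callable: project name -> lineage dict
        self._data = {}                  # project name -> cached lineage

    def lookup(self, project, table_column):
        if project not in self._data:             # first request loads everything
            self._data[project] = self._query(project)
        return self._data[project].get(table_column)

    def refresh(self, project):
        """Manual refresh, as done from the Project Configuration editor."""
        self._data.pop(project, None)

calls = []
def fake_repository(project):
    calls.append(project)  # count how often the ETL repository is queried
    return {("F_SALES", "REVENUE"): {"source": "orders_extract", "rows_ok": 10432}}

cache = EtlInfoCache(fake_repository)
cache.lookup("Sales", ("F_SALES", "REVENUE"))
cache.lookup("Sales", ("F_SALES", "REVENUE"))
print(len(calls))   # 1: the repository was queried only once
```

After a refresh, the next lookup queries the ETL repository again, mirroring the manual refresh available in Desktop or via the SDK.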
Processing jobs
Any request to the system submitted by users from any part of the MicroStrategy system is known as a job. Jobs may originate from servers such as Narrowcast Server or the Intelligence Server's internal scheduler, or from client applications such as Desktop, Web, Mobile, Integrity Manager, or another custom-coded application. The main requests include report execution requests, object browsing requests, element browsing requests, Report Services document requests, and HTML document requests.
4 The result is sent back to the client application, which presents the result to the user.
Most of the actual processing takes place in steps 2 and 3, internally within Intelligence Server. Although the user request must be received and the final results must be delivered (steps 1 and 4), those are relatively simple tasks. It is more useful to explain how Intelligence Server works. Therefore, the rest of this section discusses Intelligence Server activity as it processes jobs. This includes:
Processing report execution, page 28
Processing object browsing, page 33
Processing element browsing, page 34
Processing Report Services document execution, page 37
Processing HTML document execution, page 40
Client-specific job processing, page 42
Being familiar with this material should help you to understand and interpret statistics, Enterprise Manager reports, and other log files available within the system. This may help you to know where to look for bottlenecks in the system and how you can tune the system to minimize their effects.
The most prominent Intelligence Server components related to report job processing are listed here.
Component: Function
Analytical Engine Server: Performs complex calculations on a result set returned from the data warehouse, such as statistical and financial functions. Also sorts raw results returned from the Query Engine into a cross-tabbed grid suitable for display to the user. In addition, it performs subtotal calculations on the result set. Depending on the metric definitions, the Analytical Engine also performs metric calculations that were not or could not be performed using SQL, such as complex functions.
Metadata Server: Controls all access to the metadata for the entire project.
Object Server: Creates, modifies, saves, loads, and deletes objects from metadata. Also maintains a server cache of recently used objects. The Object Server does not manipulate metadata directly: the Metadata Server does all reading from and writing to the metadata, and the Object Server uses the Metadata Server to make any changes to the metadata.
Query Engine: Sends the SQL generated by the SQL Engine to the data warehouse for execution.
Report Server: Creates and manages all server reporting instance objects. Maintains a cache of executed reports.
Resolution Server: Resolves prompts for report requests. Works in conjunction with the Object Server and Element Server to retrieve necessary objects and elements for a given request.
SQL Engine: Generates the SQL needed for the report.
Below is a typical scenario of a report's execution within Intelligence Server. The diagram shows the report processing steps. An explanation of each step follows the diagram.
[Diagram: report execution flow among the Client, Resolution Server, Report Server, SQL Engine, Query Engine, Analytical Engine, Object Server (with object cache), Metadata Server, and report cache; the Metadata Server reads the metadata and the Query Engine reaches the data warehouse via SQL/ODBC.]
1 Intelligence Server receives the request.
2 The Resolution Server checks for prompts. If the report has one or more prompts, the user must answer them. For information about these extra steps, see Processing reports with prompts, page 31.
3 The Report Server checks the internal cache, if the caching feature is turned on, to see whether the report results already exist. If the report exists in the cache, Intelligence Server skips directly to the last step and delivers the report to the client. If no valid cache exists for the report, Intelligence Server creates the task list necessary to execute the report. For more information on caching, see Report caches, page 171.
Prompts are resolved before the server checks for caches. Users may be able to retrieve results from cache even if they have personalized the report with their own prompt answers.
4 The Resolution Server obtains the report definition and any other required application objects from the Object Server. The Object Server retrieves these objects from the object cache, if possible, or reads them from the metadata via the Metadata Server. Objects retrieved from metadata are stored in the object cache.
5 The SQL Generation Engine creates the optimized SQL specific to the RDBMS being used in the data warehouse. The SQL is generated based on the definition of the report and associated application objects retrieved in the previous step.
6 The Query Engine runs the SQL against the data warehouse. The report results are returned to Intelligence Server.
7 The Analytical Engine performs additional calculations as necessary. For most reports, this includes cross-tabbing the raw data and calculating subtotals. Some reports may require additional calculations that cannot be performed in the database via SQL.
8 Depending on the analytical complexity of the report, the results might be passed back to the Query Engine for further processing by the database until the final report is ready (in this case, steps 5-7 are repeated).
9 Intelligence Server's Report Server saves or updates the report in the cache, if the caching feature is turned on, and passes the formatted report back to the client, which displays the results to the user.
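The cache check, SQL generation, execution, and cache update in this flow can be outlined in a short sketch. This is purely illustrative: the function signature and the callables passed in are invented for the example; the component names come from the text.

```python
def run_report(report_id, report_cache, resolve_objects, generate_sql,
               run_sql, analytical_engine, caching_enabled=True):
    """Illustrative outline of the report execution steps."""
    if caching_enabled and report_id in report_cache:
        return report_cache[report_id]        # cache hit: skip to delivery
    definition = resolve_objects(report_id)   # Resolution/Object Server
    sql = generate_sql(definition)            # SQL Generation Engine
    raw_rows = run_sql(sql)                   # Query Engine hits the warehouse
    result = analytical_engine(raw_rows)      # cross-tabbing, subtotals, etc.
    if caching_enabled:
        report_cache[report_id] = result      # Report Server updates the cache
    return result

cache = {}
result = run_report(
    "r1", cache,
    resolve_objects=lambda rid: {"template": "sales"},
    generate_sql=lambda d: "SELECT ...",
    run_sql=lambda sql: [1, 2, 3],
    analytical_engine=lambda rows: sum(rows),  # stand-in "calculation"
)
print(result, "r1" in cache)   # 6 True
```

A second call with the same report ID returns the cached result without touching the warehouse, which is the benefit the report cache provides.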
2 Intelligence Server puts the job in a sleep mode and tells the Result Sender component to send a message to the client application prompting the user for the information.
3 The user completes the prompt and the client application sends the user's prompt selections back to Intelligence Server.
4 Intelligence Server performs the security and governing checks and updates the statistics. It then wakes up the sleeping job, adds the user's prompt reply to the job's report instance, and passes the job to the Resolution Server again.
5 This cycle repeats until all prompts in the report are resolved.
A sleeping job times out after a certain period of time or if the connection to the client is lost. If the prompt reply comes back after the job has timed out, the user sees an error message.
All regular report processing resumes from the point at which Intelligence Server checks for a report cache, if the caching feature is turned on.
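The sleep/wake cycle with a timeout can be modeled in a few lines. This is an invented illustration of the behavior described above, not MicroStrategy code: a sleeping job accepts prompt replies until its deadline passes, after which a reply produces an error.

```python
import time

class SleepingJob:
    """Illustrative model of a job put to sleep while awaiting a prompt answer."""

    def __init__(self, job_id, timeout_seconds):
        self.job_id = job_id
        self.deadline = time.monotonic() + timeout_seconds
        self.answers = []   # prompt replies added to the job's report instance

    def deliver_answer(self, answer):
        """Wake the job with a prompt reply; fail if the job already timed out."""
        if time.monotonic() > self.deadline:
            raise TimeoutError("prompt reply arrived after the job timed out")
        self.answers.append(answer)

job = SleepingJob("j42", timeout_seconds=60)
job.deliver_answer({"Region": "Northeast"})   # reply arrives in time: job wakes
print(job.answers)
```

A reply delivered after the deadline raises an error, matching the error message the user sees when a prompt reply comes back for a timed-out job.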
In a three-tier connection, Intelligence Server sends the report to MicroStrategy Desktop, which creates the graph image. In a four-tier connection, Intelligence Server uses the graph generation component to create the graph image and sends it to the client.
The diagram below shows the object request execution steps. An explanation of each step follows the diagram.
[Diagram: object request flow among the Client, Object Server (with object cache), and Metadata Server, which reads the metadata via SQL/ODBC.]
1 Intelligence Server receives the request.
2 The Object Server checks for an object cache that can service the request. If an object cache exists, it is returned to the client and Intelligence Server skips to the last step in this process. If no object cache exists, the request is sent to the Metadata Server.
3 The Metadata Server reads the object definition from the metadata repository.
4 The requested objects are received by the Object Server, where they are deposited into the memory object cache.
5 Intelligence Server returns the objects to the client.
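The object-browsing steps above amount to a cache-aside lookup, which can be sketched briefly. The names here are invented for the illustration; only the flow (check cache, read metadata on a miss, populate cache, return) comes from the text.

```python
def get_object(object_id, object_cache, read_from_metadata):
    """Illustrative object request: serve from the object cache when possible,
    otherwise read from metadata and deposit the result into the cache."""
    if object_id in object_cache:                 # cache hit
        return object_cache[object_id]
    definition = read_from_metadata(object_id)    # Metadata Server read
    object_cache[object_id] = definition          # populate the object cache
    return definition                             # returned to the client

reads = []
def metadata(object_id):
    reads.append(object_id)   # count metadata repository reads
    return {"id": object_id, "type": "metric"}

cache = {}
get_object("m1", cache, metadata)
get_object("m1", cache, metadata)
print(len(reads))   # 1: the metadata repository was read only once
```

The second request never reaches the metadata repository, which is exactly what the object cache is for.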
For a more thorough discussion of attribute elements, see the section in the MicroStrategy Basic Reporting Guide about the logical data model. When users request attribute elements from the system, they are said to be element browsing, and they create what are called element requests. More specifically, this happens when users:
Answer prompts when executing a report
Browse attribute elements in MicroStrategy Desktop using the Data Explorer (either in the Folder List or the Report Editor)
Use Desktop's Filter Editor, Custom Group Editor, or Security Filter Editor
Use the Design Mode in MicroStrategy Web to edit the report filter
When Intelligence Server receives an element request from the user, it sends a SQL statement to the data warehouse requesting attribute elements. When it receives the results from the data warehouse, it then passes the results back to the user. Also, if the element caching feature is turned on, it stores the results in memory so that additional requests are retrieved from memory instead of querying the data warehouse again. For more information on this, see Element caches, page 201. The most prominent Intelligence Server components related to element browsing are listed here.
Component: Function
DB Element Server: Transforms element requests into report requests and then sends the report requests to the warehouse.
Element Net Server: Receives, de-serializes, and passes element request messages to the Element Server.
Element Server: Creates and stores server element caches in memory. Manages all element requests in the project.
Query Engine: Sends the SQL generated by the SQL Engine to the data warehouse for execution.
Report Server: Creates and manages all server reporting instance objects. Maintains a cache of executed reports.
Resolution Server: Resolves prompts for report requests. Works in conjunction with the Object Server and Element Server to retrieve necessary objects and elements for a given request.
SQL Engine: Generates the SQL needed for the report.
The diagram below shows the element request execution steps. An explanation of each step follows the diagram.
[Diagram: element request flow among the Client, Element Server (with element cache), DB Element Server, Report Server, Resolution Server, SQL Engine, and Query Engine.]
1 Intelligence Server receives the request.
2 The Element Server checks for a server element cache that can service the request. If a server element cache exists, the element cache is returned to the client; skip to the last step in this process.
3 If no server element cache exists, the DB Element Server receives the request and transforms it into a report request. The element request at this point is processed like a report request: Intelligence Server creates a report that has only the attributes and possibly some filtering criteria, and SQL is generated and executed like any other report.
4 The Report Server receives the request and creates a report instance.
5 The Resolution Server receives the request, determines what elements are needed to satisfy the request, and then passes the request to the SQL Engine Server.
6 The SQL Engine Server generates the necessary SQL to satisfy the request and passes it to the Query Engine Server.
7 The Query Engine Server sends the SQL to the data warehouse.
8 The elements are returned from the data warehouse to Intelligence Server and deposited in the server memory element cache by the Element Server.
9 Intelligence Server returns the elements to the client.
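The element-browsing flow (cache check, transformation into a minimal single-attribute report, execution, cache population) can be sketched as follows. This is illustrative only; the function names and report structure are invented for the example.

```python
def browse_elements(attribute, element_cache, execute_report):
    """Illustrative element request: serve from the element cache, otherwise
    run the request as a report with only the attribute and cache the result."""
    if attribute in element_cache:        # server element cache hit
        return element_cache[attribute]
    # The element request becomes a minimal report request (step 3).
    report = {"attributes": [attribute], "metrics": []}
    elements = execute_report(report)     # SQL generated and executed as usual
    element_cache[attribute] = elements   # deposited in the element cache
    return elements                       # returned to the client

warehouse_queries = []
def execute(report):
    warehouse_queries.append(report)
    return ["North", "South"]

cache = {}
elements = browse_elements("Region", cache, execute)
browse_elements("Region", cache, execute)        # second request: cache hit
print(elements, len(warehouse_queries))          # ['North', 'South'] 1
```

Only the first request reaches the warehouse; subsequent requests for the same attribute's elements are answered from the element cache, matching the behavior described for element caching.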
Most of the data on a document is from an underlying dataset. A dataset is a MicroStrategy report that defines the information that the Intelligence Server retrieves from the data warehouse or cache. Other data that does not originate from the dataset is stored in the documents definition. Document execution is slightly different from the execution of a single report, since documents can contain multiple reports. The following diagram shows the document processing execution steps. An explanation of each step follows the diagram.
[Diagram: Report Services document execution flow among the Client, Document Server, Resolution Server, Analytical Engine, Object Server, Export Engine, and Metadata Server, which reads the metadata via SQL/ODBC.]
1 Intelligence Server receives a document execution request and creates a document instance, which holds the results of the request. A document instance facilitates the processing of the document through Intelligence Server, similar to the report instance that is used to process reports. It contains the report instances for all the dataset reports and therefore has access to all the information that may be included in the dataset reports. This information includes prompts, formats, and so on.
2 The Document Server inspects all dataset reports and prepares for execution. It consolidates all prompts from the datasets into a single prompt to be answered. All identical prompts are merged so that the resulting prompt contains only one copy of each prompt question.
3 The Document Server, with the assistance of the Resolution Server, asks the user to answer the consolidated prompt. The user's answers are stored in the Document Server.
4 The Document Server creates an individual report execution job for each dataset report. Each job is processed by Intelligence Server, using the report execution flow described in Processing report execution, page 28. Prompt answers are provided by the Document Server to avoid further prompt resolution.
5 After Intelligence Server has completed all the report execution jobs, the Analytical Engine receives the corresponding report instances to begin the data preparation step. Document elements are mapped to the corresponding report instance to construct internal data views for each element. Document elements include grouping, data fields, Grid/Graphs, and so on.
6 The Analytical Engine evaluates each data view and performs the calculations that are required to prepare a consolidated dataset for the entire document instance. These calculations include calculated expressions, derived metrics, and conditional formatting. The consolidated dataset determines the number of elements for each group and the number of detail sections.
7 The Document Server receives the final document instance to finalize the document format. Additional formatting steps are required if the document is exported to PDF or Excel format. The export generation takes place on the client side in three-tier connections and on the server side in four-tier connections, although the component in charge is the same in both cases.
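The prompt consolidation in step 2 is essentially an order-preserving de-duplication across the dataset reports' prompt lists, which can be sketched as follows. The function name and the representation of prompts as plain strings are invented for the illustration.

```python
def consolidate_prompts(dataset_prompts):
    """Merge identical prompts from all dataset reports so the consolidated
    prompt contains only one copy of each prompt question."""
    seen = set()
    consolidated = []
    for prompts in dataset_prompts:        # one prompt list per dataset report
        for prompt in prompts:
            if prompt not in seen:         # identical prompts are merged
                seen.add(prompt)
                consolidated.append(prompt)
    return consolidated

merged = consolidate_prompts([["Year?", "Region?"], ["Region?", "Category?"]])
print(merged)   # ['Year?', 'Region?', 'Category?']
```

The "Region?" prompt appears in both dataset reports but is asked only once, so the user answers a single set of questions.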
If the document is executed in HTML, the Web client requests an XML representation of the document to process it and render the final output.
The diagram below shows the HTML document processing execution steps. An explanation of each step follows the diagram.
[Diagram: HTML document execution flow among the Client, MicroStrategy Intelligence Server (Intelligence Pipeline), Resolution Server, Object Server, and Metadata Server, which reads the metadata via SQL/ODBC.]
1 Intelligence Server receives an HTML document execution request and creates an HTML document instance to go through Intelligence Server and hold the results. An HTML document instance facilitates the processing of the HTML document through Intelligence Server, just as a report instance is used for processing reports. It contains the report instances for all the child reports, the XML results for the child reports, and any prompt information that may be included in the child reports.
2 The HTML Document Server consolidates all prompts from the child reports into a single prompt to be answered. Any identical prompts are merged so that the resulting single prompt contains only one copy of each prompt question.
3 The Resolution Server asks the user to answer the consolidated prompt. (The user only needs to answer a single set of questions.)
4 The HTML Document Server splits the HTML document request into separate individual jobs for the constituent reports. Each report goes through the report execution flow as described above. Prompts have already been resolved for the child reports.
5 The completed request is returned to the client.
For information about the processing steps performed by Intelligence Server for all jobs, see Intelligence Server job processing (common to all jobs), page 27.
3 The MicroStrategy Web API sends the request to MicroStrategy Intelligence Server, which processes the job as usual (see Processing report execution, page 28).
4 Intelligence Server sends the results back to the MicroStrategy Web API via XML.
5 Web converts the XML to HTML within the application code. In MicroStrategy Web, the conversion is primarily performed in ASP code. In MicroStrategy Web Universal, the conversion is performed within the Java transform classes. In some customizations, the conversion may occur within custom XSL classes. By default, the product does not use XSL for rendering output, except in document objects.
6 Web sends the HTML to the clients browser, which displays the results.
Exporting a report from MicroStrategy Web products causes Intelligence Server to retrieve the entire result set (no incremental fetch) into memory and send it to Web. This increases memory use on the Intelligence Server machine and increases network traffic. For information about governing report size limits for exporting, see Setting the Incremental fetch limit in MicroStrategy Web products, page 407 and the following sections.
The MicroStrategy system performs these steps when exporting to Excel with formatting:
1 A MicroStrategy Web product receives the request for the export to Excel and passes the request to Intelligence Server. Intelligence Server produces an HTML document by combining the XML containing the report data with the XSL containing formatting information.
2 Intelligence Server passes the HTML document to Web, which creates an Excel file and sends it to the browser. Users can then choose to view the Excel file or save it, depending on the client machine operating system's setting for viewing Excel files.
Export to PDF
Exporting to PDF also makes use of both Intelligence Server and Web to create a PDF (Portable Document Format) file. PDF files are viewed with Adobe's Acrobat Reader and provide greater printing functionality than simply printing the report from the browser. To view the PDF files, the client machine must have Adobe Acrobat Reader version 5.0 or later.
The MicroStrategy system performs these steps when exporting to PDF:
1 When a MicroStrategy Web product receives the request to export to PDF, it passes the request to Intelligence Server. Intelligence Server passes XML containing both report data and formatting information to Web.
2 Web passes the XML to a custom component. This component determines the number of pages for the PDF file as well as other formatting options such as row/column headers. The component then generates the PDF file and passes it to the user's browser. Users can then choose to view the PDF file or save it to the hard disk, depending on the client machine operating system's setting for viewing PDF files.
5 Intelligence Server processes the report job request as usual. (See Processing report execution, page 28.) It then sends the result back to Narrowcast Server.
6 Narrowcast Server creates formatted documents using the personalized report data.
7 Narrowcast Server packages documents as appropriate for the service's delivery method, such as e-mail, wireless, and so on.
8 Narrowcast Server delivers the information to recipients by the chosen delivery method.
Security implementation

- Ensure the Administrator password has been changed. When you install Intelligence Server, the Administrator account comes with a blank password that must be changed.
- Set up access controls for the database (see Controlling access to data, page 101). Depending on your security requirements, you may need to:
  - Set up security views to restrict access to specific tables, rows, or columns in the database
  - Split tables in the database to control user access to data by separating a logical data set into multiple physical tables, which require separate permissions for access
  - Implement connection mapping to control individual access to the database
  - Configure passthrough execution to control individual access to the database from each project, and to track which users are accessing the RDBMS system
  - Assign security filters to users or groups to control access to specific data (these operate similarly to security views, but at the application level)
- Understand the MicroStrategy user model (see The MicroStrategy user model, page 50). Use this model to:
  - Select and implement a system authentication mode to identify users
  - Set up security roles for users and groups to assign basic privileges and permissions
  - Understand ACLs (access control lists), which grant users access permissions to individual objects
- Check and, if necessary, modify privileges and permissions for anonymous authentication for guest users. (By default, anonymous access is disabled at both the server and the project levels.)
  - Do not assign delete privileges to the guest user account.
  - If you leave anonymous access disabled, ensure the hyperlink for guest access does not appear on the login page. Use adminoptions.asp to turn this hyperlink off.
- Assign privileges and permissions to control user access to application functionality. (See Appendix B, Permissions and Privileges for a list of all default and available user permissions and privileges.) You may need to:
  - Assign the Denied All permission to a special user or group so that, even if permission is granted at another level, permission is still denied
  - Make sure guest users (anonymous authentication) have access to the Log folder located in C:\Program Files\Common Files\MicroStrategy. This ensures that any application errors that occur while a guest user is logged in can be written to the log files.
- Use your Web application server security features to:
  - Implement file-level security requirements
  - Create security roles for the application server
- Make use of standard Internet security technologies such as firewalls, digital certificates, and encryption. You may need to:
  - Enable encryption for MicroStrategy Web products. By default, most encryption technologies are not used unless you enable them. Specifically, you may need to:
    - Enable the Encrypt User Credentials setting in the Login section of the Project defaults preferences.
    - If you are working with particularly sensitive or confidential data, enable the setting to encrypt all communication between the Web server and Intelligence Server.
Note: There may be a noticeable performance degradation, since the system must encrypt and decrypt all network traffic.

- Locate the physical machine hosting the Web application in a physically secure location.
- Restrict access to files stored on the machine hosting the Web application by implementing standard file-level security offered by your operating system. Specifically, apply this type of security to protect access to the MicroStrategy administrator pages, to prevent someone from typing specific URLs into a browser to access these pages. (The default location of the Admin page file is /MicroStrategy7/admin/admin.asp.) Be sure to restrict access to:
  - The Admin directory
  - adminoptions.asp
  - delete.asp
Introduction
Security is a concern in any organization. The data warehouse may contain sensitive information that should not be viewed by all users. It is your responsibility as administrator to make the right data available to the right users. MicroStrategy has a robust security model that enables you to create users and groups, determine how they are authenticated, and control what data they can see and what objects they can use. This is covered in the following sections:
2008 MicroStrategy, Inc.
- The MicroStrategy user model, page 50
- Authentication options, page 55
- Implementing LDAP authentication, page 69
- Controlling access to data, page 101
- Controlling access to application functionality, page 124
- Merging users or groups, page 137
- Tying it all together: examples, page 142
User privileges
All users created in the MicroStrategy system are assigned a set of privileges by default. Privileges allow users to access and work with various functionality within the software. For a detailed list of all user and group privileges in MicroStrategy, see Appendix B, Permissions and Privileges. To view a user's privileges and where those privileges are inherited from (such as from a group to which the user belongs), in Desktop's Folder List under Administration, double-click the user in the User Manager. The Project Access tab shows the privileges granted to the user, as well as where each privilege is inherited from. To see which users are using certain privileges, use License Manager. See Using License Manager, page 152.
For information on checking and modifying user privileges after upgrading, refer to the After the Upgrade chapter of the MicroStrategy Upgrade Guide.

Public/Guest: The Public group provides the capability for anonymous logins and is used to manage the access rights of guest users. If you choose to allow anonymous authentication (see Implementing anonymous authentication, page 68), each guest user assumes the profile defined by the Public group. When a user logs in as a guest, a new user is created dynamically and becomes a member of the Public group. By default, guest users can access the project, browse objects, and run and manipulate reports, but they cannot create their own objects, schedule report executions, or perform certain other actions. Privileges that are grayed out in the User Editor are not available by default to a guest user.
For information on integrating LDAP with MicroStrategy, see Implementing LDAP authentication, page 69. For information on LDAP user privileges within MicroStrategy, see Appendix B, Permissions and Privileges.
For administrators
System Monitors: The System Monitors group provides an easy way to give users basic administrative privileges in the system. Users in the System Monitors group have access to all of the monitoring tools under the Administration section of a project source's Folder List. However, System Monitors cannot modify any configuration objects, such as database instances, server configurations, governors, and so on.
System Administrators: The System Administrators group is a group within the System Monitors group. It provides all the capabilities of the System Monitors group plus the ability to modify configuration objects such as database instances, and so on.
Web Analyst: The Web Analyst group includes a subset of Web Professional privileges. It is a simplified version that provides ad hoc analysis from Intelligent Cubes with interactive, slice-and-dice OLAP.

Web Reporter: The Web Reporter group includes a subset of Web Analyst privileges. It is an enterprise-wide reporting version that allows you to view scheduled reports and interactively slice and dice them. It also provides printing, exporting, and e-mail subscription features.

In addition to having privileges of their own, subgroups always inherit the privileges of their parent groups.
1. From the File menu, select New User. The User Editor opens.

2. Specify the user information on each tab. For details about each field, see the online help. The user login ID is limited to 50 characters.

For detailed information about other methods for creating or importing users or groups, see the online help:

- To import users and groups from a text file, see the Steps to import users/groups from a text file topic.
- To import users and groups from a Windows directory, see the Steps to import a group from Windows topic and the Steps to import a user from Windows topic.
- To import users and groups from an LDAP directory, see the LDAP User/Group Import topic.
Authentication options
Authentication is the process through which the system identifies the user. Several authentication modes are supported in the MicroStrategy environment. They vary primarily by the authentication authority used in each mode. The authentication authority is the system that verifies and accepts the login/password credentials provided by the user. The available modes are:

- Standard (see Implementing standard authentication, page 57)
- Windows (see Implementing Windows authentication, page 58)
- Database: warehouse (see Implementing database warehouse authentication, page 65)
- Database: metadata and warehouse (6.x) (see Implementing database warehouse and metadata (6.x) authentication, page 67)
- Anonymous (see Implementing anonymous authentication, page 68)
- LDAP (Lightweight Directory Access Protocol) (see Implementing LDAP authentication, page 69)
Various MicroStrategy products can be configured to authenticate users via Intelligence Server, using the authentication modes listed above. Products for which you can configure authentication include:

- MicroStrategy Desktop: To set the authentication mode in Desktop, use the Project Source Manager and click the Advanced tab. (Click Help to see steps to access and use this feature.)
- MicroStrategy Web: To set the authentication mode in MicroStrategy Web, use the Project Source Manager described above, as well as the MicroStrategy Web
Administrator's Default Server Properties page. (Click Help to see steps to access and use this feature in the online help.)
- MicroStrategy Office: To set the authentication mode in MicroStrategy Office, use the Project Source Manager described above, as well as the projectsources.xml file. (See the MicroStrategy Office User Guide for steps to access and use this feature.)
Consider doing this only if you have users who do not need to create their own individual objects, and if you do not need to monitor and identify each individual user uniquely.
The first two settings are made in the User Editor; the last setting is made in the Password Definition Options dialog. See the online help for the specific steps you need to take to define any of these password settings.
To use Windows authentication with MicroStrategy Web, you must be running Web or Web Universal under Microsoft IIS; non-IIS web servers do not support Windows authentication. See Enabling integrated Windows authentication for Microsoft IIS, page 61.

Beginning with MicroStrategy 8.1.0, Windows authentication can be used with MicroStrategy Web on IIS regardless of the operating system on which Intelligence Server is installed. If the Windows domain account information is linked to a MicroStrategy user definition, a MicroStrategy Web user can be logged in automatically through Web. When a user accesses MicroStrategy Web, IIS detects the Windows user and sends the login information to MicroStrategy Intelligence Server. If the Windows user is linked to a MicroStrategy user, Intelligence Server starts a session for that user. For information on setting up MicroStrategy Web to allow single sign-on using Windows authentication, see Enabling Windows authentication login for MicroStrategy Web, page 63.
The MSDL (MicroStrategy Developer Library) contains information on customizing MicroStrategy Web. To evaluate the MicroStrategy SDK product, contact your account executive for details.
Prerequisites
Before continuing with the procedures described in the rest of this section, you must first set up a Windows domain that contains a domain account for each user to whom you want to allow single sign-on access to MicroStrategy Web with Windows authentication.
Steps to enable single sign-on to MicroStrategy Web using Windows authentication
1. Enable integrated Windows authentication for Microsoft IIS. See Enabling integrated Windows authentication for Microsoft IIS, page 61.

2. Create a link between a Windows domain user and a MicroStrategy Web user for each person who will access MicroStrategy Web with Windows authentication. See Linking a Windows domain user to a MicroStrategy user, page 62.

3. Define a project source to use Windows authentication. See Defining a project source to use Windows authentication, page 63.

4. Enable Windows authentication in MicroStrategy Web. See Enabling Windows authentication login for MicroStrategy Web, page 63.

5. Configure each MicroStrategy Web user's browser for single sign-on. See Configuring a browser for single sign-on to MicroStrategy Web, page 65.
The procedure below provides information on software provided by third-party vendors. Refer to your third-party vendor documentation for complete steps and updated requirements.

1. On the MicroStrategy Web server machine, access the IIS Internet Service Manager.

2. Navigate to and right-click the MicroStrategy virtual folder, and select Properties.

3. Select the Directory Security tab, and then under Anonymous access and authentication control, click Edit. The Authentication Methods dialog box opens.

4. Clear the Anonymous access check box.
5. Select the Integrated Windows authentication check box.

6. Click OK to save your changes and close the Authentication Methods dialog box.

7. Click OK again to save your changes to the MicroStrategy virtual folder.

8. Restart IIS for the changes to take effect.
1. In MicroStrategy Desktop, log in to a project source using an account with administrative privileges.

2. From the Folder List, expand the project source, then expand Administration, and then expand User Manager. If you are using MicroStrategy 8.1.1, use a project source with a direct connection to map users.

3. Navigate to the MicroStrategy user you want to link a Windows user to. Right-click the MicroStrategy user and select Edit. The User Editor opens.

4. On the Authentication tab, under Windows Authentication, in the Link Windows user area, provide the Windows user name for the user you want to link the MicroStrategy user to. There are two ways to do this:
   - Click Browse to select the user from the list of Windows users displayed.
   - Click Search to search for a specific Windows user by providing the Windows login to search for and, optionally, the Windows domain to search. Then click OK to run the search.
1. In MicroStrategy Desktop, log in to a project source using an account with administrative privileges.

2. Right-click the project source and select Modify Project Source. The Project Source Manager opens.

3. On the Advanced tab, select the Use network login id (Windows authentication) option.

4. Click OK. The Project Source Manager closes.
1. From the Windows Start menu, point to Programs, then MicroStrategy, then Web, and then select Web Administrator. The Web Administrator page opens in a web browser.

2. On the left, under Intelligence Server, select Default Properties.

3. In the Login area, for Windows Authentication, select the Enabled check box. If you want Windows authentication to be the default login mode for MicroStrategy Web, also select the Default option.

4. Click Save.
To enable Windows authentication login for a project
1. Log in to a MicroStrategy Web project as a user with administrative privileges.

2. At the top of the page, click Preferences.

3. On the left, select Project Defaults, then Security.

4. In the Login modes area, for Windows Authentication, select the Enabled check box. If you want Windows authentication to be the default login mode for this project in MicroStrategy Web, also select the Default option.

5. Next to Apply, choose whether to apply these settings to all projects, or just to the one you are currently logged in to.

6. Click Apply.
When users log in to a database authentication project source, they are logged in as the Guest user so that they can get a list of projects. This Guest user is not the same as the Guest user visible in the User Manager, but is a special account invisible to the client. The Guest user inherits permissions from the Public/Guest group. By default, the Public/Guest group is denied access to all projects; a security role with View access to the projects must be explicitly assigned to the Public/Guest group so that these users can see and log in to the available projects. All users logging in to a database authentication project source can see all projects visible to the Guest user. Project access is granted or denied for each individual user only when the user attempts to log in to the project.

Once users select a project within a project source, they log in using their data warehouse login ID and are authenticated against the data warehouse database associated with that project and linked with a MicroStrategy user. To allow database authentication, you must link the users in the MicroStrategy environment to warehouse database users. Linking allows Intelligence Server to map a warehouse database user to a MicroStrategy user. You can import users from the database into MicroStrategy via a text file, MicroStrategy Desktop, or Command Manager.
High-level steps for setting up database authentication
See the online help for details on each step.

1. On the Advanced tab of the Project Source Manager, select Use Login ID and password entered by the user for Warehouse (database authentication).

2. Configure Intelligence Server to allow anonymous authentication. For information, refer to the Steps to allow anonymous access to a project topic in the online help.

3. Assign a security role to the Public/Guest group for each project to which you want to provide access. For information, refer to the Steps to assign access privileges to a group topic in the online help.
4. On the Authentication tab of the User Editor, link each MicroStrategy user to a user in the data warehouse by typing his or her login ID and password in the Warehouse Login and Password boxes. You can also link them using MicroStrategy Command Manager.

5. In the Project Configuration Editor, expand the Database instances category, click Execution, and select the Use linked warehouse login for execution check box.

If you use database authentication, for security reasons MicroStrategy recommends that you use the Create caches per database login setting. This ensures that users who execute their reports using different database login IDs cannot use the same cache. You can set this in the Project Configuration Editor, in the Caching: Reports (advanced) category.

6. To enable database authentication from the MicroStrategy Web or Web Universal interface, access the Preferences: Project defaults: Login page, select the Database Authentication check box, and then click Apply.
By default, anonymous access is disabled at both the server and the project levels.

1. Configure Intelligence Server to enable anonymous authentication. For information, refer to the Steps to allow anonymous access to a project topic in the online help.

2. On the Advanced tab of the Project Source Manager, select Use a guest account (anonymous authentication).
Intelligence Server requires that the LDAP SDK support the following:

- LDAP v3
- 64-bit architecture on UNIX and Linux platforms
- SSL connections
The following image shows how the behavior of the various elements in an LDAP configuration affects the other elements in the configuration.
1: The behavior between Intelligence Server and the LDAP SDK varies slightly depending on the LDAP SDK used. The MicroStrategy readme lists recommendations.

2: The behavior between the LDAP SDK and the LDAP server is identical no matter which LDAP SDK is used.

MicroStrategy recommends that you use the LDAP SDK vendor that corresponds to the operating system vendor on which Intelligence Server is running in your environment. Specific recommendations are listed in the MicroStrategy readme, along with the latest set of certified and supported LDAP SDKs, references to MicroStrategy Tech Notes with version-specific details, and SDK download locations. To configure Intelligence Server to use a specific SDK and DLLs, see the Intelligence Server Configuration Editor: General section in the MicroStrategy Desktop online help.
1. Download the LDAP SDK DLLs onto the machine where Intelligence Server is installed.

2. Install the LDAP SDK.

3. Register the location of the LDAP SDK files as follows:
   - Windows environment: Add the path of the LDAP SDK libraries to the system environment variable so that Intelligence Server can locate them.
   - UNIX/Linux environment: Modify the LDAP.sh file located in the /Installations/env folder to point to the location of the LDAP SDK libraries. The detailed procedure is described in To add the library path to the environment variable in UNIX below.
This procedure assumes you have installed an LDAP SDK. For high-level steps to install an LDAP SDK, see High-level steps to install the LDAP SDK DLLs.

1. In a UNIX/Linux console window, browse to <HOME_PATH>, where <HOME_PATH> is the directory you specified as the home directory during MicroStrategy installation. Browse to the /env folder in this path.

2. Add Write privileges to the LDAP.sh file by typing the command chmod u+w LDAP.sh and then pressing ENTER.

3. Open the LDAP.sh file in a text editor and add the library path to the MSTR_LDAP_LIBRARY_PATH environment variable. For example: MSTR_LDAP_LIBRARY_PATH=/path/LDAP/library
73
It is recommended that you store all libraries in the same path. If you have several paths, you can add all of them to the MSTR_LDAP_LIBRARY_PATH environment variable, separated by colons (:). For example: MSTR_LDAP_LIBRARY_PATH=/path/LDAP/library:/path/LDAP/library2

4. Remove Write privileges from the LDAP.sh file by typing the command chmod a-w LDAP.sh and then pressing ENTER.

5. Restart Intelligence Server for your changes to take effect.
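The UNIX/Linux registration steps above can be sketched as a single shell session. This is a sketch only: the home path below is hypothetical, and the snippet creates a stand-in LDAP.sh in a scratch directory rather than modifying a real installation.

```shell
# /tmp/mstr_demo stands in for the <HOME_PATH> chosen during installation.
HOME_PATH=/tmp/mstr_demo
mkdir -p "$HOME_PATH/env"
cd "$HOME_PATH/env"
touch LDAP.sh                      # stand-in for the installed LDAP.sh

# Step 2: add Write privileges to LDAP.sh
chmod u+w LDAP.sh

# Step 3: add the library path(s); multiple paths are colon-separated
echo 'MSTR_LDAP_LIBRARY_PATH=/path/LDAP/library:/path/LDAP/library2' >> LDAP.sh

# Step 4: remove Write privileges again
chmod a-w LDAP.sh

# Verify the variable is in place (step 5, restarting Intelligence
# Server, is not shown here)
grep MSTR_LDAP_LIBRARY_PATH LDAP.sh
```

In a real installation you would edit the existing LDAP.sh in place with a text editor, as described in step 3, instead of appending a new line.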
1. In MicroStrategy Desktop, log in to a project source as a user with administrative privileges.

2. From the Administration menu, point to Server, and then select Configure MicroStrategy Intelligence Server. The MicroStrategy Intelligence Server Configuration Editor opens.

3. Expand the LDAP category to display a list of LDAP configuration options. Select the General option.

4. Type the information in the Host field and choose the appropriate Port.
   - Host: Either the machine name or the IP address of the machine hosting the LDAP server.
   - Port: The port number of the LDAP server. Port 389 is the default when connecting with clear text, and port 636 is the default for SSL. However, the LDAP port can be set to a number other than the default; confirm the correct port number with your LDAP administrator.
5. Select whether to use a Clear Text or SSL (encrypted) security connection.

Connecting with clear text or encrypted SSL: Intelligence Server can connect to an LDAP server either using clear text or by transmitting encrypted information using SSL. MicroStrategy recommends using SSL to protect information transmitted between Intelligence Server and the LDAP server. If you will be implementing SSL encryption, use the following steps to set it up:
High-level steps to use SSL connectivity
Make sure you have completed the procedure High-level steps to install the LDAP SDK DLLs, page 73, so that the LDAP file location is registered before setting up SSL connectivity.

a. Obtain a valid certificate from your LDAP server and save it on the machine where Intelligence Server is installed.

b. Follow the procedure recommended by your operating system to install the certificate.

c. In MicroStrategy Desktop, configure the LDAP SSL certificate settings in the MicroStrategy Intelligence Server Configuration Editor to point to the certificate. For steps to access the MicroStrategy Intelligence Server Configuration Editor, see To define the required LDAP configuration parameters, page 74. Depending on your LDAP server vendor, you point to the SSL certificate in the following ways:
   - Microsoft Active Directory: No information is required.
   - Sun ONE/iPlanet: Provide the path to the certificate. Do not include the file name.
   - Novell: Provide the path to the certificate, including the file name.
   - IBM: Use Java GSKit 7 to import the certificate, and provide the key database name with the full path, starting with the home directory.
   - OpenLDAP: Provide the path to the directory that contains the CA certificate file cacert.pem, the server certificate file servercrt.pem, and the server certificate key file serverkey.pem.
   - HP-UX: Provide the path to the certificate. Do not include the file name.
d. Select the appropriate settings for the LDAP server vendor name, connectivity driver, connectivity files, and Intelligence Server platform. For details on setting these connection parameters, see Setting up LDAP SDK connectivity, page 71.

6. Under the LDAP category, select the Configuration option.

7. Type the appropriate information in the Distinguished name (DN) and Password fields for the authentication user.

Authentication user: The main administrative user, referred to as the authentication user, is used by Intelligence Server to access, search in, and retrieve information from the LDAP server when authenticating, importing, and synchronizing new user accounts. No MicroStrategy user from the metadata is associated with the authentication user, and only a temporary session is created for this type of authentication. However, the authentication user does appear in the MicroStrategy Connection Monitor.

8. Click OK. The Configuration Editor closes and the parameters are saved.
1. In MicroStrategy Desktop, right-click the project source and select Modify Project Source. The Project Source Manager opens.

2. On the Advanced tab, select Use LDAP Authentication.

3. Click OK to accept your changes and close the Project Source Manager.
Defining LDAP search filters to verify and import users and groups at login
You must provide Intelligence Server with some specific parameters so it can search effectively through your LDAP directory for user information. To set the search parameters, enter them in the MicroStrategy Intelligence Server Configuration Editor, in the User Search Filter and Group Search Filter dialog boxes. To access the Intelligence Server Configuration Editor, right-click a project source. Detailed steps for entering this information can be found in the Desktop online help.

A Distinguished Name (DN) is a unique way to identify a user or group within the LDAP directory structure. Intelligence Server uses the Distinguished Name of users and groups to authenticate them and to link them to existing MicroStrategy users and groups. MicroStrategy also provides a configuration option to import LDAP users and groups in their Distinguished Name format. For more information on
importing LDAP users and groups into MicroStrategy, see Importing or linking LDAP users and groups in MicroStrategy, page 82.

If you do not provide the search root, the user search filter, and the group search filter, searches of the LDAP directory might perform poorly. Highest level to start an LDAP search: Search root, page 78, provides examples of these parameters, as well as additional details on each parameter and some LDAP server-specific notes.
The following table, based on the diagram above, provides common search scenarios for users to be imported into MicroStrategy. The search root is the root to be defined in MicroStrategy for the LDAP directory.
- Include all users and groups from Operations: search root is Operations.
- Include all users and groups from Operations, Consultants, and Sales: search root is Sales.
- Include all users and groups from Operations, Consultants, and Technology: search root is Departments.
- Include all users and groups from Technology and Operations, but not Consultants: search root is Departments, with an exclusion clause in the User/Group search filter to exclude users who belong to Consultants.
For some LDAP vendors, the search root cannot be the LDAP tree's root. For example, both Microsoft Active Directory and Sun ONE require a search to begin from the domain controller RDN (dc). The image below shows an example of this type of RDN, where dc=labs, dc=microstrategy, dc=com:
The search root parameter searches for users in the leaves of the tree who are all registered within a single domain. If your LDAP directory has multiple domains for different departments, see MicroStrategy Tech Note TN5300-801-0661.
Depending on your LDAP server vendor and your LDAP tree structure, you may need to try different attributes within the search filter syntax above. For example, (&(objectclass=person)(uniqueID=#LDAP_LOGIN#)), where uniqueID is the LDAP attribute name your company uses for authentication.
The group search filter forms listed above use the following placeholders:

- LDAP_GROUP_OBJECT_CLASS indicates the object class of the LDAP groups. For example, you can enter (&(objectclass=groupOfNames)(member=#LDAP_DN#)).
- LDAP_MEMBER_[LOGIN or DN]_ATTR indicates which LDAP attribute of an LDAP group is used to store the LDAP logins/DNs of the LDAP users. For example, you can enter (&(objectclass=groupOfNames)(member=#LDAP_DN#)).
- #LDAP_DN# can be used in this filter to represent the distinguished name of an LDAP user.
- #LDAP_LOGIN# can be used in this filter to represent an LDAP user's login, for Intelligence Server version 8.0.1 and later.
- #LDAP_GIDNUMBER# can be used in this filter to represent the UNIX or Linux group ID number; this corresponds to the LDAP attribute gidNumber, for Intelligence Server version 8.0.1 and later.
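At login, Intelligence Server resolves these placeholders with the values belonging to the user who is logging in. The substitution itself can be illustrated with plain shell string replacement; the login, DN, and uniqueID attribute below are hypothetical sample values, and this sketch only illustrates the placeholder concept, not MicroStrategy's actual implementation.

```shell
# Filter templates as they might be entered in the User and Group
# search filter fields (uniqueID is a hypothetical attribute name).
user_template='(&(objectclass=person)(uniqueID=#LDAP_LOGIN#))'
group_template='(&(objectclass=groupOfNames)(member=#LDAP_DN#))'

# Sample values for a user logging in
login='jsmith'
dn='cn=jsmith,ou=Operations,dc=labs,dc=microstrategy,dc=com'

# Replace each placeholder with the logging-in user's values
user_filter=${user_template//'#LDAP_LOGIN#'/$login}
group_filter=${group_template//'#LDAP_DN#'/$dn}

echo "$user_filter"    # (&(objectclass=person)(uniqueID=jsmith))
echo "$group_filter"
```

The resulting strings are ordinary LDAP search filters, which is why a filter that works in a standalone LDAP query tool should also work once its literal values are replaced with the corresponding placeholders.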
You can implement specific search patterns by adding additional criteria. For example, you may have 20 different groups of users, of which only five groups will be accessing and working in MicroStrategy. You can add additional criteria to the group search filter to import only those five groups into MicroStrategy. To see the detailed process of importing LDAP users and groups into MicroStrategy, refer to Importing or linking LDAP users and groups in MicroStrategy, page 82.
If your environment includes multiple Intelligence Servers connected to one MicroStrategy Web server, see Tech Note TN5300-75x-0592 for details on setting up LDAP authentication. When you import LDAP users and groups in MicroStrategy, all authenticated LDAP users and groups are imported into the MicroStrategy metadata, which means that a physical MicroStrategy user is created within the MicroStrategy metadata. For further details about and steps for implementing this option, see Importing LDAP users and groups into MicroStrategy, page 83. Another option is to link LDAP users and groups to users and groups in MicroStrategy (see Linking users and groups without importing, page 89). This helps keep the metadata from filling with data related to the users. However, it limits the privileges available to the linked users and groups. If you choose to not import or link LDAP users and groups, no LDAP users or groups are imported into the metadata when LDAP authentication occurs; instead, LDAP users can log into MicroStrategy but receive the privileges of the MicroStrategy Public/Guest group. For details on this authentication option, see Allowing anonymous/guest users with LDAP authentication, page 94.
LDAP administrator. For more information on specifying an authentication user's DN and password, see Defining LDAP connection parameters, page 74.

Imported users automatically become members of MicroStrategy's LDAP Users group, and are assigned the access control list (ACL) and privileges of that group. To assign different ACLs or privileges to a user, you can move the user to another MicroStrategy user group.

When an LDAP user is imported into MicroStrategy, you can also choose to import that user's LDAP groups. If a user belongs to more than one group, all of the user's groups are imported and created in the metadata. Imported LDAP groups are created within MicroStrategy's LDAP Users folder and in MicroStrategy's User Manager. LDAP users and LDAP groups are all created within the MicroStrategy LDAP Users group at the same level. While the LDAP relationship between a user and any associated groups exists in the MicroStrategy metadata, the relationship is not visually represented in MicroStrategy Desktop. For example, looking in the LDAP Users folder in MicroStrategy immediately after an import or synchronization, you might see the following list of imported LDAP users and groups:
If you want users' group memberships to be shown in MicroStrategy, you must manually move the users into the appropriate groups. You cannot export users or groups from MicroStrategy to an LDAP directory.

Removing a user from the LDAP directory does not affect the user's presence in the MicroStrategy metadata. Deleted LDAP users are not automatically deleted from the MicroStrategy metadata during synchronization. You can revoke a user's privileges in MicroStrategy, or remove the user manually.
For information on setting up user and group import options, see Importing users and groups, page 85.
1 In MicroStrategy Desktop, log in to a project source. You must log in as a user with administrative privileges.
2 From the Administration menu, select Server, and then select Configure MicroStrategy Intelligence Server. The MicroStrategy Intelligence Server Configuration Editor opens.
3 Expand the LDAP category, and then select User/Group Import. The User/Group Import options are displayed.
4 Select the Import Users and Import Groups check boxes.
The user names, user logins, and group names that are imported into MicroStrategy depend on the LDAP attributes you select to import. To designate the LDAP attributes that MicroStrategy will import, see the following sections:
Selecting the LDAP attribute to import as user login, page 86
Selecting the LDAP attribute to import as user name, page 87
Selecting the LDAP attribute to import as group name, page 88
1 In MicroStrategy Desktop, log in to a project source. You must log in as a user with administrative privileges.
2 From the Administration menu, select Server, and then select Configure MicroStrategy Intelligence Server. The Intelligence Server Configuration Editor opens.
3 Expand the LDAP category, and then select User/Group Import. The User/Group Import options are displayed.
4 In the Import user login as area, select one of the following LDAP attributes to import as the user login:
User login: Imports the user's LDAP user login as the MicroStrategy user login.
Distinguished name: Imports the user's LDAP Distinguished Name as the MicroStrategy user login.
Other: You can provide a different LDAP attribute than those listed above to be imported and used as the MicroStrategy user login. Your LDAP administrator can provide you with the appropriate LDAP attribute to be used as the user login.
If you type a value in the Other field, ensure that the authentication user contains a valid value for the same attribute specified in the Other field. For more information on the authentication user, see Authentication user, page 76.
1 In MicroStrategy Desktop, log in to a project source. You must log in as a user with administrative privileges.
2 From the Administration menu, select Server, and then select Configure MicroStrategy Intelligence Server. The Intelligence Server Configuration Editor opens.
3 Expand the LDAP category, and then select User/Group Import. The User/Group Import options are displayed.
4 In the Import user name as area, select one of the following LDAP attributes to import as the user name:
User name: Imports the user's LDAP user name as the MicroStrategy user name.
Distinguished name: Imports the user's LDAP Distinguished Name as the MicroStrategy user name.
Other: You can provide a different LDAP attribute than those listed above to be imported and used as the MicroStrategy user name. Your LDAP administrator
can provide you with the appropriate LDAP attribute to be used as the user name. If you type a value in the Other field, ensure that the authentication user contains a valid value for the same attribute specified in the Other field. For more information on the authentication user, see Authentication user, page 76.
1 In MicroStrategy Desktop, log in to a project source. You must log in as a user with administrative privileges.
2 From the Administration menu, select Server, and then select Configure MicroStrategy Intelligence Server. The Intelligence Server Configuration Editor opens.
3 Expand the LDAP category, and then select User/Group Import. The User/Group Import options are displayed.
4 In the Import group name as area, select one of the following LDAP attributes to import as the group name:
Group name: Imports the group's LDAP group name as the MicroStrategy group name.
Distinguished name: Imports the group's LDAP Distinguished Name as the MicroStrategy group name.
Other: You can provide a different LDAP attribute than those listed above to be imported and used as the MicroStrategy group name. Your LDAP administrator can provide you with the appropriate LDAP attribute to be used as the group name.
If you type a value in the Other field, ensure that the authentication user contains a valid value for the same attribute specified in the Other field. For more information on the authentication user, see Authentication user, page 76.
A linked LDAP user is not created in the MicroStrategy metadata. An LDAP group can only be linked to a MicroStrategy group, and an LDAP user can only be linked to a MicroStrategy user. It is not possible to link a group to a user without giving the user membership in the group.

When an LDAP user or group is linked to an existing MicroStrategy user or group, no new user or group is created within the MicroStrategy metadata, as happens with importing. Instead, a link is established between an existing MicroStrategy user or group and an LDAP user or group, which allows the LDAP user to log in to MicroStrategy. The link between an LDAP user or group and the MicroStrategy user or group is maintained in the MicroStrategy metadata in the form of a shared Distinguished Name.

The user's or group's LDAP privileges are not linked with the MicroStrategy user. In MicroStrategy, a linked LDAP user or group receives the privileges of the MicroStrategy user or group to which it is linked.

LDAP groups cannot be linked to MicroStrategy system groups. For example, you cannot link an LDAP group to MicroStrategy's Everyone group. However, it is possible to link an LDAP user to a MicroStrategy user that has membership in a MicroStrategy group.
High-level steps to set up links between LDAP users/groups and MicroStrategy users/groups
1 In MicroStrategy Desktop, log in to a project source. You must log in as a user with administrative privileges.
2 In the project source, expand Administration, and then expand User Manager.
3 Depending on whether you want to link a MicroStrategy user or group to an LDAP user or group respectively, perform one of the following:
User: Expand a MicroStrategy group, and then right-click a user and select Edit. The User Editor opens.
Group: Right-click a group and select Edit. The Group Editor opens.
4 Select the Authentication tab.
5 In the field within the LDAP Authentication area, enter the user distinguished name or the group distinguished name.
6 Click OK to accept the changes and close the User Editor or Group Editor.
7 From the Administration menu, select Server, and then select Configure MicroStrategy Intelligence Server. The MicroStrategy Intelligence Server Configuration Editor opens.
8 Expand the LDAP category, and then select User/Group Import. The User/Group Import options are displayed.
9 Clear the Import Users and/or Import Groups check boxes.
If the Import check boxes are not cleared, the linked MicroStrategy user will be overwritten in the metadata by the imported LDAP user and cannot be recovered.
See the online help for specific details to access the User Editor and Intelligence Server Configuration Editor and to use the features available there.
Allowing anonymous/guest users with LDAP authentication, page 94
Enabling database passthrough authentication with LDAP, page 95
If MicroStrategy can verify that none of these restrictions are in effect for the user account, MicroStrategy performs an LDAP bind and successfully authenticates the user logging in. This is the default behavior for users and groups that have been imported into MicroStrategy.

You can instead choose to have MicroStrategy verify only that the password with which the user logged in is correct, and not check for additional restrictions on the password or user account. To support password comparison authentication, your LDAP server must also be configured to allow password comparison.
Performing this procedure may compromise one or more security features in your environment. Enabling password comparison only configures MicroStrategy to authenticate LDAP users by verifying only that the user typed the correct password. No other verifications are performed.

1 Locate the MSTR_LDAP_PASSWORD_COMPARE environment variable as follows, depending on your environment:
Windows environment: From your machines System Properties dialog box (right-click the My Computer icon on your desktop and select Properties), on the Advanced tab, select Environment Variables. In the list of System Variables, locate MSTR_LDAP_PASSWORD_COMPARE.
UNIX/Linux/HP environment: Open the LDAP.sh file located in the /Installations/env folder. Locate MSTR_LDAP_PASSWORD_COMPARE.
2 Change the value of this variable according to whether you want to use binding or password comparison:
0: Do not use password comparison only (use binding). This default setting means MicroStrategy does not rely on password comparison alone, but also checks for several password and user account restrictions, as detailed above.
1: Use password comparison only, if available. This setting tells MicroStrategy to use only password comparison when authenticating an LDAP user logging in to MicroStrategy, as long as the LDAP server is set up to allow password comparison. This setting may compromise your security protocols.
3 On the LDAP Server, make sure that the password comparison functionality is enabled. If this functionality is disabled on the LDAP Server, MicroStrategy uses binding automatically; binding is described above.
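The two values of the variable amount to a simple rule, sketched below in Python. This is illustrative only: the actual check is performed inside Intelligence Server, and the function name here is hypothetical.

```python
import os

def ldap_auth_mode(env=None):
    """Return the authentication behavior implied by the
    MSTR_LDAP_PASSWORD_COMPARE environment variable, per the
    values documented above. Illustrative sketch only."""
    env = os.environ if env is None else env
    if env.get("MSTR_LDAP_PASSWORD_COMPARE", "0") == "1":
        # 1: password comparison only, if the LDAP server allows it;
        # otherwise Intelligence Server falls back to binding.
        return "password-compare"
    # 0 (the default): full bind, including checks for password
    # and user account restrictions.
    return "bind"
```

For example, `ldap_auth_mode({"MSTR_LDAP_PASSWORD_COMPARE": "1"})` returns `"password-compare"`, while an unset or `0` value yields `"bind"`.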
Make sure the LDAP server is set up to accept anonymous authentication.
1 In MicroStrategy Desktop, log in to a project source. You must log in as a user with administrative privileges.
2 Right-click the project source that you want the anonymous/guest users to have access to.
3 Select Modify Project Source. The Project Source Manager opens.
4 In the Project Source Manager, make sure Use LDAP Authentication is selected.
5 Log in to MicroStrategy using any LDAP server user login that is not imported or linked to a MicroStrategy user. See the Desktop online help for details to access the User Editor and Intelligence Server Configuration Editor, and to use the features available there.
To prevent this problem, schedule password changes for a time when users are unlikely to run scheduled reports. If users with database passthrough authentication regularly run scheduled reports, tell them to reschedule all of their reports whenever their passwords are changed.
To support database passthrough authentication
1 In MicroStrategy Desktop, log in to a project source. You must log in as a user with administrative privileges.
2 In Desktop's Folder List, expand Administration, and then expand User Manager.
3 Expand a MicroStrategy group, then right-click a user and select Edit. The User Editor opens.
4 Select the Authentication tab.
5 In the Warehouse Passthrough area, enter the following information:
Warehouse passthrough login: Enter the LDAP user login to connect to and execute reports or queries against the database.
Warehouse passthrough password: Enter the password for the LDAP user login.
server will be applied within MicroStrategy the next time an LDAP user logs in to MicroStrategy. This keeps the LDAP directory and the MicroStrategy metadata in synchronization. The steps for this process are detailed in To import users and groups, page 85.

By synchronizing users and groups between your LDAP server and MicroStrategy, you can update the imported LDAP users and groups in the MicroStrategy metadata with the following modifications:

User synchronization: User details such as the user name in MicroStrategy are updated with the latest definitions in the LDAP directory. User membership in MicroStrategy for LDAP groups is updated to reflect an LDAP user's group membership in the LDAP directory. This includes adding a user to any groups in MicroStrategy that the user has recently been added to in the LDAP directory, and removing a user from any groups in MicroStrategy that the user has recently been removed from in the LDAP directory.

Group synchronization: Group details such as the group name in MicroStrategy are updated with the latest definitions in the LDAP directory. Group membership in MicroStrategy for LDAP groups is updated to reflect an LDAP group's group membership in the LDAP directory. This includes adding a group to any groups in MicroStrategy that the group has recently been added to in the LDAP directory, and removing a group from any groups in MicroStrategy that the group has recently been removed from in the LDAP directory.
When synchronizing LDAP users and groups in MicroStrategy, you should be aware of the following circumstances:

If an LDAP user or group has been given new membership in a group that has not been imported or linked to a group in MicroStrategy, and import options are turned off, the group cannot be imported into MicroStrategy and thus cannot apply its permissions in MicroStrategy. For example, User1 is a member of Group1 in MicroStrategy and both have been imported into MicroStrategy. Then User1 is removed from Group1 in LDAP and given membership in Group2, but Group2 is not imported or linked to a MicroStrategy group. Upon synchronization, User1 is removed from Group1 and is recognized as a member of Group2, but any permissions for Group2 are not applied for the user until Group2 is imported or linked to a MicroStrategy group. In the meantime, User1 is given the security of the LDAP Users group.

MicroStrategy users and groups imported from the LDAP directory, or linked to LDAP users and groups, are NOT automatically deleted when their associated LDAP users and groups are deleted from the LDAP directory. You can revoke users' and groups' privileges in MicroStrategy and remove the users and groups manually.

Regardless of your synchronization settings, if a user's password is modified in the LDAP directory, the user must log in to MicroStrategy with the new password. LDAP passwords are not stored in the MicroStrategy metadata. MicroStrategy uses the credentials provided by the user to search for and validate the user in the LDAP directory.
Consider a user named Joe Doe who belongs to a particular group, Sales, when he is imported into MicroStrategy. Later, he is moved to a different group, Marketing, in the LDAP directory. The LDAP user Joe Doe and LDAP groups Sales and Marketing have been imported into MicroStrategy. The images below show a sample LDAP directory with user Joe Doe being moved within the LDAP directory from Sales to
Marketing. Also, the user name for Joe Doe is changed to Joseph Doe, and the group name for Marketing is changed to MarketingLDAP.
The following table describes what happens with users and groups in MicroStrategy if users, groups, or both users and groups are synchronized.
Sync Users?  Sync Groups?  User Name After Sync  Group Name After Sync  User Membership After Sync
No           No            Joe Doe               Marketing              Sales
No           Yes           Joe Doe               MarketingLDAP          Sales
Yes          No            Joseph Doe            Marketing              Marketing
Yes          Yes           Joseph Doe            MarketingLDAP          MarketingLDAP
In another scenario, User A belongs to Groups 1, 2, and 3 in LDAP. User A is imported into MicroStrategy, along with his groups. Subsequently, User A is added to Group 4 in the LDAP directory. If the Synchronize Users and Synchronize Groups options are selected, the next time User A logs into MicroStrategy, Group 4 is automatically imported into MicroStrategy metadata and is assigned as an additional group to User A.
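The synchronization outcomes in the Joe Doe scenario can be sketched as a small function. This is a conceptual illustration only: the data shapes and names are hypothetical, and the real updates happen inside Intelligence Server against the metadata.

```python
def synchronize(mstr_user, ldap_user, mstr_group_names, ldap_group_names,
                sync_users, sync_groups):
    """Return (user name, membership group name) after synchronization,
    reproducing the outcomes in the table above. Illustrative sketch."""
    # User name and group membership follow LDAP only when users are synced.
    name = ldap_user["name"] if sync_users else mstr_user["name"]
    group_id = ldap_user["group"] if sync_users else mstr_user["group"]
    # Group names follow LDAP only when groups are synced.
    names = ldap_group_names if sync_groups else mstr_group_names
    return name, names[group_id]

# Joe Doe is moved from Sales to Marketing in LDAP, renamed Joseph Doe,
# and the Marketing group is renamed MarketingLDAP.
mstr_user = {"name": "Joe Doe", "group": "sales"}
ldap_user = {"name": "Joseph Doe", "group": "marketing"}
mstr_groups = {"sales": "Sales", "marketing": "Marketing"}
ldap_groups = {"sales": "Sales", "marketing": "MarketingLDAP"}

print(synchronize(mstr_user, ldap_user, mstr_groups, ldap_groups, True, True))
# ('Joseph Doe', 'MarketingLDAP')
```

Running the function for all four combinations of the two flags reproduces the four rows of the table above.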
If you have trouble getting synchronization to work properly, see the section LDAP authentication, page 559 in Chapter 8, Troubleshooting the System.
To synchronize users and/or groups
1 In MicroStrategy Desktop, log in to a project source. You must log in as a user with administrative privileges.
2 From the Administration menu, select Server, and then select Configure MicroStrategy Intelligence Server. The MicroStrategy Intelligence Server Configuration Editor opens.
3 Expand the LDAP category, and then select User/Group Import. The User/Group Import options are displayed.
4 Do one or both of the following:
To synchronize users, select the Synchronize MicroStrategy Login/User Name with LDAP check box.
To synchronize groups, select the Synchronize MicroStrategy Group Name with LDAP check box.
5 Select the Synchronize check boxes for users and groups, depending on whether you want to synchronize users, groups, or both.
6 Click OK to accept your changes and close the MicroStrategy Intelligence Server Configuration Editor.
Security views
Most databases provide a way to restrict access to data. For example, a user may be able to access only certain tables, or may be restricted to certain rows and columns within a table. The subset of data available to a user is called the user's security view.

Why would you use security views? Security views are usually used when splitting fact tables by columns and splitting fact tables by rows (discussed below) cannot be used. The rules that determine which rows each user is allowed to see typically vary so much that users cannot be separated into a manageable number of groups. In the extreme, each user is allowed to see a different set of rows.

Note that restrictions on tables, or on rows and columns within tables, may not be directly evident to a user. However, they do affect the values displayed in a report. You need to inform users which data they can access so that they do not inadvertently run a report that yields misleading final results. For example, if a user has access to only half of the sales information in the data warehouse but runs a summary report on all sales, the summary reflects only half of the sales. Reports do not indicate the database security view used to generate the report.

Consult your database vendor's product documentation to learn how to create security views for your particular database.
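As a sketch of the idea, the following uses SQLite from Python to emulate a row-restricting security view. Table and view names are invented for illustration, and the GRANT/REVOKE statements a real RDBMS would use to force users through the view are omitted because SQLite does not support them.

```python
import sqlite3

# In-memory stand-in for a warehouse fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fact_sales (region TEXT, amount REAL);
    INSERT INTO fact_sales VALUES
        ('East', 100.0), ('West', 250.0), ('East', 50.0);
    -- The security view: this user may see only East rows.
    CREATE VIEW v_east_sales AS
        SELECT * FROM fact_sales WHERE region = 'East';
""")
# A summary run against the view reflects only the visible portion of
# the data -- the "misleading summary" pitfall described above.
total = conn.execute("SELECT SUM(amount) FROM v_east_sales").fetchone()[0]
print(total)  # 150.0
```

A user querying `v_east_sales` sees a total of 150.0 even though the full table sums to 400.0, which is exactly why users should be told which data their security view exposes.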
Why would you split fact tables by rows? If the data to be secured can be separated by rows, then this may be a useful technique. For example, suppose a fact table contains the key Customer ID, Address, Member Bank and two fact columns, as shown below:
Customer ID  Customer Address  Member Bank     Transaction Amount ($)  Current Balance ($)
123456       12 Elm St.        1st National    400.80                  40,450.00
945940       888 Oak St.       Eastern Credit  150.00                  60,010.70
908974       45 Crest Dr.      Peoples Bank    3,000.00                100,009.00
886580       907 Grove Rd.     1st National    76.35                   10,333.45
562055       1 Ocean Blvd.     Eastern Credit  888.50                  1,000.00
You can split the table into separate tables (based on the value in Member Bank), one for each bank: 1st National, Eastern Credit, and so on. In this example, the table for 1st National bank would look like this:
Customer ID  Customer Address  Member Bank   Transaction Amount ($)  Current Balance ($)
123456       12 Elm St.        1st National  400.80                  40,450.00
886580       907 Grove Rd.     1st National  76.35                   10,333.45
This makes it simple to grant permissions by table to managers or account executives who should only be looking at customers for a certain bank. In most RDBMSs, split fact tables by rows are invisible to system users. Although there are many physical tables, the system sees one logical fact table.
Splitting fact tables by rows for security purposes should not be confused with the partitioning support that Intelligence Server provides, which splits fact tables by rows for performance benefits. For more information on partitioning, see the MicroStrategy Advanced Reporting Guide.
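A minimal sketch of the row split, again using SQLite from Python: one physical table per bank, with a UNION ALL view standing in for the single logical fact table the system sees. Names are illustrative; consult your RDBMS documentation for the equivalent partitioned-view syntax and per-table permissions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One physical table per bank, as in the example above.
    CREATE TABLE fact_first_national (customer_id INTEGER, amount REAL);
    CREATE TABLE fact_eastern_credit (customer_id INTEGER, amount REAL);
    INSERT INTO fact_first_national VALUES (123456, 400.80), (886580, 76.35);
    INSERT INTO fact_eastern_credit VALUES (945940, 150.00), (562055, 888.50);
    -- The single logical fact table; permissions on the physical
    -- tables (not shown) restrict who can read which rows.
    CREATE VIEW fact_transactions AS
        SELECT customer_id, '1st National' AS member_bank, amount
          FROM fact_first_national
        UNION ALL
        SELECT customer_id, 'Eastern Credit' AS member_bank, amount
          FROM fact_eastern_credit;
""")
count = conn.execute("SELECT COUNT(*) FROM fact_transactions").fetchone()[0]
print(count)  # 4
```

Granting a manager SELECT on `fact_first_national` only would then limit that manager to 1st National customers, while the logical view keeps reporting SQL unchanged.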
You can split the table into two tables, one for the marketing department and one for the finance department. The marketing fact table would contain everything except the financial fact columns as follows:
Customer ID Customer Address Member Bank
The second table used by the financial department would contain only the financial fact columns but not the marketing-related information as follows:
Customer ID  Transaction Amount ($)  Current Balance ($)
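The column split can be sketched the same way: two physical tables share the key, and users granted access to both can reassemble full rows with a join. Table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Marketing table: the key plus non-financial columns only.
    CREATE TABLE fact_marketing (
        customer_id INTEGER PRIMARY KEY,
        customer_address TEXT, member_bank TEXT);
    -- Finance table: the key plus the financial fact columns only.
    CREATE TABLE fact_finance (
        customer_id INTEGER PRIMARY KEY,
        transaction_amount REAL, current_balance REAL);
    INSERT INTO fact_marketing VALUES (123456, '12 Elm St.', '1st National');
    INSERT INTO fact_finance   VALUES (123456, 400.80, 40450.00);
""")
# Joining on the shared key reassembles the full row when both tables
# are permitted to the user.
row = conn.execute("""
    SELECT m.member_bank, f.current_balance
      FROM fact_marketing AS m
      JOIN fact_finance AS f USING (customer_id)
""").fetchone()
print(row)  # ('1st National', 40450.0)
```

Because each department is granted only its own table, the finance columns never become visible to marketing users, yet the shared key preserves the option of a combined view for users cleared for both.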
Connection mappings
Connection mappings allow you to assign a user or group in the MicroStrategy system to a specific login ID on the data warehouse RDBMS. The mappings are typically used to take advantage of one of several RDBMS data security techniques (security views, split fact tables by rows, split fact tables by columns) that you may have already created. You define connection mappings with the Project Configuration editor in MicroStrategy Desktop or with the Command Manager in MicroStrategy Administrator. To create a connection mapping, you assign a user or group either a database connection or database login that is different from the default. For information on this, see Connecting to the data warehouse, page 19.
[Diagram: all users connect through the project to the data warehouse using the default database login.]
One case in which you may wish to use connection mappings is if you have existing security views defined in the data warehouse and you wish to allow MicroStrategy users' jobs to execute on the data warehouse using those specific login IDs. For example:
The CEO can access all data (warehouse login ID = CEO)
All other users have limited access (warehouse login ID = MSTR users)
In this case, you would need to create a user connection mapping within MicroStrategy for the CEO. To do this, you would:
Create a new database login definition for the CEO in MicroStrategy so it matches his or her existing login ID on the data warehouse
Create the new connection mapping in the MicroStrategy project to specify that the CEO user uses the new database login
This is shown in the diagram below in which the CEO connects as CEO (using the new database login called CEO) and all other users use the default database login MSTR users.
[Diagram: all users and the CEO share the same project, database connection, and DSN; all other users connect with the MSTR users database login, while the CEO connects with the CEO database login.]
Both the CEO and all the other users use the same project, database instance, database connection (and DSN), but the database login is different for the CEO. If we were to create a connection mapping in the MicroStrategy Tutorial project according to this example, it would look like the diagram below.
For information on creating a new database connection, see Connecting to the data warehouse, page 19. For information on creating a new database login, see Connecting to the data warehouse, page 19.

Connection mappings can also be made for user groups and are not limited to individual users. Continuing the example above, if you have a Managers group within the MicroStrategy system that can access most data in the data warehouse (warehouse login ID = Managers), you could create another database login and then create another connection mapping to assign it to the Managers user group.

Another case in which you may wish to use connection mappings is if you need to have users connect to two data warehouses using the same project. In this case, both data warehouses must have the same structure so that the project works with both. This may be applicable if you have a data warehouse with domestic data and another with foreign data, and you want users to be directed to one or the other based on the user group to which they belong when they log in to the MicroStrategy system. For example, if you have two user groups such that:
US users connect to the U.S. data warehouse (data warehouse login ID = MSTR users)
Europe users connect to the London data warehouse (data warehouse login ID = MSTR users)
In this case, you would need to create a user connection mapping within MicroStrategy for both user groups. To do this, you would:
Create two database connections in MicroStrategy, one to each data warehouse (this assumes that the two DSNs already exist)
Create two connection mappings in the MicroStrategy project that link the groups to the different data warehouses via the two new database connection definitions
[Diagram: the US users and Europe users groups share one project, database instance, and database login (MSTR users), but are mapped to different database connections and therefore different DSNs (US and London).]
The project, database instance, and database login can be the same, but the connection mapping specifies different database connections (and therefore, different DSNs) for the two groups.
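Conceptually, resolving a connection mapping is a lookup from a user or group to a database connection definition. The sketch below illustrates the two examples above with hypothetical data; the precedence order shown (a user-level mapping before a group-level mapping before the project default) is an assumption for illustration, not documented behavior.

```python
# Project default connection: the US warehouse with the shared login.
DEFAULT_CONNECTION = {"dsn": "US", "db_login": "MSTR users"}

def resolve_connection(user, user_groups, user_mappings, group_mappings,
                       default=DEFAULT_CONNECTION):
    """Pick the database connection for a user: user mapping first,
    then the first matching group mapping, then the project default."""
    if user in user_mappings:
        return user_mappings[user]
    for group in user_groups:
        if group in group_mappings:
            return group_mappings[group]
    return default

group_mappings = {
    "US users":     {"dsn": "US",     "db_login": "MSTR users"},
    "Europe users": {"dsn": "London", "db_login": "MSTR users"},
}
user_mappings = {"CEO": {"dsn": "US", "db_login": "CEO"}}

print(resolve_connection("CEO", [], user_mappings, group_mappings)["db_login"])
# CEO
print(resolve_connection("anna", ["Europe users"], user_mappings,
                         group_mappings)["dsn"])
# London
```

The CEO reaches the warehouse under the CEO login, a member of Europe users is routed to the London DSN, and everyone else falls back to the project default.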
Passthrough execution
You can link a MicroStrategy user to an RDBMS login ID using the User Editor (on the Authentication tab, specify the Warehouse Login and Password) or using Command Manager. If you are using warehouse database authentication, you may have already done this. You are not required to use warehouse database authentication to use the passthrough execution feature. It works for other authentication modes as well. You can configure each project to use either connection mappings or the linked warehouse login ID when users execute reports, documents, or browse attribute elements. By default, warehouse passthrough execution is turned off, and the system uses connection mappings. To configure this for a project, use the Project Configuration editor, select the Database Instances category, click Execution, then select
or clear the Use linked warehouse login for execution check box as follows:
If the check box is selected, the project uses the linked warehouse login ID and password as defined in the User Editor (Authentication tab). If no warehouse login ID is linked to a user, Intelligence Server uses the default connection and login ID for the project's database instance.
If the check box is cleared, the project uses the connection mappings defined for the user. Intelligence Server connects to the data warehouse using the login ID defined in the user's connection mapping, defined as part of the project's database instance. If no connection mapping is defined for the user, Intelligence Server uses the default connection and login ID for the project's database instance.
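The two check-box states amount to a simple selection rule, sketched here with hypothetical names and data shapes:

```python
def execution_login(user, use_linked_warehouse_login, default_login):
    """Sketch of the login selection described above. `user` is a
    hypothetical record with optional "linked_login" and
    "connection_mapping" entries; names are illustrative."""
    if use_linked_warehouse_login:
        # Check box selected: linked warehouse login, else the default.
        return user.get("linked_login") or default_login
    # Check box cleared: connection mapping login, else the default.
    return user.get("connection_mapping") or default_login
```

For example, with the check box selected a user with linked login `"CEO"` executes as `"CEO"`, while a user with no linked login falls back to the database instance's default login.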
Security filters
A security filter is a filter object that you assign to users or groups to narrow their result set when they execute reports or browse elements.
For example, two regional managers may have two different security filters for their regions: one for the Northeast and the other for the Southwest. This means that if these two regional managers run the same report, they may get different results. For an extended example, with images of the reports that result, see Security filters example, page 112.

Security filters are made up of qualifications, like a regular MicroStrategy filter. Security filters are the same as regular filters except that relationship filters and metric qualifications are not allowed; a security filter may contain only attribute qualifications. A security filter may include as many filtering expressions as you like, joined together by logical operators. For more information on creating filters, see the Advanced Filters chapter in the MicroStrategy Advanced Reporting Guide.
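Conceptually, a security filter's attribute qualifications are applied in addition to the report's own filter when the query is generated. The sketch below is illustrative only; the expression strings and function name are invented, and MicroStrategy builds the actual SQL itself.

```python
def effective_filter(report_filter, security_filters):
    """Combine a report's own filter with the user's security filter
    qualifications, ANDed together. Conceptual sketch only."""
    clauses = [c for c in [report_filter, *security_filters] if c]
    return " AND ".join(f"({c})" for c in clauses) if clauses else ""

# Even a report with an empty filter is narrowed by the security filter.
print(effective_filter("", ["Subcategory = 'TV'"]))
# (Subcategory = 'TV')
```

This is why, in the examples that follow, a report whose filter is empty still returns only TV-subcategory rows for a user whose security filter is Subcategory=TV.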
Security filters may be assigned to user groups as well as individual users. Because a user may be assigned a security filter directly and because a user may belong to one or more groups, it is possible that multiple security filters will be merged for use when executing reports or browsing elements (see Merging security filters, page 120). You should inform your users of the security filters assigned to them or their group. If you do not inform them of their security filter, they may not know that the data they see in their reports has been filtered. A security filter can be created using the Security Filter Manager. To access the Security Filter Manager, in Desktop from the Administration menu, select Projects and then choose Security Filter Manager. For steps to create a security filter or assign a security filter to a user or group, see the Desktop online help. For details on merging multiple security filters, see Merging security filters, page 120. For details on using security filters with metric levels, see Security filters and metric levels, page 114.
When this user executes a simple report with the following definition, he sees the results below:
Template: Category, Subcategory, and Item in the rows, Revenue in the columns, and Year on the page-by axis
Filter: Empty

Category     Subcategory  Item                           Revenue
Electronics  TVs          GPX 5" AM/FM Portable TV       $11,650
Electronics  TVs          RCA 32" Stereo TV              $100,800
Electronics  TVs          RCA Indoor TV Antenna          $2,250
Electronics  TVs          RCA Power TV Antenna           $8,100
Electronics  TVs          RCA 27" Stereo TV              $54,000
Electronics  TVs          RCA 13" TV/VCR                 $42,500
Electronics  TVs          RCA 13" TV                     $18,980
Electronics  TVs          RCA 4" LCD Color TV            $28,600
Electronics  TVs          RCA 2" Diagonal LCD Color TV   $17,000
Electronics  TVs          Sharp 25" TV/VCR Combo         $60,420
Electronics  TVs          Sharp 25" Stereo Color TV      $40,040
Electronics  TVs          Sharp 13" 2-Head TV/VCR        $33,580
Electronics  TVs          Sharp 32" Color TV             $99,450
Electronics  TVs          Sony 32" Trinitron Television  $112,800
Electronics  TVs          Sony 35" Trinitron Television  $76,923
Total                                                    $707,093
Only the items in the TV subcategory are returned, even though the report filter did not qualify on any attribute. If this user executes another report with the following definition, he gets different results:
Template: Category in the rows, Revenue in the columns, and Year on the page-by axis
Filter: Empty
Category     Revenue
Electronics  $707,093
Only the Revenue from the TV subcategory is displayed. This illustrates how the security filter of Subcategory=TV is applied to every report.
Report 1
Security filter: Subcategory=TV
Top range attribute: none
Bottom range attribute: none
Category     Subcategory  Item                           Revenue   Subcategory Revenue  Category Revenue
Electronics  TVs          GPX 5" AM/FM Portable TV       $11,650   $707,093             $5,952,984
Electronics  TVs          RCA 32" Stereo TV              $100,800  $707,093             $5,952,984
Electronics  TVs          RCA Indoor TV Antenna          $2,250    $707,093             $5,952,984
Electronics  TVs          RCA Power TV Antenna           $8,100    $707,093             $5,952,984
Electronics  TVs          RCA 27" Stereo TV              $54,000   $707,093             $5,952,984
Electronics  TVs          RCA 13" TV/VCR                 $42,500   $707,093             $5,952,984
Electronics  TVs          RCA 13" TV                     $18,980   $707,093             $5,952,984
Electronics  TVs          RCA 4" LCD Color TV            $28,600   $707,093             $5,952,984
Electronics  TVs          RCA 2" Diagonal LCD Color TV   $17,000   $707,093             $5,952,984
Electronics  TVs          Sharp 25" TV/VCR Combo         $60,420   $707,093             $5,952,984
Electronics  TVs          Sharp 25" Stereo Color TV      $40,040   $707,093             $5,952,984
Electronics  TVs          Sharp 13" 2-Head TV/VCR        $33,580   $707,093             $5,952,984
Electronics  TVs          Sharp 32" Color TV             $99,450   $707,093             $5,952,984
Electronics  TVs          Sony 32" Trinitron Television  $112,800  $707,093             $5,952,984
Electronics  TVs          Sony 35" Trinitron Television  $76,923   $707,093             $5,952,984
As shown in the example above, Item-level detail is displayed for only the items within the TV category. The Subcategory Revenue is displayed for all items within the TV subcategory. The Category Revenue is displayed for all items in the Category, including items that are not part of the TV subcategory. However, only the Electronics category is displayed. This illustrates how the security filter Subcategory=TV is raised to the category level such that Category=Electronics is the filter used with Category Revenue.
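The raising behavior can be sketched as a hierarchy lookup: every subcategory allowed by the security filter contributes its parent category to the raised filter. The mapping below is hypothetical sample data, and the function name is invented.

```python
# Hypothetical hierarchy lookup: each subcategory's parent category.
SUBCATEGORY_TO_CATEGORY = {
    "TV": "Electronics",
    "Audio Equipment": "Electronics",  # invented sibling, for illustration
}

def raise_to_category(allowed_subcategories, parents=SUBCATEGORY_TO_CATEGORY):
    """Raise a Subcategory-level security filter to the Category level,
    as in the Subcategory=TV -> Category=Electronics example above."""
    return sorted({parents[s] for s in allowed_subcategories})

print(raise_to_category(["TV"]))  # ['Electronics']
```

The raised filter Category=Electronics is what gets applied to the Category Revenue metric, which is why that column shows the full Electronics total rather than the TV-only total.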
Report 2
Security filter: Subcategory=TV
Top range attribute: Subcategory
Bottom range attribute: none

Category     Subcategory  Item                           Revenue   Subcategory Revenue  Category Revenue
Electronics  TVs          GPX 5" AM/FM Portable TV       $11,650   $707,093             $707,093
Electronics  TVs          RCA 32" Stereo TV              $100,800  $707,093             $707,093
Electronics  TVs          RCA Indoor TV Antenna          $2,250    $707,093             $707,093
Electronics  TVs          RCA Power TV Antenna           $8,100    $707,093             $707,093
Electronics  TVs          RCA 27" Stereo TV              $54,000   $707,093             $707,093
Electronics  TVs          RCA 13" TV/VCR                 $42,500   $707,093             $707,093
Electronics  TVs          RCA 13" TV                     $18,980   $707,093             $707,093
Electronics  TVs          RCA 4" LCD Color TV            $28,600   $707,093             $707,093
Electronics  TVs          RCA 2" Diagonal LCD Color TV   $17,000   $707,093             $707,093
Electronics  TVs          Sharp 25" TV/VCR Combo         $60,420   $707,093             $707,093
Electronics  TVs          Sharp 25" Stereo Color TV      $40,040   $707,093             $707,093
Electronics  TVs          Sharp 13" 2-Head TV/VCR        $33,580   $707,093             $707,093
Electronics  TVs          Sharp 32" Color TV             $99,450   $707,093             $707,093
Electronics  TVs          Sony 32" Trinitron Television  $112,800  $707,093             $707,093
Electronics  TVs          Sony 35" Trinitron Television  $76,923   $707,093             $707,093
As shown in the example above, the Category Revenue is displayed only for the items within the TVs subcategory. The security filter Subcategory=TV is not raised to the category level, because Category is a level above the Top range attribute of Subcategory.

Report 3

Security filter: Subcategory=TV
Top range attribute: none
Bottom range attribute: Subcategory

Category     Subcategory  Item                             Revenue    Subcategory Revenue  Category Revenue
Electronics  TVs          GPX 5" AM/FM Portable TV         $707,093   $707,093             $5,952,984
Electronics  TVs          RCA 32" Stereo TV                $707,093   $707,093             $5,952,984
Electronics  TVs          RCA Indoor TV Antenna            $707,093   $707,093             $5,952,984
Electronics  TVs          RCA Power TV Antenna             $707,093   $707,093             $5,952,984
Electronics  TVs          RCA 27" Stereo TV                $707,093   $707,093             $5,952,984
Electronics  TVs          RCA 13" TV/VCR                   $707,093   $707,093             $5,952,984
Electronics  TVs          RCA 13" TV                       $707,093   $707,093             $5,952,984
Electronics  TVs          RCA 4" LCD Color TV              $707,093   $707,093             $5,952,984
Electronics  TVs          RCA 2" Diagonal LCD Color TV     $707,093   $707,093             $5,952,984
Electronics  TVs          Sharp 25" TV/VCR Combo           $707,093   $707,093             $5,952,984
Electronics  TVs          Sharp 25" Stereo Color TV        $707,093   $707,093             $5,952,984
Electronics  TVs          Sharp 13" 2-Head TV/VCR          $707,093   $707,093             $5,952,984
Electronics  TVs          Sharp 32" Color TV               $707,093   $707,093             $5,952,984
Electronics  TVs          Sony 32" Trinitron Television    $707,093   $707,093             $5,952,984
Electronics  TVs          Sony 35" Trinitron Television    $707,093   $707,093             $5,952,984
As shown in the example above, Item-level detail is not displayed, since Item is a level below the bottom level of Subcategory. Instead, data for the entire Subcategory is shown for each item. Data at the Subcategory level is essentially the lowest level of granularity the user is allowed to see. To further illustrate the effect of the Top and Bottom properties used in the above reports, we provide the following: The diagram below illustrates the data access for a user with no Top or Bottom Range Attribute defined, as in Report 1. At the level of the security filter (=Subcategory) and below, the user cannot see data outside his or her security filter. Above the level of the security filter, the user can see data outside the security filter if it is in a
metric with absolute filtering for that level. Even in this case, the user sees only data for the Category in which his or her security filter is defined.
The diagram below illustrates the effect of a defined Top Range Attribute on the security filter=TV, as in Report 2. In this case, the user cannot see any data outside of his or
her security filter. This is true even at levels above the Top level, regardless of whether metrics with absolute filtering are used.
The next diagram illustrates the effect of a defined Bottom Range Attribute on the security filter=TV, as in Report 3. The user cannot see data aggregated at a lower level than the Bottom level.
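The visibility rules described for Reports 1 through 3 can be summarized in a small sketch. This is illustrative Python, not MicroStrategy code; the function and level names are assumptions made for the example, and the real evaluation happens inside Intelligence Server.

```python
# Hierarchy levels ordered from most aggregated (top) to most detailed (bottom).
HIERARCHY = ["Category", "Subcategory", "Item"]

def depth(level):
    return HIERARCHY.index(level)

def can_see_outside_filter(query_level, filter_level, top_range=None,
                           absolute_metric=False):
    # With a Top range attribute, data outside the security filter is never
    # visible, even above the Top level (Report 2).  With no Top range, data
    # outside the filter is visible only above the filter level, and only
    # through a metric with absolute filtering at that level (Report 1).
    if top_range is not None:
        return False
    return depth(query_level) < depth(filter_level) and absolute_metric

def effective_granularity(query_level, bottom_range=None):
    # With a Bottom range attribute, data is never aggregated below it;
    # Item-level cells show Subcategory-level data instead (Report 3).
    if bottom_range is not None and depth(query_level) > depth(bottom_range):
        return bottom_range
    return query_level
```

For example, with the security filter Subcategory=TV and no Top range, `can_see_outside_filter("Category", "Subcategory", absolute_metric=True)` is true, matching Report 1; setting `top_range="Subcategory"` makes it false, matching Report 2.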
attributes they derive from belong in the same hierarchy, for example, Country and Region, or Year and Month. They are considered unrelated if they do not belong in the same hierarchy, for example, Product and Year. Below are two examples to illustrate this behavior in the security filter model. For both examples, we use the following user groups:
Group        Security Filter
Group1 (G1)  None
Group2 (G2)  Country = USA
Group3 (G3)  Year = 2002
Group4 (G4)  Region = Northeast

* Gi = Any group
The User Login system prompt is located in MicroStrategy Desktop: in the Public Objects folder, navigate to Prompts, then System Prompts. This system prompt provides flexibility when implementing security mechanisms in MicroStrategy, as described in the following sections.
security filter. For details on implementing these examples, see MicroStrategy Tech Notes TN5700-800-1327 and TN5200-800-0481.
Privileges
As discussed earlier in this chapter, there are different types of users and groups in the user community. As the system administrator, it is your responsibility to assign privileges to users and groups; privileges give you full control over the user experience. Based on their privileges, users and user groups can perform different types of operations in the MicroStrategy system. If a user does not have a certain privilege, he or she does not have access to that functionality in the interface.

Privileges can be assigned to users and user groups directly or through security roles. The difference is that directly assigned privileges apply across all projects, while privileges assigned through a security role apply only within a specific project (see Security Roles, page 126). The predefined user groups, that is, Web Reporter, Web Analyst, Web Professional, Desktop Analyst, Desktop Designer, and Architect, possess all the privileges associated with their corresponding products. System Administrators have all privileges, and all other predefined groups, that is, Everyone, Public/Guest, LDAP Public, LDAP Users, and System Monitors, have no predefined privileges.

A user's privileges within a given project include the following:

All privileges assigned directly to the user
All privileges assigned to all groups of which the user is a member (groups inherit privileges from their parent groups)
All privileges assigned to all security roles that are assigned to the user within the project
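The three sources above combine as a union. The following is a minimal Python sketch of that resolution; the data shapes and function name are illustrative assumptions, not a MicroStrategy API.

```python
def effective_privileges(user, groups, memberships, role_grants):
    # Union of three sources: privileges assigned directly to the user,
    # privileges of every group the user belongs to (following parent
    # links, since groups inherit from their parent groups), and
    # privileges from security roles assigned within the project.
    privs = set(user.get("privileges", []))
    pending = list(memberships.get(user["name"], []))
    seen = set()
    while pending:
        name = pending.pop()
        if name in seen:
            continue
        seen.add(name)
        group = groups[name]
        privs |= set(group.get("privileges", []))
        if group.get("parent"):
            pending.append(group["parent"])
    for role in role_grants.get(user["name"], []):
        privs |= set(role)
    return privs
```

For instance, a member of a Desktop Designer group whose parent group is Desktop Analyst receives the privileges of both groups, plus any directly assigned privileges and security role grants.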
Privileges may be granted within a specific project or across all projects. You may assign privileges with one of the following characteristics:

Object creation privileges, for example, Create new folder
Application functionality privileges, for example, Pivot report

There is a special privilege called Bypass all object security access checks. For users who have this special privilege, the access control permissions (discussed later) are ignored.

In the User Editor, the privilege categories reflect product offerings, with the exceptions of Common Privileges and Administration:

Web Reporter
Web Analyst
Web Professional
Desktop Analyst
Desktop Designer
Architect
For a complete list of privileges and what they control in the system, see List of all privileges, page 847.
To assign privileges to users or groups
1 From MicroStrategy Desktop User Manager, edit the user with the User Editor or edit the group with the Group Editor.
2 Click the Project Access tab.
3 Select the check boxes to grant privileges to the user or group.

Rather than assigning individual users and groups these privileges, it may be easier to create security roles (collections of privileges) and assign them to users and groups. You can then assign additional privileges individually when there are exceptions.
Security Roles
Security roles are collections of privileges that are assigned to users and groups and can be reused from project to project. For example, suppose you have two types of users with different functionality needs: Executive Users, who need to run, sort, and print reports, and Business Analysts, who need additional capabilities to drill and change subtotal definitions. In this case, you can create two security roles to suit these two different types of users.

Security roles are different from user groups. A user group is a collection of users that can be assigned privileges as a group. A security role is a collection of privileges that can be assigned to a user all at once. Once you create a security role, you can save it and use it in any project registered with the Intelligence Server. The users associated with a particular security role can vary by project. This means that a user could have the Executive Users functionality in one project and the Business Analysts functionality in another.
The following security roles are provided with the MicroStrategy system by default:

Normal Users: have no privileges granted.
Power Users: have a set of privileges typical of users who need access to advanced functionality in the interface.
You can create security roles in several ways:

In the User Editor (for more information, see the Using the User Editor topic in the System Administration Online Help)
In the Security Role Editor (select Security Roles from the Administration menu; for more information, see the Using the Security Role Editor topic in the System Administration Online Help)
Using Command Manager (for more information, see Automating processes using Command Manager, page 464)
Desktop Analyst
Desktop Designer

In the case of the Desktop user groups, Desktop Designer inherits the privileges of Desktop Analyst and therefore has more privileges than Desktop Analyst.
System Monitors
System Administrators

The System Administrators user group inherits the privileges of the System Monitors user group and therefore has more privileges than System Monitors.
Permissions
Permissions define the degree of control users have over individual objects in the system. For example, in the case of a report, a user may have permission to view the report definition and execute the report, but not to modify the report definition or delete the report.

Permissions differ from privileges in that permissions restrict or allow actions related to a single object, while privileges restrict or allow actions across all objects in a project. While privileges are assigned to users (either individually, through groups, or with security roles), permissions are assigned to objects. More precisely, each object has an Access Control List (ACL) that specifies which permissions different sets of users have on that object.

When you edit an object's ACL using the object's Properties dialog box, you can assign a predefined grouping of permissions or you can create a custom grouping. The predefined groupings are View, Modify, Full Control, and Custom; for the specific permissions each grouping grants, see Access Control List permissions for an object, page 844.
When you give users only Browse access to a folder, they can see that folder displayed, but cannot see a list of objects within the folder. However, if they perform a search, and objects within that folder match the search criteria, they can see those objects. To deny a user the ability to see objects within a folder, you must deny all access directly to the objects in the folder.
For example, grant the Browse permission to the folder, but assign Denied All to the folder's child objects, then select the Apply changes in permissions to all children objects check box. This allows a user to see the folder, but nothing inside it. Alternatively, you can assign Denied All to the folder and its children, in which case the user cannot see the folder or anything inside it. Modifying the ACL of a shortcut object does not modify the ACL of that shortcut's parent object.
The default ACL of a newly created object has the following characteristics:

The owner (the user who created the object) has Full Control permission.
The ACLs for all other users are set according to the Children ACL of the parent folder.
Newly created folders inherit the standard ACLs of the parent folder. They do not inherit the Children ACL.
For example, if the Children setting of the parent folder's ACL includes Full Control permission for the Administrator and View permission for the Everyone group, then a newly created object inside that folder has Full Control permission for the owner, Full Control for the Administrator, and View permission for Everyone.

When you move an object to a different folder, the moved object retains its original ACL. When you copy an object, the copied object inherits its ACL from the Children ACL of the folder into which it is copied.

For a detailed list of permission groupings you can assign to an object and what each permission actually controls, see Access Control List permissions for an object, page 844. For information on what each item in the ACL means, see Access control entries, page 845. For information about using Command Manager scripts to automate ACL management, see the online help (press F1 while using Command Manager's graphical interface).
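The default-ACL rules above can be sketched as follows. This is an illustrative Python model under stated assumptions (ACLs as simple name-to-grouping dictionaries), not MicroStrategy behavior verbatim; in particular, the assumption that the copying user becomes the owner of the copy follows from the new-object rule.

```python
def default_acl(owner, parent_children_acl):
    # A new object's ACL: the owner gets Full Control; entries for all
    # other users come from the parent folder's Children ACL.
    acl = dict(parent_children_acl)
    acl[owner] = "Full Control"
    return acl

def acl_after_move(object_acl):
    # A moved object keeps its original ACL.
    return dict(object_acl)

def acl_after_copy(owner, destination_children_acl):
    # A copied object takes its ACL from the destination folder's
    # Children ACL (plus Full Control for the copy's owner).
    return default_acl(owner, destination_children_acl)
```

Running `default_acl("joe", {"Administrator": "Full Control", "Everyone": "View"})` reproduces the worked example above: Full Control for the owner and the Administrator, View for Everyone.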
To assign permissions for an object
1 From within MicroStrategy Desktop, right-click the object and select Properties. The Properties dialog box opens.
2 Click the Security tab.
3 For the user or group (click Add to select a new user or group), select the predefined set of permissions from the Object drop-down list. If the object is a folder, you can also assign permissions to objects contained in that folder using the Children drop-down list.
4 Click OK.

For specific information about each setting in the dialog box, press F1 to see the online help.
Permission levels: Higher-level permissions override lower ones when both are assigned to a user.

User permissions are higher than group permissions. If a user has permissions assigned to an object and is also in a group that has a different permission grouping assigned to the object, the highest level of permission is granted to the user. For example, if Joe has Full Control permission for a report, and Joe is a member of the Managers group, which has View permission for the report, Joe has Full Control permission for the report.

Permissions are ranked below from the highest level down to the lowest level. The permissions at the top of the list override those lower down the list when both are assigned to a user:

1 Denied All: highest-level permission
2 The permissions with the fewest restrictions
3 The permissions with the most restrictions (except Denied All): lowest-level permissions

For example, if a user is a member of multiple user groups that have different permissions assigned to an object, the user is granted the permissions with the fewest restrictions. If Joe is a member of the Managers group, which has View permission for a report, and he is also a member of the Developers group, which has Full Control permission for the report, then Joe has Full Control permission for the
report. In another example, if Tom is a member of the Managers group, which has Full Control of the report, and at the same time a member of the Developers group, which has Denied All permission, then Tom has Denied All permission for the report.
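The Joe and Tom examples above follow a small resolution rule that can be sketched in Python. This is an illustrative model, not MicroStrategy code; the ranking list is an assumption covering only the predefined groupings named in this chapter.

```python
# Predefined groupings ordered from least to most restrictive (assumed set).
RESTRICTIVE_ORDER = ["Full Control", "Modify", "View", "Browse"]

def resolve_permission(user_perm, group_perms):
    # A permission assigned directly to the user overrides any group
    # permissions.  Otherwise, Denied All (the highest-level permission)
    # overrides everything else, and among the remaining group
    # permissions the grouping with the fewest restrictions wins.
    if user_perm is not None:
        return user_perm
    if "Denied All" in group_perms:
        return "Denied All"
    return min(group_perms, key=RESTRICTIVE_ORDER.index)
```

With this model, Joe (View via Managers, Full Control via Developers) resolves to Full Control, while Tom (Full Control via Managers, Denied All via Developers) resolves to Denied All, matching the examples in the text.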
A user may have three different levels of access to an object using these two new permissions:

Use and Execute permissions: The user can use the object to make new reports, and can execute reports containing the object.
Execute permission only: The user can execute reports containing the object, but cannot make new ones. That is, someone else made the report and gave the user Execute permission on the report.
Neither permission: The user cannot make reports containing the object, nor can the user execute such reports, even if the user has Execute rights on the report.

It is not possible for a user to have the Use permission but not the Execute permission. That is, someone cannot create a report and then not have permission to execute it. The Use permission implies the Execute permission.
permission is required on all attributes, custom groups, consolidations, prompts, metrics, facts, filters, templates, and hierarchies used to define the report. Permissions are not checked on transformations and functions used to define the report. In version 7.1.x, a user's ability to execute a report was determined only by whether that user had the Use/Execute permission on the report itself. In version 7.2.x and later, if the user does not have access to an attribute, custom group, consolidation, prompt, fact, filter, template, or hierarchy used to define a report, the report execution fails. If the user does not have access to a metric used to define a report, the report execution continues, but the metric is not displayed in the report for that user. This enhancement allows a finer level of access control when executing reports: the same report can be deployed to many users, who experience different results depending on their respective permissions on its metrics.
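The 7.2.x execution check described above can be sketched as follows. This is an illustrative Python model (the data shapes and function name are assumptions), not Intelligence Server code.

```python
def execute_report(user_can_access, report_objects):
    # 7.2.x behavior: lacking access to any non-metric component object
    # (attribute, fact, filter, template, and so on) fails the run, while
    # inaccessible metrics are simply dropped from the result, so
    # different users see different columns of the same report.
    visible_metrics = []
    for obj in report_objects:
        allowed = obj["name"] in user_can_access
        if obj["type"] == "metric":
            if allowed:
                visible_metrics.append(obj["name"])
        elif not allowed:
            raise PermissionError("no access to %s %s" % (obj["type"], obj["name"]))
    return visible_metrics
```

For example, a user with access to the Region attribute and the Revenue metric, but not the Cost metric, still executes the report and sees only Revenue; a user lacking access to Region cannot execute it at all.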
1 In MicroStrategy Desktop, right-click the project that you want to upgrade and select Project Configuration. The Project Configuration Editor opens.
2 Expand the Project definition category and select the Security subcategory.
3 Select the Use 7.2.x security model option. Once the 7.2.x object security model is applied, the project can no longer use the 7.1.x security model.
4 Click Update the ACL for Reports to update the Access Control List (ACL) for each report in the project. If you update the ACL for reports, objects that were granted the Use/Execute permission in previous versions now have both the Use and Execute permissions, and objects without the Use/Execute permission now have neither. Therefore, when enabling the new security model, perform this update only if you do not intend to prevent users from executing reports based on their permissions on attributes, facts, and metrics.
5 Click OK. Your changes are saved and the Project Configuration Editor closes.
When you open the User Merge Wizard and select a project source, the wizard locks that project configuration. Other users cannot change any configuration objects until you close the wizard. For more information about locking and unlocking projects, see Locking projects, page 494. You can also merge users in batches if you have a large number of users to merge. Merging in batches can significantly speed up the merge process. Batch-merging is an option in the User Merge Wizard. Click Help for details on setting this option. The User Merge Wizard automatically merges the following properties: privileges, group memberships, profile folders, and object ownership (access control lists). You may optionally choose to merge a users or groups security roles, security filters, database connection maps, and schedules. Details about how the wizard merges each of these properties are discussed below.
ownership and access to the merged user's objects are granted to the destination user, UserA. The merged user is removed from the object's ACL. Any other users that existed in the ACL remain in the ACL. For example, before the merge, UserB owns an object that a third user, UserC, has access to. After the merge, UserA owns the object, and UserC still has access to it.
Merging schedules
The User Merge Wizard does not automatically merge a user's or group's schedules. To merge user schedules, you must select the Schedules check box on the Merge Options page in the wizard. The wizard follows the rules listed below. If the user to be merged has a schedule request for a particular report and:

The destination user does not have a schedule request for the same report, a new schedule request for that report is created for the destination user.
The destination user has a schedule request for the same report, nothing changes.
The destination user has a schedule request for a different report, a new schedule request is created as in the first rule above, and the existing schedule for the different report is preserved. The destination user now has two report schedules.
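The schedule-merge rules above reduce to a simple dictionary merge, sketched below. This is an illustrative Python model (schedules keyed by report name is an assumption), not the wizard's implementation.

```python
def merge_schedules(source_schedules, destination_schedules):
    # Schedule requests are keyed by report.  A request the destination
    # user already has is left unchanged; any report the destination does
    # not schedule gains a new request copied from the merged user.
    # Existing destination requests for other reports are preserved.
    merged = dict(destination_schedules)
    for report, request in source_schedules.items():
        if report not in merged:
            merged[report] = request
    return merged
```

For example, merging a source user who schedules the Sales report into a destination user who schedules only the Costs report leaves the destination with two report schedules, as the third rule describes.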
1 From the Windows Start menu, point to Programs, then MicroStrategy, then Administrator, then Object Manager, and then select User Merge Wizard. The User Merge Wizard opens.
2 Specify the project source containing the users or groups you wish to merge.
3 Select whether you wish to merge optional user properties such as security roles, security filters, schedules, and database connection maps. For a description of how the User Merge Wizard merges these optional properties, see each individual property's section in How users and groups are merged, page 138.
4 Specify whether you wish to have the wizard select the users or groups to merge automatically (you can verify and correct the merge candidates), or whether you wish to select them manually.
5 On the User Merge Candidates page, select the destination users or groups and click > to move them to the right-hand side.
6 Select the users or groups to be merged and click > to move them to the right-hand side. They display below the selected destination user or group.
7 On the Summary page, review your selections, and click Finish. The users or groups are merged.
authentication method controls how the RDBMS recognizes users and consequently determines how MicroStrategy reports are executed in the RDBMS. Below are a few examples of how the different methods for user authentication can be combined with different methods for database authentication to achieve the security requirements of your MicroStrategy system. These examples illustrate a few possibilities; other combinations are possible.
Simple setup: Standard authentication and default database login for the database instance
This is the simplest configuration to set up. When users access the MicroStrategy system, they log in using their MicroStrategy user names and passwords. When executing reports, the Intelligence Server uses the same database account to execute queries for all users.
To establish this configuration
1 In Desktop, open the Project Source Manager, and on the Advanced tab, select Use login ID and password entered by the user (standard authentication) as the Authentication mode. This is the default setting.
2 From Web, log in as an administrator, select Preferences, select Project Defaults, select Security, and then enable Standard (user name & password) as the login mode.
3 In Desktop, create a database instance and assign a default database login for it. This is the RDBMS account that is used to execute reports for all users.
When using linked warehouse logins, each report is executed on the RDBMS under the RDBMS account of the user who submitted the report from the MicroStrategy system. Since this approach requires all users to have an account on the RDBMS, you might also choose to have users log in to the MicroStrategy system with their RDBMS account credentials. With Database authentication, users log in to the MicroStrategy system using their RDBMS login IDs and passwords.
To establish this configuration
1 In Desktop, open the Project Source Manager, and on the Advanced tab, select Use login ID and password entered by the user for the Warehouse (database authentication) as the Authentication mode.
2 From Web, log in as an administrator, select Preferences, select Project Defaults, select Security, and then enable Database Authentication as the login mode.
3 In the User Editor, select the Authentication tab, and link users to their respective database user IDs using the Warehouse Login and Password boxes for each user. For more information, see the System Administration online help.
4 Turn on the setting for database execution using linked warehouse logins on each project for which you wish to use linked warehouse logins. To do this, access the Project Configuration Editor, expand the Database instances category, click Execution, and select the Use linked warehouse login for execution check box.
query receive different results, reflecting their different levels of access. For the security views to work, each report is executed under the RDBMS account of the user who submitted the report from the MicroStrategy system. Even though this approach requires users to have accounts on the RDBMS, you may choose to use Windows authentication so that users do not have to remember their RDBMS login ID and password when logging in to the MicroStrategy system. With Windows authentication, users are automatically logged in to the MicroStrategy system using their Windows ID and password.
To establish this configuration
1 In Desktop, open the Project Source Manager, and on the Advanced tab, select Use network login ID (Windows authentication) as the Authentication mode.
2 From Web, log in as an administrator, select Preferences, select Project Defaults, select Security, and then enable Windows Authentication as the login mode.
3 In the User Editor, select the Authentication tab, and link users to their respective database user IDs using the Warehouse Login and Password boxes for each user. For more information, see the System Administration online help.
4 Turn on the setting for database execution using linked warehouse logins on each project for which you wish to use linked warehouse logins. To do this, access the Project Configuration Editor, expand the Database instances category, click Execution, and select the Use linked warehouse login for execution check box.
Connection maps: Standard authentication, connection maps, and partitioned fact tables
You may want to use this configuration if you implement access control policies in the RDBMS so that you can have multiple user accounts in the RDBMS, but not necessarily one for every user. For example, perhaps you are partitioning fact tables by rows as described earlier in this chapter. You have a user ID for the First National Bank that only has access to the table containing records for that bank and another user ID for the Eastern Credit Bank that only has access to its corresponding table. Depending on the user ID used to log in to the RDBMS, a different table is used in SQL queries. Although there are only a small number of user IDs in the RDBMS, there are many more users who access the MicroStrategy application. When users access the MicroStrategy system, they log in using their MicroStrategy user names and passwords. Using connection maps, the Intelligence Server uses different database accounts to execute queries, depending on the user who submitted the report.
To establish this configuration
1 In Desktop, open the Project Source Manager, and on the Advanced tab, select Use login ID and password entered by the user (standard authentication) as the Authentication mode. This is the default setting.
2 From Web, log in as an administrator, select Preferences, select Project Defaults, select Security, and then enable Standard (user name & password) as the login mode.
3 Create a database login for each of the RDBMS accounts.
4 Create a user group in the MicroStrategy system corresponding to each of the RDBMS accounts, and then assign users to these groups as necessary.
5 Define a connection mapping that maps each user group to the appropriate database login.
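The lookup Intelligence Server effectively performs with a connection map can be sketched as follows. This is an illustrative Python model with assumed names, not MicroStrategy code.

```python
def database_login_for(user_groups, connection_map, default_login):
    # Pick the RDBMS account used to run a user's queries: the login
    # mapped to one of the user's groups if a connection mapping exists,
    # otherwise the database instance's default login.
    for group in user_groups:
        if group in connection_map:
            return connection_map[group]
    return default_login
```

In the partitioned-fact-table example above, members of the First National Bank group resolve to that bank's RDBMS account, and so see only that bank's table; users in no mapped group fall back to the default database login.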
1 In Desktop, right-click the project you want to deny access to and select Project Configuration. The Project Configuration Editor opens.
2 Expand Project Access. The Project Access - General dialog box opens.
3 From the Select a security role drop-down list, select the security role that contains the user or group to whom you want to deny project access. For example, select the Normal Users security role.
4 On the right-hand side of the Project Access - General dialog box, select the user or group to whom you want to deny project access. Then click the left arrow to remove that user or group from the security role. For example, remove the Everyone group.
5 Using the right arrow, add any users to the security role for whom you want to grant project access. To see the users contained in each group, highlight the group and select the Show users check box.
6 Make sure the user or group whose access you want to deny does not appear in the Selected users and groups pane on the right-hand side of the dialog box. Then click OK.
7 In Desktop, under the project source that contains the project you are restricting access to, expand Administration, then expand User Manager.
8 Click the group containing the user to whom you want to deny project access. Then double-click the user in the right-hand side of Desktop. The User Editor opens.
9 On the Project Access tab, under the project you want to restrict access to, review the Security Role Selection drop-down list. Make sure that no security role is associated with this project for this user.
10 Click OK.

When the user attempts to log in to the project, he or she receives the message No projects were returned by this project source.
3
Introduction
This chapter covers how to administer the licenses involved in a MicroStrategy system. Administration topics include:

Verifying, auditing, and updating licenses, page 149
Updating CPU affinity, page 157
More users are using the system than you have licenses for.
More CPUs are being used with Intelligence Server than you have licenses for.
Your licenses do not fit new user requirements.
If your MicroStrategy business intelligence deployment licenses are not managed properly, the system displays a message when you start Command Manager, the Enterprise Manager Console, License Manager, MicroStrategy Web, or open the Administration icon in MicroStrategy Desktop. Your license management is verified according to the license type you have purchased. Refer to your MicroStrategy contract and any accompanying contract documentation for descriptions of the different MicroStrategy license types. MicroStrategy License Manager can help you locate more details about any license management issues. If you obtain additional licenses from MicroStrategy, you can use License Manager to update them. For details on performing a license key update, see Updating a license, page 156.
To fix this problem, you can either change the user privileges to match the number of licenses you have, or you can obtain additional licenses from MicroStrategy. License Manager can determine which users are causing the metadata to exceed your licenses and which privileges for those users are causing each user to be classified as a particular license type (see Using License Manager, page 152). For more information about the privileges associated with each license type, see List of all privileges, page 847 in Appendix B, Permissions and Privileges. To verify your licenses, Intelligence Server scans the metadata repository for the number of users fitting these license types: Web Professional, Web Analyst, Web Reporter, Architect, Desktop Designer, Desktop Analyst, OLAP Services, Report Services, MicroStrategy Office, and MicroStrategy Administrator. If the number of licenses has been exceeded, you see a warning message when you start Command Manager, the Enterprise Manager Console, License Manager, MicroStrategy Web, or open the Administration icon in MicroStrategy Desktop. For steps to verify your licenses, see Auditing your system for the proper licenses, page 154. You can configure the time of day that Intelligence Server verifies your Named User licenses.
To configure the time to verify Named User licenses
1 From Desktop or Control Center, right-click a project source and select Configure MicroStrategy Intelligence Server. The MicroStrategy Intelligence Server Configuration Editor opens.
2 Expand the Server category, and select Advanced.
3 Specify the time in the Time to run license check (24 hr format) box.
4 Click OK to accept any changes and close the Intelligence Server Configuration Editor.
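The license verification described above counts Named User licenses per product. The sketch below is an illustrative Python model of that counting rule, under two stated assumptions drawn from the audit steps later in this chapter: disabled users are not counted, and the default Administrator account is excluded.

```python
def count_licenses(users, product_privileges):
    # A user consumes one Named User license for a product when he or she
    # holds at least one of that product's privileges.  Disabled users
    # are not counted, and neither is the default Administrator account.
    counts = {product: 0 for product in product_privileges}
    for user in users:
        if user.get("disabled") or user["name"] == "Administrator":
            continue
        held = set(user["privileges"])
        for product, privs in product_privileges.items():
            if held & privs:
                counts[product] += 1
    return counts
```

The privilege-to-product mapping passed in would mirror the license types listed above (Web Professional, Web Analyst, Web Reporter, Architect, and so on); the exact privilege sets are in List of all privileges, page 847.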
Display the enabled or disabled licenses used by the particular user group for the selected products. From this information, you can determine whether you have the number of licenses you need. You can even print a report or create and view a Web page with this information.
Update licenses by providing the new license key, without re-installing the products. For example, you can:

Upgrade from an evaluation edition to a standard edition.
Update the number of Intelligence Server processors allowed.
Update the processor speed allowed.
Activate or deactivate your MicroStrategy installation. For more information on activating your MicroStrategy installation, see the MicroStrategy Installation and Configuration Guide.
Perform license management by changing the number of CPUs being used for a given product, including MicroStrategy Web, Narrowcast Server, and Intelligence Server, if your licenses are based on CPUs.
Trigger a license verification check after you have made any license management changes, so the system can immediately return to normal behavior.
View your machine's configuration, including hardware and operating system information.
View your MicroStrategy installation history, including all license keys that have been applied.
View the version, edition, and expiration date of the MicroStrategy products installed on the machine. If the edition is other than Evaluation, the expiration date has a value of Never.
License Manager in command line mode allows you to:
Audit your MicroStrategy products.
Request an Activation Code and activate your MicroStrategy installation.
Update your license key.
For detailed steps to perform all of these procedures, press F1 in License Manager to access the License Manager online help.
1 Go to each machine.
2 Open MicroStrategy License Manager:
Windows: From the Start menu, point to Programs, then MicroStrategy, and then select License Manager. License Manager opens.
UNIX/Linux: License Manager can be run in GUI mode or command line mode.
GUI: In a UNIX or Linux console window, browse to <HOME_PATH>, where <HOME_PATH> is the directory you specified as the home directory during installation. Browse to the folder bin, type ./mstrlicmgr, and then press ENTER. License Manager opens in GUI mode.
Command line: In a UNIX or Linux console window, browse to <HOME_PATH>, where <HOME_PATH> is the directory you specified as the home directory during installation. Browse to the folder bin, type ./mstrlicmgr -console, and then press ENTER. License Manager opens in command line mode.
The steps to audit system licenses in command line mode of License Manager vary from the steps below. Refer to the License Manager command line prompts to guide you through the steps to audit system licenses.
3 On the Audit tab, connect to a project source.
4 Select the Everyone group and click Audit. A folder tree of the assigned licenses is listed in the Number of licenses pane.
5 Count the number of licenses per product for enabled users (disabled users should not be counted). Click Print to print the summary information. For detailed information, click Report to create and view XML, HTML, and CSV reports. You can also have the report display all privileges for each user based on the license type. To do this, select the Show User Privileges in Report check box.
6 Total this number for all machines.
The License Manager counts the number of licenses based on the number of users with at least one privilege for a given product. The Administrator user (or any new name given to this user) created by default with the repository is not considered in the count. Users with no product-based privileges are listed under Users without license association.
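The counting rule described above can be sketched as follows. This is an illustrative Python sketch only; the data structures and function names are hypothetical, not a MicroStrategy API. A user counts toward a product's license if he or she has at least one privilege for that product; disabled users and the default Administrator user are excluded.

```python
# Hypothetical sketch of the license-counting rule: one count per
# product per enabled user with at least one privilege for that
# product. The Administrator user and disabled users are skipped.
def count_licenses(users, admin_name="Administrator"):
    counts = {}
    unassociated = []          # "Users without license association"
    for user in users:
        if user["name"] == admin_name or not user["enabled"]:
            continue
        products = {priv["product"] for priv in user["privileges"]}
        if not products:
            unassociated.append(user["name"])
            continue
        for product in products:
            counts[product] = counts.get(product, 0) + 1
    return counts, unassociated

users = [
    {"name": "Administrator", "enabled": True,
     "privileges": [{"product": "Desktop Designer"}]},
    {"name": "alice", "enabled": True,
     "privileges": [{"product": "Web Analyst"}, {"product": "Web Analyst"}]},
    {"name": "bob", "enabled": False,
     "privileges": [{"product": "Web Analyst"}]},
    {"name": "carol", "enabled": True, "privileges": []},
]
counts, unassociated = count_licenses(users)
# counts -> {"Web Analyst": 1}; carol is listed without license association
```

Note that alice's two Web Analyst privileges still count as a single license, and bob (disabled) is not counted at all.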
Updating a license
If you need to update a license and you receive a new license key from MicroStrategy, use the License Manager to perform the upgrade. If you have licenses based on the number of CPUs being used, you can also use the update process to change the number of CPUs being used by a given product. See the MicroStrategy Upgrade Guide for complete details on performing an upgrade in your environment. Update your license key on the machine where MicroStrategy products were installed. License Manager updates the license information for those products that are installed on the local machine. The following procedure presents high-level steps to update a license key.
To update a MicroStrategy license
1 Open MicroStrategy License Manager:
Windows: From the Start menu, point to Programs, then MicroStrategy, and then select License Manager. License Manager opens.
UNIX/Linux: License Manager can be run in GUI mode or command line mode.
GUI: In a UNIX or Linux console window, browse to <HOME_PATH>, where <HOME_PATH> is the directory you specified as the home directory during installation. Browse to the folder bin, type ./mstrlicmgr, and then press ENTER. License Manager opens in GUI mode.
Command line: In a UNIX or Linux console window, browse to <HOME_PATH>, where <HOME_PATH> is the directory you specified as the home directory during installation. Browse to the folder bin, type ./mstrlicmgr -console, and then press ENTER. License Manager opens in command line mode.
The steps to update system licenses in command line mode of License Manager vary from the steps below. Refer to the License Manager command line prompts to guide you through the steps to update system licenses.
2 On the License Administration tab, select the Update local license key radio button and click Next.
3 Type or paste the new key in the New License Key field and click Next. If you have one or more products that are licensed based on CPUs, the Upgrade window appears, showing the maximum number of CPUs each product is licensed to use on that machine. You can change these numbers to fit your license agreement. For example, if you purchase a license that allows more CPUs to be used, you can increase the number of CPUs being used by a product.
4 License Manager can automatically request an Activation Code for your license after you update. The results of the upgrade appear in the Upgrade Results box.
5 If you have updated your system to make changes to your license management, restart Intelligence Server after the update. This allows the system to recognize the license key update so that system behavior can return to normal.
For detailed steps to perform the procedure above, press F1 in License Manager to access the License Manager online help.
The ability to deploy Intelligence Server or MicroStrategy Web on specific, selected CPUs (a subset of the total number of physical CPUs) on a given machine is called CPU affinity (or processor affinity). As part of the installation process, you must provide the number of processors to be used by Intelligence Server or MicroStrategy Web on that machine.
1 From the Start menu, point to MicroStrategy, then Tools, and then select Service Manager. Service Manager opens.
2 From the Service drop-down menu, select MicroStrategy Intelligence Server.
3 Click the Options button. The Service Options dialog box opens.
4 Click the Intelligence Server Options tab.
5 In the Processor Usage section, select which processors Intelligence Server should use.
6 Click OK. The Service Options dialog box closes.
7 Close Service Manager.
Processor Affinity. The only times Intelligence Server can use Processor 0 are when it is set to all CPUs, or when it is set to one CPU. For additional detail about processor sets in HP-UX, see Processor sets, page 161.
1 Navigate to the bin directory in the installation location.
2 Type the following command:
mstrctl -s IntelligenceServerName rs
Whenever you change the CPU affinity, you must restart the machine.
Resource sets
On an AIX system, CPU affinity is implemented using resource sets. A resource set is a logical structure that contains a list of specific CPUs that will be used. Processors are bound to a resource set. Resource sets do not define exclusive use of a resource; the same CPU can be part of several different resource sets.
Processor sets
On a Solaris or HP-UX system, multi-CPU affinity is implemented using processor set binding. A processor set is a collection of processors. A process assigned to a processor set can only use the CPUs specified for that processor set. Additionally, a processor set takes exclusive ownership of the CPUs assigned to it. Only the processes assigned to the processor set can use the processor set's CPUs. Other processes are not allowed to use any of these CPUs.
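The contrast between the two mechanisms can be sketched with a toy model. This is not an operating system API; it only illustrates that AIX resource sets may share CPUs while Solaris/HP-UX processor sets take exclusive ownership of theirs.

```python
# Toy model (illustrative only, not an OS API):
class ResourceSet:                     # AIX: non-exclusive membership
    def __init__(self, cpus):
        self.cpus = set(cpus)

class ProcessorSetRegistry:            # Solaris/HP-UX: exclusive ownership
    def __init__(self):
        self.owned = set()             # CPUs already claimed by some set
    def create(self, cpus):
        cpus = set(cpus)
        if cpus & self.owned:
            raise ValueError("CPU already owned by another processor set")
        self.owned |= cpus
        return cpus

# The same CPU (here, CPU 1) can belong to two resource sets...
a = ResourceSet([0, 1])
b = ResourceSet([1, 2])

# ...but not to two processor sets.
reg = ProcessorSetRegistry()
reg.create([0, 1])
try:
    reg.create([1, 2])                 # CPU 1 is already owned
except ValueError:
    exclusive = True
```

This exclusivity is why, as described below, an orphaned processor set leaves its CPUs unusable until the set is deleted or the system is rebooted.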
A processor set exists beyond the lifetime of the process that created it. Therefore, when a process shuts down, it must delete any processor set it created. For example, if a process creates a processor set with three CPUs and then terminates unexpectedly without deleting that processor set, the three CPUs cannot be used by any other process until the system is rebooted or the processor set is manually deleted. Whenever the CPU license is limited to a single CPU, Intelligence Server dynamically binds the process to the selected CPU without creating a processor set.
Intelligence Server may automatically modify the existing processor set to keep your licenses mapped properly, if necessary. For example, consider a scenario where an existing processor set is comprised of Processor 0, Processor 1, and Processor 2. A CPU-based license for MicroStrategy Intelligence Server allows two physical CPUs to be used. Intelligence Server is installed and configured to use this existing processor set at startup. In this scenario, Intelligence Server modifies the existing processor set to use only two physical CPUs so that it matches its license. In the scenario described above, Intelligence Server does not create a new processor set, and when it shuts down it does not delete this processor set.
IIS versions
CPU affinity can be configured on machines running either IIS 5.0 or IIS 6.0. The overall behavior depends on the IIS version and how IIS is configured. The following cases are considered:
IIS 5.0: In IIS 5.0, all ASP.NET applications run in the same process, which is an instance of aspnet_wp.exe. This means that when MicroStrategy Web CPU affinity is enabled, it is applied to all ASP.NET applications running on the Web server machine. A warning is displayed before installation, or before the CPU affinity tool (described below) attempts to set the CPU affinity, on a machine with IIS 5.0.
IIS 6.0: CPU affinity is applied differently depending on whether IIS is running in worker process isolation mode or IIS 5.0 compatibility mode:
IIS 6.0 running in worker process isolation mode: In this mode, the CPU affinity setting is applied at the application pool level. When MicroStrategy Web CPU affinity is enabled, it is applied to all ASP.NET applications running in the same application pool. By default, MicroStrategy Web runs in its own application pool. The CPU affinity setting is shared by all instances of MicroStrategy Web on a given machine. Worker process isolation mode is the default mode of operation on IIS 6.0 when the machine has not been upgraded from an older version of Windows.
IIS 6.0 running in IIS 5.0 compatibility mode: In this mode, IIS 6.0 behaves like IIS 5.0. All ASP.NET applications run in the same process. As a result, the MicroStrategy Web CPU affinity setting affects other ASP.NET applications that are running on the machine. IIS 5.0 compatibility mode is the default mode of operation when the machine has been upgraded from an older version of Windows.
IIS provides a Web Garden feature, in which multiple worker processes serve requests using all available CPUs. The administrator specifies the total number of CPUs that are used. The Web Garden settings can interact with and affect MicroStrategy CPU affinity, so the Web Garden setting should not be used with MicroStrategy Web. At runtime, the MicroStrategy Web CPU affinity setting is applied after IIS sets the CPU affinity for the Web Garden feature, and using these settings together can produce unintended results. In both IIS 5.0 and 6.0, the Web Garden feature is disabled by default.
CPU affinity interaction depends on the IIS version you use and how it is configured, as described below:
In IIS 5.0, a single setting affects all ASP.NET applications. You enable Web Garden mode and specify a CPU mask that determines which CPUs are used. IIS creates one aspnet_wp.exe instance for each CPU specified in the mask, and each of these instances runs the ASP.NET applications.
In IIS 6.0 running in IIS 5.0 compatibility mode, the behavior is the same as that described above for IIS 5.0. The Web Garden feature is enabled or disabled using the webGarden and cpuMask attributes under the processModel node in machine.config. For more information, refer to your IIS documentation.
In IIS 6.0 running in worker process isolation mode, the Web Garden setting is applied at the application pool level. You specify the number of CPUs to be used, and IIS creates that number of w3wp.exe instances; each instance runs all of the ASP.NET applications associated with the application pool. The Web Garden feature is configured through the application pool settings. For more information, refer to your IIS documentation.
IIS 6.0 also provides metabase properties (SMPAffinitized and SMPProcessorAffinityMask) that determine the CPU affinity for a given application pool. Do not use these settings in conjunction with the MicroStrategy Web CPU affinity setting.
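A CPU mask such as the cpuMask value mentioned above is a bitmask in which bit N selects processor N. The sketch below only illustrates that arithmetic; for the exact format of the attribute in machine.config, refer to your IIS documentation.

```python
# Illustrative CPU-mask arithmetic: bit N of the mask selects CPU N.
def cpu_mask(cpus):
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

print(hex(cpu_mask([0, 2])))        # 0x5: processors 0 and 2
print(hex(cpu_mask([0, 1, 2, 3])))  # 0xf: the first four processors
```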
The MAWebAff.exe tool lists each physical CPU on a machine. You can add or remove CPUs or disable CPU affinity using the associated check boxes. Clearing all check boxes prevents the MicroStrategy Web CPU affinity setting from overriding any IIS-related CPU affinity settings.
To update CPU affinity
1 Double-click the MAWebAff.exe tool to open the CPU affinity tool. 2 Select or clear the check boxes for each processor as desired.
3 Click Apply to apply the settings without closing the tool, or click OK to apply settings and close the tool. Clicking Exit closes the tool without saving any settings. 4 Restart IIS to apply your CPU affinity changes.
Introduction
A cache is a result set that is stored on a system to improve response time in future requests. With caching, users can retrieve results from Intelligence Server rather than re-executing queries against the database. Intelligence Server supports the following types of caches:
Report caches: Report results that have already been calculated and processed, and that are stored on the Intelligence Server machine so they can be retrieved more quickly than re-executing the request against the data warehouse. For more information, see Report caches, page 171. The History List is a way of managing report caches on a per-user basis; for more information, see Managing report caches: History List, page 221.
Element caches: Most-recently used lookup table elements that are stored in memory on the Intelligence Server or MicroStrategy Desktop machines so they can be retrieved more quickly. For more information, see Element caches, page 201.
Object caches: Most-recently used metadata objects that are stored in memory on the Intelligence Server and MicroStrategy Desktop machines so they can be retrieved more quickly. For more information, see Object caches, page 216.
You specify settings for all three cache types under Caching in the Project Configuration editor.
Caches are created and stored for individual projects; they are not shared across projects. Changes to cache settings do not take effect until you stop and restart MicroStrategy Intelligence Server.
Report caches
A report cache is a result set from an executed report that is stored on MicroStrategy Intelligence Server. Report caches are not retained on MicroStrategy Desktop; therefore, you cannot create or use report caches in a direct (two-tier) environment.
Report caching takes place when any of the following occurs:
In MicroStrategy Desktop or MicroStrategy Web products, a user executes a saved report containing only static objects.
In MicroStrategy Desktop or MicroStrategy Web products, a user executes a saved report containing one or more prompts. Each unique set of prompt selections corresponds to a distinct cache.
In MicroStrategy Web products, a user executes a template and filter combination.
A report is executed based on a schedule set previously in MicroStrategy Desktop or MicroStrategy Web products.
Caching does not apply to a drill report request, because the report is constructed on the fly.
When a user runs a report, a job is submitted to Intelligence Server for processing. If a cache for that request is not found on the server, a query is submitted to the data warehouse for processing, and then the results of the report are cached. The next time someone runs the report, the results are returned immediately, without having to wait for the database to process the query.
The Cache Monitor displays detailed information about caches on a machine; for more information, see Monitoring report caches, page 180. You can easily check whether an individual report hit a cache by viewing the report in SQL View. The image below shows the SQL View of a MicroStrategy Tutorial report, Sales by Region. The fifth line of the SQL View of this report shows Cache Used: Yes.
Client-side analytical processing, such as ad hoc data sorting, pivoting, view filters, and derived metrics, does not cause MicroStrategy Intelligence Server to create a new cache.
Report caches can only be created or used for a project if the Enable report server caching check box is selected in the Project Configuration Editor, under the Caching: Reports (Advanced) category. By default, report caching is enabled at the project level, but it can also be set on a per-report basis. For more information, see Report cache settings at the report level, page 199.
This section discusses the following topics concerning report caching:
Types of report caches, page 173
Location of report caches, page 175
Report cache file format, page 175
Report cache index files, page 176
Cache matching algorithm, page 176
Disabling report caching, page 179
Monitoring report caches, page 180
Managing report caches, page 183
Deleting or purging report caches, page 190
Configuring report cache settings, page 190
Summary table of report cache settings, page 200
Matching caches
Matching caches are report results retained for the purpose of being reused by the same report requests later on. This process is managed by the system administrator and is transparent to general users who simply benefit from faster response times. When report caching is enabled, the Intelligence Server determines for each report request whether it can be served by an already existing Matching cache. If there is no match, it then runs the report on the database and creates a new Matching cache that can be reused if the same report request is submitted again.
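The matching flow described above can be sketched as follows. This is an illustrative Python sketch, not MicroStrategy internals; it shows why each unique set of prompt selections corresponds to a distinct cache, since the prompt answers are part of the cache key.

```python
# Illustrative sketch of the matching-cache flow: a cache is keyed on
# the report plus its prompt answers. A hit returns stored results; a
# miss runs the query and stores a new Matching cache.
matching_caches = {}

def run_report(report_id, prompt_answers, execute_sql):
    key = (report_id, tuple(sorted(prompt_answers.items())))
    if key in matching_caches:
        return matching_caches[key], True      # cache hit
    results = execute_sql(report_id, prompt_answers)
    matching_caches[key] = results             # create a Matching cache
    return results, False                      # cache miss

fake_db = lambda rid, prompts: f"rows for {rid} {prompts}"
_, hit1 = run_report("Sales by Region", {"Region": "Northeast"}, fake_db)
_, hit2 = run_report("Sales by Region", {"Region": "Northeast"}, fake_db)
_, hit3 = run_report("Sales by Region", {"Region": "South"}, fake_db)
# hit1 and hit3 are misses (database executed); hit2 is a hit
```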
History caches
History caches are report results saved for future reference, via the History List, by a specific user. When a report is executed, the user has the option to send the report to the History List. Selecting this option creates a History cache to hold the data of that report, and a message in the user's History List pointing to that History cache. The user can later reuse that report result set by accessing the corresponding message in the History List. Note that multiple History List messages from different users can refer to the same History cache.
The main difference between Matching and History caches is that the former holds the data for a report and is accessed during report execution, while the latter holds the data for a History List message and is only accessed when the content of that message is retrieved. For more information, see Managing report caches: History List, page 221.
Matching-History caches
A Matching-History cache is a Matching cache with at least one History List message referencing it. It is actually one cache with two logical parts: Matching and History. Properties associated with the Matching caches and History caches discussed above correspond to the two parts in the Matching-History caches. This type of cache is displayed as Matching, History under the Type category in the Cache Monitor.
XML caches
An XML cache is a report cache in XML format. It is created when a report is executed from Web and is available for reuse on Web later on. It is possible that the XML cache is created at the same time as its corresponding normal report cache. Although just a different format of the same report cache, the XML cache is maintained as a distinct cache and thus counts
towards the maximum number of caches as an independent unit. It is automatically removed when the associated report or History cache is removed. To disable XML caching, select the Enable Web personalized drill paths option in the Project definition: Drilling category in the Project Configuration editor. Note that this may adversely affect Web performance.
The location of these index files is the same as the location of report cache files: C:\Program Files\MicroStrategy\Intelligence Server\Caches\Server Definition\Machine Name
1 Open the Project Configuration editor for the project.
2 Select the Caching: Reports (Advanced) category.
3 Clear the Enable report server caching check box.
4 Click OK.
5 Restart Intelligence Server for the change to take effect.
Cache Monitor
The Cache Monitor lists all report caches for all projects on the Intelligence Server. It provides this information:
Report Name: the name of the report
Project Name: the name of the project
Status: the status of the cache, for example, Ready, Processing, Invalid, and so on (see Cache statuses, page 247)
Last Update: the time when the cache data was last loaded from the data warehouse
Cache Size: the size of the cache on disk
Expiration: the time when the cache will be automatically invalidated, based on the expiration setting (see Report cache duration (in hours), page 198)
Type: the type of cache, for example, Matching, History, Matching-History, or XML (see Types of report caches, page 173)
Cache ID: the ID number of the cache
To see more details about a cache, right-click the cache name and select Complete Information (also available from the Administration menu). This displays these additional columns:
Hit Count: the number of times the cache has been used
Creation Time: the time when the cache was first created
Last Hit Time: the time when the cache was last used
File Name: the name of the cache file
Waiting List: the number of report requests that are waiting for a cache that is being processed. Once the cache is available, the waiting reports hit it.
By default, the Cache Monitor does not display History and XML caches. To display them, right-click in the Cache Monitor and select Show caches for History List messages and Show XML caches (also available from the Administration menu).
To see a complete list of information for a cache, double-click the cache, or right-click the cache and then select Quick View. The Quick View - Cache page lists information on Cache ID, Report Name, Project Name, Cache File Name, Cache File Size, Status, Hit Count, Waiting List, Last Hit Time, Creation Time, Expiration Time, Last Updated, and Last Load Time. For more information about this monitor, see Managing report and History List caches: Cache Monitor, page 246.
When a report is executed (which creates a job) and the results of that report are retrieved from a cache instead of from the data warehouse, Intelligence Server increments the cache's Hit Count (only for Matching or Matching-History cache types). This can
happen when a user runs a report or when the report is run on a schedule for the user. This does not include the case of a user retrieving a report from the History List (which does not create a job). Even if that report is cached, it does not increase its Hit Count.
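The Hit Count rule above can be summarized in a short sketch. This is illustrative Python, not MicroStrategy internals: only a job execution that hits a Matching or Matching-History cache increments the count; retrieving a History List message (which creates no job) does not.

```python
# Sketch of the Hit Count rule (illustrative only): increment only
# when a job execution hits a Matching or Matching-History cache.
def record_access(cache, via_job):
    if via_job and cache["type"] in ("Matching", "Matching-History"):
        cache["hit_count"] += 1

cache = {"type": "Matching-History", "hit_count": 0}
record_access(cache, via_job=True)    # report run (scheduled or manual): counted
record_access(cache, via_job=False)   # History List retrieval: not counted
# cache["hit_count"] == 1
```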
Command Manager
You can also use two Command Manager scripts to monitor report caches:
LIST [ALL] REPORT CACHES [FOR PROJECT "<project_name>"] lists all report caches on Intelligence Server, or only those for a specified project.
LIST [ALL] PROPERTIES FOR REPORT CACHE "<cache_name>" IN PROJECT "<project_name>" lists the following information for a specified report cache: Report Name, Cache ID, Project Name, Cache File Size (KB), Status, Cache Type, Hit Count, Waiting List, Last Hit Time, Creation Time, Expiration Time, Last Updated, and Last Load Time.
These scripts are located at C:\Program Files\MicroStrategy\Administrator\Command Manager\Outlines\Cache_Outlines. For more information about Command Manager, see Automating processes using Command Manager, page 464, or the Command Manager online help.
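For example, a Command Manager script built from these outlines might look like the following. The project and cache names here are illustrative (Sales by Region is the MicroStrategy Tutorial report used earlier in this chapter); substitute your own.

```
LIST ALL REPORT CACHES FOR PROJECT "MicroStrategy Tutorial";
LIST ALL PROPERTIES FOR REPORT CACHE "Sales by Region" IN PROJECT "MicroStrategy Tutorial";
```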
1 Open the MicroStrategy Diagnostics and Performance Logging Tool. (From the Windows Start menu, point to Programs, then MicroStrategy, then Tools, and then select Diagnostics Configuration.)
2 In the Select Configuration drop-down list, select CastorServer Instance.
3 Clear the Use Machine Default Diagnostics Configuration check box.
4 In the Report Server component, in the Cache Trace dispatcher, click the File Log (currently set to <None>) and select <New>. The Log Destination Editor opens.
5 Enter the following information in the editor:
Select Log Destination: <New>
File Name: cacheTrace
Max. File Size: 5000
File Type: Diagnostics
6 Click Save, and then click Close. The Log Destination Editor closes.
7 In the Report Server component, in the Cache Trace dispatcher, click the File Log (currently set to <None>) and select cacheTrace.
The creation and deletion of report caches is now logged to this file.
You can do this in two ways: invalidating and scheduling. These are discussed below, along with other maintenance operations that you can use when managing report caches:
Invalidating report caches, page 184
Scheduling updates of report caches, page 185
Deleting report caches, page 187
Expiring report caches, page 188
Purging all report caches for a project, page 189
Unloading report caches from memory to disk, page 189
Loading report caches from disk into memory, page 190
on a schedule might not be worth it. For more information on scheduling, see Scheduling jobs and administrative tasks, page 451.
You can delete caches manually, via the Cache Monitor or Command Manager, or on a schedule, via the Administration Tasks Scheduling. How a cache is deleted depends on its type:
Matching cache: Invalidate the cache. The Matching cache is deleted.
History cache: Delete all History List messages referencing the History cache. The History cache is deleted.
Matching-History cache (option 1): Invalidate the cache; the Matching-History cache is converted to a History cache. Then delete all History List messages referencing the converted History cache; the converted History cache is deleted.
Matching-History cache (option 2): Delete all History List messages referencing the cache; the Matching-History cache is converted to a Matching cache. Then invalidate the converted Matching cache; the converted Matching cache is deleted.
Cache expiration occurs automatically according to the Report cache duration (Hours) setting in the Caching: Reports (Advanced) subcategory in the Project Configuration editor. When a cache is updated, the current cache lifetime is used to determine the cache expiration date based on the last update time of the cache. This means that changing the Report cache duration (Hours) setting does not affect the expiration date of the already existing caches. It only affects the new caches that are being or will be processed. Rather than expiring a cache based on a time interval, MicroStrategy strongly recommends that you invalidate a cache if changes in the data from the data warehouse affect the cache. For more information on this alternative, see Invalidating report caches, page 184.
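The expiration rule above can be shown with a short sketch. This is illustrative Python, not MicroStrategy internals: a cache expires at its last update time plus the duration in force when it was updated, so changing the Report cache duration setting moves only the expirations of caches updated afterward.

```python
from datetime import datetime, timedelta

# Sketch of the expiration rule: expiration = last update time plus
# the Report cache duration in force at the time of that update.
def expiration(last_update, duration_hours):
    return last_update + timedelta(hours=duration_hours)

# A cache updated while the duration was 24 hours keeps that expiration:
old = expiration(datetime(2008, 1, 1, 9, 0), 24)
# The duration is later changed to 48 hours; only caches updated after
# the change use the new value:
new = expiration(datetime(2008, 1, 2, 9, 0), 48)
# old -> 2008-01-02 09:00; new -> 2008-01-04 09:00
```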
Each is discussed in detail below. Changes to any of the caching settings are effective only after the Intelligence Server restarts.
To locate these settings, open the MicroStrategy Intelligence Server Configuration editor, and under Server Definition, select Advanced. You can also use the Command Manager script, Alter_Server_Config_Outline.otl, located at C:\Program Files\MicroStrategy\Administrator\Command Manager\Outlines\Cache_Outlines.
You can specify the cache backup frequency in the Backup frequency (minutes) box under the Server definition: Advanced subcategory in the MicroStrategy Intelligence Server Configuration editor.
If you specify a backup frequency of 0 (zero), report caches are saved to disk as soon as they are created. If you specify a backup frequency of 10 (minutes), report caches are backed up from memory to disk ten minutes after they are created. For a clustered environment, MicroStrategy recommends that you set this to 0 (zero) to ensure that History List messages are synchronized correctly. Note that backing up caches from memory to disk more frequently than necessary can drain resources.
The default value for this setting is 0, which means that the cleanup takes place only at server shutdown. You can change this value based on your needs, but make sure that it does not negatively affect your system performance. MicroStrategy recommends cleaning the cache lookup at least daily, but not more frequently than every half hour.
You can also use the Command Manager scripts located at C:\Program Files\MicroStrategy\Administrator\Command Manager\Outlines\Cache_Outlines.
For more information about which configuration may be best in clustered environments, see Sharing caches in a cluster, page 338.
You should monitor the system's performance when you change the Maximum RAM usage setting. In general, it should not be more than 30% of the machine's total memory.
For more information about when report caches are moved in and out of memory, see Location of report caches, page 175.
Maximum RAM usage limit. If the Load caches on startup setting is disabled, Intelligence Server loads report caches only when they are requested by users. Load caches on startup is enabled by default. Note that for large projects, loading caches on startup can take a long time, so you have the option to load caches on demand only. However, if caches are not loaded in advance, there is a small additional delay in response time when they are first hit. Therefore, you need to decide which behavior is best for your set of user and system requirements.
Use default project-level behavior. This default setting means that those settings configured at the project level in the Project Configuration editor apply to this specific report as well. These settings include Enable report server caching, Enable prompted report caching, Enable non-prompted report caching, and so on.
For information on each setting, see:
Cache file directory, page 194
Maximum RAM usage, page 195
Maximum number of caches, page 196
RAM swap multiplier, page 196
Load caches on startup, page 196
Enable report server caching, page 197
Create caches per user, page 198
Create caches per database login, page 198
Element caches
When a user runs a prompted report containing an attribute element prompt or a hierarchy prompt, an element request is created. (Additional ways to create an element request are listed below.) An element request is actually a SQL statement that is submitted to the data warehouse. Once the element request is completed, the prompt can be resolved and sent back to the user. Element caching, enabled by default, allows these elements to be stored in memory so they can be retrieved rapidly for subsequent element requests, without triggering new SQL statements against the data warehouse.
For example, if ten users run a report with a prompt to select a region from a list, when the first user runs the report, a SQL statement executes and retrieves the region elements from the data warehouse to store in an element cache. The next nine users see the list of elements return much faster than the first user, because the results are retrieved from the element cache in memory. If element caching is not enabled, when the next nine users run the report, nine additional SQL statements are submitted to the data warehouse, which puts unnecessary load on the data warehouse.
Element caches are the most-recently used lookup table elements that are stored in memory on the Intelligence Server or MicroStrategy Desktop machines so they can be retrieved more quickly. They are created when users:
Browse attribute elements in MicroStrategy Desktop using the Data Explorer, either in the Folder List or the Report Editor
Browse attribute elements in the Filter Editor
Execute a report containing a prompt exposing an attribute list (which includes hierarchies and element list types). The element list is displayed when the report executes and creates an element cache.
This section discusses the following topics concerning element caching:
Element caching terminology, page 202
Location of element caches, page 203
Cache matching algorithm, page 204
Enabling or disabling element caching, page 204
Limiting the number of elements displayed and cached at a time, page 205
Caching algorithm, page 209
Limiting the amount of memory available for element caches, page 210
Limiting which attribute elements a user can see, page 211
Limiting element caches by database connection, page 212
Limiting element caches by database login, page 213
Deleting or purging element caches, page 214
Summary table of element cache settings, page 215
example, month in the lookup date table) a SELECT DISTINCT is used for the element request. Element requests may also contain a WHERE clause if they result from a search, a filtered hierarchy prompt, a drill request on a hierarchy prompt, or a security filter.

• Element Cache Pool: The amount of memory Intelligence Server allocates for element caching. In the interface, this value is called Maximum RAM usage, set in the Project Configuration Editor: Caching - Elements category. The default value for this setting is 512 KB. Intelligence Server estimates that each object uses 512 bytes; therefore, by default, Intelligence Server caches about 1,024 element objects. If an element request results in more objects needing to be cached than the maximum size of the element cache pool allows, the request is not cached.

• Element Incremental Fetch Size: The maximum number of elements displayed in the interface per element request. On Desktop, the default for the Element Incremental Fetch setting is 1,000 elements; on Web, the default is 15 elements.
In the Project Configuration editor: Caching: Elements category, set the Maximum RAM usage (KBytes) to 0 (zero).
In the Project Source Manager, select the Caching tab and set the Maximum RAM usage (KBytes) to 0 (zero). You might want to perform this operation if you always want to use the caches on Intelligence Server. This is because when element caches are purged, only the ones on Intelligence Server are eliminated automatically, while the ones in Desktop remain intact. Caches are generally purged because frequent changes in the data warehouse make the caches invalid.
To disable element caching for an attribute
1 Edit the attribute and, on the Display tab, clear the Enable element caching check box.
When incremental element fetching is used, an additional pass of SQL is added to each element request. This pass of SQL determines the total number of elements that exist for a given request. This number helps users decide how to browse a given attribute's element list. This additional pass of SQL generates a SELECT COUNT DISTINCT on the lookup table of the attribute, followed by a second SELECT statement (using an ORDER BY) on the same table. From the result of the first query, Intelligence Server determines whether it should cache all of the elements or only an incremental set. The incremental retrieval limit is four times the incremental fetch size. For example, if your MicroStrategy Web product is configured to retrieve 50 elements at a time, 200 elements along with the distinct count value are placed in the element cache. The user must click the next option four times to introduce another SELECT pass, which retrieves another 200 records in this example. Because the SELECT COUNT DISTINCT value was cached, that query is not issued again when the second SELECT statement is issued. To optimize the incremental element caching feature (if you have large element fetch limits or small element cache pool sizes), Intelligence Server uses no more than 10% of the element cache pool on any single cache request. For example, if 200 elements would use 20% of the cache pool, Intelligence Server only caches 100 elements, which is 10% of the available memory for element caches. The number of elements retrieved per element cache can be set for Desktop users at the project level, for MicroStrategy Web product users, for a hierarchy, or for an attribute. Each is discussed below.
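The arithmetic above can be sketched as a small Python helper. The function name and the fixed 512-bytes-per-object estimate are assumptions drawn from the defaults described in this section; this is not a MicroStrategy API.

```python
def elements_cached(fetch_size, total_elements, pool_kb, bytes_per_object=512):
    """Estimate how many elements one request caches, per the rules above:
    up to 4x the incremental fetch size, capped at 10% of the element
    cache pool (assuming ~512 bytes per cached object)."""
    pool_objects = pool_kb * 1024 // bytes_per_object      # whole pool capacity
    cap = pool_objects // 10                               # 10% limit per request
    return min(4 * fetch_size, total_elements, cap)

# Web default: 15 elements per fetch -> up to 60 cached, default 512 KB pool.
print(elements_cached(fetch_size=15, total_elements=5000, pool_kb=512))   # 60
# A large fetch limit against a small pool is capped at 10% of the pool.
print(elements_cached(fetch_size=200, total_elements=5000, pool_kb=100))  # 20
```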
To limit the number of elements displayed for a project (affects only Desktop users)
1 Open the Project Configuration editor and select the Project definition: Advanced category. 2 Type the limit in the Maximum number of elements to display box.
To limit the number of elements displayed for MicroStrategy Web product users
1 From the MicroStrategy Web or Web Universal interface, select Preferences. 2 Select Project Defaults in the Preferences Level category. 3 Select General in the Preferences category. 4 Type the limit for the Maximum number of attribute elements per block setting in the Incremental Fetch subcategory.
To limit the number of elements displayed on a hierarchy
1 Open the Hierarchy editor, right-click the attribute and select Element Display from the shortcut menu, and then select Limit. The Limit dialog box opens. 2 Type a number in the Limit box.
To limit the number of elements displayed for an attribute
1 Open the Attribute Editor. 2 Select the Display tab. 3 In the Element Display category, select the Limit option and type a number in the box. The element display limit set for hierarchies and attributes may further limit the number of elements set in the project properties or Web preferences. For example, if you set 1,000 for the project, 500 for the attribute, and 100 for the hierarchy, Intelligence Server will only retrieve 100 elements.
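The precedence rule in the example above can be sketched as follows. The helper is hypothetical, but the behavior matches the text: the most restrictive of the project, attribute, and hierarchy limits wins.

```python
def effective_element_limit(project_limit, attribute_limit=None, hierarchy_limit=None):
    """Return the effective element display limit: the smallest limit
    that is actually set. (Illustrative helper, not a MicroStrategy API.)"""
    limits = [l for l in (project_limit, attribute_limit, hierarchy_limit)
              if l is not None]
    return min(limits)

# 1,000 for the project, 500 for the attribute, 100 for the hierarchy -> 100.
print(effective_element_limit(1000, attribute_limit=500, hierarchy_limit=100))  # 100
```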
2 From the Administration menu, select VLDB Properties. The VLDB Properties editor opens.
3 Under Query Optimizations, select Attribute Element Number Count Method and, on the right-hand side, select one of the following options:
• To have the data warehouse calculate the count, select Use Count(Attribute@ID) to calculate total element number (will use count distinct if necessary) - Default
• To have Intelligence Server calculate the count, select Use ODBC cursor to calculate total element number
Caching algorithm
The cache behaves as though it contains a collection of blocks of elements. Each cached element is counted as one object, and each cached block of elements is also counted as an object. As a result, a block of four elements is counted as five objects: one object for each element and a fifth object for the block. However, if the same element occurs in several blocks, it is only counted once, because the element cache shares elements between blocks. The cache uses a "least recently used" algorithm on blocks of elements. That is, when the cache is full, it discards the blocks of elements that have been in the cache for the longest time without any requests for them. Individual elements, which are shared between blocks, are discarded when all of the blocks that contain them have been discarded. Finding the blocks to discard is a relatively expensive operation. Hence, the cache discards one quarter of its contents each time it reaches the maximum number of allowed objects.
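A simplified sketch of this block-based, least-recently-used scheme follows. It is illustrative only: the class and its bookkeeping are assumptions, and the real cache's eviction heuristics are more involved than this minimal version.

```python
from collections import OrderedDict

class ElementBlockCache:
    """Sketch of the scheme above: each block and each distinct element
    counts as one object, elements are shared between blocks, and when
    the object limit is exceeded the least recently used blocks are
    discarded (roughly a quarter at a time)."""

    def __init__(self, max_objects):
        self.max_objects = max_objects
        self.blocks = OrderedDict()   # block key -> tuple of elements, in LRU order

    def _object_count(self):
        distinct = set()
        for elems in self.blocks.values():
            distinct.update(elems)
        # One object per block, plus one per distinct (shared) element.
        return len(self.blocks) + len(distinct)

    def put(self, key, elements):
        self.blocks[key] = tuple(elements)
        self.blocks.move_to_end(key)              # mark as most recently used
        while self._object_count() > self.max_objects and len(self.blocks) > 1:
            # Discard about a quarter of the blocks, least recently used first.
            for _ in range(max(1, len(self.blocks) // 4)):
                self.blocks.popitem(last=False)

    def get(self, key):
        if key in self.blocks:
            self.blocks.move_to_end(key)
            return self.blocks[key]
        return None

cache = ElementBlockCache(max_objects=5)
cache.put("A", ["Jan", "Feb", "Mar", "Apr"])   # 4 elements + 1 block = 5 objects
cache.put("B", ["May"])                        # over the limit -> block A is evicted
print(cache.get("A"))                          # None
```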
1 Open the Project Configuration editor and select Caching: Elements. 2 Specify the amount of RAM (in kilobytes) in the Server: Maximum RAM usage (KBytes) box. Note the following:
• The default value is 512 KB. Intelligence Server estimates that each element object uses 512 bytes, and therefore, by default, it caches 1,024 objects in the element cache.
• If you set the value to 0, element caching is disabled.
• If you set it to -1, Intelligence Server uses the default value of 512 KB.
3 Specify the amount of RAM (in kilobytes) in the Client: Maximum RAM usage (KBytes) box.

The new settings take effect only after Intelligence Server is restarted.
To set the RAM available for element caches on MicroStrategy Desktop
1 In the Project Source Manager, click the Caching tab and within the Element Cache group of controls, select the Use custom value option. If you select the Use project default option, the amount of RAM will be the same as specified in the Client section in the Project Configuration editor described above. 2 Specify the RAM (in kilobytes) in the Maximum RAM usage (KBytes) field.
If you do not limit attribute elements with security filters for a project, you can enable it for certain attributes. For example, if you have Item information in the data warehouse available to external suppliers, you could limit the attributes in the Product hierarchy with a security filter. This is done by editing each attribute. This way, suppliers can see their products, but not other suppliers' products. Element caches not related to the Product hierarchy, such as Time and Geography, are still shared among users. For more information on security filters, see Controlling access to data at the application level, page 105.
To limit which attribute elements users can see per project
1 In the Project Configuration editor, select the Project definition: Advanced category. 2 Select the Apply security filters to element browsing check box.
To limit which attribute elements users can see per attribute
1 Edit the attribute, and click the Display tab.
2 Select the Apply security filters to element browsing check box.

You must update the schema before changes to this setting take effect (from the Schema menu, select Update Schema).
database connections cannot share element caches. This causes the element cache matching key to contain the user's database connection.
To limit element caches by database connection
1 In the Project Configuration editor, select the Caching: Elements category.
2 Select the Create element caches per database connection check box.

The new setting takes effect only after the project is reloaded or after Intelligence Server is restarted. For more information about connection mapping, see Connection mappings, page 105. Users may connect to the data warehouse using their linked warehouse logins, as described below.
1 In the Project Configuration editor, select the Caching: Elements category.
2 Select the Create element caches per database login check box.

The new setting takes effect only after the project is reloaded or after Intelligence Server is restarted.
1 Right-click the project and select Project Configuration Editor.
2 On the left, expand Caching and select Elements.
3 Click Purge Now.

All element caches are automatically purged whenever the schema is updated. Even after purging a cache, a report that is re-executed may not show the latest results from your data source, but rather still display cached data. This can occur because report results may be cached at the object, report, and element levels. To ensure that a re-executed report displays the most recent data stored in your data source, you should purge all three caches. See the information on purging report caches and object caches in this chapter.
For details on each element cache setting, see:
• Limiting the number of elements displayed and cached at a time, page 205
• Optimizing element requests, page 208
• Limiting the amount of memory available for element caches, page 210
• Limiting which attribute elements a user can see, page 211
• Limiting element caches by database connection, page 212
• Limiting element caches by database login, page 213
• Deleting or purging element caches, page 214
Object caches
When you or any users browse an object definition (attribute, metric, and so on), you create what is called an object cache. An object cache is a recently used object definition stored in memory on MicroStrategy Desktop and MicroStrategy Intelligence Server. You browse an object definition when you open the editor for that object. You can create object caches for applications. For example, when a user opens the Report Editor for a report, the collection of attributes, metrics, and other user objects displayed in the Report Editor composes the report's definition. If no object cache for the report exists in memory on MicroStrategy Desktop or MicroStrategy Intelligence Server, the object request is sent to the metadata for processing. The report object definition retrieved from the metadata and displayed to the user in the Report Editor is deposited into an object cache in memory on MicroStrategy Intelligence Server, and also on the MicroStrategy Desktop of the user who submitted the request.

As with element caching, any time the object definition can be returned from memory on either the Desktop or Intelligence Server machine, it is faster than retrieving it from the metadata database. So when a Desktop user triggers an object request, the cache within the Desktop machine's memory is checked first. If the definition is not there, the Intelligence Server memory is checked. If the cache is not there either, the results are retrieved from the metadata database. Each option is successively slower than the previous one. If a MicroStrategy Web product user triggers an object request, only the Intelligence Server cache is checked before the results are retrieved from the metadata database.

This section discusses the following topics concerning object caching:
• Cache matching algorithm, page 217
• Enabling or disabling object caching, page 217
• Limiting the amount of memory available for object caches, page 218
• Deleting or purging object caches, page 219
• Summary table of object caching settings, page 220
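The lookup order for a Desktop user can be sketched as a tiered cache. The dictionaries below are hypothetical stand-ins for the Desktop memory, the Intelligence Server memory, and the metadata database; the object IDs and definitions are invented for illustration.

```python
# Hypothetical tiers, fastest first.
desktop_cache = {}
server_cache = {"report_42": "<definition of report 42>"}
metadata_db = {"report_42": "<definition of report 42>",
               "metric_7": "<definition of metric 7>"}

def fetch_definition(object_id):
    """Check each tier in order, backfilling the faster caches on the way out."""
    if object_id in desktop_cache:
        return desktop_cache[object_id], "desktop cache"
    if object_id in server_cache:
        desktop_cache[object_id] = server_cache[object_id]
        return desktop_cache[object_id], "server cache"
    definition = metadata_db[object_id]            # slowest path: metadata database
    server_cache[object_id] = definition
    desktop_cache[object_id] = definition
    return definition, "metadata database"

print(fetch_definition("metric_7")[1])    # metadata database (first request)
print(fetch_definition("metric_7")[1])    # desktop cache (subsequent requests)
```

A Web user's request, per the text, would skip the first tier and check only the server cache before falling back to the metadata database.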
1 Open the Project Configuration editor and select the Caching: Objects category.
2 Specify the RAM (in kilobytes) in the Server: Maximum RAM usage (KBytes) box.
3 Specify the RAM (in kilobytes) in the Client: Maximum RAM usage (KBytes) box.

The new settings take effect only after Intelligence Server is restarted.
To set the RAM available for object caches for a MicroStrategy Desktop machine
1 In the Project Source Manager, click the Caching tab and in the Object Cache group of controls, select the Use custom value option. If you select the Use project default option, the amount of RAM is the same as specified in the Client section in the Project Configuration editor described above. 2 Specify the RAM (in kilobytes) in the Maximum RAM usage (KBytes) box.
1 Right-click the project and select Project Configuration Editor.
2 On the left, expand Caching and select Objects.
3 Click Purge Now.

Object caches are automatically purged whenever your schema is updated. Configuration objects are cached at the server level. You can choose to delete these object caches as well. Even after purging a cache, a report that is re-executed may not show the latest results from your data source, but rather still display cached data. This can occur because report results may be cached at the object, report, and element levels. To ensure that a re-executed report displays the most recent data stored in your data source, you should purge all three caches. See the information on purging element caches and report caches in this chapter.
To delete all configuration object caches for a server
1 Log in to the project source. 2 From the Administration menu in Desktop, point to Server, and then select Purge Server Object Caches. You cannot automatically schedule the purging of server object caches from within MicroStrategy Desktop. However, you can compose a Command Manager script to purge server object caches and schedule that script to execute at certain times. For a description of this process, see tech note TN6600-75X-0267 in the MicroStrategy Knowledge Base. For more information about Command Manager, see Automating processes using Command Manager, page 464.
For details on each object cache setting, see:
• Limiting the amount of memory available for object caches, page 218
• Deleting or purging object caches, page 219
The History List is displayed at the user level, but is maintained at the project source level. The History folder contains messages for all the projects in which the user is working. The number of messages in this folder is controlled by the Maximum number of messages per user setting. For example, if you set this number to 40, and you have 10 messages for Project A and 15 for Project B, you can have no more than 15 for Project C. When the maximum number is reached, the oldest message in the current project is purged automatically to leave room for the new one. If the current project has no messages but the message limit has been reached in other projects in the project source, the user may be unable to run any reports in the current project. In this case the user must log on to one of the other projects and delete messages from the History List in that project.

This section discusses the following topics concerning History Lists:
• History List file location, page 222
• History List backup frequency, page 223
• History Lists and caching, page 223
• History Lists in a clustered environment, page 224
• Accessing History Lists, page 224
• Using History Lists, page 226
• Archiving History List messages, page 229
• Managing History Lists, page 231
• Configuring History Lists, page 233
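The capacity arithmetic in the example above works out as follows (a minimal sketch; the setting name is quoted from the text, and the per-project message counts are the example's numbers):

```python
MAX_MESSAGES_PER_USER = 40   # the "Maximum number of messages per user" setting

# Messages per project for one user; the cap applies across the project source.
messages = {"Project A": 10, "Project B": 15}

remaining = MAX_MESSAGES_PER_USER - sum(messages.values())
print(remaining)   # 15 -- no more than 15 messages can be added for Project C
```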
For example, //My_File_Server/My_Inbox_Directory. Make sure this cache directory is writable by the network account under which Intelligence Server is running. Each MicroStrategy Intelligence Server creates its own subdirectory.
On MicroStrategy Web
On Web, click the name of the desired project to see the History List folder. Click the folder to view all the messages. The following information is available:
• Name: the name of the report
• Status: the status of a report job, for example, executing, processing on another node, ready, and so on. Only the Ready and Error statuses are synchronized across nodes. While a job is reported as Executing on the node that runs it, it is reported as "Processing on another node" on all the other nodes.
• Message Creation Time: the time the message was created
• Details: more information about the report, including Total number of rows, Total number of columns, Server name, Report path, Message ID, Report ID, Status, Message created, Message last updated, Start time, Finish time, Owner, Report description, Template, Report filter, View filter, Template details, Prompt details, and SQL statements
Each time a user submits a report that contains a prompt, the user must answer the prompt. As a result, multiple listings of the same report may occur. The differences among these reports can be found by checking the timestamps and the data contents.
On MicroStrategy Desktop
On Desktop, the History List is located in the History folder under the project name. A number indicates how many unread messages remain in this folder. Click the History folder to view all the messages, with the following information:
• Name: the name of the report
• Finish Time: the time the report execution finished
• Folder name: the name of the folder where the report is saved
• Last update time: the time the report was last updated
• Message text: the text of the message itself
• Start time: the time the report execution started
• Status: the status of a report job, for example, has been executed successfully and is ready, was not executed successfully, is currently executing, or is waiting to execute

You can configure the History Headers to determine which of the above information is displayed for History List messages. To locate this setting, from the Tools menu, select Desktop Preferences, and then History Options. You can see more details of any message by right-clicking it and selecting Details. In addition to the Name and Finish Time information, you will also see the following:
• Start Time: the time the report execution started
• Description: a brief summary of the report
• Owner: the name of the owner of the report
• Folder name: the name of the folder where the report is saved
Operation from Web does not create a second job, because the first one is modified for the Send to History List request.

After report execution:
• From Web: After the report is executed, select Add to History List from the Report menu.
• From Desktop: After the report is executed, select Send to History from the File menu. Two jobs are created for Desktop, but only one for Web. The note above applies here, too.

Automatically: This can be set by performing two operations.

Configuring the settings:
• From Web: Select History List from the Project Preferences, and then select Automatically for Add reports and documents to my History List.
• From Desktop: Select Desktop Preferences from the Tools menu, then select History Options, and then select Automatically send reports to History List during execution.

Using the scheduling function:
• From Web: On the reports page, under the name of the report that you want to send to the History List, select Subscriptions, and then select Add Subscription. Choose a schedule for the report execution. A History List message is generated automatically whenever the report is executed based on the schedule. You can then go to My Subscriptions to make sure the new schedule is listed there.
• From Desktop: Use one of the following two methods:
  – Right-click a report name and select Properties. In the Properties dialog box, select the Scheduling tab, select a schedule for the report, and then select the Create messages in the History List check box.
  – From the Tools menu, select Report Scheduler. In the Report Scheduler dialog box, select a schedule, add a report, and then select the Create inbox messages for new schedules check box.
Web operations
In Web, you can perform the following operations in the History List folder:
• Sort list: Click a column title to sort the message list in ascending or descending order.
• Mark as unread: Select this option to make the message appear as new.
• Remove: Select this option to delete a message from the History List.
• Refresh my History List: Select this option to refresh the History List screen.
• Clear my History List: Select this option to remove all the messages from the History List.
In addition, you can click on the report or document name to retrieve the report. If the results are not available, you will be prompted to re-execute the report or document.
Desktop operations
On Desktop, you can perform the following operations on the History List by selecting options from the right-click menu:
• Request: Select this option to retrieve the report or document. You cannot request the results of a document's child datasets.
• Details: Select this option to obtain more information about the executed report or document.
• Delete: Select this option to remove the message from the History List.
• Mark as Read/Unread: Select this option to toggle whether the message appears as read or as new.
1 In the Preferences Level category, select Project defaults.
2 In the Preferences category, select History List.
3 Clear the check box for The new report will overwrite older versions of itself.
1 From the Administration menu, select Scheduling and then choose Schedule multiple reports. The System-wide Report Scheduler dialog box opens. 2 Select a report or document you want to archive. 3 Select a user group to receive the message for the archived report. All members in the user group receive the History List message. 4 Click OK.
• Modify the list of report objects (use Object Browser - all objects)
• Refresh and re-execute the report against the warehouse
• Re-prompt
• Save templates and filters
• Use design mode
• Use the report filter editor

If any of the editors related to the above operations was open prior to accessing the report page of the archived message, it is closed.
In Desktop, there is no difference in display between regular History List messages and archived ones. However, when you retrieve report results through an archived message, the report opens in the Report Preview window and can only be viewed. No menu operations are available on the report page.
1 From Desktop or Control Center, right-click a project source and select Configure MicroStrategy Intelligence Server. The MicroStrategy Intelligence Server Configuration Editor opens.
2 Select History settings in the Governing category.
3 Type a number in the Message lifetime (days) box.
• Messages that were created more than x days ago

Although the Delete History List messages feature is available through Scheduling, it can also be used for one-time maintenance by using event-triggered schedules.
1 In the Schedule Administration Tasks dialog box, select a project from the Available Projects list.
2 Select Delete History List messages as the action.
3 Select a schedule from the preconfigured options, for example, at close of business (weekday), first of month, or on database load.
4 Type a number in the Lifetime (days) box.
5 Select an option for the message status: Read, Unread, or All.
6 Use the browse button to select a user or group whose History List messages will be deleted.
Administrator-controlled settings
The administrator-controlled History List settings are divided between Web and Desktop.
Web settings
From Web, go to the Preferences page, select Project defaults, and then select History List. On the History List page, set the following options based on your needs:
• Add reports and documents to my History List: Select automatically or manually.
• If manually, how many of the most recently run reports and documents do you want to keep available for manipulation?: Enter a number in the box. This option controls how many reports are kept in memory on the Intelligence Server, in what is called the working set. For more information, see Working set, page 380. For example, if the user runs three reports and this setting has a value of 2, the user can click the browser's Back button to see the previous report he or she ran, even if it was not added to the History List. These reports and documents are available for manipulation even if they are not saved to the History List.
• The results of scheduled reports get added to the History List: This option is for archiving purposes, as discussed earlier (see Creating archived messages, page 229).
• The new report will overwrite older versions of itself: Select or clear this check box. This option is automatically turned on for users without access to the History List.
• Format of scheduled documents: This drop-down menu lets you determine the default format in which scheduled Report Services documents are displayed when executed. Selecting either PDF or Excel results in the selected format being pre-generated, which in turn results in faster retrieval from the History List after the schedule runs.
• Keep document available for manipulation when scheduling in PDF or Excel format: This option lets you determine whether Report Services documents scheduled in PDF or Excel view are available in the History List. When this check box is selected (the default), all subscribed Report Services documents are available in the History List, regardless of their format. This means you can run these documents in HTML, PDF, or Excel view directly from the History List. However, when this check box is cleared, you can only execute a document in the History List in its scheduled format. For example, if you subscribe to an Excel-formatted document, you can only open it in Excel format from the History List. In this case, the Details and PDF icons are missing from the History List because you can only execute the document in its scheduled (Excel) format. If you want to run this document in PDF format, you must re-execute the document. Clearing this check box results in less memory load on the Intelligence Server.
Desktop settings
From Desktop, you need to configure the following settings in the Server Definition category in the Server Configuration Editor:
• Maximum number of messages per user: Controls how many messages can exist in a user's History List. When the limit is reached, the oldest message is removed. For more information, see History List backup frequency, page 223.
• Message lifetime (days): Controls how long messages can exist in a user's History List. The default value is -1, which means that messages stay in the system indefinitely until deleted manually by the user. For more information, see Controlling the lifetime of the History List messages, page 232.

The following setting also needs to be configured, in the Governing category in the Server Configuration Editor:
• History directory: Indicates where the History List messages are stored. For more information, see History List file location, page 222.
User-controlled settings
From Web, go to the Preferences page, select User preferences, and then select Security. On the Security page, select from the following options based on your needs:
• Cancel this session's pending requests?: Select Yes or No.
• Remove the finished jobs from the History List?: Select Yes to remove all finished jobs, No to leave all finished jobs, or Only the read messages to remove only History List messages that have been read. Then select or clear the Prompt me on logout check box.

The MicroStrategy Web administrator can control whether these options are visible to users. To hide these options, in MicroStrategy Web, go to the Preferences page, select Project defaults, and then select Security. In the Logout section of the Security page, clear the Show this option in user preferences check boxes for the options that users should not be able to change.
Introduction
This section explains how you can use the monitors and statistics to see the state of the system at any time, or over a period of time. This includes:
• Monitoring the current state of the system, page 237, to see the current state of various aspects of the system
• Analyzing system usage with Enterprise Manager, page 251, to see trends over time with Enterprise Manager
• Managing user connections to a project: User Connection Monitor, page 243
• Managing database instance connections: Database Connection Monitor, page 244
• Managing scheduled jobs and tasks: Schedule Monitor, page 245
• Managing report and History List caches: Cache Monitor, page 246
• Managing clustered Intelligence Servers: Cluster Monitor, page 248
• Managing system memory and resources: Windows Performance Monitor, page 249
The Job Monitor displays which tasks are currently running on an Intelligence Server. When a job has completed, it no longer appears in the monitor. You can view a job's identification number; the user who submitted it; the job's status; a description of the status; the name of the report, document, or query; and the project executing it. Because the Job Monitor does not refresh itself, you must periodically refresh it to see the latest status of jobs. To do this, press F5.
1 Expand Administration within a project source's folder list.
2 Click Job Monitor. The job information is displayed on the right-hand side.
3 You can select several options once the jobs are displayed:
• To refresh the information, press F5.
• To view a job's details, including its SQL, double-click it. A Quick View dialog box opens.
• To cancel a job, select it and press DELETE, then confirm that you wish to cancel the job.
• To view more details for all jobs displayed, right-click and select Complete Information. The additional columns displayed are Project name, Priority, Creation time, and Network address. Priority shows the execution priority, not the priority set for the user's connection thread in the Job Prioritization tab of the Database Instances Monitor.

When viewing details, you can sort the list of jobs by clicking one of the column headings.

At times, you may see Temp client in the Network Address column. This may happen when Intelligence Server is under a heavy load and a user accesses the Projects or Home page in MicroStrategy Web (the pages that display the list of available projects). Intelligence Server creates a temporary session that submits a job request for the available projects and then sends the list to the Web client for display. This temporary session, which remains open until the request is fulfilled, is displayed as Temp client.
For more detailed information about any of these options, see the online help or related sections in this guide. For more information about the different states a project can be in, see Setting Project Modes, page 323.
To access the Project Monitor
1 Expand Administration in the project sources folder list. 2 Click Project Monitor. The projects and their statuses display on the right-hand side.
3 To perform the actions listed above, right-click a project, then select one of these options: Unload, Idle/Resume, Purge Report Caches, Purge Object Caches, or Purge Element Caches. You can perform an action on multiple projects at the same time. To do this, select several projects (CTRL+click), then right-click and select one of the options.
Loaded projects
When a project is loaded, it appears as an available project in MicroStrategy Desktop and MicroStrategy Web products. Unless a loaded project is idled, user requests are accepted and processed as normal. The project is also available for architectural work, such as warehouse catalog updates and object creation and modification. If you need to work on a project's architecture and, therefore, must disable job submission for that project, you can idle it.
To idle a project
1 Right-click the project and on the shortcut menu, click Idle/Resume. The Idle/Resume dialog box opens.
2 Select the options for the idle mode that you want to set the project to:
• Request Idle (Request Idle): all currently executing and queued jobs finish executing, and any newly submitted jobs are rejected.
• Execution Idle (Execution Idle for All Jobs): all currently executing, queued, and newly submitted jobs are placed in the queue, to be executed when the project resumes.
• Warehouse Execution Idle (Execution Idle for Warehouse jobs): all currently executing, queued, and newly submitted jobs that require SQL to be submitted to the data warehouse are placed in the queue, to be executed when the project resumes. Any jobs that do not require SQL to be executed against the data warehouse are executed.
• Full Idle (Request Idle and Execution Idle for All jobs): all currently executing and queued jobs are cancelled, and any newly submitted jobs are rejected.
• Partial Idle (Request Idle and Execution Idle for Warehouse jobs): all currently executing and queued jobs that submit SQL against the data warehouse are cancelled, and any newly submitted jobs are rejected. Any currently executing and queued jobs that do not require SQL to be executed against the data warehouse are executed.
To resume the project from a previously idled state, clear the Request Idle and Execution Idle check boxes. For more information about these project states, see Setting Project Modes, page 323.
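The five idle modes differ in how they treat a newly submitted job. The following sketch models only the behavior described above; it is illustrative, not Intelligence Server's actual dispatch logic, and the names follow the Idle/Resume dialog box:

```python
from enum import Enum

class IdleMode(Enum):
    ACTIVE = "active"              # project not idled
    REQUEST = "request"            # Request Idle
    EXECUTION = "execution"        # Execution Idle for All Jobs
    WH_EXECUTION = "wh_execution"  # Execution Idle for Warehouse jobs
    FULL = "full"                  # Request Idle + Execution Idle for All jobs
    PARTIAL = "partial"            # Request Idle + Execution Idle for Warehouse jobs

def dispatch_new_job(mode, requires_warehouse_sql):
    """What happens to a newly submitted job under each idle mode.
    Illustrative sketch only."""
    if mode is IdleMode.ACTIVE:
        return "execute"
    # These three modes include Request Idle, so new submissions are rejected.
    if mode in (IdleMode.REQUEST, IdleMode.FULL, IdleMode.PARTIAL):
        return "reject"
    if mode is IdleMode.EXECUTION:
        return "queue"             # held until the project resumes
    # Warehouse Execution Idle: only jobs that need warehouse SQL are held.
    return "queue" if requires_warehouse_sql else "execute"
```

For example, under Warehouse Execution Idle a report that needs warehouse SQL is queued, while an Intelligence Server-only job runs immediately.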
Unloaded projects
Unloaded projects are still registered on MicroStrategy Intelligence Server, but they do not appear as available projects in MicroStrategy Desktop or MicroStrategy Web products, even for administrators. A project unload request is fully processed only when all currently executing jobs for the project are complete.
1 Expand Administration within the project source's folder list.
2 Click User Connection Monitor. The connection information is displayed on the right-hand side.
3 Once the connections are displayed, you can choose from several options:
• To refresh the information, press F5.
• To view a connection's details, double-click it. A Quick View dialog box opens.
• To disconnect a user, select the connection and press DELETE, then confirm that you wish to disconnect the user. You can also right-click a user and select Disconnect.
• Scheduler: Connections made by Intelligence Server to process scheduled reports or documents appear as <Scheduler> in the Network Address column. Scheduler sessions cannot be manually disconnected as described above. However, these sessions are removed automatically by Intelligence Server when the user session idle time-out value is reached.
• Temp client: At times, you may see Temp client in the Network Address column. This may happen when Intelligence Server is under a heavy load and a user accesses the Projects or Home page in MicroStrategy Web (the pages that display the list of available projects). Intelligence Server creates a temporary session that submits a job request for the available projects and then sends the list to the Web client for display. This temporary session, which remains open until the request is fulfilled, is displayed as Temp client.
If a database connection is cached, the ODBC connection from Intelligence Server to the data warehouse remains open. However, if the data warehouse connection surpasses the connection time-out or lifetime governors (set on the Database connections dialog box, Advanced tab), the ODBC connection closes and it no longer displays in the Database Connection Monitor.
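The two governors behave like an idle time-out plus a maximum lifetime on each cached connection. As a minimal sketch of that policy (threshold values here are illustrative, not MicroStrategy defaults):

```python
import time

class CachedConnection:
    """Sketch of the connection-cache governors described above: a cached
    connection is closed once it exceeds its idle time-out or its maximum
    lifetime. Illustrative only; thresholds are assumptions."""

    def __init__(self, idle_timeout=600, max_lifetime=3600, clock=time.monotonic):
        self.clock = clock
        self.created = clock()
        self.last_used = self.created
        self.idle_timeout = idle_timeout
        self.max_lifetime = max_lifetime
        self.open = True

    def reap(self):
        # Close the connection if either governor has been exceeded.
        now = self.clock()
        if (now - self.last_used > self.idle_timeout or
                now - self.created > self.max_lifetime):
            self.open = False

    def use(self):
        # Returns True if the cached connection is still usable.
        self.reap()
        if self.open:
            self.last_used = self.clock()
        return self.open
```

Once `reap` closes the connection, it would also drop out of the Database Connection Monitor, as described above.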
To access the Database Connection Monitor
1 Expand Administration within the project source's folder list.
2 Click Database Connection Monitor. The database connection information is displayed on the right-hand side.
3 To delete a connection to a database, right-click the connection and select Disconnect.
1 Expand Administration within the project source's folder list.
2 Click Schedule Monitor. The schedule request information is displayed on the right-hand side.
3 To delete a schedule request, right-click it and select Expire.
1 Expand Administration within the project source's folder list.
2 Click Cache Monitor. The cache information is displayed on the right-hand side.
You can display complete information about the caches. To do this, from the View menu, select Details. Then, from the Administration menu, select Complete Information. This allows you to sort the caches by a column. For example, you could sort by cache size to see which report caches are the largest.
You can perform any of the following actions after you select one or more caches and right-click:
• Delete: removes the cache from both memory and disk
• Invalidate: renders the cache unusable, but leaves a reference to it in users' History Lists (if any)
• Load from disk: loads into memory a cache that was previously unloaded to disk
• Unload to disk: takes the cache out of memory and stores it on disk
• Filter by Project: lists only caches in the project you select
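The Delete, Invalidate, Load from disk, and Unload to disk actions can be pictured as transitions on where a cache lives and whether it is usable. A minimal sketch of those transitions (illustrative only, not Intelligence Server's internal model):

```python
class ReportCache:
    """Tracks whether a report cache is in memory and/or on disk, and
    whether it is still valid. Illustrative sketch of the Cache Monitor
    actions described above."""

    def __init__(self):
        self.in_memory = True
        self.on_disk = True
        self.valid = True

    def delete(self):
        # Removes the cache from both memory and disk.
        self.in_memory = False
        self.on_disk = False

    def invalidate(self):
        # Renders the cache unusable; History List references remain.
        self.valid = False

    def unload_to_disk(self):
        # Takes the cache out of memory and stores it on disk.
        if self.in_memory:
            self.on_disk = True
            self.in_memory = False

    def load_from_disk(self):
        # Loads into memory a cache that was previously unloaded to disk.
        if self.on_disk:
            self.in_memory = True
```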
For more information about any of these actions, see Managing report caches, page 183.
Cache statuses
A cache's status is displayed using some combination of the following letters:

Status   Stands for    Description
R        Ready         The cache is valid and ready to be used
P        Processing    The cache is currently being updated
I        Invalid       An object definition within the cache has changed and the cache will not be matched
E        Expired       The cache is invalid because its lifetime has elapsed
L        Loaded        The cache is in memory
U        Updated       The cache file has been updated
D        Dirty         The cache in memory is more recent than the one on disk
F        Filed         Cache file exists on disk
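Because the Cache Monitor combines these letters into a single string (for example, a ready cache that is loaded in memory and filed on disk shows R, L, and F), a small lookup is enough to expand one. A sketch:

```python
# Flag letters as documented for the Cache Monitor status column.
CACHE_STATUS = {
    "R": "Ready",
    "P": "Processing",
    "I": "Invalid",
    "E": "Expired",
    "L": "Loaded",
    "U": "Updated",
    "D": "Dirty",
    "F": "Filed",
}

def describe_status(status):
    """Expand a combined status string such as 'RLF' into its flag names.
    Unrecognized letters are reported rather than silently dropped."""
    return [CACHE_STATUS.get(ch, "Unknown (" + ch + ")") for ch in status]
```

For example, describe_status("RLF") returns ["Ready", "Loaded", "Filed"].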
Cache types
Caches can have the following types:

Type      Description
Matching  The cache is valid and available for use.
History   The cache remains in a user's History List.
XML       The cache exists as an XML file and is referenced by the matching cache. When the corresponding Matching cache is deleted, the XML cache is deleted.
To display History and XML caches, right-click anywhere in the Cache Monitor and select Show caches for History List messages or Show XML caches. For more information about each cache type, see Types of report caches, page 173.
1 Expand Administration within the project source's folder list.
2 Click Cluster Monitor.
3 The Cluster Monitor provides two views: the node view (or server view) and the project view. The node view appears by default on the right-hand side. To toggle between views, right-click in the view (that is, on the right-hand side of the window) and select either View by project or View by node.
• View by node: This view lists all projects and their status by server. To display complete information about the nodes, from Desktop's View menu, select Details. You can see which nodes are active, their workloads, and the cluster port number and protocol being used. You can also see whether a project is loaded (running) or unloaded on a given node.
Workload: The system calculates the workload based on how busy the server is. The higher the workload number, the harder the server is working. To see a workload summary for a server, right-click Administration in the project source's folder list and select Server summary.
• View by project: This view lists all servers and, for each project, the status of the project on that server. You can see whether a project is loaded (that is, running) or unloaded on a given node.
4 To add a node to the cluster or remove a node from it, select one of the nodes in either view of the Cluster Monitor, right-click, and select one of the following:
• Join cluster: available only if the node is not currently in a cluster
• Delete: available only if the node is already in a cluster
For more information about clustering Intelligence Servers, see Clustering Intelligence Servers, page 332. For more information about distributing projects across nodes in the cluster, see Distributing projects across nodes in a cluster, page 354.
The Windows Performance Monitor is shipped with both Windows NT and Windows 2000; however, the two versions differ in their implementations.
To access the Windows Performance Monitor
From the Windows Start menu, point to Programs, then MicroStrategy, then Intelligence Server, and then choose Microsoft Performance Monitor.
The features of Performance Monitor are as follows:
• Logs and saves historical performance data.
• Charts data in real time.
• Provides several real-time and historical views of memory, server, disk, and network performance.
• Enables you to observe the effects of changes to your system.
• Alerts you about system-related problems immediately.
• Monitors multiple servers using one Performance Monitor session.
• Runs several Performance Monitor sessions at one time.
• Saves specific analysis configurations via workspaces.
• Exports data to other applications, including Microsoft Excel and Microsoft SQL Server.
For more information on the Windows Performance Monitor, see http://www.microsoft.com/. For details about monitoring resources in a MicroStrategy environment:
• Memory: see Monitoring memory use with Performance Monitor, page 415
• Processor (CPU): see Processor type and speed, page 412, and Number of processors, page 412
• Physical disk: see Physical disk, page 413
• Network bandwidth: see How the network can affect performance, page 432
• Determine whether you need to add more database connection threads if queue times are long
• Find which tables are used the most, and create indexes to improve performance
You can use the predefined reports as they are, copy the reports and then modify the copies, or build your own custom reports to suit your needs. The Enterprise Manager project includes over 300 metrics and 90 predefined reports. Many of these reports include prompts, which accept user input before a report is executed, for flexibility. You can create new metrics, prompts, filters, templates, or reports to suit the needs of your environment or the type of analysis you want to do. All of the predefined objects are located in the Public Objects folder in the Enterprise Manager project.
This section provides the following information:
• Best practices for using Enterprise Manager, page 252, provides MicroStrategy's recommendations for best practices when using Enterprise Manager.
• Understanding the Enterprise Manager architecture, page 254, explains how Enterprise Manager interacts with your MicroStrategy system.
• Installing and configuring Enterprise Manager, page 257, walks you through the process of setting up Enterprise Manager to monitor projects in your system.
• Analyzing statistics with Enterprise Manager, page 283, explains how Enterprise Manager uses the data in the statistics tables.
• Reporting in Enterprise Manager, page 297, provides descriptions of all the predefined reports available in Enterprise Manager. These reports can help you identify problem areas in the system.
privileges to use Enterprise Manager. For more information on privileges and permissions, see Controlling access to application functionality, page 124.
• Use Enterprise Manager to monitor itself. This feedback can help you fine-tune Enterprise Manager's monitoring ability. For instructions on how to monitor a project in Enterprise Manager, see Selecting the projects to monitor, page 278.
• Use the project documentation available from within the Enterprise Manager project to see details on every object in the project, as well as a detailed list of sample reporting requirements. To access project documentation, log in to the Enterprise Manager project in Desktop. Then, from the Tools menu, select Project Documentation. When you install Enterprise Manager on a given computer, the Enterprise Manager project documentation is initially available only to a user working directly on that computer. You can share the project documentation so that networked users, such as Web users, can also view it.
• For additional information about every object in the Enterprise Manager project, see the object's Long Description property (right-click the object, select Properties, and select the Long Description tab). The long description includes sample reporting requirements for the object, where applicable.
For best practices relating to specific Enterprise Manager functionality, see the following sections:
• Best practices for installing and configuring Enterprise Manager, page 258
• Best practices for Enterprise Manager data loading, page 287
• Best practices for Enterprise Manager reporting, page 299
operation, lookup tables, and fact tables with transformed information from the statistics tables. Lookup tables contain descriptive information about each object in the monitored projects, such as name, owner, creation date, folder path, and so on. Fact tables contain data that has been loaded from the statistics database by the data load process.
Enterprise Manager reporting

The Enterprise Manager administrator executes reports in the Enterprise Manager project to analyze the information in the data warehouse. For detailed descriptions of each report, see Reporting in Enterprise Manager, page 297.
5 Create the statistics tables in the Enterprise Manager data warehouse. The MicroStrategy Configuration Wizard walks you through the steps to create these tables. For more information and detailed instructions, see Creating statistics tables in the Enterprise Manager data warehouse, page 263.
6 Configure your projects to log statistics to the statistics tables. For each project, you need to set the location of the statistics tables and the information that you want to have logged. For more information and detailed instructions, see Configuring statistics logging, page 266.
7 Step through the Enterprise Manager Console, which sets up the following processes:
a Initialize, which migrates the Enterprise Manager project to your Enterprise Manager production databases. For more information and detailed instructions, see Initializing your Enterprise Manager production environment, page 272.
b Configure, which selects the projects to monitor. For more information and detailed instructions, see Selecting the projects to monitor, page 278.
c Schedule, which specifies how often your projects' statistics are loaded into the Enterprise Manager data warehouse. For more information and detailed instructions, see Scheduling how often statistics are loaded into the Enterprise Manager data warehouse, page 280.
• The Enterprise Manager project contains a user group named EM Admin. Users assigned to this group have all the necessary privileges to configure and use the Enterprise Manager project. You should assign the Enterprise Manager users to this group.
• To ensure that you can successfully upgrade the Enterprise Manager project in the future, do not modify schema objects. Instead, make copies of the objects you want to modify, and then modify the copies.
• Upgrade to Enterprise Manager service packs when they become available. MicroStrategy incorporates your feedback in the service packs, including fixes to issues and additional enhancements.
For a complete list of the versions of each database that are certified for use with Enterprise Manager, see the readme file for MicroStrategy Administrator.
• Administrator privileges for the MicroStrategy projects that you want to monitor in Enterprise Manager. You must also have the Create Configuration Objects privilege for the project source in which you are creating the Enterprise Manager project. For detailed information about privileges and permissions, see Controlling access to application functionality, page 124.
• The Intelligence Server statistics that you need to report on. For a list and description of the statistics logging options, see Configuring statistics logging for a project, page 283.
• The frequency and best times for Enterprise Manager to load data from the statistics tables into the data warehouse. For example, if you need near-real-time data, you can load the data as often as once per hour. However, if you are collecting statistics in great detail, you should run the data load during times when Intelligence Server usage is low, such as overnight. For an explanation of the data loading process, see Data loading and Enterprise Manager maintenance tasks, page 285.
• To use the four dashboards that come with Enterprise Manager, you need a MicroStrategy Report Services license. If you do not have a Report Services license, contact your account representative for information about obtaining one. For information about the dashboards in Enterprise Manager, see Dashboards, page 299.
These dashboards are designed for use with MicroStrategy Web. If you are using MicroStrategy Web Universal, the links to other reports in the dashboards do not function. To correct the links, edit the dashboards and change all occurrences of Main.aspx in the links to mstrWeb.
Enterprise Manager components that are installed with Administrator:
• Microsoft Access metadata database: This database contains the metadata for the Enterprise Manager project, including all of the facts, attributes, hierarchies, metrics, filters, and reports that are predefined as part of Enterprise Manager.
• Microsoft Access data warehouse database: This database contains the fact and lookup tables that need to be populated, as well as configuration tables with information necessary for the operation of Enterprise Manager.
• Enterprise Manager Console: This application helps you configure Enterprise Manager, and launches the data loading scripts.
• SQL scripts: A wide variety of SQL scripts that create warehouse tables, perform various other setup tasks, and perform the data load process.
For instructions on installing MicroStrategy Administrator, see the MicroStrategy Installation and Configuration Guide.
The Enterprise Manager-specific tables in this database are created and populated when you proceed through the Enterprise Manager Console, in the Initialization step. This is done by importing the Microsoft Access metadata database into the production database. For information about this process, see Initializing your Enterprise Manager production environment, page 272.
To create a new Enterprise Manager metadata database
1 Create a database to store your Enterprise Manager metadata. (This is generally performed by your database administrator.) This database must be one of the databases certified for use with MicroStrategy metadata. For a list of certified databases, see the MicroStrategy readme.
2 Use the Connectivity Configuration Wizard to create a Data Source Name (DSN) for the metadata. Note this DSN for later; it is needed when you create the statistics tables and when you specify the DSN for the Metadata Location.
To access the Connectivity Configuration Wizard, from the Windows Start menu point to Program Files, then MicroStrategy, then Tools, then select Connectivity Configuration Wizard. For detailed instructions on using the Connectivity Configuration Wizard, see the MicroStrategy Installation and Configuration Guide.
Next, you must create a data warehouse to store the statistics that Enterprise Manager will help you analyze.
You can use an existing database in your system, or create a new database:
• To use an existing database, note the Data Source Name (DSN) for it. This DSN is used later in the installation process, when you create the statistics tables and when you step through the Enterprise Manager Console and are prompted to select the DSN for the Warehouse Location.
• To create a new database, follow the procedure below. For a list of databases that are certified for use with Enterprise Manager, see MicroStrategy Enterprise Manager prerequisites, page 259, or see the readme file for MicroStrategy Administrator.
1 Create the empty data warehouse database. (This is generally performed by your database administrator.) This database must be one of the databases certified for Enterprise Manager, as listed in the readme file for MicroStrategy Administrator.
2 Use the Connectivity Configuration Wizard to create a Data Source Name (DSN) for the data warehouse. Note this DSN for later; you need it when you create the statistics tables and when you select the DSN for the Warehouse Location.
To access the Connectivity Configuration Wizard, from the Windows Start menu point to Program Files, then MicroStrategy, then Tools, then select Connectivity Configuration Wizard. For detailed instructions on using the Connectivity Configuration Wizard, see the MicroStrategy Installation and Configuration Guide.
Next, you must create the empty statistics tables in this database using the MicroStrategy Configuration Wizard.
where you configure the Intelligence Server machines to log information. From these tables, the data load process moves data to the data warehouse.
To create the empty statistics tables
1 Start the MicroStrategy Configuration Wizard.
• Windows: From the Windows Start menu, point to Programs, then MicroStrategy, then select Configuration Wizard.
• UNIX/Linux: Browse to the directory specified as the home directory during MicroStrategy installation, browse to the bin folder, then type ./mstrcfgwiz and press ENTER.
2 On the Welcome page, select Metadata Repository and Statistics Tables and click Next.
3 Select the Create Statistics Tables option and clear all other options. Click Next. The ODBC Data Source Name page opens.
4 From the ODBC Data Source Name drop-down list, select the DSN for the Enterprise Manager data warehouse. Any table in this database that has the same name as a MicroStrategy statistics table is dropped. For a list of the MicroStrategy statistics tables, see Enterprise Manager statistics data dictionary, page 875.
5 Enter a valid login and password for the data warehouse database. The login that you specify must have permission to create and drop tables in the database, and permission to create views.
6 Click Next.
7 On the next page of the Configuration Wizard, in the Statistics Script Location box, the default script is displayed. The selected script depends on the database type you specified previously. To select a different script, click ... to browse to and select a script that corresponds to the DBMS you are using. The default location for these scripts is: \Program Files\Common Files\MicroStrategy\
If you use the Enterprise Manager warehouse in Oracle 9i or later, use the em_sql_ora_9i.sql script. This script uses SQL-92 syntax, which results in better execution and improved performance for Enterprise Manager queries.
8 Click Next.
9 The final screen shows a summary of your choices. To create the statistics tables, click Finish.
The statistics tables are now created. Next, you must specify the MicroStrategy projects that log statistics into the statistics tables.
1 From Desktop or Control Center, right-click the project source and select Configure MicroStrategy Intelligence Server. The Intelligence Server Configuration Editor opens.
2 Under Server Definition, select Statistics.
3 Select the Single Instance Session Logging option.
4 Select a project from the drop-down list. The statistics for all projects on this Intelligence Server will be logged according to the settings for this project. These settings are defined in the next procedure.
5 Click OK. The Intelligence Server Configuration Editor closes. Then continue with the procedure To set up a project to log statistics below.
To set up a project to log statistics
1 In MicroStrategy Desktop, log in to the server (three-tier) project source that contains the projects you want to monitor with Enterprise Manager. You must use a login that has Administrator privileges.
2 Right-click the project that you want to monitor and select Project Configuration. The Project Configuration Editor opens. If you are using single instance session logging, the project that you select to configure must be the project that you selected when you set up single instance session logging.
3 Under Database Instances, select Relational.
4 You need to create a new database instance for the statistics database. Click New. The Database Instances dialog box opens.
5 In the Database instance name field, type in a name for the statistics logging database instance. The name should describe the Enterprise Manager data warehouse that you created earlier. 6 From the Database connection type drop-down list, select the database type and version that corresponds to the Enterprise Manager data warehouse database DBMS. If you are using Oracle 8i R3, select Oracle 8i R2/R3. For more information, see the note on page 24.
7 You need to create a new database connection to connect to the database instance. Click New. The Database Connections dialog box opens.
8 In the Database connection name field, type a name for the database connection. 9 Select the Data Source Name used to connect to the Enterprise Manager data warehouse. 10 You need to create a new database login to log in to the database instance. Click New. The Database Logins dialog box opens.
11 Type a name for the new database login in the Database login field. If this database login is more than 32 characters long, the statistics logging will generate errors in the DSS Errors log. 12 Type a valid database login ID and password in the corresponding fields. MicroStrategy does not validate this login ID and password when you click OK, so be careful to type them correctly. 13 Click OK three times until you return to the Project Configuration Editor. Each time, make sure your new database login and database connection are selected before clicking OK. 14 Select the Statistics category. 15 From the Statistics DB Instance drop-down list, select the database instance you want statistics to be logged to.
16 Select the Log information about check box, and select the appropriate check boxes for the types of logging you wish to analyze. For details about the statistics logged by each option, see Configuring statistics logging for a project, page 283, or click Help. 17 Click OK. 18 To begin logging statistics, reload the project. (In the Project Monitor, right-click the project and select Unload, then right-click the project and select Load.) The project level statistics settings must be specified for any additional projects you want to monitor and analyze with Enterprise Manager. In addition, if you choose to use Complete Session Logging (described in Configuring statistics logging, page 266), you must specify the statistics database instance for each project. Repeat the applicable steps from the procedure above for any additional projects.
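Step 11 above warns that a database login longer than 32 characters causes errors in the DSS Errors log, and step 12 notes that the login ID and password are not validated when you click OK. A pre-flight check along these lines can catch the length problem before it reaches the log (this helper is hypothetical, not part of the product):

```python
def check_statistics_login(login):
    """Pre-flight check for the database login name used for statistics
    logging. Returns a list of problems (empty when the name is acceptable).
    The 32-character limit comes from the guide; the helper itself is
    an illustrative assumption."""
    problems = []
    if not login:
        problems.append("login name is empty")
    elif len(login) > 32:
        problems.append(
            "login name is %d characters; names over 32 characters "
            "generate errors in the DSS Errors log" % len(login))
    return problems
```

For example, check_statistics_login("em_stats_login") returns an empty list, while a 40-character name is flagged.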
according to the time window for a data load process. The time window is determined by the data warehouse RDBMS time. If the data warehouse time differs from the Intelligence Server machine time, certain reports in Enterprise Manager can have missing data. For example, if statistics appear for Deleted report in Enterprise Manager reports, it may be because statistics are being logged for reports that, according to the warehouse's timestamp, should not exist.
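Because the load window is anchored to the warehouse clock, a row stamped by an Intelligence Server whose clock runs ahead of the warehouse can fall outside the window and be skipped. A small sketch of that effect (the one-hour window is an illustrative assumption):

```python
from datetime import datetime, timedelta

def load_window(warehouse_now, window=timedelta(hours=1)):
    """The data load selects statistics rows by the warehouse RDBMS clock,
    so the window is anchored to warehouse_now, not to the Intelligence
    Server machine clock. Window length here is an assumption."""
    return (warehouse_now - window, warehouse_now)

def row_in_window(row_timestamp, warehouse_now, window=timedelta(hours=1)):
    # A statistics row is picked up only if its timestamp (written using
    # the Intelligence Server clock) falls inside the warehouse-anchored window.
    start, end = load_window(warehouse_now, window)
    return start <= row_timestamp <= end
```

If the Intelligence Server clock is five minutes ahead of the warehouse, a row it stamps "now" lands after the window's end and is missed, which is why keeping the two clocks synchronized matters.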
You can close the Enterprise Manager Console at any time without losing anything you have already completed. The Enterprise Manager Console automatically saves all the work that you do and continues from where you stopped when you open the console again.
The metadata contains all of the Enterprise Manager project definition information: schema objects, metrics, templates, filters, reports, and more. The data warehouse contains configuration tables necessary to the operation of Enterprise Manager, some time-based lookup tables, and standard lookup tables that are populated during the data load process. Initializing the environment also involves running a SQL script specific to the type of production database that you are using. This SQL script creates the lookup and fact tables and creates the primary keys in the data warehouse. It also inserts database-specific SQL procedures into the EM_SQL table for use in the data load process.
To initialize the Enterprise Manager project
1 From the Start menu, point to Programs, then MicroStrategy, then Administrator, then choose Enterprise Manager Console.
2 Read the Home page and click Next.
3 Read the Initialize Overview page. Be sure you have the Data Source Names for the Enterprise Manager data warehouse and the Enterprise Manager metadata database before you continue. Click Next.
4 Read the Initialize Project page and click Transfer to import the project files. The Project Mover Wizard opens.
This procedure walks you through the Project Mover Wizard. For a detailed explanation of the wizard, see Migrating a project to a new database platform, page 490, or click Help.
5 Read the welcome screen and click Next. The Select your source files page opens.
6 Verify that the correct Metadata Repository Source is specified. The metadata repository source is a Microsoft Access database that was installed as part of the Enterprise Manager installation. If it is not in the default Enterprise Manager directory, search for EM_Proj_MD.mdb and type the correct path in the field.
7 Verify that the correct Warehouse Source is specified. The warehouse source is also a Microsoft Access database that was installed as part of the Enterprise Manager installation. If it is not in the default Enterprise Manager directory, search for EM_WH.mdb and type the correct path in the field.
8 Leave the Run a SQL script check box selected, and make sure that the Overwrite warehouse tables if they exist check box is selected.
9 Click Next. The Project Selection page opens.
10 Select Enterprise Manager and click Next. The Run a SQL Script on the Warehouse page opens.
11 In the SQL script to be run after warehouse transfer field, specify the Enterprise Manager SQL script appropriate to your data warehouse:
a Click the browse (...) button. A window opens to a directory containing .sql files.
b Select the em_sql_databasetype.sql file (where databasetype corresponds to the DBMS of your Enterprise Manager data warehouse).
c Click Open.
13 Select the Data Source Name for the Enterprise Manager metadata database from the drop-down list.
14 Type a valid User Name and Password for this database.
15 Click Next. When the ODBC Driver dialog box opens, verify that you are using a certified ODBC driver and click Close. The Warehouse Location page opens.
Warehouse Location
16 Select the Data Source Name for the Enterprise Manager data warehouse from the drop-down list.
17 Type a valid User Name and Password for this database.
18 Select the correct database type from the Database Connection Type drop-down list.
19 Click Next. The Metadata Repository Connection page opens.
20 In the Project Source Name field, type the name of the project source in which to create the Enterprise Manager project. If this project source does not exist, it is created.
21 Click Next. The Summary page opens.
22 Review the information on the summary page and click Transfer when you are ready. The process can take several minutes.
23 When the transfer process is complete, the Project Mover Wizard closes and you are returned to the Enterprise Manager Console. Click Next. The Configure Overview page opens.
1 Read the Configure Overview page and click Next. The Choose Projects page opens.
2 From the Available Server Project Sources list, select the project source that contains the projects that you want to monitor using Enterprise Manager.
If the project source that you want to monitor has the default name of New Project Source, you must rename the project source before adding it to Enterprise Manager. Otherwise, Enterprise Manager will be unable to load statistics and fact data from the projects in the project source.
3 Click the right arrow (>). The Select Projects (Project Source Name) dialog box opens.
4 In the User Login area, type a MicroStrategy login ID that has Administrator access to the selected projects, and the password for that login ID.
5 Click Validate. The available projects are listed in the selection area.
6 Select the projects you want to monitor with Enterprise Manager and click OK. You can only select projects that have had statistics logging enabled (see Configuring statistics logging, page 266).
7 Repeat steps 2 through 6 for any other project sources containing projects you want to monitor.
In a clustered environment, you can only monitor a project once, regardless of how many Intelligence Servers it is running on. This is because Enterprise Manager connects to the project to populate the lookup tables, and because the project is the same on all Intelligence Servers when clustered. All of the statistics are available in Enterprise Manager as long as each clustered Intelligence Server is set up to log statistics to the Enterprise Manager data warehouse.
8 Click Next. The Schedule Overview page opens.
Scheduling how often statistics are loaded into the Enterprise Manager data warehouse
This procedure configures how often data is automatically loaded from the statistics tables into the Enterprise Manager data warehouse. For detailed information about the data load process, see Data loading and Enterprise Manager maintenance tasks, page 285. The data load process is scheduled to run automatically, as a Windows service. To manually force a load, click Run Manual Loading Now. You can also schedule the data load process to run by using the Windows AT command and calling the MicroStrategy Enterprise Manager data load command: maemetl. For syntax information, see To perform Enterprise Manager maintenance tasks from the command line, page 291.
To set the schedule for Enterprise Manager data loading
1 Read the Schedule Overview page and click Next. The Define Schedule page opens.
Time Range
2 Select the time of day at which you want the data load process to start.
3 Select the time of day after which the data load process does not start.
Recurrence
4 Specify how often within the Time Range the data load occurs by selecting either Every _ minutes or Every _ hours and specifying the number of minutes or hours between data loads.
5 Select Daily, Weekly, or Monthly. Depending on which you select, the options to the right change. Then select one of the options specifying when you want the data load to occur.
Date Range
6 Select the date on which to start the scheduled data load.
7 Select No end date or End by and choose a date. If you choose a date, then after this date no data will be loaded into the Enterprise Manager data warehouse.
Maintenance tasks
8 From the Tools menu, select Options. The Options dialog box opens.
9 Select the Data Load tab.
10 Select the check box for any maintenance tasks you want to perform during each data load. For a description of each task, see Enterprise Manager maintenance tasks, page 289.
11 Click Apply Changes. Changes made on this page do not take effect until you click Apply Changes.
12 To restart the data loading service with the new schedule, click Stop to stop it, and then click Start. To load statistics immediately and not wait for the first scheduled load to occur, click Run Manual Loading Now.
To trigger an immediate data load from a Windows Command Prompt, use the maemetl command. For syntax information, see To perform Enterprise Manager maintenance tasks from the command line, page 291.
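As a way to reason about how the Time Range and Recurrence settings interact, the following sketch enumerates the start times a day's schedule would produce. The function name and representation are illustrative only; this is not the data loading service's actual implementation.

```python
from datetime import date, datetime, time, timedelta

def scheduled_runs(day, start, end, every_minutes):
    """List the data-load start times for one day, given a Time Range
    (start/end) and an every-N-minutes Recurrence (illustrative sketch)."""
    runs = []
    t = datetime.combine(day, start)
    last = datetime.combine(day, end)
    while t <= last:  # the data load does not start after the end of the range
        runs.append(t.time())
        t += timedelta(minutes=every_minutes)
    return runs

# For example, a 01:00-05:00 Time Range with a 2-hour Recurrence
# yields start times of 01:00, 03:00, and 05:00:
runs = scheduled_runs(date(2008, 1, 1), time(1, 0), time(5, 0), 120)
```

Note that the end of the Time Range bounds when a load may *start*; a load that begins inside the range may still finish after it.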
Document Jobs - Detailed and Report Jobs - Detailed: These options are recommended to obtain granular statistics data on documents and reports, respectively.
Columns/Tables, Job SQL, and Security Filters: These options depend on your information requirements and the types of analysis you want to perform with Enterprise Manager. Job SQL generally creates a very large statistics table; select this option only when you specifically need the job SQL data.
For details about the database tables that the statistics options log to, see Enterprise Manager statistics data dictionary, page 875. For information about troubleshooting statistics logging, see Statistics logging, page 608.
To configure statistics logging for a project
1 In MicroStrategy Desktop, right-click a project and select Project Configuration. The Project Configuration dialog box opens.
2 Select the Statistics category, and the General subcategory under it.
3 Make sure a Statistics Database Instance is selected, and then select the Log Information About check box.
4 Select the appropriate options, based on the statistics you want to log.
5 Click Apply. The changes you have made to the statistics logging are saved.
lookup and fact tables that are constructed according to the schema for the Enterprise Manager project. For a detailed examination of the statistics database, including the statistics tables and the Enterprise Manager data warehouse, see Appendix C, Enterprise Manager Data Model and Object Definitions.

Back up your statistics database regularly, and retain only as much data as is required for your analysis. You can purge the statistics database on a regular basis to decrease table size and improve performance. For more information about purging statistics, see Improving database performance by purging statistics data, page 293.

Intelligence Server opens a maximum of one database connection per project that is configured to log statistics. For example, in a project source with four projects, each of which is logging statistics, there may be up to four database connections opened for the purpose of logging statistics. However, the maximum number of database connections is typically only seen in high concurrency environments.

In a clustered environment, each node of the cluster requires a database connection for each project loaded onto that node. For example, a two-node cluster with 10 projects loaded on each node has 20 connections to the warehouse (10 for each node). Even if the same ten projects are loaded on both nodes, there are 20 database connections.
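The connection arithmetic in the examples above can be expressed directly. This trivial helper is purely illustrative and is not part of the product:

```python
def statistics_connections(nodes, projects_per_node):
    """Upper bound on statistics-logging database connections:
    one connection per logging project per cluster node, even when
    the same projects are loaded on every node."""
    return nodes * projects_per_node

# A two-node cluster with 10 projects per node: up to 20 connections.
upper_bound = statistics_connections(2, 10)
```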
The first half of the Enterprise Manager data load process involves gathering metadata from projects. The system connects to the project sources specified in the Enterprise Manager console and transfers relevant information, such as report names, user/group names, and object relationships (for example, user/group relationships and which schedules are mapped to which reports), to the Enterprise Manager lookup tables. Metadata information for all projects in a given project source is transferred into the Enterprise Manager lookup tables, regardless of whether those projects are configured to log statistics.

The second half of the data load process involves transferring facts from the statistics tables into the Enterprise Manager warehouse. SQL scripts transform the statistics data into a form that is useful for administrative reporting in the Enterprise Manager warehouse, ensuring that reporting on MicroStrategy metadata content is feasible. Some of Enterprise Manager's fact tables are actually views of certain statistics tables, which substantially speeds up the data load process.

To ensure that the warehouse data is complete, at the beginning of the data load process a timestamp is created in the EM_IS_LAST_UPDATE table. This timestamp marks the end of the data migration window. The beginning of the data migration window is determined by the previous data load's timestamp entry in the EM_IS_LAST_UPDATE table. Therefore, any statistics logged between the start of the last data load and the start of the current data load are transferred. At the completion of the data load process, Enterprise Manager updates the EM_IS_LAST_UPDATE table to show that the process is finished. If the data load process is interrupted before it finishes, this last update is not stamped. In this case, the next time the data load runs, it starts with data from the time the last successful data load was finished.
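The EM_IS_LAST_UPDATE bookkeeping described above can be sketched as follows. Representing the table as (start timestamp, completed) pairs and the helper name are assumptions made for illustration, not the actual schema or any product API:

```python
from datetime import datetime

def migration_window(load_history, current_start):
    """Determine the statistics window for a data load, per the
    EM_IS_LAST_UPDATE logic described above (illustrative sketch).
    load_history: (start_timestamp, completed) pairs for past loads."""
    completed = [start for start, done in load_history if done]
    # An interrupted load never receives its completion stamp, so the
    # next run falls back to the most recent load that did finish;
    # statistics from the failed attempt are re-read rather than lost.
    begin = max(completed) if completed else None
    return begin, current_start
```

For example, if the Jan 3 load was interrupted, the Jan 4 load's window opens at the Jan 2 load rather than at Jan 3.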
Data loading generally occurs according to the schedule that you set up when you configure Enterprise Manager. For instructions on changing the schedule for the data load, see Scheduling how often statistics are loaded into the
Enterprise Manager data warehouse, page 280. You can also run a manual data load at any time, or manually load data from a specific time window if your warehouse is missing data for some reason. For information about troubleshooting the Enterprise Manager data loading process, see Data loading, page 610.
To change the data load schedule

1 In the Enterprise Manager console, select Schedule.
2 Click Next.
3 Make any necessary changes to the Time Range, Recurrence, or Date Range options. For details about each option, see Scheduling how often statistics are loaded into the Enterprise Manager data warehouse, page 280, or click Help.
4 Click Apply Changes. Changes made on this screen do not take effect until you click Apply Changes.
5 To restart the data loading service with the new schedule, click Stop to stop it, and then click Start.
To run a manual data load immediately
1 In the Enterprise Manager console, select Schedule.
2 Read the Scheduling Overview page and click Next.
3 Click Run Manual Loading Now. A dialog box opens with a progress bar indicating the progress of the data load process. When the data load finishes, click OK to close this dialog box.
To load data for a specific time period
1 In the Enterprise Manager console, from the Tools menu, select Data Load Options.
2 In the drop-down list boxes, specify the Start Date and End Date for the custom time window.
3 Click Custom Time Window. The Data Load Options dialog box closes, and a dialog box opens with a progress bar indicating the progress of the data load process. When the data load finishes, click OK to close this dialog box.
To execute a data load from the command line
Call the Data Loader executable using the command maemetl.exe, with one or more of the parameters listed below. For example, to execute a data load, the syntax is:
maemetl.exe -dm
The maemetl.exe file is located by default in the following location:
C:\Program Files\MicroStrategy\Administrator\Enterprise Manager
The data loading parameters are as follows:
-DM: Data migration process. This switch migrates data from the statistics tables to the Enterprise Manager data warehouse tables.
-CWM -MM/DD/YYYY -MM/DD/YYYY: Custom time window migration. This option loads all statistics data logged between the specified dates.
You can call the Data Loader executable with data load parameters, maintenance parameters, and warehouse purge parameters at the same time. For a list of maintenance parameters, see To perform Enterprise Manager maintenance tasks from the command line, page 291. For a list of warehouse purge parameters, see To purge Enterprise Manager warehouse data from the command line, page 296.
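As an illustration of the -CWM syntax, this sketch assembles the argument list for a custom time window migration. The MM/DD/YYYY check is a convenience invented for the example; maemetl.exe performs its own validation:

```python
from datetime import datetime

def cwm_args(start, end):
    """Build a custom time window migration command line (-CWM).
    Dates are MM/DD/YYYY; start must not be after end (illustrative check)."""
    s = datetime.strptime(start, "%m/%d/%Y")
    e = datetime.strptime(end, "%m/%d/%Y")
    if s > e:
        raise ValueError("start date is after end date")
    # Each date is passed as its own switch, prefixed with a hyphen.
    return ["maemetl.exe", "-CWM", "-" + start, "-" + end]

# Load all statistics logged in the first quarter of 2008:
cmd = cwm_args("01/01/2008", "03/31/2008")
```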
Maintenance tasks keep the Enterprise Manager project and data loads performing efficiently. They can be performed during each data load, or can be run immediately. The maintenance tasks that can be performed during a data load are:

Update folder paths: This task updates the location property of attributes such as Report, User, and so on. It synchronizes the Enterprise Manager warehouse lookup tables with the actual folder paths in the metadata.

Update object deletions: Information about deleted objects is retained in the Enterprise Manager lookup tables for historical analysis. This task synchronizes the existence property of an object in Enterprise Manager with its state in the metadata. A deleted object is marked with a Deleted flag in the corresponding lookup table.

Update database statistics: This task executes SQL scripts that cause the Enterprise Manager warehouse and statistics databases to collect statistics on their tables. The database uses these statistics to improve response times for Enterprise Manager reports. This option is available for SQL Server, Oracle, Teradata, and DB2 version 8.2.2 or later. Run this task frequently to improve the performance of Enterprise Manager reports. The SQL scripts run for this option are Upd_Stat_Table_Stats_DBname.sql and Upd_Fact_Table_Stats_DBname.sql.
Close open sessions: This task closes all sessions that are listed as open in the statistics database. Using this task can eliminate orphan sessions: entries in the statistics database indicating that a session was initiated in Intelligence Server but that no information was recorded when the session ended. Orphan sessions occur rarely, but they can affect the accuracy of Enterprise Manager reports that use Session Duration. For example, one long-running orphan session may skew the average session duration by several days. The SQL script run for this option is em_close_orphan_sessions_DBname.sql.
Repopulate relate tables: This task synchronizes the relationship (relate) tables in the Enterprise Manager warehouse, such as IS_SCHED_RELATE or IS_USR_GP_USR, with the metadata.
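To see how strongly a single orphan session can skew Session Duration metrics, consider a hypothetical day with 99 ordinary ten-minute sessions plus one orphan session left open for a week:

```python
def mean_minutes(durations):
    """Average session duration in minutes."""
    return sum(durations) / len(durations)

sessions = [10] * 99      # 99 ten-minute sessions
orphan = 7 * 24 * 60      # one orphan session open for a week (10,080 min)

# Without the orphan the mean is 10 minutes; with it, the mean jumps
# to about 111 minutes, more than an hour and a half of pure skew.
skewed = mean_minutes(sessions + [orphan])
```

This is why closing orphan sessions before reporting on Session Duration materially improves accuracy.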
1 From the Tools menu in the Enterprise Manager Console, select Options. The Options dialog box opens.
2 Select the Data Load tab.
3 Select the tasks to perform during the data load. For a detailed explanation of each task, see Enterprise Manager maintenance tasks, page 289.
4 Click OK to save your choices and close the Options dialog box. The selected data load options will be performed during each subsequent data load.
To perform an Enterprise Manager maintenance task immediately
1 From the Tools menu in the Enterprise Manager Console, select Data Load Options. The Data Load Options dialog box opens.
2 Click the button corresponding to the task to perform. For a detailed explanation of each task, see Enterprise Manager maintenance tasks, page 289.
To perform Enterprise Manager maintenance tasks from the command line
Call the Data Loader executable using the command maemetl.exe, with one or more of the parameters listed below. For example, to update object locations and clean up orphan sessions, the syntax is:
maemetl.exe -ol -cos
The maemetl.exe file is located by default in the following location:
C:\Program Files\MicroStrategy\Administrator\Enterprise Manager
The maintenance task parameters are as follows:
-OL: Object location update process. This switch updates the Location property of all objects. If users have moved objects or changed their folder structure, this switch allows Enterprise Manager to be synchronized with the users' project structure. This is the same as the Update Folder Paths option.
-EF: Exists flag update process. This switch determines which objects monitored by Enterprise Manager no longer exist in users' projects. This is the same as the Update Object Deletions option.
-UWTS: Update Enterprise Manager warehouse table statistics process. This switch helps improve query performance on the database. The database's optimizer uses these statistics when running SQL. This switch is available for SQL Server, Oracle, Teradata, and DB2 version 8.2.2 or later. This option is part of the Update Database Statistics option.
-USTS: Update Intelligence Server statistics table statistics process. This switch helps improve query performance on the database. The database's optimizer uses these statistics when running SQL. This switch is available for SQL Server, Oracle, Teradata, and DB2 version 8.2.2 or later. This option is part of the Update Database Statistics option.
-COS: Close orphan sessions process. This switch updates the information in users' statistics table to close any sessions left open by Intelligence Server for any reason. This is the same as the Close Orphan Sessions option.
-REP: Repopulate Enterprise Manager relate tables process. This option synchronizes the relationship tables with the metadata as of the time of the data load. This is the same as the Repopulate Relate Tables option.
You can call the Data Loader executable with data load parameters, maintenance parameters, and warehouse purge parameters at the same time. For a list of data load parameters, see To execute a data load from the command line, page 289. For a list of warehouse purge parameters, see To purge Enterprise Manager warehouse data from the command line, page 296.
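The correspondence between the console's maintenance options and these switches can be captured in a small lookup. The dictionary and helper below are conveniences invented for this sketch, not part of the product; the two Update Database Statistics entries use labels made up here to distinguish the -UWTS and -USTS halves:

```python
# Console maintenance option -> maemetl.exe switch, per the list above.
MAINTENANCE_SWITCHES = {
    "Update Folder Paths": "-OL",
    "Update Object Deletions": "-EF",
    "Update Warehouse Table Statistics": "-UWTS",   # half of Update Database Statistics
    "Update Statistics Table Statistics": "-USTS",  # other half of Update Database Statistics
    "Close Orphan Sessions": "-COS",
    "Repopulate Relate Tables": "-REP",
}

def maintenance_command(tasks):
    """Compose a maemetl.exe invocation for the selected maintenance tasks."""
    return ["maemetl.exe"] + [MAINTENANCE_SWITCHES[t] for t in tasks]

# The example from the text: update object locations and close orphan sessions.
cmd = maintenance_command(["Update Folder Paths", "Close Orphan Sessions"])
```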
To purge statistics for a single project

The IS_SESSION_STATS table does not contain any project-level information and is therefore not affected by purges at the project level.

1 In Desktop, right-click the project and select Project Configuration. The Project Configuration Editor opens.
2 Select the Statistics category, and the Purge subcategory beneath it.
3 In the Select dates drop-down lists, select the date range within which you want the purge operation to be performed.
4 In the Purge timeout (sec.) field, specify the time-out setting in seconds, which the server applies during the purge operation. The server issues a single SQL statement to purge each statistics table, and the time-out setting applies to each individual SQL statement issued during the purge operation.
5 To purge data by a specific Server / ProjectID combination, select the Purge by Intelligence Server Project combination check box. This option is useful for performing a more granular purge within a clustered environment, or in an environment where many Intelligence Servers are logging statistics to the same database.
6 Click Purge Now. The statistics for the specified dates are deleted from the statistics database.

Statistics at the project level can also be purged with a Command Manager script, using the PURGE STATISTICS command in the Project Configuration area. For more information about using Command Manager, including sample scripts, see the Command Manager online help, or see Automating processes using Command Manager, page 464.
To purge statistics for all projects logging statistics simultaneously from a particular server
1 In Desktop, right-click the project source and select Configure MicroStrategy Intelligence Server. The Intelligence Server Configuration dialog box opens.
2 Select the Server Definition category, and the Statistics subcategory underneath it.
3 In the Select dates drop-down lists, select the date range within which you want the purge operation to be performed.
4 In the Purge timeout (sec.) field, specify the time-out setting in seconds, which the server applies during the purge operation. The server issues a single SQL statement to purge each statistics table, and the time-out setting applies to each individual SQL statement issued during the purge operation.
5 To purge data by a specific Server / ProjectID combination, select the Purge by Intelligence Server Project combination check box. This option is useful for performing a more granular purge within a clustered environment, or in an environment where many Intelligence Servers are logging statistics to the same database.
6 Click Purge Now. The statistics for the specified dates are deleted from the statistics database.
To purge data for a specified time period from the Enterprise Manager warehouse
1 On the Enterprise Manager console, from the Tools menu select Data Load Options. The Data Load Options dialog box opens.
2 In the Specify custom time window area, set a Start date and End date.
3 Click WH data deletion. The Purge Projects dialog box opens.
4 From the list of projects, select the projects whose data you want to purge from the warehouse.
5 Click OK. All data pertaining to the selected projects during the specified time window is deleted from the Enterprise Manager warehouse.
To purge Enterprise Manager warehouse data from the command line
Call the Data Loader executable using the command maemetl.exe, with the parameter -WHD -MM/DD/YYYY -MM/DD/YYYY [-SServerName -PProjectName]. You can choose to purge data for only a specific server, or for only a specific project on a specific server. For example, to purge the warehouse data for all projects on the Intelligence Server machine ARCHIMEDES between January 1 and March 31, 2008, the syntax is:
maemetl.exe -whd -01/01/2008 -03/31/2008 -SARCHIMEDES
The maemetl.exe file is located by default in the following location:
C:\Program Files\MicroStrategy\Administrator\Enterprise Manager
You can call the Data Loader executable with data load parameters, maintenance parameters, and warehouse purge parameters at the same time. For a list of data load parameters, see To execute a data load from the command line, page 289. For a list of maintenance parameters, see To perform Enterprise Manager maintenance tasks from the command line, page 291.
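As an illustration of the -WHD syntax, this sketch assembles the purge argument list, including the optional server and project filters. The validation is a convenience for the example, not behavior of maemetl.exe itself:

```python
def whd_args(start, end, server=None, project=None):
    """Build a warehouse purge command line (-WHD). Per the syntax
    above, a project filter only makes sense together with a server
    filter, so -P without -S is rejected (illustrative check)."""
    args = ["maemetl.exe", "-WHD", "-" + start, "-" + end]
    if server is not None:
        args.append("-S" + server)       # switch and value are concatenated
        if project is not None:
            args.append("-P" + project)
    elif project is not None:
        raise ValueError("a project filter requires a server filter")
    return args

# The example from the text: purge Q1 2008 data for all projects on ARCHIMEDES.
cmd = whd_args("01/01/2008", "03/31/2008", server="ARCHIMEDES")
```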
Indexes are included for out-of-the-box reports on the Enterprise Manager fact tables. Check the indexes for the Enterprise Manager reports that you run most frequently or that take the longest to complete, and build additional indexes if you find reports that use tables without one.

The analysis areas of the Enterprise Manager project are described below. Several of the analysis area descriptions include details on one or two representative reports from that area, and report customization ideas that can be used with many of the reports within that analysis area.

Dashboards are an excellent source of summarized data, and provide interactive analysis at deeper levels of detail. For descriptions of each Enterprise Manager dashboard, see Dashboards, page 299.
Operations analysis reports provide information on system resource usage, concurrency, and report processing time. For descriptions of these reports, see Operations analysis, page 303.
Performance analysis reports support analysis related to usage patterns and Intelligence Server governing settings. For descriptions of these reports, see Performance analysis, page 311.
Project analysis reports provide information about MicroStrategy project growth and the usage patterns of metadata objects. For descriptions of these reports, see Project analysis, page 313.
Real-time analysis reports provide information related to current response times and schedule results. This information can be useful for troubleshooting and for optimizing your database configuration. For descriptions of these reports, see Real-time analysis, page 317.
User analysis reports analyze user activity and preferences. For descriptions of these reports, see User analysis, page 318.
For a detailed list of all Enterprise Manager facts, attributes, and metrics, see Appendix C, Enterprise Manager Data Model and Object Definitions.
Dashboards
Enterprise Manager comes with four Report Services documents that show one or more related reports in a dashboard-type display. Report Services documents are an excellent source of summarized data from related areas of analysis. Dashboards provide many interactive graphical features that enable exploration of the data at several levels of detail. You must have a MicroStrategy Report Services license to view or work with a Report Services document. In addition, dashboards must be viewed in MicroStrategy Web to take full advantage of their interactivity.
The dashboards provided with Enterprise Manager are:
Data Warehouse Optimization Advisor, page 300
Project Analysis Dashboard, page 301
Real-Time Server Usage Dashboard, page 302
Server Caching Optimization Advisor, page 302
These dashboards are designed for use with MicroStrategy Web. If you are using MicroStrategy Web Universal, the links to other reports in the dashboards do not function. To correct the links, edit the dashboards and change all occurrences of Main.aspx in the links to mstrWeb.
The bottom half of the dashboard contains a list of the database tables being considered for optimization. Clicking on a table brings up a list of optimizations and their potential effectiveness. These optimizations include aggregate table grouping and different types of secondary indexes.
On the dashboard, below the general analysis section is a separate section for each project. These sections contain a detailed analysis of the project, including:
A line graph showing the weekly growth trend for the numbers of reports and other objects in the project.
A line graph showing the weekly usage trend, in terms of number of users and number of user requests.
A line graph showing the weekly project performance trend, in terms of job execution time and number of jobs.
A graph showing the load distribution (ad hoc versus scheduled jobs).
This dashboard also contains a number of links to other Enterprise Manager reports. For customization purposes, the document links work over the ASP.NET version of MicroStrategy Web. For MicroStrategy Web Universal, the links must be modified appropriately. The MicroStrategy Developer Library (MSDL) that comes with the MicroStrategy SDK provides information to customize Report Services documents.
The dashboard also includes links to more detailed reports. For customization purposes, the document links work over the ASP.NET version of MicroStrategy Web. For MicroStrategy Web Universal, the links must be modified appropriately. The MicroStrategy Developer Library (MSDL) that comes with the MicroStrategy SDK provides information to customize Report Services documents.
A gauge showing the percentage by which database execution time has been reduced by jobs that hit the cache instead of the database.
A bar graph analyzing the hourly server workload by average time each job spends in queue, average execution time per job, and number of jobs per hour.
The second panel provides the optimization potential for three different optimization strategies, presented in three different grids:
Enabling caching for the worst-performing reports, based on projected database savings.
Disabling caching for reports with low hit ratios.
Increasing caching efficiency by building OLAP cubes augmented with frequently drilled-to objects.
Operations analysis
The Operations Analysis folder in Enterprise Manager contains the following analysis areas, each with its own reports:
Concurrency analysis (including user/session analysis), page 303
Data load analysis, page 305
Report processing analysis, page 305
Resource utilization analysis (including top consumers), page 308
The total number of sessions logged in via Web/Desktop/other clients
The total number of sessions with active jobs
Reports in this analysis area use an attribute from the Time hierarchy as the primary attribute for analysis, as well as various metrics representing answers to administrator questions.
10. Concurrency by Hour of Day: Provides the number of concurrent active users and the number of concurrent sessions by hour of day. This report is prompted on time.
11. Daily Session Concurrency Analysis: Uses various metrics to analyze the concurrent active sessions over time. This report is prompted on time and on session duration.
12. Session Duration Analysis: Uses various metrics to analyze the duration of user sessions, over time. This report is prompted on time and on session duration.
13. Daily User Connection Concurrency Analysis: Uses various metrics to analyze the concurrency of user sessions, over time. This report is prompted on time and on session duration.
14. Minute Level User Concurrency During Peak Hours: Provides a minute-level graph for the active users and sessions during the peak hours of the day. This report is prompted on time.
14.1 Top n Maximum User Concurrency Hours - report as filter: Provides a list of the top N hours in terms of maximum user concurrency. This report is prompted on time, session duration, and number of hours to be returned.
Sample report: Daily Session Concurrency Analysis

This report uses various metrics to analyze the concurrent active sessions over time. The results summarize the load on and usage of Intelligence Server. This report contains prompts on time and on the minimum and maximum duration of the sessions to be analyzed.

Usage scenario

Administrators can use this report to analyze the total number of sessions on the MicroStrategy system on any given day. They can also see the average, minimum, and maximum number of sessions open at any given minute during a day.
Report details

Drill path: The recommended drill path is along the Time hierarchy.

Other options: To restrict the scope of analysis to a specific MicroStrategy client such as Desktop, you can add an additional filter or page-by on the Connection Source attribute using the appropriate client. Alternatively, you can add a filter or page-by on any attribute from the Session hierarchy (except the Session attribute itself).
Whether the time-out setting for user sessions is appropriate. Analysis can help you configure the User Session Idle Time setting. (From Desktop or Control Center, right-click a project source and select Configure MicroStrategy Intelligence Server, expand Governing, and select General.)
Whether caching should be enabled for prompted reports. Analysis can help you configure the Enable Prompted Report Caching setting. (In Desktop, from the Folder List, expand Administration, right-click the project name, select Project Configuration, expand Caching, and select Reports (advanced).)
While insights into such questions usually involve gathering data from multiple reports spanning multiple analysis areas, the Enterprise Manager reports within the Report Processing Analysis area provide a targeted examination to assess server and project governing.
1. Weekly Summary - Activity Analysis: Provides a comprehensive weekly summary of project activity. This report is prompted on the projects to be summarized.
2. Report Execution Analysis Working Set: Analyzes report execution by the time each job takes to execute. This report is prompted on time and on the projects to be analyzed.
3. Document Execution Analysis Working Set: Provides a comprehensive analysis of document execution by the time the jobs take to execute. This report is prompted on time.
4. Report Error Analysis: Provides a comprehensive analysis of jobs that do not run to completion. This report is prompted on time.
4.1 Report Job Time Out Analysis: Provides information about which and how many report executions have exceeded the execution time-out limit. This report is prompted on time.
4.2 Job Cancellation Trend: Provides the number of cancelled and non-cancelled jobs, over time. This report is prompted on time.
4.3 Job Cancellation Breakdown: Provides the number of cancelled report jobs, with and without SQL execution. This report is prompted on time and on type of report job.
5. DB Result Rows by Report: Provides the number of jobs, the number of database result rows, and the average elapsed report execution duration, per report and project. This report is prompted on time.
Analyzes the report jobs that return no data. This report is prompted on time and on type of report job.
5.2 Post-Report Execution Activity: Analyzes user activity after executing each report. This report is prompted on time and on type of report job.
6. Top 10 Longest Executing Reports: Provides the number of jobs and the average elapsed report execution duration for the ten longest executing reports. This report is prompted on time.
Sample report: Database Result Rows by Report

This Enterprise Manager report can help you understand the effect on load and performance of those user reports that did not result in cache hits. This report contains a prompt on Time.

Usage scenarios

You can use this report to identify those user reports that have a high Average Elapsed Time (Average Elapsed Duration per Job) and are requested frequently (RP Number of Jobs). You can then consider a strategy to ensure that these user reports have a high cache hit ratio in the future.

Total Database Result Rows (RP Number of Database Result Rows) provides a good approximate measure of the size of report caches. This can give you insight into tuning report-related project settings. To make changes to the project settings, in Desktop, from the Folder List, expand Administration, right-click the project name, select Project Configuration, expand Caching, and select Reports (general). For detailed information about these settings, click Help.

Total Database Result Rows also provides a measure of the data returned by the database to Intelligence Server for post-processing. Average Execution Time provides a measure of the time taken to execute a report on the warehouse data source.
Report details

Additional options: To restrict your analysis to a given computer, a connection source, a user session, and so on, add any attribute from the Session folder to this Enterprise Manager report. For example, to restrict analysis to Web reports, add the Connection Source attribute to the page-by axis. For detailed information about page-by, see the MicroStrategy Basic Reporting Guide.
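The tuning idea behind this report can be sketched in a few lines of Python. This is a conceptual illustration only, not MicroStrategy code; the report names, thresholds, and the `stats` structure below are hypothetical stand-ins for the report's RP Number of Jobs and elapsed-duration metrics.

```python
# Conceptual sketch (not a product API): flag reports that run often and
# take long on average, making them good candidates for a caching strategy.
# Sample rows and thresholds are hypothetical.

def caching_candidates(stats, min_jobs=50, min_avg_seconds=30):
    """Return report names with high job counts and high average elapsed time."""
    return sorted(
        name
        for name, (num_jobs, total_elapsed) in stats.items()
        if num_jobs >= min_jobs and total_elapsed / num_jobs >= min_avg_seconds
    )

# stats maps report name -> (number of jobs, total elapsed seconds)
stats = {
    "Revenue by Region": (120, 7200),   # avg 60 s, run often -> candidate
    "Daily Flash":       (300, 1500),   # avg 5 s -> already fast
    "Ad Hoc Scratch":    (3,   900),    # avg 300 s, but rarely run
}
print(caching_candidates(stats))  # ['Revenue by Region']
```

The same pattern applies if you rank by total database result rows instead of elapsed time, which approximates cache size rather than execution cost.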
Reports in this analysis area prompt you to select a time period to analyze, and use various metrics representing answers to administrator requirements. The Top Consumers folder contains shortcuts to reports elsewhere in Enterprise Manager. These reports, taken together, indicate what users and reports are top consumers of system resources.
- 30. Execution cycle breakdown: Provides a daily breakdown of the time taken by each of the four steps in the report execution cycle: queue, SQL generation, SQL execution, and Analytical Engine. This report is prompted on time.
- 30.1 Queue time breakdown: Analyzes each report by the amount of time it spends in each part of the queue. This report is prompted on time.
- 30.2 Queue to Exec time ratios by Server Processing Unit: Provides a breakdown of queue time and execution time for each report job step. This report is prompted on time.
- 30.3 Effect of job prioritization on queue time: Provides an analysis of the effect of job prioritization on the queue time and execution time of reports. This report is prompted on time.
- 31. Activity Analysis by Weekday/Hour: Analyzes Intelligence Server activity on an hourly basis, based on one of a number of metrics. This report is prompted on time and on a method of analysis.
- 32. Peak Time Periods: Provides the number of jobs and the average queue and execution durations per job, on an hourly basis. This report is prompted on time.
- 33. Server Activity Analysis Summary: Analyzes the daily usage of each Intelligence Server, based on a number of metrics. This report is prompted on time and on methods of analysis.
- 33.1 Scheduled Report Load on Intelligence Server: Analyzes the duration and CPU usage of all scheduled jobs. This report is prompted on time.
- 33.2 Subscribed Report Load on Intelligence Server: Analyzes the duration and CPU usage of all Narrowcast Server subscription jobs. This report is prompted on time.
- 33.3.1 Web Access Trends: Analyzes the number of jobs run from MicroStrategy Web. This report is prompted on time.
- 33.3.2 Web and Non-Web Usage: Compares the server usage of Web and non-Web users. This report is prompted on time.
- 33.3.3 Web Usage Statistics: Provides the number of Web users, the average number of jobs per Web user, and the average report execution time per job for Web users. This report is prompted on time.
Note: This report is deprecated. Use "4.2 Job Cancellation Trend" instead.
Top Consumers
- 43. Top 10 Database Tables: Lists the top ten most accessed database tables per project, and how many jobs access those tables. This report is prompted on time.
- 6. Top 10 Longest Executing Reports: Provides the number of jobs and the average elapsed report execution duration for the ten longest executing reports. This report is prompted on time.
- 62. Top 10 Reports: Analyzes the server load for the ten most-executed reports. This report is prompted on time.
- 80. Top N users: Determines the top N users, based on a metric of your choice. This report is prompted on time and on analysis method.
- 91. Popular reports in a user's User Group: Lists the top N most-executed reports in a user's User Group. This report is prompted on user, time, and number of reports.
Sample report: Scheduled Report Load on Intelligence Server

This report provides a comprehensive analysis of the impact scheduled jobs have on the Intelligence Server machines in your system. This report contains a prompt on time.

Usage scenario

You can use this report to understand the daily impact of scheduled reports on each Intelligence Server machine. Impacts can be measured with metrics such as the total server report jobs and the total time spent in Intelligence Server. You can also use this report to study which user reports are executed as part of a schedule. By viewing which scheduled jobs have errors, you can quickly take appropriate action.

Report details

This report contains several attributes in the Report Objects window that are not in the report grid. If you have a license to use MicroStrategy OLAP Services, you can move these attributes from the Report Objects window to the report grid without re-executing the report. For detailed information about OLAP Services, see the MicroStrategy Basic Reporting Guide. To find out which users in your system have scheduled the largest number of jobs, include the User attribute in this report.
To understand which schedules have been mapped to a given report in a project, include the Report attribute in this report. To find out which of your scheduled reports had errors, include the Error Indicator attribute in this report.
Performance analysis
Administrators can use this analysis area to understand the impact of server and project governing settings and usage patterns on the system. The Performance Analysis folder in Enterprise Manager contains reports that measure such metrics as average job execution time and other job performance trends, cache analysis, longest executing reports, and so on. Two reports from this analysis area are presented in detail below. These sample reports have been selected as representative reports of the analysis area; the details and options suggested for the sample reports can often be used on other reports within the same or related analysis areas.
- 40. System Performance Trends: Analyzes system performance over time based on your choice of metrics. This report is prompted on time and on methods of analysis.
- 40.1 Average Execution Time vs. Number of Jobs per User: Provides a daily comparison of the number of jobs per user and the average execution time of the jobs. This report is prompted on time.
- 40.2 Average Execution Time vs. Number of Sessions: Provides a daily comparison of the number of reporting sessions and the average execution time per job. This report is prompted on time.
- 41. Cache Analysis: Analyzes the effectiveness of caching on the system. This report is prompted on time and on the type of report.
- 42. Job Performance Trend: Analyzes daily and weekly trends in report requests and job performance. This report is prompted on time.
- 43. Top 10 Database Tables: Lists the top ten most accessed database tables per project, and how many jobs access those tables. This report is prompted on time.
- 44. Warehouse Tables Accessed: Provides a count of the warehouse tables and columns accessed, by type of SQL clause. This report is prompted on time.
Sample report: Cache Analysis

This report provides a comprehensive analysis of report caching within the system. A good caching strategy can significantly improve system performance. This report contains prompts on Time and on the job type you want to analyze, and allows you to select the number of top report jobs for which you want to see data.

Usage scenario

You can use this report to analyze the cache hit ratios for certain reports; typically, these are the most frequently requested or most resource-intensive reports. You can also determine whether prompted reports should be set up to create a cache, by analyzing whether or not prompted reports are hitting the cache regularly.

Report details

To analyze the cache hit ratios for element load jobs, select Element browsing job at the prompt. Be sure to remove the Report attribute from the report, since Element browsing jobs are ad hoc and do not map to any existing report in the metadata. This can give you insight into tuning element-related project settings. To make changes to the project settings, in Desktop, from the Folder List, expand Administration, right-click the project name, select Project Configuration, expand Caching, and select Elements. For detailed information about these settings, click Help.

To analyze the cache hit ratios for prompted jobs, select Prompted jobs at the prompt. This can give you insight into tuning advanced report-related project settings. To make changes to the project settings, in Desktop, from the Folder List, expand Administration, right-click the project name, select Project Configuration, expand Caching, and select Reports (advanced). For detailed information about these settings, click Help.
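The cache hit ratio this report surfaces is simply cache hits divided by total report executions. The sketch below is an illustration of that arithmetic only, not product code; the example counts are hypothetical.

```python
# Conceptual sketch: the cache hit ratio behind the Cache Analysis report.
# A ratio near 1.0 means most executions were served from cache.

def cache_hit_ratio(cache_hits, total_jobs):
    """Fraction of report executions served from cache."""
    if total_jobs == 0:
        return 0.0
    return cache_hits / total_jobs

# A prompted report that hits its cache 15 times out of 60 runs:
ratio = cache_hit_ratio(15, 60)
print(f"{ratio:.0%}")  # 25%
```

A persistently low ratio for a frequently run prompted report suggests revisiting the Reports (advanced) caching settings described above.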
Sample report: Warehouse Tables Accessed

This report provides a count of the number of warehouse tables and columns accessed in various SQL clauses. This report contains a prompt on Time.
Usage scenario

You can use this report to gain insights into database tuning by determining which warehouse tables and columns are accessed in the various SQL clauses, such as SELECT, WHERE, and so on. This information can help you determine where database tuning can be adjusted to improve the overall query and reporting performance of your MicroStrategy project. For example, columns that are frequently accessed in the WHERE clause are good candidates for indexing.
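The indexing heuristic above amounts to a frequency count over (column, clause) pairs. The sketch below is illustrative only; the table and column names are hypothetical, and the data structure is a stand-in for what this report tabulates.

```python
# Illustrative sketch (hypothetical data, not the product's schema):
# tally how often each column appears in WHERE clauses. Frequently
# filtered columns are candidates for indexing.

from collections import Counter

def where_clause_hotspots(accesses, top=3):
    """accesses: iterable of (table.column, sql_clause) pairs."""
    counts = Counter(col for col, clause in accesses if clause == "WHERE")
    return counts.most_common(top)

accesses = [
    ("FACT_SALES.day_id", "WHERE"),
    ("FACT_SALES.day_id", "WHERE"),
    ("FACT_SALES.region_id", "WHERE"),
    ("FACT_SALES.revenue", "SELECT"),   # SELECT-only column, not counted
]
print(where_clause_hotspots(accesses))
# [('FACT_SALES.day_id', 2), ('FACT_SALES.region_id', 1)]
```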
Project analysis
Enterprise Manager reports within this analysis area use the Project attribute to analyze various metrics related to project use and Intelligence Server use. Administrators can use these reports to analyze project usage trends and understand how a project grows over time. The reports can help you determine which metadata objects are used and how often, so you can take appropriate actions. The Project Analysis folder in Enterprise Manager contains the following analysis areas, each with its own reports:
- Object properties analysis, page 313
- Object usage analysis, page 315
- Project development trends, page 317
These areas are described below, and one report is presented in detail. This sample report has been selected as a representative report of the analysis area; the details and options suggested for the sample report can often be used on other reports within the same or related analysis areas.
- 50.1 Attribute Form Properties_deprecated
- 50.2 Attribute Properties: Lists the properties of all attributes in all monitored projects. This report is paged by project.
- 50.3 Column Properties: Lists the properties of all columns in all monitored projects. This report is paged by project.
- 50.4 Fact Properties: Lists the properties of all facts in all monitored projects. This report is paged by project.
- 50.5 Hierarchy Properties: Lists the properties of all hierarchies in all monitored projects. This report is paged by project.
- 50.6 Logical Table Properties: Lists the properties of all logical tables in all monitored projects. This report is paged by project.
- 50.7 Transformation Properties: Lists the properties of all transformations in all monitored projects. This report is paged by project.
- 51.1 Consolidation Properties: Lists the properties of all consolidations in all monitored projects. This report is paged by project.
- 51.2 Custom Group Properties: Lists the properties of all custom groups in all monitored projects. This report is paged by project.
- 51.3 Document Properties: Lists the properties of all documents in all monitored projects. This report is paged by project.
- 51.4 Filter Properties: Lists the properties of all filters in all monitored projects. This report is paged by project.
- 51.5 Metric Properties: Lists the properties of all metrics in all monitored projects. This report is paged by project.
- 51.6 Prompt Properties: Lists the properties of all prompts in all monitored projects. This report is paged by project.
- 51.7 Report Properties: Lists the properties of all reports in all monitored projects. This report is paged by project.
- 51.8 Template Properties: Lists the properties of all templates in all monitored projects. This report is paged by project.
- 52.1 DB Instance Properties: Lists the properties of all database instances in all monitored Intelligence Servers. This report is paged by Intelligence Server.
- 52.2 Event Properties: Lists the properties of all events in all monitored Intelligence Servers. This report is paged by Intelligence Server.
- 52.3 Intelligence Server Properties: Lists the properties of all monitored Intelligence Servers.
- 52.4 Project Properties: Lists the properties of all projects in all monitored Intelligence Servers. This report is paged by Intelligence Server.
- 52.5 Schedule Properties: Lists the properties of all schedules in all monitored Intelligence Servers. This report is paged by Intelligence Server.
- 52.6 User Group Properties: Lists the properties of all user groups in all monitored Intelligence Servers. This report is paged by Intelligence Server.
- 52.7 User Properties: Lists the properties of all users in all monitored Intelligence Servers. This report is paged by Intelligence Server.
- 54. User_Security_Filter_Relations: Lists all users and their associated security filters in all monitored Intelligence Servers. This report is paged by Intelligence Server.
- 60.1 Report Statistics: Lists all reports that have not been executed since the specified date, and provides the number of times they have been executed. This report is prompted on time.
- Lists all templates that have not been used since the specified date, and provides the number of times they have been used. This report is prompted on time.
- Lists all schedules that have not been used since the specified date, and provides the number of times they have been used. This report is prompted on time.
- 61.2 Server Definition Statistics: Lists all server definitions that have not been used since the specified date, and provides the number of times they have been used. This report is prompted on time.
- 62. Top 10 Reports: Analyzes the server load for the ten most-executed reports. This report is prompted on time.
- 63. Report Drilling Analysis: Provides information about how many times a report has been executed and how many times users have drilled from that report. This report is prompted on time.
- Lists the users and reports associated with each schedule.
- 64.2 Schedule_Document_User_Relations: Lists the users and documents associated with each schedule.
- 65. Report Drilling Patterns: For a particular report, lists the objects that have been drilled from and drilled to from four-tier clients such as MicroStrategy Web. This report is prompted on time.
Sample report: Report Drilling Patterns

This report lists the objects in each report that have been drilled from and drilled to in four-tier clients such as MicroStrategy Web. The report is paged by project and by report. It prompts you for the dates to be analyzed. Drilling statistics are only available from a four-tier client such as MicroStrategy Web.

Usage scenario

The Report Drilling Patterns report shows you what users want to see, by displaying the most commonly drilled-to objects. This information allows you to determine which attributes to include in a report's list of report objects. Because SQL is not generated for OLAP Services drilling, you can use this report to optimize your OLAP Services implementation.

Sample report display
Report details

Additional options: Use this report in conjunction with other statistics-type reports, which display similar usage information about individual objects such as templates, schedules, and so on.
- Provides a count of all types of application objects (reports, filters, metrics, and so on) in all monitored projects, by owner. This report is paged by project.
- Provides a count of all types of configuration objects (schedules, users, and so on) in all monitored Intelligence Servers, by owner. This report is paged by Intelligence Server.
- Provides a count of all types of schema objects (facts, attributes, and so on) in all monitored projects, by owner. This report is paged by project.
- Shows the weekly trends per project of users, sessions, and requests. This report is prompted on time.
- Provides a graph of new application objects created over a specified period. This report is prompted on time.
Real-time analysis
Several administrative questions require near real-time information about project and server activity. For example:
- When a user contacts the administrator to troubleshoot an error received when executing a report, the administrator needs a list of recent errors and error messages to investigate the problem.
- Administrators often want to ensure that current throughput and response times observed by users are meeting expectations.
- Schedules are typically used to update caches during a batch window. The administrator may want to monitor the system to ensure that scheduled jobs have finished successfully.
Such requirements as those listed above focus on a relatively small snapshot of recent activity on the system. Reports that provide answers to such questions must be refreshed without requiring frequent updates using the Enterprise Manager data loader. The Real-time Analysis reports provide details of current Intelligence Server activity. The data used in these reports is no more than 24 hours old. If a successful data load has finished in the last 24 hours, data from that data load is used; otherwise, the reports work directly with data from the statistics tables. The reports in this analysis area use Freeform SQL and provide targeted administrative reporting features that complement the historical reporting features in the Operations, Performance, Project, and User Analysis areas.
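The fallback rule described above can be expressed as a small decision function. This is a sketch of the stated behavior only; the function, timestamps, and source labels are hypothetical, not MicroStrategy identifiers.

```python
# Sketch of the Real-time Analysis data-source rule: use the last
# Enterprise Manager data load if it finished within 24 hours;
# otherwise read directly from the statistics tables.

from datetime import datetime, timedelta

def realtime_source(last_load_finished, now):
    """Pick the data source for a Real-time Analysis report."""
    if last_load_finished and now - last_load_finished <= timedelta(hours=24):
        return "enterprise_manager_lookup_tables"
    return "statistics_tables"

now = datetime(2008, 6, 2, 9, 0)
print(realtime_source(datetime(2008, 6, 2, 2, 0), now))   # load 7 h ago
print(realtime_source(datetime(2008, 5, 30, 2, 0), now))  # load 3+ days ago
```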
- 101. Recently Completed Jobs: Provides details about all jobs that have completed since the specified date. This report is prompted on time.
- 102. Recent Sessions, Users: Provides details about recent user connection activity. This report is prompted on time.
- 103. Recently Completed Scheduled Jobs: Provides details about all recently completed scheduled jobs. This report is prompted on time.
User analysis
Reports in this analysis area contain the User attribute as their primary attribute for analysis, along with various metrics that answer an administrator's questions about user activity and preferences. The User Analysis folder in Enterprise Manager contains the following analysis areas, each with its own reports:
- User activity analysis, page 319
These areas are described below, and two reports are presented in detail. These sample reports have been selected as representative reports of the analysis area; the details and options suggested for the sample reports can often be used on other reports within the same or related analysis areas.
- 80. Top N users: Determines the top N users, based on a metric of your choice. This report is prompted on time and on analysis method.
- 81. Activity by User: Provides summary information of user reporting activity by user and project. This report is prompted on time.
- 81.1 Ad-hoc job activity by User: Provides information about how many ad-hoc jobs are being run, and the composition of ad-hoc jobs. This report is prompted on time.
- 81.1.1 Drilling Activity by User: Provides information about how many jobs each user has run, and how many of those jobs were drilling. This report is prompted on time.
- 81.2 DB Result Rows by User: Provides the number of jobs, the number of database result rows, and the average elapsed report execution duration per user and project. This report is prompted on time.
- Lists all users that have not logged in since the specified date, and provides information about their connections. This report is prompted on time.
Sample report: Activity by User

This report provides data on total elapsed report duration. It also provides counts of cancelled jobs, non-cancelled jobs, and jobs that end with an error, as well as timed-out jobs, by user and by project. This report prompts the user for a time period for the analysis.

Usage scenario

You can use this report to gain insight into how reports are used, on a per-project basis, by all users. You can determine which users are wasting resources by repeatedly cancelling jobs, and determine which users run the most reports in a given project. You can also see where reporting errors originate.

Report details

Drill paths: To narrow the scope of your analysis to individual sessions, drill across from User to Session and keep the parent attribute. To identify the reports and documents that were executed by a given user during a session, drill across from Session to Report/Document.

Other options: To restrict your analysis to the most prolific users based on your chosen criteria, add the report Top N Users as a filter to this Enterprise Manager report. To determine which projects a given user is using, add a filter on User. To restrict your analysis to a given machine or connection source, add any attribute from the Session folder to this Enterprise Manager report.

Sample report: Top N Users

This report displays the top N users based on the user activities you select. The report prompts you for user activities and the number of users you want returned.
Usage scenario

You can use this report to learn the top users in a number of analysis areas related to user activity, including:
- Which users log in to Intelligence Server most often. (Select the Number of Sessions metric.)
- Which users are connected for the longest duration. (Select the Connection Duration metric.)
- Which users run the most report jobs. (Select the RP Number of Jobs metric.)
Report details

Add your own metrics to this report for user activity analysis that focuses on your environment's requirements. Use this report as a filter in custom reports that you create. For example, the Activity by User report returns the total elapsed time for report execution by user and project, as well as the number of cancelled and non-cancelled jobs. You can add this Top N Users report as a filter to the Activity by User report, to narrow the results to the top ten users responsible for the highest number of cancelled jobs. This allows you to analyze overall user activity and determine whether these users are cancelling jobs legitimately.
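The "Top N as a filter" pattern described above can be sketched as two steps: rank users by a chosen metric, then restrict the activity listing to that set. This is a conceptual illustration with hypothetical user names and counts, not product code.

```python
# Hypothetical sketch of using Top N Users as a filter: rank users by
# cancelled jobs, then restrict an activity listing to that top set.

import heapq

def top_n_users(cancelled_by_user, n=10):
    """Return the set of the n users with the most cancelled jobs."""
    return set(heapq.nlargest(n, cancelled_by_user, key=cancelled_by_user.get))

cancelled = {"alice": 42, "bob": 3, "carol": 17, "dave": 0}
activity = [("alice", "Revenue"), ("bob", "Flash"), ("carol", "Costs")]

top = top_n_users(cancelled, n=2)                     # {'alice', 'carol'}
filtered = [row for row in activity if row[0] in top]  # the Top N filter step
print(filtered)  # [('alice', 'Revenue'), ('carol', 'Costs')]
```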
- 90. List User Groups to which users belong: Lists all User Groups to which the specified users belong. This report is prompted on user.
- 91. Popular reports in a user's User Group: Lists the top N most-executed reports in a user's User Group. This report is prompted on user, time, and number of reports.
6. Introduction
This chapter explains the benefits and functionality of various MicroStrategy administrative features. These features ensure that the system runs smoothly and efficiently. They include:
- Setting Project Modes, page 323
- Clustering multiple machines in the system, page 329
- Tuning the system for best performance, page 368
- How Narrowcast Server affects Intelligence Server, page 447
Intelligence Server operation for other projects. The tasks that are allowed to occur depend on the job or jobs that are required for that task. You can use the Project Monitor or the Schedule Administration Tasks dialog box to change a project's mode. The different project modes are:
- Loaded, page 324
- Unloaded, page 324
- Request Idle, page 325
- Execution Idle, page 325
- Warehouse Execution Idle, page 326
- Full Idle, page 327
- Partial Idle, page 327
For details on how to set a project to different project modes, see the MicroStrategy Desktop online help. For example scenarios where the different project idle modes can help to support project and data warehouse maintenance tasks, see Project and data warehouse maintenance example scenarios, page 328.
Loaded
A project in Loaded mode allows all job requests and executions as is expected of a fully functioning and accessible project. When a project is loaded, Intelligence Server accepts user requests from the clients for the project and submits jobs to the data warehouse for processing. The communication is open between the user, Intelligence Server, and the data warehouse.
Unloaded
A project in Unloaded mode is completely offline to users and removed from Intelligence Server memory. Nothing can be done in the project until it is loaded again.
Unloading a project can be helpful when an administrator has changed project configuration settings that do not affect run-time execution and that are to be applied to the project at a later time. The administrator can unload the project first, and then reload it when it is time to apply the project configuration settings.
Request Idle
Request Idle mode helps to achieve a graceful shutdown of the project, rather than moving a project from Loaded mode directly to Full Idle mode. In this mode, Intelligence Server:
- Stops accepting new user requests from the clients for the project.
- Completes jobs that are already being processed. If a user requested that results be sent to their History List, the results are available in the user's inbox after the project is resumed.
Setting a project to Request Idle can be helpful to manage server load for projects on different clusters. For example, in a cluster with two nodes named Node1 and Node2, the administrator wants to redirect load temporarily to the project on Node2. The administrator must first set the project on Node1 to Request Idle. This allows existing requests to finish execution for the project on Node1, and then all new load is handled by the project on Node2.
Execution Idle
A project in Execution Idle mode is ideal for Intelligence Server maintenance because this mode restricts users in the project from running any job in Intelligence Server. In this mode, Intelligence Server:
- Stops executing all new and currently executing jobs and, in most cases, places them in the job queue. This includes jobs that require SQL to be submitted to the data warehouse, as well as jobs that are executed within Intelligence Server, such as answering prompts. If a project is idled while Intelligence Server is in the process of fetching query results from the data warehouse for a job, that job is cancelled instead of being placed in the job queue. When the project is resumed, an error message is placed in the user's History List. The user can click the message to resubmit the job request.
- Allows users to continue to request jobs, but execution is not allowed and the jobs are placed in the job queue. Jobs in the job queue are displayed as "Waiting for project" in the Job Monitor. When the project is resumed, Intelligence Server resumes executing the jobs in the job queue.

This mode allows you to perform maintenance tasks for the project. For example, you can still view the different project administration monitors, create reports, create attributes, and so on. However, tasks such as element browsing, exporting, and running reports that are not cached are not allowed.
Warehouse Execution Idle

When the project is resumed, an error message is placed in the user's History List. The user can click the message to resubmit the job request. In this mode, Intelligence Server completes any jobs that do not require SQL to be executed against the data warehouse.
This mode allows you to perform maintenance tasks on the data warehouse while users continue to access non-database dependent functionality. For example, users can run cached reports, but they cannot drill if that drilling requires additional SQL to be submitted to the data warehouse. Users can also export reports and documents in the project.
Full Idle
In this mode, Intelligence Server does not accept any new user requests, and currently active requests are canceled. When the project is resumed, Intelligence Server does not resubmit the canceled jobs, and it places an error message in the user's History List. The user can click the message to resubmit the request. This mode allows you to stop all Intelligence Server and data warehouse processing for a project. However, the project still remains in Intelligence Server memory.
Partial Idle
In this mode, Intelligence Server does not accept any new user requests and currently active requests that require SQL to be submitted to the data warehouse are queued until the project is resumed. All other currently active requests are completed.
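The idle modes above can be summarized as a small lookup table. The labels below are our own rough simplifications of the behavior described in this section, not a product API; consult the mode descriptions above for the exact semantics.

```python
# Rough summary (our labels, not MicroStrategy identifiers) of how each
# project mode treats new user requests and jobs that need warehouse SQL.

MODE_BEHAVIOR = {
    "Loaded":                   {"new_requests": "accepted", "warehouse_sql": "executed"},
    "Unloaded":                 {"new_requests": "rejected", "warehouse_sql": "none"},
    "Request Idle":             {"new_requests": "rejected", "warehouse_sql": "in-flight jobs finish"},
    "Execution Idle":           {"new_requests": "queued",   "warehouse_sql": "queued"},
    "Warehouse Execution Idle": {"new_requests": "accepted", "warehouse_sql": "queued"},
    "Full Idle":                {"new_requests": "rejected", "warehouse_sql": "canceled"},
    "Partial Idle":             {"new_requests": "rejected", "warehouse_sql": "queued"},
}

print(MODE_BEHAVIOR["Execution Idle"]["warehouse_sql"])  # queued
```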
Simplified management: Clustering simplifies the management of large or rapidly growing systems.
Moreover, clustering enables you to implement the following strategies in your business intelligence environment, all of which are discussed in this chapter:
- Failover support (see Failover support, page 330)
- Load balancing (see Load balancing, page 330)
- Project distribution and project failover (see Distributing projects across nodes in a cluster, page 354)
Failover support
Failover support ensures that a business intelligence system remains available for use in the event of an application or hardware failure. Clustering provides failover support in two ways:
- Load redistribution: When a node fails, the work for which it is responsible is directed to another node or set of nodes. MicroStrategy Web or Web Universal users can be automatically connected to another node when a node fails. To implement automatic load redistribution for these users, on the Web Administrator page, under Web Server select Security, and in the Login area select Allow Automatic Login if Session is Lost.
- Request recovery: When a node fails, the system attempts to reconnect MicroStrategy Web or Web Universal users with queued or processing requests to another node. Users must log in again to be authenticated on the new node. The user is prompted to resubmit job requests.
Load balancing
Load balancing is a strategy aimed at achieving even distribution of MicroStrategy Web or Web Universal user sessions across MicroStrategy Intelligence Servers, so that no single machine is overwhelmed. This strategy is particularly
valuable when it is difficult to predict the number of requests a server will receive. MicroStrategy achieves four-tier load balancing by incorporating load balancers into the MicroStrategy Web products. Load is calculated as the number of user sessions connected to a node. The load balancers collect information on the number of user sessions each node is carrying. Based on this information at the time a user logs in to a project, MicroStrategy Web or Web Universal connects them to the Intelligence Server node that is carrying the lightest session load. All requests by that user are routed to the node to which they are connected until the user disconnects from the MicroStrategy Web product.
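The session-count strategy just described reduces to "pick the node with the fewest sessions among the nodes hosting the project". The sketch below illustrates that selection rule only; the node names and session counts are hypothetical, and this is not how the product's load balancer is implemented internally.

```python
# Conceptual sketch of session-count load balancing: at login, connect
# the user to the eligible node carrying the lightest session load.

def pick_node(session_counts, nodes_with_project):
    """Return the eligible node with the fewest user sessions."""
    eligible = [n for n in nodes_with_project if n in session_counts]
    return min(eligible, key=lambda n: session_counts[n])

sessions = {"node1": 48, "node2": 12, "node3": 30}
print(pick_node(sessions, ["node1", "node2", "node3"]))  # node2
```

Note that, as the text states, the choice is made once at login; all of that user's requests then stay on the chosen node until the session ends.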
The following steps describe a typical job process in a clustered, four-tier environment. Refer to the numbered steps in the report distribution flow diagram above.

1. MicroStrategy Web or Web Universal users log in to a project and request reports from their Web browsers.
2. A third-party IP distribution tool, such as Cisco Local Router, Microsoft Network Load Balancing, or Microsoft Windows Load Balancing Service, distributes the user connections from the Web clients among the Web servers.
3. The MicroStrategy Web product load balancers on each Web server collect load information from each cluster node, and then connect the users to the nodes that carry the lightest loads and run the project that the user logged in to. All report requests are then processed by the nodes to which the users are connected.
4. The Intelligence Server nodes receive the requests and process them. In addition, the nodes communicate with each other to maintain metadata synchronization and cache accessibility across nodes.
5. The nodes send the requests to the warehouse as queries.
Query flow in a clustered environment is identical to a standard query flow in an unclustered environment (see Processing jobs, page 27), with two exceptions:
- Report caches: When a query is submitted by a user, if a cached report is not available locally, the server retrieves the cache (if it exists) from another node in the cluster. (For an introduction to caching, see Report caches, page 171.)
- History Lists: Each user's History List, which is held in memory by each node in the cluster, contains direct references to the relevant cache files. Accessing a report through the History List bypasses many of the report execution steps, for greater efficiency. (For an introduction to History Lists, see Managing report caches: History List, page 221.)
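The report-cache exception above is a two-level lookup: local cache first, then the caches on peer nodes, and only then the warehouse. The sketch below illustrates that fallback order with hypothetical data structures; it is not the product's cache implementation.

```python
# Illustrative sketch of the clustered cache lookup: check the local
# node's cache, fall back to peer nodes' caches, and only query the
# warehouse when no node holds a cache for the report.

def fetch_report(report, local_cache, peer_caches, run_query):
    if report in local_cache:
        return local_cache[report]
    for peer in peer_caches:            # retrieve the cache from another node
        if report in peer:
            local_cache[report] = peer[report]
            return peer[report]
    result = run_query(report)          # no cache anywhere: hit the warehouse
    local_cache[report] = result
    return result

local = {}
peers = [{"Revenue": "cached-rows"}]
print(fetch_report("Revenue", local, peers, lambda r: "fresh-rows"))  # cached-rows
print(fetch_report("Costs", local, peers, lambda r: "fresh-rows"))    # fresh-rows
```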
Software prerequisites
- The computers that will be clustered must be running the same version of the same operating system. For example, you cannot cluster two machines when one runs Windows 2000 and the other runs Windows 2003.
- The RDBMS containing the metadata and warehouse instances should already be set up on machines separate from the Intelligence Server nodes.
- Server definitions hold Intelligence Server configuration information. All nodes in a cluster should share a single server definition, to simplify administration and to ensure that all nodes have the same governing settings. Server definitions can be modified from Desktop through the Intelligence Server Configuration Editor and the Project Configuration Editor. (In Desktop, click Help for steps to access and use these editors.)
- Data Source Names (DSNs) pointing to the metadata and warehouse databases must be created on all nodes, with identical DSN names.
- Information on the clustered configuration is stored in the metadata, so the computers that will be clustered must use the same metadata repository. The metadata may be created from any of the nodes, and only needs to be set up once. When you create or modify the server definition in the MicroStrategy Configuration Wizard, you can specify either a new or an existing metadata repository for Intelligence Server to use.
- The computers that will be clustered must each have the same version of Intelligence Server installed. (To verify: on each Intelligence Server machine, in Desktop, from the Help menu, select About MicroStrategy Desktop.)
- When Intelligence Server is installed, the last step is to choose a user identity under which the service runs. To run a clustered configuration, this user must be a domain account that has a trust relationship with each of the computers in the cluster, so that resources can be shared across the network. The account under which the Intelligence Server service runs must have full control of the cache and History List folders (described below) on all nodes; otherwise, Intelligence Server cannot create and access cache and History List files.
- The computers that will be clustered must have the same intra-cluster communication settings. (To set this: on each Intelligence Server machine, in Desktop, right-click the project source and select Configure MicroStrategy Intelligence Server. The MicroStrategy Intelligence Server Configuration Editor opens. Under the Server definition category, select Communications.)
- You must have purchased a license that allows clustering. To verify, use the License Manager tool and confirm that the Clustering feature is available for MicroStrategy Intelligence Server. For more information on using License Manager, see Chapter 3, Managing Your Licenses.
- Ensure that each user's Regional Options settings are the same as the clustered system's Regional Options settings.
- You must have access to the Cluster Monitor in Desktop; therefore, you must have the Administration privilege to create a cluster.
(To access the Cluster Monitor: on the Intelligence Server machine, in Desktop, in the project source's Administration folder list, right-click Cluster Monitor, select View, then choose Details.)
To view clustered cache information, such as cache hit counts, see Monitoring the cluster and its nodes, page 362. Report cache settings are configured per project, and different projects may use different methods of report cache storage. Different projects may also use different locations for their cache repositories. However, History List settings are configured per server definition, so different projects cannot use different locations for their History List backups. You must configure either multiple local caches or a centralized cache for your cluster. The following sections describe the caches that are affected by clustering and present the procedures to configure caches across cluster nodes. For ease of administration, it is recommended that you use the same configuration method for both report caches and History Lists.
Synchronizing metadata
Metadata synchronization refers to the process of synchronizing object caches across all nodes in the cluster. For example, when a user connected to a node in a cluster modifies a metadata object, the cache for that object on other nodes is no longer valid. The node that processed the change notifies all other nodes in the cluster that the object changed.
The other nodes then delete the old object cache from memory. The next request for that object processed by another node in the cluster is executed against the metadata. In addition to server object caches, client object caches are also invalidated when a change occurs. When a user requests a changed object, the invalid client cache is not used and the request is processed against the server object cache. If the server object cache has not been refreshed with the changed object, the request is executed against the metadata.
In a clustered environment, each node within a cluster must share its cache with the other nodes, so all clustered machines have the latest cache information. For example, for a given project, report caches on each node that is running the project are shared among the other nodes in the cluster that are also running the project. Sharing caches among appropriate nodes eliminates the overhead associated with executing the same report on multiple nodes. Both memory and disk caches are shared among nodes. Cache sharing among nodes can be configured in two different ways: to use multiple local cache files, or one centralized cache file. Each method is briefly described and then followed by detailed descriptions of how they are configured.

Local caching: Each node hosts its own cache file directory, which must be shared as ClusterCaches so that other nodes can access it. ClusterCaches is the share name Intelligence Server looks for on other nodes to retrieve caches.

Centralized caching: All nodes have the cache file directory set to the same network location, \\<machine name>\<shared folder name>.
The following table summarizes the pros and cons of the report cache configurations:
Multiple local cache files
Pros: Allows faster read and write operations for cache files created by the local server. Faster backup of the cache lookup table.
Cons: The local cache files may be temporarily unavailable if an Intelligence Server is taken off the network or powered down.

Centralized cache file
Pros: Allows for an easier backup process. Allows all cache files to be accessible even if one Intelligence Server in a cluster goes offline. May better suit some security plans, because nodes using the network account access only one machine for files.
Cons: All cache operations must go over the network if the shared location is not on one of the Intelligence Server machines. Requires additional hardware if the shared location is not on an Intelligence Server.
MicroStrategy recommends storing the report caches locally if your users mostly do ad hoc reporting. In ad hoc reporting the report caches are not used very much, and the overhead incurred by creating the caches on a remote file server outweighs the low probability that a report cache will be used. Each cache configuration is explained below.

Multiple local cache files: In this cache configuration, each node maintains its own local cache file, and thus maintains its own cache index file. Each node's caches are accessible by other nodes in the cluster through the cache index file. This is illustrated in the diagram below.
For example, User A, who is connected to node 1, executes report A and thus creates report cache A. User B, who is connected to node 2, executes report A. Node 2 checks its own cache index file first. When it does not locate report cache A in its own cache index file, it checks the index file of other nodes in the cluster. Locating report cache A on node 1, it uses that cache to service the request, rather than executing the report against the warehouse. Centralized cache file: In this cache configuration, one shared centralized file is used among all nodes in the cluster. This can be located on one of the Intelligence Server machines or on a separate machine dedicated to serving the report caches. The History List messages and report caches for all the Intelligence Server machines in
the cluster are written to the same location. In this option, only one cache index file is maintained. This is illustrated in the diagram below.
For example, User A, who is connected to node 1, executes report A and thus creates report cache A, which is stored in a centralized file folder. User B, who is connected to node 2, executes report A. Node 2 checks the centralized cache index file for report cache A. Locating report cache A in the centralized file folder, it uses that cache to service the request, even though node 1 originally created the cache.
To configure cache sharing using multiple local cache files

1 Open the Project Configuration Editor for the project.
2 Select Caching, then select Reports (general).
3 In the Cache file directory box, type: .\Caches\ClusterCaches
This relative path tells the other clustered nodes to search for report caches along the following path on all machines in the cluster: <Intelligence Server Application Folder>\Caches\ClusterCaches
4 Click OK.
5 On each machine in the cluster, open Windows Explorer and navigate to the cache file folder. The default location is: C:\Program Files\MicroStrategy\Intelligence Server\Caches\Server Definition
6 Right-click the cache file folder, and select Sharing. The [Server Definition] Properties dialog box opens.
7 On the Sharing tab, select the Shared as option. In the Share Name box, delete the existing text and type ClusterCaches.
8 Click OK. After you have completed these steps, you can cluster the nodes using the Cluster Monitor.
To configure cache sharing using a centralized cache file
1 Open the Project Configuration Editor for the project.
2 Select the Caching: Reports (general) category.
3 In the Cache file directory box, type one of the following: \\<Machine Name>\<Shared Folder Name> or \\<IP Address>\<Shared Folder Name> For example, \\My_File_Server\My_Cache_Directory.
4 Click OK.
5 On the machine that is storing the centralized cache, create the file folder that will be used as the shared folder. The folder name must be identical to the name you specified earlier in the Cache file directory box (shown as Shared Folder Name above). Make sure this cache directory is writable by the network account under which Intelligence Server is running. Each MicroStrategy Intelligence Server creates its own subdirectory.
6 After you have completed these steps, you can cluster the nodes using the Cluster Monitor.
History List messages are stored in the History folder. (To locate this folder, in the Intelligence Server Configuration Editor, select Governing, then select History settings.) The History List location should be .\Inbox\ServerDefinition, where ServerDefinition is the name of the folder containing the History Lists. This folder must be shared with the share name ClusterInbox, since this is the share name Intelligence Server uses to look for History Lists on other nodes.
To join a node to a cluster

1 On the Intelligence Server machine, start MicroStrategy Desktop. In the project source's Administration folder list, click Cluster Monitor.
2 From the Administration menu, select Server, then choose Join cluster... The Cluster Manager dialog box opens.
3 Type the name of the machine running the Intelligence Server to which you wish to add this node, or click ... to browse for and select it.
4 Once you have specified or selected the server to join, click OK.
To remove a node from a cluster
1 In the Cluster Monitor, right-click the node to remove.
2 From the right-click menu, select Delete.
No more than one Intelligence Server can be configured on a single machine; do not run multiple instances of Intelligence Server on the same machine for clustering purposes.

Part of the clustering procedure in this section is synchronizing caches across the nodes in the cluster, so it is important to understand the synchronization methods before you begin. To configure a cluster of Intelligence Servers in a UNIX/Linux environment, all servers must have access to each other's caches and inbox (History List) files. Both cache and History List files are referred to generally as cache files throughout this section. An Intelligence Server looks for the cache files of other nodes in the cluster by machine name, using the following paths:

/<machine_name>/ClusterCaches
/<machine_name>/ClusterInbox

Do not change the folder names ClusterCaches and ClusterInbox, and be sure the case (upper and lower case) is correct. For example, in a two-node cluster running on machines UNIX1 and UNIX2, the Intelligence Server on UNIX1 looks for the other server's caches only in /UNIX2/ClusterCaches. For an explanation and diagrams of general cache synchronization setup, see Synchronizing information across nodes in a cluster in Windows, page 337, in the Clustering Intelligence Servers: Windows section of this chapter.

When you perform the clustering procedure below, use one of the following mechanisms to set up file sharing for caches:

Network File System (NFS) file sharing: Allows all network users to access shared files stored on different computers. NFS provides access to shared files through a Virtual File System (VFS) interface that runs on top of TCP/IP. You can manipulate shared files as if they were stored locally on your hard disk. With NFS, computers connected to a network operate as clients while accessing remote files, and as servers while providing remote users access to local shared files. The cache files are stored on a device shared via NFS. The device must be mounted properly on each machine to make it accessible to all servers.

Direct-attached storage (DAS): A device, such as an array of disks or a fiber channel device, that is directly connected to the machine, so there is no network traffic or overhead involved. (In the case of a PC, you can think of the PC's hard drive as directly attached to it.) Access time for caches and other files is therefore improved. All machines share a DAS device, for example a fiber channel, and all servers have access to this device for cache files.

Whichever method you choose to share cache files, be careful when naming each mount so that you comply with the naming conventions, because each server looks for caches by machine name.

Because all servers in a cluster must use the same server definition, in some cases you cannot specify the cache location with an absolute path such as /<machine_name>, because the location would have to be different for each server. To solve this problem, use relative paths and soft links. A soft link is a special type of UNIX file that refers to another file by its path name. A soft link is created with the ln (link) command:

ln -s OLDNAME NEWNAME

where OLDNAME is the target of the link, usually a path name, and NEWNAME is the path name of the link itself. Most operations (open, read, write) on the soft link automatically de-reference it and operate on its target (OLDNAME). Some operations (for example, removing) work on the link itself (NEWNAME).

Prerequisites for UNIX/Linux clustering:
- The same version of Intelligence Server must be installed on two or more UNIX/Linux machines.
- The same version of MicroStrategy Desktop must be installed on a Windows machine.
- The required data source names (DSNs) must be created and configured for Intelligence Server on each machine.
- All servers should be configured using the same metadata database, warehouse, port number, and server definition.
- Confirm that each server works properly, and then shut each server down.
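The relative-path and soft-link technique described above can be sketched end to end. This is a minimal, self-contained example: the names UNIX2 and ClusterCaches follow the example in this section, and a temporary directory stands in for a real NFS or DAS mount.

```shell
# Work in a scratch directory so the sketch is self-contained.
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Stand-in for the remote node's mounted cache directory (/UNIX2/ClusterCaches).
mkdir -p UNIX2/ClusterCaches
echo "cached report" > UNIX2/ClusterCaches/reportA

# ln -s OLDNAME NEWNAME: NEWNAME is the link itself, OLDNAME its target.
ln -s "$WORKDIR/UNIX2/ClusterCaches" RemoteCaches

# Open/read operations de-reference the link and reach the target file.
cat RemoteCaches/reportA        # prints: cached report

# Removing operates on the link itself; the target files are untouched.
rm RemoteCaches
ls UNIX2/ClusterCaches          # reportA is still listed
```

Because the link name is the same on every node while its target differs, the shared server definition can refer to one relative path on all machines.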
Making the MicroStrategy home path in each node accessible to all clustered machines
In this example procedure, the following assumptions are made:
- You have completed any required steps in Prerequisites for UNIX/Linux clustering above.
- The UNIX/Linux machines are called UNIX1 and UNIX2.
- The Intelligence Server files are installed in MSTR_HOME_PATH on each machine; MSTR_HOME_PATH on each machine is /Build/BIN/SunOS/.
To configure MicroStrategy home path in each node so it is accessible to all nodes in a cluster
1 Start MicroStrategy Intelligence Server on UNIX1.
2 In Desktop, create project sources pointing to UNIX1 and UNIX2.
3 Connect to UNIX1 using Desktop.
4 Right-click the project source of UNIX1 and select Configure Server.
5 Select the Server Definition category, and select History Settings.
6 Set the path to ./ClusterInbox and click OK.
7 Right-click the project name and select Project Configuration.
8 Select the Caching category, and select Reports (General).
9 Set the path for the cache file directory to ./ClusterCaches.
10 Disconnect from the project source and shut down Intelligence Server.
To set up the UNIX1 machine
11 Create the folders for caches:
mkdir $MSTR_HOME_PATH/ClusterCaches
mkdir $MSTR_HOME_PATH/ClusterInbox
12 Mount the folders from UNIX2 on UNIX1. For example:
mkdir /UNIX2
mount UNIX2:/Build/BIN/SunOS /UNIX2
To set up the UNIX2 machine
13 Create the folders for caches:
mkdir $MSTR_HOME_PATH/ClusterCaches
mkdir $MSTR_HOME_PATH/ClusterInbox
14 Mount the folders from UNIX1 on UNIX2. For example:
mkdir /UNIX1
mount UNIX1:/Build/BIN/SunOS /UNIX1
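If you want the mounts from steps 12 and 14 to persist across reboots, they can be recorded in the file system table. The fragment below is a hedged sketch for UNIX1, assuming a Linux-style /etc/fstab (Solaris uses /etc/vfstab with a different field layout); the mount options shown are generic defaults to adapt to your environment.

```
# /etc/fstab entry on UNIX1: mount UNIX2's install path at /UNIX2 over NFS
UNIX2:/Build/BIN/SunOS   /UNIX2   nfs   defaults   0   0
```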
To start the servers and connect the cluster
15 Start the Intelligence Servers on UNIX1 and UNIX2.
16 In Desktop, connect to UNIX2.
17 In the Folder List, open Administration and select Cluster Monitor.
18 Right-click the Cluster Monitor and select Join cluster.
19 Type the name of the server to join to the cluster, in this example, UNIX1. The cluster membership now shows both nodes of the cluster.
1 Start MicroStrategy Intelligence Server on UNIX1.
2 In Desktop, create project sources pointing to UNIX1 and UNIX2.
3 Connect to UNIX1 using Desktop.
4 Right-click the project source of UNIX1 and select Configure Server.
5 Select the Server Definition category, and select History Settings.
6 Set the path using the following convention: \\<SharedLocation>\<InboxFolder> In this example, set it as \\sandbox\Inbox.
7 Right-click the project name and select Project Configuration.
8 Select the Caching category, and select Reports (General).
9 Following the convention \\<SharedLocation>\<CacheFolder>, set the path to \\sandbox\Caches.
10 Disconnect from the project source and shut down Intelligence Server.
11 Create the folders for caches on the shared device (as described in Prerequisites for UNIX/Linux clustering above):
mkdir /sandbox/Caches
mkdir /sandbox/Inbox
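Step 11's folder creation can be combined with a quick write-access check before the servers are started. This is a minimal sketch; a temporary directory stands in for the shared /sandbox device so the sketch can run anywhere, and the 770 mode is an example value, not a requirement.

```shell
# Simulate the shared device at a temporary location; in a real
# deployment this would be the NFS- or DAS-backed /sandbox path.
SHARE=$(mktemp -d)

# Create the cache and History List (inbox) folders on the shared device.
mkdir -p "$SHARE/Caches" "$SHARE/Inbox"

# Both folders must be writable by the account under which
# Intelligence Server runs; 770 is an illustrative mode.
chmod 770 "$SHARE/Caches" "$SHARE/Inbox"

# Verify write access before pointing the cluster at this location.
for dir in Caches Inbox; do
    test -w "$SHARE/$dir" && echo "$dir is writable"
done
```

Running a check like this up front avoids starting servers against a location they cannot write to.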
To set up the UNIX1 and UNIX2 machines
13 Start the Intelligence Servers on UNIX1 and UNIX2.
14 In Desktop, connect to UNIX1.
15 In the Folder List, expand Administration and select Cluster Monitor.
16 Right-click the Cluster Monitor and select Join cluster.
17 Type the name of the server to join to the cluster, in this example, UNIX2. The cluster membership now shows both nodes in the cluster.
To verify from Desktop

1 Connect to one Intelligence Server in the cluster and ensure that the Cluster Monitor in Desktop shows all the proper nodes as members of the cluster.
Verify the cache
3 Use the Cache Manager and view the report details to make sure the cache is created.
4 Connect to a different node and run the same report. Verify that the report used the cache created by the first node.
Verify the History List
5 Connect to any node and run a report.
6 Add the report to the History List.
7 Without logging out that user, log on to a different node with the same user name.
8 Verify that the History List contains the report added on the first node.
To verify from Web Universal
1 Open the MicroStrategy Web Administrator page.
2 Connect to any node in the cluster. MicroStrategy Web Universal should automatically recognize all nodes in the cluster and show them as connected.
If MicroStrategy Web Universal does not recognize all the nodes in the cluster, it is possible that the machine cannot resolve the names of those nodes. The MicroStrategy cluster implementation uses machine names for internal communication, so the Web Universal machine must be able to resolve those names to IP addresses. You can edit the lmhosts file to map IP addresses to machine names. You can also perform the same cache and History List tests described above in To verify from Desktop.
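For reference, LMHOSTS entries take the following shape. The addresses and node names below are placeholders, and on Windows machines the file is typically found at %SystemRoot%\system32\drivers\etc\lmhosts.

```
# Map cluster node names to IP addresses; #PRE preloads the entry
# into the NetBIOS name cache at startup.
10.0.0.11   NODE1   #PRE
10.0.0.12   NODE2   #PRE
```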
and users cannot use it. You must then manually load the project for it to be available. To manually load a project, right-click the project in the Project Monitor and select Load.
To distribute projects across nodes in a cluster
1 In MicroStrategy Desktop, from the Administration menu, select Projects, then select Select Projects. The Intelligence Server Configuration Editor opens, with the Projects: General category displayed.
2 A column appears for each node in the cluster that is detected at the time the Configuration Editor opens. Select the appropriate check box to configure the system to load a given project on a given server or servers. A selected check box at the intersection of a project row and a server column signifies that the project is to be loaded at startup on that server. If no check boxes are selected for a project, the project is not loaded on any node at startup. Likewise, if no check boxes are selected for a node, no projects are loaded on that server at startup.
All Servers: If the All Servers check box is selected for a project, all servers in the cluster load this project at startup, and all node check boxes are selected automatically. When you add a new server to the cluster, any project set to load on all servers is automatically loaded on the new server. Be aware of the following:
- If you individually select every server for a given project, but do not select the All Servers check box, the system loads the project on the selected servers; when a new server is added to the cluster, this project is not automatically loaded on it.
- If you are using single instance session logging with Enterprise Manager, the single instance session logging project must be loaded onto all the clustered Intelligence Servers at startup. Failure to load this project on all servers at startup results in a loss of session statistics for any Intelligence Server onto which the project is not loaded at startup. For more information about single instance session logging, see Configuring statistics logging, page 266. For more information about this issue, see Tech Note TN6400-80X-0422 in the MicroStrategy Knowledge Base.
3 You can choose whether to display only the selected projects, and whether to apply the startup configuration on save:
- Show selected projects only: Displays only those projects that have been assigned to be loaded on a node; for display purposes, it filters out projects that are not loaded on any server in the cluster.
- Apply startup configuration on save: Causes your changes to be reflected immediately across the cluster. If this check box is cleared, changes are saved when you click OK, but are not put into effect until Intelligence Server is restarted.
4 Click OK when you are finished configuring your projects across the nodes in the cluster. If you do not see the project(s) you want to load displayed in the Intelligence Server Configuration Editor, you must configure the Intelligence Server to use a server definition that points to the metadata containing the project. Use the MicroStrategy Configuration Wizard to accomplish this. See the MicroStrategy Installation and Configuration Guide for details. It is possible that not all projects in the metadata are registered and listed in the server definition when the Intelligence Server Configuration editor opens. This can occur if a project is created or duplicated in a two-tier (direct connection) project source that points to the same metadata as that being used by Intelligence Server while it is running. Creating, duplicating, or deleting a project in two-tier while a server is started against the same metadata is not recommended.
1 In MicroStrategy Desktop, from the Administration menu, select Projects, then select Select Projects. The Intelligence Server Configuration Editor opens, with the Projects: General subcategory displayed.
2 Select the Advanced subcategory.
3 Enter the project failover latency and configuration recovery latency, and click OK.
When deciding on these latency period settings, consider how long it takes an average project in your environment to load on a machine. If your projects are particularly large, they may take some time to load, which places a strain on your system resources. With this consideration in mind, use the following information to decide on a latency period.
back onto the recovered server (the project's original server). Consider the following information when setting a latency period:
- Setting a higher latency period leaves projects on the surrogate server longer. This can be a good idea if your projects are large and you want to be sure your recovered server stays online for a specific period of time before the project load process begins. A high latency period gives the recovered server more time after it comes back online before its projects are reloaded.
- Setting a lower latency period causes projects on the surrogate machine to be removed and loaded relatively quickly onto the recovered server. This can be a good idea if you want to reduce the strain on the surrogate server as soon as possible.
- Disabling the latency period or the recovery process: If you enter 0 (zero), there is no latency period and thus no delay; the configuration recovery process begins immediately. If you enter -1, the configuration recovery process is disabled and projects are not loaded onto the recovered server.
The right-click menu displays additional information:
- Hit count (only hits from the local node)
- Creation time
- Last hit time (only hits from the local node)
- File name
- Waiting list
For background information on the Cache Monitor, see Managing report and History List caches: Cache Monitor, page 246.
For example:
1 Cluster two servers together, called Machine A and Machine B.
2 Run a report on Machine A. This creates a cache on Machine A.
3 Run the same report on Machine B. Machine B accesses the cache on Machine A to return report results.
4 Look at the hit count in the Cache Monitor on Machine A. It shows no hits, because the Cache Monitor on any given machine reflects only the hits initiated by the local machine itself.
5 Look at the hit count in the Cache Monitor on Machine B. It shows one hit, because the Cache Monitor displays the information initiated by that machine; in this case, Machine B hit a cache within the cluster.
When determining hit counts, the location of the cache within the cluster does not matter. The Cache Monitor's hit count on a given machine reflects only the number of cache hits that machine initiated on any cache within the cluster.
View connections on the local machine with the Database Connection Monitor
The MicroStrategy Database Connection Monitor shows the connections to the local node. You can view connection status and number of connections. Based on the information you see in the Database Connection Monitor, you can configure the number of threads and change job prioritization. (In Desktop, from the Administration menu, select Prioritization.) For background information on the Database Connection Monitor, see Managing database instance connections: Database Connection Monitor, page 244.
The primary statuses are Active and Stopped. Projects loaded and project status are displayed for each node. Nodes leaving the cluster are removed from the list completely. For background information on the Cluster Monitor, see Managing clustered Intelligence Servers: Cluster Monitor, page 248. For background information on asymmetric project distribution across nodes, see Distributing projects across nodes in a cluster, page 354.
Node failure: This includes instances such as a power failure or a software error; this is sometimes called a forceful shutdown. Forcefully shut-down nodes retain their valid caches, if those caches are available. However, while the node is shut down, there is no way to monitor the caches, change their status, or invalidate them. They can be deleted by manually deleting the cache files on the local node, or by deleting the appropriate cache files on a shared network location. Be aware that cache files are named with object IDs.
Resource availability
If a node is rendered unavailable due to a forceful shutdown, its cache resources are still valid to other nodes in the cluster and are accessed if they are available. If they are not available, new caches are created on other nodes. In an administrative shutdown, caches associated with the shut-down node are no longer valid for other nodes, even if they are physically available, for example on a file server.
MicroStrategy Web
If a cluster node shuts down while MicroStrategy Web users are connected, those users' jobs return an error message by default. The error message offers the option to resubmit the job, in which case MicroStrategy Web automatically reconnects the user to another node. Customizations to MicroStrategy Web can alter this default behavior in several ways. If a node is removed from the cluster, all existing connections continue to function and remain connected to that machine, although the machine no longer has access to the clustered nodes' resources. Future connections from MicroStrategy Web are made to valid cluster nodes.
Troubleshooting clustering
For answers to common questions and information on troubleshooting a clustered environment, see Troubleshooting clustered environments, page 605.
Tuning overview
The size of the machines you use for MicroStrategy Intelligence Server, how you tune them, and their practical use depend on the number of users, the number of concurrent users, their usage patterns, and so on. MicroStrategy provides general recommendations for these areas on the Support website. (Access the Knowledge Base, Latest Release Information, MicroStrategy 7 Tuning Scaling and Configurations Guidelines section, and open the MicroStrategy Enterprise Configuration Guide.)
There are additional references for Sizing, Tuning, and Practical Usage guidelines. These documents, in turn, refer to more specific Tech Notes. This is illustrated in the diagram below.
[Diagram: the Enterprise Configuration Guide refers to Sizing Guidelines and Tuning Guidelines documents, which in turn refer to specific Tech Notes.]
As a high-level overview of tuning the system, you must:
- Define system requirements
- Configure runtime capacity variables
- Define the system's design
These topics are introduced below and lay the foundation for the remainder of this Tuning section.
- Global Web-based deployment for 400 users, with 15-second response time for prompted reports and the ability to subscribe to personalized weekly sales reports.
- Internal deployment for 200 market research analysts accessing an enterprise data warehouse on a completely ad hoc basis.
- Web-based deployment for 1,500 remote users with access to predefined daily sales and inventory reports, with 5-second response time.
These scenarios share common requirements that can help you define your own expectations for the system:
- You may require that a certain number of users be actively logged in, and that the system handle a certain number of concurrent users.
- You may require that the application have a certain level of performance: for example, that report results return to users within a certain time, that manipulations of a report happen quickly, or that a certain number of reports can be run within an hour or within a day.
- You probably require that certain functionality be available in the system, such as report flexibility so users can run ad hoc, predefined, prompted, page-by, or OLAP Services reports. Other functionality requirements may be that users can schedule (or subscribe to) a report, send a report from the Web interface to someone else via e-mail, or use the rich MicroStrategy Desktop interface.
To achieve the system requirement of a certain number of users able to access the system, Intelligence Server must adequately service user sessions. To attain the requirement of a certain level of performance, Intelligence Server must adequately execute jobs (together with other system components). Finally, the requirement that a certain set of functionality be available to users can also influence the system's capacity. Intelligence Server capacity can be thought of as a box shared by two important goals: serving the necessary number of user sessions (through which users submit requests) and maximizing job execution (which returns results), all while meeting the system requirements you set above. This capacity is shown in the diagram below.
[Diagram: Intelligence Server capacity, shown as a box that receives requests from users and returns results from databases. The capacity is shared between User Sessions (active users, user resources, user profile) and Executing Jobs (connection threads, job prioritization, results processing).]
These runtime capacity variables and the system capacity are interrelated: if you change settings in one, the others are affected. For example, if you place more emphasis on serving more user sessions, job execution may suffer because it does not have as much of the system capacity available to use. Or, if you increase Intelligence Server's capacity, it could execute jobs more quickly or serve more user sessions.
- The configuration of Intelligence Server and projects, which determines how system resources can be used
The diagram below illustrates these factors that influence the system's capacity.
[Diagram: the Intelligence Server capacity box (User Sessions and Executing Jobs) surrounded by the factors that influence it: system resources, architecture, and report design.]
UNIX and Linux systems allow processes and applications to run in a virtual environment. MicroStrategy Intelligence Server Universal installs on UNIX and Linux systems with the required environment variables set to ensure that the server's jobs are processed correctly. However, you can tune these system settings to fit your system requirements and improve performance. For more information, see the Planning Your Installation chapter of the MicroStrategy Installation and Configuration Guide.
Tuning summary
Perhaps your most important job as system administrator is to find the balance that maximizes the use of your system's capacity to provide the best performance possible for the required number of users. The remainder of this tuning section discusses the settings you can make in Intelligence Server (and elsewhere), along with other practices you can use to achieve this balance. These topics are:
- Managing user sessions, page 373
- Governing requests, page 382
- Managing job execution, page 388
- Governing results delivery, page 405
- Managing system resources, page 410
- Designing system architecture, page 430
- Designing reports, page 436
- Configuring Intelligence Server and projects, page 439
Primarily, this section covers:
- How the number of active users and user sessions on your system use system resources just by logging in to the system (see Governing active users, page 374)
- How a user's profile determines what he or she can do when logged in to the system, and how you can govern it (see Governing user profiles, page 377)
- How memory and CPU are used by active users when they execute jobs, run reports, and make requests, and how you can govern them (see Governing user resources, page 379)
person who left for lunch has his session ended by the Intelligence Server. After that, other users can log in. Intelligence Server does not end a user session until all of the jobs submitted by that user have completed (or timed out). This includes reports waiting for autoprompt answers. These user session limits are discussed below as they relate to specific software features and products.
column, you can see the total number of user sessions for each project. You can limit the number of sessions that are allowed for a project. When the maximum number of user sessions for a project is reached, users cannot log in, except for the administrator who can disconnect current users (using the User Connection Monitor) or increase this governing setting. To specify this setting, use the Project Configuration editor for the project, select the Governing: User sessions category and type the number in the User sessions per project box.
To specify this setting, use the MicroStrategy Intelligence Server Configuration editor, select the Governing: General category and type the number of seconds of idle time you want to allow, in the Web user session idle time (sec) box. Report Services documents in Web: Set the Web user session idle time (sec) to 3600 to avoid a project source timeout, if designers will be building documents and dashboards in Web. Then restart Intelligence Server.
Schedule-related privileges
If you allow users to schedule reports to be run (create subscriptions), they can potentially use a large share of the system resources. Most reports should be scheduled for times when the system is not under a heavy workload. This capability is controlled by the Web scheduled reports and Schedule request privileges. If you have Narrowcast Server implemented in your system and users have the Web scheduled e-mail privilege, they can have a report e-mailed at a predetermined time; to do this, the system creates a session on Intelligence Server. If users have the Web send now privilege, they can e-mail a report immediately, which also causes Narrowcast Server to create a session on Intelligence Server.
Exporting privilege
Exporting reports can consume large amounts of memory, especially when exporting to Excel with formatting (a Web export privilege). For more information on this, see Limiting the number of XML cells, page 408.
History List
The History List is an in-memory message list that references reports a user has executed or scheduled. The results are stored as History or Matching-History caches on Intelligence Server. If you allow users to use the History List (sometimes called the Inbox), they can potentially use much of the system resources. Answering the following questions can give you an idea about how its use is affecting your system: Do your users put a lot of their reports in the History List? For example, do they send every executed report to the History List? Do the users clean out read messages from the History List when they log out? How often do you delete the History List messages using the Schedule Administration Tasks feature? Have you limited the Maximum number of messages per user and the Message lifetime (days)? To set this, use the Intelligence Server Configuration editor, Governing: History settings category.
For details on History List settings, see Managing report caches: History List, page 221.
Working set
When a user runs a report from MicroStrategy Web or Web Universal, the results from the report are added to what is called the working set for that user's session and stored in memory on Intelligence Server. The working set is a collection of messages that reference in-memory report instances. A message is added to the working set when a user executes a report or retrieves a message from his or her Inbox. The purpose of the working set is to:
- Allow the efficient use of the Web browser's Back button
- Improve Web performance for manipulations
- Allow users to manually add messages to the History List
Each message in the working set may store two versions of the report instance in memory: the original version and the result version. The original version of the report instance is created the first time the report is executed and is held in memory the entire time a message is part of the working set. The result version of the report instance is added to the working set only after the user manipulates the report. Each report manipulation adds what is called a delta XML to the report message. On each successive manipulation, a new delta XML is applied to the result version. When the user clicks the browser's Back button, previous delta XMLs are applied to the original report instance up to the point (or state) the user is requesting. For example, if a user has made four manipulations, the report has four delta XMLs; when the user clicks the Back button, the three previous XMLs are applied to the original version. This allows additional manipulations to be performed on the original data set without having to run SQL against the data warehouse for each change. The table below outlines which MicroStrategy clients use the working set functionality when running reports.
Client                                    Uses working set?
MicroStrategy Desktop user                No
MicroStrategy Web or Web Universal user   Yes
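The delta-XML mechanism described above can be sketched as a message that keeps one original report instance plus a list of deltas, re-applying all but the most recent delta when the user clicks Back. This is an illustrative model only, with hypothetical class and method names (deltas are modeled as plain functions), not the real Intelligence Server API:

```python
class WorkingSetMessage:
    """Sketch of the working set delta-XML mechanism (hypothetical names)."""

    def __init__(self, original):
        self.original = original   # original report instance, held while the message is in the working set
        self.deltas = []           # one delta per report manipulation

    def manipulate(self, delta):
        self.deltas.append(delta)

    def state(self, n):
        """Re-apply the first n deltas to the original instance."""
        result = self.original
        for delta in self.deltas[:n]:
            result = delta(result)
        return result

    def back(self):
        """Back button: apply all but the most recent delta to the original."""
        return self.state(len(self.deltas) - 1)
```

Note how the original instance is never mutated; each Back-button state is reconstructed from it, which is why no SQL needs to be re-run against the data warehouse.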
Additionally, if a session has an open job, the session is not closed (and its report instance is not removed from the working set) until the job has finished or timed out. This functionality is designed to support the execution of jobs even after the user has logged out. It may, however, add to the problem of excessive memory usage on Intelligence Server, because the session's working set is held in memory until the session is closed. This typically happens when a MicroStrategy Web or Web Universal user runs a report with a prompt. If the user clicks the browser's Back button instead of answering the prompt, an open job is created. If the user closes the browser or logs out without canceling the job, the user session remains open until the User session idle time limit times out the session and the Report execution time (sec) limit ends the job, which is still in the Waiting for Autoprompt state. Therefore, you should decrease these time limits if you notice many jobs that fit this pattern.
Governing requests
Each user session can execute multiple concurrent jobs or requests. This happens when users run documents that submit multiple child reports at a time or when they send a report to the History List, then execute another while the first one is still executing. Users can also log in to the system multiple times and run reports simultaneously. Again, this has the potential for using up much of the system resources. This aspect of Intelligence Server capacity is shown in the diagram below.
[Diagram: Intelligence Server capacity (User Sessions and Executing Jobs), influenced by system resources, architecture, and report design.]
To control the number of jobs that can be running, you can set limits per user for a project. Specifically, you can limit:
- The amount of time reports can execute (Limiting the maximum report execution time, page 383)
- The number of executing reports or data marts per user account, not counting element requests, metadata requests, and report manipulations (Limiting the number of executing jobs per user and project, page 384)
- The number of jobs per user account and per user session (Limiting the number of jobs per user session and per user account, page 385)
- The number of jobs per project (Limiting the number of jobs per project, page 386)
- The total number of jobs (Limiting the total number of jobs, page 387)
- A report's SQL (per pass), including both its size and the time it executes (Limiting a report's SQL per pass, page 387)
Step  Status                  Comment
1     Waiting for Autoprompt  Resolving prompts
2*    Waiting (in queue)      Element request is waiting in job queue for execution
3*    Executing               Element request is executing on the database
4     Waiting for Autoprompt  Waiting for user to make prompt selections
5     Waiting (in queue)      Waiting in job queue for execution
6     Executing               Query engine executes SQL on database (can be multiple passes)
7     Executing               Analytical engine processes results
*Steps 2 and 3 are for an element request; they are executed as separate jobs. During steps 2 and 3, the original report job has the status Waiting for Autoprompt. The following tasks are not shown in the example above because they consume less time, but they also count toward the report execution time:
- Element Request SQL Generation
- Report SQL Generation
- Returning results from the database
For more information about the job processing steps, see Processing jobs, page 27.
then processed in the order in which they were placed in the queue, which is controlled by the priority map (see Prioritizing jobs, page 394). To specify this limit setting, use the Project Configuration editor for the project, select the Governing: Jobs category, and type the number of concurrent report jobs per user you want to allow in the Executing jobs per user box.
Limiting the number of jobs per user session and per user account
If your users' job requests place a heavy burden on the system, you can limit the number of open jobs within Intelligence Server, including element requests, autoprompts, and reports for a user. To help control the number of jobs that can run in a project and thus reduce their impact on system resources, you can limit the number of concurrent jobs a user can execute in a user session. For example, if the Jobs per user session limit is set at four and a user has one session open for the project, that user can execute only four jobs at a time. However, the user can bypass this limit by logging in to the project multiple times. (To prevent this, see the next setting, the Jobs per user account limit.) To specify this setting, use the Project Configuration editor for the project, select the Governing: Jobs category, and type the number in the Jobs per user session box. You can also set a limit on the number of concurrent jobs that a user can execute for each project, regardless of the number of user sessions that user has at the time. For example, if the user has two user sessions and the Jobs per user session limit is set at four, the user could potentially run eight jobs. But if the Jobs per user account limit is set to five, that user can execute only five jobs, regardless of the number of times the user logs in to the system. Therefore, this limit can prevent users from circumventing the Jobs per user session limit by logging in multiple times.
To specify this setting, use the Project Configuration editor for the project, select the Governing: Jobs category, and type the number of jobs per user account you want to allow in the Jobs per user account box. These two limits count the following types of job requests that are executing or waiting to execute:
- Report
- Element
- Autoprompt
Jobs that have finished, cached jobs, or jobs that returned in error are not counted toward these limits. If either limit is reached, any jobs the user submits do not execute and the user sees an error message.
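The interaction of the two limits can be sketched as a simple governing check performed before a new job is accepted. This is an illustrative model only (hypothetical function and parameter names, with the example limits of four jobs per session and five per account used as defaults), not the actual Intelligence Server logic:

```python
def can_submit_job(session_job_count, account_job_count,
                   jobs_per_session=4, jobs_per_account=5):
    """Sketch of the two governing checks described above.

    Counts include only jobs that are executing or waiting to execute;
    finished, cached, and errored jobs do not count. If either limit is
    reached, the new job is rejected and the user sees an error message.
    """
    if session_job_count >= jobs_per_session:
        return False   # Jobs per user session limit reached
    if account_job_count >= jobs_per_account:
        return False   # Jobs per user account limit reached (across all sessions)
    return True
```

With these defaults, a user with two sessions could never reach the theoretical eight jobs (two sessions times four), because the account-wide count blocks the sixth job.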
Maximum SQL Size (database instance) You can limit the size (in bytes) of the SQL statement per pass before it is submitted to the data warehouse. If the size for a SQL pass reaches the maximum, Intelligence Server cancels the job and the user sees an error message. You can specify this setting at the database instance level. To specify this, edit the VLDB properties for the database instance, expand Governing settings, then select the Maximum SQL Size option. (See the online help for details.)
The main factor that determines job execution performance is the number of connection threads made to the data warehouse. The optimum number allows users' requests to return without waiting too long and does not overload the system resources. This section discusses how you can manage job execution, including:
- Managing database connection threads, page 389
- Prioritizing jobs, page 394
- Results processing, page 400 (the processing that Intelligence Server performs on results returned from the data warehouse)
provide enough threads so that jobs that must be processed immediately are processed immediately, and the remaining jobs are processed as promptly as possible. To monitor whether the number of database connection threads in your system is effective, use the Database Connection Monitor. For more information about this tool, see Managing database instance connections: Database Connection Monitor, page 244. If all threads are Busy a high percentage of the time, consider increasing the number of connection threads, as long as your data warehouse can handle the load and Intelligence Server does not become overloaded. Once you have calculated the number of threads, you can set priorities and control how many threads are dedicated to serving jobs that meet certain criteria.
between Intelligence Server and the data warehouse. These settings apply to all projects that use the selected database instance.
You are not required to set medium and high connections; however, you should have at least one low connection, because low priority is the default job priority. If you set all connections to zero, jobs are not submitted to the data warehouse. This may be a useful way for you to test whether scheduled reports are processed by Intelligence Server properly. Jobs wait in the queue and are not submitted to the data warehouse until you increase the connection number, at which point they are then submitted to the data warehouse. Once the testing is over, you can delete the jobs so they are never submitted to the data warehouse. The optimal number of connections depends on many factors; however, the main criterion to consider when setting the number of connections is the number of concurrent queries your data warehouse can support.
Maximum cancel attempt time When a user runs a report that executes for a long time on the data warehouse, the user may cancel the job execution. This may be due to an error in the report's design (especially if the report is in a project in a development environment), or the user may simply not want to wait any longer. Once the cancel request is made, if the cancel is not successful after a short time (30 seconds), a timer starts counting. You can set a limit on how long Intelligence Server should wait (in addition to the 30 seconds) while the cancel occurs. If the cancel is not successful before the limit is reached, Intelligence Server deletes that database connection thread. To set this limit, edit the database instance, then modify the database connection (at the bottom of the Database Instances dialog box), and on the Database Connections dialog box Advanced tab, specify the Maximum cancel attempt time (sec).
Maximum query execution time This is the maximum amount of time a single pass of SQL can execute on the data warehouse. When the SQL statement or fetch operation begins, a timer starts counting. If the Maximum query execution time limit is reached, Intelligence Server cancels the operation. This setting is very similar to the SQL time out (per pass) VLDB setting that was discussed earlier; however, that VLDB setting overrides this setting. This setting is made on the database connection and may be used to govern the maximum across all projects using that connection. The VLDB setting may be used to override this setting for a specific report. Values of 0 and -1 indicate no limit.
To set this limit, edit the database instance, then modify the database connection (at the bottom of the Database Instances dialog box), and on the Database Connections dialog box Advanced tab, specify the Maximum query execution time (sec).
Maximum connection attempt time This is the maximum amount of time Intelligence Server waits while attempting to connect to the data warehouse. If the connection cannot be made, this setting limits how long a user must wait before he or she sees an error message. The connection may not be made because the data warehouse is busy or the ODBC driver software is not responding. Values of 0 and -1 indicate no limit. To set this limit, edit the database instance, then modify the database connection (at the bottom of the Database Instances dialog box), and on the Database Connections dialog box Advanced tab, specify the Maximum connection attempt time (sec).
Connection lifetime (sec) When a database connection thread is created, a timer starts counting and compares it to this limit called Connection lifetime. The entire duration of the connection cannot be longer than this limit. Depending on the status of the connection at the time the limit is reached, several things can happen:
- If the database connection has a status of Cached (it is idle but available) when the limit is reached, the connection is deleted.
- If the database connection has a status of Busy (it is executing a job) when the limit is reached, the job completes; then the connection is deleted rather than going into a Cached state.
Values of 0 and -1 indicate no limit. If this setting is too long, the data warehouse RDBMS may have its own limit that deletes the connection whether or not it is in the middle of a job. Ideally, set the Connection lifetime to be shorter than the RDBMS limit. To set this limit, edit the database instance, then modify the database connection (at the bottom of the Database Instances dialog box), and on the Database Connections dialog box Advanced tab, specify the Connection lifetime (sec).
Connection idle timeout (sec) This is the amount of time an inactive connection thread remains cached in Intelligence Server until it is terminated. When a database connection finishes a job and there is no job waiting to use it, the connection is cached and a timer starts counting. If the time reaches the Connection idle timeout limit, the database connection thread is deleted. This is used to prevent connections from tying up data warehouse and Intelligence Server resources if they are not needed. Values of 0 and -1 indicate no limit. To set this limit, edit the database instance, then modify the database connection (at the bottom of the Database Instances dialog box), and on the Database Connections dialog box Advanced tab, specify the Connection idle timeout (sec).
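The two timers that govern an idle (Cached) connection can be sketched as a single decision function. This is a simplified model with hypothetical names; the default values shown are illustrative, not product defaults, and 0 or -1 means no limit as described above:

```python
def reap_cached_connection(now, created_at, idle_since, lifetime, idle_timeout):
    """Sketch of the Connection lifetime and Connection idle timeout
    rules for a Cached database connection (hypothetical names).
    All times are in seconds; a limit of 0 or -1 means no limit."""
    def limit_reached(limit, elapsed):
        return limit not in (0, -1) and elapsed >= limit

    if limit_reached(lifetime, now - created_at):
        return "deleted (lifetime reached)"
    if limit_reached(idle_timeout, now - idle_since):
        return "deleted (idle timeout)"
    return "kept"
```

A Busy connection is handled differently, as described above: it finishes its job first and is then deleted instead of being cached.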
Prioritizing jobs
Job priority defines the order in which jobs are processed. Jobs are usually executed on a first-come, first-served basis; however, you probably have certain jobs that you want to be processed before other jobs.
Consider this example. An executive in your company runs reports at unplanned times and you want to ensure that these reports are immediately processed. If no priority is set for the executive's reports, they are processed with the bulk of other jobs, which, depending on data warehouse activity, could require some wait time. A better alternative is to assign a high priority to the executive's user group, thus enabling Intelligence Server to process these reports and submit them to the data warehouse first, rather than having them wait behind other jobs. Intelligence Server processes a job on a database connection that corresponds to the job's priority. If no priority is specified for a job, Intelligence Server processes the job on a low priority connection. For example, jobs with a high priority are processed by high priority connections, and jobs with a low or no priority are processed by a low priority connection. These priority connections do not affect the amount of resources a job gets once it is submitted to the data warehouse. Rather, they determine whether jobs are submitted to the data warehouse before others in the queue. Intelligence Server also engages in connection borrowing when processing jobs. Connection borrowing occurs when Intelligence Server executes a job on a lower priority connection because no connections that correspond to the job's priority are available at execution time. High priority jobs can run on high, medium, and low priority connections. Likewise, medium priority jobs can run on medium and low priority connections. For example, if a job with a medium priority is submitted and all medium priority connections are busy processing jobs, Intelligence Server processes the job on a low priority connection. When a job is submitted and no connections are available to process it, either with the same priority or with a lower priority, Intelligence Server places the job in the queue and then processes it when a connection becomes available.
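The connection-borrowing rule above amounts to a fixed fallback order per job priority: a job may use its own pool or any lower-priority pool, never a higher one. A minimal sketch, with hypothetical names and structure (not the real Intelligence Server implementation):

```python
# Fallback order for connection borrowing: a job may borrow a
# lower-priority connection, never a higher-priority one.
BORROW_ORDER = {
    "high":   ["high", "medium", "low"],
    "medium": ["medium", "low"],
    "low":    ["low"],
}

def pick_connection(job_priority, free_connections):
    """Return the priority pool to run the job on, or None to queue it.

    free_connections maps a priority level to the number of idle
    connections in that pool.
    """
    for pool in BORROW_ORDER[job_priority]:
        if free_connections.get(pool, 0) > 0:
            return pool
    return None  # no same-or-lower-priority connection free: job waits in queue
```

Note that a low priority job with only high priority connections free still queues, which is what reserves those connections for the jobs that need them.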
There are three possible priorities: high, medium, and low. You decide which variables are used to determine a job's priority. The possible variables are:
- Request type: jobs of different types (Documents, Elements, Narrowcast Server, and Reports) are processed by the priority you define for each (Prioritizing jobs by request type, page 397)
- Application type: jobs submitted from different applications (Desktop, Scheduler, and Web) are processed by the priority you specify (Prioritizing jobs by application type, page 397)
- User group: jobs submitted by users in the groups you select are processed by the priority you specify (Prioritizing jobs by user group, page 398)
- Cost: refers to the cost of processing a job (Prioritizing jobs by report cost, page 398)
Job cost is an arbitrary value you can assign in a report's properties that symbolizes the cost of processing that job (the higher the number, the heavier the cost).
- Project: jobs submitted from projects you select are processed by the priority you specify (Prioritizing jobs by project, page 400)
These variables allow you to create sophisticated rules for which job requests are processed first. For example, you could specify that element requests submitted by any user are high priority, that any report requests from Project M are low priority and from Project Y are medium priority.
To set job prioritization rules
1 On the Intelligence Server machine, start Desktop. In the project source's Administration folder list, click Database Instance Manager.
2 Right-click the database instance used to connect to the data warehouse and select Prioritization. The Database Instances editor opens with the Job Prioritization tab selected. Any prioritization rules that have already been created are listed in the tab.
3 To add rules, click New. This starts the Job Prioritization Wizard.
For specific information about using the wizard, press F1 to view the online help. The variables you can set using the wizard are discussed below.
This variable can be selected in combination with other variables to create a set of criteria that must all be met to have a certain job priority.
Report cost between 0 and 334 = Light
Report cost between 335 and 666 = Medium
Report cost between 667 and 999 = Heavy
With a set of report costs like those above, you could specify that all Heavy reports have a low job priority, Medium reports a medium job priority, and Light reports a high job priority. The set of cost groupings must cover all values from 0 to 999. Once you determine the cost groupings, you can set the report cost value on individual reports. For example, if you notice that a particular report requires significantly more processing time than most other reports, you can assign it a report cost of 900 (Heavy); in this sample configuration, the report then has a low priority. This variable can be selected in combination with other variables to create a set of criteria that must all be met for a job to have a certain priority. To set the priority of a report by report cost, you must belong to the System Administrators group; system administrator privileges are required to view the Priority tab in the Properties dialog box for a report.
To set the report cost value for a report
1 Using MicroStrategy Desktop, right-click the report and select Properties. The Properties dialog box opens.
2 Click the Priority tab. (See related note above.)
3 Type a number in the Report Cost box.
Higher numbers indicate a heavy report, or one that is large or exerts a higher workload on the data warehouse. Lower numbers indicate a lighter report.
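The cost-to-priority groupings described above amount to a simple range lookup. A sketch of the sample configuration, with the Light band taken as the remaining 0-334 range so that the bands cover all values from 0 to 999 (hypothetical names; you would choose your own band boundaries):

```python
# Sample cost groupings: (low, high, label, resulting job priority).
# Together the bands must cover every value from 0 to 999.
COST_BANDS = [
    (0,   334, "Light",  "high"),
    (335, 666, "Medium", "medium"),
    (667, 999, "Heavy",  "low"),
]

def priority_for_cost(report_cost):
    """Map a report cost (0-999) to a job priority per the bands above."""
    for low, high, _label, priority in COST_BANDS:
        if low <= report_cost <= high:
            return priority
    raise ValueError("report cost must be between 0 and 999")
```

Under this configuration, the report assigned a cost of 900 in the example above falls in the Heavy band and runs at low priority.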
Results processing
When Intelligence Server processes results returned from the data warehouse, several factors determine how much of the machine's resources are used. These are discussed below:
- Report size (Report size, page 400)
- Analytic calculations and subtotaling (Analytic complexity, page 404)
Report size
A report instance is the version of the report results that Intelligence Server holds in memory for cache and working set results. The size of the report instance is proportional to the size of the report results (row size * number of rows). The row size depends on the data types of the attributes and metrics on the report. Dates are the largest data type. Text strings, such as descriptions and names, are next in size (unless the description is unusually long, in which case they can be larger than dates). Numbers, such as IDs, totals, and metric values, are the smallest.
The table below shows examples of the relationship between cache size and report size (in number of cells).
Rows     Columns  Report Cells
7,000    6        42,000
10,000   11       110,000
59,190   13       769,470
160,704  14       2,249,856
197,265  18       3,550,770
The easiest way to estimate the amount of memory the reports use is to view the size of the cache files using the Cache Monitor in MicroStrategy Desktop. The Cache Monitor shows the size of the report results in binary format, which testing has shown to be 30-50% of the actual size of the report instance in memory. To govern a report's (or request's) size, you can limit the report's number of result rows, the number of element rows, and the number of intermediate rows. These are discussed below.
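The size arithmetic above (cells = rows * columns, with the cache file at 30-50% of the in-memory instance) can be combined into a back-of-the-envelope estimate. The per-cell byte figure below is purely illustrative (actual cell size depends on data types, with dates largest and numeric values smallest), and the function name is hypothetical:

```python
def estimate_report_instance_memory(rows, columns,
                                    avg_cell_bytes=8,
                                    cache_to_memory_ratio=0.4):
    """Rough sketch only: estimate in-memory report instance size.

    avg_cell_bytes is an assumed illustrative figure, and 0.4 sits in
    the 30-50% cache-file-to-in-memory range mentioned above.
    Returns (cell count, estimated in-memory bytes).
    """
    cells = rows * columns
    cache_file_bytes = cells * avg_cell_bytes
    in_memory_bytes = cache_file_bytes / cache_to_memory_ratio
    return cells, in_memory_bytes
```

In practice you would read the real cache file size from the Cache Monitor and divide by the observed ratio, rather than guess a per-cell size.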
To specify this setting for all reports in a project, use the Project Configuration editor for the project, select the Governing: Result Sets category and type the number in the Number of report result rows box.
To set the result set limit for a specific report
1 Edit the report for which you want to set the limit. (Right-click the report and select Edit.) The Report Editor opens.
2 From the Data menu, select VLDB properties. The VLDB Properties dialog box opens.
3 Expand the Governing settings, then select Results Set Row Limit.
4 Clear the Use default inherited value check box (if it is not already cleared).
5 Type the limit in the Results Set Row Limit box.
6 Click Save and Close to save the VLDB properties. You see the Report Editor again.
7 Click Save and Close to save the report along with the changed VLDB properties.

Limiting the number of element rows
Another way you can limit a report's (or a request's) size on Intelligence Server is to limit the number of element rows returned at a time, either for a report prompt or when using the Data Explorer feature in MicroStrategy Desktop. Element rows are incrementally fetched (returned in smaller batches) from the data warehouse to Intelligence Server. The size of the increment depends on the maximum number of elements specified in the client (the defaults are 10,000 in Desktop and 15 in Web); Intelligence Server incrementally fetches 4 times that number for each element request.
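The incremental-fetch behavior above can be sketched as follows: each fetch retrieves 4x the client's element setting, stopping when all rows are fetched or the Number of element rows limit is reached. This is a hypothetical helper for illustration, not the real implementation:

```python
def element_fetch_sizes(total_rows, client_max_elements, element_row_limit):
    """Sketch of incremental element fetching (hypothetical names).

    Returns the size of each successive fetch. The increment is 4x the
    client's maximum-elements setting; fetching stops at the table size
    or at the Number of element rows governing limit.
    """
    increment = 4 * client_max_elements
    cap = min(total_rows, element_row_limit)
    fetched = 0
    sizes = []
    while fetched < cap:
        take = min(increment, cap - fetched)
        sizes.append(take)
        fetched += take
    return sizes
```

With the Web default of 15 elements, for example, the increment is 60 rows per fetch; with the Desktop default of 10,000, a 10,000-row lookup table is returned in a single fetch.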
MicroStrategy recommends that you set the Number of element rows limit to be larger than the maximum number of attribute element rows you expect users to browse. For example, if the Product table in the data warehouse has 10,000 rows that users want to browse and the Order table has 200,000 rows that you do not expect users to browse, you should set this limit to 11,000. Intelligence Server will incrementally fetch the element rows. If the Number of element rows limit is reached, the user sees an error message and cannot view the prompt or the data. For more information about element requests, such as how they are created, how incremental fetch works, and the caches that store the results, see Element caches, page 201. To specify this setting for all reports in a project, use the Project Configuration editor for the project, select the Governing: Result Sets category and type the number in the Number of element rows box.
To specify this limit for all reports in a project, use the Project Configuration editor, select the Governing: Result Sets category and type the number in the Number of intermediate result rows box.
To set the Intermediate row limit for a specific report
1 Edit the report for which you want to set the limit. (Right-click the report and select Edit.) The Report Editor opens.
2 From the Data menu, select VLDB properties. The VLDB Properties dialog box opens.
3 Expand the Governing settings, then select Intermediate Row Limit.
4 Clear the Use default inherited value check box (if it is not already cleared).
5 Type the limit in the Intermediate Row Limit box.
6 Click Save and Close to save the VLDB properties. You see the Report Editor again.
7 Click Save and Close to save the report along with the changed VLDB properties.
Analytic complexity
Users can apply features to a report that require the Analytical Engine component of Intelligence Server to process them. These features have an impact on Intelligence Server's system resources. Be aware of their potential impact and inform your report designers of it.
Analytic calculations
Calculations that cannot be performed with SQL in the data warehouse are performed by the Analytical Engine in Intelligence Server. These may result in significant memory use during report execution. Some analytic calculations (such as AvgDev) require the entire column of the fact table as input to the calculation. The amount of memory used depends on the type of calculation and the size of the input dataset.
Subtotals
The amount of memory required to calculate and store subtotals can be significant. In some cases, the size of the subtotals can surpass the size of the report result itself. One example of when subtotals can grow large is if, in the Advanced Subtotals Options dialog box (in Desktop when viewing the report, select the Data menu, then Subtotals, then choose Advanced), you select the All Subtotals option in the Pages drop-down list. This option calculates all possible subtotal calculations at runtime and stores the results in the report instance. Encourage users and report designers to use less taxing options for calculating subtotals across pages, such as Selected Subtotals and Grand Total. The number of subtotals depends on the subtotaling option chosen, along with the order and number of unique attributes. The easiest way to determine the number of subtotals being calculated is to examine the number of result rows added with the different options selected.
[Diagram: factors that influence Intelligence Server capacity — Architecture; User Sessions (active users, user resources, user profile); Users; Executing Jobs (connection threads, job prioritization, results processing); Databases; Report design]
To deliver results, Intelligence Server generates XML and sends it to the Web server (this happens when a report is first run or when it is manipulated). The Web server then translates the XML into HTML for display in the user's Web browser. You can set limits in two areas to control how much information is sent at a time. The lower of these two settings determines the maximum size of results that Intelligence Server delivers at a time:
- How many rows and columns can be displayed at a time in a MicroStrategy Web product (see Setting the Incremental fetch limit in MicroStrategy Web products, page 407)
- How many XML cells in a result set can be delivered at a time (see Limiting the number of XML cells, page 408)
As a result, the user sees report results more quickly and the impact on system resources is limited. Additionally, governing results delivery includes:
- Controlling how large a report can be exported at a time (see Limiting export sizes, page 409 and Limiting the memory consumption for file generation, page 409)
- Controlling XML drill paths (see Limiting the total number of XML drill paths, page 410)
These control how much information is displayed at one time in the MicroStrategy Web products interface. They can be set as project defaults by the Web administrator, or, if users have the proper privilege, they can customize the values.
The Web administrator sets the default values for these limits. (To do this, click Preferences, then Project defaults, then click Grid display and specify the Maximum rows in grid and Maximum columns in grid settings.) To customize the values, users must have the Web change user preferences privilege. Then, in the MicroStrategy Web products interface, they click Preferences, then Grid display, and specify the Maximum rows in grid and Maximum columns in grid settings.
For more information about this, see the topic for Incremental fetch in the Web online help. If a report's result set is larger than these limits, the report is broken into pages (or increments) that are delivered (or fetched from the server) one at a time. Therefore, the user sees one increment at a time. If users set the number of rows and columns too high, the number of XML cells limit that is set in Intelligence Server (discussed next) governs the size of the result set.
If the limit is smaller, generating the XML takes longer because it is done in small batches, but uses less memory and fewer system resources. If the limit is larger, generating the XML takes less time because it is done in fewer, larger batches, but uses more memory and system resources.
To set the XML limit, use the Intelligence Server Configuration editor, select the Governing: File Generation category, then specify the Maximum number of XML cells. You must restart Intelligence Server for the new limit to take effect.
- Maximum number of cells to export to plain text
- Maximum number of cells to export to HTML and Excel
resources you wish you could have. The factor that influences Intelligence Server capacity discussed in this section is highlighted in the diagram below.
[Diagram: factors that influence Intelligence Server capacity, with System resources highlighted — Intelligence Server Capacity; Requests; Architecture; User Sessions (active users, user resources, user profile); Users; Executing Jobs (connection threads, job prioritization, results processing); Databases; Report design]
You must make certain choices about how to maximize the use of your system's resources. If you can increase resources, what should you add so that you get the most improvement? In short, the machine resources available to Intelligence Server are probably the most important thing you should maximize. This includes:
- The processors (Processor type and speed, page 412 and Number of processors, page 412)
- Physical disk characteristics (Physical disk, page 413)
- The amount of memory (Memory, page 414), including these additional discussions:
  - How memory is used when Intelligence Server starts (How much memory does Intelligence Server use when it starts up?, page 416)
  - How memory is used when it is running (How does Intelligence Server use memory after it is running?, page 418)
  - How to govern memory (Governing Intelligence Server memory use, page 419 and Governing memory for requests from MicroStrategy Web products, page 429)
Because Intelligence Server is the central component of the MicroStrategy system, it is imperative that the machine(s) running it have enough memory and the right number of processors for your needs. These are discussed below. You may also wish to see the MicroStrategy Installation and Configuration Guide for information about small, medium, and large configurations. Also refer to the MicroStrategy Enterprise Configuration Guide on the MicroStrategy Knowledge Base.
Number of processors
Intelligence Server performs faster on a machine with multiple processors. If you notice that the processor is running consistently at a high capacity (for example, greater than 80% on average), consider increasing the number of processors. You can use the Windows Performance Monitor (or Task Manager) to monitor this. Intelligence Server is aware of the number of processors it is allowed to use according to the license key you have purchased. For example, if a machine running Intelligence Server has two processors and you upgrade it to four, Intelligence Server uses only the two processors and ignores
412 Tuning the system for best performance
2008 MicroStrategy, Inc.
the additional two until you receive a new license key from MicroStrategy. Also, if several Intelligence Server machines are clustered together, the application ensures that the number of processors being used does not exceed the number licensed. Use the License Manager tool to enter the new license key (see Updating a license, page 156).
Physical disk
If the physical disk is utilized heavily on a machine hosting Intelligence Server, it can indicate a bottleneck in the system's performance. To monitor this, use the Windows Performance Monitor for the object PhysicalDisk and the counter % Disk Time. If you see that the counter is greater than 80% on average, it may indicate that there is not enough memory on the machine. This is because when the machine's physical RAM is full, the operating system starts swapping memory in and out of the page file on disk. This is not as efficient as using RAM, so Intelligence Server's performance may suffer. By monitoring the disk utilization, you can see whether the machine is consistently swapping at a high level. Defragmenting the physical disk may help lessen the amount of swapping. If that does not sufficiently lessen the utilization, consider increasing the amount of physical RAM in the machine. For a discussion of how memory is used in Intelligence Server, see Memory, page 414. We recommend that you establish a benchmark or baseline of a machine's normal disk utilization, perhaps even before Intelligence Server is installed. This way you can determine whether or not Intelligence Server is responsible for excessive swapping because of limited RAM. Another performance counter you can use to gauge the disk's utilization is the Current disk queue length, which indicates how many requests are waiting at a given time. MicroStrategy recommends using the % Disk Time and Current disk queue length counters to monitor disk utilization; however, you may also wish to use other counters.
Memory
The memory used by Intelligence Server is limited by two factors:
- The machine's virtual memory
- The user address space used by the Intelligence Server process
Virtual Memory
Virtual memory is the amount of physical memory (RAM) plus the Disk Page file (swap file). It is shared by all processes running on the machine, including the operating system. When a machine runs out of virtual memory, processes on the machine are no longer able to process instructions and eventually the operating system may shut down. More virtual memory can be obtained by making sure that as few programs or services as possible are executing on the machine or by increasing the amount of physical memory or the size of the page file. Increasing the amount of virtual memory, and therefore the available private bytes, by increasing the page file size may have adverse effects on Intelligence Server performance due to increased swapping. Private bytes are the bytes of virtual memory that are allocated to a given process. Private bytes are so named because they cannot be shared with other processes: when a process such as Intelligence Server needs memory, it allocates an amount of virtual memory for its own use. The private bytes used by a process can be measured with the Private Bytes counter in the Windows Performance Monitor. The governing settings built into Intelligence Server control its demand for private bytes by limiting the number and scale of operations which it may perform simultaneously. In most production environments, depletion of virtual memory through private bytes is not an issue with Intelligence Server.
The two counters you should log with Performance Monitor are Private Bytes and Virtual Bytes for the Intelligence Server process (Mstrsvr.exe). A sample log of these two counters (along with others) for Intelligence Server is shown in the diagram below.
The diagram above illustrates the gap between private bytes and virtual bytes in Intelligence Server. The Virtual Bytes counter represents memory that is reserved, not committed, for the process. Private Bytes represents memory actually being used by the process. Intelligence Server reserves regions of memory (called heaps) for use within the process. The heaps that are used by Intelligence Server cannot share reserved memory between themselves, causing the gap between reserved memory (virtual bytes) and memory being used by the process (private bytes) to increase further.
How much memory does Intelligence Server use when it starts up?
The amount of memory consumed during startup is impacted by a number of factors such as metadata size, the number of projects, schema size, number of processing units, number of database connection threads, and whether Intelligence Server is in a clustered configuration. Because these factors
are static, the amount of memory consumed at startup is fairly constant. This lets you accurately estimate how much memory is available to users at runtime. When Intelligence Server starts up, it uses memory in the following ways:
- It initializes all internal components and loads the static DLLs necessary for operation. This consumes 25 MB of private bytes and 110 MB of virtual bytes. You cannot control this memory usage.
- It loads all server definition settings and all configuration objects. This consumes an additional 10 MB of private bytes and an additional 40 MB of virtual bytes, bringing the total memory consumption at this point to 35 MB of private bytes and 150 MB of virtual bytes. You cannot control this memory usage.
- It loads the project schema into memory (needed by the SQL engine component). The number and size of projects greatly impacts the amount of memory used. This consumes an amount of private bytes equal to 3x the schema size and an amount of virtual bytes equal to 4x the schema size. For example, with a schema size of 5 MB, the private bytes consumption would increase by 15 MB (3 * 5 MB) and the virtual bytes consumption by 20 MB (4 * 5 MB). You can control this memory usage by limiting the number of projects that load at startup time.
- It creates the database connection threads. This primarily affects virtual bytes consumption, with an increase of 1 MB per thread whether or not the thread is actually connected to the database. You cannot control this memory usage.
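Using the figures above, a rough startup-memory estimate can be sketched in Python (an approximation based only on the numbers quoted in this section; the function name is illustrative, not a MicroStrategy API):

```python
def startup_memory_mb(schema_size_mb, db_connection_threads):
    """Estimate Intelligence Server memory use at startup, in MB.

    Figures from the text: 25 MB private / 110 MB virtual for internal
    components and static DLLs, plus 10 MB / 40 MB for server definition
    and configuration objects, plus 3x / 4x the loaded schema size,
    plus ~1 MB of virtual bytes per database connection thread.
    Returns (private_bytes_mb, virtual_bytes_mb).
    """
    private = 25 + 10 + 3 * schema_size_mb
    virtual = 110 + 40 + 4 * schema_size_mb + 1 * db_connection_threads
    return private, virtual

# A 5 MB schema and 20 database connection threads:
print(startup_memory_mb(5, 20))  # (50, 190)
```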
To calculate the amount of memory that Intelligence Server uses when it starts
If you are not performing this procedure in a production environment, make sure you set all the configuration options as they exist in your production environment. Otherwise, the measurements will not reflect the actual production memory consumption.
2 Once Intelligence Server has started, use the Windows Performance Monitor to create and start a performance log that measures the Private Bytes and Virtual Bytes of the MSTRSVR process. For instructions on using the Windows Performance Monitor, see Manage system memory and resources: Windows Performance Monitor, page 249.
3 While logging with Performance Monitor, stop Intelligence Server. Performance Monitor continues to log information for the Intelligence Server process. You can confirm this by logging the counter information to the current activity window as well as the performance log.
4 Start Intelligence Server again. The amount of memory consumed should be easily measured.
- Additional configuration objects: caching of User, Connection map, and Scheduler information created or used after Intelligence Server has been started.
- Report caches: result and matching caches created after Intelligence Server has been started. The maximum amount of memory Intelligence Server uses for report caches is configured at the project level. For more information, see Report caches, page 171.
- Object and Element caches: caches that have been created since Intelligence Server has started. The maximum amount of memory used for object and element caches is configured at the project level. For details, see Element caches, page 201 and Object caches, page 216.
- User session related resources: History List and Working Set memory, which are greatly influenced by governing settings, report size, and report design. For details, see Managing user sessions, page 373 and Managing report caches: History List, page 221.
- Request and results processing: memory needed by Intelligence Server components to process requests and report results. This is primarily influenced by report size and report design with respect to analytical complexity. For information, see Governing requests, page 382 and Results processing, page 400.
- Clustering: memory used by Intelligence Server to communicate with other cluster nodes and maintain synchronized report cache and History List information. For information, see Clustering multiple machines in the system, page 329.
- Scheduling: memory used by the scheduler while executing reports for users who are not logged in to the system. For information, see Scheduling jobs and administrative tasks, page 451.
- Database requests from either the MicroStrategy metadata or the data warehouse
- SQL generation
- Analytical Engine processing (subtotals, cross tabulation, analytic functions)
- Cache creation and updating
- Report parsing and serialization for network transfer
- XML generation
The memory load of the requests governed by MCM cannot be predicted: it depends on the amount of data returned from the data warehouse. Requests such as graphing, cache lookup, or document generation use a predictable amount of memory, and thus are not governed by MCM. For example, a request for a report returns an acceptable amount of data. A graph of the report's results would be based on the same data, and thus would be allowed, so there is no need for MCM to be involved in graphing requests. If the report was not returned because it exceeded memory limits, the graphing request would never be issued.
The following section provides an overview of each MCM setting. For detailed information about each setting, see the Intelligence Server Configuration online help (click the Help button in the Intelligence Server Configuration dialog box).

The Enable single memory allocation governing option lets you specify how much memory can be reserved for a single Intelligence Server operation at a time. When this option is enabled, each memory request is compared to the Maximum single allocation size (MBytes) setting. If the request exceeds this limit, it is denied. For example, if the allocation limit is set to 100 MB, a request for 120 MB is denied, while a later request for 95 MB is allowed.

If the Intelligence Server machine has additional software running on it, you may wish to set aside some memory for those processes to use. To reserve this memory, specify the Minimum reserved memory in terms of either a number of MB or a percent of total system memory. In this case, the total available memory is calculated as the initial size of the page file plus the RAM. It is possible for a machine to have more virtual memory than MCM knows about, if the maximum page file size is greater than the initial size. Intelligence Server always reserves a minimum of 500 MB for its own operation. If the machine does not have this much memory, or if the Minimum reserved memory would leave less than 500 MB available to Intelligence Server, no memory is reserved for other processes.

When MCM receives a request that would cause Intelligence Server's current memory usage to exceed the Maximum use of virtual address space, it denies the request and goes into memory request idle mode. In this mode, MCM denies any requests that would deplete memory. MCM remains in memory request idle mode until the memory used by Intelligence Server falls below a certain limit, known as the low watermark.
For information on how the low watermark is calculated, see Memory watermarks, page 424. For information about how MCM handles memory request idle mode, see Memory request idle mode, page 426.
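As a rough illustration, the single-allocation check and the reserved-memory arithmetic described above can be modeled in a few lines of Python (a sketch under stated assumptions; function names and defaults are illustrative, not MicroStrategy APIs):

```python
def single_allocation_allowed(request_mb, max_single_allocation_mb=100):
    """Each memory request is compared against the Maximum single
    allocation size setting; larger requests are denied outright."""
    return request_mb <= max_single_allocation_mb

def memory_available_to_iserver(ram_mb, initial_pagefile_mb,
                                min_reserved_mb=0, min_reserved_pct=0.0):
    """Total available memory = RAM + initial page file size, minus the
    Minimum reserved memory (given as MB or as a percent of the total).
    Intelligence Server always keeps at least 500 MB for itself; if the
    reservation would leave less than that, nothing is reserved."""
    total = ram_mb + initial_pagefile_mb
    reserved = max(min_reserved_mb, total * min_reserved_pct / 100.0)
    if total - reserved < 500:
        reserved = 0  # floor: I-Server keeps a minimum of 500 MB
    return total - reserved

print(single_allocation_allowed(120))  # False - denied
print(single_allocation_allowed(95))   # True - allowed
# 2 GB RAM + 2 GB page file, 512 MB reserved for other processes:
print(memory_available_to_iserver(2048, 2048, min_reserved_mb=512))  # 3584
```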
The Memory request idle time is the longest amount of time MCM remains in memory request idle mode. If the memory usage has not fallen below the low watermark by the end of the Memory request idle time, MCM restarts Intelligence Server. Setting the idle time to -1 causes Intelligence Server to remain idle until the memory usage falls below a certain limit, known as the low watermark. For information on how watermarks are calculated, see Memory watermarks, page 424.
- It is smaller than the Maximum single allocation size setting.
- It is smaller than the high watermark (or the low watermark, if Intelligence Server is in memory request idle mode). These watermarks are derived from the current Intelligence Server memory usage and the Maximum use of virtual address space and Minimum reserved memory settings. For a more detailed explanation of the memory watermarks, see Memory watermarks, page 424.
- It is smaller than 80% of the largest contiguous block of free memory, to account for the memory fragmentation that occurs in any large-scale server application.
To determine whether a memory request is granted or denied, MCM follows the logic in the flow diagram below.
[Flow diagram: MCM's decision logic for a memory request with a contract size. Key: I-Server = Intelligence Server, PB = Private Bytes, VB = Virtual Bytes, HWM = High Watermark, LWM = Low Watermark. In outline: a request whose contract size exceeds the Maximum single allocation size is denied. The first time I-Server VB exceeds the Maximum use setting, HWM2 is calculated from I-Server PB. The maximum contract request size is then calculated as HWM - [1.05 * (I-Server PB) + Contracted Memory] in normal operation, or LWM - [1.05 * (I-Server PB) + Contracted Memory] if I-Server is in Memory Request Idle mode. A request larger than this maximum is denied; in normal operation the denial causes MCM to enter Memory Request Idle mode, and in idle mode a denial after the idle time is exceeded restarts I-Server. Otherwise, if the record of the largest free block has not been updated in the last 100 requests it is refreshed, and a request larger than 80% of the largest free block is denied. Any remaining request is granted, and MCM exits Memory Request Idle mode if it was in it.]
Memory watermarks
The high watermark (HWM) represents the highest value that the sum of private bytes and outstanding memory contracts can reach before triggering memory request idle mode. The low watermark (LWM) represents the value to which Intelligence Server's private byte usage must drop before MCM exits memory request idle mode. Before every memory request for more than 10 MB (or every tenth request, if none of those requests are for more than 10 MB), MCM recalculates the high and low watermarks. Two possible values are calculated for the high watermark: one based on virtual memory, and one based on virtual bytes. For an explanation of the different types of memory, such as virtual bytes and private bytes, see Memory, page 414.
The high watermark for virtual memory (HWM1 in the diagram above) is calculated as (Intelligence Server private bytes + available system memory). It is recalculated for each potential memory depletion. The available system memory is calculated using the Minimum reserved memory limit if the actual memory used by other processes is less than this limit.
The high watermark for virtual bytes (HWM2 in the diagram above) is calculated as (Intelligence Server private bytes). It is calculated the first time the virtual byte usage exceeds the amount specified in the Maximum use of virtual address space setting. Since MCM ensures that Intelligence Server private byte usage cannot increase beyond the initial calculation, it is not recalculated until after Intelligence Server returns from the memory request idle state.
The high watermark used by MCM is the lower of these two values. This accounts for the scenario in which, after the virtual bytes HWM is calculated, Intelligence Server releases memory but other processes consume more available memory. This can cause a later calculation of the virtual memory HWM to be lower than the virtual bytes HWM. The low watermark is calculated as 95 percent of the HWM. It is recalculated every time the HWM changes.
For normal Intelligence Server operation, the maximum request size is based on the high watermark. The formula is [HWM - (1.05 * (Intelligence Server Private Bytes) + Outstanding Contracts)]. In memory request idle mode, the maximum request size is based on the low watermark. The formula is [LWM - (1.05 * (Intelligence Server Private Bytes) + Outstanding Contracts)]. The value of 1.05 is a built-in safety factor.
For normal Intelligence Server operation, if the request is larger than the maximum request size, MCM denies the request. It then enters memory request idle mode. If MCM is already in memory request idle mode and the request is larger than the maximum request size, MCM denies the request. It then checks whether the memory request idle time has been exceeded, and if so, it restarts Intelligence Server. For a detailed explanation of memory request idle mode, see Memory request idle mode, page 426. If the request is smaller than the maximum request size, MCM performs a final check to account for potential fragmentation of virtual address space. MCM checks whether its record of the largest free block of memory has been updated in the last 100 requests, and if not, updates the record with the size of the current largest free block. It then compares the request against the largest free block. If the request is more than 80% of the largest free block, the request is denied. Otherwise, the request is granted.
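The two formulas above, together with the fragmentation check, can be combined into a small sketch (illustrative Python, not MicroStrategy code; all values in MB):

```python
def max_request_mb(watermark_mb, private_bytes_mb, outstanding_contracts_mb):
    """Maximum memory request MCM will grant, per the formula above:
    watermark - (1.05 * private bytes + outstanding contracts).
    The watermark is the HWM in normal operation or the LWM in memory
    request idle mode; 1.05 is the built-in safety factor."""
    return watermark_mb - (1.05 * private_bytes_mb + outstanding_contracts_mb)

def grant(request_mb, watermark_mb, private_mb, contracts_mb, largest_free_mb):
    """Sketch of the final grant/deny decision, including the
    fragmentation check against 80% of the largest free block."""
    if request_mb > max_request_mb(watermark_mb, private_mb, contracts_mb):
        return False  # denied; MCM enters (or stays in) idle mode
    return request_mb <= 0.8 * largest_free_mb

# HWM 2000 MB, 1000 MB private bytes, 200 MB outstanding contracts:
print(max_request_mb(2000, 1000, 200))    # 750.0
print(grant(500, 2000, 1000, 200, 800))   # True
print(grant(700, 2000, 1000, 200, 800))   # False (more than 80% of 800)
```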
After granting a request, if MCM has been in memory request idle mode, it returns to normal operation.
- Intelligence Server's memory usage drops below the low watermark. In this case, MCM exits memory request idle mode and resumes normal operation.
- MCM has been in memory request idle mode for longer than the Memory request idle time. In this case, MCM shuts down and restarts Intelligence Server. This frees the memory that had been allocated to Intelligence Server tasks and avoids memory depletion.
The Memory request idle time limit is not enforced via an internal clock or scheduler. Instead, after every denied request MCM checks how much time has passed since the memory request idle mode was triggered. If this time is more than the memory request idle time limit, then Intelligence Server restarts. This eliminates a potentially unnecessary Intelligence Server restart. For example, a memory request causes the request idle mode to be triggered, but then no more requests are submitted for some time. A scheduled check at the end of the Memory request idle time would restart Intelligence Server even though no new jobs are being submitted. However, since Intelligence Server is completing its existing contracts and releasing memory, it is possible that the next contract request submitted will be below the LWM. In this case, MCM accepts the request and resumes normal operation, without having to restart Intelligence Server. MicroStrategy recommends setting the Memory request idle time to slightly longer than the amount of time it takes most large reports in your system to run. This way, Intelligence Server does not shut down needlessly, while waiting for a task to complete. To help you determine the time limit, use Enterprise Manager to find out the average and maximum
report execution times for your system. For instructions on using Enterprise Manager, see Analyzing system usage with Enterprise Manager, page 251.
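A minimal model of this check-on-denial behavior follows (hypothetical class and method names, not a MicroStrategy API; times in seconds):

```python
import time

class IdleModeTimer:
    """Sketch of how the Memory request idle time is enforced: there is
    no internal clock or scheduler; the elapsed time is checked only
    when a request is denied while in memory request idle mode."""

    def __init__(self, idle_time_limit_sec):
        self.limit = idle_time_limit_sec
        self.entered_at = None

    def enter_idle_mode(self, now=None):
        self.entered_at = time.time() if now is None else now

    def on_denied_request(self, now=None):
        """Return True if Intelligence Server should restart."""
        now = time.time() if now is None else now
        if self.limit == -1:  # -1: wait indefinitely for the LWM
            return False
        return (now - self.entered_at) > self.limit

timer = IdleModeTimer(idle_time_limit_sec=600)
timer.enter_idle_mode(now=0)
print(timer.on_denied_request(now=300))   # False - keep waiting
print(timer.on_denied_request(now=900))   # True - restart I-Server
```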
In this example, MCM grants memory request A. Once granted, a new memory contract is accounted for in the available system memory. Request B is then denied because it exceeds the high watermark, as derived from the Maximum use of virtual address space setting. Once request B has been denied, Intelligence Server enters the memory request idle mode. In this mode of operation, all requests that would push the total memory used above the low watermark are denied. In the example above, request C falls above the LWM. Since Intelligence Server is in memory request idle mode, this request will be denied unless Intelligence Server releases memory from elsewhere, such as other completed contracts. Request D is below the LWM, so it will be granted. Once it has been granted, Intelligence Server switches out of request idle mode and resumes normal operation.
If Intelligence Server receives no requests below the LWM before the Memory request idle time is exceeded, MCM shuts down and restarts Intelligence Server.
In this example, Intelligence Server has increased its private byte usage to the point that existing contracts are pushed above the high watermark. Request A is denied because the requested memory would further deplete Intelligence Server's virtual address space. Once request A has been denied, Intelligence Server enters memory request idle mode. In this mode of operation, all requests that would push the total memory used above the low watermark are denied. The low watermark is 95% of the high watermark. In this scenario, the HWM is the amount of Intelligence Server private bytes at the time when the memory depletion was first detected. Once the virtual bytes HWM has been set, it is not recalculated. Thus, for Intelligence Server to exit memory request idle mode, it must release some of the private bytes. Although the virtual bytes HWM is not recalculated, the virtual memory HWM is recalculated after each request. MCM calculates the LWM based on the lower of the virtual memory HWM and the virtual bytes HWM. This accounts for the scenario in which, after the virtual bytes HWM is calculated, Intelligence Server releases memory but other processes consume more available memory. This can cause a later calculation of the virtual memory HWM to be lower than the virtual bytes HWM. Intelligence Server remains in memory request idle mode until the memory usage looks like it does at the time of request B. The Intelligence Server private byte usage has dropped to the point where a request can be made that is below the LWM. This request is granted, and MCM exits memory request idle mode. If Intelligence Server does not free up enough memory to process request B before the Memory request idle time is exceeded, MCM restarts Intelligence Server.
[Diagram: factors that influence Intelligence Server capacity — System resources; Intelligence Server Capacity; Requests; Architecture; User Sessions (active users, user resources, user profile); Users; Executing Jobs (connection threads, job prioritization, results processing); Databases; Report design]
- How the data warehouse is configured (see How the data warehouse can affect performance, page 431)
- Where you place machines relative to each other, and certain characteristics of the network across which information is sent, such as bandwidth (see How the network can affect performance, page 432)
- Whether you cluster several Intelligence Servers together, and what benefits you can get from clustering (see How clustering can affect performance, page 435)
Platform considerations
The size and speed of the machine(s) hosting your data warehouse and the database platform (RDBMS) running your data warehouse both affect the system's performance. While MicroStrategy does not give recommendations about data warehouse platforms, certain RDBMSs are better suited than others to handle very large data warehouses and large numbers of users. You should have an idea of the number of users your system needs to serve and research which RDBMS can handle that type of load.
- Will you use a normalized, moderately normalized, or fully denormalized schema?
- What kind of lookup, relate, and fact tables will you need?
- What aggregate tables will you need?
- What tables do you need to partition, and how?
- What tables will you index?
For more information about data warehouse design and data modeling, see the MicroStrategy Basic Reporting Guide and the Advanced Reporting Guide.
[Diagram: network communication in a MicroStrategy system among the Web client, MicroStrategy Web server, Desktop, the Intelligence Server cluster, the metadata, and the data warehouse; the numbered steps correspond to the table below.]

Step  Protocol  Details
1     HTTP      HTML sent from Web server to client. Data size is small compared to other points because results have been incrementally fetched from Intelligence Server and HTML results do not contain any unnecessary information.
2     TCP/IP    XML requests are sent to Intelligence Server. XML report results are incrementally fetched from Intelligence Server.
3     TCP/IP    Requests are sent to Intelligence Server. (No incremental fetch is used.)
4     TCP/IP    Broadcasts between all nodes of the cluster (if implemented): metadata changes, Inbox, report caches.
5     TCP/IP    Files containing cache and Inbox messages are exchanged between Intelligence Server nodes. These files may also be exchanged between Intelligence Server nodes and a shared cache file server, if implemented (see Sharing caches in a cluster, page 338).
6     ODBC      Object requests and transactions to metadata. Request results are stored locally in the Intelligence Server object cache.
7     ODBC      Complete result set is retrieved from the database and stored in Intelligence Server memory and/or cache files.
- Place the Web server machine(s) close to the Intelligence Server machine(s).
- Place Intelligence Server close to both the data warehouse and the metadata repository.
- Dedicate a machine to the metadata repository.
- If you use Enterprise Manager, dedicate a machine to the Enterprise Manager database (statistics tables and data warehouse).
- If you have a clustered environment with a shared cache file server, place the shared cache file server close to the Intelligence Server machines.
These depend on the type of reports your users typically run. This, in turn, determines the load they place on the system and how much network traffic occurs between the system components. This is discussed next.
[Diagram: network connections labeled A, B, and C linking the Web client, MicroStrategy Web server, Intelligence Server(s), and data warehouse.]
- The load at C is determined by the number of rows pulled back from the data warehouse. Sending SQL and retrieving objects from the metadata result in minimal traffic. Cached reports do not cause any network traffic at C.
- Incremental fetch size directly influences the amount of traffic at A.
- Report manipulations that do not cause SQL to be generated and sent to the data warehouse (such as pivot, sort, and page-by) are similar to running cached reports. Report manipulations that cause SQL to be generated and sent to the data warehouse are similar to running non-cached reports of the same size.
- Graphics increase network bandwidth at B.
After noting where the highest load is on your network, you can tune it or change the placement of system components to improve the network's performance.
You can tell whether your network is negatively impacting your system's performance by monitoring how much of your network's capacity is being used. In the Windows Performance Monitor, for the Network Interface object, watch the Total Bytes/sec counter as a percentage of your network's bandwidth. If it is consistently greater than 60% (for example), the network may be negatively affecting the system's performance. You may wish to use a figure other than 60% for your system. To calculate the network capacity utilization percentage, multiply Total Bytes/sec by 8 (because 1 byte = 8 bits) and divide the result by the total capacity in bits/second. The Current Bandwidth counter in Performance Monitor gives only an approximate value of total capacity because it is an estimate. You may want to use another network monitoring utility (such as NetPerf) to get the actual bandwidth figure.
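The utilization calculation above can be sketched as a small function (illustrative only; the counter value and link capacity are inputs you supply from your own monitoring tools):

```python
def network_utilization_percent(total_bytes_per_sec: float,
                                bandwidth_bits_per_sec: float) -> float:
    """Return network capacity utilization as a percentage.

    total_bytes_per_sec comes from the Performance Monitor
    Total Bytes/sec counter; bandwidth_bits_per_sec is the link
    capacity in bits per second.
    """
    bits_per_sec = total_bytes_per_sec * 8  # 1 byte = 8 bits
    return bits_per_sec / bandwidth_bits_per_sec * 100

# A 100 Mbit/s link carrying 8,000,000 bytes/sec is at 64% utilization,
# above the example 60% threshold mentioned in the text.
print(network_utilization_percent(8_000_000, 100_000_000))  # 64.0
```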
Memory capacity: Because multiple machines share the work, and each may have up to 2 GB of virtual address space (or possibly 3 GB), the total memory available to the system is significantly increased.
Fault tolerance If one of the machines fails, the other(s) can still run.
The clustering feature is built into Intelligence Server and is available out of the box if you have the proper license. It is discussed in detail above (see Clustering multiple machines in the system, page 329).
Designing reports
In addition to the heavy toll that large reports can take on system performance, a report's design can also affect it. Some features consume more of the system's capacity than others when they are used. This factor influencing Intelligence Server capacity is highlighted in the diagram below.
[Diagram: Intelligence Server capacity factors. System resources and requests feed Intelligence Server capacity, shaped by the architecture, users (user sessions: active users, user resources, user profile), databases (executing jobs: connection threads, job prioritization, results processing), and report design (highlighted).]
- Page-by feature (Page-by feature, page 436)
- OLAP Services features (Intelligent Cube reports) (OLAP Services reports, page 437)
- Prompt complexity (Prompt complexity, page 439)
- Documents (Documents, page 439)
For more information about these features, see the MicroStrategy Advanced Reporting Guide.
Page-by feature
If designers or users create reports that use the page-by feature, they can potentially use significant system resources. This is because the entire report is held in memory even though the user is seeing only a portion of it at a time. To lessen their potential impact, consider splitting large reports with the page-by feature into multiple reports and eliminating the use of page-by.
- Add or remove rows or columns of information (attributes or metrics) that are already contained in the data definition (the user must have the Use report objects window or Web use report objects window privilege)
- Create derived metrics, which immediately calculate values using metrics already in the data definition (the user must have the Create derived metrics or Web create derived metrics privilege)
- Filter the data that is displayed in the report (the user must have the Use view filter editor or Web use view filter editor privilege)
For more information about Intelligent Cube reports, data definitions, and view definitions, see the MicroStrategy Advanced Reporting Guide.
- Less data warehouse execution if several users access the same data definition but with customized view definitions
- Users to manipulate their reports in various ways, thus changing the view definition but not the underlying data definition
Because of these features and the way the reports are held in memory on Intelligence Server, the reports have the potential to use more memory than standard reports if:
- The size of a report's data definition plus its view definition is significantly more than that of a report not using OLAP Services functionality
- The enhanced OLAP Services functionality results in more report manipulations
However, this increased memory use may be worth it if you are looking for ways to reduce the load on your data warehouse. It may also be worth it if your users like the flexibility of the reports (being able to drag and drop items on or off the report grid). This reiterates the trade-offs you must make to meet your requirements of user sessions and job execution given the system resources that are available. The memory required for the report view instance depends on the amount of data seen in the view. For example, if a report is created that returns 200,000 orders, but the view only displays 500 orders, the report view is much smaller than if the view returns all 200,000 orders. Below is an example of an Intelligent Cube report and how the report view sizes vary depending on how the view filter changes (these results were achieved using the MicroStrategy Tutorial project).
[Table: report view sizes for the view filters Order < 500, Order < 5000, Order < 50000, and Order < 200000.]
You should monitor Intelligence Server CPU utilization and memory use closely if users are making extensive use of OLAP Services functionality. You should consider these guidelines when implementing OLAP Services:
The cache size of the data definition and report view is the easiest way to determine the size of Intelligent Cube reports. The cache size reported in the Cache Monitor is typically 30-50% smaller than the version held in memory. In theory, the largest cache file that may exist is limited only by the amount of virtual address space available for the Intelligence Server process. This theoretical limit for an Intelligent Cube report is a cache over 400MB (created with a report of approximately 40 million data cells). With an Intelligent Cube of this size, Intelligence Server has limited memory resources for other types of operations.
The largest practical size of an Intelligent Cube cache depends on many factors, including system resources, expected response time, and user concurrency. Based on testing at MicroStrategy, we recommend that an Intelligent Cube cache be no greater than 2 MB (created with a report of approximately 250,000 data cells). This limit should be lower in high-concurrency environments, for example.
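As a rough sizing sketch of the guideline above (illustrative only; treating rows times metric columns as a proxy for data cells is an assumption, not an official MicroStrategy sizing formula):

```python
# Practical guideline from the testing described above (~250,000 data cells).
RECOMMENDED_MAX_CELLS = 250_000

def estimate_data_cells(rows: int, metric_columns: int) -> int:
    """Rough proxy: one data cell per row per metric column."""
    return rows * metric_columns

def within_practical_limit(rows: int, metric_columns: int) -> bool:
    """True if the estimated cell count is within the recommended limit."""
    return estimate_data_cells(rows, metric_columns) <= RECOMMENDED_MAX_CELLS

# 200,000 orders with 5 metrics is 1,000,000 cells, well over the guideline.
print(within_practical_limit(200_000, 5))  # False
print(within_practical_limit(40_000, 5))   # True
```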
The primary way to manage the size of these reports is to limit the memory use on the Intelligence Server. For more information about this, see Governing Intelligence Server memory use, page 419.
Prompt complexity
Each attribute element or hierarchy prompt requires an element request to be executed by Intelligence Server. The number of prompts used and the number of elements returned from the prompts determine how much load is placed on the Intelligence Server. Make sure element caches are being used effectively. For details, see Element caches, page 201.
Documents
If you allow users to execute documents, they may submit multiple reports to the data warehouse simultaneously. Depending on how many documents your users execute and the number of child reports those documents submit, documents can use a lot of system resources. To limit this impact, create report caches for the child reports used in document objects.
settings throughout the system. This factor influencing Intelligence Server capacity is highlighted in the diagram below.
[Diagram: Intelligence Server capacity factors. System resources and requests feed Intelligence Server capacity, shaped by the architecture, users (user sessions: active users, user resources, user profile), databases (executing jobs: connection threads, job prioritization, results processing), and report design.]
These governors are arranged by where in the interface you can find them.
- Intelligence Server Configuration editor (including project distribution settings) (Intelligence Server Configuration editor, page 440)
- Project configuration (Project configuration, page 444)
- Database connections (Database connection, page 446)
- VLDB settings (VLDB settings, page 447)
You can access the Intelligence Server Configuration editor from Desktop or Control Center in the manner described above.
Only the categories and settings in the Intelligence Server Configuration editor that affect system scalability are described below. Other categories and settings that appear in the Configuration editor are described elsewhere in this guide.
- Controls the frequency at which cache and History List messages are backed up to disk. A value of 0 means that cache and History List messages are backed up immediately after they are created. (See pages 200 and 223.)
- The maximum number of concurrent jobs that may exist on an Intelligence Server. (See page 387.)
- The maximum number of user sessions (connections) for an Intelligence Server. A single user account may establish multiple sessions to an Intelligence Server. (See page 375.)
- The time allowed for a Desktop user to remain idle before his or her session is ended. An idle session is one from which no requests are submitted to Intelligence Server. (See page 376.)
- The time allowed for a Web user to remain idle before his or her session is ended. (See page 376.)
Note for Report Services documents in Web: Set the Web user session idle time (sec) to 3600 to avoid a project source timeout, if designers will be building documents and dashboards in Web. Then restart Intelligence Server.
- The maximum number of History (Inbox) messages that may exist in a user's History List at any time. When the limit is reached, the oldest message is removed. (See page 379.)
- The maximum number of XML cells in a report result set that Intelligence Server can send to the MicroStrategy Web products at a time. When this limit is reached, the user sees an error message along with the partial result set. The user can incrementally fetch the remaining cells. (See page 408.)
- XML Generation: Maximum number of XML drill paths. The maximum number of attribute elements users can see in the drill-across menu in MicroStrategy Web products. If this setting is set too low, the user does not see all of the available drill attributes. (See page 410.)
- XML Generation: Maximum memory consumption for XML (MB). The maximum amount of memory (in megabytes) that Intelligence Server can use to generate a report or document in XML. If this limit is reached, the XML document is not generated and the user sees an error message. (See page 409.)
- PDF Generation: Maximum memory consumption for PDF (MB). The maximum amount of memory (in megabytes) that Intelligence Server can use to generate a report or document in PDF. If this limit is reached, the PDF document is not generated and the user sees an error message. (See page 409.)
- Excel Generation: Maximum memory consumption for Excel (MB). The maximum amount of memory (in megabytes) that Intelligence Server can use to generate a report or document in Excel. If this limit is reached, the Excel document is not generated and the user sees an error message. (See page 409.)
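The incremental-fetch behavior described above amounts to simple paging arithmetic. A sketch (the cell counts and limit are hypothetical examples, not product defaults):

```python
import math

def fetches_needed(total_cells: int, max_cells_per_fetch: int) -> int:
    """Number of incremental fetches needed to deliver every XML cell
    of a result set under the per-fetch cell limit."""
    return math.ceil(total_cells / max_cells_per_fetch)

# A 500,000-cell result under a hypothetical 50,000-cell limit
# requires 10 incremental fetches.
print(fetches_needed(500_000, 50_000))  # 10
```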
- A check box that enables the governors Maximum Intelligence Server use of total memory and Minimum machine free physical memory. (See page 429.)
- Maximum Intelligence Server use of total memory. The maximum amount of total system memory (RAM + page file) that may be used by the Intelligence Server process (MSTRSVR.exe), compared to the total amount of memory on the machine. If the limit is met, all requests from MicroStrategy Web products of any nature (log in, report execution, search, folder browsing) are denied until the conditions are resolved. (See page 429.)
- Minimum machine free physical memory. The minimum amount of physical memory (RAM) that must be available, as a percentage of the total amount of physical memory on the machine. If the limit is met, all requests from MicroStrategy Web products of any nature (log in, report execution, search, folder browsing) are denied until the conditions are resolved. (See page 430.)
- Enable single memory allocation governing. A check box that enables the governor Maximum single allocation size. (See page 420.)
- Maximum single allocation size (MBytes). Prevents Intelligence Server from granting a memory request that would exceed this limit. (See page 420.)
- Enable memory contract management. A check box that enables the governors Minimum reserved memory (MB or %), Maximum use of virtual address space (%), and Memory request idle time. (See page 420.)
- Minimum reserved memory (MB or %). The amount of system memory, in either MB or a percentage, that must be reserved for processes external to Intelligence Server. (See page 420.)
- Maximum use of virtual address space (%). The maximum percentage of the process virtual address space that Intelligence Server can use before entering memory request idle mode. (See page 420.)
- Memory request idle time. The amount of time Intelligence Server denies requests that may result in memory depletion. If Intelligence Server does not return to acceptable memory conditions before the idle time is reached, Intelligence Server shuts down. (See page 420.)
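The governor checks above can be sketched as a decision function (illustrative only; the function name, parameters, and thresholds are assumptions, not Intelligence Server internals):

```python
def deny_memory_request(allocation_mb: float,
                        max_single_allocation_mb: float,
                        free_physical_pct: float,
                        min_free_physical_pct: float) -> bool:
    """Illustrative governor check: deny a memory request if it exceeds
    the single-allocation limit, or if free physical memory has fallen
    below the configured minimum percentage."""
    if allocation_mb > max_single_allocation_mb:
        return True  # single allocation exceeds the configured limit
    if free_physical_pct < min_free_physical_pct:
        return True  # machine is low on free physical memory
    return False

print(deny_memory_request(600, 512, 40, 10))  # True (allocation over limit)
print(deny_memory_request(100, 512, 5, 10))   # True (low free memory)
print(deny_memory_request(100, 512, 40, 10))  # False
```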
- The maximum amount of memory that can be used for report instances referenced by messages in the working set. (See page 380.)
- The amount of time (the delay) before the project is loaded on another server to maintain the minimum level of availability. (See page 358.)
- When the conditions that caused a project failover disappear, the failover configuration reverts automatically to the original configuration. This setting is the amount of time (the delay) before the failover configuration reverts to the original configuration. (See page 358.)
Project configuration
These governors can be set per project. To access them, right-click the project, select Project Configuration, then select the category as noted below.
- Prevents too many attribute elements from being returned from the data warehouse at a time. (See page 205.)
- The amount of time a report request can take before it is canceled. This accounts for total time spent resolving prompts, waiting for autoprompts, waiting in the job queue, executing SQL, performing analytical calculations, and preparing report results. (See page 383.)
- The maximum number of rows that may be returned to Intelligence Server for a report request. This setting is applied by the Query Engine when retrieving the results from the database. It is the default for all reports in a project and can be overridden for individual reports using VLDB settings. (See page 401.)
- The maximum number of rows that may be retrieved from the data warehouse for an element request. (See page 402.)
- The maximum number of rows that may be in an intermediate result set used for analytical processing in Intelligence Server. This is the default for all reports in a project and can be overridden for individual reports using VLDB settings. (See page 403.)
- Jobs per user account. The maximum number of concurrent jobs per user account and project.
- Jobs per user session. The maximum number of concurrent jobs a user may have during a session.
- Executing jobs per user. The maximum number of concurrent jobs a single user account may have executing in the project. If this condition is met, additional jobs are placed in the queue until executing jobs finish. (See page 384.)
- Jobs per project. The maximum number of concurrent jobs that the project can process at a time. (See page 386.)
- The maximum number of user sessions (connections) allowed in the project. When the limit is reached, users other than the Administrator cannot log in; you may need to disconnect current users or increase the governing setting. (See page 375.)
- The maximum amount of memory reserved for the creation and storage of report caches. This setting should be set to at least the size of the largest cache file, or that report will not be cached. (See page 190.)
- Maximum number of caches. The maximum number of report caches the project may have at a time. (See page 183.)
- The amount of time a cache remains valid. (See page 200.)
- (Server) The amount of memory Intelligence Server allocates for object caching. (See page 215.)
- (Client) The amount of memory Desktop allocates for object caching. (See page 215.)
- (Server) The amount of memory Intelligence Server allocates for element caching. (See page 215.)
Database connection
This set of governors can be set by modifying a project source's database instance: either the number of connections on its Job Prioritization tab, or the database connection itself. See the page noted below for each governor for more details.
ODBC Settings
- Number of database connection threads. The sum of the Number of database connections by priority (High, Medium, and Low) allowed at a time between Intelligence Server and the data warehouse, set on the database instance's Job Prioritization tab. (See page 389.)
- Maximum cancel attempt time (sec). The maximum amount of time the Query Engine waits for a successful attempt to cancel a query.
- Maximum query execution time (sec). The maximum amount of time a single pass of SQL may execute on the data warehouse.
- Maximum connection attempt time (sec). The maximum amount of time Intelligence Server waits to connect to the data warehouse.
- The amount of time an active database connection thread remains open and cached on Intelligence Server. (See page 393.)
- The amount of time an inactive database connection thread remains cached until it is terminated. (See page 394.)
VLDB settings
These settings can be made in the VLDB properties for either reports or the database instance. For information about accessing them, see the noted page for each property in the table below. For complete details about all VLDB properties, see the Details for all VLDB properties appendix in this guide.
- Intermediate row limit. The maximum number of rows that may be in an intermediate table used by Intelligence Server. This setting overrides the project's default setting Number of intermediate result rows. (See page 403.)
- Results Set Row Limit. The maximum number of rows that can be in a report result set. This setting overrides the project's default setting Number of report result rows. (See page 401.)
- SQL time out (per pass). The amount of time, in seconds, any given SQL pass can execute on the data warehouse. This can be set at the database instance and report levels. (See page 387.)
- Maximum SQL size. The maximum size (in bytes) the SQL statement can be. This can be set at the database instance level. (See page 388.)
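The override relationship described above, where a report-level VLDB setting takes precedence over the project default, can be sketched as follows (the function and parameter names are illustrative, not a MicroStrategy API):

```python
from typing import Optional

def effective_row_limit(project_default: int,
                        report_vldb_limit: Optional[int]) -> int:
    """A report-level VLDB row limit, if set, overrides the project's
    default; otherwise the project default applies."""
    if report_vldb_limit is not None:
        return report_vldb_limit
    return project_default

print(effective_row_limit(100_000, None))   # 100000 (project default applies)
print(effective_row_limit(100_000, 5_000))  # 5000 (report-level override)
```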
- How you design Narrowcast Server applications (Application design considerations, page 448)
- If and how users employ the Web delivery features (Web delivery (Send now and Scheduled e-mail), page 449)
- How Narrowcast Server connects to Intelligence Server (How Narrowcast Server connects to Intelligence Server, page 450)
Personal report execution (PRE). PRE executes a separate report for each set of users with unique personalization. Reports may be executed under the context of the corresponding Intelligence Server user if desired; using this option, security profiles defined in MicroStrategy Desktop are maintained. However, if many users all have unique personalization, this option can place a large load on Intelligence Server.
Personal page execution (PPE) PPE executes one multi-page report for all users in a segment and then uses this single report to provide personalized content (pages) for different users. All users have their reports executed under the context of the same Intelligence Server user, so individual security profiles are not maintained. However, load on the Intelligence Server may be significantly lower than for PRE in some cases.
For more detailed information about these options, refer to the Narrowcast Server Application Designer Guide (specifically, see the chapter on Page Personalization and Dynamic Subscriptions). Two additional things you should consider in designing your Narrowcast Server applications are:
Timing of Narrowcast Server jobs. You can schedule reports to run at off-peak hours, when the Intelligence Server's load from MicroStrategy Web products and Desktop users is lowest.
Intelligence Server selection. You can send Narrowcast Server jobs to a specific Intelligence Server to ensure that some Intelligence Servers are used solely for MicroStrategy Web products or Desktop. For more information, see Information sources, page 450.
Information sources
Narrowcast Server can connect to one or more specific Intelligence Servers, within or outside of a clustered configuration. It does this by using one or more information sources (ISs) that point and connect to the desired Intelligence Server(s). Things to note:
- Intelligence Server provides automatic load balancing for Narrowcast Server requests. Once an IS is configured, jobs using the IS go to the appropriate Intelligence Server for the most efficient response.
- Narrowcast Server can connect to any Intelligence Server in a cluster; this does not need to be the primary node.
- You can balance the load manually by creating multiple ISs, or by using a single IS pointing to one Intelligence Server, thereby designating it to handle all Narrowcast Server requests.
For more information, refer to the MicroStrategy Narrowcast Server System Administrator Guide. In particular, see the section on administration objects and the subsection on information source modules and information sources in Chapter 1.
AUTOMATING TASKS
Introduction
This section describes how you can automate certain MicroStrategy jobs and administrative tasks. Methods of automation include:
- Scheduling jobs and administrative tasks, page 451
- Automating processes using Command Manager, page 464
- Using automated installation techniques, page 474
Automating tasks
Time-sensitive, time-consuming, repetitive, and bulk tasks are ideal candidates for scheduling. Running a report is the most commonly scheduled task, since pre-executing reports at off-peak times greatly improves the overall performance of the system. Certain administration tasks can also be scheduled. The scheduling function is turned on by default; you can disable it by opening the Intelligence Server Configuration editor and, in the Server: Advanced subcategory, clearing the Use MicroStrategy Scheduler check box. This section discusses the following topics concerning scheduling:
- Creating schedules, page 452
- Scheduling reports, page 457
- Scheduling administration tasks, page 461
- Schedules for projects distributed across nodes in a cluster, page 463
- Managing scheduled requests, page 463
Creating schedules
A schedule is a MicroStrategy object that contains information specifying when a task is to be executed. A schedule is not specific to a particular task and can be used for multiple tasks; it can be shared across projects within the metadata. To create schedules, you need the Create Configuration Object and Use Schedule Manager privileges, as well as Write ACL access to the Schedule folder. To create effective and useful schedules, you need a clear understanding of your users' needs and the usage patterns of the overall system. Schedules must be created before they can be linked to any tasks.
- Time-triggered schedules (see Creating Time-triggered schedules, page 453)
- Event-triggered schedules (see Creating event-triggered schedules, page 455)
When the time or event criterion specified in a schedule occurs, MicroStrategy Intelligence Server executes all tasks associated with it. Each schedule is available to user groups based on the Access Control List settings of the schedule object. You can view all the schedules in the Schedule Manager. All the schedules are displayed with the following information:
- Name: the name of the schedule
- Owner: the name of the person who created the schedule
- Modification time: the time when the schedule was last changed
- Description: details of the schedule
You can also use the following Command Manager script to view the schedules:

LIST [ALL] SCHEDULERELATIONS [FOR (SCHEDULE "<schedule_name>" | (USER | GROUP) "<user_or_group_name>" | REPORT "<rep_or_doc_name>" IN "<location_path>")] IN PROJECT "<project_name>";

By default, this script is located at C:\Program Files\MicroStrategy\Administrator\Command Manager\Outlines\Schedule_Relations_Outlines.
Time-triggered schedules are useful for creating a batch window, allowing large, resource-intensive tasks to run at off-peak times, such as overnight or over a weekend. Note the following:
- In a clustered environment, tasks associated with time-triggered schedules are executed on the primary node of the cluster only.
- Time-triggered schedules execute according to the time on the machine where they were created. For example, a schedule is created using Desktop on a machine that is in the Pacific time zone (GMT -8:00). The machine is connected to an Intelligence Server in the Eastern time zone (GMT -5:00). The schedule executes three hours later than the scheduled time on the Intelligence Server.
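The time-zone offset in the note above can be illustrated with Python's zoneinfo module (the 1:00 AM schedule time and the date are hypothetical):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A schedule created for 1:00 AM on the Desktop machine's clock (Pacific).
scheduled = datetime(2024, 1, 15, 1, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

# The same instant on the Intelligence Server's clock (Eastern):
on_server = scheduled.astimezone(ZoneInfo("America/New_York"))
print(on_server.hour)  # 4 (three hours later than the scheduled wall-clock time)
```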
1 From MicroStrategy Desktop, expand Administration in the folder list.
2 Click the Schedule Manager.
3 Right-click in the right-hand portion of the screen and select New, and then Schedule. The Schedule Wizard opens.
4 Proceed through the wizard and, when prompted for the schedule type, select Time-triggered.
5 You can then select the starting and ending dates, and the time and recurrence of the schedule.
You can also use the Command Manager script Create_Schedule_Outline.otl to create a time-based schedule.
2 From the Tools menu, select Events. The Event Viewer opens.
3 From the File menu, select New. A new event is created.
4 Enter the name for the event.
5 From the File menu, select Exit. The Event Viewer closes.
6 Expand Administration in the folder list.
7 Click Schedule Manager.
8 On the right-hand portion of the screen, right-click and select New, then Schedule. The Schedule Wizard opens.
9 Step through the Schedule Wizard. When prompted for the schedule type, select Event-triggered.
10 You can then select the starting and ending dates, and the time and recurrence of the schedule.
11 Select the event to which you will link this schedule.
12 Click Finish. The Schedule Wizard closes.
You can also use the Command Manager script, Create_Schedule_Outline.otl, to create an event-based schedule.
checks to see when the DB_LOAD_COMPLETE table is updated, and then executes a Command Manager script. That script contains the following line:

TRIGGER EVENT OnDBLoad;

When the script is executed, the OnDBLoad event is triggered, and the schedule is executed. You can also use the MicroStrategy SDK to develop an application that triggers an event, and then cause the database trigger to execute this application. For information about obtaining the MicroStrategy SDK, contact your MicroStrategy account representative.
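The "execute a Command Manager script" half of this setup can be sketched as follows. The table name and TRIGGER EVENT statement come from the text above; the script path, project source name, and the cmdmgr command-line flags are assumptions to be checked against your Command Manager version:

```python
import subprocess

# Hypothetical path to a Command Manager script file containing:
#   TRIGGER EVENT OnDBLoad;
SCRIPT = r"C:\scripts\trigger_on_db_load.scp"

def cmdmgr_command(project_source: str, script_path: str) -> list[str]:
    """Build an illustrative Command Manager command line; verify the
    flag names against your installation before relying on them."""
    return ["cmdmgr", "-n", project_source, "-f", script_path]

def on_db_load_complete() -> None:
    # Invoked when DB_LOAD_COMPLETE is updated (for example, by a
    # database trigger or a poller that shells out to this function).
    subprocess.run(cmdmgr_command("MyProjectSource", SCRIPT), check=True)

print(cmdmgr_command("MyProjectSource", SCRIPT))
```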
1 From the Administration menu, select Events. The Events dialog box opens.
2 Right-click the event to trigger and select Trigger.
Scheduling reports
Ordinarily, MicroStrategy Intelligence Server executes report requests immediately after they are made. Scheduling allows these requests to be executed according to the schedules specified by the administrator in advance.
Report scheduling is important because it reduces the system's load in two ways:
- Frequently accessed reports can be pre-cached, which provides very fast response times to users while not creating much load on the database system.
- Long-running reports can be postponed to a later time when the load on the system is not as heavy.
For example, in a particularly large or busy system, you may wish to limit the number and complexity of ad hoc reports users can execute during the day. In this case, you can create schedules for resource-intensive reports to run at a time when MicroStrategy Intelligence Server is not busy, for example, at 4 AM or on Sunday. There are two ways to schedule reports:
- Allow users to subscribe to reports themselves, either through Desktop or Web, based on schedules defined by administrators. Report scheduling is available to all users with schedule request privileges.
- Have administrators schedule frequently accessed reports on behalf of the users.
Executing simultaneous reports can strain resources, so be prudent in setting report schedules for your project. Consider limiting access to a schedule by editing its Access Control List permissions: right-click the schedule object, select Properties, select the Security tab, and make sure that only the desired user groups are listed.
When a scheduled report finishes executing, a message appears in the user's History List alerting him or her that the report is ready to be viewed. This happens automatically if the execution is performed in Web; in Desktop, it happens only if you set the preference. The user simply opens the message to retrieve the report results. If the request did not complete successfully, the user can view details of the error message.
Alternatively, it might not be as important to deliver the results of a scheduled report to the user as it is to refresh a report cache, which can be shared by several users. Users wishing to run the report access the cached result instead of executing the query against the database. These schedules are usually event-triggered, because caches do not need refreshing unless the underlying data changes through an event such as a data warehouse load.

For example, suppose a set of standard weekly and monthly reports has been developed. These reports should be kept in cache because they are frequently accessed. Certain tables in the database are refreshed weekly, and other tables are refreshed monthly. Whenever these tables are updated, the appropriate caches should be refreshed. Note that although the refresh occurs on a weekly or monthly schedule, it does not complete at exactly the same time each week or month. For information on scheduling strategies, see Scheduling updates of report caches, page 185.

You can schedule reports from Desktop, from Web, or by using a Command Manager script.
To schedule reports in MicroStrategy Web
1 On the reports page, under the name of the report for which you want to create a schedule, select Subscriptions.
2 Select Add Subscription.
3 Choose a schedule for the report execution.
4 Answer the prompts, if any.
A History List message is automatically generated whenever a report is executed in Web based on a schedule.
1 From the Administration menu, select Scheduling, and then select Schedule multiple reports.
2 In the System-wide Report Scheduler dialog box, select a user group for whom the schedule is created, and click > to move it to the schedule list.
3 Select a report or document you want to create a schedule for, and then click > to move it under the user or user groups in the schedule list.
4 Select the Create messages in the History List check box, if available and you need a message created.
You cannot schedule reports with prompts unless the report has default answers.
To schedule reports using the Command Manager script
You can create a schedule by using the following Command Manager script:

CREATE SCHEDULERELATION SCHEDULE "<schedule_name>" (USER | GROUP) "<user_or_group_name>" REPORT "<rep_or_doc_name>" IN "<location_path>" IN PROJECT "<project_name>" [CREATEMSGHIST (TRUE | FALSE)];

By default, the outline for this script is located in C:\Program Files\MicroStrategy\Administrator\Command Manager\Outlines\Schedule_Relation_Outlines.
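For example, the following statement instantiates the outline above. All object names here (the schedule, group, report, folder, and project) are hypothetical; substitute the names used in your environment:

```
CREATE SCHEDULERELATION SCHEDULE "Weekly Refresh" GROUP "Managers"
REPORT "Revenue by Region" IN "\Public Objects\Reports"
IN PROJECT "MicroStrategy Tutorial" CREATEMSGHIST TRUE;
```

Because CREATEMSGHIST is set to TRUE, each scheduled run places a message in the subscribed users' History Lists.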
To ensure that a prompt on a scheduled report is executed, the prompt must be required and have default prompt answers set. The following table explains the different possible scenarios that can occur for each prompt in a scheduled report:
Prompt Required   Default Answer   Result
No                No               The prompt is ignored; the report is not filtered by the prompt.
No                Yes              The prompt and default answer are ignored, since the prompt is not required; the report is not filtered by the prompt.
Yes               No               The report is not executed. No answer is provided for the required prompt, so MicroStrategy cannot complete the report without user interaction.
Yes               Yes              The report is executed; the prompt is answered with the default answer.
• Idle a project: causes the project to stop accepting certain types of requests. See Setting Project Modes, page 323.
• Resume a project: brings the project back into normal operation from an idle state. See Loaded, page 324.
• Unload a project: takes a project offline to users and removes the project from Intelligence Server memory. See Unloaded, page 324.
• Invalidate caches: see Invalidating report caches, page 184.
• Delete caches: see Deleting report caches, page 187.
• Delete users' History List messages: see Deleting History List messages, page 232.
This maintenance request can sometimes be large. Schedule History List deletions during times when Intelligence Server is not busy, such as during hours when users are not sending requests to the system. Alternatively, delete History Lists in increments, for example, delete the History Lists of groups of users at different times, such as at 1 AM, 2 AM, and so on.
• Purge element caches: see Limiting element caches by database connection, page 212.
• Delete unused managed objects: removes the unused managed objects created for Freeform SQL reports and OLAP cube reports. For more information, see Deleting unused schema objects: managed objects, page 551.
The following high-level procedure provides a general overview of the steps to schedule an administration task. For detailed instructions, see the Desktop online help (from within Desktop, press F1).
To schedule an administration task from Desktop
1 From the Administration menu, select Scheduling and then Schedule Administration Tasks.
2 In the Schedule Administration Tasks dialog box, select the project for which you want to create a schedule from the Available Projects list.
3 Choose a task from the action list (see the bulleted list above).
4 Choose a schedule for the task.
Time-based schedules are handled by the primary server (primary node) for each project. You can find a project's primary server using the Cluster Monitor. For the steps to view a project's primary server, see Managing clustered Intelligence Servers: Cluster Monitor, page 248.

Event-based schedules are executed on the designated node only for the project(s) loaded on that node. The administrator must be sure to trigger the event on all nodes (that is, all machines) that are running the project to which the event-based schedule is assigned. You can see which nodes are running which projects using the Cluster Monitor. For the steps to view this node information, see Managing clustered Intelligence Servers: Cluster Monitor, page 248. For steps to set up an event-based schedule, see Creating event-triggered schedules, page 455.
For details on distributing projects across nodes in a cluster, see Distributing projects across nodes in a cluster, page 354.
1 In the Administration folder list, click Schedule Monitor to view all the scheduled items (reports, documents or administration tasks) that have been created.
2008 MicroStrategy, Inc. Scheduling jobs and administrative tasks
2 Right-click any scheduled request, and select one of the following options from the shortcut menu:
• Quick view: brings up a summary page with general information about the schedule.
• View: allows you to view the schedule in different ways, such as large icons, small icons, details, and list.
• Line up icons: allows you to arrange the icons in a line if an icon selection is made in the View option.
• Refresh: refreshes the scheduled request.
• Expire: eliminates the scheduled request.
You cannot schedule new documents or reports using the Schedule Monitor.
To use the System-wide Report Scheduler
1 From the Administration menu, select Scheduling and then Schedule multiple reports.
2 View the scheduled items (reports, documents, or administration tasks) that have been created.
You can delete a scheduled document, report, or a user from a scheduled report or document (select the item and click <). You can schedule new reports or documents (see Scheduling reports, page 457).
Command Manager scripts let you change many settings all at once, without using the MicroStrategy Desktop or Narrowcast Administrator interface. In addition, you can create scripts to be run at times when it would not be convenient for you to make the changes manually.

For example, you may want to change the system to allow more low-priority jobs to complete at night than during regular hours. To do this, you could create a script to increase the number of low-priority database connections and modify several Intelligence Server governor settings, and then schedule the script to run at 8 P.M. You could then create another script that changes the database connections and Intelligence Server settings back for daytime use, and schedule that script to run at 6 A.M.

To schedule a script to run at a certain time, use the Windows AT command with the cmdmgr executable. For the syntax for using the executable, see To invoke Command Manager from another application, page 467.

Other examples of tasks you can perform using Command Manager include:
• User management: Add, remove, or modify users or user groups; list user profiles.
• Security: Grant or revoke user privileges; create security filters and apply them to users or groups; change security roles and user profiles; assign or revoke ACL permissions; disconnect users or disable their accounts.
• Server management: Start, stop, or restart Intelligence Server; configure Intelligence Server settings; cluster Intelligence Server machines; change database connections and logins; manage error codes and customize output data; disconnect active sessions on a server or project.
• Database management: Create, modify, and delete connections, connection mappings, logins, and database instances.
• Project management: List or kill jobs; change a project's mode (idle, resume); expire and delete caches; change filter or metric definitions; manage facts and attributes;
manage folders; update the project's schema; manage shortcuts; manage hidden properties; create tables and update warehouse catalog tables.
• Scheduling: Trigger an event to run scheduled reports.
• Narrowcast Server administration: Start and stop a Narrowcast Server; start, stop, and schedule Narrowcast Server services; add, modify, and remove subscription book users; define and remove user authentication.
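As a sketch of the night-and-day governing scenario described above, the two scripts could be scheduled with the Windows AT command. The paths and file names here are hypothetical, and each batch file is assumed to invoke cmdmgr.exe with the appropriate script file:

```
REM Apply the night-time governor settings at 8 P.M. every day
AT 20:00 /every:M,T,W,Th,F,S,Su "C:\Scripts\night_settings.cmd"

REM Restore the daytime settings at 6 A.M. every day
AT 06:00 /every:M,T,W,Th,F,S,Su "C:\Scripts\day_settings.cmd"
```

Wrapping the cmdmgr invocation in a batch file keeps the AT command line short and avoids nesting quoted paths.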
You can use Command Manager in either of the following ways:

• Start the Command Manager graphical interface, which can be used to view sample script outlines, and to create, modify, delete, or run scripts.
• Invoke the Command Manager executable, including necessary parameters such as the script file to run, from the Windows scheduler, Windows command prompt, or other applications such as system management software.

Command Manager Runtime is a lightweight version of Command Manager for bundling with OEM applications. Command Manager Runtime has fewer execution options and supports fewer statements than Command Manager. For more information about Command Manager Runtime, see Using Command Manager with OEM software, page 473.
From the Windows Start menu, point to Programs, then MicroStrategy, then Administrator, and then choose Command Manager. For more information about using Command Manager and for script syntax, see the online help. To access the online help, from the Help menu in the graphical Command Manager, select Command Manager Help.
Call the cmdmgr.exe command with the following parameters. If the project source name, the input file, or an output file contains a space in its name or path, you must enclose the name in double quotes.
Effect                                                       Parameters

Connection (required; choose one)

Connect to a project source                                  -n ProjectSourceName -u UserName [-p Password]
Connect to a Narrowcast Server metadata database             -w ODBC_DSN -u UserName [-p Password] -d Database [-s SystemPrefix]

Note: If -p or -s is omitted, Command Manager assumes a null password or system prefix.

Script input

Execute the specified script file                            -f InputFile

Note: If this parameter is omitted, the Command Manager GUI is launched.

Script output (optional; choose only one)

Log script results, error messages, and status messages to a single file                                                         -o OutputFile
Log script results, error messages, and status messages to separate files, with the default file names CmdMgrResults.log, CmdMgrFail.log, and CmdMgrSuccess.log                      -break
Log script results, error messages, and status messages to separate files, with specified names                                  -or ResultsFile -of FailFile -os SuccessFile

Note: You can omit one or more of these parameters. For example, if you only want to log error messages, use only the -of parameter.
Other optional parameters include -h, -i, -e, and -showoutput.
A full list of parameters can also be accessed from a command prompt by entering cmdmgr.exe /?. By default, the executable is installed in the following directory: Program Files\MicroStrategy\Administrator\Command Manager.

If you create a batch file to execute a Command Manager script from the command line, the password for the project source or Narrowcast Server login must be stored in plaintext in the batch file. This creates a security risk, since anyone who can view the batch file has access to a MicroStrategy administrator login and password. To protect the security of this information, make sure that the batch files are stored in a directory that only the MicroStrategy administrator has access to. For instructions on how to set the permissions on a directory, see your operating system's documentation.
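As an illustration of the parameters above, the following batch file runs a script against a project source and logs everything to a single file. All names, paths, and credentials here are hypothetical:

```
@echo off
REM night_settings.cmd -- runs a Command Manager script against a project source.
REM Store this file in a directory readable only by the administrator,
REM because the password appears in plaintext.
"C:\Program Files\MicroStrategy\Administrator\Command Manager\cmdmgr.exe" ^
    -n "Production Source" -u administrator -p MyPassword ^
    -f "C:\Scripts\night_settings.scp" -o "C:\Scripts\night_settings.log"
```

The caret (^) at the end of a line is the cmd.exe line-continuation character; the quoted paths are required because they contain spaces.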
Reserved words appear in blue. Words or phrases in quotation marks appear in gray. All other text appears in black.
Other features of the script editor include a script syntax checker, the ability to jump to any line in a script, and the presence of script outlines.
Script outlines
The Command Manager script outlines help you insert script statements with the correct syntax into your scripts. Outlines are preconstructed statements with optional features and user-defined parameters clearly marked. Outlines are grouped based on the type of objects that they affect. The outlines that are available to be inserted depend on whether the active Script window is connected to a project source or a Narrowcast server. Only outlines appropriate to the connected metadata source are available.
To insert an outline into a script

1 Start the Command Manager graphical interface.
2 Connect to a metadata source.
3 From the Edit menu, select Insert Outline. The Choose Outline dialog box opens.
4 Navigate the Outline tree to locate the outline you want, and select it.
5 Click Insert to place the selected outline into the script.
6 Click Cancel to close the Choose Outline dialog box.
7 Modify the outline as needed for your script.
A Command Manager script is made up of:

• Reserved words, which are words with a specific meaning in a Command Manager script. For a complete list of reserved words, see the Command Manager online help.
• Identifiers, which are words that the user provides as parameters for the script. For example, in the statement LIST MEMBERS FOR USER GROUP Managers; the word Managers is an identifier.
Identifiers are usually enclosed in quotation marks for clarity, for example, "Managers". If an identifier contains a reserved word or a space, that identifier must be enclosed in quotation marks.

In general, either double quotes or single quotes can be used to enclose identifiers. However, if you want to include either single quotes or double quotes as part of an identifier, you must either enclose that identifier in the other kind of quotes, or put a caret (^) in front of the interior quote. For example, to refer to a metric named Count of "Outstanding" Customer Ratings, you would need to use one of the following methods:

• Use single quotes to enclose the identifier: 'Count of "Outstanding" Customer Ratings'
• Use double quotes to enclose the identifier, and put carets in front of the interior double quotes: "Count of ^"Outstanding^" Customer Ratings"
• Numbers in any notation
• Other special characters, such as carriage returns, tabs, or spaces
Critical errors occur when the main part of the instruction is not able to complete. These errors interrupt script execution when the Stop script execution on error option is enabled (GUI) or when the -stoponerror flag is used (command line).
For example, if you submit an instruction to create a user, user1, that already exists in the MicroStrategy metadata database, Command Manager is not able to create the user. Since creating the user is the main part of the instruction, this is a critical error. If the Stop script execution on error option is enabled, the script stops executing and any further instructions are ignored.
Non-critical errors occur when the main part of the instruction is able to complete. These errors never interrupt script execution.
For example, if you submit an instruction to create a MicroStrategy user group with two members, user1 and user2, but user2 does not exist in the MicroStrategy metadata database, Command Manager is still able to create the group. Since creating the group is the main part of the instruction (adding users is secondary), this is a non-critical error.
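A sketch of this situation in script form (the group and user names are hypothetical, and the exact clause layout should be checked against the CREATE USER GROUP outline in the Command Manager online help):

```
CREATE USER GROUP "Regional Analysts" MEMBERS "user1", "user2";
```

If "user2" does not exist in the metadata, the group is still created; Command Manager reports the failed membership as a non-critical error and continues with the next instruction.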
An error message is written to the Messages tab of the Script window for all execution errors, critical or non-critical. In addition, if logging is enabled in the Options dialog box, the error message is written to the log file.
For a list of the commands available in Command Manager Runtime, with syntax and examples for each command, see Appendix G, Command Manager Runtime.
8
Introduction
In a MicroStrategy system, a project is the environment in which reporting is done. A project:
• Determines the set of data warehouse tables to be used, and therefore the set of data available to be analyzed.
• Contains all schema objects used to interpret the data in those tables. Schema objects include objects such as facts, attributes, and hierarchies.
• Contains all application objects used to create reports and analyze the data. Application objects include objects such as reports, metrics, and filters.
• Defines the security scheme for the user community that accesses these objects. Security objects include objects such as security roles, privileges, and access control lists.
This section describes the recommended methodology and tools for managing projects in the MicroStrategy system. These include:
• The application life cycle, page 478
• Implementing the recommended application life cycle, page 483
• Duplicating a project, page 485
• Updating projects with new objects, page 492
• Copying objects between projects, page 495
• Merging projects to synchronize objects, page 517
• Comparing and tracking projects, page 526
• Verifying reports and documents, page 530
• Deleting unused schema objects: managed objects, page 551
For information about creating a project, creating attributes and facts, building a logical data model, and other project design tasks, see the MicroStrategy Project Design Guide.
• For a description of the recommended scenario, see Recommended scenario: Development -> test -> production, page 479.
• For a real-life scenario, see Real-life scenario: New version from an application developer, page 481.
For details on how to implement the application life cycle in your MicroStrategy environment, see Implementing the recommended application life cycle, page 483
(Diagram: the recommended application life cycle, Development -> Test -> Production)
For instructions on how to design a project schema and create application objects, see the MicroStrategy Project Design Guide.
Once the projects have been created, you can migrate specific objects between them via Object Manager. For example, after a new metric has been created in the development project, you can copy it to the test project. For detailed information about Object Manager, see Copying objects between projects, page 495.

You can also merge two related projects with the Project Merge Wizard. This is useful when you have a large number of objects to copy. The Project Merge Wizard copies all the objects in a given project to another project. For an example of a situation in which you would want to use the Project Merge Wizard, see Real-life scenario: New version from an application developer, page 481. For detailed information about Project Merge, see Merging projects to synchronize objects, page 517. To help you decide whether you should use Object Manager or Project Merge, see Comparing Project Merge to Object Manager, page 493.

The Project Comparison Wizard can help you determine what objects in a project have changed since your last update. You can also save the results of search objects and use those searches to track the changes in your projects. For detailed information about the Project Comparison Wizard, see Comparing and tracking projects, page 526. For instructions on how to use search objects to track changes in a project, see Tracking your projects with the Search Export feature, page 529.

Integrity Manager helps you ensure that your changes have not caused any problems with your reports. Integrity Manager executes some or all of the reports in a project, and can compare them against another project or a previously established baseline. For detailed information about Integrity Manager, see Verifying reports and documents, page 530.
in what you called version 1.1 and, later, version 1.2, and so on. Now you have purchased version 2 of the project from the same vendor, and you wish to merge the new (version 2) project with your existing (version 1.2) project.

MicroStrategy encourages vendors in these situations to include in the installation of version 2 an automatic upgrade to the project using Project Merge. In this way the vendor, rather than the user or purchaser, can configure the rules for this project merge. For information about executing Project Merge without user input, see Running Project Merge from the command prompt, page 521. This combination of the two projects creates project version 2.1, as shown in the diagram below.
(Diagram: the vendor's Project version 2 is merged with the customer's customized Project version 1.2 to produce Project version 2.1)
The vendor's new version 2 project has new objects that are not in yours, which you feel confident in moving over. But some of the objects in the version 2 project may conflict with objects that you had customized in the version 1.2 project. How do you determine which of the version 2 objects you want to move into your system, or which of your version 1.2 objects to modify? You could perform this merge object by object and migrate the objects manually using Object Manager, but this would be time-consuming if the project is large. It may be more efficient to use the Project Merge tool. With this tool, you can
define rules for merging projects that help you identify conflicting objects and handle them a certain way. Project Merge then applies those rules while merging the projects. For more information about using the MicroStrategy Project Merge tool, see Merging projects to synchronize objects, page 517.
2 Create the test and production projects by duplicating the development project.
It is important that you duplicate the development project to create the test and production projects, rather than creating them separately. Duplicating ensures that all three projects have related schemas, enabling you to use Object Manager or Project Merge to copy objects between the projects. For instructions on how to duplicate a project, see Duplicating a project, page 485.
Duplicating a project
Duplicating a project is an important part of the application life cycle. To copy objects between two projects, the projects must have related schemas. This means that one must have originally been a duplicate of the other, or both must have been duplicates of a third project. Project duplication is done using the Project Duplication wizard. For detailed information about the duplication process, including step-by-step instructions, see Overview of the Project Duplication Wizard, page 486. To migrate a project from one database platform to another, you must use the Project Mover Wizard. For detailed information about this migration, see Migrating a project to a new database platform, page 490. You can duplicate a MicroStrategy project in one of the following ways:
• From a three-tier (server) project source to a three-tier (server) project source
• From a three-tier (server) project source to a two-tier (direct) project source
• From a two-tier (direct) project source to a two-tier (direct) project source
• From a two-tier (direct) project source to a three-tier (server) project source
A three-tier (server) project source is connected to a MicroStrategy Intelligence Server, and has the full range of administrative options available. A two-tier (direct) project source is not connected to an Intelligence Server. For more information on three-tier and two-tier project sources, see Chapter 4 of the MicroStrategy Project Design Guide.

Do not refresh the warehouse catalog in the destination project. Instead, refresh the warehouse catalog in the source project, and then use Object Manager to move the updated objects into the destination project. For information about the warehouse catalog, see the Optimizing and Maintaining your Project chapter in the MicroStrategy Project Design Guide.
To duplicate a project

1 From the MicroStrategy Object Manager, select the Project menu (or from MicroStrategy Desktop, select the Schema menu), then select Duplicate Project. The Project Duplication Wizard opens.
2 Specify the project source and project information that you are copying from (the source).
3 Specify the project source and project information that you are copying to (the destination).
4 Indicate what types of objects to copy:
• For project objects, whether to copy all objects or only schema objects (attributes, facts, and hierarchies)
• For configuration objects such as schedules and security filters, whether to copy all configuration objects or only those related to the project
• For users and groups, whether to copy only project-related users and groups, only the users and groups you select, or none. If you choose to copy only project-related users, you can specify to copy all groups to preserve group structure.

For example, if a user has a security role on a project, then he or she is related to that project. So a user may exist in a metadata with four projects and have security roles on only two of them. In this case, he or she is related to two projects and would be duplicated if either of those two projects were duplicated.

If users are not being duplicated, ensure that no schema objects (including hierarchies) exist in user profile folders. If such schema objects are not duplicated, then the next time the schema in the destination project is updated, an "Object not found" error occurs.
5 Specify whether to keep or merge user properties if the users already exist in the destination project source. Also specify whether to keep or merge security role properties if the roles exist in the destination project source. For example, if properties such as password expiration are different by default between the project sources, which set of properties do you want to use?
6 Specify whether you wish to see the event messages as they happen and, if so, what types. Also specify whether to create log files and, if so, what types of events to log and where to locate the log files. By default, Project Duplicator shows you error messages as they occur, and logs most events to a text file. This log file is created by default in C:\Program Files\Common Files\MicroStrategy\.
After saving the settings from the Project Duplication Wizard, invoke the Project Duplication Wizard executable ProjectDuplicate.exe. By default this executable is located in C:\Program Files\Common Files\ MicroStrategy.
The syntax is:

ProjectDuplicate.exe -f Path\XMLFilename [-sp SourcePassword] [-dp DestinationPassword] [-sup] [-md] [-dn OverwriteName]

where:
• Path is the path to the saved XML settings file.
• XMLFilename is the name of the saved XML settings file.
• SourcePassword is the password for the source project's project source.
• DestinationPassword is the password for the destination project's project source.
• -sup indicates that feedback messages will be suppressed (silent mode).
• -md indicates that the metadata of the destination project source will be updated if it is older than the source project source's metadata.
• -dn OverwriteName specifies the name of the destination project. This overrides the name specified in the XML settings file.
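For example, the following command runs a saved duplication silently and renames the destination project. The file names, passwords, and project name here are hypothetical:

```
ProjectDuplicate.exe -f "C:\Duplication Settings\finance_dup.xml" -sp SrcPwd123 -dp DestPwd456 -sup -dn "Finance Test"
```

Because the settings file path contains a space, it is enclosed in double quotes.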
For information on the syntax for the Windows AT command or a UNIX scheduler, see the documentation for your operating system.
In a direct (two-tier) configuration, without an Intelligence Server, the system checks that the language under the LOCAL_MACHINE registry key matches the language under the CURRENT_USER registry key.
In a server (three-tier) configuration, with an Intelligence Server, the system checks that the language under the CURRENT_USER registry key on the client machine matches the language under the LOCAL_MACHINE registry key on the server machine.
The MicroStrategy interface obtains the language information from the CURRENT_USER registry key and the server obtains the language information from the LOCAL_MACHINE registry key. This can lead to inconsistencies in the language display. The language check prevents these inconsistencies and ensures that the language display is consistent across the interface. The internationalization settings in Object Manager allow you to create related projects in different languages. For more information on this process, see Copying objects between projects in different languages, page 508.
The following high-level procedure provides an overview of what the Project Mover Wizard does. For an explanation of the information required at any given page in the wizard, see the online help (from the wizard, click Help, or press F1).
To migrate a project to a different database
1 From the Start menu, point to Programs, then MicroStrategy, then Tools, and then select Project Mover. The Project Mover Wizard opens.
2 Select the warehouse and metadata databases that contain the source project, and then select the source project.
3 Select any SQL scripts you want to run on the data warehouse, either before or after project migration.
4 Select the database into which the project is to be migrated.
5 If project metadata already exists in the destination database, select whether to append the migrated project to the existing data or overwrite that data.
6 Review your choices and click Finish on the Summary page of the Project Mover Wizard. The wizard migrates your project to the new database.
To create a response file, from the first page of the Project Mover Wizard, click Advanced. On the Advanced Options page, select Generate a response file and enter the name and location of the new response file in the text field.

To execute a response file from the Project Mover Wizard, from the first page of the wizard, click Advanced. Then select the Use Response File option and load the response file. The wizard opens the Summary page, which lists all the options set by the response file. After reviewing these options, click Finish. The Project Mover Wizard begins moving the project.

To execute a response file from the command line, invoke the Project Mover executable, demomover.exe. By default, this executable is located in C:\Program Files\Common Files\MicroStrategy. The syntax is:

demomover.exe -r "File Location\Filename.ini"

where "File Location" is the path to the response file and "Filename.ini" is the name of the response file.
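For instance, using a hypothetical response-file path:

```
demomover.exe -r "C:\Response Files\finance_move.ini"
```

Because the path contains a space, it must be enclosed in double quotes.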
MicroStrategy has two tools available for updating the objects in a project:
• Object Manager migrates a few objects at a time. For information about Object Manager, see Copying objects between projects, page 495.
• Project Merge migrates all the objects in a project at once. For information about Project Merge, see Merging projects to synchronize objects, page 517.
For a comparison of the two tools, see Comparing Project Merge to Object Manager, page 493. Note the following:
• To move or copy objects between projects, those projects must have related schemas. This means that either one project must be a duplicate of the other, or both projects must be duplicates of a third project. For information about duplicating projects, including instructions, see Duplicating a project, page 485.
• If one of the projects is updated to a new MicroStrategy release, but another project is not updated, you cannot move or copy objects from the project using the updated version of MicroStrategy to the older version. However, you can move objects from the older version to the updated project.
Object Manager can move just a few objects, or just the objects in a few folders. Project Merge moves all the objects in a project.
Updating projects with new objects
Using Object Manager to merge whole projects means moving many objects individually or as a subset of all objects. This can be a long and tedious task. Project Merge packages the functionality for easier use because it moves all objects at one time.

Object Manager must locate the dependents of the copied objects and then determine their differences before performing the copy operation. Project Merge does not do a dependency search, since all the objects in the project are to be copied.

The Project Merge Wizard allows you to store merge settings and rules in an XML file. These rules define what is copied and how conflicts are resolved. Once they are in the XML file, you can load the rules and replay them with Project Merge. This can be useful if you need to perform the same merge on a recurring schedule. For example, if an application developer sends you a new project version quarterly, Project Merge can make this process easier.

Project Merge can be run from the command prompt in Microsoft Windows. An added benefit of this feature is that project merges can be scheduled using the AT command in Windows and can be run silently in an installation routine.
Locking projects
When you open a project in Object Manager or Project Merge, you automatically place a metadata lock on the project. This lock prevents other MicroStrategy users from modifying any objects in the project in MicroStrategy Desktop or MicroStrategy Web, while objects are being copied with Object Manager or Project Merge. It also prevents other MicroStrategy users from modifying any configuration objects, such as users or groups, in the project source. Locking a project prevents metadata inconsistencies. When other users attempt to open an object in a locked project using MicroStrategy Desktop or MicroStrategy Web, they see a message that informs them that the project is locked because a user that opened the project first is modifying it. Users can then choose to open the object in
read-only mode or view more details about the lock. Users can execute reports in a locked project, but the report definition that is used is the last definition saved prior to the project being locked. If you lock a project by opening it in Object Manager, you can unlock the project by right-clicking the project in Object Manager, and choosing Disconnect from Project Source. Only the user who locked a project, or another user with the Bypass all security access checks and Create configuration objects privileges, can unlock a project. You can also lock or unlock a project or a configuration manually using MicroStrategy Desktop. For detailed steps on locking and unlocking projects manually, see the Desktop online help (press F1 from within Desktop). Command Manager scripts can be used to automate metadata lock management. For information about Command Manager, see Automating processes using Command Manager, page 464. For specific Command Manager syntax for managing metadata locks, see the online help (press F1 from within Command Manager).
To migrate objects between projects with Object Manager, those projects must have related schemas. This means that either one project must be a duplicate of the other, or both projects must be
Copying objects between projects
duplicates of a third project. For information about duplicating projects, including instructions, see Duplicating a project, page 485.
If one of the projects has been updated to a new MicroStrategy release but the other project has not, you cannot move or copy objects from the updated project to the project on the older version. However, you can move objects from the older version to the updated project.
- Copying objects, page 496
- What happens when you copy or move an object, page 499
- Resolving conflicts when copying objects, page 509
Copying objects
Object Manager can copy application, schema, and configuration objects. The objects in each category include the following:
- Application objects: reports, metrics, templates, filters, consolidations, custom groups, prompts, searches, data mart reports, and folders
- Schema objects: attributes, facts, hierarchies, functions, tables, and transformations
- Configuration objects: database instances, schedules, and users and user groups
For background information on these objects, including how they are created and what roles they perform in a project, see the MicroStrategy Project Design Guide.
Copy application objects into the following project folders:
- My Personal Objects, or any subfolder of My Personal Objects
- Public Objects, or any subfolder of Public Objects
A list of all application objects can be found on page 496.
Copy schema objects into the appropriate Schema Objects subfolders or descendant folders only. For example, if you are copying a hierarchy, you should paste the hierarchy only into the Project Name\Schema Objects\Hierarchies folder. A list of all schema objects can be found on page 496.
You cannot undo this action. Back up your metadata before you begin this procedure. Note the following:
- To log in to a project source using Object Manager, you must have the Bypass all security access checks and Create configuration objects privileges for that project.
- To copy application or schema objects between projects, the two projects must have related schemas (one must be a duplicate of the other, or both must be duplicates of a common project). For details on this, see Duplicating a project, page 485.
1 From the Windows Start menu, point to Programs, then MicroStrategy, then Administrator, then choose Object Manager. The Open Project Source dialog box opens.
2 In the list of project sources, select the check box for the project source you want to access. You can select more than one check box.
3 Click Open. One Login: project source dialog box opens for each project source selected in the previous step.
4 In the Login id box, type your login ID, and in the Password box, type your password.
5 Click OK. The MicroStrategy Object Manager window opens.
Use the appropriate sub-procedure depending on whether you want to Copy application and schema objects or Copy configuration objects.
Copy application and schema objects
6 In the Folder List, expand the project that contains the object you want to copy, then navigate to the object.
7 Copy the object by right-clicking it and selecting Copy.
If you copy a folder, Object Manager also copies all child objects and child folders of that object. For details about child objects, see Child dependencies, page 500.

If you are copying objects between two different project sources, two windows are open within the main Object Manager window. In this case, instead of right-clicking and selecting Copy and Paste, you can drag and drop objects between the projects.
8 Expand the destination project in which you want to paste the object, and then select the folder in which you want to paste the object.
9 Paste the application or schema object into the appropriate destination folder by right-clicking and selecting Paste.
10 If you copied any schema objects, you must update the destination project's schema. Select the destination project, and from the Schema menu, select Update Schema.
Copy configuration objects
11 In the Folder Lists for both the source and destination projects, expand the Administration folder, then select the appropriate manager for the type of configuration object you want to copy (Database Instance Manager, Schedule Manager, or User Manager).
12 From the list of objects displayed on the right-hand side in the source project source, drag the desired object into the destination project source and drop it.
To display the list of users on the right-hand side, expand User Manager, then on the left-hand side select a group.
If the object you are copying does exist in the destination project, a conflict occurs and Object Manager opens the Conflict Resolution dialog box. For information about resolving conflicts, see Resolving conflicts when copying objects, page 509. For more information about handling specific situations, see:
- Managing object dependencies, page 500
- Migrating dependent objects, page 503
- Event logging, page 506
- Timestamps for migrated objects, page 507
- Copying objects between projects in different languages, page 508
Child dependencies
A child dependency occurs when an object uses other objects in its definition. For example, in the MicroStrategy Tutorial project, the metric named Revenue uses the base formula named Revenue in its definition. The Revenue metric is said to have a child dependency on the Revenue base formula. (Additionally, the Revenue base formula has a parent dependency of the Revenue metric.)
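The parent/child relationships Object Manager traverses form a directed graph of objects. The following sketch is purely illustrative (the object names and the mapping structure are hypothetical, not Object Manager's internal representation):

```python
def child_dependencies(obj, children, seen=None):
    """Collect the transitive child dependencies of obj.

    children maps each object name to the objects it uses directly
    in its definition.
    """
    if seen is None:
        seen = set()
    for child in children.get(obj, []):
        if child not in seen:
            seen.add(child)
            child_dependencies(child, children, seen)
    return seen

# Hypothetical project: a report uses the Revenue metric, which in turn
# uses the Revenue base formula.
children = {
    "Revenue Report": ["Revenue Metric"],
    "Revenue Metric": ["Revenue Base Formula"],
}

print(sorted(child_dependencies("Revenue Report", children)))
# -> ['Revenue Base Formula', 'Revenue Metric']
```

Copying the report would therefore also bring over the metric and the base formula, which is why Object Manager must locate dependents before performing a copy.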
1 After you have opened a project source and a project using Object Manager, in the Folder List select the object.
2 From the Tools menu, choose Object child dependencies. The Child dependencies dialog box opens and displays a list of objects that the selected object uses in its definition. The sample dialog box below shows the child dependencies of the Revenue metric in the MicroStrategy Tutorial project: in this case, the child dependency is the Revenue base formula.
3 In the Child dependencies dialog box, you can do any of the following:
- View child dependencies for any object in the list by selecting the object and clicking the Object child dependencies toolbar icon.
- Open the Parent dependencies dialog box for any object in the list by selecting the object and clicking the Object parent dependencies icon on the toolbar. For information about parent dependencies, see Parent dependencies, page 502.
- View the properties of any object, such as its ID, version number, and access control lists, by selecting the object and, from the File menu, choosing Properties.
Parent dependencies
A parent dependency occurs when an object is used as part of the definition of other objects. For example, in the MicroStrategy Tutorial project, the Revenue metric has many reports and even other metrics as parent dependents. The Revenue metric is said to be a child of these other objects. Parent dependents are not automatically migrated with their children. However, you cannot delete an object that has parent dependencies without first deleting its parents.
To manage the parent dependencies of an object
1 After you have opened a project source and a project using Object Manager, from the Folder List select the object.
2 From the Tools menu, choose Object parent dependencies. The Parent dependencies dialog box opens and displays a list of objects that depend on the selected object for part of their definition. The sample dialog box below shows the parent dependencies for the Revenue metric in the MicroStrategy Tutorial project.
3 In the Parent dependencies dialog box, you can do any of the following:
- View parent dependencies for any object in the list by selecting the object and clicking the Object parent dependencies icon on the toolbar.
- Open the Child dependencies dialog box for any object in the list by selecting the object and clicking the Object child dependencies icon on the toolbar. For information about child dependencies, see Child dependencies, page 500.
- View the properties of any object, such as its ID, version number, and access control lists, by selecting the object and, from the File menu, choosing Properties.
Folders have a child dependency on each object in the folder. Therefore, if you copy a folder using Object Manager, all the objects in that folder are also copied.
A Shortcut object has a child dependency on the object it is a shortcut to. Therefore, if you copy a shortcut using Object Manager, the object it is a shortcut to is also copied.
Security filters and users have a child dependency on the user groups they belong to. Therefore, if you copy a security filter or a user, the groups that it belongs to are also copied.
Groups have a parent dependency on the users and security filters that are associated with them. Copying a group does not automatically copy the users or security filters that belong to that group. To copy the users or security filters in a group, select the users from a list of that group's parent dependents and then copy them.
Attribute relationships are stored with the parent attribute, regardless of relationship type. If only the child attribute is moved, a new relationship is not reflected in the destination project. The child appears as a dependent of the parent, but not vice-versa. In other words, the parent dependency is not reestablished after the action.
Fact extensions are stored on the fact. The related attribute is not aware of a fact extension. Therefore, if the attribute is moved, the fact is not. However, if the fact is moved, the attribute is displayed in the dependency list, and may require conflict resolution. For detailed information about resolving conflicts in objects, see Resolving conflicts when copying objects, page 509.
objects there. (The object ID of the created User folder will be different from the ID in the source project.) The diagram below illustrates this process:
(Diagram: Report 1 is copied from the source project to the destination project, and its dependent objects are saved in a newly created folder.)
However, if you select the Use dependencies folder option, Object Manager looks through the destination project for a folder with an ID that matches the folder's ID in the source project; in this case, ID 111. If such a folder exists, even if it has a different folder name or a different path, Object Manager saves all dependent objects in it. If a folder with an ID of 111 does not exist in the destination project, Object Manager creates a folder named Dependencies in the same folder as Report 1 (the base object) and saves all dependent objects there. The diagram below illustrates this option:
(Diagram: Object Manager checks all folders in the destination project for folder ID 111; because no folder with ID 111 exists in any location, a "Dependencies" folder is created and the dependent objects are copied into it.)
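The folder lookup described above amounts to a search by ID with a fallback. A minimal sketch, with illustrative folder IDs and paths (assumed for the example, not taken from the product):

```python
def dependencies_folder(dest_folders, source_folder_id, base_object_folder):
    """Pick the folder for dependent objects in the destination project.

    dest_folders maps folder ID -> folder path in the destination project.
    If a folder with the source folder's ID exists anywhere (even under a
    different name or path), use it; otherwise fall back to a
    "Dependencies" folder next to the base object.
    """
    if source_folder_id in dest_folders:
        return dest_folders[source_folder_id]
    return base_object_folder + "\\Dependencies"

dest = {111: "\\Public Objects\\Renamed Metrics"}
print(dependencies_folder(dest, 111, "\\Public Objects\\Reports"))
# -> \Public Objects\Renamed Metrics
print(dependencies_folder({}, 111, "\\Public Objects\\Reports"))
# -> \Public Objects\Reports\Dependencies
```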
1 From the Tools menu, select Object Manager Preferences. The Object Manager Preferences dialog box opens.
2 Click the Conflict Resolution tab.
3 In the Dependent Objects section, select the Use dependencies folder when ID of parent folder of dependent object is not found in destination project option.
4 Click OK. The Object Manager Preferences dialog box closes.
Event logging
Object Manager features an event logging function. Depending on your logging preferences, you can record information about all objects that are copied between projects, or only about those objects for which a conflict occurs. The log file stores information in plain text, in a file and location of your choice. The log file records the following information:
- Connection information, including the source and destination projects and the user name that is used to log in to each
- The objects that are being copied (Base Objects) and the destination folder (Target Folder)
- Each action that Object Manager performs in the copying process
- Information about the objects:
  - ID
  - Type
  - Name and timestamp in the source project
  - Name and timestamp in the destination project, if applicable
  - Action taken to resolve any conflicts
To set up Object Manager logging
1 From the Tools menu, select Object Manager Preferences. The Object Manager Preferences dialog box opens.
2 Click the Events tab.
3 To enable logging, ensure that the Log events to file check box is selected.
4 In the File Directory and File Name fields, enter the location and name of the file you want to log Object Manager information to.
5 To append logging information to the end of the previous log file, select the Append log to existing file check box. In the MB field, enter the maximum size (in megabytes) that the log file can reach before being replaced by a new log file.
6 To log information about all objects copied, select the Show objects without conflicts check box. If you leave this check box cleared, information is logged only for those objects for which a conflict occurs.
7 Click OK. The Object Manager Preferences dialog box closes.
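The append-versus-replace behavior in step 5 is essentially a size check before writing. A rough sketch under the assumption that the log is a plain text file with a configured size cap (the product's actual rollover details are not documented here):

```python
import os

def open_log(path, max_bytes):
    """Append to the existing log unless it has reached max_bytes;
    otherwise start a fresh file, mirroring the rollover Object Manager
    performs when the log exceeds the configured size."""
    if os.path.exists(path) and os.path.getsize(path) >= max_bytes:
        mode = "w"   # replace with a new log file
    else:
        mode = "a"   # append to the existing file
    return open(path, mode, encoding="utf-8")

# Hypothetical usage: a 1 MB cap on the log file.
with open_log("object_manager.log", 1024 * 1024) as log:
    log.write("conflict: Revenue Metric -> Replace\n")
```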
1 From the Tools menu, select Object Manager Preferences. The Object Manager Preferences dialog box opens.
2 Click the Conflict Resolution tab.
3 Under Options, select the Preserve object modification timestamp during migration check box to cause objects to keep the modification timestamp from the source project. If this check box is cleared, objects take the modification timestamp from the destination Intelligence Server at the time of migration.
4 Click OK. The Object Manager Preferences dialog box closes.
resolve the conflict. For details about the conflict resolution options, see Choosing an action to resolve a conflict, page 510.
To set the internationalization options
1 From the Tools menu, select Object Manager Preferences. The Object Manager Preferences dialog box opens.
2 Click the International tab.
3 From the Language drop-down list, select the language to be used in Object Manager. By default this is the language used in all MicroStrategy products installed on this system.
4 From the Locale drop-down list, select whether copied objects use the locale settings from the Language selection or from the machine's regional settings.
5 To keep names, descriptions, and long descriptions when replacing an object, select the Object Descriptors check box. If this check box is cleared, object names, descriptions, and long descriptions are replaced along with the other object properties when the object is migrated.
6 Click OK. The Object Manager Preferences dialog box closes.
When copying objects across projects with Object Manager, if an object with the same ID as the source object exists anywhere in the destination project, a conflict occurs and the Conflict Resolution dialog box appears (shown below). It prompts you to resolve the conflict.
Conflict — Explanation

Exists identically — The object ID, object version, and path are the same in the source and destination projects.

Exists differently — The object ID is the same in the source and destination projects, but the object versions are different. The path may be the same or different.

Exists identically except for path — The object ID and object version are the same in the source and destination projects, but the paths are different. This occurs when one of the objects has been moved to a different folder, so that the path has changed. If you resolve the conflict with the Replace action, the destination object is updated to reflect the path of the source object.
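The three conflict conditions reduce to a comparison of ID, version, and path. An illustrative sketch (the dict representation of an object is an assumption made for the example):

```python
def classify_conflict(src, dst):
    """Classify an object that exists (same ID) in both projects.

    src and dst are dicts with "id", "version", and "path" keys.
    """
    assert src["id"] == dst["id"], "a conflict requires matching object IDs"
    if src["version"] != dst["version"]:
        return "Exists differently"
    if src["path"] != dst["path"]:
        return "Exists identically except for path"
    return "Exists identically"

src = {"id": "A1", "version": "v2", "path": "\\Public Objects\\Metrics"}
dst = {"id": "A1", "version": "v2", "path": "\\Public Objects\\Archive"}
print(classify_conflict(src, dst))
# -> Exists identically except for path
```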
When Object Manager reports a conflict it also suggests a default action to take for that conflict. For information on changing the default action, see Setting default actions for conflict resolutions, page 512.
User Action — Effect

Use existing — No change is made to the destination object. The source object is not copied.

Replace — The destination object is replaced with the source object. (Note: For the Exists identically except for path conflict described in the table above, Replace moves the object into the same parent folder as the source object. If the parent path is the same between source and destination but the grandparent path is different, Replace may appear to do nothing because Replace puts the object into the same parent path. Non-empty folders in the destination location will never have the same version ID and modification time as the source, because the folder is copied first and the child objects are added to it, thus changing the version ID and modification times during the copy process.)

Keep both — No change is made to the destination object. The source object is duplicated in the destination location.

Use newer — If the source object's modification time is more recent than the destination object's, the Replace action is used. Otherwise, the Use existing action is used.

Use older — If the source object's modification time is more recent than the destination object's, the Use existing action is used. Otherwise, the Replace action is used.

Update in same path (folder only) — No change is made to the path of objects in the destination folder.

Update in new path (folder only) — Objects in the destination folder are updated to use the path of the source folder.

Merge privileges (user/group only) — The privileges of the source user or group are added to the privileges of the destination user or group.
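The timestamp-based actions reduce to Replace or Use existing once the two modification times are compared. A sketch, with timestamps as plain numbers for illustration:

```python
def resolve(action, src_mtime, dst_mtime):
    """Reduce Use newer / Use older to a concrete action by comparing
    modification times; other actions are applied as-is."""
    if action == "Use newer":
        return "Replace" if src_mtime > dst_mtime else "Use existing"
    if action == "Use older":
        return "Use existing" if src_mtime > dst_mtime else "Replace"
    return action

print(resolve("Use newer", 200, 100))   # -> Replace (source is newer)
print(resolve("Use older", 200, 100))   # -> Use existing (source is newer)
```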
Warehouse and other database tables associated with the objects moved are handled in specific ways, depending on your conflict resolution choices. For details, see Handling tables, page 515. If you choose to replace a schema object, the following message may appear:
The schema has been modified. In order for the changes to take effect, you must update the schema. This message also appears if you choose to replace an application object that depends on an attribute, and you have made changes to that attribute by modifying its form properties at the report level or its column definition through another attribute. For information about modifying the properties of an attribute, see the MicroStrategy Project Design Guide. To update the project schema, from the Object Manager Project menu, select Update Schema. For details about updating the project schema, see Chapter 8, Optimizing and Maintaining your Project, in the MicroStrategy Project Design Guide.
To resolve a conflict
1 Choose an option from the Action drop-down list (see table above).
2 Click Proceed on the toolbar.
- Application objects (listed on page 496)
- Schema objects (listed on page 496)
- Configuration objects (listed on page 496)
- Folders
- Users and user groups
For example, you can set application objects to default to Use newer to ensure that you always have the most recent version of any metrics and reports. You can set schema objects to default to Replace to use the source project's version of attributes, facts, and hierarchies. You can always change the conflict resolution action for a given object when you copy that object.
To set the default conflict resolution actions
1 From the Tools menu, select Object Manager Preferences. The Object Manager Preferences dialog box opens.
2 Select the Conflict Resolution tab.
3 Click Set Default Actions. The Default Object Actions dialog box opens.
4 Make any changes to the default actions for each category of objects.
For a list of application, configuration, and schema objects, see page 496.
For an explanation of each object action, see Choosing an action to resolve a conflict, page 510.
5 Click OK to save your changes and close the Default Object Actions dialog box.
1 From the Tools menu, select Object Manager Preferences. The Object Manager Preferences dialog box opens.
2 Click the Conflict Resolution tab.
3 Under Access Control List, select the Replace existing ACL when replacing objects option to cause objects to keep the ACL from the source project.
4 Click OK. The Object Manager Preferences dialog box closes.
Handling tables
When an attribute or fact is migrated from one project to another using Object Manager, all dependent tables are also migrated. This includes warehouse tables as well as MDX tables and XDA tables. The following list and related tables explain how the attribute-table or fact-table relationship is handled, based on the existing objects and tables and the conflict resolution action you select. In the following list and tables:
- Attribute, fact, and table descriptions refer to the destination project. For example, "new attribute" means the attribute is new to the destination project: it exists in the source project but not the destination project.
- The word "object" refers to the attribute or fact.
- New attribute or fact, new table: There is no conflict resolution. The table is moved with the object.
- New attribute or fact, existing table: The object in the source project contains a reference to the table in its definition. The table in the destination project has no reference to the object because the object is not present in the destination project. In this case the new object will have the same references to the table as it did in the source project.
- Existing attribute or fact, new table: The object in the destination project does not refer to the table because the table does not exist in the destination project. The object in the source project contains a reference to the table in its definition.
- Existing attribute or fact, existing table: The object has a reference to the table in the source project but has no reference to it in the destination project.
- Existing attribute or fact, existing table: The object has no reference to the table in the source project but has a reference to it in the destination project.
The rules listed above apply regardless of whether the parent is moved explicitly or because it is a dependent of another object. For example, whether you move a report containing an attribute or just the attribute itself, neither move affects the attribute-table relationship. For an explanation of parent-child relationships, see Managing object dependencies, page 500.
- Migrating objects through development, testing, and production projects as the objects become ready for use.
- Receiving a new version of a project from an application developer.
In either case, you must move objects from development to testing, and then to the production projects that your users use every day. Topics covered in this section include:
- What happens when you merge projects, page 518
- Merging projects with the Project Merge Wizard, page 519
- Running Project Merge from the command prompt, page 521
- Scheduling a project merge, page 524
- Resolving conflicts when merging projects, page 525
- If an object ID does not exist in the destination project, the object is copied from the source project to the destination project.
- If an object exists in the destination project and has the same object ID and version in both projects, the objects are identical and a copy is not performed.
- If an object exists in the destination project and has the same object ID in both projects but a different version, there is a conflict that must be resolved. The conflict is resolved by following the set of rules specified in the Project Merge Wizard and stored in an XML file. The possible conflict resolutions are discussed in Project Merge conflict resolution rules, page 525.
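The three rules above can be sketched as one decision function (the object IDs and version strings are illustrative):

```python
def merge_decision(src_obj, dst_objects):
    """Decide what Project Merge does with one source object.

    src_obj is an (object ID, version) pair; dst_objects maps object
    ID -> version in the destination project. Returns "copy",
    "skip" (objects identical), or "conflict".
    """
    obj_id, version = src_obj
    if obj_id not in dst_objects:
        return "copy"
    if dst_objects[obj_id] == version:
        return "skip"
    return "conflict"

dest = {"A1": "v1", "B2": "v3"}
print(merge_decision(("C3", "v1"), dest))  # -> copy (ID not in destination)
print(merge_decision(("A1", "v1"), dest))  # -> skip (same ID and version)
print(merge_decision(("B2", "v1"), dest))  # -> conflict (same ID, different version)
```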
You cannot undo a project merge. Back up your metadata before using Project Merge.
1 From the Windows Start menu, point to Programs, then MicroStrategy, then Administrator, then Object Manager, and then select Project Merge Wizard. The Project Merge Wizard opens.
2 Follow the steps in the wizard to set your options and conflict resolution rules.
For details about all settings available when running the wizard, see the online help (press F1 from within the Project Merge Wizard). For information about the rules for resolving conflicts, see Resolving conflicts when merging projects, page 525.
3 Near the end of the wizard, when you are prompted to perform the merge or generate a log file only, select Generate log file only. Also, choose to Save Project Merge XML. At the end of the wizard, click Finish. Because you selected to generate a log file only, this serves as a trial merge.
4 After the trial merge is finished, you can read through the log files to see what would have been copied (or not copied) if the merge had actually been performed.
5 Based on what you learn from the log files, you may wish to change some of the conflict resolution rules you set when going through the wizard. To do this, run the wizard again and, at the beginning of the wizard, choose to Load the Project Merge XML that you created in the previous run. As you proceed through the wizard, you can fine-tune the settings you specified earlier. At the end of the wizard, choose to Generate the log file only (thereby performing another trial) and choose Save the Project Merge XML. Repeat this step as many times as necessary until the log file indicates that objects are copied or skipped as you desire.
6 When you are satisfied that no more rule changes are needed, run the wizard a final time. At the beginning of the wizard, load the Project Merge XML as you did before. At the end of the wizard, when prompted to perform the merge or generate a log file only, select Perform merge and generate log file.
Parameters: -sp, -dp, -smp, -dmp, -sup, -MD, -SU, -h
A sample command using this syntax is provided below. The command assumes that hello is the password for all of the project source and database connections. The login IDs used with these passwords are stored in the XML file created by the Project Merge Wizard.

projectmerge -fc:\temp\merge.xml -sphello -dphello -smphello -dmphello -MD -SU

If the XML file name or path contains a space, you must enclose the name in double quotes, such as:

projectmerge -f"c:\program files\xml\merge.xml" -sphello -dphello -smphello -dmphello -MD -SU
be useful when you wish to propagate a change to several projects simultaneously. During a multiple merge, the Project Merge Wizard is prevented from locking the projects. This is so that multiple sessions of the wizard can access the source projects. You will need to manually lock the source project before beginning the merge. You will also need to manually lock the destination projects at the configuration level before beginning the merge. Failing to do this may result in errors in project creation due to objects being changed in the middle of a merge. For information on locking and unlocking projects, see Locking projects, page 494. For detailed instructions on how to manually lock and unlock projects, see the Desktop online help (press F1 from within Desktop). To do this, you must modify the Project Merge XML file, and then make a copy of it for each session you wish to run.
To execute multiple simultaneous merges from one project
1 In a text editor, open the Project Merge Wizard XML file.
2 In the OMOnOffSettings section of the file, add the following node: <Option><ID>OMOnOffSettings</ID><SkipProjectMergeSourceLocking/><SkipProjectMergeDestConfigLocking/></Option>
3 Make one copy of the XML file for each session of the Project Merge Wizard you wish to run.
4 In each XML file, make the following changes:
- Correct the name of the destination project.
- Ensure that each file uses a different Project Merge log file name.
5 Manually lock the source project. For detailed steps on locking projects manually, see the Desktop online help (press F1 from within Desktop).
6 Manually lock the destination projects at the configuration level. For detailed steps on locking projects manually, see the Desktop online help (press F1 from within Desktop).
7 For each XML file, run one instance of the Project Merge Wizard from the command line.
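Steps 3 and 4 — one XML copy per session, each with its own destination project and log file name — can be scripted. The sketch below assumes a simplified template; the element names are hypothetical, and the real Project Merge XML contains many more settings:

```python
import xml.etree.ElementTree as ET

# Hypothetical template standing in for the wizard-generated file.
template = """<ProjectMerge>
  <DestinationProject>Dev</DestinationProject>
  <LogFile>merge.log</LogFile>
</ProjectMerge>"""

def make_session_xml(destination, log_file):
    """Copy the template and set a distinct destination project and
    log file name for one Project Merge session."""
    root = ET.fromstring(template)
    root.find("DestinationProject").text = destination
    root.find("LogFile").text = log_file
    return ET.tostring(root, encoding="unicode")

# One file per session, each with its own destination and log file.
for dest in ("Test", "Production"):
    with open(f"merge_{dest}.xml", "w", encoding="utf-8") as f:
        f.write(make_session_xml(dest, f"merge_{dest}.log"))
```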
1 From the Microsoft Windows machine where Project Merge is installed, from the Start menu, select Programs, then choose Command Prompt.
2 Change to the drive on which the Project Merge utility is installed. The default installation location is the C: drive (the prompt appears as C:\>).
3 Type an AT command that calls the projectmerge command. For a list of the syntax options for this command, see Running Project Merge from the command prompt, page 521.
Note: Non-empty folders in the destination location will never have the same version ID and modification time as the source, because the folder is copied first and the child objects are added to it, thus changing the version ID and modification times during the copy process.
Action — Effect

Keep both — No change is made to the destination object. The source object is duplicated in the destination location.

Use newer — If the source object's modification time is more recent than the destination object's, the Replace action is used. Otherwise, the Use existing action is used.

Use older — If the source object's modification time is more recent than the destination object's, the Use existing action is used. Otherwise, the Replace action is used.
- Comparing objects between two projects, page 527
- Tracking your projects with the Search Export feature, page 529
- Identical in both projects
- Identical in both projects except for the folder path
- Only present in the source or destination project
- Different between projects, and newer in the source or destination project
You can print this result list, or save it as a text file or an Excel file. Since the Project Comparison Wizard is a part of Object Manager, you can also select objects from the result set to immediately migrate from the source project to the destination project. For more information about migrating objects using Object Manager, see Copying objects between projects, page 495.
the wizard, see the online help (from the wizard, click Help, or press F1). Note the following:
- To compare two projects with the Project Comparison Wizard, those projects must have related schemas. This means that either one project must be a duplicate of the other, or both projects must be duplicates of a third project. For information about duplicating projects, including instructions, see Duplicating a project, page 485.
- The Project Comparison Wizard is a part of Object Manager and requires the same privileges as Object Manager to run. Specifically, to log in to a project source using Object Manager you must have the Bypass all security access checks and Create configuration objects privileges for that project. For an overview of Object Manager, see Copying objects between projects, page 495.
1 From the Windows Start menu, point to Programs, then MicroStrategy, then Administrator, then choose Object Manager. Object Manager opens.
2 Open a project source in Object Manager.
3 From the Project menu, select Compare Projects. The Project Comparison Wizard opens.
4 Select the source and destination projects.
5 Specify whether to compare all objects or just objects in a specific folder, and what types of objects to compare.
6 Review your choices at the summary screen and click Finish. The objects in the two projects are compared and the Project Comparison Result Set dialog opens. This dialog lists all the objects you selected and the results of their comparison.
7 To save the results, from the File menu select Save as text file or Save as Excel file.
8 To migrate objects from the source project to the destination project using Object Manager, select those objects in the list and click Proceed. For more information about Object Manager, see Copying objects between projects, page 495.
• The user who was logged in when the search was performed.
• The search type, date and time, and project name.
• Any search criteria entered into the tabs of the Search for Objects dialog box.
• Any miscellaneous settings in Desktop that affected the search (such as whether hidden and managed objects were included in the search).
• A list of all the objects returned by the search, including any folders. The list includes object names and paths (object locations in the Desktop interface).
1 In MicroStrategy Desktop, from the Tools menu, select Search for Objects. The Search for Objects dialog box opens.
2 Perform your search. For information on how to configure a search in Desktop, consult the online help. (Click Help or press F1.)
3 After your search is complete, from the Tools menu in the Search for Objects dialog box, select Export to Text. The text file is saved by default to C:\Program Files\MicroStrategy\Desktop\SearchResults_<date and timestamp>.txt, where <date and timestamp> is the day and time when the search was saved. For example, the text file named SearchResults_022607152554.txt was saved on February 26, 2007, at 15:25:54, or 3:25 PM.
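The embedded timestamp follows an MMDDYYHHMMSS pattern, which can be decoded if you need to sort or filter saved search results programmatically. A small illustrative sketch (not part of the product):

```python
from datetime import datetime

def parse_search_result_timestamp(filename):
    """Decode the date/timestamp embedded in a Desktop search-results
    file name such as SearchResults_022607152554.txt (MMDDYYHHMMSS)."""
    stamp = filename.rsplit("_", 1)[1].split(".")[0]
    return datetime.strptime(stamp, "%m%d%y%H%M%S")

print(parse_search_result_timestamp("SearchResults_022607152554.txt"))
# -> 2007-02-26 15:25:54
```

Sorting a directory of exported result files by this parsed value orders them chronologically even if file-system timestamps have been disturbed by copying.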
Integrity Manager can execute any or all reports from the project, note whether those reports execute successfully, and show you the results of each report. Integrity Manager can also test the performance of an Intelligence Server by recording how long it takes to execute a given report or document. You can execute the reports or documents multiple times in the same test and record the time for each execution cycle, to get a better idea of the average Intelligence Server performance time. For more information about performance tests, see Testing Intelligence Server performance, page 534.

For reports you can test and compare the SQL, grid data, graph, or Excel output. For documents you can test and compare the Excel output, or test whether the documents execute properly. If you choose not to test and compare the Excel output, no output is generated for the documents; Integrity Manager still reports whether the documents executed successfully and how long they took to execute. Note the following:
• To execute an integrity test on a project, you must have the Use Integrity Manager privilege for that project.
• Integrity Manager can only test projects in Server (three-tier) mode. Projects in Direct Connection (two-tier) mode cannot be tested with this tool.
• To test the Excel export of a report or document, you must have Microsoft Excel installed on the machine running Integrity Manager.
This section describes how to use Integrity Manager to view and compare reports and documents. Topics include:
• What is an integrity test?, page 532
• Creating an integrity test, page 536
• Executing an integrity test, page 539
• Viewing the results of a test, page 544
• Project-versus-project integrity tests compare reports/documents from two different projects. This is useful when you are moving a project from one environment to another (for instance, out of development and into production), and you want to ensure that the migration does not cause changes in any reports or documents in the project.
• Baseline-versus-project integrity tests compare reports/documents from a project against a previously established baseline. The baseline can be established by running a single-project integrity test, or taken from a previous execution of a project-versus-project integrity test.
Baseline-versus-project tests can be used as an alternative to project-versus-project tests when no base project is available, or when running against a production Intelligence Server would be too costly in terms of system resources. In addition, with a baseline-versus-project test you can manually modify the baseline results that the target project is compared against.
Baseline-versus-baseline integrity tests compare reports/documents from two previously established baselines against each other. These baselines can be established by running single-project integrity tests (see below), or taken from a previous execution of a project-versus-project integrity test.
These tests can be useful if you have existing baselines from previous tests that you want to compare. For example, your system is configured in the recommended application life cycle of development -> test -> production (for more information on this life cycle, see The application life cycle, page 478). You have an existing baseline from a single-project test of the production project, and the results of a project-versus-project test on the development and test projects. In this situation, you can use a baseline-versus-baseline test to compare the production project to the test project.
In each of these comparative tests, Integrity Manager executes the specified reports/documents in both the baseline and the target. You can compare the report data, generated SQL code, graphs, and Excel exports for the tested reports; you can compare the Excel exports for tested documents, or test the execution of the documents without exporting the output. Integrity Manager informs you which reports/documents are different between the two projects, and highlights in red the differences between them.
Performance comparison tests should be run as single-project integrity tests. This reduces the load on Integrity Manager and ensures that the recorded times are as accurate as possible. To compare performance on two Intelligence Servers, MicroStrategy recommends following the steps below:
a Perform a single-project test against one project, saving the performance results.
b Perform a single-project test against the second project, saving the performance results.
c Compare the two performance results in a baseline-versus-baseline test.
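The baseline-versus-baseline comparison of two saved performance results amounts to comparing average execution times per report across the recorded cycles. A minimal sketch of that arithmetic, using hypothetical report names and timings:

```python
def average_times(cycle_times):
    """Average per-report execution times over all recorded cycles.
    cycle_times maps a report name to a list of per-cycle seconds."""
    return {name: sum(t) / len(t) for name, t in cycle_times.items()}

# Hypothetical timings from the two saved performance results:
first = {"Revenue by Region": [4.1, 3.9, 4.0]}
second = {"Revenue by Region": [5.2, 5.0, 5.1]}

for name in first:
    delta = average_times(second)[name] - average_times(first)[name]
    print(f"{name}: {delta:+.2f} s")
# -> Revenue by Region: +1.10 s
```

Averaging over several cycles, as the Cycles setting allows, smooths out one-off delays such as cache generation on the first execution.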
• If you are using a baseline-versus-project test or a baseline-versus-baseline test, make sure that the tests have processed the reports/documents in the same formats. Execution times are not recorded for each format, only for the aggregate generation of the selected formats. Thus, comparing a baseline with SQL and Graph data against a test of only SQL data is likely to give inaccurate results.
• If the Use Cache setting is selected on the Select Execution Settings page of the Integrity Manager Wizard, make sure that a valid cache exists for each report or document to be tested. Otherwise the first execution cycle of each report/document takes longer than the subsequent cycles, because it must generate the cache for the other cycles to use. One way to ensure that a cache exists for each report/document is to run a single-project integrity test of each report before you run the performance test.
• In the Integrity Manager Wizard, on the Select Execution Settings page, make sure Concurrent Jobs is set to 1. This causes Intelligence Server to run only one report or document at a time, and provides the most accurate benchmark results for that Intelligence Server.
• The Cycles setting on the Select Generation Options page of the Integrity Manager Wizard indicates how many times each report or document is executed. A high value for this setting can dramatically increase the execution time of your test, particularly if you are running many reports or documents, or several large reports/documents.
Wait until the performance test is complete before attempting to view the results of the test in Integrity Manager. Otherwise the increased load on the Integrity Manager machine may inflate the recorded times for reasons unrelated to Intelligence Server performance.
1 Start Integrity Manager. (From the Start menu, point to Programs, then MicroStrategy, then Integrity Manager, then select Integrity Manager.)
2 From the File menu, select Create Test. The Integrity Manager Wizard opens and the Welcome page is displayed.
3 Select the type of test you want to create:
• To compare reports/documents from two projects, select Project versus project integrity test.
• To compare reports/documents against a previously established baseline, select Baseline versus project integrity test.
• To compare reports/documents from two previously established baselines, select Baseline versus baseline integrity test.
• To confirm that reports/documents in a project execute without errors, select Single project integrity test.
4 Specify the baselines and projects to be tested. For each project, provide a MicroStrategy login and password with the Use Integrity Manager privilege for that project.
5 Select the reports/documents to be tested. You can select individual reports/documents, or entire folders. You can also select search objects; in this case, Integrity Manager tests all reports/documents from the results of the search object.
6 Specify test execution options, such as the answers to any unanswered value prompts, logging options, and whether to use the report/document cache.
7 Select what types of analysis to perform. For reports, you can analyze any or all of the grid data, underlying SQL, graph data, or Excel export. For documents you can analyze the Excel export.
Only reports that have been saved in Graph or Grid/Graph view can be analyzed as graphs.

You can also choose to record the execution time of each report/document, to analyze the performance of the Intelligence Server.
8 Review the information presented on the Summary page.
9 To save the settings for use in later tests, click Save Test. Navigate to the desired directory, enter a file name, and click OK.
For instructions on executing a saved test, see Saving and loading a test, page 538.
10 To execute the test immediately, regardless of whether you saved the settings, click Run. The Integrity Manager Wizard closes and Integrity Manager begins to execute the selected reports/documents. As the reports execute, the results of each report/document appear in the Results Summary area of the Integrity Manager interface.
1 Step through the Integrity Manager Wizard and answer its questions. For detailed instructions, see Creating an integrity test, page 536.
2 When you reach the Summary page of the Integrity Manager Wizard, click Save Test. A Save dialog opens.
3 Navigate to the desired folder and enter a file name to save the test as. By default this file will have an extension of .mtc.
4 Click OK. The test settings are saved to the specified file.
You can execute the test immediately by clicking Run. The Integrity Manager Wizard closes and Integrity Manager begins to execute the selected reports/documents. As they execute, their results appear in the Results Summary area of the Integrity Manager interface.
To load a previously saved test
1 In Integrity Manager, from the File menu select Load Test. An Open File dialog box opens.
538 Verifying reports and documents
2008 MicroStrategy, Inc.
2 Navigate to the file containing your test information and open it. The Integrity Manager Wizard opens at the Welcome page. The settings for the test are loaded into each page of the wizard.
The default extension for integrity test files is .mtc.
3 Step through the wizard and confirm the settings for the test.
4 At the Enter Base Project Information page and Enter Target Project Information page, enter the password for the login used to access the base or target project.
5 When you reach the Summary page, review the information presented there. When you are satisfied that the test settings shown are correct, click Run. The Integrity Manager Wizard closes and Integrity Manager begins to execute the selected reports/documents. As they execute, their results appear in the Results Summary area of the Integrity Manager interface.
In Windows, to execute an integrity test against an Intelligence Server on a machine other than the one Integrity Manager is running on, you need to add an entry to the hosts file for the machine Integrity Manager is running on.
To add an entry to the hosts file
1 On the machine running Integrity Manager, locate the hosts file. (On Windows, this file is located by default in C:\Windows\system32\drivers\etc.)
2 Open the hosts file with a text editor, such as Notepad.
3 For each Intelligence Server machine that you want to test against, add a line to the file in the same format as the examples given in the file.
4 Save and close the hosts file. You can now execute integrity tests against the Intelligence Servers specified in the file.
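As a sketch of what a hosts entry involves — each non-comment line maps an IP address to one or more host names, in the same format as the examples in the file — the following Python fragment checks whether a host name is already mapped. The sample content and host names are hypothetical:

```python
def has_hosts_entry(hosts_text, hostname):
    """Return True if a non-comment line in the hosts file already
    maps the given Intelligence Server host name."""
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0]          # strip trailing comments
        fields = line.split()
        if len(fields) >= 2 and hostname in fields[1:]:
            return True
    return False

sample = "127.0.0.1  localhost\n10.20.30.40  iserver01  # test box\n"
print(has_hosts_entry(sample, "iserver01"))   # -> True
```

A check like this can be useful in a setup script, to avoid adding duplicate entries before running scheduled integrity tests.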
• Prompts that cannot be answered at all, such as an element list prompt that contains no elements in the list
• Level prompts that use the results of a search object to generate a list of possible levels
• Prompted metric qualifications (used in filters or custom groups)
• MDX expression prompts
1 Any prompts that have default answers are answered with those defaults.
2 If the Answer optional prompts option is selected on the Select Prompt Settings page of the Integrity Manager Wizard, optional value and hierarchy prompts without default answers are answered in the next steps in the same way as required prompts without default answers.
If the Answer optional prompts option is not selected, any optional value and hierarchy prompts without default answers are left unanswered. For specific information about this part of the wizard, see the Integrity Manager online help (click Help or press F1 from inside Integrity Manager).
3 Any required value prompts without default answers are answered with the defaults provided in the Select Prompt Settings page. (A value prompt is a type of prompt that requires the user to type an entry in a field.)
If a list of prompt answers is provided for a type of value prompt, Integrity Manager cycles through the provided answers, starting at the beginning of the list.
4 Any required hierarchy prompts without default answers are answered by selecting as many elements from the hierarchy as are specified in the Select Prompt Settings page. (A hierarchy prompt requires the user to select elements from a hierarchy.)
5 Integrity Manager then uses its internal logic to attempt to answer any other required prompts without default answers. For example, a prompt that requires a certain number of elements to be selected from a list is answered by selecting that number of elements from the beginning of the list.
If a prompt cannot be answered by Integrity Manager, the report execution fails and the report's status changes to Not Supported. A detailed description of the prompt that could not be answered can be found in the Details tab of the Report Data area for that failed report. To view this description, select the report in the Results Summary area and then click the Details tab. For an introduction to prompts, see the MicroStrategy Basic Reporting Guide.
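The cycling behavior for a list of provided value-prompt answers — start at the beginning of the list, and wrap around when the answers run out — can be pictured with a short sketch. The answer list here is hypothetical:

```python
from itertools import cycle

# Hypothetical list of answers supplied for value prompts of one type:
provided = ["2006", "2007", "2008"]
answers = cycle(provided)   # restarts at the beginning when exhausted

# Five prompted reports each consume the next answer in turn:
print([next(answers) for _ in range(5)])
# -> ['2006', '2007', '2008', '2006', '2007']
```

This is only an illustration of the round-robin idea, not Integrity Manager's internal implementation.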
table prefix of PROD. You want Integrity Manager to treat the table prefixes as identical for purposes of comparison, because reports that differ only in their table prefixes should be considered identical. In this case, you can use the SQL Replacement feature to replace TEST with PREFIX in the base project, and PROD with PREFIX in the target project. Now, when Integrity Manager compares the report SQL, it treats all occurrences of TEST in the base and PROD in the target as PREFIX, so they are not considered to be differences.

The changes made by the SQL Replacement Table are not stored in the SQL files for each report. Rather, Integrity Manager stores those changes in memory when it executes the integrity test.

Access the SQL Replacement feature from the Advanced Options dialog box, on the Select Processing Options page of the Integrity Manager Wizard.
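The effect of the SQL Replacement feature can be sketched as a string substitution applied to each side's SQL before comparison — an illustration of the idea, not Integrity Manager's actual implementation:

```python
def normalize(sql, replacements):
    """Apply SQL Replacement-style substitutions before comparing,
    so prefix-only differences are not flagged."""
    for old, new in replacements:
        sql = sql.replace(old, new)
    return sql

# TEST -> PREFIX in the base project, PROD -> PREFIX in the target:
base = normalize("SELECT * FROM TEST_FACT", [("TEST", "PREFIX")])
target = normalize("SELECT * FROM PROD_FACT", [("PROD", "PREFIX")])
print(base == target)   # -> True
```

After normalization, both statements read SELECT * FROM PREFIX_FACT, so the prefix difference no longer registers as a discrepancy.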
After creating a saved test (for instructions, see To save test settings, page 538), call the Integrity Manager executable MIntMgr.exe. By default this executable is located in C:\Program Files\MicroStrategy\ Integrity Manager.
The syntax is: MIntMgr.exe -f FileLocation\Filename.mtc [-bp BasePassword] [-tp TargetPassword] where:
• FileLocation is the path to the saved test file.
• Filename.mtc is the name of the saved test file.
• BasePassword is the password for the user specified in the test file to log in to the base project. This is not required for a baseline-versus-project or baseline-versus-baseline integrity test.
• TargetPassword is the password for the user specified in the test file to log in to the target project. This is not required for a single-project or baseline-versus-baseline integrity test.
For example, to run a saved single-project test file named MonthlyTest.mtc located in C:\Tests for which the password for the specified user in the base project is admin123, use the command MIntMgr.exe -f C:\Tests\MonthlyTest.mtc -bp admin123. For information on the syntax for the Windows AT command or a UNIX scheduler, see the documentation for your operating system.
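A scheduler script can assemble and launch this command line. A minimal Python sketch, with assumed installation and test-file paths:

```python
# Assumed paths; adjust to your installation and saved test file.
exe = r"C:\Program Files\MicroStrategy\Integrity Manager\MIntMgr.exe"
test = r"C:\Tests\MonthlyTest.mtc"

# Build the argument list: -f <test file> -bp <base-project password>
cmd = [exe, "-f", test, "-bp", "admin123"]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to launch the saved test
```

Passing the arguments as a list (rather than one concatenated string) avoids quoting problems with the spaces in the default installation path.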
• Pending reports/documents have not yet begun to execute.
• Running reports/documents are in the process of executing.
In a performance test, this status appears as Running (#/#). The first number is the current execution cycle. The second number is the number of times the report/document will be executed in the test.
• Paused (#/#) reports/documents, in a performance test, have executed some but not all of their specified number of cycles when the test execution is paused. The first number is the number of cycles that have been executed. The second number is the number of times the report/document will be executed in the test.
• Completed reports/documents have finished their execution without errors.
• Timed Out reports/documents did not finish executing in the time specified in the Max Timeout field in the Select Execution Settings page. These reports/documents have been cancelled by Integrity Manager and will not be executed again during this run of the test.
• Error indicates that an error has prevented this report or document from executing correctly. To view the error, double-click the status. The report details open in the Report Data area of Integrity Manager, below the Results Summary area. The error message is listed in the Execution Details section.
• Not Supported reports/documents contain one or more prompts for which an answer could not be automatically generated. To see a description of the errors, double-click the status. For details of how Integrity Manager answers prompts, see Executing prompted reports with Integrity Manager, page 540.
Additional information for Completed reports/documents is available in the Data, SQL, Graph, and Excel columns:
• Matched indicates that the results from the two projects are identical for the report/document. In a single-project integrity test, Matched indicates that the report/document executed successfully.
• Not Matched indicates that a discrepancy exists between the two projects for the report/document. To view the reports/documents from each project in the Report Data area, select them in the Results Summary area.
• Not Compared indicates that Integrity Manager was unable to compare the reports/documents for this type of analysis. This can be because the report/document was not found in the target project, because one or more prompts are not supported by Integrity Manager, or because an error prevented the report or document from executing.
• Not Available indicates that Integrity Manager did not attempt to execute the report/document for this type of analysis. This may be because this type of analysis was not selected on the Select Processing Options page, or (if N/A is present in the Graph column) because the report was not saved as a Graph or Grid/Graph.
To view a Completed report or document and identify discrepancies, select its entry in the Results Summary. The report/document appears in the Report Data area of Integrity Manager, below the Results Summary. In a comparative integrity test, both the base and the target report/document are shown in the Report Data area. Any differences between the base and target are highlighted in red, as follows:
• In the Data, SQL, or Excel view, the differences are printed in red. In Data and Excel view, to highlight and bold the next or previous difference, click the Next Difference or Previous Difference icon.
• In the Graph view, the current difference is circled in red. To circle the next or previous difference, click the Next Difference or Previous Difference icon. To change the way differences are grouped, use the Granularity slider. For more information about differences in graph reports, see Grouping differences in graph reports.
• Viewing graphs in Overlap layout enables you to switch quickly between the base and target graphs. This layout makes it easy to compare the discrepancies between the two graphs.
The white space between the words is the same in both the base and target reports. When the granularity is set to a low level, this unchanged space causes Integrity Manager to treat each word as a separate difference.
If the granularity is set to a higher level, the space between the words is no longer sufficient to cause Integrity Manager to treat each word as a separate difference, and the differences in the title are all grouped together.
Integrity Manager also creates a separate folder within the output folder for the report/document results from each project. These folders are named after the Intelligence Servers on which the projects are kept.
• For the baseline server, _0 is appended to the machine name to create the name of the folder.
• For the target server, _1 is appended to the machine name.
For example, suppose a machine executes a project-versus-project integrity test at nine A.M. on the first Monday of each month. The baseline project is on a machine named ARCHIMEDES, and the target project is on a machine named PYTHAGORAS. The folder for the results from the baseline project is archimedes_0, and the folder for the results from the target project is pythagoras_1.
In a baseline-versus-project integrity test, the baseline folder is named baseline_0. In a baseline-versus-baseline integrity test, the baseline folder is named baseline_0 and the target folder is named baseline_1.

Each results folder contains a number of files containing the results of each report that is tested. These files are named <ID>_<GUID>.<ext>, where <ID> is the number indicating the order in which the report was executed, <GUID> is the report object GUID, and <ext> is an extension based on the type of file. The report results are saved in the following files:
• SQL is saved in plain text format, in the files <ID>_<GUID>.sql.
• Grid data is saved in CSV format, in the file <ID>_<GUID>.csv, but only if you select the Save CSV files check box in the Advanced Options dialog box. Otherwise it is not saved.
• Graph data is saved in PNG format, in the file <ID>_<GUID>.png.
• Excel data is saved in XLS format, in the file <ID>_<GUID>.xls, but only if you select the Save XLS files check box in the Advanced Options dialog box. Otherwise it is not saved.

Only report results for formats requested in the Select Processing Options page during test setup are generated. Only reports that have been saved in Graph or Grid/Graph format can be saved as graphs.
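The <ID>_<GUID>.<ext> naming convention can be unpacked programmatically, for example when post-processing a results folder. A sketch, assuming GUIDs are hexadecimal strings; the example file name is made up:

```python
import re

PATTERN = re.compile(r"^(\d+)_([0-9A-Fa-f]+)\.(sql|csv|png|xls)$")

def parse_result_file(name):
    """Split a results file name of the form <ID>_<GUID>.<ext> into
    (execution order, report GUID, format), or None if it doesn't match."""
    m = PATTERN.match(name)
    if not m:
        return None
    order, guid, ext = m.groups()
    return int(order), guid, ext

print(parse_result_file("3_04C77C9A4A6DAF2145B9B5B47C915CBD.png"))
# -> (3, '04C77C9A4A6DAF2145B9B5B47C915CBD', 'png')
```

Files that do not match the pattern (such as baseline.xml or the .ser files described below) are simply skipped by returning None.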
Each results folder also contains a file called baseline.xml that provides a summary of the tested reports. Baseline.xml is the file used to provide a baseline summary for baseline-versus-project and baseline-versus-baseline integrity tests.

Integrity Manager creates a file named <ID>_<GUID>.ser for each report. These files contain serialized binary data that Integrity Manager uses when you open a previously saved set of test results, and are not intended for use by end users. These files are stored in the same folder as the test results.
To open a previously saved set of test results
1 In Integrity Manager, from the File menu, select Open Test Results.
2 Browse to the location of the saved test.
3 Select the ResultsSummary.xml file and click Open. The saved test results open in Integrity Manager, just as if you had executed the test.
• Freeform SQL
• Query Builder
• OLAP cube sources such as SAP BI, Hyperion Essbase, and Microsoft Analysis Services
For information on Freeform SQL and Query Builder, see the MicroStrategy Advanced Reporting Guide. For information on OLAP cube sources, see the MicroStrategy OLAP Cube Reporting Guide.

Managed objects are stored in a special system folder, and can be difficult to delete individually due to how these objects are created and stored. If you use one of the features listed above, and then decide to remove some or all of that feature's related reports and OLAP cubes from the project, there may be unused managed objects in your project that can be deleted. This section covers the following topics:
• Deleting managed objects one-by-one, page 552
• Deleting all unused managed objects, page 553
1 In Desktop, delete any Freeform SQL, Query Builder, or OLAP cube reports in the project that depend on the managed objects you want to delete.
If you are removing OLAP cube managed objects, you must also remove any OLAP cubes that these managed objects depend on.
2 Right-click the project and select Search for Objects. The Search for Objects dialog box opens.
3 From the Tools menu, select Options. The Search Options dialog box opens.
4 Select the Display managed objects and Display managed objects only check boxes.
5 Click OK to return to the Search for Objects dialog box.
6 Enter your search criteria and select Find Now. A list of managed objects appears.
7 Manually delete managed objects by right-clicking their name in the search result and selecting Delete.
1 Remove all reports created with Freeform SQL, Query Builder, or OLAP cubes.
If you are removing OLAP cube managed objects, you must also remove all imported OLAP cubes.
2 In Desktop, right-click the project and select Project Configuration. The Project Configuration Editor opens.
3 Expand the Database instances category.
4 Select either Relational or OLAP Cube, depending on the database instance you want to remove.
Freeform SQL and Query Builder use relational database instances, while SAP BI, Essbase, and Analysis Services use OLAP cube database instances. For more information on the difference between the two, see the MicroStrategy Installation and Configuration Guide.
5 Clear the check box for the database instance you want to remove from the project. You can only remove a database instance from a project if the database instance has no dependent objects in the project.
6 Click OK to accept the changes and close the Project Configuration Editor.
This procedure removes some preliminary object dependencies. Attribute and metric managed objects are not automatically deleted by this procedure, because you can reuse the managed attributes and metrics at a later time. If you do not plan to use the attribute and metric managed objects and want to delete them permanently from your project, continue through the rest of this procedure.
To delete unused attribute and metric managed objects
7 In Desktop, from the Administration menu, select Projects, and then select Delete unused managed objects.
TROUBLESHOOTING
Introduction
This chapter provides guidance for finding and fixing trouble spots in the system. While the material in the chapter does not go into great detail, it does provide references to the relevant portions of this guide where the topic or remedy is discussed in more detail. The discussions in this chapter are:
• Methodology for finding trouble spots, page 556
• Finding trouble spots using diagnostics, page 558
• Troubleshooting authentication, page 584
• Fixing inconsistencies in the metadata, page 592
• Troubleshooting date/time functions, page 596
• Troubleshooting performance, page 597
• Troubleshooting report results, page 598
• Failure to start MicroStrategy Intelligence Server, page 602
• Troubleshooting clustered environments, page 605
• Troubleshooting Enterprise Manager, page 608
• Resources for troubleshooting, page 611
Tune the system as necessary (see Tuning the system for best performance, page 368)
Use reports in MicroStrategy Enterprise Manager to see:
• Which components of the system are slow (use the Execution Cycle Breakdown report), to see if reports can be designed differently
• When the system is slowest and whether that relates to concurrency (use the Peak Time Period, Average Execution Time vs. Number of Sessions, and Average Execution Time vs. Number of Jobs per User reports)
• Whether scheduled reports are running during peak times (use the Scheduled Report Load report) and, if so, schedule them at off-peak times
• Whether caching certain reports would improve performance (use the Cache Analysis, Top 10 Reports, and Top 10 Longest Executing reports)
Tune the system (see Tuning the system for best performance, page 368, and FAQs for configuring and tuning MicroStrategy Web products, page 946)
• The connection to metadata may not be working (see Connecting to the MicroStrategy metadata, page 17)
• The connection to the data warehouse may not be working, or there may be problems with the data warehouse (see Connecting to the data warehouse, page 19)
• The number of result rows for the report may have exceeded the limit specified in the Project Configuration Editor or the VLDB Properties Editor (see Troubleshooting report results, page 598)
Check that the correct port numbers are set if you are using firewalls in your configuration (see Using firewalls, page 935)
• Configure what is logged using the Diagnostics and Performance Logging tool, as described in the next section
• View logged information in real time with the MicroStrategy Monitor, as described in Viewing log files in the Monitor, page 569
You can access the Diagnostics tool either through MicroStrategy Desktop as described below, or you can access it without logging into Desktop. To do the latter, from the Start menu, select Programs, select MicroStrategy, select Tools, and select Diagnostics Configuration.
To configure logging with the Diagnostics and Performance Logging tool
If you save any changes to settings within the Diagnostics and Performance Logging tool, you cannot automatically return to the out-of-the-box settings. If you might want to restore the original default settings later, note the default setup before making changes.
For details about this editor while you are using it, press F1 to see the online help.
2 Select one of two configurations to see the current default setup:
• Machine Default: The components and counters displayed reflect the client machine.
• CastorServer instance: The components and counters displayed reflect server-specific features.
When you select the CastorServer instance, you can select whether to use the default configuration. On the Diagnostics tab, this check box is named Use Default Diagnostics Configuration; on the Performance tab, this check box is called Use Default Performance Configuration. This check box refers to the Machine Default settings. No matter what you have changed and saved on either tab when CastorServer instance is selected, if you check the Use Default Configuration box, the system logs whatever information is configured for Machine Default at run time.
4 From the File menu, select Save. Your new settings are saved in the registry, and Intelligence Server begins logging the information you configured.
If logging does not take effect, restart Intelligence Server.

Once the system begins logging information, you can analyze it by viewing the appropriate log file. Log files are discussed below.
Diagnostics configuration
The Diagnostics and Performance Logging tool allows you to control two types of diagnostics logging, represented by two tabs, Diagnostics Configuration and Performance Configuration. The Diagnostics Configuration tab in the Diagnostics and Performance Logging tool is where you determine which specific system components you want to log diagnostics for. For each component, you can select the log file that Intelligence Server writes messages to. The Diagnostics Configuration tab contains the following columns:
Component: The MicroStrategy system component or feature for which you can log diagnostics messages.
Dispatcher: The trace type or system activity about which a message is written to a given log file.

System log: The Event Viewer system log file. The Event Viewer is a Windows tool, located by default in Administrative Tools. To access it, from the Start menu, select Settings, select Control Panel, and choose Administrative Tools.

Console log: The Monitor console log file. The Monitor is a MicroStrategy tool. To access it, from your Start menu, select Program Files, select MicroStrategy, select Tools, and select Monitor.

File log: The MicroStrategy .log files. This column contains several file logs you can select from for logging information from each component/dispatcher combination, including:
DSSErrors: The default diagnostics log file
DSSDMX: A log file specifically for recording DMX-related messages
Any other custom file log(s) you may have created

These log files are located at C:\Program Files\Common Files\MicroStrategy\Log. This location is set during installation and cannot be changed.
Common customizations
You should customize diagnostics only if you have experience administering your MicroStrategy system. If you do not, contact MicroStrategy Technical Support for help. The component/dispatcher combinations you choose to enable logging for depend on your environment, your system, and your users' activities. In general, the most useful dispatchers to select are the following:
Error: This dispatcher logs the final message before an error occurs, which can be important information to help detect the system component and action that caused or preceded the error.
Fatal: This dispatcher logs the final message before a fatal error occurs, which can be important information to help detect the system component and action that caused or preceded the fatal error.

Info: This dispatcher logs every operation and manipulation that occurs on the system.
Some of the most common customizations to the default diagnostics setup are shown in the following table. Each component/dispatcher combination in the table is commonly added to provide diagnostic information about that particular component and its related trace (dispatcher). To add a combination, select its check box.
Component                Dispatcher
Authentication Server    Trace
Database Classes         Connection Management
Database Classes         Connection Instances
Database Classes         Info
Database Classes         SQL Trace
Database Classes         Error
Metadata Server          SQL Trace
Metadata Server          Content Source Trace
Engine                   DFC Engine
Element Server           Element Source Trace
Element Server           Object Source Trace
Object Server            Content Source Trace
Object Server            Object Source Trace
Object Server            Scope Trace
Report Net Server        Scope Trace
Report Server            Cache Trace
Report Server            Object Source Trace
Report Server            Report Source Trace
Kernel                   Scheduler Trace
Component         Dispatcher
Kernel            User Trace
Kernel XML API    Trace

Note: If you enable this diagnostic, for the log file you select in the File Log column, make sure the Max File Size (KB) is set to at least 2000.
Performance configuration
The Performance Configuration tab in the Diagnostics and Performance Logging tool is where you fine-tune performance logging by determining which specific operating system categories and counters to log. For example, you may want to discover exactly how much time the CPU takes to perform a given system function. On the Performance Configuration tab you can also create a new log file to which Intelligence Server can write messages for performance counter logging. The Performance Configuration tab contains the following columns:
Category: The operating system component or feature for which you can log performance diagnostics messages.

Counter: The type of information about which a message is written to a selected log file. The actual counters listed change dynamically depending on which operating system is running.

File log: The MicroStrategy .log files. For each category/counter combination, this column allows you to choose whether to log to a selected file, shown at the far right of the Performance Configuration tab, as follows:
The default DSS Performance Monitor file log
Any other custom file log(s) you may have created

These logs are located at C:\Program Files\Common Files\MicroStrategy\Log. This location is set during installation and cannot be changed.
1 In the Diagnostics and Performance Logging tool, on the right-hand side of the Performance Configuration tab, from the File Name drop-down box, select a file to which you want to log performance counter data.
If you want to create a new log file, from the File Name drop-down box select New. The Log Destination Editor opens. See Creating a new log file below for steps to create a log file.
2 In the Logging Frequency (sec) field, enter how often you want data to be logged.

3 From the Persist Counters drop-down box, select whether you want counters persisted, as follows:

Yes: If the server is restarted, the system creates a separate log file (using the same file name but with a 2 added) to continue logging information.

No: If the server is restarted, the system continues to append logged information to the selected log file.
4 When you are finished configuring the performance counter log file, click Save on the toolbar. Your choices are saved for the selected log file.
1 In the Diagnostics and Performance Logging tool, from the Tools menu, select Log Destinations. The Log Destination Editor opens.

2 From the Select Log Destination drop-down box, select New.

3 In the File Name field, enter a name for your new log file.

4 In the Max File Size (KB) field, enter the size you want to limit your file to. General guidelines for setting this number are as follows:
If the Kernel XML API component is selected in the Diagnostics and Performance Logging tool, the maximum file size should be set to no lower than 2000 KB. To keep a longer history of information in this log file, you can enter 10,000 KB or more. Log files are always appended to. When a file reaches its maximum size, the file is backed up (with a .bak extension) and a new file is created.
5 From the File Type drop-down box, select whether you want this log file to record diagnostics data or performance data, as follows:

Diagnostics: If you select Diagnostics as the file type, your new log file becomes available as a selection from the File Log column on the Diagnostics Configuration tab, and can record information for any of the component/dispatcher combinations.

Performance: If you select Performance as the file type, your new log file becomes available as a selection from the right-hand side of the Performance Configuration tab, and can record information on performance counters.
6 When you are finished setting up your new log file, click Save. All log files are saved to C:\Program Files\Common Files\MicroStrategy\Log. This location is set during installation and cannot be changed.
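The size limit and .bak rollover behavior described above can be sketched in a few lines. This is an illustration only, not MicroStrategy code; the helper names (`roll_log`, `append_entry`) are made up.

```python
import os

def roll_log(path, max_size_kb):
    """When the log reaches its maximum size, back it up with a .bak
    extension so a fresh file is created; otherwise leave it alone so
    entries keep being appended."""
    if os.path.exists(path) and os.path.getsize(path) >= max_size_kb * 1024:
        backup = path + ".bak"
        if os.path.exists(backup):
            os.remove(backup)      # only one backup generation is kept
        os.rename(path, backup)    # the current log becomes the .bak file

def append_entry(path, line, max_size_kb=2000):
    """Logs are always appended to; the rollover check runs first."""
    roll_log(path, max_size_kb)
    with open(path, "a") as log:
        log.write(line + "\n")
```

The 2000 KB default mirrors the minimum recommended for the Kernel XML API component; adjust it to match the Max File Size (KB) you actually configure.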
Reading log files

Each entry in a log file contains the following information:

ID of the process that performed the action
ID of the thread that performed the action
Date and time at which the action happened
Name of the MicroStrategy component that performed the action
Type of the log file entry
Message about the action
Intelligence Server loads the report definition object named Length of Employment from the metadata.
286:[THR:480][02/07/2003::12:24:23:860][DSS ReportServer][Report Source Tracing] where Definition = Object(Name="Length of Employment" Type=3(Report Definition) ID=D1AE564911D5C4D04C200E8820504F4F Proj=B19DEDCC11D4E0EFC000EB9495D0F44F Ver=493C8E3447909F1FBF75C48E11AB7DEB )
Intelligence Server checks to see whether the report exists in the report cache.
286:[THR:480][02/07/2003::12:24:25:181][DSS ReportServer][Report Source Tracing]Finding in cache: ReportInstance(Name="Length of Employment" ExecFlags=0x1000180(OSrcCch UptOSrcCch ) ExecActn=0x1000180(RslvCB LclCch ) )
Intelligence Server checks for prompts and finds none in the report.
286:[THR:314][02/07/2003::12:24:25:432][DSS ReportServer][Report Source Tracing]No prompts in ReportInstance(Name="Length of Employment" ExecFlags=0x1000180(OSrcCch UptOSrcCch ) ExecActn=0x1000180(RslvCB LclCch ) )
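Entries like the samples above share a bracketed prefix (process ID, thread ID, timestamp, component, entry type) followed by the free-form message. A minimal sketch of splitting such a line, assuming every entry follows this layout:

```python
import re

# One entry per line: process ID, thread ID, timestamp, component,
# entry type, then the message (matching the field list above).
LOG_ENTRY = re.compile(
    r"^(?P<pid>\d+):"
    r"\[THR:(?P<thread>\d+)\]"
    r"\[(?P<timestamp>[^\]]+)\]"
    r"\[(?P<component>[^\]]+)\]"
    r"\[(?P<entry_type>[^\]]+)\]"
    r"(?P<message>.*)$"
)

def parse_entry(line):
    m = LOG_ENTRY.match(line)
    return m.groupdict() if m else None

entry = parse_entry(
    "286:[THR:480][02/07/2003::12:24:23:860]"
    "[DSS ReportServer][Report Source Tracing] where Definition = ..."
)
# entry["component"] == "DSS ReportServer"
```

Filtering a large DSSErrors.log by component or thread in this way makes it much easier to follow a single report execution.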
If the Monitor is launched from the client machine, the Monitor logs only client information. If the Monitor is launched from the server machine, the Monitor logs only server information. If the server and client are on the same machine, when the Monitor is launched from this machine it logs both Desktop and server information.
The Monitor is a viewer-type tool and does not allow you to edit logged information.
To open the Monitor
1 From your Start menu, select Program Files, then MicroStrategy, then Tools, then select Monitor.

2 The Monitor console window opens. Logged information appears in this console as soon as it is logged by the system.
1 Access the Web Administrator Page. 2 Click View Error log on the left side of the page.
- or -

In Web Universal, click View logs on the left side of the page.

For more information, see the online help accessible from the Administrator Page in the Web product.
Examples of problems that trigger an SSD are memory depletion or exceptions. Changes to the server definition (by using the ApplyRunTime API call via Command Manager or by making changes in Desktop) trigger a subset of the SSD information.
Maximum jobs per project
Maximum connections per project
Number of projects
Communication protocol and port
WorkingSet File Directory and Max RAM for WorkingSet

Cache values are currently not listed.
Review the above governor settings for unusually high values. This may help you understand why memory depletions are occurring.
Project name
Cache settings
Governor settings
DBRole used
DBConnection settings
Review the above settings for unusually high values, which may contribute to memory depletions. Also review the settings to ensure that the project is configured as you expect.
Thread load balancing mode
Memory throttling
Inbox settings
Idle timeouts
Review the above governor settings for unusually high values. This may help you understand why memory depletions are occurring. The memory and XML governor settings are especially important in memory depletion cases. Memory Contract Manager (MCM) is specifically designed to help you avoid memory depletions. For more information on MCM, see Memory Contract Manager, page 583.
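One way to review such settings systematically is to compare them against ceilings you consider sane for your environment. The setting names and numbers below are illustrative assumptions, not MicroStrategy defaults:

```python
# Hypothetical ceilings for one environment -- tune to your own system.
CEILINGS = {
    "Maximum jobs per project": 1000,
    "Maximum connections per project": 500,
    "Maximum number of XML cells": 500_000,
}

def flag_high_settings(settings, ceilings=CEILINGS):
    """Return the governor settings whose configured value exceeds the
    ceiling chosen for this environment."""
    return {name: value
            for name, value in settings.items()
            if name in ceilings and value > ceilings[name]}
```

Running such a check against the values reported in the SSD quickly surfaces the governors worth a closer look.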
This shows the memory profile of the Intelligence Server process and machine. If any of these values are near their limit, memory may be a cause of the problem.
Look for an unexpectedly high number of users or jobs. These may have caused the problem.
Reports
Documents
Administration tasks, such as idling projects and other tasks related to cache management
Look for unexpected schedules that might be running at the time of the dump. Schedules can add significant load to the system and contribute to problems such as memory depletions.
Working Set and Result Pool memory consumption
Total Num of Inbox Messages
Total Num of Report Messages
Total Num of Document Messages
Total Num of Document Child Report Messages
Look for high values in any of these numbers, especially in cases of memory depletion. They may also contribute to slow performance.
General information related to the current status of the job
Details on job steps completed and currently executing, along with timing
Job SQL, if available
These may be useful to see what the current load on Intelligence Server was, as well as what was executing at the time of the error. If the error is due to a specific report, the information here can help you reproduce it (to help MicroStrategy Technical Support). It may also help you avoid the problem while it is being fixed.
Be proactive. During the project planning/building phase, apply the concepts presented in the Tuning section. For information, see Tuning the system for best performance, page 368.

Be aware of the primary memory consumers and implement governors and system limits to rein them in.

Monitor the system with the Windows Performance Monitor (see Manage system memory and resources: Windows Performance Monitor, page 249) and MicroStrategy Enterprise Manager (see Analyzing system usage with Enterprise Manager, page 251).

Users' practices can evolve over time. New problems can arise as users are added to the system and as they become savvy in using it. These new potential problems may require tuning changes (see Tuning the system for best performance, page 368).
Virtual memory is physical memory (RAM) plus the disk page file (also called the swap file). It is shared by all processes running on the machine, including the operating system.
The User Address Space (UAS) is independent of virtual memory, and is of finite size. It is measured per process on the machine (such as the MSTRSVR.exe Intelligence Server application). By definition, in a 32-bit operating system, virtual bytes is limited to 4 GB (2^32). The Windows operating system divides this into two parts: the UAS and the System Address Space (SAS). The UAS is, in this case, for Intelligence Server to store data and code, while the SAS is for the operating system's use. Normally, Windows 2000 divides this 50/50 between the UAS and SAS, 2 GB each. However, 4GT mode in Windows 2000 Advanced Server (or a higher edition) and Windows NT Enterprise divides it 75/25 (3 GB UAS and 1 GB SAS).
Virtual bytes measures the use of the UAS. When virtual bytes reaches the UAS limit, it causes a memory depletion.
The Commit Limit shown in Windows Task Manager is not equal to virtual bytes.
Private bytes reflects virtual memory usage and is a subset of allocated virtual bytes.
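The address-space arithmetic above can be checked directly; the figures are exactly those stated in the text:

```python
GB = 2 ** 30

total_vas = 2 ** 32                    # 32-bit virtual address space: 4 GB
uas_default = total_vas // 2           # default 50/50 split: 2 GB UAS
sas_default = total_vas - uas_default  # ...and 2 GB SAS

uas_4gt = 3 * GB                       # 4GT (/3GB) mode: 3 GB UAS
sas_4gt = total_vas - uas_4gt          # leaving 1 GB for the SAS
```

When Virtual Bytes for the MSTRSVR process approaches `uas_default` (or `uas_4gt` in 4GT mode), a memory depletion is imminent.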
To help determine what is causing memory depletion, answer these questions:
How is the project being used? Are there very complex reports? Reporting on very large data sets?
Is the Scheduler being used heavily? Are many reports running on a single schedule? Is there a schedule that runs during the peak usage time?
What is the prompted / non-prompted report mix?
How is the History List used? Are there many messages?
Is there high user concurrency?
Is there high job concurrency (either many jobs or large reports)?
Are the governor settings too high for Working set, XML cells, result set, and so on?
To answer these questions, you must be familiar with the system and how it is being used (Enterprise Manager reports will help you with this). But perhaps most useful is to know what the system was doing when the memory depletion occurred. To answer this question, use:
Windows Performance Monitor (see Monitoring memory use with Performance Monitor, page 415). Use this to characterize memory use over time. Examine it in relation to job and user concurrency. Typically, log these counters:

Process / Virtual Bytes (MSTRSVR process)
Process / Private Bytes (MSTRSVR process)
Process / Thread Count (MSTRSVR process)
MicroStrategy Server Jobs / Executing Reports
MicroStrategy Server Users / Open Sessions

Are there spikes in memory? Are large reports or exports being executed? Is there an upward trend over time?
The DssErrors.log file, for details about what was happening when the memory depletion occurred (see Reading log files, page 567)

Your knowledge of the system and whether or not the top memory consumers are heavily used
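Once the counters are logged, the spike and trend questions can be answered by post-processing the samples. The sketch below uses hypothetical helper names and made-up readings, purely to illustrate the kind of check involved:

```python
def memory_spikes(samples, jump_ratio=1.5):
    """Flag sample indexes where usage jumps sharply versus the previous
    reading -- e.g. Virtual Bytes logged at a fixed interval."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] > 0 and samples[i] / samples[i - 1] >= jump_ratio]

def upward_trend(samples):
    """Crude trend check: compare the average of the last quarter of
    the samples against the average of the first quarter."""
    q = max(1, len(samples) // 4)
    return sum(samples[-q:]) / q > sum(samples[:q]) / q

usage = [800, 820, 1400, 1420, 1450, 2300]   # MB, made-up readings
```

A spike usually points at a specific large report or export; a steady upward trend suggests accumulation (caches, History List messages, sessions) rather than a single job.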
Intelligence Server memory footprint, page 579
Report cache memory, page 580
Working set, page 580
History List (Inbox) filled with messages, page 581
A single large report, page 581
Export to Excel from MicroStrategy Web or Web Universal, page 582
If memory depletion does not seem to fall into one of the known categories, page 583
What happens when Intelligence Server starts?, page 10
How much memory does Intelligence Server use when it starts up?, page 416
How does Intelligence Server use memory after it is running?, page 418
Many data warehouse connection threads (approximately 1 MB per thread)
The number of caches and their size
Number of projects
Project schema size
Number of schedules (with autoprompts)
Possible solutions:
Reduce the number of data warehouse connection threads or limit the amount of time they can exist (see Managing database connection threads, page 389)
Reduce the Maximum RAM size for report caches (see Maximum RAM usage, page 195, and below)
Split projects to multiple Intelligence Servers
Disable report caching. Is caching required? If most of your users do ad hoc reporting, or if the data warehouse is updated frequently, caching may not be helping the efficiency of your system's operation. Check the cache hit ratio using the Enterprise Manager report Cache Analysis. Heavy prompt usage decreases cache usage (disable caching of prompted reports).
Reduce the Maximum RAM size for report caches (see Maximum RAM usage, page 195)
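The cache hit ratio mentioned above is simply cache hits over total report executions; a quick sketch (hypothetical helper name, made-up numbers):

```python
def cache_hit_ratio(cache_hits, total_executions):
    """Fraction of report executions served from cache. A low ratio
    suggests report caching is adding little benefit for the project."""
    return cache_hits / total_executions if total_executions else 0.0

# e.g. 25 cache hits out of 100 executions
ratio = cache_hit_ratio(25, 100)
```

If the ratio stays low even after tuning, the memory reserved for caches may be better spent elsewhere.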
Working set
This feature's memory use typically correlates with the number of open user sessions and the size of reports the users run. Possible solutions:
Govern user resources (see Governing user resources, page 379)
Reduce the working set size
Reduce the size of reports
Limit the user sessions either in number or by time (Governing active users, page 374)
Decrease the session idle timeouts so that the user's Inbox can be unloaded (see Governing active users, page 374)

Limit the Maximum number of messages per user Inbox setting (see Governing user resources, page 379)
A large amount of data returned from the data warehouse
A large amount of data retrieved for element prompts
Multiple page-by fields and subtotals being used in the report
Use of custom groups
High Analytical Engine complexity
A large amount of XML returned to the MicroStrategy Web product
Large reports in a document
Possible solutions: For details on all of the solutions below, see Results processing, page 400 (unless otherwise noted).
Redefine the report or split the report into multiple smaller reports
Restrict object prompt options. Do not allow users to create reports with unrestricted object prompts. These prompts allow the user to place many objects on a report. For example, a report designer might allow the user to select from a list of many metrics and attributes, with no restriction on the number of objects to place on the report. This makes it easy for a user to create such a large report that, when executed, it results in a large data set or a complex set of multiple metrics that use a great deal of memory. (To limit the number of prompt answers: When creating a prompt using the Prompt Generation Wizard, select the Choose default prompt answers check box, then click Next. You can then set a minimum and maximum number of prompt answers allowed.) Another way to restrict prompt options is for report designers to be more deliberate about the list of prompt options they give users.
Restrict element prompts. For example, do not allow an element prompt to bring back a list of thousands of elements to the user (see Limiting the number of elements displayed and cached at a time, page 205). Just the process of retrieving all these elements from the data warehouse can cause a memory depletion, especially if several such prompts are executing concurrently.
Restrict the report's maximum size so users cannot create large reports (see Governing results delivery, page 405)
Reduce the Maximum number of XML cells setting
Reduce the Maximum number of results rows setting
This chunk size equals the Maximum number of XML cells setting. For more information, see What happens when I export a report from Web?, page 43. Possible solutions:
Have your users export using the Plain text, Excel with plain text, or CSV file format settings, which use less memory than the others

Reduce the Maximum number of XML cells setting (see Governing results delivery, page 405)
If memory depletion does not seem to fall into one of the known categories
Use tools: DssErrors.log and Performance Monitor
Look for jumps in memory usage
Determine whether the depletions are regular or sporadic
Based on your understanding of system usage, try to find the actions that lead to significant memory usage
Governing Intelligence Server memory use, page 419
Memory Contract Manager, page 583
Troubleshooting authentication
This section gives you a list of things to check according to the type of authentication you are using. Refer to the appropriate authentication section below.
Windows authentication in MicroStrategy Web
Failure to log in to a server (three-tier) project source
LDAP authentication
In cases where you have configured a project source to use Windows authentication, when logging in to MicroStrategy Web, your users may see the error message: You cannot login as an NT User. Please ask the MicroStrategy Server Administrator to link this NT User account to a MicroStrategy Server User. This may happen even if the user's Windows account is properly linked to the MicroStrategy user account. The problem may be that several settings in Internet Information Services (IIS) Service Manager are incorrect. To fix this:
1 On the MicroStrategy Web server machine(s), access the IIS Internet Service Manager. The Microsoft Management Console dialog box opens.

2 Right-click the MicroStrategy virtual folder and select Properties.

3 On the Directory Security tab, under Anonymous Access and Authentication Control, click Edit. The Authentication Methods dialog box opens.
Clear the Allow anonymous access check box.
Select the Windows Challenge/Response check box.
4 Click OK, then click OK again.

5 Restart the machine for the changes to take effect.
This may happen because the server definition has not been initialized correctly. To fix this, re-create the server definition as follows:
1 Launch the MicroStrategy Control Center on MicroStrategy Intelligence Server Universal.

2 From the Applications menu, select Configuration Wizard.

3 In the Configuration Wizard, select MicroStrategy Intelligence Server and click Next. Proceed through the Configuration Wizard to create a new server definition.

4 Restart Intelligence Server to load the new server definition created above.

5 Connect to Intelligence Server from Desktop via a three-tier project source.
LDAP authentication
LDAP authentication problems within Intelligence Server usually fall into one of these categories:
Authentication issues, including both clear text and Secure Socket Layer (SSL) authentication modes

Functionality problems/questions about importing users or groups, and synchronizing LDAP users within the MicroStrategy metadata
These are discussed below. For flowcharts, more details, and answers to frequently asked questions, see MicroStrategy Tech Note TN5300-722-0369.
Missing components: If you receive an error message stating that LDAP components could not be found, it may mean the DLL files are not located in the appropriate directory. See Implementing LDAP authentication, page 69 for details on choosing the correct DLL files based on your SDK and installing them in the correct location. If you move your DLL files, be sure to restart Intelligence Server.

Invalid login/password: If you receive an Incorrect login/password error message, check your login name and password carefully. Also check that you have the correct DN search root and the correct user search filter syntax in the Intelligence Server Configuration Editor. See the online help for steps to access these fields and for syntax examples.
Cannot contact LDAP Server: If you receive a Can't connect to the LDAP Server error message, try to connect using the LDAP Server IP address. You should also check that the LDAP machine is running. Another possibility is that the SSL certificate files are not valid. For additional SSL troubleshooting, see the next section.
There are two modes of authentication against LDAP: Clear Text and SSL. Depending on which you are using, answering the following questions may help you reach a solution:
Does the certificate reside on the MicroStrategy Intelligence Server machine within the correct system path? (For information on how to import the certificate onto the MicroStrategy Intelligence Server machine, see MicroStrategy Knowledge Base Tech Note TN5300-072-0203.)

Do the LDAP server side logs show success messages? The LDAP administrator can access these logs.
The authentication user can be anyone who has search privileges in the LDAP Server, and is generally the LDAP administrator. The authentication user DN is specified in the MicroStrategy Intelligence Server Configuration Editor, in the LDAP: Configuration category, in the User distinguished name (DN) box. The user's DN is specified on the User Editor's Authentication tab, in the User distinguished name (DN) box.

If no explicit link is specified, the LDAP user is imported as a new MicroStrategy user under the LDAP Users group if the Import Users check box is selected. The user can then be treated as any MicroStrategy user and assigned privileges. After the import, the user object in the metadata for the MicroStrategy user also contains a link to the LDAP user.

Intelligence Server also allows LDAP groups to be imported. With this option selected, when an LDAP user logs in, all the groups to which the user belongs are also imported under the LDAP Users group (similar to the imported user). You cannot link MicroStrategy system groups (such as the Everyone group) to an LDAP group.

The hierarchical visual relationship between users and their user group is not maintained within the LDAP Users folder because it is maintained within the LDAP server directory. Although the visual link does not appear, the actual link between the user and his or her group does exist and is maintained.

The Synchronize at Login options for both users and groups cause Intelligence Server to check, at the time of the next login, whether:
MicroStrategy user information in the metadata and the LDAP user information in the LDAP Server are synchronized

MicroStrategy group information in the metadata and LDAP group information in the LDAP Server are synchronized
The names and links between the two may or may not be synchronized depending on whether the synchronize option is selected in combination with whether users and groups are to be imported. For specific information about this and to see a flowchart, see MicroStrategy Tech Note TN5300-722-0369. When the user is logged in as a temporary LDAP user/group:
There is no link persisted in the metadata, and the user has the privileges of a Guest user as long as the user is logged in. The user has the privileges of the MicroStrategy Public/Guest group.

The non-imported user does not have an Inbox, because the user is not physically present in the metadata.

The non-imported user cannot create objects and cannot schedule reports.
How can I assign security filters, security roles (privileges), or access control (permissions) to individual LDAP users?
Security filters, security roles, and access control may be assigned to users after they are imported into the MicroStrategy metadata, but this information may not be assigned dynamically from information in the LDAP repository.
To allow users to dynamically inherit this information, you should assign these permissions at the group level in the MicroStrategy metadata. Group membership information is dynamically determined each time an LDAP user logs into the system, based on the group they become part of.
May two different users have different LDAP links, but the same user name?
No. The MicroStrategy metadata may not contain two users with the same login name or user name. If you attempt to create a user with the same user login or user name, the import fails. Each user object in the MicroStrategy metadata must have a unique user login and user name.
What happens if there are two users with similar descriptions in the LDAP directory?
If the DN descriptor that specifies a particular user is not sufficient for Intelligence Server to identify him/her, then the user will fail to log in. You should enhance the User search filter and Group search filter in the LDAP configuration category to help identify the user.
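Search filters follow the standard LDAP filter syntax (RFC 4515). The sketch below is a minimal illustration of building a narrower user search filter; the attribute names (`cn`, `objectClass=person`) are placeholders for whatever your directory schema actually uses:

```python
def escape_ldap(value):
    """Escape the characters that have special meaning in an LDAP
    search filter (per RFC 4515), so a login value cannot change the
    filter's structure."""
    specials = {"\\": r"\5c", "*": r"\2a", "(": r"\28", ")": r"\29", "\0": r"\00"}
    return "".join(specials.get(ch, ch) for ch in value)

def user_search_filter(login, login_attr="cn"):
    # Restrict the match to person entries with the given login; the
    # objectClass and attribute name depend on your directory schema.
    return f"(&(objectClass=person)({login_attr}={escape_ldap(login)}))"
```

Tightening the filter this way helps Intelligence Server resolve exactly one entry when two users have similar descriptions.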
What happens if I import a User Group along with all its members in the LDAP directory into MicroStrategy metadata and then assign a connection mapping to the imported group?
The connection mapping of the imported user group to which the user belongs will not readily apply to the user. For this to work, you will need to manually assign the user as a member of the group after she or he has been imported.
You can verify that the object no longer exists in the project by disconnecting and reconnecting to the project. The following table lists all the object types and object descriptions that occur in the MicroStrategy metadata. You can refer to the type of the missing object from the table and restrict your search only to that particular object. This way you do not have to search through all the objects in a project.
Object Type  Object Classification      Object Description
-1           DssTypeUnknown             The type of object is not specified
0            DssTypeReserved            None
1            DssTypeFilter              Filter
2            DssTypeTemplate            Template
3            DssTypeReportDefinition    Report
4            DssTypeMetric              Metric
5            Unused                     None
6            DssTypeAutostyles          Autostyle
7            DssTypeAggMetric           Base formula
8            DssTypeFolder              Folder
9            Unused                     None
10           DssTypePrompt              Prompt
11           DssTypeFunction            Function
12           DssTypeAttribute           Attribute
13           DssTypeFact                Fact
14           DssTypeDimension           Hierarchy
15           DssTypeTable               Logical table
Object Type  Object Classification      Object Description
16           Unused                     None
17           DssTypeFactGroup           Fact group
18           DssTypeShortcut            Shortcut (a reference to another object)
19           DssTypeResolution          Saved prompt answer
20           Unused                     None
21           DssTypeAttributeForm       Attribute form
22           DssTypeSchema              Schema (the collection of objects that define the data warehouse structure)
23           DssTypeFindObject          Search definition (a simple search)

Note: This object type is deprecated. Search objects are object type 39 (DssTypeSearch).

24           DssTypeCatalog             Catalog (a list of relevant tables in a database)
25           DssTypeCatalogDefn         Catalog definition (a description of how a catalog is constructed)
26           DssTypeColumn              Column (a property needed to define a column of a database table)
27           DssTypePropertyGroup       Property group (an internal object used to cache lists of property sets)
28           DssTypePropertySet         Property set (an internal object)
29           DssTypeDBRole              Database Instance
30           DssTypeDBLogin             Database Login
31           DssTypeDBConnection        Database Connection
32           DssTypeProject             Project
33           DssTypeServerDef           Server definition (a description of a configuration of an Intelligence Server)
34           DssTypeUser                MicroStrategy User or Group
35           Unused                     None
36           DssTypeConfiguration       Intelligence Server configuration (a top-level object representing a MicroStrategy installation)
37           DssTypeRequest             Scheduled request
38           Unused                     None
39           DssTypeSearch              Search object
40           DssTypeSearchFolder        Search folder (a folder-like object used to store the result of a search object)
Object Type  Object Classification              Object Description
41           Unused                             None
42           DssTypeFunctionPackageDefinition   Function Definition
43           DssTypeRole                        Transformation
44           DssTypeSecurityRole                Security Role
45           DssTypeInBox                       History folder, or inbox (a folder-like object used to store History messages)
46           DssTypeInBoxMsg                    History message (an object that describes the status of a report execution request)
47           DssTypeConsolidation               Consolidation
48           DssTypeConsolidationElement        Consolidation element
49           DssTypeScheduleEvent               Schedule event (an event that can trigger a scheduled object)
50           DssTypeScheduleObject              Scheduled object (an object that can be triggered)
51           DssTypeScheduleTrigger             Scheduled trigger (the binding between schedule event and object)
52           DssTypeLink                        Link (the holder of a property that spans objects)
53           DssTypeDBTable                     Physical database table
54           DssTypeTableSource                 Same suffix and prefix tables (a collection of physical database tables with the same suffix and prefix)
55           DssTypeDocumentDefinition          Report Services Document
56           DssTypeDrillMap                    Drill map
57           DssTypeDBMS                        DBMS definition (an object that holds information about a physical database)
58           DssTypeMDSecurityFilter            Security filter
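A fragment of the table can be expressed as a lookup, so that a type number seen in a log message (for example, Type=3 in the report-tracing samples earlier) resolves to a readable name. The dictionary below covers only a few rows of the table; extend it as needed:

```python
# Partial map of MicroStrategy metadata object types (see table above).
OBJECT_TYPES = {
    1: "DssTypeFilter",
    2: "DssTypeTemplate",
    3: "DssTypeReportDefinition",
    4: "DssTypeMetric",
    8: "DssTypeFolder",
    10: "DssTypePrompt",
    12: "DssTypeAttribute",
    13: "DssTypeFact",
    34: "DssTypeUser",
    39: "DssTypeSearch",
    55: "DssTypeDocumentDefinition",
}

def object_type_name(type_id):
    """Resolve a numeric object type to its classification name,
    falling back to DssTypeUnknown for unmapped values."""
    return OBJECT_TYPES.get(type_id, "DssTypeUnknown")
```

Restricting a metadata search to one of these types, as the text suggests, is far faster than scanning every object in a project.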
Incorrect manual editing of values in the MicroStrategy metadata may cause serious, project-wide problems that can make the project unusable. Because these are user-initiated changes, they are not covered by any MicroStrategy warranty. Users are strongly encouraged to back up their metadata prior to any alteration.
Scan MD
Scan MD is a utility that detects and repairs logical inconsistencies in the MicroStrategy metadata without requiring code-level changes to the core platform. The tool searches for logical errors within the platform object structure and executes tests to detect inconsistencies in the metadata. These tests are based upon modules, which are activated by keys. Each module can contain a number of tests, as well as fixes for any problems that the tests find. The product ships with a default module-key pair that can run a standard set of tests to detect inconsistencies in your project. This default key is displayed automatically in the Process options page of the wizard. In some cases, Technical Support will provide you with a module-key pair to fix other issues that are not included in the standard tests. These keys and modules are provided to address a customer-specific issue and are not required for every customer. In such cases, it is always best to work with a support representative to understand the nature of your issue. The support representative will determine whether there is a Scan MD module that will address your problem. For more information on contacting Technical Support, see Technical Support, page xxix.
Troubleshooting performance
Project performance
You may notice a project that takes longer to load than others, or you may see the following error message in the server log file:

[DSS Engine] [Error]DSSSQLEngine: WARNING: Object cache MaxCacheCount setting (200) is too small relative to estimated schema size (461).

Loading a project involves retrieving a significant amount of data and can be time-consuming depending on the project's size, your hardware configuration, and your network. If a project takes a long time to load or you see the error message above, consider the following:
Load at startup: During installation of MicroStrategy, did you select to have the project Load at startup? This option generally speeds subsequent requests for the project because much of the information necessary to load the project is cached on the server.
You can check whether your project is set to load at startup using the Intelligence Server Configuration Editor. (In Desktop or Control Center, right-click the project source and select Configure MicroStrategy Intelligence Server. In the Projects category, select General.)
Maximum RAM usage for object cache: Depending on the size of your schema objects, you may need to raise the Maximum RAM usage (KBytes) setting. (To locate this setting: In Desktop, right-click on the project and select Project Configuration. From the Caching category, select Objects.)
The error message shown above contains an estimated schema size. Multiply this number by 10 and enter the result in the Maximum RAM usage setting. This setting may help optimize your project. Raising the number for Maximum RAM usage may cause high memory use, which may cause your machine to experience problems. Be sure you understand the full ramifications of all settings before you make significant changes to them.
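The multiply-by-10 guideline above is simple arithmetic, but it can be automated when reviewing log files. The following sketch is illustrative only: the function name is invented, and the message format is assumed to match the example warning shown in this section.

```python
import re

def recommended_cache_kb(warning_line):
    """Extract the estimated schema size from the warning message and
    apply the multiply-by-10 guideline described above. The message
    format is an assumption based on the example in this section; the
    result is in KBytes, per the name of the setting."""
    match = re.search(r"estimated schema size \((\d+)\)", warning_line)
    if match is None:
        return None  # the line does not carry a schema size estimate
    return int(match.group(1)) * 10

warning = ("[DSS Engine] [Error]DSSSQLEngine: WARNING: Object cache "
           "MaxCacheCount setting (200) is too small relative to "
           "estimated schema size (461).")
print(recommended_cache_kb(warning))  # 4610
```

For the example message, the suggested Maximum RAM usage value would be 4610 KBytes.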
Turn off diagnostic tracing: If you have tracing turned on, turn it off to ensure the project is loaded without logging any unnecessary information that can slow down performance. (In Desktop, from the Tools menu, select Diagnostics. Click Help for details on the various tracing options.)
If you change this setting, reports may still continue to be returned and displayed showing more than the maximum number of report result rows allowed. This is because this setting is designed to apply only to reports created after this setting has been changed. When you change this setting, no existing reports are affected. You can limit the number of report result rows on existing (and new) reports individually, using the Result Set Row Limit VLDB property. The Result Set Row Limit VLDB property can be specified for any report. For details on this VLDB property, see Results Set Row Limit, page 647. For information on accessing the VLDB Properties Editor, see Accessing and working with VLDB properties, page 620.
1 In Desktop, right-click the project and select Project Configuration.
2 In the Project Configuration Editor, expand the Database instances category and select General.
3 Click VLDB Properties. The VLDB Properties dialog box opens.
4 Expand Governing and select the Maximum SQL/MDX Size VLDB setting.
5 Clear the Use default inherited value - (Default Settings) check box.
6 Increase the Maximum SQL/MDX Size value as required. You can enter any number between 1 and 999999999. If you enter 0 or -1, the Maximum SQL/MDX Size is set to the default value of 64000. This default size may differ depending on the database instance that you select.
You should enter a value that a certified ODBC driver can handle; a large value can cause the report to fail in the ODBC driver. This depends on the database type you are using. If increasing the value of this VLDB property does not resolve the issue, try simplifying the report. You can simplify a report by removing attributes, metrics, and filters. Importing large sets of elements for filters can often cause large SQL/MDX size. For more information, see Maximum SQL/MDX Size, page 646 in the SQL Generation and Data Processing: VLDB Properties appendix of this guide.
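The validation rule from step 6 above can be restated in a short sketch. This is not MicroStrategy code; the function name is invented, and the fixed 64000 fallback is only the common case, since the text notes that the default can differ by database instance.

```python
def effective_sql_mdx_size(value, database_default=64000):
    """Restate the rule from step 6: 0 or -1 fall back to the default
    (64000 in the common case, though the default can differ by database
    instance); otherwise the value must lie between 1 and 999999999.
    Illustrative helper only."""
    if value in (0, -1):
        return database_default
    if not 1 <= value <= 999999999:
        raise ValueError("Maximum SQL/MDX Size must be between 1 and 999999999")
    return value

print(effective_sql_mdx_size(0))       # 64000
print(effective_sql_mdx_size(128000))  # 128000
```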
This error message can result from an incorrect setting in the database instance. If the database instance uses a Microsoft Excel file as a data source and the database instance type is set to Generic DBMS, the generated syntax changes, and this change produces the error message. To avoid this, change the Database connection type of the database instance to Microsoft Excel 2000/2003 as follows:
1 From the MicroStrategy Desktop Folder List, expand Administration, and then select Database Instance Manager.
2 Right-click the name of the database instance you wish to modify and select Edit. The Database Instance dialog box opens.
3 On the General tab, set the Database connection type to Microsoft Excel 2000/2003.
4 Click OK.
For the change to take effect, you must restart the MicroStrategy Intelligence Server that uses this database instance.
versions of MicroStrategy Web prior to 8.1.0. To view the document, you must upgrade your version of MicroStrategy Web to 8.1.0 or later.
2 From the Service drop-down list, select MicroStrategy Intelligence Server.
3 Click Options. The Server Options dialog box opens.
4 In the Password and Confirm Password fields, enter the correct password.
5 Click OK. The Server Options dialog box closes.
6 To start the Intelligence Server, click Start.
Unable to start Intelligence Server 8.0.2 or later: When starting the Intelligence Server using the MicroStrategy Service Manager, you might get the following error messages: Failed to start service Error code: -1 Error Message: Intelligence Server cannot start because it has not been activated with MicroStrategy. Please activate your installation via License Manager or by visiting the activation web site at https://licensing.microstrategy.com.
Unable to start Intelligence Server 8.0.1 or earlier: The method used to start the Intelligence Server and the corresponding error message are listed below:

Method used: MicroStrategy Service Manager
Error message: Failed to start service Error code: -1 Error Message:
Note: The following error message is recorded in the Event Viewer System Log: The MicroStrategy Intelligence Server service terminated with service-specific error 4294967295

Method used: Control Panel -> Administrative Tools -> Services
Error message: Windows could not start the MicroStrategy Intelligence Server on Local Computer. For more information, review the System Event log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code -1.
In all these cases, the error occurs when Intelligence Server is not activated within the 30-day calendar grace period. To start the Intelligence Server, activate the license for the MicroStrategy Intelligence Server installation. For more information on activating the Intelligence Server installation, see the MicroStrategy Installation and Configuration Guide.

A related error message when attempting to start Intelligence Server is:

2147205286: Fail to authenticate the licensed feature

In this case, the system date on the Intelligence Server machine is incorrect and is beyond the expiration date of the installation key. To start Intelligence Server, correct the date on the Intelligence Server machine and then restart the machine.
Statistics logging
Each project in a MicroStrategy system can be configured to log usage statistics to a database. For detailed information about statistics logging, see Analyzing statistics with Enterprise Manager, page 283. If statistics do not appear to be logged for your project, first verify that the Intelligence Servers to be monitored are correctly logging information in the statistics tables, and that these tables are correctly located within the Enterprise Manager warehouse. When the statistics tables are created using the MicroStrategy Configuration Wizard, they must be created in the database that will be used as the Enterprise Manager warehouse.

The following Structured Query Language (SQL) scripts can confirm whether statistics are being recorded correctly. They should be executed using a native layer query tool (for example, SQL*Plus for Oracle, Query Analyzer for SQL Server).
Check statistics logged by user sessions:
SELECT * FROM IS_SESSION_STATS;

Check report-related statistics:
SELECT * FROM IS_REPORT_STATS;
SELECT * FROM IS_REP_STEP_STATS;
SELECT * FROM IS_REP_SQL_STATS;
SELECT * FROM IS_REP_SEC_STATS;

Check schedule-related statistics:
SELECT * FROM IS_SCHEDULE_STATS;

Check cache-related statistics:
SELECT * FROM IS_CACHE_HIT_STATS;

Check document-related statistics:
SELECT * FROM IS_DOCUMENT_STATS;
SELECT * FROM IS_DOC_STEP_STATS;
For a detailed list of the statistics tables and columns, see Enterprise Manager statistics data dictionary, page 875. If data is not being logged to statistics, or the information is not up-to-date, possible reasons include:
The statistics database is not selected in the Project Configuration dialog box. To check this, right-click a project that should be logging statistics and select Project Configuration. Then select the Statistics category and confirm that the correct database instance is selected.
The statistics database instance has not been configured correctly. To check this, expand the Administration section for the project source that should be logging statistics. Select Database Instance Manager. Right-click the statistics database instance and select Edit. Then verify the database connection type, data source name (DSN), and default database connection.
The statistics database DSN has not been configured correctly. To check this, open the Windows Open Database Connectivity (ODBC) Data Sources on the machines where the Intelligence Servers logging statistics are installed. Verify that the DSN used by the statistics database instance is defined correctly, using certified drivers to connect to the Enterprise Manager warehouse, which also hosts the statistics tables.
If the Intelligence Servers for the projects being monitored and the database server do not have synchronized clocks, some data may be incorrect in the statistics tables. For example, if statistics appear for "Deleted report" in Enterprise Manager reports, it may be because statistics are being logged for reports that, according to the warehouse's timestamp, should not exist. For additional information about the need to synchronize the data warehouse and the Intelligence Servers, see Synchronizing Intelligence Server time with the data warehouse time, page 271.
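To make the clock-skew symptom concrete, the hedged sketch below flags statistics rows whose server-side timestamp runs ahead of the warehouse clock. The row structure and field names are invented for illustration and do not reflect the actual statistics table layout.

```python
from datetime import datetime, timedelta

def flag_suspect_rows(rows, warehouse_now, tolerance=timedelta(minutes=5)):
    """Return statistics rows whose server-side timestamp is ahead of
    the warehouse clock by more than a small tolerance -- the kind of
    mismatch that can surface as 'Deleted report' anomalies. The row
    structure ('logged_at' key) is invented for this illustration."""
    return [row for row in rows
            if row["logged_at"] - warehouse_now > tolerance]

# Example: the Intelligence Server clock runs 30 minutes ahead
# of the warehouse clock.
rows = [
    {"report": "Sales Summary", "logged_at": datetime(2008, 1, 1, 12, 30)},
    {"report": "Inventory",     "logged_at": datetime(2008, 1, 1, 11, 0)},
]
suspect = flag_suspect_rows(rows, warehouse_now=datetime(2008, 1, 1, 12, 0))
print([row["report"] for row in suspect])  # ['Sales Summary']
```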
Data loading
Data loading is the process by which the data in the statistics table is migrated into the Enterprise Manager data warehouse, in a format that can be used by the Enterprise Manager reports. For a detailed discussion of data loading, see Data loading and Enterprise Manager maintenance tasks, page 285. Enterprise Manager logs all the information from the data load process, including all errors, in the MSTRMigration.log text file. By default this file is stored in: C:\Program Files\MicroStrategy\Administrator\Enterprise Manager\
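When sifting a large MSTRMigration.log for failures, a simple filter script can help. The sketch below is an assumption-laden illustration: the log's exact format is not documented here, so it matches any line containing the word "error". Inspect your own file and refine the filter as needed.

```python
from pathlib import Path

# Default location quoted in this section; adjust for your installation.
LOG_PATH = Path(r"C:\Program Files\MicroStrategy\Administrator"
                r"\Enterprise Manager\MSTRMigration.log")

def error_lines(log_text):
    """Return the lines that look like errors. Matching on the word
    'error' is an assumption, since the log format is not specified
    in this section."""
    return [line for line in log_text.splitlines()
            if "error" in line.lower()]

if LOG_PATH.exists():
    for line in error_lines(LOG_PATH.read_text(errors="replace")):
        print(line)
```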
If an error occurs during the execution of a SQL script, the SQL script that caused the error is recorded in the log file. The script can then be run using a native layer query tool against the Enterprise Manager warehouse. This can identify whether the error is due to an error in the SQL script, or to missing or incorrect statistics data. Below are some of the possible reasons why a data load might fail:
Data cannot be loaded from a project that is set to Request Idle, Execution Idle, or Full Idle mode. Before loading data from a project, make sure it is not in any of these idle modes. For instructions on how to set a project's mode, see the MicroStrategy Project Monitor online help. For an explanation of the different project modes, see Setting Project Modes, page 323.

If you have changed the password for the administrator that configured a project for Enterprise Manager, data cannot be loaded from that project until you reconfigure it in the Enterprise Manager console. In the 2. Configure step, remove the project source from the list of project sources being monitored, and then add it to the list again.

If you have deleted a project in MicroStrategy Desktop that is being monitored by Enterprise Manager, the data load process fails until you remove that project from the list of projects being monitored.
For additional possible causes of data load failure, see Tech Note TN6400-8X-0493 in the MicroStrategy Knowledge Base.
Readme

To access the readme:

Windows: From the Windows Start menu, point to Programs, then MicroStrategy, and select ReadMe.
UNIX/Linux: Double-click the readme.htm file located in the folder where you installed MicroStrategy.

The readme provides general information on the MicroStrategy release, as well as information on the different MicroStrategy products. Each product generally has sections of information pertaining to:
System requirements: MicroStrategy products require certain hardware, software, and other system requirements to function correctly. If your system does not have the necessary requirements for a MicroStrategy product, the component may encounter errors, loss of functionality, poor performance, and so on.

Compatibility and interoperability

Certified and supported configurations
Release notes
The release notes supplied for each MicroStrategy product generally has sections of information pertaining to:
Known issues: You can review any functionality that has been identified as having a known issue for the related version. A description of the issue is given, and in many cases workarounds are provided.

Resolved issues: You can review any issues that have been resolved for the related version.

Troubleshooting: You can review a list of troubleshooting tips for each MicroStrategy component. The troubleshooting tips may include a description of the symptom of the issue, the cause of the issue, and a resolution for the problem.
Specific troubleshooting documents
Release notes
User manuals and FAQs
White papers
Newsletters
The troubleshooting documents included in the Knowledge Base are created by MicroStrategy developers, engineers, and consultants. You can find helpful tips and solutions for various MicroStrategy error codes and issues. The Knowledge Base is available to all MicroStrategy employees, distributors, partners, VARs, and customers with licensed, active maintenance agreements. It is also available to all evaluating customers for the duration of the evaluation period. You can access the Knowledge Base by navigating to the MicroStrategy Support page at the following URL: https://support.microstrategy.com/home.asp.
Search Tips: Lists Knowledge Base Frequently Asked Questions (FAQ) as well as search tips that can help narrow your search criteria to return documents that fit your specific questions.

Multi-Word Searches: Lists example multi-word searches that can be modified and used to search the Knowledge Base. Information on keyword operators in English and other languages is also provided.

Words Ignored by Search: Lists numbers, symbols, and words that are ignored as keywords for a Knowledge Base search.
In addition to the search details and tips found in the resources listed above, the following list provides some additional best practices that can improve your Knowledge Base search:
If you are troubleshooting an error message or error code, you can copy the error message into your Knowledge Base search. You may have to enclose the error message in double quotes (" ") if it includes certain characters.
If a specific search for the error code, error message, or troubleshooting task does not return helpful results, consider simplifying your search and using the categories on the left-hand side of the Search Results page to locate a helpful document.

In addition to being able to choose the product line in the advanced search options, you can also include the product line as a keyword for your search. For example, you can include product line keywords such as 8.x, 8.1.2, and so on.
Customer Forum
The MicroStrategy Customer Forum is a message board where MicroStrategy customers can have open discussions on implementation experiences, troubleshooting steps, and any fixes or best practices for the various MicroStrategy products. You can post and respond to message threads. The threads can help answer questions by pooling together one or more experiences and solutions to an issue. The Customer Forum is not meant to replace Technical Support, and, while MicroStrategy employees monitor the system from time to time, they are not responsible for answering messages posted to the Customer Forum. You can access the Customer Forum at the following URL: https://forums.microstrategy.com. You are prompted to create a Customer Forum account if you have not already done so. If you have an active MicroStrategy account and are designated as a Support Liaison for your organization, you may already have access to post and view messages on the Customer Forum.
Products: Discussion thread topics cover various MicroStrategy products such as Desktop, Intelligence Server, Web, and so on.

Database: Discussion thread topics cover different databases that can be used with MicroStrategy such as Oracle, DB2, Informix, and so on.

Customization: Discussion thread topics cover customizations to ASP, Java/JSP, and XML/XSL.

Industries: Discussion thread topics center around different industry types such as Banking/Finance, Healthcare, Retail, and so on.

Miscellaneous: Discussion thread topics cover data modeling, statistical analysis, portals, Web Services, and other general topics.
The Customer Forum also provides a search utility with which you can search on keywords, author, date, and so on.
Technical Support
MicroStrategy Technical Support helps to answer questions and troubleshoot issues related to your MicroStrategy products. If the troubleshooting resources described above do not provide you with a viable solution to your problem, you can call Technical Support to help troubleshoot your products. For more information on Technical Support and how best to ensure a timely solution to your questions, see Technical Support, page xxix in the Preface of this guide.
A
SQL GENERATION AND DATA PROCESSING: VLDB PROPERTIES
Introduction
VLDB properties allow you to customize the SQL that MicroStrategy generates, and determine how data is processed by the Analytical Engine. You can configure properties such as SQL join types, SQL inserts, table creation, Cartesian join evaluation, check for null values, and so on. VLDB properties can provide support for unique configurations and optimize performance in special reporting and analysis scenarios. You can use the VLDB Properties Editor to alter the syntax or behavior of a SQL statement and take advantage of unique, database-specific optimizations. You can also alter how the Analytical Engine processes data in certain situations, such as subtotals with consolidations and sorting null values. Each VLDB property has two or more VLDB settings. VLDB settings are the different options available for a VLDB property. For example, the Metric Join Type VLDB property has two VLDB settings, Inner Join and Outer Join.
Complete database support: VLDB properties allow you to easily incorporate and take advantage of new database platforms and versions.

Optimization: You can take advantage of database-specific settings to further enhance the performance of queries.

Flexibility: VLDB properties are available at multiple levels so that the SQL generated for one report, for example, can be manipulated separately from the SQL generated for another, similar report. For a diagram, see Order of precedence, page 619.

Modifying any VLDB property should be performed with caution, and only after understanding the effects of the VLDB settings you want to apply. A given VLDB setting can support or optimize one system setup, but the same setting can cause performance issues or errors for other systems. Use this manual to learn about the VLDB properties before modifying any default settings.
Supporting your system configuration, page 618
Order of precedence, page 619
Accessing and working with VLDB properties, page 620
Viewing and changing advanced VLDB properties, page 626
Details for all VLDB properties, page 631
Default VLDB settings for specific databases, page 827
DBMS used. For example, when using a Microsoft Access 2000 database, the Join Type VLDB property is set to Join 89. This type of initialization ensures that different DBMS types can be supported. These initializations are also used as the default VLDB settings for the respective DBMS type. To review a detailed list of all the default VLDB settings for different DBMS types, see Default VLDB settings for specific databases, page 827. VLDB properties also help you configure and optimize your system. You can use MicroStrategy for different types of data analysis on a variety of data warehouse implementations. VLDB properties offer different configurations to support or optimize your reporting and analysis requirements in the best way. For example, you may find that enabling the Set Operator Optimization VLDB property provides a significant performance gain by utilizing set operators such as EXCEPT and INTERSECT in your SQL queries. On the other hand, this property must offer the option to be disabled, since not all DBMS types support these types of operators. VLDB properties offer you a choice in configuring your system.
Order of precedence
VLDB properties can be set at multiple levels, providing flexibility in the way you can configure your reporting environment. For example, you can choose to apply a setting to an entire database instance or only to a single report associated with that database instance.
The following diagram shows how VLDB properties that are set for one level take precedence over those set for another.
Report (highest priority)
Template
Metric
Database Instance
DBMS (lowest priority)
The arrows depict the overwrite authority of the levels, with the report level having the greatest authority. For example, if a VLDB property is set one way for a report and the same property is set differently for the database instance, the report setting takes precedence. Properties set at the report level override properties at every other level. Properties set at the template level override those set at the metric level, the database instance level, and the DBMS level, and so on. A limited number of properties can be applied at each level.
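The precedence walk described above can be sketched as a lookup from highest to lowest priority. The data structure here is illustrative, not MicroStrategy's actual storage format, and the function name is invented.

```python
# Levels from highest to lowest priority, per the diagram above.
PRECEDENCE = ["report", "template", "metric", "database_instance", "dbms"]

def resolve_vldb_property(prop, settings_by_level):
    """Return the value of 'prop' from the highest-priority level that
    sets it explicitly. 'settings_by_level' maps a level name to a dict
    of property name -> value; this shape is for illustration only."""
    for level in PRECEDENCE:
        values = settings_by_level.get(level, {})
        if prop in values:
            return values[prop]
    return None  # no level overrides the product default

# A report-level setting overrides the same property set at DBMS level.
settings = {
    "dbms":   {"Metric Join Type": "Inner Join"},
    "report": {"Metric Join Type": "Outer Join"},
}
print(resolve_vldb_property("Metric Join Type", settings))  # Outer Join
```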
Opening the VLDB Properties Editor, page 621
Creating a VLDB settings report, page 623
Viewing and changing VLDB properties, page 624
Viewing and changing advanced VLDB properties, page 626
Setting all VLDB properties to default, page 627
Upgrading the VLDB options for a particular database type, page 630
Modifying the VLDB properties for a warehouse database instance, page 631
Only a single property, called Unbalanced or Ragged Hierarchy, can be set at the hierarchy level. This property's purpose, and instructions for setting it, are described in the MicroStrategy Project Design Guide, in the chapter Using the OLAP Cube Catalog. VLDB properties exist at the filter level and the function level, but they are not accessible through the VLDB Properties Editor. All VLDB properties at the DBMS level are used for initialization and debugging only. You cannot modify a VLDB property at the DBMS level.
VLDB Settings list: Shows the list of folders into which the VLDB properties are grouped. Expand a folder to see the individual properties. The settings listed depend on the level at which the VLDB Properties Editor was accessed (see the table above). For example, if you access the VLDB Properties Editor from the project level, you only see Analytical Engine properties.

Options and Parameters box: Where you set or change the parameters that affect the SQL syntax.

SQL preview box: (Only appears for VLDB properties that directly impact the SQL statement.) Shows a sample SQL statement and how it changes when you edit a property.

When you change a property from its default, a check mark appears on the folder in which the property is located and on the property itself.
Display the physical setting names alongside the names that appear in the interface. The physical setting names can be useful when you are working with MicroStrategy Technical Support to troubleshoot the effect of a VLDB property.

Display descriptions of the values for each setting. For example, if the Distinguish Duplicated Rows property is set to False, its description is "7.x client behavior, duplicated report rows will not be distinguished."

Hide all settings that are currently set to default values. This can be useful if you want to see only those properties and their settings which have been changed from the default.
1 Open the VLDB Properties Editor to display the VLDB properties for the level at which you want to work. (For information on object levels, see Order of precedence, page 619.)
2 From the Tools menu, select Create VLDB Settings Report.
3 A report is generated that displays all VLDB properties available at the level from which you accessed the VLDB Properties Editor. It also displays all current settings for each VLDB property.
4 You can choose to have the report display or hide the information described above, by selecting the appropriate check boxes.
5 You can copy the content in the report using the Ctrl+C keys on your keyboard. Then paste the information into a text editor or word processing program (such as Microsoft Word) using the Ctrl+V keys.
A given VLDB setting can support or optimize one system setup, but the same setting can cause performance issues or errors for other systems. Use this manual to learn about the VLDB properties before modifying any default settings. Some VLDB properties are characterized as advanced properties: advanced properties are relevant only to certain projects and system configurations. To work with advanced VLDB properties, see Viewing and changing advanced VLDB properties, page 626.
To view and change VLDB properties
1 Open the VLDB Properties Editor to display the VLDB properties for the level at which you want to work. (For information on object levels, see Order of precedence, page 619.)
2 Modify the VLDB property you want to change. For use cases, examples, sample code, and other information on every VLDB property, see Details for all VLDB properties, page 631.
3 If necessary, you can ensure that a property is set to the default. At the bottom of the Options and Parameters area for that property (on the right), select the Use default inherited value check box. Next to this check box name, information appears about what level the setting is inheriting its default from.
4 Click Save and Close to save your changes and close the VLDB Properties Editor.
5 You must also save in the object or editor window through which you accessed the VLDB Properties Editor. For example, if you accessed the VLDB properties by opening the Metric Editor and then opening the VLDB Properties Editor, after you click Save and Close in the VLDB Properties Editor, you must also click Save and Close in the Metric Editor to save your changes to VLDB properties.
1 Open the VLDB Properties Editor to display the VLDB properties for the level at which you want to work. (For information on object levels, see Order of precedence, page 619.)
2 From the Tools menu, select Show Advanced Settings. All advanced properties display with the other properties.
3 Modify the VLDB property you want to change. For use cases, examples, sample code, and other information on every VLDB property, see Details for all VLDB properties, page 631.
4 If necessary, you can ensure that a property is set to the default. At the bottom of the Options and Parameters area for that property (on the right), select the Use default inherited value check box. Next to this check box name, information appears about what level the setting is inheriting its default from.
5 Click Save and Close to save your changes and close the VLDB Properties Editor.
6 You must also save in the object or editor window through which you accessed the VLDB Properties Editor. For example, if you accessed the VLDB properties by opening the Metric Editor and then opening the VLDB Properties Editor, after you click Save and Close in the VLDB Properties Editor, you must also click Save and Close in the Metric Editor to save your changes to VLDB properties.
1 Use either or both of the following methods to see your system's VLDB properties that are not set to default, so that you know which VLDB properties you will be affecting when you return properties to their default settings:
Generate a report listing VLDB properties that are not set to the default settings. For steps, see Creating a VLDB settings report, page 623, and select the check box named Do not show settings with Default values.

Display an individual VLDB property by viewing the VLDB property whose default/non-default status you are interested in. (For steps, see Viewing and changing VLDB properties, page 624.) At the bottom of the Options and Parameters area for that property (on the right), you can see whether the Use default inherited value check box is selected. Next to this check box name, information appears about what level the setting is inheriting its default from.

2 Open the VLDB Properties Editor to display the VLDB properties that you want to set to their original defaults. (For information on object levels, see Order of precedence, page 619.)
3 In the VLDB Properties Editor, you can identify any VLDB properties that have had their default settings changed, because they are identified with a check mark. The folder in which the property is stored has a check mark on it (as shown on the Joins folder in the example image below), and the property name itself has a check mark on it (as shown on the gear icon in front of the Cartesian Join Warning property name in the second image below).
4 From the Tools menu, select Set all values to default. See the warning above if you are unsure about whether to set properties to the default.

5 In the confirmation window that appears, click Yes. All VLDB properties that are displayed in the VLDB Properties Editor are returned to their default settings.
6 Click Save and Close to save your changes and close the VLDB Properties Editor.

7 You must also save in the object or editor window through which you accessed the VLDB Properties Editor. For example, if you accessed the VLDB properties by opening the Metric Editor and then opening the VLDB Properties Editor, after you click Save and Close in the VLDB Properties Editor, you must also click Save and Close in the Metric Editor to save your changes to VLDB properties.
It loads new database types.

It loads updated properties for existing database types that are still supported.

It keeps properties for existing database types that are no longer supported. If an existing database type does not have any updates, but the properties for it have been removed from the Database.PDS file, the process does not remove them from your metadata.
To upgrade database types, see the Desktop online help. To access the Help procedure, from the Desktop Folder List, expand Administration, then select Database Instance Manager.
Limiting report rows, SQL size, and SQL time-out: Governing, page 645
Retrieving data: Indexing, page 649
Relating column data with SQL: Joins, page 659
Modifying third-party cube sources in MicroStrategy: MDX, page 692
Calculating data: Metrics, page 700
Customizing SQL statements: Pre/Post Statements, page 720
Optimizing queries, page 743
Selecting and inserting data with SQL: Select/Insert, page 785
Creating and supporting tables with SQL: Tables, page 807
Additional details about each property, including examples where necessary, are provided in the sections following the table.
Property: Display NULL On Top
Description: Determines where NULL values appear when you sort data.
Possible Values: Display NULL values on bottom while sorting; Display NULL values on top while sorting
Default Value: Display NULL values on top while sorting

Property: Distinguish Duplicated Rows
Description: Determines how the Analytical Engine handles duplicate IDs and their associated metric values. In MicroStrategy 7.x and earlier, only one row for each ID is returned. In MicroStrategy 8.x and later, rows with the same ID are returned as separate rows for data analysis. However, MicroStrategy does not support IDs with multiple descriptions.
Possible Values: 7.x client behavior, duplicated report rows will not be distinguished; Recommended setting for all 8.x projects, distinguish the duplicated report rows

Property: Evaluation Ordering
Description: Determines the order in which the Analytical Engine resolves different types of calculations. This is an advanced property.
Possible Values: 6.x Evaluation Order; 7.x Evaluation Order
Property: NULL Checking for Analytical Engine
Description: Determines whether NULL is interpreted as the number 0 when the Analytical Engine calculates and sorts data. True means the Analytical Engine converts NULL to 0.
Possible Values: True; False
Default Value: True

Property: Subtotal and Aggregation Compatibility
Description: Determines whether subtotaling and aggregation are compatible with MicroStrategy version 7.1 or version 7.2. If this is set for 7.1, the Subtotal Dimensionality Aware setting is ignored.
Possible Values: 7.1 client sub-total behavior, 7.2 metric aggregations will not be dimensionally aware; Recommended setting for all 7.2 projects
Default Value: Recommended setting for all 7.2 projects
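The NULL Checking behavior can be sketched as follows. This is an illustrative Python sketch, not MicroStrategy code; the aggregate helper and its signature are invented for the example:

```python
def aggregate(values, null_checking=True):
    """Sum a metric column, mimicking the NULL Checking setting.

    null_checking=True mirrors the default behavior described above:
    NULL (None) is treated as 0, so a column of all NULLs sums to 0.
    With False, NULLs are skipped and an all-NULL column stays NULL.
    """
    if null_checking:
        return sum(0 if v is None else v for v in values)
    non_null = [v for v in values if v is not None]
    return sum(non_null) if non_null else None

print(aggregate([10, None, 5]))                      # 15
print(aggregate([None, None], null_checking=True))   # 0
print(aggregate([None, None], null_checking=False))  # None
```

The difference only shows up when every value in the group is NULL: converting to 0 yields a zero total, while skipping NULLs leaves the total NULL.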
Property: Subtotal Dimensionality Aware
Description: Determines how level metrics are totaled. (Level metrics were formerly called dimensional metrics.) If this is set to True, and a report contains a metric that is calculated at a higher level than the report level, the subtotal of the metric is calculated based on the metric's level. For example, a report at the quarter level containing a yearly sales metric shows the yearly sales as the subtotal instead of summing the rows on the report.
Possible Values: True; False
Default Value: True

Property: Subtotals over Consolidations Compatibility
Description: Determines how consolidation elements are totaled. In MicroStrategy version 7.2.x and earlier, totals include all related attribute elements, not just those in the consolidation. In MicroStrategy 7.5 and later, you can set the Analytical Engine to total values for only those elements that are part of the consolidation. This property must be set at the project level.
Possible Values: Evaluate subtotals over consolidation elements and their corresponding attribute elements (behavior for 7.2.x and earlier); Evaluate subtotals over consolidation elements only (behavior for 7.5 and later)
Default Value: Evaluate subtotals over consolidation elements and their corresponding attribute elements (behavior for 7.2.x and earlier)
Wherever NULL values occur in a report, they are displayed as user-defined strings. NULL values result from a variety of scenarios: they can come from data retrieved from the database, from cross-tabulation on a report, or from data aggregation on a report. You can determine what characters or strings are displayed for NULL values. To do this, access the Project Configuration Editor, select the Report Definition: NULL values category, then type the strings you want displayed in the appropriate fields.
Country    Year    Sales ($K)
USA        2003    10
USA        2003    20
USA        2004    25
Germany    2003    15
Germany    2003    30
The combination of Country and Year returns two pairs of duplicate rows for the year 2003. The Distinguish Duplicated Rows setting allows you to handle data of this type in the following ways:
In MicroStrategy 7.x and earlier versions, the duplicate rows are not distinguished. Therefore, when a duplicate row is returned the Analytical Engine returns only one of the rows. In this scenario, the data given in the previous table may be returned in a report as shown below.
Country    Year    Sales ($K)
USA        2003    10
USA        2004    25
Germany    2003    15
Since two duplicate rows are not returned for the report, the metric data for the missing rows cannot be analyzed.
2008 MicroStrategy, Inc. Details for all VLDB properties
In MicroStrategy 8.x, duplicate rows can be distinguished and included in reports. This functionality allows you to analyze all of the metric values for duplicate rows. For the example with Country, Year, and Sales (in thousands), the report returns the data as shown in the first table in this section, which displays all duplicate rows. MicroStrategy does not support IDs with multiple descriptions.
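The difference between the 7.x and 8.x behaviors can be sketched as follows. This is illustrative Python, not MicroStrategy code; collapse_duplicates is a hypothetical helper operating on the Country/Year/Sales rows from the tables above:

```python
# Rows as (Country, Year, Sales) tuples, matching the first table above.
rows = [
    ("USA", 2003, 10), ("USA", 2003, 20), ("USA", 2004, 25),
    ("Germany", 2003, 15), ("Germany", 2003, 30),
]

def collapse_duplicates(rows):
    """7.x-style behavior: keep only one row per (Country, Year) ID pair."""
    seen, out = set(), []
    for country, year, sales in rows:
        if (country, year) not in seen:
            seen.add((country, year))
            out.append((country, year, sales))
    return out

# 8.x-style behavior keeps all five rows for analysis;
# the 7.x-style collapse silently drops two of them.
print(collapse_duplicates(rows))
```

The collapsed output matches the second table above: the 20 and 30 values vanish, so their metric data cannot be analyzed.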
Evaluation Ordering
Evaluation Ordering is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties, page 626. An evaluation order is the order in which the MicroStrategy Analytical Engine performs different kinds of calculations during the data population stage. The Evaluation Ordering property determines the order in which calculations are resolved. Some result data can differ depending on the evaluation order.
In MicroStrategy 6.x and earlier versions, the evaluation order could not be changed. The evaluation order for non-subtotal values was compound metric, consolidation, and then metric limit. The evaluation order for subtotal values was consolidation, subtotal, and then compound metric. MicroStrategy 7.x provides a new evaluation order and the ability to alter the order. In MicroStrategy 7.x and later, the evaluation order for both non-subtotal and subtotal values is compound metric, consolidation, metric limit, and then subtotal. To change the evaluation order for
MicroStrategy 7.x and later, on the Report Editor menu bar, select Data and then Report Data Options. If the Evaluation Ordering VLDB setting is set to the 6.x evaluation order, then all custom ordering is ignored.
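Why the evaluation order matters can be sketched with hypothetical stages. This is illustrative Python only; the stage functions, the limit threshold, and the sample values are invented for the example, and consolidations are omitted for brevity:

```python
# Each stage transforms the report's metric values.
compound_metric = lambda rows: [m1 + m2 for m1, m2 in rows]  # derive compound metric
metric_limit = lambda vals: [v for v in vals if v > 10]      # keep rows passing a limit

def subtotal(vals):
    """Sum whatever rows remain at the point the subtotal is evaluated."""
    return sum(vals)

rows = [(2, 3), (5, 7), (8, 12)]  # (m1, m2) per row -> compound values 5, 12, 20

# 7.x order: compound metric -> metric limit -> subtotal.
print(subtotal(metric_limit(compound_metric(rows))))  # 32: limited rows only

# If the subtotal is taken before the metric limit is applied (closer to
# the 6.x ordering, where subtotals were resolved earlier), all rows count:
print(subtotal(compound_metric(rows)))                # 37
```

The same report data yields 32 or 37 depending solely on where the subtotal stage falls in the pipeline, which is why changing this property can change result values.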
Subtotal Dimensionality Aware

In 7.1, subtotaling is not level-aware, but dynamic aggregation is, because the SQL is re-executed when attributes are removed from the report. By default, in 7i and later, subtotaling and dynamic aggregation are made level-aware. With 7i, the SQL is no longer re-executed if an attribute is removed from the report grid. If you are running in an environment with mixed client versions, then your 7.1 clients also get level-aware subtotals. If you do not want your 7.1 or 7i clients to see the new level-aware subtotals, you can change this setting to FALSE. However, this also disables the level-aware dynamic aggregation for 7i users. MicroStrategy recommends that you leave the setting as TRUE.
Examples
7.1 Report Example
Notice that the Dollar Sales by Quarter totals are simply a sum of the quarterly totals.
Quarter    Category       Sub-Category      Dollar Sales
Q1         Food           Cheese            100
Q1         Food           Cookies           150
                          Category Total    250
Q1         Electronics                      200
Q1         Electronics                      425
                          Category Total    625
           Quarter Total                    875
Q2         Food           Cheese            75
Q2         Food           Cookies           125
                          Category Total    200
Q2         Electronics                      225
Q2         Electronics                      375
                          Category Total    600
           Quarter Total                    800
           Grand Total                      1675
If Sub-Category is removed from the report above, the SQL is re-executed and the results are as follows. Because the SQL is re-executed, the aggregated Dollar Sales values and the Quarter totals remain correct.
Quarter    Category         Dollar Sales
Q1         Food             250
Q1         Electronics      625
           Quarter Total    875
Q2         Food             200
Q2         Electronics      600
           Quarter Total    800
           Grand Total      1675
7i Report Example

Quarter    Category       Sub-Category      Dollar Sales
Q1         Food           Cheese            100
Q1         Food           Cookies           150
                          Category Total    250
Q1         Electronics                      200
Q1         Electronics                      425
                          Category Total    625
           Quarter Total                    875
Q2         Food           Cheese            75
Q2         Food           Cookies           125
                          Category Total    200
Q2         Electronics                      225
Q2         Electronics                      375
                          Category Total    600
           Quarter Total                    800
           Grand Total                      1675
When the Sub-Category attribute is removed, you get the results shown below. Notice that the subtotals are still level-aware. This is not because the SQL is re-executed, but because the 7i Analytical Engine is smart enough to calculate the numbers correctly.
Quarter    Category         Dollar Sales
Q1         Food             250
Q1         Electronics      625
           Quarter Total    875
Q2         Food             200
Q2         Electronics      600
           Quarter Total    800
           Grand Total      1675
If you have turned off Subtotal Dimensionality Aware and you want dynamic aggregation to have the same values as the subtotals, set this property to 7.1 client sub-total behavior,
7.2 metric aggregations will not be dimensionality-aware. This changes the above report to the report shown below, after dynamic aggregation.
Quarter    Category         Dollar Sales
Q1         Food             250
Q1         Electronics      625
           Quarter Total    875
Q2         Food             200
Q2         Electronics      600
           Quarter Total    800
           Grand Total      1675
The default setting is Recommended setting for all 7.2 projects. However, to see the subtotals the way they appeared in pre-7i versions, select 7.1 client sub-total behavior, 7.2 metric aggregations will not be dimensionality-aware.
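The level-aware dynamic aggregation described above can be sketched as follows. This is illustrative Python, not MicroStrategy code; it re-aggregates the Food rows from the example tables in memory, the way the 7i engine avoids re-executing SQL:

```python
from collections import defaultdict

# Detail rows from the report: (Quarter, Category, Sub-Category, Dollar Sales).
# Only the Food rows are shown, since their sub-category names appear above.
detail = [
    ("Q1", "Food", "Cheese", 100), ("Q1", "Food", "Cookies", 150),
    ("Q2", "Food", "Cheese", 75),  ("Q2", "Food", "Cookies", 125),
]

def dynamic_aggregate(rows):
    """Re-aggregate in memory after Sub-Category is removed from the grid,
    instead of re-executing the SQL against the warehouse."""
    totals = defaultdict(int)
    for quarter, category, _, sales in rows:
        totals[(quarter, category)] += sales
    return dict(totals)

print(dynamic_aggregate(detail))
# {('Q1', 'Food'): 250, ('Q2', 'Food'): 200}
```

The in-memory totals (Q1 Food 250, Q2 Food 200) match the values the re-executed SQL produces in the 7.1 example, which is the point of the level-aware engine: same numbers, no extra warehouse pass.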
Example
Quarterly Dollar Sales metric is defined as Sum(Revenue) with Dimensionality = Quarter.
Yearly Dollar Sales metric is defined as Sum(Revenue) with Dimensionality = Year.
Year    Quarter
2002    1
2002    2
2002    3
2002    4
        Grand Total
Subtotals over Consolidations Compatibility

The Subtotals over Consolidations Compatibility property allows you to determine how the Analytical Engine calculates consolidations.
Evaluate subtotals over consolidation elements and their corresponding attribute elements (behavior for 7.2.x and earlier): In MicroStrategy version 7.2.x and earlier, if a calculation includes a consolidation, the Analytical Engine calculates subtotals across the consolidation elements as well as across all attribute elements that comprise the consolidation element expressions.

Evaluate subtotals over consolidation elements only (behavior for 7.5 and later): In MicroStrategy version 7.5 and later, if a calculation includes a consolidation, this setting allows the Analytical Engine to calculate only those elements that are part of the consolidation.
When you enable this setting, be aware of the following requirements and options:

This VLDB property must be set at the project level for the calculation to be performed correctly. The setting takes effect when the project is initialized, so after this setting is changed you must reload the project or restart Intelligence Server.

After you enable this setting, you must enable subtotals at either the consolidation level or the report level. If you enable subtotals at the consolidation level, subtotals are available for all reports in which the consolidation is used (Consolidation Editor -> Elements menu -> Subtotals -> Enabled). If you enable subtotals at the report level, subtotals for consolidations can be enabled on a report-by-report basis (Report Editor -> Report Data Options -> Subtotals -> Yes; if Default is selected, the Analytical Engine reverts to the Enabled/Disabled property as set on the consolidation object itself).

If the project is registered on an Intelligence Server version 7.5.x but is accessed by clients using Desktop version 7.2.x or earlier, leave this property setting on Evaluate subtotals over consolidation elements and their corresponding attribute elements. Otherwise, metric values may return as zeroes when Desktop 7.2.x users execute reports with consolidations, or when they pivot in such reports. Change this property from the default only when all Desktop clients have upgraded to MicroStrategy version 7.5.x.
Example
Three consolidations called Super Regions are created, defined as follows:
East: (({Cust Region=Northeast} + {Cust Region=Mid-Atlantic}) + {Cust Region=Southeast})
Central: ({Cust Region=Central} + {Cust Region=South})
West: ({Cust Region=Northwest} + {Cust Region=Southwest})
With the first setting selected, Evaluate subtotals over consolidation elements and their corresponding attribute elements, the report appears as follows:
The Total value is calculated for more elements than are displayed in the Super Regions column. The Analytical Engine is including the following elements in the calculation: East + (Northeast + Mid-Atlantic + Southeast) + Central + (Central + South) + West + (Northwest + Southwest).
With the second setting selected, Evaluate subtotals over consolidation elements only, and with subtotals enabled, the report appears as follows:
The Total value is now calculated for only the Super Regions consolidation elements. The Analytical Engine is including only the following elements in the calculation: East + Central + West.
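The two subtotal modes can be sketched as follows. This is illustrative Python, not MicroStrategy code, and the regional sales figures are invented for the example; only the Super Regions definitions come from the text above:

```python
# Hypothetical sales per Cust Region element.
region_sales = {"Northeast": 10, "Mid-Atlantic": 20, "Southeast": 30,
                "Central": 15, "South": 25, "Northwest": 5, "Southwest": 40}

# The Super Regions consolidation defined above.
super_regions = {"East": ["Northeast", "Mid-Atlantic", "Southeast"],
                 "Central": ["Central", "South"],
                 "West": ["Northwest", "Southwest"]}

# Value of each consolidation element.
element_values = {name: sum(region_sales[r] for r in members)
                  for name, members in super_regions.items()}

# 7.5+ behavior: total over the consolidation elements only.
total_elements_only = sum(element_values.values())

# 7.2.x behavior: the elements AND their underlying attribute elements are
# both totaled, so every region is effectively counted twice.
total_with_attributes = total_elements_only + sum(region_sales.values())

print(total_elements_only, total_with_attributes)
```

With these figures, the elements-only total is 145 while the 7.2.x-style total doubles to 290, which mirrors the inflated Total described for the first setting.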
Property: Intermediate Row Limit
Description: The maximum number of rows returned to the server for each intermediate pass. (0 = unlimited number of rows; -1 = use value from higher level.)

Property: Maximum SQL Size
Description: Maximum size of the SQL string accepted by the ODBC driver, in bytes.

Property: Result Set Row Limit
Description: The maximum number of rows returned to the server for the final result set. (0 = unlimited number of rows; -1 = use value from higher level.)

Property: SQL Time Out (Per Pass)
Description: Single SQL pass time-out in seconds. (0 = no time-out.)
Apart from the final pass, pure SELECT statements are usually executed if there are analytical functions or partition pre-queries to process. Since the partition pre-queries return only a handful of rows, the SELECT statements issued for analytical function processing determine the size of the row set in most cases. If the limit is exceeded, the report fails with an error message. When it is set to the default, the Intermediate Row Limit takes the value of the Result Set Row Limit VLDB property at the report (highest) level. The table below explains the possible values and their behavior:
Value     Behavior
0         No limit on the number of rows returned
Number    The number of rows returned is limited to the specified number
Value                          Behavior
0                              No limit on SQL pass size
Number                         The maximum SQL pass size (in bytes) is limited to the specified number
Use default inherited value    By selecting this check box, the value is set to the default for the database type used for the related database instance. The default size varies depending on the database type.
Increasing the maximum to a large value can cause the report to fail in the ODBC driver. This is dependent on the database type you are using.
If the report result set exceeds the limit specified in the Result Set Row Limit, the report execution is terminated. This property overrides the Number of report result rows setting in the Project Configuration Editor: Governing category. (For details on this project setting, see the Project Configuration Editor: Governing topic in the online help.)
When the report contains a custom group, this property is applied to each element in the group. Therefore, the final result set displayed could be larger than the predefined setting. For example, if you set the Result Set Row Limit to 1,000, it means you want only 1,000 rows to be returned. Now apply this setting to each element in the custom group. If the group has three elements and each uses the maximum specified in the setting (1,000), the final report returns 3,000 rows. The table below explains the possible values and their behavior:
Value     Behavior
0         No limit on the number of result rows
Number    The number of rows returned is limited to the specified number
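The per-element application of the limit to custom groups, described above, can be sketched as follows. This is illustrative Python, not MicroStrategy code:

```python
def apply_row_limit(rows, limit):
    """0 means no limit; otherwise truncate to `limit` rows."""
    return rows if limit == 0 else rows[:limit]

# With a custom group, the limit applies to each element separately,
# so the final report can exceed the configured limit.
custom_group = {"element1": list(range(1500)),
                "element2": list(range(1200)),
                "element3": list(range(1000))}

final = [row for rows in custom_group.values()
         for row in apply_row_limit(rows, 1000)]
print(len(final))  # 3000, even though the limit is 1000
```

Each of the three elements is truncated to 1,000 rows individually, so the final report carries 3,000 rows, matching the arithmetic in the paragraph above.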
Value     Behavior
0         No time-out for a SQL pass
Number    The maximum amount of time (in seconds) a SQL pass can execute is limited to the specified number
Property: Allow Index on Metric
Description: Determines whether or not to allow the creation of indexes on fact or metric columns.
Possible Values: Don't allow the creation of indexes on metric columns; Allow the creation of indexes on metric columns (if the Intermediate Table Index setting is set to create)
Default Value: Don't allow the creation of indexes on metric columns

Property: Index Post String
Description: Defines the string that is appended at the end of the CREATE INDEX statement. For example: IN INDEXSPACE
Possible Values: User-defined
Default Value: NULL

Property: Index Prefix
Description: Defines the prefix to use when automatically creating indexes for intermediate SQL passes. The prefix is added to the beginning of the CREATE INDEX statement.
Possible Values: User-defined
Default Value: NULL

Property: Index Qualifier
Description: Defines the string to parse in between the CREATE and INDEX words. For example: CLUSTERED
Possible Values: User-defined
Default Value: NULL

Property: Intermediate Table Index
Description: Determines whether and when to create an index for the intermediate table.
Possible Values: Don't create an index; Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem); Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem) and Secondary index on intermediate table; Create table, insert into table, create index on intermediate table (all platforms other than Teradata, DB2 UDB EEE, Red Brick and Tandem)
Property: Max Columns in Column Placeholder
Description: Determines the maximum number of columns that replace the column wildcard ("!!!") in pre and post statements. 0 = all columns (no limit).
Possible Values: User-defined
Default Value: 0 (No limit)

Property: Max Columns in Index
Description: Determines the maximum number of columns that can be included in a partition key or index.
Possible Values: User-defined
Default Value: No limit

A further Indexing property determines whether a primary key is created instead of a partitioning key for databases that support both types, such as UDB.
Possible Values: Create primary key if the intermediate table index setting is set to create a primary index; Create primary index (Teradata) or partitioning key (UDB) if the intermediate table index setting is set to create a primary index
Default Value: Create primary key if the intermediate table index setting is set to create a primary index
Example
Do not allow creation of indexes on fact or metric columns
create table ZZT8L005Y1YEA000 (
   CATEGORY_ID BYTE,
   REGION_ID BYTE,
   YEAR_ID SHORT,
   WJXBFS1 DOUBLE,
   WJXBFS2 DOUBLE)

insert into ZZT8L005Y1YEA000
select a13.[CATEGORY_ID] AS CATEGORY_ID,
   a15.[REGION_ID] AS REGION_ID,
   a16.[YEAR_ID] AS YEAR_ID,
   sum((a11.[QTY_SOLD] * (a11.[UNIT_PRICE] - a11.[DISCOUNT]))) as WJXBFS1,
   sum((a11.[QTY_SOLD] * ((a11.[UNIT_PRICE] - a11.[DISCOUNT]) - a11.[UNIT_COST]))) as WJXBFS2
from [ORDER_DETAIL] a11,
   [LU_ITEM] a12,
   [LU_SUBCATEG] a13,
   [LU_EMPLOYEE] a14,
   [LU_CALL_CTR] a15,
   [LU_DAY] a16
where a11.[ITEM_ID] = a12.[ITEM_ID] and
   a12.[SUBCAT_ID] = a13.[SUBCAT_ID] and
   a11.[EMP_ID] = a14.[EMP_ID] and
   a14.[CALL_CTR_ID] = a15.[CALL_CTR_ID] and
   a11.[ORDER_DATE] = a16.[DAY_DATE] and
   a15.[REGION_ID] in (1)
group by a13.[CATEGORY_ID],
   a15.[REGION_ID],
   a16.[YEAR_ID]
Index Prefix
This property allows you to define the prefix to add to the beginning of the CREATE INDEX statement when automatically creating indexes for intermediate SQL passes. For example, the index prefix you define appears in the CREATE INDEX statement as shown below:
create index (index prefix)IDX_TEMP1 (STORE_ID, STORE_DESC)
Index Qualifier

The Index Qualifier setting defines the string placed between the CREATE and INDEX keywords. The qualifier and the Index Post String appear in the generated statement as follows.

All platforms except Teradata: create <<Index Qualifier>> index i_[Table Name] on [Table Name] ([Column List]) <<Index Post String>>
Teradata: create <<Index Qualifier>> index i_[Table Name] ([Column List]) on [Table Name] <<Index Post String>>
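The two patterns above can be assembled mechanically. The following sketch is illustrative Python, not MicroStrategy code; create_index_sql and its parameters are invented for the example:

```python
def create_index_sql(table, columns, platform="generic",
                     qualifier="", post_string=""):
    """Assemble a CREATE INDEX statement following the patterns above.

    Teradata places the column list before the table name; all other
    platforms place it after. The qualifier sits between CREATE and INDEX,
    and the post string is appended at the end.
    """
    q = f" {qualifier}" if qualifier else ""
    post = f" {post_string}" if post_string else ""
    cols = ", ".join(columns)
    if platform == "teradata":
        return f"create{q} index i_{table} ({cols}) on {table}{post}"
    return f"create{q} index i_{table} on {table} ({cols}){post}"

print(create_index_sql("TEMP1", ["STORE_ID", "STORE_DESC"],
                       qualifier="CLUSTERED", post_string="IN INDEXSPACE"))
# create CLUSTERED index i_TEMP1 on TEMP1 (STORE_ID, STORE_DESC) IN INDEXSPACE
```

Swapping platform="teradata" moves the column list in front of the ON clause, which is the only structural difference between the two patterns.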
Index Post String
The Index Post String setting allows you to add a custom string to the end of the CREATE INDEX statement.
Index Post String = /* in tablespace1 */

create index IDX_TEMP1 (STORE_ID, STORE_DESC) /* in tablespace1 */
Examples
Do not create an index
create table TEMP1 (
   STORE_ID INT,
   STORE_DESC VARCHAR(20),
   SALES FLOAT)
Create Primary index (Teradata)/Partition key (DB2/UDB EEE)/Primary Key (RedBrick, Tandem)
NCR Teradata:

create table TEMP1 (
   STORE_ID INT,
   STORE_DESC VARCHAR(20),
   SALES FLOAT)
primary index(STORE_ID, STORE_DESC)

DB2/UDB:

create table TEMP1 (
   STORE_ID INT,
   STORE_DESC VARCHAR(20),
   SALES FLOAT)
partitioning key(STORE_ID, STORE_DESC)
Create Primary index (Teradata)/Partition key (DB2/UDB EEE)/Primary Key (RedBrick, Tandem) and Secondary index on intermediate table
NCR Teradata:

create table TEMP1 (
   STORE_ID INT,
   STORE_DESC VARCHAR(20),
   SALES FLOAT)
primary index(STORE_ID, STORE_DESC)
insert into TEMP1 ...
create index IDX_TEMP1 (STORE_ID, STORE_DESC)

DB2/UDB:

create table TEMP1 (
   STORE_ID INT,
   STORE_DESC VARCHAR(20),
   SALES FLOAT)
partitioning key(STORE_ID, STORE_DESC)
insert into TEMP1 ...
create index IDX_TEMP1 (STORE_ID, STORE_DESC)

RedBrick/Tandem:

create table TEMP1 (
   STORE_ID INT,
   STORE_DESC VARCHAR(20),
   SALES FLOAT,
   primary key(STORE_ID, STORE_DESC))
insert into TEMP1 ...
create index IDX_TEMP1 (STORE_ID, STORE_DESC)
Create table, insert into table, create index on intermediate table (all platforms other than Teradata, UDB EEE, RedBrick and Tandem)
create table TEMP1 (
   STORE_ID INT,
   STORE_DESC VARCHAR(20),
   SALES FLOAT)
insert into TEMP1 ...
Value     Behavior
0         All attribute ID columns go into the index
Number    The maximum number of attribute ID columns to use with the wildcard
Using attribute weights together with the Max Columns in Index property, you can designate any attribute to be included in the index. You can define attribute weights in the Project Configuration Editor. Select the Report definition: SQL generation category, and in the Attribute weights section, click Modify.
Value     Behavior
0         All attribute ID columns are placed in the index
Number    The maximum number of attribute ID columns that are placed in the index
RedBrick and Tandem: When the Intermediate Table Index property is set to create a primary index, a primary key is created.

Teradata: Defaults to a primary index.

DB2 UDB EEE: A partition key is used. However, DB2 users may want to create a primary key rather than a partition key, and this setting allows you to do this. UDB is currently the only platform that can use either option. This changes the create table pattern.
Example (DB2)
Create primary key if the intermediate table index setting is set to create a primary index
create table ZZ01070A00 (
   CUSTOMER_ID SMALLINT,
   WJXBFS1 DOUBLE)
primary key (CUSTOMER_ID)
Create Primary Index (Teradata)/Partitioning Key (DB2 UDB EEE)/Primary Key (RedBrick & Tandem)
create table ZZ01070A00 (
   CUSTOMER_ID SMALLINT,
   WJXBFS1 DOUBLE)
partitioning key (CUSTOMER_ID)
Property: Attribute to join when key from neither side can be supported by the other side
Description: Controls whether tables are joined only on the common keys or on all common columns for each table.
Possible Values: Join common key on both sides; Join common attributes (reduced) on both sides
Default Value: Join common key on both sides

Property: Base Table Join for Template
Description: Controls whether two fact tables are directly joined together. If you choose Temp Table Join, the Analytical Engine calculates results independently from each fact table and places those results into two intermediate tables. These intermediate tables are then joined together.

Property: Cartesian Join Evaluation
Description: Allows the MicroStrategy SQL Engine to use a new algorithm for evaluating whether or not a Cartesian join is necessary.
Possible Values: Do not reevaluate cartesian joins; Reevaluate cartesian joins

Property: Cartesian Join Warning
Description: Action that occurs when the Analytical Engine generates a report that contains a Cartesian join.
Possible Values: Execute; Cancel execution; Cancel execution only when warehouse table is involved in either side of cartesian join; If only one side of cartesian join contains warehouse tables, SQL will be executed without warning
Property: Downward Outer Join Option
Description: Allows users to choose how to handle metrics that have a higher level than the template.
Possible Values: Do not preserve all the rows for metrics higher than template level; Preserve all the rows for metrics higher than template level w/o report filter; Preserve all the rows for metrics higher than template level with report filter; Do not do downward outer join for database that support full outer join; Do not do downward outer join for database that support full outer join, and order temp tables in last pass by level
Default Value: Do not preserve all the rows for metrics higher than template level

Property: DSS Star Join
Description: Controls which lookup tables are included in the join against the fact table. For a partial star join, the Analytical Engine joins the lookup tables of all attributes present in either the template or the filter or metric level, if needed.
Possible Values: No star join; Partial star join
Default Value: No star join

Property: From Clause Order
Description: Determines whether to use the normal FROM clause order as generated by the Analytical Engine or to switch the order.
Possible Values: Normal FROM clause order as generated by the engine; Move last table in normal FROM clause order to the first; Move MQ table in normal From clause order to the last (for RedBrick); Reverse FROM clause order as generated by the engine

Property: Full Outer Join Support
Description: Indicates whether the database platform supports full outer joins.
Possible Values: No support; Support
Default Value: No support

Property: Join Type
Description: Type of column join.
Possible Values: Join 89; Join 92; SQL 89 Inner Join and Cross Join and SQL 92 Outer Join; SQL 89 Inner Join and SQL 92 Outer Join and Cross Join
Default Value: Join 89

Possible Values (additional Joins property): Partially based on attribute level (behavior prior to version 8.0.1); Fully based on attribute level. Lookup tables for lower level attributes are joined before those for higher level attributes
Property: Max Tables in Join
Description: Maximum number of tables to join together.
Default Value: No limit

Property: Max Tables in Join Warning
Description: Action that occurs when the Analytical Engine generates a report that exceeds the maximum number of tables in the join limit.
Default Value: Cancel execution

Property: Preserve All Final Pass Result Elements
Description: Perform an outer join to the final result set in the final pass.
Possible Values: Preserve common elements of final pass result table and lookup/relationship table; Preserve all final result pass elements; Preserve all elements of final pass result table with respect to lookup table but not relationship table; Do not listen to per report level setting, preserve elements of final pass according to the setting at attribute level (if this choice is selected at attribute level, it is treated as preserve common elements, i.e. choice 1)
Default Value: Preserve common elements of final pass result table and lookup/relationship table

Property: Preserve All Lookup Table Elements
Possible Values: Preserve common elements of lookup and final pass result table; Preserve lookup table elements joined to final pass result table based on fact table keys; Preserve lookup table elements joined to final pass result table based on template attributes without filter; Preserve lookup table elements joined to final pass result table based on template attributes with filter
Attribute to join when key from neither side can be supported by the other side
The "Attribute to join when key from neither side can be supported by the other side" setting is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties. This VLDB property determines how MicroStrategy joins tables with common columns. The options for this property are:
Join common key on both sides: Joins on tables only use columns that are in each table, and are also keys for each table. Join common attributes (reduced) on both sides: Joins between tables use all common attribute columns to perform the join. This functionality can be helpful in a couple of different scenarios.
You have two different tables named Table1 and Table2. Both tables share three ID columns, for Year, Month, and Date, along with other columns of data. Table1 uses Year, Month, and Date as keys, while Table2 uses only Year and Month as keys. Since the ID column for Date is not a key for Table2, you must set this option to include Date in the join along with Year and Month.

You have a table named Table1 that includes the columns for the attributes Quarter, Month of Year, and Month. Since Month is a child of Quarter and Month of Year, its ID column is used as the key for Table1. There is also a temporary table named TempTable that includes the columns for the attributes Quarter, Month of Year, and Year, using all three ID columns as keys of the table. It is not possible to join Table1 and TempTable unless you set this option, because they do not share any common keys. If you set this option, Table1 and TempTable can join on the common attributes Quarter and Month of Year.
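The choice between the two options can be sketched as follows. This is illustrative Python, not MicroStrategy code; the table and column names echo the first scenario above, and join_columns is a hypothetical helper:

```python
def join_columns(left_cols, left_keys, right_cols, right_keys,
                 mode="common_key"):
    """Pick the columns two tables are joined on.

    common_key: only columns present in both tables AND keys of both
                (the default, Join common key on both sides).
    common_attributes: every column the tables share
                (Join common attributes (reduced) on both sides).
    """
    common = set(left_cols) & set(right_cols)
    if mode == "common_key":
        return common & set(left_keys) & set(right_keys)
    return common

# Table1 keys on Year/Month/Date; Table2 keys only on Year/Month.
t1_cols, t1_keys = ["Year", "Month", "Date", "Sales"], ["Year", "Month", "Date"]
t2_cols, t2_keys = ["Year", "Month", "Date", "Target"], ["Year", "Month"]

print(sorted(join_columns(t1_cols, t1_keys, t2_cols, t2_keys)))
# ['Month', 'Year'] -- Date is dropped because it is not a key of Table2
print(sorted(join_columns(t1_cols, t1_keys, t2_cols, t2_keys,
                          mode="common_attributes")))
# ['Date', 'Month', 'Year']
```

With the default, the Date column falls out of the join because it is not a key on both sides; the common-attributes option brings it back in.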
Example
Use Temp Table Join
select a11.MARKET_NBR MARKET_NBR,
   sum(a11.CLE_SLS_DLR) CLEARANCESAL
into #ZZTIS00H5D3SP000
from HARI_MARKET_DIVISION a11
group by a11.MARKET_NBR

select a11.MARKET_NBR MARKET_NBR,
   sum(a11.COST_AMT) COSTAMOUNT
into #ZZTIS00H5D3SP001
from HARI_COST_MARKET_DIV a11
group by a11.MARKET_NBR

select pa1.MARKET_NBR MARKET_NBR,
   a11.MARKET_DESC MARKET_DESC,
   pa1.CLEARANCESAL WJXBFS1,
   pa2.COSTAMOUNT WJXBFS2
from #ZZTIS00H5D3SP000 pa1
   left outer join #ZZTIS00H5D3SP001 pa2
      on (pa1.MARKET_NBR = pa2.MARKET_NBR)
   left outer join HARI_LOOKUP_MARKET a11
      on (pa1.MARKET_NBR = a11.MARKET_NBR)
This property allows the MicroStrategy SQL Engine to use a new algorithm for evaluating whether or not a Cartesian join is necessary. The new algorithm can sometimes avoid a Cartesian join when the old algorithm cannot. For backward compatibility, the default is the old algorithm. If you see Cartesian joins that appear to be avoidable, use this property to determine whether the engine's new algorithm avoids the Cartesian join.
Examples
Do Not Reevaluate Cartesian Joins
select a12.ATTR1_ID ATTR1_ID,
   max(a12.ATTR1_DESC) ATTR1_DESC,
   a13.ATTR2_ID ATTR2_ID,
   max(a13.ATTR2_DESC) ATTR2_DESC,
   count(a11.FACT_ID) METRIC
from FACTTABLE a11
   cross join LU_TABLE1 a12
   join LU_TABLE2 a13
      on (a11.ATTR3_ID = a13.ATTR3_ID and
          a12.ATTR1_ID = a13.ATTR1_CD)
group by a12.ATTR1_ID,
   a13.ATTR2_ID
Reevaluate Cartesian Joins

select a12.ATTR1_ID ATTR1_ID,
   max(a12.ATTR1_DESC) ATTR1_DESC,
   a13.ATTR2_ID ATTR2_ID,
   max(a13.ATTR2_DESC) ATTR2_DESC,
   count(a11.FACT_ID) METRIC
from FACTTABLE a11
   join LU_TABLE2 a13
      on (a11.ATTR3_ID = a13.ATTR3_ID)
   join LU_TABLE1 a12
      on (a12.ATTR1_ID = a13.ATTR1_CD)
group by a12.ATTR1_ID,
   a13.ATTR2_ID
Execute: When any Cartesian join is encountered, execution continues without warning.

Cancel execution: When a report contains any Cartesian join, execution is canceled.

Cancel execution only when warehouse table is involved in either side of Cartesian join: The execution is canceled only when a warehouse table is involved in a Cartesian join. In other words, the Cartesian join is allowed when all tables involved in the join are intermediate tables.

If only one side of Cartesian join contains warehouse tables, SQL will be executed without warning: When all tables involved in the Cartesian join are intermediate tables, the SQL is executed without warning. This option also allows a Cartesian join if a warehouse table is on only one side of the join, and cancels it if both sides are warehouse tables.
Note the following:
In the rare situation when a warehouse table is Cartesian-joined to an intermediate table, the execution is usually canceled. However, there may be times when you want to allow this to execute. In this case, you can choose the option: If only one side of Cartesian join contains warehouse tables, SQL will be executed without warning. If this option is selected, the execution is canceled only when warehouse tables are involved in both sides of the Cartesian join.
Some Cartesian joins may not be a direct table-to-table join. If one join Cartesian joins to another join, and one of the joins contains a warehouse table (not an intermediate table), then the execution is either canceled or allowed depending on the option selected (see below). For example, if (TT_A join TT_B) Cartesian join (TT_C join WH_D) the following occurs based on the following settings:
If the setting Cancel execution only when warehouse table is involved in Cartesian join is selected, execution is canceled. In the above example, execution is canceled because a warehouse table is used, even though TT_A, TT_B, and TT_C are all intermediate tables. If the setting If only one side of Cartesian... is selected, SQL runs without warning. In the above example, execution continues because a warehouse table (WH_D) is used on only one side of the join.
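The decision logic for these settings can be sketched as follows. This is illustrative Python, not MicroStrategy code; the "WH_"/"TT_" naming convention and the setting labels are invented for the example:

```python
def cartesian_join_action(left_tables, right_tables, setting):
    """Decide whether to run SQL containing a Cartesian join.

    Table names beginning with 'WH_' stand for warehouse tables; anything
    else stands for an intermediate (temp) table.
    """
    wh_left = any(t.startswith("WH_") for t in left_tables)
    wh_right = any(t.startswith("WH_") for t in right_tables)
    if setting == "execute":
        return "execute"
    if setting == "cancel":
        return "cancel"
    if setting == "cancel_if_warehouse":        # cancel when either side has one
        return "cancel" if (wh_left or wh_right) else "execute"
    if setting == "cancel_if_both_warehouse":   # run unless both sides have one
        return "cancel" if (wh_left and wh_right) else "execute"

# (TT_A join TT_B) cartesian join (TT_C join WH_D), as in the example above:
print(cartesian_join_action(["TT_A", "TT_B"], ["TT_C", "WH_D"],
                            "cancel_if_warehouse"))       # cancel
print(cartesian_join_action(["TT_A", "TT_B"], ["TT_C", "WH_D"],
                            "cancel_if_both_warehouse"))  # execute
```

With WH_D on only one side, the stricter setting cancels while the more permissive one executes, matching the two outcomes described above.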
Traditionally, the outer join flag is ignored because M2 (at the Region level) is higher than the report level of Store; it is difficult to preserve all of the stores for a metric at the Region level. Starting with 7i, however, you can preserve rows for a metric at a higher level than the report. Because the report only shows Store while M2 is at the Region level, a downward join pass is needed to find all stores that belong to the regions in M2, so that a union is formed between these stores and the stores in M1.

When performing a downward join, another issue arises. Even though all the stores that belong to the regions in M2 can be found, these stores may not be the ones from which M2 is calculated. If a report filters on a subset of stores, then M2 (if it is a filtered metric) is calculated only from those stores and aggregated to regions. When a downward join is done, either all the stores that belong to the regions in M2 are included, or only those stores that are both in those regions and in the report filter. Hence, this property has three options.
Example
Using the above example and applying a filter for Atlanta and Charlotte, the Do not preserve all the rows for metrics higher than template level option returns the following results. Charlotte does not appear because it has no sales data in the fact table: the outer join flag on metrics higher than template level is ignored.
Store
Atlanta
Using Preserve all the rows for metrics higher than template level without report filter returns the results shown below. Now Charlotte appears because the outer join is used, but Washington appears as well, because it is in the region and the filter is not applied.
Store
Atlanta
Charlotte
Washington
Using Preserve all the rows for metrics higher than template level with report filter produces the following results. Washington is filtered out but Charlotte still appears because of the outer join.
Store
Atlanta
Charlotte
For backward compatibility, the default is to ignore the outer join flag for metrics higher than template level. This is the SQL Engine behavior for MicroStrategy 6.x or lower, as well as for MicroStrategy 7.0 and 7.1.
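The three options can be imitated with plain SQL against the example data. This is a hedged sketch: the table names, the East region, and the sales figures are invented stand-ins, not the SQL the engine generates.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE lookup_store (store TEXT, region TEXT);
INSERT INTO lookup_store VALUES
  ('Atlanta','East'), ('Charlotte','East'), ('Washington','East');
CREATE TABLE sales (store TEXT, amt INTEGER);  -- Charlotte has no sales rows
INSERT INTO sales VALUES ('Atlanta', 100), ('Washington', 50);
""")
flt = ("Atlanta", "Charlotte")                 # report filter

# Option 1 -- outer join flag ignored: only filtered stores with sales.
opt1 = [r[0] for r in cur.execute(
    "SELECT store FROM sales WHERE store IN (?,?) ORDER BY 1", flt)]

# Option 2 -- preserve without report filter: the downward join pulls in
# every store of the regions touched by the metric; the filter is ignored.
opt2 = [r[0] for r in cur.execute("""
    SELECT store FROM lookup_store
    WHERE region IN (SELECT region FROM lookup_store WHERE store IN (?,?))
    ORDER BY 1""", flt)]

# Option 3 -- preserve with report filter: the same downward join, but the
# report filter is re-applied to the stores it finds.
opt3 = [r[0] for r in cur.execute("""
    SELECT store FROM lookup_store
    WHERE region IN (SELECT region FROM lookup_store WHERE store IN (?,?))
      AND store IN (?,?) ORDER BY 1""", flt + flt)]

print(opt1, opt2, opt3)
```

The three result lists match the three report tables above: Atlanta only; Atlanta, Charlotte, and Washington; Atlanta and Charlotte.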
The lookup table used in a partial star join is not necessarily the same as the lookup table defined in the attribute form editor. Any table that acts as a lookup table rather than a fact table in the SQL and contains the column is considered a feasible lookup table.
Examples
No Star Join
select distinct a11.PBTNAME PBTNAME
from STORE_ITEM_PTMAP a11
where a11.YEAR_ID in (1994)

select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a13.ITEM_DESC ITEM_DESC,
a13.CLASS_DESC CLASS_DESC,
a11.STORE_NBR STORE_NBR,
a14.STORE_DESC STORE_DESC,
sum(a11.REG_SLS_DLR) WJXBFS1
from STORE_ITEM_94 a11,
LOOKUP_DAY a12,
LOOKUP_ITEM a13,
LOOKUP_STORE a14
where a11.CUR_TRN_DT = a12.CUR_TRN_DT
and a11.CLASS_NBR = a13.CLASS_NBR
and a11.ITEM_NBR = a13.ITEM_NBR
and a11.STORE_NBR = a14.STORE_NBR
and a12.YEAR_ID in (1994)
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a13.ITEM_DESC,
a13.CLASS_DESC,
a11.STORE_NBR,
a14.STORE_DESC
Examples
Normal FROM clause order as generated by the engine
select a12.CUSTOMER_ID CUSTOMER_ID, sum(a11.ORDER_AMT) WJXBFS1 from ORDER_FACT a11 join LU_ORDER a12 on (a11.ORDER_ID = a12.ORDER_ID) group by a12.CUSTOMER_ID
Move MQ Table in normal FROM clause order to the last (for RedBrick)
This setting is added primarily for RedBrick users. The default order of table joins is as follows:

1 Join the fact tables together.
2 Join the metric qualification table.
3 Join the relationship table.
4 Join the lookup tables if needed.

This option changes the order to the following:

1 Join the fact tables together.
2 Join the relationship table.
3 Join the lookup tables if needed.
4 Join the metric qualification table.
Examples
Full Outer Join Not Supported
select a12.YEAR_ID YEAR_ID,
sum(a11.TOT_SLS_DLR) TOTALSALESCO
into #ZZTIS00H5MJMD000
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
where a12.MONTH_ID = 199411
group by a12.YEAR_ID

select a12.YEAR_ID YEAR_ID,
sum(a11.TOT_SLS_DLR) TOTALSALESCO
into #ZZTIS00H5MJMD001
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
where a12.MONTH_ID = 199311
group by a12.YEAR_ID

select pa1.YEAR_ID YEAR_ID
into #ZZTIS00H5MJOJ002
from #ZZTIS00H5MJMD000 pa1
union
select pa2.YEAR_ID YEAR_ID
from #ZZTIS00H5MJMD001 pa2

select distinct pa3.YEAR_ID YEAR_ID,
a11.YEAR_DESC YEAR_DESC,
pa1.TOTALSALESCO TOTALSALESCO,
pa2.TOTALSALESCO TOTALSALESCO1
from #ZZTIS00H5MJOJ002 pa3
left outer join #ZZTIS00H5MJMD000 pa1
on (pa3.YEAR_ID = pa1.YEAR_ID)
left outer join #ZZTIS00H5MJMD001 pa2
on (pa3.YEAR_ID = pa2.YEAR_ID)
left outer join HARI_LOOKUP_YEAR a11
on (pa3.YEAR_ID = a11.YEAR_ID)

Full Outer Join Supported

select a12.YEAR_ID YEAR_ID,
sum(a11.TOT_SLS_DLR) TOTALSALESCO
into #ZZTIS00H5MKMD001
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
where a12.MONTH_ID = 199311
group by a12.YEAR_ID

select distinct coalesce(pa1.YEAR_ID, pa2.YEAR_ID) YEAR_ID,
a11.YEAR_DESC YEAR_DESC,
pa1.TOTALSALESCO TOTALSALESCO,
pa2.TOTALSALESCO TOTALSALESCO1
from #ZZTIS00H5MKMD000 pa1
full outer join #ZZTIS00H5MKMD001 pa2
on (pa1.YEAR_ID = pa2.YEAR_ID)
left outer join HARI_LOOKUP_YEAR a11
on (coalesce(pa1.YEAR_ID, pa2.YEAR_ID) = a11.YEAR_ID)
Join Type
The Join Type property determines which ANSI join syntax pattern to use. Some databases, such as Oracle, do not support the ANSI 92 standard yet. Some databases, such as DB2, support both Join 89 and Join 92. Other databases, such as Tandem and some versions of Teradata, have a mix of the join standards and therefore need their own setting. If the Full Outer Join Support VLDB property is set to YES, this property is ignored.
Examples
Join 89
select a22.STORE_NBR STORE_NBR, max(a22.STORE_DESC) STORE_DESC, a21.CUR_TRN_DT CUR_TRN_DT, sum(a21.REG_SLS_DLR) WJXBFS1 from STORE_DIVISION a21, LOOKUP_STORE a22 where a21.STORE_NBR = a22.STORE_NBR group by a22.STORE_NBR, a21.CUR_TRN_DT
Join 92
select a21.CUR_TRN_DT CUR_TRN_DT, a22.STORE_NBR STORE_NBR, max(a22.STORE_DESC) STORE_DESC, sum(a21.REG_SLS_DLR) WJXBFS1 from STORE_DIVISION a21 join LOOKUP_STORE a22 on (a21.STORE_NBR = a22.STORE_NBR) group by a21.CUR_TRN_DT, a22.STORE_NBR
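The two syntaxes are interchangeable for inner joins. A quick sanity check (an illustrative sqlite3 sketch with invented data, not engine output) confirms that the Join 89 comma/WHERE pattern and the Join 92 ANSI JOIN ... ON pattern return the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE store_division (store_nbr INTEGER, reg_sls_dlr INTEGER);
CREATE TABLE lookup_store (store_nbr INTEGER, store_desc TEXT);
INSERT INTO store_division VALUES (1, 100), (1, 50), (2, 75);
INSERT INTO lookup_store VALUES (1, 'East'), (2, 'West'), (3, 'North');
""")

# Join 89: comma-separated FROM list, join condition in WHERE.
join_89 = cur.execute("""
    SELECT a22.store_nbr, max(a22.store_desc), sum(a21.reg_sls_dlr)
    FROM store_division a21, lookup_store a22
    WHERE a21.store_nbr = a22.store_nbr
    GROUP BY a22.store_nbr ORDER BY a22.store_nbr""").fetchall()

# Join 92: ANSI JOIN ... ON syntax.
join_92 = cur.execute("""
    SELECT a22.store_nbr, max(a22.store_desc), sum(a21.reg_sls_dlr)
    FROM store_division a21 JOIN lookup_store a22
      ON (a21.store_nbr = a22.store_nbr)
    GROUP BY a22.store_nbr ORDER BY a22.store_nbr""").fetchall()

print(join_89 == join_92)  # True; store 3 is dropped by both inner joins
```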
left outer join COST_STORE_DEP a22
on (a21.DEPARTMENT_NBR = a22.DEPARTMENT_NBR
and a21.CUR_TRN_DT = a22.CUR_TRN_DT
and a21.STORE_NBR = a22.STORE_NBR)
left outer join STORE_DEPARTMENT a23
on (a21.STORE_NBR = a23.STORE_NBR
and a21.DEPARTMENT_NBR = a23.DEPARTMENT_NBR
and a21.CUR_TRN_DT = a23.CUR_TRN_DT),
LOOKUP_MARKET a24
where a21.MARKET_NBR = a24.MARKET_NBR
group by a21.MARKET_NBR
Partially based on attribute level (behavior prior to version 8.0.1) (default setting)

Fully based on attribute level: lookup tables for lower level attributes are joined before those for higher level attributes

If you select the first (default) option, lookup tables are loaded for the join in alphabetical order. If you select the second option, lookup tables are loaded for the join based on attribute levels, and joining is performed on the lowest level attribute first.
The table below explains the possible values and their behavior:
Value    Behavior
0        No limit on the number of tables in a join
Number   The maximum number of tables in a join is set to the number specified
Store table:

Store Name
East
Central
South
North

Fact table:

Store ID   Year   Dollar Sales
1          2002   1000
2          2002   2000
3          2002   5000
1          2003   4000
2          2003   6000
3          2003   7000
4          2003   3000
5          2003   1500
The Fact table has data for Store IDs 4 and 5, but the Store table does not have any entry for these two stores. On the other hand, notice that the North Store does not have any entries in the Fact table. This data is used to show examples of how the next two properties work.
If you choose the Preserve common elements of final pass result table and lookup table option, the SQL Engine generates an equi-join, so you see only elements that are common to both tables.

If you choose the Preserve all final result pass elements option, the SQL Engine generates an outer join, and your report contains all of the elements that are in the final result set. When this setting is turned ON, outer joins are generated for any joins from the fact table to the lookup table, as well as to any relationship tables. This is because it is hard to distinguish which table is used as a lookup table and which is used as a relationship table; one table often plays both roles. For example, LOOKUP_DAY serves both as a lookup table for the Day attribute and as a relationship table for Day and Month.

This setting should not be used in standard data warehouses, where the lookup tables are properly maintained and all elements in the fact table have entries in the respective lookup tables. It should be used only when a certain attribute in the fact table contains more (unique) attribute elements than its corresponding lookup table. In the example above, the Fact table contains sales for five different stores, but the Store table contains only four stores. This should not happen in a standard data warehouse because the lookup table, by definition, should contain all the attribute elements; however, it can happen if the fact tables are updated more often than the lookup tables.
If you choose the Preserve all elements of final pass result table with respect to lookup table but not relationship table option, the SQL Engine generates an inner join on all passes except the final pass; on the final pass it generates an outer join.

If you choose the Do not listen to per report level setting option at the database instance, report, or template level, the elements of the final pass are preserved according to the setting for this VLDB property at the attribute level. This value should not be selected at the attribute level itself; if you select this setting at the attribute level, it is treated as the Preserve common elements of final pass result table and lookup table option.

This setting is useful if you have only a few attributes that require different join types. For example, if among the attributes in a report only one needs to preserve elements from the final pass table, you can set the VLDB property to the Preserve all final pass result elements setting for that one attribute, and set the report to the Do not listen setting. When the report is run, only the attribute set differently causes an outer join in the SQL. All other attribute lookup tables are joined using an equi-join, which leads to better SQL performance.
Examples
The first two example results below are based on the Preserving data using outer joins example above. The third example, for the Preserve all elements of final pass result table with respect to lookup table but not relationship table option, is a separate example designed to reflect the increased complexity of that option's behavior.
Example 1: Preserve common elements of final pass result table and lookup table
A report has Store and Dollar Sales on the template. The Preserve common elements of final pass result table and lookup table option returns the following results using the SQL below.
Store      Dollar Sales
East       5000
Central    8000
South      12000
select a11.Store_id Store_id, max(a12.Store) Store, sum(a11.DollarSls) WJXBFS1 from Fact a11 join Store a12 on (a11.Store_id = a12.Store_id) group by a11.Store_id
Example 2: Preserve all final result pass elements

The Preserve all final result pass elements option returns the following results using the SQL below. Store IDs 4 and 5 appear without a store name because they have no entry in the Store lookup table.

Store      Dollar Sales
East       5000
Central    8000
South      12000
           3000
           1500
select a11.Store_id Store_id, max(a12.Store) Store, sum(a11.DollarSls) WJXBFS1 from Fact a11 left outer join Store a12 on (a11.Store_id = a12.Store_id) group by a11.Store_id
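The equi-join and outer-join results above can be replayed against the sample tables. This is an illustrative sqlite3 sketch; North is given the invented ID 6, since the source indicates its ID is neither 4 nor 5.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE store (store_id INTEGER, store TEXT);
INSERT INTO store VALUES (1,'East'),(2,'Central'),(3,'South'),(6,'North');
CREATE TABLE fact (store_id INTEGER, year INTEGER, dollarsls INTEGER);
INSERT INTO fact VALUES (1,2002,1000),(2,2002,2000),(3,2002,5000),
  (1,2003,4000),(2,2003,6000),(3,2003,7000),(4,2003,3000),(5,2003,1500);
""")

# Equi-join: only stores present in BOTH tables survive (drops fact IDs 4
# and 5, and drops North, which has no fact rows).
equi = cur.execute("""
    SELECT a12.store, sum(a11.dollarsls) FROM fact a11
    JOIN store a12 ON (a11.store_id = a12.store_id)
    GROUP BY a11.store_id ORDER BY a11.store_id""").fetchall()

# Outer join: every element of the final (fact) result set survives; IDs 4
# and 5 keep their totals but have no store name.
outer = cur.execute("""
    SELECT a12.store, sum(a11.dollarsls) FROM fact a11
    LEFT OUTER JOIN store a12 ON (a11.store_id = a12.store_id)
    GROUP BY a11.store_id ORDER BY a11.store_id""").fetchall()

print(equi)
print(outer)
```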
Example 3: Preserve all elements of final pass result table with respect to lookup table but not to relationship table
A report has Country, Metric 1, and Metric 2 on the template. The following fact tables exist for each metric:
Fact table (Metric 1):

CALLCENTER_ID   Fact 1
1               1000
2               2000
1               1000
2               2000
3               1000
4               1000

Fact table (Metric 2):

EMPLOYEE_ID   Fact 2
1             5000
2             6000
1             5000
2             6000
3             5000
4             5000
5             1000
The SQL Engine performs three passes. In the first pass, the SQL Engine calculates metric 1. The SQL Engine inner joins the Fact Table (Metric 1) table above with the call center lookup table LU_CALL_CTR below:
CALLCENTER_ID   COUNTRY_ID
1               1
2               1
3               2
to create the following metric 1 temporary table, grouped by country, using the SQL that follows:
COUNTRY_ID   Metric 1
1            6000
2            1000
WJXBFS1
from ORDER_DETAIL a11, LU_CALL_CTR a12 where a11.CALL_CTR_ID = a12.CALL_CTR_ID group by a12.COUNTRY_ID
In the second pass, metric 2 is calculated. The SQL Engine inner joins the Fact Table (Metric 2) table above with the employee lookup table LU_EMPLOYEE below:
EMPLOYEE_ID   COUNTRY_ID
1             1
2             2
3             2
to create the following metric 2 temporary table, grouped by country, using the SQL that follows:
COUNTRY_ID   Metric 2
1            10000
2            17000
create table ZZSP01 nologging as
select a12.COUNTRY_ID COUNTRY_ID,
sum(a11.FREIGHT) WJXBFS1
from ORDER_FACT a11,
LU_EMPLOYEE a12
where a11.EMPLOYEE_ID = a12.EMPLOYEE_ID
group by a12.COUNTRY_ID
In the third pass, the SQL Engine uses the following country lookup table, LU_COUNTRY:
COUNTRY_ID   COUNTRY_DESC
1            United States
3            Europe
The SQL Engine left outer joins the METRIC1_TEMPTABLE above and the LU_COUNTRY table. The SQL Engine then left outer joins the METRIC2_TEMPTABLE above and the LU_COUNTRY table. Finally, the SQL Engine inner joins the results of the third pass to produce the final results. The Preserve all elements of final pass result table with respect to lookup table but not to relationship table option returns the following results using the SQL below.
COUNTRY_ID   COUNTRY_DESC    Metric 1   Metric 2
1            United States   6000       10000
2                            1000       17000
select pa1.COUNTRY_ID COUNTRY_ID,
a11.COUNTRY_NAME COUNTRY_NAME,
pa1.WJXBFS1 WJXBFS1,
pa2.WJXBFS1 WJXBFS2
from ZZSP00 pa1,
ZZSP01 pa2,
LU_COUNTRY a11
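The three passes can be replayed end to end. This is an illustrative sqlite3 sketch of the pass logic, not the engine's actual SQL; the fact values come from the tables above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE fact1 (call_ctr_id INTEGER, v INTEGER);
INSERT INTO fact1 VALUES (1,1000),(2,2000),(1,1000),(2,2000),(3,1000),(4,1000);
CREATE TABLE lu_call_ctr (call_ctr_id INTEGER, country_id INTEGER);
INSERT INTO lu_call_ctr VALUES (1,1),(2,1),(3,2);
CREATE TABLE fact2 (employee_id INTEGER, v INTEGER);
INSERT INTO fact2 VALUES (1,5000),(2,6000),(1,5000),(2,6000),(3,5000),(4,5000),(5,1000);
CREATE TABLE lu_employee (employee_id INTEGER, country_id INTEGER);
INSERT INTO lu_employee VALUES (1,1),(2,2),(3,2);
CREATE TABLE lu_country (country_id INTEGER, country_desc TEXT);
INSERT INTO lu_country VALUES (1,'United States'),(3,'Europe');
""")

# Pass 1: metric 1 per country (the inner join drops call center 4).
cur.execute("""CREATE TEMP TABLE zzsp00 AS
    SELECT l.country_id, sum(f.v) m FROM fact1 f
    JOIN lu_call_ctr l ON (f.call_ctr_id = l.call_ctr_id)
    GROUP BY l.country_id""")

# Pass 2: metric 2 per country (the inner join drops employees 4 and 5).
cur.execute("""CREATE TEMP TABLE zzsp01 AS
    SELECT l.country_id, sum(f.v) m FROM fact2 f
    JOIN lu_employee l ON (f.employee_id = l.employee_id)
    GROUP BY l.country_id""")

# Pass 3: inner join the metric passes, LEFT OUTER JOIN the lookup so
# country 2 survives even though LU_COUNTRY has no row for it.
final = cur.execute("""
    SELECT pa1.country_id, a11.country_desc, pa1.m, pa2.m
    FROM zzsp00 pa1 JOIN zzsp01 pa2 ON (pa1.country_id = pa2.country_id)
    LEFT OUTER JOIN lu_country a11 ON (pa1.country_id = a11.country_id)
    ORDER BY pa1.country_id""").fetchall()
print(final)
```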
Option 1: Preserve common elements of lookup and final pass result table.
This is the default option. The Analytical Engine does a normal (equal) join to the lookup table.
Option 2: Preserve lookup table elements joined to final pass result table based on fact table keys.
Sometimes the fact table level is not the same as the report or template level. For example, a report contains Store, Month, and a Sum(Sales) metric, but the fact table is at the level of Store, Day, and Item. There are two ways to keep all the store and month elements:

Do a left outer join first to keep all attribute elements at the Store, Day, and Item level, then aggregate to the Store and Month level.

Do aggregation first, then do a left outer join to bring in all attribute elements.

This option is for the first approach. In the example given previously, it makes two SQL passes:
Pass 1: LOOKUP_STORE cross join LOOKUP_DAY cross join LOOKUP_ITEM, producing TT1
Pass 2: TT1 left outer join Fact_Table on (store, day, item)
The advantage of this approach is that you can do a left outer join and aggregation in the same pass (pass 2). The disadvantage is that because you do a Cartesian join with the lookup tables at a much lower level (pass 1), the result of the Cartesian joined table (TT1) can be very large.
Option 3: Preserve lookup table elements joined to final pass result table based on template attributes without filter.
This option corresponds to the second approach described above. Still using the same example, it makes three SQL passes:
Pass 1: Aggregate the Fact_Table to TT1 at the Store and Month level. This is actually the final pass of a normal report without this setting turned on.
Pass 2: LOOKUP_STORE cross join LOOKUP_MONTH, producing TT2
Pass 3: TT2 left outer join TT1 on (store, month)
This approach needs one more pass than the previous option, but the cross join table (TT2) is usually smaller.
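A back-of-envelope comparison of the two approaches' intermediate table sizes (the row counts below are invented for illustration):

```python
# The left-outer-join-first approach cross joins the low-level lookups, so
# TT1 has |stores| * |days| * |items| rows; the aggregate-first approach's
# TT2 has only |stores| * |months| rows.
stores, days, items, months = 500, 365, 10_000, 12

tt1_rows = stores * days * items   # Cartesian pass at the Day/Item level
tt2_rows = stores * months         # Cartesian pass at the template level
print(tt1_rows, tt2_rows)          # 1825000000 vs 6000
```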
Option 4: Preserve lookup table elements joined to final pass result table based on template attributes with filter.
This option is similar to Option 3. The only difference is that the report filter is applied in the final pass (Pass 3). For example, a report contains Store, Month, and Sum(Sales) with a filter of Year = 2002. You want to display every store in every month in 2002, regardless of whether there are sales.
However, you do not want to show any months from other years (only the 12 months in year 2002). Option 4 resolves this issue.

When this VLDB setting is turned ON (Option 2, 3, or 4), it is assumed that you want to keep ALL elements of the attributes in their lookup tables. However, sometimes you want such a setting to affect only some of the attributes on a template. For a report containing Store, Month, and Sum(Sales), you may want to show all the store names, even though they have no sales, but not necessarily all the months in the LOOKUP_MONTH table. In 7i, you can individually select the attributes on the template that need to preserve elements. This can be done from the Data menu, by selecting Report Data Options and then choosing Attribute Join Type. Notice that the four options shown in the upper right are the same as those in the VLDB dialog box (internally they are read from the same location). In the lower-right part, you see the individual attributes. By default, all attributes are set to Outer, which means that every attribute participates in the Preserve All Lookup Table Elements property. You still need to turn on this property to make it take effect, which can be done using either this dialog box or the VLDB dialog box.
Examples
The Preserve common elements of lookup and final pass result table option simply generates a direct join between the fact table and the lookup table. The results and SQL are as follows.
Store      Dollar Sales
East       5000
Central    8000
South      12000
select a11.Store_id Store_id,
max(a12.Store) Store,
sum(a11.DollarSls) WJXBFS1
from Fact a11
join Store a12
on (a11.Store_id = a12.Store_id)
group by a11.Store_id
The Preserve lookup table elements joined to final pass result table based on fact table keys option creates a temp table that is a Cartesian join of all lookup table key columns. The fact table is then outer joined to the temp table, which preserves all lookup table elements. The results and SQL are as follows:
Store      Dollar Sales
East       5000
Central    8000
South      12000
North
select distinct a11.Year Year into #ZZOL00 from Fact a11 select pa1.Year Year, a11.Store_id Store_id into #ZZOL01 from #ZZOL00 pa1 cross join Store a11 select pa2.Store_id Store_id, max(a12.Store) Store, sum(a11.DollarSls) WJXBFS1 from #ZZOL01 pa2 left outer join Fact a11 on (pa2.Store_id = a11.Store_id and pa2.Year = a11.Year) join Store a12 on (pa2.Store_id = a12.Store_id)
The Preserve lookup table elements joined to final pass result table based on template attributes without filter option preserves the lookup table elements by left outer joining to the final pass of SQL and only joins on attributes that are on the template. For this example and the next, the filter of Store not equal to Central is added. The results and SQL are as follows:
Store      Dollar Sales
East       5000
Central
South      12000
North
select a11.Store_id Store_id, sum(a11.DollarSls) WJXBFS1 into #ZZT5X00003UOL000 from Fact a11 where a11.Store_id not in (2) group by a11.Store_id select a11.Store_id Store_id, a11.Store Store, pa1.WJXBFS1 WJXBFS1 from Store a11 left outer join #ZZT5X00003UOL000 pa1 on (a11.Store_id = pa1.Store_id) drop table #ZZT5X00003UOL000
The Preserve lookup table elements joined to final pass result table based on template attributes with filter option is the newest option and is the same as above, but you get the filter in the final pass. The results and SQL are as follows:
Store      Dollar Sales
East       5000
South      12000
North
select a11.Store_id Store_id, sum(a11.DollarSls) WJXBFS1 into #ZZT5X00003XOL000 from Fact a11 where a11.Store_id not in (2) group by a11.Store_id select a11.Store_id Store_id, a11.Store Store, pa1.WJXBFS1 WJXBFS1 from Store a11 left outer join #ZZT5X00003XOL000 pa1 on (a11.Store_id = pa1.Store_id) where a11.Store_id not in (2) drop table #ZZT5X00003XOL000
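Options 3 and 4 differ only in whether the filter reappears in the final pass. Replayed as an illustrative sqlite3 sketch (North again gets the invented ID 6, and the fact rows are pre-aggregated store totals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE store (store_id INTEGER, store TEXT);
INSERT INTO store VALUES (1,'East'),(2,'Central'),(3,'South'),(6,'North');
CREATE TABLE fact (store_id INTEGER, dollarsls INTEGER);
INSERT INTO fact VALUES (1,5000),(2,8000),(3,12000),(4,3000),(5,1500);
""")

# Pass 1 (both options): aggregate with the filter Store not in (Central).
cur.execute("""CREATE TEMP TABLE tt1 AS
    SELECT store_id, sum(dollarsls) w FROM fact
    WHERE store_id NOT IN (2) GROUP BY store_id""")

# Option 3: left outer join the lookup to TT1 -- every store survives,
# Central and North with empty sales.
opt3 = cur.execute("""
    SELECT a11.store, pa1.w FROM store a11
    LEFT OUTER JOIN tt1 pa1 ON (a11.store_id = pa1.store_id)
    ORDER BY a11.store_id""").fetchall()

# Option 4: same join, but the report filter is re-applied in the final
# pass, so Central drops out while North is still preserved.
opt4 = cur.execute("""
    SELECT a11.store, pa1.w FROM store a11
    LEFT OUTER JOIN tt1 pa1 ON (a11.store_id = pa1.store_id)
    WHERE a11.store_id NOT IN (2) ORDER BY a11.store_id""").fetchall()

print(opt3)
print(opt4)
```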
These VLDB properties apply to reports that use OLAP cube sources such as Hyperion Essbase. Additional details about each property, including examples where necessary, are provided in the sections following the table.
Property: Datatype Conversion Warning
Description: Determines the OLAP cube report error resolution for OLAP cube data defined with an incompatible data type.
Possible values: Ignore error in conversion and continue report execution using column data type set by data source; Fail report execution
Default value: Fail report execution

Property: Format for Date/Time Values Coming from Data Source
Description: Defines the date format used in your OLAP cube source. This ensures the date data is integrated into MicroStrategy correctly.
Possible values: User-defined
Default value: DD.MM.YYYY

Non empty keyword property
Description: Determines whether or not data is returned from rows that have null or empty values.
Possible values: Do not add the non empty keyword in the MDX select clause; Add the non empty keyword in the MDX select clause only if there are metrics on the report; Always add the non empty keyword in the MDX select clause
Default value: Add the non empty keyword in the MDX select clause only if there are metrics on the report

Generation number property
Description: Determines whether level (from the bottom of the hierarchy up) or generation (from the top of the hierarchy down) should be used to populate the report results.
Possible values: Do not add the generation number property; Add the generation number property
Default value: Do not add the generation number property

Property: MDX Verify Limit Filter Literal Level
Description: Supports an OLAP cube reporting scenario in which filters are created on attribute ID forms and metrics.
Possible values: Do not verify the level of literals in limit or filter expressions; Verify the level of literals in limit or filter expressions
Default value: Do not verify the level of literals in limit or filter expressions
For more information on OLAP cube sources, see the MicroStrategy OLAP Cube Reporting Guide.
Ignore error in conversion and continue report execution using column data type set by data source: An OLAP cube report that includes OLAP cube data mapped to an incompatible data type is executed successfully using the default MicroStrategy data type for the OLAP cube data. The incompatible data type is recognized during report execution and is replaced with MicroStrategy's default character string data type.

Fail report execution: An OLAP cube report that includes OLAP cube data mapped to an incompatible data type fails and no data is returned. When an OLAP cube report fails for this reason, an error message is displayed that identifies the data that has been mapped to an incompatible data type.
For information on the pros and cons of each error resolution option and general information on defining column data types for OLAP cube data, see the MicroStrategy OLAP Cube Reporting Guide.
Do not add the non empty keyword in the MDX select clause: When this option is selected, data is returned from rows that contain data and rows that have null or empty metric values (similar to an outer join in SQL). The null or empty values are displayed on the OLAP cube report. Add the non empty keyword in the MDX select clause only if there are metrics on the report: When this option is selected, and metrics are included on an OLAP
cube report, null or empty data is not returned from the OLAP cube source and is not included on OLAP cube reports (similar to an inner join in SQL). If no metrics are present on an OLAP cube report, then any null and empty values for the attributes are returned and displayed on the OLAP cube report.
Always add the non empty keyword in the MDX select clause: When this option is selected, null or empty data is not returned from the OLAP cube source and is not included on OLAP cube reports (similar to an inner join in SQL).
Example
Do not add the non empty keyword in the MDX select clause
with set [dim0_select_members] as '{[0D_SOLD_TO].[LEVEL01].members}' set [dim1_select_members] as '{[0CALQUARTER].[LEVEL01].members}' select {[Measures].[3STVV9JH7ATAV9YJN06S7ZKSQ]} on columns, CROSSJOIN(hierarchize({[dim0_select_members]} ), hierarchize({[dim1_select_members]})) dimension properties [0D_SOLD_TO].[20D_SOLD_TO], [0D_SOLD_TO].[10D_SOLD_TO] on rows from [0D_DECU/QCUBE2]
Add the non empty keyword in the MDX select clause

with set [dim0_select_members] as '{[0D_SOLD_TO].[LEVEL01].members}' set [dim1_select_members] as '{[0CALQUARTER].[LEVEL01].members}' select {[Measures].[3STVV9JH7ATAV9YJN06S7ZKSQ]} on columns, non empty CROSSJOIN(hierarchize({[dim0_select_members]}), hierarchize({[dim1_select_members]})) dimension properties [0D_SOLD_TO].[20D_SOLD_TO], [0D_SOLD_TO].[10D_SOLD_TO] on rows from [0D_DECU/QCUBE2]
The level SubItem causes the hierarchy to be unbalanced, which displaces the levels of the hierarchy when populated on a report in MicroStrategy. For more information on unbalanced and ragged hierarchies, see the MicroStrategy Project Design Guide. You can choose from the following settings:
Do not add the generation number property (default): When this option is selected, an unbalanced or ragged hierarchy from Essbase is populated on a grid from the bottom of the hierarchy up, as shown in the image above. Add the generation number property: When this option is selected, an unbalanced or ragged hierarchy from Essbase is populated on a grid from the top of the hierarchy down. If this setting is selected for the example scenario described above, the report is populated as shown in the image below.
The unbalanced hierarchy is now displayed on the report with an accurate representation of the corresponding levels. Setting this VLDB property to Add the generation number property for a ragged hierarchy from Essbase can cause incorrect formatting. For more information on unbalanced and ragged hierarchies, see Tech Note TN5200-802-2356 in the MicroStrategy Knowledge Base.
When you run the report, you receive an error alerting you that an unexpected level was found in the result. This is because the filter on the attribute's ID form can include other levels due to the structure of ID values in some OLAP cube sources. When these other levels are included, the metric filter cannot be evaluated correctly by default. You can support this type of report by modifying the MDX Verify Limit Filter Literal Level VLDB property, which has the following options:
Do not verify the level of literals in limit or filter expressions (default): While the majority of OLAP cube reports execute correctly when this option is selected, the scenario described above will fail. Verify the level of literals in limit or filter expressions: Selecting this option for an OLAP cube report allows reports fitting the scenario described above to execute correctly. This is achieved by adding an intersection in the
MDX statement to support the execution of such an OLAP cube report. For example, the OLAP cube report described in the scenario above executes correctly and displays the following data.
Description: The Analytical Engine can either perform the non-aggregation calculation with a subquery, or place the results that would have been selected from a subquery into an intermediate table and join that table to the rest of the query.
Possible values: Use subquery; Use temp table as set in the Fallback Table Type setting
Default value: Use subquery
Property: Compute Non-Agg before/after OLAP Functions (e.g. Rank) Calculated in Analytical Engine
Description: This property controls whether the non-aggregation calculation is performed before or after an Analytical Engine calculation. Use this property to determine, for example, whether the engine ranks the stores and then performs a non-aggregation calculation, or performs the non-aggregation calculation first.
Possible values: Calculate non-aggregation before OLAP Functions/Rank; Calculate non-aggregation after OLAP Functions/Rank
Default value: Calculate non-aggregation before OLAP Functions/Rank

Property: COUNT(column) Support
Description: Some database platforms do not support count on a column (COUNT(COL)). This property converts the COUNT(COL) statement to a COUNT(*). Compound attributes are usually counted by concatenating the keys of all the attributes that form the key; if the database platform does not support COUNT on concatenated strings, this property should be disabled.
Default value: Use COUNT(column)

Metric column alias property
Description: Allows you to choose whether you want to use the metric name as the column alias or a MicroStrategy-generated name.
Possible values: Do not use the metric name as the default metric column alias; Use the metric name as the default metric column alias
Default value: Do not use the metric name as the default metric column alias

Integer constant property
Description: This property determines whether to add a .0 after the integer.
Possible values: Add .0 to integer constant in metric expression; Do Not Add .0 to integer constant in metric expression
Property: Metric Join Type
Description: Type of join used in a metric.
Possible values: Inner Join; Outer Join
Default value: Inner Join

Property: Non-Agg Metric Optimization
Description: Influences the behavior for non-aggregation metrics by either optimizing for smaller temporary tables or for less fact table access.
Possible values: Optimized for less fact table access; Optimized for smaller temp table
Default value: Optimized for less fact table access

Property: NULL Check
Description: Indicates how to handle arithmetic operations with NULL values.
Possible values: Do nothing; Check for NULL in all queries; Check for NULL in temp table join only

COUNT DISTINCT property
Description: Indicates how to handle COUNT (and other aggregation functions) when DISTINCT is present in the SQL.
Possible values: One pass; Multiple count distinct, but count expression must be the same; Multiple count distinct, but only one count distinct per pass; No count distinct, use select distinct and count(*) instead

Transformation property
Possible values: 7.1 style. Apply transformation to all applicable attributes; 7.2 style. Only apply transformation to highest common child when it is applicable to multiple attributes

Property: Zero Check
Possible values: Do nothing; Check for zero in all queries; Check for zero in temp table join only
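The last COUNT DISTINCT option, "use select distinct and count(*) instead", is the rewrite a platform falls back to when it cannot COUNT DISTINCT directly. An illustrative sqlite3 sketch (invented data; NULLs must be excluded to match COUNT's semantics):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (col TEXT)")
cur.executemany("INSERT INTO t VALUES (?)",
                [("a",), ("b",), ("a",), (None,), ("c",)])

# Direct form: COUNT(DISTINCT col) ignores NULL and counts unique values.
direct = cur.execute("SELECT count(DISTINCT col) FROM t").fetchone()[0]

# Fallback form: materialize SELECT DISTINCT, then COUNT(*) over it;
# the explicit NULL exclusion reproduces COUNT's NULL handling.
fallback = cur.execute(
    "SELECT count(*) FROM (SELECT DISTINCT col FROM t WHERE col IS NOT NULL)"
).fetchone()[0]

print(direct, fallback)  # 3 3
```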
When a report contains an absolute non-aggregation metric, the pass that gets the non-aggregated data can be performed in a subquery or in a temporary table.
Use Temp Table as set in the Fallback Table Type setting: When this option is set, the table creation type follows the option selected in the VLDB property Fallback Table Type. The SQL Engine reads the Fallback Table Type VLDB setting and determines whether to create the intermediate table as a true temporary table or a permanent table.
In most cases, the default Fallback Table Type VLDB setting is Temporary table. However, for a few databases, like UDB for 390, this option is set to Permanent table. These databases have their Intermediate Table Type defaulting to True Temporary Table, so you set their Fallback Table Type to Permanent. If you see permanent table creation and you want the absolute non-aggregation metric to use a True Temporary table, set the Fallback Table Type to Temporary table on the report as well.
Use subquery: With this setting, the engine performs the non-aggregation calculation with a subquery.
Examples
Use Sub-query
select a11.CLASS_NBR CLASS_NBR, a12.CLASS_DESC CLASS_DESC, sum(a11.TOT_SLS_QTY) WJXBFS1 from DSSADMIN.MARKET_CLASS a11, DSSADMIN.LOOKUP_CLASS a12 where a11.CLASS_NBR = a12.CLASS_NBR and (((a11.MARKET_NBR) in (select s21.MARKET_NBR
from DSSADMIN.LOOKUP_STORE s21 where s21.STORE_NBR in (3, 2, 1))) and ((a11.MARKET_NBR) in (select min(c11.MARKET_NBR) from DSSADMIN.LOOKUP_MARKET c11 where ((c11.MARKET_NBR) in (select s21.MARKET_NBR from DSSADMIN.LOOKUP_STORE s21 where s21.STORE_NBR in (3, 2, 1)))))) group by a11.CLASS_NBR, a12.CLASS_DESC
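The subquery and temp-table strategies return the same rows. A minimal illustrative sqlite3 sketch of a non-aggregation metric (the value on each class's earliest date; all table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE sales (class INTEGER, trn_dt TEXT, qty INTEGER);
INSERT INTO sales VALUES (1,'2002-01-01',10),(1,'2002-02-01',20),
                         (2,'2002-01-01',5),(2,'2002-03-01',7);
""")

# Subquery form: restrict each class to its earliest transaction date.
sub = cur.execute("""
    SELECT class, qty FROM sales s
    WHERE s.trn_dt = (SELECT min(trn_dt) FROM sales WHERE class = s.class)
    ORDER BY class""").fetchall()

# Temp-table form: materialize the same subquery result once, then join
# the rest of the query to it.
cur.execute("""CREATE TEMP TABLE first_dt AS
    SELECT class, min(trn_dt) trn_dt FROM sales GROUP BY class""")
tmp = cur.execute("""
    SELECT s.class, s.qty FROM sales s
    JOIN first_dt f ON (s.class = f.class AND s.trn_dt = f.trn_dt)
    ORDER BY s.class""").fetchall()

print(sub == tmp)  # True
```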
Compute Non-Agg before/after OLAP Functions (e.g. Rank) Calculated in Analytical Engine
Compute Non-Agg Before/After OLAP Functions/Rank is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties, page 626. When reports contain calculations based on non-aggregation metrics, this property controls the order in which the non-aggregation and calculations are computed.
Examples
Calculate Non-Aggregation Before Analytical
select a12.YEAR_ID YEAR_ID,
  sum(a11.TOT_SLS_QTY) WJXBFS1
from HARI_REGION_DIVISION a11
  join HARI_LOOKUP_DAY a12 on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
where ((a11.CUR_TRN_DT) in (select min(a11.CUR_TRN_DT) from HARI_LOOKUP_DAY a11 group by a11.YEAR_ID))
group by a12.YEAR_ID

create table #ZZTIS00H5J7MQ000 (YEAR_ID DECIMAL(10, 0))

[Placeholder for an analytical SQL]

select a12.YEAR_ID YEAR_ID,
  max(a13.YEAR_DESC) YEAR_DESC,
  sum(a11.TOT_SLS_QTY) TSQDIMYEARNA
from HARI_REGION_DIVISION a11
  join HARI_LOOKUP_DAY a12 on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
  join #ZZTIS00H5J7MQ000 pa1 on (a12.YEAR_ID = pa1.YEAR_ID)
  join HARI_LOOKUP_YEAR a13 on (a12.YEAR_ID = a13.YEAR_ID)
where ((a11.CUR_TRN_DT) in (select min(a15.CUR_TRN_DT) from #ZZTIS00H5J7MQ000 pa1 join HARI_LOOKUP_DAY a15 on (pa1.YEAR_ID = a15.YEAR_ID) group by pa1.YEAR_ID))
group by a12.YEAR_ID
(datetime, '1993-12-01 00:00:00', 120), 1993, 44) [The rest of the INSERT statements have been omitted from display].
select distinct pa1.YEAR_ID YEAR_ID,
  pa1.WJXBFS1 WJXBFS1
from #ZZTIS00H5J8NB000 pa1
where ((pa1.CUR_TRN_DT) in (select min(c11.CUR_TRN_DT) from HARI_LOOKUP_DAY c11 group by c11.YEAR_ID))

create table #ZZTIS00H5J8MQ001 (YEAR_ID DECIMAL(10, 0), WJXBFS1 FLOAT)

[Placeholder for an analytical SQL]

select a12.YEAR_ID YEAR_ID,
  max(a13.YEAR_DESC) YEAR_DESC,
  sum(a11.TOT_SLS_QTY) TSQDIMYEARNA
from HARI_REGION_DIVISION a11
  join HARI_LOOKUP_DAY a12 on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
  join #ZZTIS00H5J8MQ001 pa2 on (a12.YEAR_ID = pa2.YEAR_ID)
  join HARI_LOOKUP_YEAR a13 on (a12.YEAR_ID = a13.YEAR_ID)
where ((a11.CUR_TRN_DT) in (select min(a15.CUR_TRN_DT) from #ZZTIS00H5J8MQ001 pa2 join HARI_LOOKUP_DAY a15 on (pa2.YEAR_ID = a15.YEAR_ID) group by pa2.YEAR_ID))
group by a12.YEAR_ID
Examples
COUNT expression enabled
select a21.DIVISION_NBR DIVISION_NBR,
  max(a22.DIVISION_DESC) DIVISION_DESC,
  count(distinct char(a21.ITEM_NBR) || char(a21.CLASS_NBR)) ITEM_COUNT
from LOOKUP_ITEM a21
  join LOOKUP_DIVISION a22 on (a21.DIVISION_NBR = a22.DIVISION_NBR)
group by a21.DIVISION_NBR
count(a21.ITEM_NBR) ITEM_COUNT from TEMP1 a21 join LOOKUP_DIVISION a22 on (a21.DIVISION_NBR = a22.DIVISION_NBR) group by a22.DIVISION_NBR
COUNT(column) Support
COUNT(column) Support is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties, page 626. The COUNT(column) Support property is used to specify whether COUNT on a column is supported or not. If it is not supported, the COUNT(column) is computed by using intermediate tables and COUNT(*).
Examples
Use COUNT(column)
select a11.STORE_NBR STORE_NBR,
  max(a12.STORE_DESC) STORE_DESC,
  count(distinct a11.COST_AMT) COUNTDISTINCT
from HARI_COST_STORE_DEP a11
  join HARI_LOOKUP_STORE a12 on (a11.STORE_NBR = a12.STORE_NBR)
group by a11.STORE_NBR
Use COUNT(*)
select a11.STORE_NBR STORE_NBR,
  a11.COST_AMT WJXBFS1
into #ZZTIS00H5JWDA000
from HARI_COST_STORE_DEP a11

select distinct pa1.STORE_NBR STORE_NBR,
  pa1.WJXBFS1 WJXBFS1
into #ZZTIS00H5JWOT001
from #ZZTIS00H5JWDA000 pa1
where pa1.WJXBFS1 is not null

select pa2.STORE_NBR STORE_NBR,
  max(a11.STORE_DESC) STORE_DESC,
  count(*) WJXBFS1
from #ZZTIS00H5JWOT001 pa2
  join HARI_LOOKUP_STORE a11 on (pa2.STORE_NBR = a11.STORE_NBR)
group by pa2.STORE_NBR
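The COUNT(*) fallback above can be sketched with a small SQLite session; the table and values here are hypothetical, and COUNT(DISTINCT) stands in for the unsupported COUNT(column) form.

```python
import sqlite3

# Sketch of the COUNT(*) fallback: select the distinct non-NULL values
# into an intermediate table, then count rows with COUNT(*).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cost_store (store_nbr INTEGER, cost_amt REAL)")
conn.executemany(
    "INSERT INTO cost_store VALUES (?, ?)",
    [(1, 10.0), (1, 10.0), (1, 20.0), (2, 5.0), (2, None)],
)

# Direct form: COUNT(DISTINCT column) per store.
direct = conn.execute(
    "SELECT store_nbr, COUNT(DISTINCT cost_amt) FROM cost_store "
    "GROUP BY store_nbr ORDER BY store_nbr"
).fetchall()

# Fallback form: distinct non-NULL values into a temp table, then COUNT(*).
conn.execute(
    "CREATE TEMP TABLE t1 AS SELECT DISTINCT store_nbr, cost_amt "
    "FROM cost_store WHERE cost_amt IS NOT NULL"
)
fallback = conn.execute(
    "SELECT store_nbr, COUNT(*) FROM t1 GROUP BY store_nbr ORDER BY store_nbr"
).fetchall()

print(direct)    # [(1, 2), (2, 1)]
print(fallback)  # same result via the intermediate table
```

Both forms ignore NULLs, which is why the fallback filters with IS NOT NULL before counting.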
If you select the option Use the metric name as the default metric column alias, you should also set the maximum metric alias size. See Max Metric Alias Size below for information on setting this option.
Examples
Do not use the metric name as the default metric column alias
insert into ZZTSU006VT7PO000
select a11.[MONTH_ID] AS MONTH_ID,
  a11.[ITEM_ID] AS ITEM_ID,
  a11.[EOH_QTY] AS WJXBFS1
from [INVENTORY_Q4_2003] a11,
  [LU_MONTH] a12,
  [LU_ITEM] a13
where a11.[MONTH_ID] = a12.[MONTH_ID]
  and a11.[ITEM_ID] = a13.[ITEM_ID]
  and (a13.[SUBCAT_ID] in (25) and a12.[QUARTER_ID] in (20034))
For example, suppose a report contains two metrics with the following names:

Sales Metric in Fairfax County for 2002
Sales Metric in Fairfax County for 2003
The system limits the names to 30 characters based on the VLDB option you set in this example, which means that the metric aliases for both columns are truncated to the identical 30-character string Sales Metric in Fairfax County.
The SQL engine adds a 1 to one of the names because the truncated versions of both metric names are identical. That name is then 31 characters long and so the database rejects it. Therefore, in this example you should use this feature to set the maximum metric alias size to fewer than 30 (perhaps 25), to allow room for the SQL engine to add one or two characters during processing in case the first 25 characters of any of your metric names are the same.
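A minimal sketch of the alias-length arithmetic above, assuming a 30-character database identifier limit; the suffixing logic is a simplified stand-in for what the SQL Engine does.

```python
# Hypothetical sketch: truncating both metric names to 30 characters makes
# them collide, and appending a digit to disambiguate pushes one alias to
# 31 characters, past the database limit.
DB_LIMIT = 30

names = [
    "Sales Metric in Fairfax County for 2002",
    "Sales Metric in Fairfax County for 2003",
]

def make_aliases(names, max_alias_size):
    aliases, seen = [], {}
    for name in names:
        alias = name[:max_alias_size]
        if alias in seen:
            seen[alias] += 1
            alias = alias + str(seen[alias])  # engine appends a suffix
        else:
            seen[alias] = 0
        aliases.append(alias)
    return aliases

# At 30, both names truncate to the same string; the suffixed alias is 31
# characters, which exceeds the limit.
at_30 = make_aliases(names, 30)
print([len(a) for a in at_30])  # [30, 31]

# At 25, the suffixed alias still fits within the 30-character limit.
at_25 = make_aliases(names, 25)
print([len(a) for a in at_25])  # [25, 26]
```

This is why setting the maximum metric alias size a few characters below the database limit leaves the engine room to disambiguate colliding names.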
At the DBMS and database instance levels, it is set in the VLDB Properties Editor. This setting affects all the metrics in the project, unless it is overridden at a lower level. At the metric level, it can be set either in the VLDB Properties Editor or from the Metric Editor's Tools menu, by choosing Metric Join Type. The setting is applied in all the reports that include this metric.
At the report level, it can be set from the Report Editor's Data menu, by pointing to Report Data Options and choosing Metric Join Type. This setting overrides the setting at the metric level and is applied only to the currently selected report.
There is a related but separate property called Formula Join Type that can also be set at the metric level. This property determines how to combine the result sets within a single metric. This situation normally arises when a metric formula contains multiple facts that cause the Analytical Engine to use multiple fact tables; as a result, it sometimes needs to calculate different components of one metric in different intermediate tables and then combine them. This property can only be set in the Metric Editor from the Tools menu, by pointing to Advanced Settings and then choosing Formula Join Type.

Both Metric Join Type and Formula Join Type are used by the Analytical Engine to join multiple intermediate tables in the final pass. The actual logic is also affected by another VLDB property, Full Outer Join Support. When this property is set to YES, the corresponding database supports full outer join (SQL-92 syntax), and the joining of multiple intermediate tables uses outer join syntax directly (left outer join, right outer join, or full outer join, depending on the setting on each metric/table). However, if Full Outer Join Support is NO, left outer joins are used to simulate a full outer join: the IDs of the intermediate tables that need an outer join are combined with a union, and the union table is then left outer joined to all the intermediate tables. This approach generates more passes; it was also the approach used by MicroStrategy 6.x and earlier.

Also note that when the metric level is higher than the template level, the Metric Join Type property is normally ignored, unless you enable another property, Downward Outer Join Option. For detailed information, see Downward Outer Join Option, page 667.
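The left-outer-join simulation of a full outer join described above can be sketched in SQLite, which is convenient here because the simulation needs only UNION and LEFT JOIN; table names and data are hypothetical.

```python
import sqlite3

# Sketch: union the IDs of the intermediate tables, then left outer join
# the union table to each intermediate table. Every ID survives, matching
# what a full outer join of m1 and m2 would return.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m1 (store INTEGER, sales REAL)")
conn.execute("CREATE TABLE m2 (store INTEGER, cost REAL)")
conn.executemany("INSERT INTO m1 VALUES (?, ?)", [(1, 100.0), (2, 200.0)])
conn.executemany("INSERT INTO m2 VALUES (?, ?)", [(2, 50.0), (3, 75.0)])

# Extra pass 1: union of the IDs that appear in either intermediate table.
conn.execute(
    "CREATE TEMP TABLE ids AS "
    "SELECT store FROM m1 UNION SELECT store FROM m2"
)

# Extra pass 2: left outer join the union table to each intermediate table.
rows = conn.execute(
    "SELECT i.store, m1.sales, m2.cost FROM ids i "
    "LEFT JOIN m1 ON i.store = m1.store "
    "LEFT JOIN m2 ON i.store = m2.store ORDER BY i.store"
).fetchall()
print(rows)  # [(1, 100.0, None), (2, 200.0, 50.0), (3, None, 75.0)]
```

The extra UNION pass is exactly why this fallback generates more SQL passes than native full outer join support.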
Examples
Optimized for less fact table access
The following example first creates a fairly large temporary table, but then never touches the fact table again.
select a11.REGION_NBR REGION_NBR,
  a11.REGION_NBR REGION_NBR0,
  a12.MONTH_ID MONTH_ID,
  a11.DIVISION_NBR DIVISION_NBR,
  a11.CUR_TRN_DT CUR_TRN_DT,
  a11.TOT_SLS_DLR WJXBFS1
into ZZNB00
from REGION_DIVISION a11
  join LOOKUP_DAY a12 on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
select pa1.REGION_NBR REGION_NBR,
  pa1.MONTH_ID MONTH_ID,
  min(pa1.CUR_TRN_DT) WJXBFS1
into ZZMB01
from ZZNB00 pa1
group by pa1.REGION_NBR, pa1.MONTH_ID

select pa1.REGION_NBR REGION_NBR,
  pa1.MONTH_ID MONTH_ID,
  count(pa1.WJXBFS1) WJXBFS1
into ZZNC02
from ZZNB00 pa1
  join ZZMB01 pa2 on (pa1.CUR_TRN_DT = pa2.WJXBFS1 and pa1.MONTH_ID = pa2.MONTH_ID and pa1.REGION_NBR = pa2.REGION_NBR)
group by pa1.REGION_NBR, pa1.MONTH_ID

select distinct pa3.REGION_NBR REGION_NBR,
  a13.REGION_DESC REGION_DESC,
  a12.CUR_TRN_DT CUR_TRN_DT,
  pa3.WJXBFS1 COUNTOFSALES
from ZZNC02 pa3
  join LOOKUP_DAY a12 on (pa3.MONTH_ID = a12.MONTH_ID)
  join LOOKUP_REGION a13 on (pa3.REGION_NBR = a13.REGION_NBR)
select a11.REGION_NBR REGION_NBR,
  a12.MONTH_ID MONTH_ID,
  min(a11.CUR_TRN_DT) WJXBFS1
into ZZOP00
from REGION_DIVISION a11
  join LOOKUP_DAY a12 on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
group by a11.REGION_NBR, a12.MONTH_ID

select a11.REGION_NBR REGION_NBR,
  a12.MONTH_ID MONTH_ID,
  count(a11.TOT_SLS_DLR) COUNTOFSALES
into ZZMD01
from REGION_DIVISION a11
  join LOOKUP_DAY a12 on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
  join ZZOP00 pa1 on (a11.CUR_TRN_DT = pa1.WJXBFS1 and a11.REGION_NBR = pa1.REGION_NBR and a12.MONTH_ID = pa1.MONTH_ID)
group by a11.REGION_NBR, a12.MONTH_ID

select distinct pa2.REGION_NBR REGION_NBR,
  a13.REGION_DESC REGION_DESC,
  a12.CUR_TRN_DT CUR_TRN_DT,
  pa2.COUNTOFSALES COUNTOFSALES
from ZZMD01 pa2
  join LOOKUP_DAY a12 on (pa2.MONTH_ID = a12.MONTH_ID)
  join LOOKUP_REGION a13 on (pa2.REGION_NBR = a13.REGION_NBR)
NULL Check
NULL Check indicates how to handle arithmetic operations with NULL values. If NULL Check is enabled, the NULL2ZERO function is added, which changes NULL to 0 in any arithmetic calculation (+,-,*,/).
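The effect of the NULL check can be illustrated in SQLite; NULL2ZERO is a MicroStrategy engine function, so standard-SQL COALESCE(x, 0) stands in for it here, and the table is hypothetical.

```python
import sqlite3

# Sketch of the NULL-check behavior: arithmetic with NULL yields NULL
# unless the NULL is first converted to 0.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store INTEGER, revenue REAL, cost REAL)")
conn.execute("INSERT INTO sales VALUES (1, 100.0, NULL)")

# Without the NULL check, NULL propagates through the subtraction.
unchecked = conn.execute("SELECT revenue - cost FROM sales").fetchone()[0]
print(unchecked)  # None

# With the NULL check, the NULL cost is treated as 0.
checked = conn.execute(
    "SELECT revenue - COALESCE(cost, 0) FROM sales"
).fetchone()[0]
print(checked)  # 100.0
```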
and Month are unrelated. This Month, Week, and Day hierarchy setup is the most common scenario where this property makes a difference.
Example
You have a report with Week, Sales, and Last Year Sales on the template, filtered by Month. The default behavior is to calculate the Last Year Sales with the following SQL. Notice that the date transformation is done for Month and Week.
insert into ZZT6T02D01
select a14.DAT_YYYYWW DAT_YYYYWW,
  sum(a11.SALES) SALESLY
from FT1 a11
  join TRANS_DAY a12 on (a11.DAT_YYYYMMDD = a12.DAT_YYYYMMDD)
  join TRANS_DAY_MON a13 on (a12.DAT_YYYYMM = a13.DAT_LYM)
  join TRANS_DAY_WEEK a14 on (a12.DAT_YYYYWW = a14.DAT_LYW)
where a13.DAT_YYYYMM in (200311)
group by a14.DAT_YYYYWW
The new behavior applies transformation only to the highest common child when it is applicable to multiple attributes. The SQL is shown in the following syntax. Notice that the date transformation is done only at the Day level, because Day is the highest common child of Week and Month. So the days are transformed, and then you filter for the correct Month, and then Group by Week.
insert into ZZT6T02D01
select a12.DAT_YYYYWW DAT_YYYYWW,
  sum(a11.SALES) SALESLY
from FT1 a11
  join TRANS_DAY a12 on (a11.DAT_YYYYMMDD = a12.DAT_YYYYMMLYT)
where a12.DAT_YYYYMM in (200311)
group by a12.DAT_YYYYWW
Zero Check
Zero Check indicates how to handle division by zero. If zero checking is enabled, the ZERO2NULL function is added, which changes 0 to NULL in the denominator of any division calculation.
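The zero check can likewise be sketched with standard SQL, using NULLIF(denominator, 0) as a stand-in for ZERO2NULL; the table and values are hypothetical.

```python
import sqlite3

# Sketch of the zero-check behavior: the denominator 0 is converted to
# NULL, so the division yields NULL instead of failing. (SQLite itself
# returns NULL for division by zero; many databases raise an error,
# which is what the zero check guards against.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (sales REAL, units REAL)")
conn.execute("INSERT INTO metrics VALUES (500.0, 0.0)")

result = conn.execute(
    "SELECT sales / NULLIF(units, 0) FROM metrics"
).fetchone()[0]
print(result)  # None
```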
For each of the numbered statement properties below (Statement 1 through 5), the first four statements each contain a single SQL statement; the fifth statement can contain multiple SQL statements concatenated by semicolons (;). All of these properties are user-defined, and each defaults to NULL.

Cleanup Post Statement: Appends a string after the final drop statement.
Insert Mid Statement 1-5: SQL statements issued between multiple insert statements.
Insert Post Statement 1-5: SQL statements issued after the create statement and after the first insert, for explicit temp table creation only.
Insert Pre Statement 1-5: SQL statements issued after the create statement and before the first insert, for explicit temp table creation only.
Report Post Statement 1-5: SQL statements issued after the Report Block.
Report Pre Statement 1-5: SQL statements issued before the Report Block.
Table Post Statement 1-5: SQL statements issued after creating a new table and inserting into it.
Table Pre Statement 1-5: SQL statements issued before creating a new table.
You can insert the following tokens into these strings; the Analytical Engine replaces each token with dynamic information:
!!! inserts column names, separated by commas (can be used in Table Pre/Post and Insert Pre/Mid statements).
!! inserts an exclamation point (!) (can be used in Table Pre/Post and Insert Pre/Mid statements). Note that !!= inserts a not-equal sign in the SQL statement.
??? inserts the table name (can be used in Data Mart Insert/Pre/Post statements, Insert Pre/Post statements, and Table Post statements).
;; inserts a semicolon (;) in Statement5 (can be used in all Pre/Post statements). Note that a single ; (semicolon) acts as a separator.
!a inserts column names for attributes only (can be used in Table Pre/Post and Insert Pre/Mid statements).
!d inserts the date (can be used in all Pre/Post statements).
!o inserts the report name (can be used in all Pre/Post statements).
!u inserts the user name (can be used in all Pre/Post statements).
!j inserts the Intelligence Server Job ID associated with the report execution (can be used in all Pre/Post statements).
!r inserts the report GUID, the unique identifier for the report object that is also available in the Enterprise Manager application (can be used in all Pre/Post statements).
!t inserts a timestamp (can be used in all Pre/Post statements).
!p inserts the project name with spaces omitted (can be used in all Pre/Post statements).
!z inserts the project GUID, the unique identifier for the project (can be used in all Pre/Post statements).
!s inserts the user session GUID, the unique identifier for the user's session that is also available in the Enterprise Manager application (can be used in all Pre/Post statements).
The # character is a special token that is used in various patterns and is treated differently from other characters. A single # is absorbed, and two # characters are reduced to a single #. For example, to show three # characters in a statement, enter six # characters in the code. You can produce any desired string by entering the right number of # characters. In this respect, using the # character is the same as using the ; character.
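A hypothetical sketch of how a few of these placeholders could be expanded; the actual substitution, including the full token set and the # rules, is performed by the Analytical Engine.

```python
import datetime

# Only a handful of the tokens listed above are handled here, purely to
# illustrate the substitution idea. The literal escapes ("!!" -> "!",
# ";;" -> ";") are protected first so they are not mistaken for tokens.
def expand_tokens(statement, table_name, user_name, report_name):
    out = statement.replace("!!", "\x00").replace(";;", "\x01")
    out = out.replace("???", table_name)   # table name
    out = out.replace("!u", user_name)     # user name
    out = out.replace("!o", report_name)   # report name
    out = out.replace("!d", datetime.date.today().isoformat())  # date
    return out.replace("\x00", "!").replace("\x01", ";")

stmt = "/* ??? loaded for !u running !o */"
expanded = expand_tokens(stmt, "ZZT0001", "jsmith", "Employee_Salary")
print(expanded)  # /* ZZT0001 loaded for jsmith running Employee_Salary */
```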
The diagram below shows the location of some of the most important VLDB/DSS settings in a Structured Query Language (SQL) query structure. If the properties are set, their values replace the corresponding numbered tags (<1> through <22>) in the query:

Query structure
<1> <2>
CREATE <3> TABLE <4> <5><table name> <6>
  (<fields' definition>)
<7> <8> <9>(COMMIT) <10>
INSERT INTO <5><table name> <11>
SELECT <12> <fields list>
FROM <tables list>
WHERE <joins and filter>
<13>(COMMIT) <14> <15>
<16> CREATE <17> INDEX <index name> ON <fields list> <18>
SELECT <12> <fields list>
FROM <tables list>
WHERE <joins and filter>
<19> <20>
DROP TABLE TABLENAME <21> <22>
The Commit after Final Drop property (<21>) is sent to the warehouse even if the SQL View for the report does not show it.
Example
In the following example the setting values are:
Cleanup Post Statement1=/* Cleanup Post Statement1 */
Create table TABLENAME (
  ATTRIBUTE_COL1 VARCHAR(20),
  FORM_COL2 CHAR(20),
  FACT_COL3 FLOAT)
primary index (ATTRIBUTE_COL1, FORM_COL2)

insert into TABLENAME
select A1.COL1, A2.COL2, A3.COL3
from TABLE1 A1, TABLE2 A2, TABLE3 A3
where A1.COL1 = A2.COL1 and A2.COL4 = A3.COL5

insert into TABLENAME
select A1.COL1, A2.COL2, A3.COL3
from TABLE4 A1, TABLE5 A2, TABLE6 A3
where A1.COL1 = A2.COL1 and A2.COL4 = A3.COL5

create index IDX_TEMP1 (STORE_ID, STORE_DESC)

select A1.STORE_NBR, max(A1.STORE_DESC)
from LOOKUP_STORE A1
where A1.STORE_NBR = 1
group by A1.STORE_NBR

drop table TABLENAME
/* Cleanup Post Statement1 */
Examples
In the following example, the setting values are:
Insert MidStatement1=/* ??? Insert MidStatement1 */
create table ZZTIS00H5YAPO000 (
  ITEM_NBR DECIMAL(10, 0),
  CLASS_NBR DECIMAL(10, 0),
  STORE_NBR DECIMAL(10, 0),
  XKYCGT INTEGER,
  TOTALSALES FLOAT)

insert into ZZTIS00H5YAPO000
select a11.ITEM_NBR ITEM_NBR,
  a11.CLASS_NBR CLASS_NBR,
  a11.STORE_NBR STORE_NBR,
  0 XKYCGT,
  sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR, a11.CLASS_NBR, a11.STORE_NBR

/* ZZTIS00H5YAPO000 Insert MidStatement1 */

insert into ZZTIS00H5YAPO000
select a11.ITEM_NBR ITEM_NBR,
  a11.CLASS_NBR CLASS_NBR,
  a11.STORE_NBR STORE_NBR,
  1 XKYCGT,
  sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR, a11.CLASS_NBR, a11.STORE_NBR

select pa1.ITEM_NBR ITEM_NBR,
  pa1.CLASS_NBR CLASS_NBR,
  max(a11.ITEM_DESC) ITEM_DESC,
  max(a11.CLASS_DESC) CLASS_DESC,
  pa1.STORE_NBR STORE_NBR,
  max(a12.STORE_DESC) STORE_DESC,
  sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H5YAPO000 pa1
  join HARI_LOOKUP_ITEM a11 on (pa1.CLASS_NBR = a11.CLASS_NBR and pa1.ITEM_NBR = a11.ITEM_NBR)
  join HARI_LOOKUP_STORE a12 on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR, pa1.CLASS_NBR, pa1.STORE_NBR
1 XKYCGT,
  sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR, a11.CLASS_NBR, a11.STORE_NBR

select pa1.ITEM_NBR ITEM_NBR,
  pa1.CLASS_NBR CLASS_NBR,
  max(a11.ITEM_DESC) ITEM_DESC,
  max(a11.CLASS_DESC) CLASS_DESC,
  pa1.STORE_NBR STORE_NBR,
  max(a12.STORE_DESC) STORE_DESC,
  sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H5YEPO000 pa1
  join HARI_LOOKUP_ITEM a11 on (pa1.CLASS_NBR = a11.CLASS_NBR and pa1.ITEM_NBR = a11.ITEM_NBR)
  join HARI_LOOKUP_STORE a12 on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR, pa1.CLASS_NBR, pa1.STORE_NBR
729
Multiple INSERT INTO SELECT statements to the same table occur in reports involving partition tables and outer joins. The UNION Multiple Inserts VLDB property does not affect this property, but the Table Creation Type property does. The Table Creation Type property is applicable when the Intermediate Table Type VLDB property is set to Permanent or Temporary table.
Example
In the following example, the setting values are:
Insert PostStatement1=/* ??? Insert PostStatement1 */
a11.CLASS_NBR, a11.STORE_NBR

insert into ZZTIS00H601PO000
select a11.ITEM_NBR ITEM_NBR,
  a11.CLASS_NBR CLASS_NBR,
  a11.STORE_NBR STORE_NBR,
  1 XKYCGT,
  sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR, a11.CLASS_NBR, a11.STORE_NBR

/* ZZTIS00H601PO000 Insert PostStatement1 */

select pa1.ITEM_NBR ITEM_NBR,
  pa1.CLASS_NBR CLASS_NBR,
  max(a11.ITEM_DESC) ITEM_DESC,
  max(a11.CLASS_DESC) CLASS_DESC,
  pa1.STORE_NBR STORE_NBR,
  max(a12.STORE_DESC) STORE_DESC,
  sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H601PO000 pa1
  join HARI_LOOKUP_ITEM a11 on (pa1.CLASS_NBR = a11.CLASS_NBR and pa1.ITEM_NBR = a11.ITEM_NBR)
  join HARI_LOOKUP_STORE a12 on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR, pa1.CLASS_NBR, pa1.STORE_NBR
select a11.ITEM_NBR ITEM_NBR,
  a11.CLASS_NBR CLASS_NBR,
  a11.STORE_NBR STORE_NBR,
  0 XKYCGT,
  sum(a11.TOT_SLS_DLR) TOTALSALES
into ZZTIS00H60BPO000
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR, a11.CLASS_NBR, a11.STORE_NBR

insert into ZZTIS00H60BPO000
select a11.ITEM_NBR ITEM_NBR,
  a11.CLASS_NBR CLASS_NBR,
  a11.STORE_NBR STORE_NBR,
  1 XKYCGT,
  sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR, a11.CLASS_NBR, a11.STORE_NBR

select pa1.ITEM_NBR ITEM_NBR,
  pa1.CLASS_NBR CLASS_NBR,
  max(a11.ITEM_DESC) ITEM_DESC,
  max(a11.CLASS_DESC) CLASS_DESC,
  pa1.STORE_NBR STORE_NBR,
  max(a12.STORE_DESC) STORE_DESC,
  sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H60BPO000 pa1
  join HARI_LOOKUP_ITEM a11 on (pa1.CLASS_NBR = a11.CLASS_NBR and pa1.ITEM_NBR = a11.ITEM_NBR)
  join HARI_LOOKUP_STORE a12 on (pa1.STORE_NBR = a12.STORE_NBR)
Examples
In the following examples, the setting values are:
Insert PreStatement1=/* ??? Insert PreStatement1 */
create table ZZTIS00H601PO000 (
  ITEM_NBR DECIMAL(10, 0),
  CLASS_NBR DECIMAL(10, 0),
  STORE_NBR DECIMAL(10, 0),
  XKYCGT INTEGER,
  TOTALSALES FLOAT)

/* ZZTIS00H601PO000 Insert PreStatement1 */

insert into ZZTIS00H601PO000
select a11.ITEM_NBR ITEM_NBR,
  a11.CLASS_NBR CLASS_NBR,
  a11.STORE_NBR STORE_NBR,
  0 XKYCGT,
  sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR, a11.CLASS_NBR, a11.STORE_NBR

insert into ZZTIS00H601PO000
select a11.ITEM_NBR ITEM_NBR,
  a11.CLASS_NBR CLASS_NBR,
  a11.STORE_NBR STORE_NBR,
  1 XKYCGT,
  sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR, a11.CLASS_NBR, a11.STORE_NBR

select pa1.ITEM_NBR ITEM_NBR,
  pa1.CLASS_NBR CLASS_NBR,
  max(a11.ITEM_DESC) ITEM_DESC,
  max(a11.CLASS_DESC) CLASS_DESC,
  pa1.STORE_NBR STORE_NBR,
  max(a12.STORE_DESC) STORE_DESC,
  sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H601PO000 pa1
  join HARI_LOOKUP_ITEM a11 on (pa1.CLASS_NBR = a11.CLASS_NBR and pa1.ITEM_NBR = a11.ITEM_NBR)
  join HARI_LOOKUP_STORE a12 on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR, pa1.CLASS_NBR, pa1.STORE_NBR
a11.STORE_NBR

select pa1.ITEM_NBR ITEM_NBR,
  pa1.CLASS_NBR CLASS_NBR,
  max(a11.ITEM_DESC) ITEM_DESC,
  max(a11.CLASS_DESC) CLASS_DESC,
  pa1.STORE_NBR STORE_NBR,
  max(a12.STORE_DESC) STORE_DESC,
  sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H60BPO000 pa1
  join HARI_LOOKUP_ITEM a11 on (pa1.CLASS_NBR = a11.CLASS_NBR and pa1.ITEM_NBR = a11.ITEM_NBR)
  join HARI_LOOKUP_STORE a12 on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR, pa1.CLASS_NBR, pa1.STORE_NBR
Example
In the following example, the setting values are:
Report Post Statement1=/* Report Post Statement1 */

Create table TABLENAME (
  ATTRIBUTE_COL1 VARCHAR(20),
  FORM_COL2 CHAR(20),
  FACT_COL3 FLOAT)
primary index (ATTRIBUTE_COL1, FORM_COL2)

insert into TABLENAME
select A1.COL1, A2.COL2, A3.COL3
from TABLE1 A1, TABLE2 A2, TABLE3 A3
where A1.COL1 = A2.COL1 and A2.COL4 = A3.COL5

insert into TABLENAME
select A1.COL1, A2.COL2, A3.COL3
from TABLE4 A1, TABLE5 A2, TABLE6 A3
where A1.COL1 = A2.COL1 and A2.COL4 = A3.COL5

create index IDX_TEMP1 (STORE_ID, STORE_DESC)

select A1.STORE_NBR, max(A1.STORE_DESC)
from LOOKUP_STORE A1
where A1.STORE_NBR = 1
group by A1.STORE_NBR

/* Report Post Statement1 */
Example
In the following example, the setting values are:
Report Pre Statement1=/* Report Pre Statement1 */

/* Report Pre Statement1 */

Create table TABLENAME (
  ATTRIBUTE_COL1 VARCHAR(20),
  FORM_COL2 CHAR(20),
  FACT_COL3 FLOAT)
primary index (ATTRIBUTE_COL1, FORM_COL2)

insert into TABLENAME
select A1.COL1, A2.COL2, A3.COL3
from TABLE1 A1, TABLE2 A2, TABLE3 A3
where A1.COL1 = A2.COL1 and A2.COL4 = A3.COL5

insert into TABLENAME
select A1.COL1, A2.COL2, A3.COL3
from TABLE4 A1, TABLE5 A2, TABLE6 A3
where A1.COL1 = A2.COL1 and A2.COL4 = A3.COL5

create index IDX_TEMP1 (STORE_ID, STORE_DESC)

select A1.STORE_NBR, max(A1.STORE_DESC)
from LOOKUP_STORE A1
where A1.STORE_NBR = 1
group by A1.STORE_NBR

drop table TABLENAME
This custom SQL is applied when the Intermediate Table Type VLDB property is set to Permanent table, Temporary table, or Views. The custom SQL is applied to every intermediate table or view.
Example
In the following example, the setting values are:
Table PostStatement1=/* ??? Table PostStatement1 */

select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
  a11.STORE_NBR STORE_NBR
into #ZZTIS00H63PMQ000
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR, a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000

/* #ZZTIS00H63PMQ000 Table PostStatement1 */

select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
  max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
  a11.STORE_NBR STORE_NBR,
  max(a13.STORE_DESC) STORE_DESC,
  sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11
  join #ZZTIS00H63PMQ000 pa1 on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR)
  join HARI_LOOKUP_DEPARTMENT a12 on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
  join HARI_LOOKUP_STORE a13
Example
In the following example, the setting values are:
Table PreStatement1=/* ??? Table PreStatement1 */
/* Table PreStatement1 */

create table ZZTIS00H63RMQ000 (
  DEPARTMENT_NBR DECIMAL(10, 0),
  STORE_NBR DECIMAL(10, 0))

insert into ZZTIS00H63RMQ000
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
  a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR, a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000

select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
  max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
  a11.STORE_NBR STORE_NBR,
  max(a13.STORE_DESC) STORE_DESC,
  sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11
  join ZZTIS00H63RMQ000 pa1 on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR)
  join HARI_LOOKUP_DEPARTMENT a12 on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
  join HARI_LOOKUP_STORE a13 on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR, a11.STORE_NBR
Optimizing queries
The table below summarizes the Query Optimizations VLDB properties. Additional details about each property, including examples where necessary, are provided in the sections following the table.
Additional Final Pass Option
Description: Determines whether the Engine calculates an aggregation function and a join in a single pass or in separate passes in the SQL.
Possible Values: Final pass CAN do aggregation and join lookup tables in one pass; One additional final pass only to join lookup tables
Default Value: Final pass CAN do aggregation and join lookup tables in one pass

Apply Filter Options
Description: Indicates during which pass the report filter is applied.
Possible Values: Apply filter only to passes touching warehouse tables; Apply filter to passes touching warehouse tables and last join pass, if it does a downward join from the temp table level to the template level; Apply filter to passes touching warehouse tables and last join pass
Default Value: Apply filter only to passes touching warehouse tables

Attribute Element Number Count Method
Description: Controls how the total number of rows is calculated for incremental fetch.
Possible Values: Use Count(Attribute@ID) to calculate total element number (uses count distinct if necessary); Use ODBC cursor to calculate total element number
Default Value: Use Count(Attribute@ID) to calculate total element number (uses count distinct if necessary)

Custom Group Banding Count Method
Description: Helps optimize custom group banding when using the Count Banding method. You can choose to use the standard method, which uses the Analytical Engine or database-specific syntax, or you can choose to use case statements or temp tables.
Possible Values: Treat banding as normal calculation; Use standard case statement syntax; Insert band range to database and join with metric value
Default Value: Treat banding as normal calculation

Custom Group Banding Points Method
Description: Helps optimize custom group banding when using the Points Banding method, with the same choices as above.
Possible Values: Treat banding as normal calculation; Use standard case statement syntax; Insert band range to database and join with metric value
Default Value: Treat banding as normal calculation

Custom Group Banding Size Method
Description: Helps optimize custom group banding when using the Size Banding method, with the same choices as above.
Possible Values: Treat banding as normal calculation; Use standard case statement syntax; Insert band range to database and join with metric value
Default Value: Treat banding as normal calculation

Dimensionality Model
Description: Determines level (dimension) replacement for non parent-child related attributes in the same hierarchy.

Engine Attribute Role Options
Description: Enables or disables the Analytical Engine's ability to treat attributes defined on the same column with the same expression as attribute roles.
Possible Values: Enable Engine Attribute Role feature; Disable Engine Attribute Role feature
Default Value: Disable Engine Attribute Role feature

MD Partition Prequery Option
Description: Allows you to choose how to handle prequerying the metadata partition.
Possible Values: Use count(*) in prequery; Use constant in prequery

Rank Method if DB Ranking Not Used
Description: Determines how calculation ranking is performed.
Possible Values: Use ODBC ranking (MSTR 6 method); Analytical engine performs rank

Remove Group By Option
Description: Determines whether Group By and aggregations are used for attributes with the same primary key.
Possible Values: Remove aggregation and Group By when Select level is identical to From level; Remove aggregation and Group By when Select level contains all attribute(s) in From level
Default Value: Remove aggregation and Group By when Select level is identical to From level

Set Operator Optimization
Description: Allows you to use set operators in subqueries to combine multiple filter qualifications. Set operators are only supported by certain database platforms and with certain subquery types.
Possible Values: Disable Set Operator Optimization; Enable Set Operator Optimization (if supported by database and [Sub Query Type])
Default Value: Disable Set Operator Optimization

SQL Global Optimization
Description: Determines the level by which SQL queries in reports are optimized.
Possible Values: Level 0: No optimization; Level 1: Remove Unused and Duplicate Passes; Level 2: Level 1 + Merge Passes with Different SELECT
Default Value: Level 0: No optimization

Sub Query Type
Description: Determines the type of subquery used in engine-generated SQL.
Possible Values: WHERE EXISTS (SELECT * ...); WHERE EXISTS (SELECT col1, col2...); WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT * ...) for multiple columns IN; WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...); Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery; WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2 ...) for multiple columns IN; Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery
Default Value: Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery

Unrelated Filter Options
Description: Determines whether the Analytical Engine should keep or remove the unrelated filter.
Possible Values: Remove unrelated filter; Keep unrelated filter

WHERE Clause Driving Table
Description: Determines the table used for qualifications in the WHERE clause.
Example
The following SQL example was created using SQL Server metadata and warehouse. Consider the following structure of lookup and fact tables:
LU_Emp_Mgr has four columns: Emp_ID, Emp_Desc, Mgr_ID, and Mgr_Desc. In this structure, Emp_ID is the primary key of the LU_Emp_Mgr table.

LU_Dept has two columns: Dept_ID and Dept_Desc. In this structure, Dept_ID is the primary key of the LU_Dept table.

The fact table Emp_Dept_Salary has three columns: Emp_ID, Dept_ID, and the fact Salary.
From the above warehouse structure, define the following schema objects:
attribute Employee with two forms: Employee@ID (defined on column Emp_ID) and Employee@Desc (defined on column Emp_Desc)

attribute Manager with two forms: Manager@ID (defined on column Mgr_ID) and Manager@Desc (defined on column Mgr_Desc)

attribute Department with two forms: Department@ID (defined on column Dept_ID) and Department@Desc (defined on column Dept_Desc)

fact Fact_Salary, defined on the Salary column

The Manager attribute is defined as the parent of the Employee attribute via the LU_Emp_Mgr table. This is a common practice in a star schema.
In a report called Employee_Salary, put the Salary metric on a template with the Manager attribute. In this example, the Employee_Salary report generates the following SQL:
Pass0
select a12.Mgr_Id Mgr_Id,
  a11.Dept_Id Dept_Id,
  sum(a11.Salary) WJXBFS1
into #ZZTUW0200LXMD000
from dbo.Emp_Dept_Salary a11
  join dbo.Emp_Mgr a12 on (a11.Emp_Id = a12.Emp_Id)
group by a12.Mgr_Id, a11.Dept_Id

Pass1
select pa1.Mgr_Id Mgr_Id,
  max(a11.Mgr_Desc) Mgr_Desc,
  avg(pa1.WJXBFS1) WJXBFS1
from #ZZTUW0200LXMD000 pa1
  join dbo.Emp_Mgr a11 on (pa1.Mgr_Id = a11.Mgr_Id)
group by pa1.Mgr_Id

Pass2
drop table #ZZTUW0200LXMD000
The problem in Pass1 above is that the join condition and the aggregation function are in a single pass. The SQL joins the #ZZTUW0200LXMD000 table to the Emp_Mgr table on column Mgr_ID, but Mgr_ID is not the primary key of the LU_Emp_Mgr table; therefore, many rows in the LU_Emp_Mgr table share the same Mgr_ID. This results in a repeated-data problem. If the aggregation and the join do not operate on the same table in the same pass, this problem does not occur. To resolve this problem, select the option One additional final pass only to join lookup tables in the VLDB Properties Editor. With this option selected, the report, when executed, generates the following SQL:
Pass0
select a12.Mgr_Id Mgr_Id,
    a11.Dept_Id Dept_Id,
    sum(a11.Salary) WJXBFS1
into #ZZTUW01006IMD000
from dbo.Emp_Dept_Salary a11
    join dbo.Emp_Mgr a12
    on (a11.Emp_Id = a12.Emp_Id)
group by a12.Mgr_Id, a11.Dept_Id

Pass1
select pa1.Mgr_Id Mgr_Id,
    avg(pa1.WJXBFS1) WJXBFS1
into #ZZTUW01006IEA001
from #ZZTUW01006IMD000 pa1
group by pa1.Mgr_Id

Pass2
select pa2.Mgr_Id Mgr_Id,
    max(a11.Mgr_Desc) Mgr_Desc,
    pa2.WJXBFS1 WJXBFS1
from #ZZTUW01006IEA001 pa2
    join dbo.Emp_Mgr a11
    on (pa2.Mgr_Id = a11.Mgr_Id)
group by pa2.Mgr_Id, pa2.WJXBFS1

Pass3
drop table #ZZTUW01006IMD000
Pass4
drop table #ZZTUW01006IEA001
In this SQL, the Engine calculates the aggregation function (the Average function) in a separate pass (Pass1) and performs the join to the lookup table in another pass (Pass2).
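The repeated-data effect described above is easy to reproduce outside MicroStrategy. The following Python sketch (using sqlite3 and simplified, hypothetical tables; the table names and data are illustrative, not Engine-generated SQL) shows how aggregating across a join to a non-key lookup column double-counts the fact rows, while aggregating first and joining afterwards does not:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical lookup: manager 1 has two employees, so Mgr_ID = 1
# appears twice -- Mgr_ID is not the primary key of this table.
cur.execute("CREATE TABLE Emp_Mgr (Emp_ID INTEGER, Mgr_ID INTEGER)")
cur.executemany("INSERT INTO Emp_Mgr VALUES (?, ?)", [(1, 1), (2, 1)])

# Hypothetical intermediate result: salary totals per manager/department.
cur.execute("CREATE TABLE Mgr_Dept_Sal (Mgr_ID INTEGER, Dept_ID INTEGER, "
            "Salary REAL)")
cur.executemany("INSERT INTO Mgr_Dept_Sal VALUES (?, ?, ?)",
                [(1, 10, 100.0), (1, 20, 300.0)])

# Join and aggregation in a single pass: every intermediate row matches
# two lookup rows, so the fact data is counted twice.
one_pass = cur.execute(
    "SELECT sum(s.Salary) FROM Mgr_Dept_Sal s "
    "JOIN Emp_Mgr m ON s.Mgr_ID = m.Mgr_ID").fetchone()[0]

# Aggregation first, lookup join deferred to a separate final pass.
two_pass = cur.execute("SELECT sum(Salary) FROM Mgr_Dept_Sal").fetchone()[0]

print(one_pass)   # 800.0 -- inflated by the repeated Mgr_ID rows
print(two_pass)   # 400.0 -- correct total
```

This is exactly why the VLDB option above forces the lookup join into its own final pass: the aggregation then never sees the repeated lookup rows.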
Apply filter only to passes touching warehouse tables: This is the default option. The filter is applied only to SQL passes that touch warehouse tables, not to other passes. This option works in most situations.

Apply filter to passes touching warehouse tables and last join pass, if it does a downward join from the temporary table level to the template level: The filter is also applied in the final pass if that pass is a downward join. For example, you have Store, Region Sales, and Region Cost on the report, with the filter store = 1. The intermediate passes calculate the total sales and cost for Region 1 (to which Store 1 belongs). In the final pass, a downward join is done from the Region level to the Store level, using the relationship table LOOKUP_STORE. If the store = 1
filter in this pass is not applied, all stores that belong to Region 1 are included on the report. However, you usually expect to see only Store 1 when you use the filter store = 1. In this situation, choose this option to make sure the filter is applied in the final pass.
Apply filter to passes touching warehouse tables and last join pass: The filter in the final pass is always applied, even when it is not a downward join. This option should be used for special types of data modeling. For example, you have Region, Store Sales, and Store Cost on the report, with the filter Year=2002. This looks like a normal report, and the final pass joins from the Store level to the Region level. But the schema is unusual: certain stores do not always belong to the same region, perhaps due to rezoning. For example, Store 1 belongs to Region 1 in 2002 and to Region 2 in 2003. To handle this, add a Year column to LOOKUP_STORE so that you have the following data:
Region   Store   Year
1        1       2002
2        1       2003
...
Apply the filter Year=2002 to your report. This filter must be applied in the final pass to find the correct store-region relationship, even though the final pass is a normal join instead of a downward join.
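A small sqlite3 sketch (with invented data) makes the point concrete: without the Year qualification in the final join pass, Store 1 resolves to two regions; re-applying the report filter in that pass resolves the relationship correctly.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# LOOKUP_STORE with the extra Year column described above; Store 1 was
# rezoned from Region 1 (2002) to Region 2 (2003).
cur.execute("CREATE TABLE LOOKUP_STORE (Region INTEGER, Store INTEGER, "
            "Year INTEGER)")
cur.executemany("INSERT INTO LOOKUP_STORE VALUES (?, ?, ?)",
                [(1, 1, 2002), (2, 1, 2003)])

# Final join pass WITHOUT the report filter: Store 1 maps to two regions.
unfiltered = cur.execute(
    "SELECT Region FROM LOOKUP_STORE WHERE Store = 1 "
    "ORDER BY Region").fetchall()

# Final join pass WITH Year = 2002 re-applied: the store-region
# relationship is resolved unambiguously.
filtered = cur.execute(
    "SELECT Region FROM LOOKUP_STORE "
    "WHERE Store = 1 AND Year = 2002").fetchall()

print(unfiltered)  # [(1,), (2,)]
print(filtered)    # [(1,)]
```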
Use Count(Attribute@ID) to calculate total element number (uses count distinct if necessary): This is the default option. The database determines the total number of rows.

Use ODBC cursor to calculate the total element number: This setting causes Intelligence Server to determine the total number of rows by looping through the table after the initial SELECT pass.
The difference between the two approaches is whether the database or Intelligence Server determines the total number of records. MicroStrategy recommends using the Use ODBC cursor... option (having Intelligence Server determine the total number of records) if you have a heavily taxed data warehouse or if the SELECT COUNT DISTINCT query itself introduces contention in the database. Having Intelligence Server determine the total number of rows results in more traffic between Intelligence Server and the database. For Tandem databases, the default is Use ODBC Cursor to calculate the total element number.
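The two approaches can be sketched in Python with sqlite3 (the table and data are invented for illustration): one aggregate query that the database computes, versus a plain SELECT whose rows the client counts by looping through the cursor.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE LU_ITEM (ITEM_ID INTEGER)")
cur.executemany("INSERT INTO LU_ITEM VALUES (?)", [(1,), (1,), (2,), (3,)])

# Option 1: the database computes the element count (one aggregate query,
# minimal traffic, but work is done on the database side).
db_count = cur.execute(
    "SELECT count(DISTINCT ITEM_ID) FROM LU_ITEM").fetchone()[0]

# Option 2: the client issues a plain SELECT and counts distinct values
# itself by looping through the cursor -- more rows travel over the wire,
# but the aggregate work moves off a heavily taxed database.
seen = set()
for (item_id,) in cur.execute("SELECT ITEM_ID FROM LU_ITEM"):
    seen.add(item_id)

print(db_count, len(seen))  # 3 3
```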
Examples
The following SQL examples were created in MicroStrategy Tutorial. The report contains a Custom Group Customer Value Banding, which uses the Count method and the Revenue metric. The SQL for each of the three settings for this property is presented below. All three options start with the same SQL passes. In this example, the first six passes are the same. The remaining SQL passes differ depending on the Custom Group Banding Count Method setting selected.
create table ZZMD00 (CUSTOMER_ID SHORT, WJXBFS1 DOUBLE)

insert into ZZMD00
select a11.CUSTOMER_ID AS CUSTOMER_ID,
    a11.TOT_DOLLAR_SALES as WJXBFS1
from CUSTOMER_SLS a11

create table ZZMD01 (WJXBFS1 DOUBLE)

insert into ZZMD01
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from YR_CATEGORY_SLS a11

select pa1.CUSTOMER_ID AS CUSTOMER_ID,
    (pa1.WJXBFS1 / pa2.WJXBFS1) as WJXBFS1
from ZZMD00 pa1, ZZMD01 pa2

create table ZZMQ02 (CUSTOMER_ID SHORT, DA57 LONG)

[Placeholder for an analytical SQL]

insert into ZZMQ02 values (DummyInsertValue)

Treat banding as normal calculation

select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZMQ02 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID

select a12.DA57 AS DA57,
    sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZMQ02 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57

drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
Use standard case statement syntax

create table ZZOP03 (CUSTOMER_ID SHORT, DA57 LONG)

insert into ZZOP03
select pa3.CUSTOMER_ID AS CUSTOMER_ID,
    (case when (pa3.WJXBFS1 >= 1 and pa3.WJXBFS1 < 100.9) then 1
        when (pa3.WJXBFS1 >= 100.9 and pa3.WJXBFS1 < 200.8) then 2
        when (pa3.WJXBFS1 >= 200.8 and pa3.WJXBFS1 < 300.7) then 3
        when (pa3.WJXBFS1 >= 300.7 and pa3.WJXBFS1 < 400.6) then 4
        when (pa3.WJXBFS1 >= 400.6 and pa3.WJXBFS1 < 500.5) then 5
        when (pa3.WJXBFS1 >= 500.5 and pa3.WJXBFS1 < 600.4) then 6
        when (pa3.WJXBFS1 >= 600.4 and pa3.WJXBFS1 < 700.3) then 7
        when (pa3.WJXBFS1 >= 700.3 and pa3.WJXBFS1 < 800.2) then 8
        when (pa3.WJXBFS1 >= 800.2 and pa3.WJXBFS1 < 900.1) then 9
        when (pa3.WJXBFS1 >= 900.1 and pa3.WJXBFS1 <= 1000) then 10
    end) as DA57
from ZZMQ02 pa3

select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID

select a12.DA57 AS DA57,
    sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57

drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
drop table ZZOP03
group by a12.DA57

drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
drop table ZZOP03
drop table ZZOP04
Examples
The following SQL examples were created in MicroStrategy Tutorial. The report contains a Custom Group Customer Value Banding that uses the point method and the Revenue metric. The SQL for each of the three settings for this property is presented below. All three options start with the same SQL passes; in this example, the first six passes are the same. The remaining SQL passes differ depending on the setting selected for this property.
create table ZZMD00 (CUSTOMER_ID SHORT, WJXBFS1 DOUBLE)

insert into ZZMD00
select a11.CUSTOMER_ID AS CUSTOMER_ID,
    a11.TOT_DOLLAR_SALES as WJXBFS1
from CUSTOMER_SLS a11

create table ZZMD01 (WJXBFS1 DOUBLE)

insert into ZZMD01
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from YR_CATEGORY_SLS a11

select pa1.CUSTOMER_ID AS CUSTOMER_ID,
    (pa1.WJXBFS1 / pa2.WJXBFS1) as WJXBFS1
from ZZMD00 pa1, ZZMD01 pa2

create table ZZMQ02 (CUSTOMER_ID SHORT, DA57 LONG)

[Placeholder for an analytical SQL]

insert into ZZMQ02 values (DummyInsertValue)
Use standard case statement syntax

create table ZZOP03 (CUSTOMER_ID SHORT, DA57 LONG)

insert into ZZOP03
select pa3.CUSTOMER_ID AS CUSTOMER_ID,
    (case when (pa3.WJXBFS1 >= 1 and pa3.WJXBFS1 < 2) then 1
        when (pa3.WJXBFS1 >= 2 and pa3.WJXBFS1 <= 3) then 2
    end) as DA57
from ZZMQ02 pa3

select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID

select a12.DA57 AS DA57,
    sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57

drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
drop table ZZOP03
create table ZZOP04 (CUSTOMER_ID SHORT, DA57 LONG)

insert into ZZOP04
select pa3.CUSTOMER_ID AS CUSTOMER_ID,
    pa4.BandNo as DA57
from ZZMQ02 pa3, ZZOP03 pa4
where ((pa3.WJXBFS1 >= pa4.BandStart and pa3.WJXBFS1 < pa4.BandEnd)
    or (pa3.WJXBFS1 = pa4.BandEnd and pa4.BandNo = 2))

select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP04 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID

select a12.DA57 AS DA57,
    sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP04 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57

drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
drop table ZZOP03
drop table ZZOP04
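The band-table variant above can be sketched in Python with sqlite3 (the band and customer data are invented for illustration). Note how the second disjunct in the join predicate closes the last band, so a value exactly equal to the upper boundary still falls into band 2:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical band definition table (the role ZZOP03 plays above).
cur.execute("CREATE TABLE BANDS (BandNo INTEGER, BandStart REAL, "
            "BandEnd REAL)")
cur.executemany("INSERT INTO BANDS VALUES (?, ?, ?)",
                [(1, 1.0, 2.0), (2, 2.0, 3.0)])

# Hypothetical per-customer values (the role ZZMQ02 plays above).
cur.execute("CREATE TABLE VALS (CUSTOMER_ID INTEGER, WJXBFS1 REAL)")
cur.executemany("INSERT INTO VALS VALUES (?, ?)",
                [(10, 1.5), (11, 2.0), (12, 3.0)])

# Assign a band by joining on the range predicate; the OR branch makes
# the final band's upper bound inclusive.
rows = cur.execute(
    "SELECT v.CUSTOMER_ID, b.BandNo FROM VALS v, BANDS b "
    "WHERE (v.WJXBFS1 >= b.BandStart AND v.WJXBFS1 < b.BandEnd) "
    "   OR (v.WJXBFS1 = b.BandEnd AND b.BandNo = 2) "
    "ORDER BY v.CUSTOMER_ID").fetchall()

print(rows)  # [(10, 1), (11, 2), (12, 2)]
```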
The Custom Group Banding Size Method property helps optimize custom group banding when the Size Banding method is used. You can choose the standard method, which uses the Analytical Engine, or database-specific syntax that uses either case statements or temporary tables.
Examples
The following SQL examples were created in MicroStrategy Tutorial. The report contains a Custom Group Customer Value Banding that uses the size method and the Revenue metric. The SQL for each of the three settings for this property is presented below. All three options start with the same SQL passes; in this example, the first six passes are the same. The remaining SQL passes differ depending on the setting selected for this property.
create table ZZMD000 (CUSTOMER_ID SHORT, WJXBFS1 DOUBLE)

insert into ZZMD000
select a11.CUSTOMER_ID AS CUSTOMER_ID,
    a11.TOT_DOLLAR_SALES as WJXBFS1
from CUSTOMER_SLS a11

create table ZZMD001 (WJXBFS1 DOUBLE)

insert into ZZMD001
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from YR_CATEGORY_SLS a11

select pa1.CUSTOMER_ID AS CUSTOMER_ID,
    (pa1.WJXBFS1 / pa2.WJXBFS1) as WJXBFS1
from ZZMD000 pa1, ZZMD001 pa2

create table ZZMQ002 (CUSTOMER_ID SHORT, WJXBFS1 DOUBLE)

[Placeholder for an analytical SQL]

insert into ZZMQ002 values (DummyInsertValue)
when (pa3.WJXBFS1 >= .8 and pa3.WJXBFS1 <= 1) then 5
    end) as DA57
from ZZMQ002 pa3

drop table ZZMD000
drop table ZZMD001
drop table ZZMQ002
drop table ZZOP003
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57

drop table ZZMD000
drop table ZZMD001
drop table ZZMQ002
drop table ZZOP003
drop table ZZOP004
Dimensionality Model
Dimensionality Model is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties, page 626. The Dimensionality Model property is strictly for backward compatibility with MicroStrategy 6.x or earlier.
Use relational model: For all projects, Use relational model is the default value. With this setting, all dimensionality (level) resolution is based on the relationships between attributes.

Use dimensional model: The Use dimensional model setting is for cases where attribute-relationship dimensionality (level) resolution differs from dimension-based resolution. There are very few cases when the setting needs to be changed to Use dimensional model. The following situations may require it:
Metric Conditionality: You have a report with the Year attribute and the Top 3 Stores Dollar Sales metric on the template and the filters Store, Region, and Year. Therefore, the metric has a metric conditionality of Top 3 Stores. In MicroStrategy 7.x and later, metric conditions are set to remove related report filter elements by default; therefore, with the above report the filters on Store and Region are ignored, because they are related to the metric conditionality. Year is not removed because it is not related to the metric conditionality. In MicroStrategy 6.x and earlier, all filters were used; in MicroStrategy 7.x and later, if you
set this property to Use dimensional model, the filters are not ignored. Note that if you change the default of the Remove related report filter element option in advanced conditionality, the Use dimensional model setting makes no difference in the report. For more information on this advanced setting, see the Metrics chapter of the Advanced Reporting Guide.

Metric Dimensionality Resolution: In MicroStrategy 6.x and earlier, every attribute belonged to a dimension. MicroStrategy 7.x and later does not have the concept of a dimension; instead it has the concept of a metric level. For a project upgraded from 6.x to 7.x, the dimension information is kept in the metadata; attributes created in 7.x do not have this information. For example, you have a report that contains the Year attribute and the metric Dollar Sales by Geography. The metric is defined with the dimensionality of Geography, which means the metric is calculated at the level of whatever Geography attribute is on the template. If no attribute on the template is a member of the Geography dimension, MicroStrategy 6.x and earlier defaults the metric dimensionality to the highest level in the Geography dimension, for example, Country or All Geography. In MicroStrategy 7.x and later, the metric dimensionality is ignored and therefore defaults to the report level or the level that is defined for the report. The MicroStrategy 6.x and earlier behavior also occurs in 7.x and later if you set this VLDB property to Use dimensional model and the attribute was originally created under a MicroStrategy 6.x product.

Analysis level calculation: For the next situation, consider the following split hierarchy model.
[Split hierarchy diagram: Market and State each point down to Store]
Market and State are both parents of Store. A report has the attributes Market and State and a Dollar Sales metric with report level dimensionality. In MicroStrategy 7.x and later, with the Use relational model setting, the report level (metric dimensionality level) is Market and State. To choose the best fact table to use to produce this report, the Analytical Engine
considers both of these attributes. With MicroStrategy 6.x and earlier, or with the Use dimensional model setting in MicroStrategy 7.x and later, Store is used as the metric dimensionality level and for determining the best fact table to use, because Store is the highest common descendant of the two attributes.
The first approach is a procedure called table aliasing, where you define multiple logical tables in the schema that point to the same physical table, and then define different attributes and facts on these logical tables. Table aliasing gives you more control and is best when upgrading or when you have a complex schema. It is described in detail in the MicroStrategy Project Design Guide. The second approach is called Engine Attribute Role. With this approach, rather than defining multiple logical tables, you only need to define multiple attributes and facts on the same table. The MicroStrategy Engine automatically detects multiple roles of certain attributes and splits the table into multiple tables internally. There is a limit on the number of tables into which a table can be split, known as the Attribute Role limit; it is hard-coded to 128 tables. If you are a new
MicroStrategy user starting with 7i or later, it is suggested that you use the automatic detection (Engine Attribute Role) option. The algorithm to split the table is as follows:
1 If two attributes are defined on the same column of the same table, have the same expression, and are not related, it is implied that they are playing different roles and must be placed in different tables after the split.
2 If two attributes are related to each other, they must stay in the same table after the split.
3 Attributes should be spread across as many tables as possible, as long as rule 1 is not violated.
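A rough sketch of this grouping logic is shown below in Python. This is not the Engine's actual implementation; the function name and data structures are invented, and the same-column check of rule 1 is left implicit (each unrelated attribute simply stays in its own group):

```python
from collections import defaultdict

def split_roles(attributes, related_pairs):
    """Group attributes into role tables: related attributes stay
    together (rule 2); unrelated attributes fall into separate groups,
    i.e. as many role tables as possible (rules 1 and 3)."""
    parent = {a: a for a in attributes}

    def find(a):
        # Union-find with path compression.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for a, b in related_pairs:          # rule 2: merge related attributes
        parent[find(a)] = find(b)

    groups = defaultdict(list)
    for a in attributes:                # rule 3: one role table per group
        groups[find(a)].append(a)
    return sorted(groups.values())

# Ship Day and Order Day are defined on the same LU_DAY column but are
# unrelated, so LU_DAY is split into two role tables:
attrs = ["Order Day", "Ship Day"]
print(split_roles(attrs, related_pairs=[]))
print(split_roles(attrs, [("Order Day", "Ship Day")]))
```

With no relationship the attributes land in two groups (two internal role tables); declaring them related collapses the split back to one table.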
Given the diversity of data modeling in projects, the above algorithm cannot be guaranteed to split tables correctly in all situations. This VLDB property therefore lets you turn the Engine Attribute Role feature on or off. When the feature is turned off, the table-splitting procedure is bypassed.
Note that LU_DAY appears twice in the SQL, playing different roles. Also, note that in this example, the Analytical Engine does not split table FT1 because Ship Day and Order Day are defined on different columns.
Because of backward compatibility, and because the Analytical Engine's automatic splitting of tables may be wrong for some data models, this property's default setting turns the Engine Attribute Role feature OFF. If the property is turned ON and you use the feature incorrectly, the most common error message from the Analytical Engine is "Fact not found at requested level."
This feature is turned OFF by default starting with 7i Beta 2. Before that, it was turned OFF for upgraded projects and ON by default for new projects. So if you created a metadata with the Beta 1 version of 7i, this feature may be turned on in your metadata. While updating the schema, if the Engine Attribute Role feature is ON and the Attribute Role limit is exceeded, you may get an error message from the Engine, because there is a limit on the number of tables into which a given table can be split internally. In this case, turn the Engine Attribute Role feature OFF and use table aliasing instead.
1 If the metric being ranked has to be calculated in the Analytical Engine, the ranking is calculated in the Analytical Engine as well.
2 If the database supports the Rank function, the ranking is done in the database.
3 If neither of the above criteria is met, the Rank Method property setting is used.
The most common form of ranking is referred to as Open Database Connectivity (ODBC) ranking. This was the standard method used by MicroStrategy 6.x and earlier. This method makes multiple queries against the warehouse to determine the ranking. ODBC ranking is the default ranking technique, when the database does not support native ranking, because it is more efficient for large datasets. Analytical Engine ranking generates the result of the ranking operation in the MicroStrategy Analytical Engine and then moves the result set back to the warehouse to perform any further operations and compile the final result set.
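The two techniques can be contrasted with a small sqlite3 sketch (invented table and data). The correlated-subquery form shown here is one common single-statement way to express an ODBC-style rank, not the Engine's exact multi-query SQL; the client-side version stands in for ranking in the Analytical Engine:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE SALES (STORE TEXT, REVENUE REAL)")
cur.executemany("INSERT INTO SALES VALUES (?, ?)",
                [("A", 50.0), ("B", 80.0), ("C", 20.0)])

# Database-side ranking: a store's descending rank is 1 plus the number
# of stores with strictly greater revenue.
sql_rank = cur.execute(
    "SELECT a.STORE, 1 + (SELECT count(*) FROM SALES b "
    "WHERE b.REVENUE > a.REVENUE) AS RNK "
    "FROM SALES a ORDER BY RNK").fetchall()

# Client-side ranking: fetch the rows and rank them in the client.
rows = cur.execute("SELECT STORE, REVENUE FROM SALES").fetchall()
client_rank = [(store, i + 1) for i, (store, _rev) in
               enumerate(sorted(rows, key=lambda r: -r[1]))]

print(sql_rank)     # [('B', 1), ('A', 2), ('C', 3)]
print(client_rank)  # [('B', 1), ('A', 2), ('C', 3)]
```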
The Remove Group By Option property determines when Group By conditions and aggregations can be omitted in specific SQL generation scenarios, depending on which of the following options you select:
Remove aggregation and Group By when Select level is identical to From level: This default setting provides the common behavior of omitting Group By conditions and aggregations when the level of the SELECT clause is identical to the level of the FROM clause. For example, a SQL statement that includes only the ID column of the Store attribute in the SELECT clause, and only the lookup table for the Store attribute in the FROM clause, does not include any Group By conditions.

Remove aggregation and Group By when Select level contains all attribute(s) in From level: You can choose to omit Group By conditions and aggregations in the scenario of including multiple attributes on a report that are built from columns of the same table in the data warehouse. For example, you have separate attributes for shipping data, such as Shipping ID, Shipping Time, and Shipping Location, that are mapped to columns of the same table, whose primary key is mapped to Shipping ID. With this setting selected, when these three attributes are included on a report, the Group By conditions and aggregations for the shipping attributes are omitted.
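A sqlite3 sketch (hypothetical lookup table) shows why dropping the aggregation is safe in this scenario: when the SELECT level matches the table's key level, the grouped and ungrouped queries return identical rows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical lookup table whose primary key is SHIP_ID; SHIP_TIME and
# SHIP_LOC are other attributes mapped to columns of the same table.
cur.execute("CREATE TABLE LU_SHIP (SHIP_ID INTEGER PRIMARY KEY, "
            "SHIP_TIME TEXT, SHIP_LOC TEXT)")
cur.executemany("INSERT INTO LU_SHIP VALUES (?, ?, ?)",
                [(1, "AM", "East"), (2, "PM", "West")])

# With aggregation and GROUP BY (the default behavior):
grouped = cur.execute(
    "SELECT SHIP_ID, max(SHIP_TIME), max(SHIP_LOC) FROM LU_SHIP "
    "GROUP BY SHIP_ID ORDER BY SHIP_ID").fetchall()

# With the optimized setting: the SELECT level equals the table level,
# so the aggregation and GROUP BY can simply be dropped.
plain = cur.execute(
    "SELECT SHIP_ID, SHIP_TIME, SHIP_LOC FROM LU_SHIP "
    "ORDER BY SHIP_ID").fetchall()

print(grouped == plain)  # True
```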
equivalent logical operators such as AND NOT and AND. Set operators can be used to combine two or more of the following types of set qualifications:
relationship qualifications

metric qualifications, when combined with other types of set qualifications with the logical operators AND, NOT, or OR

report as filter qualifications, when combined with the logical operators AND, NOT, or OR

Note the following:
Set operators can only be used to combine the filter qualifications listed above if they have the same output level. For example, a relationship qualification with an output level of Year and Region cannot be combined with another relationship qualification with an output level of Year.

Metric qualifications and report-as-filter qualifications, when combined with AND, render as inner joins by default to avoid a subquery in the final result pass. When Set Operator Optimization is enabled, the inner joins are replaced by subqueries combined using INTERSECT.

Metric qualifications at the same level are combined into one set qualification before being applied to the final result pass. This is more efficient than using a set operator. Consult MicroStrategy Tech Note TN5200-802-0535 for more details.

For more information on filters and filter qualifications, see Chapter 3, Advanced Filters, of the MicroStrategy Advanced Reporting Guide.
Along with the restrictions described above, the use of SQL set operators also depends on the subquery type and the database platform. For more information on subquery types, see Sub Query Type, page 778. Set Operator Optimization can be used with the following subquery types:
WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT * ...) for multiple columns IN
WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...)

WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2 ...) for multiple columns IN

Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery

If either of the two subquery types that use fallback actions performs a fallback, Set Operator Optimization is not applied.
Intersect: Yes / Yes / No / Yes / Yes / Yes (2005 and later) / No / Yes
Intersect ALL: Yes / Yes / No / No / Yes / No / No / Yes
Except: Yes / Yes / No / Yes (Minus) / Yes / Yes (2005 and later) / No / Yes
Except ALL: Yes / Yes / No / No / Yes / No / No / Yes
Union: Yes / Yes / Yes / Yes / Yes / Yes / No / Yes
Union ALL: Yes / Yes / Yes / Yes / Yes / Yes / No / Yes
If you enable Set Operator Optimization for a database platform that does not support operators such as EXCEPT and INTERSECT, the Set Operator Optimization property is ignored. The Set Operator Optimization property provides you with the following options:
Disable Set Operator Optimization: Operators such as IN and AND NOT are used in SQL subqueries with multiple filter qualifications.

Enable Set Operator Optimization (if database support and [Sub Query Type]): This setting can improve performance by using SQL set operators such as EXCEPT, INTERSECT, and MINUS in SQL subqueries to combine multiple filter qualifications that have the same output level. All of the dependencies described above must be met for SQL set operators to be used. If you enable SQL set operators for a database platform that does not support them, this setting is ignored and filters are combined in the standard way with operators such as IN and AND NOT.

For a further discussion of the Set Operator Optimization VLDB property, refer to MicroStrategy Tech Note TN5200-802-0534.
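The difference between the two settings can be sketched with sqlite3, which supports INTERSECT (the tables, predicates, and data are invented; both forms return the customers that satisfy both qualifications):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE SALES (CUSTOMER_ID INTEGER, AMT REAL)")
cur.executemany("INSERT INTO SALES VALUES (?, ?)",
                [(1, 50.0), (2, 150.0), (3, 200.0), (4, 300.0)])
cur.execute("CREATE TABLE RETURNS (CUSTOMER_ID INTEGER, CNT INTEGER)")
cur.executemany("INSERT INTO RETURNS VALUES (?, ?)",
                [(2, 1), (3, 0), (4, 5)])

# Standard form: two IN subqueries (qualification 1: AMT > 100,
# qualification 2: CNT < 2) combined with AND.
standard = cur.execute(
    "SELECT DISTINCT CUSTOMER_ID FROM SALES "
    "WHERE CUSTOMER_ID IN (SELECT CUSTOMER_ID FROM SALES WHERE AMT > 100) "
    "AND CUSTOMER_ID IN (SELECT CUSTOMER_ID FROM RETURNS WHERE CNT < 2) "
    "ORDER BY CUSTOMER_ID").fetchall()

# Optimized form: the same-level qualifications combined with INTERSECT.
optimized = cur.execute(
    "SELECT CUSTOMER_ID FROM SALES WHERE AMT > 100 "
    "INTERSECT "
    "SELECT CUSTOMER_ID FROM RETURNS WHERE CNT < 2 "
    "ORDER BY CUSTOMER_ID").fetchall()

print(standard, optimized)  # [(2,), (3,)] [(2,), (3,)]
```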
Level 0: No optimization: This is the default option. SQL queries are not optimized.

Level 1: Remove Unused and Duplicate Passes: Redundant, identical, and equivalent SQL passes are removed from queries during SQL generation.

Level 2: Level 1 + Merge Passes with different SELECT: Level 1 optimization takes place as described above, and combinable SQL passes from different SELECT statements are consolidated.
Year attribute
Region attribute
Sum(Profit) {~+, Category%} metric (calculates profit for each Category, ignoring any filtering on Category)
SQL Pass 1: Retrieves the set of categories that satisfy the metric qualification
SELECT a11.CATEGORY_ID CATEGORY_ID
into #ZZTRH02012JMQ000
FROM YR_CATEGORY_SLS a11
GROUP BY a11.CATEGORY_ID
HAVING sum(a11.TOT_DOLLAR_SALES) > 1000000.0
SQL Pass 2: Final pass that selects the related report data, but does not use the results of the first SQL pass
SELECT a13.YEAR_ID YEAR_ID,
    a12.REGION_ID REGION_ID,
    max(a14.REGION_NAME) REGION_NAME,
    sum((a11.TOT_DOLLAR_SALES - a11.TOT_COST)) WJXBFS1
FROM DAY_CTR_SLS a11
    join LU_CALL_CTR a12 on (a11.CALL_CTR_ID = a12.CALL_CTR_ID)
    join LU_DAY a13 on (a11.DAY_DATE = a13.DAY_DATE)
    join LU_REGION a14 on (a12.REGION_ID = a14.REGION_ID)
GROUP BY a13.YEAR_ID, a12.REGION_ID
SQL Pass 1 is redundant because it creates and populates a temporary table, #ZZTRH02012JMQ000, that is never accessed again and is unnecessary for generating the intended SQL result. If you select either the Level 1: Remove Unused and Duplicate Passes or the Level 2: Level 1 + Merge Passes with different SELECT option, only one SQL pass (the second SQL pass described above) is generated, because it is sufficient to satisfy the query on its own. By selecting either option, you reduce the number of SQL passes from two to one, which can decrease query time.
Region attribute
Metric 1 = Sum(Revenue) {Region+} (calculates the total revenue for each region)
Metric 2 = Count<FactID=Revenue>(Call Center) {Region+} (calculates the number of call centers for each region)
Metric 3 = Metric 1/Metric 2 (Average Revenue = Total Revenue/Number of Call Centers)
SQL Pass 1: Calculates Metric 1, the total revenue for each region

SELECT a12.[REGION_ID] AS REGION_ID,
    sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1
into [ZZTI10200U2MD000]
FROM [CITY_CTR_SLS] a11,
    [LU_CALL_CTR] a12
WHERE a11.[CALL_CTR_ID] = a12.[CALL_CTR_ID]
GROUP BY a12.[REGION_ID]

SQL Pass 2: Calculates Metric 2, the number of call centers for each region

SELECT a12.[REGION_ID] AS REGION_ID,
    count(a11.[CALL_CTR_ID]) AS WJXBFS1
into [ZZTI10200U2MD001]
FROM [CITY_CTR_SLS] a11,
    [LU_CALL_CTR] a12
WHERE a11.[CALL_CTR_ID] = a12.[CALL_CTR_ID]
GROUP BY a12.[REGION_ID]
SQL Pass 3: Final pass that calculates Metric 3 = Metric 1/Metric 2 and displays the result
SELECT pa11.[REGION_ID] AS REGION_ID,
    a13.[REGION_NAME] AS REGION_NAME,
    pa11.[WJXBFS1] AS WJXBFS1,
    IIF(ISNULL((pa11.[WJXBFS1] / IIF(pa12.[WJXBFS1] = 0, NULL, pa12.[WJXBFS1]))), 0,
        (pa11.[WJXBFS1] / IIF(pa12.[WJXBFS1] = 0, NULL, pa12.[WJXBFS1]))) AS WJXBFS2
FROM [ZZTI10200U2MD000] pa11,
    [ZZTI10200U2MD001] pa12,
    [LU_REGION] a13
WHERE pa11.[REGION_ID] = pa12.[REGION_ID]
    and pa11.[REGION_ID] = a13.[REGION_ID]
Because SQL Passes 1 and 2 contain almost exactly the same code, they can be consolidated into one SQL pass. The aggregation functions and the target temporary tables are the only unique characteristics of each pass, so Passes 1 and 2 can be combined into a single pass. Pass 3 remains as it is.
You can achieve this type of optimization by selecting the Level 2: Level 1 + Merge Passes with different SELECT option. The SQL that results from this level of SQL optimization is as follows: Pass 1:
SELECT a12.[REGION_ID] AS REGION_ID,
    count(a11.[CALL_CTR_ID]) AS WJXBFS1,
    sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS2
into [ZZTI10200U2MD001]
FROM [CITY_CTR_SLS] a11,
    [LU_CALL_CTR] a12
WHERE a11.[CALL_CTR_ID] = a12.[CALL_CTR_ID]
GROUP BY a12.[REGION_ID]
Pass 2:
SELECT pa11.[REGION_ID] AS REGION_ID,
    a13.[REGION_NAME] AS REGION_NAME,
    pa11.[WJXBFS1] AS WJXBFS1,
    IIF(ISNULL((pa11.[WJXBFS1] / IIF(pa12.[WJXBFS1] = 0, NULL, pa12.[WJXBFS1]))), 0,
        (pa11.[WJXBFS1] / IIF(pa12.[WJXBFS1] = 0, NULL, pa12.[WJXBFS1]))) AS WJXBFS2
FROM [ZZTI10200U2MD000] pa11,
    [ZZTI10200U2MD001] pa12,
    [LU_REGION] a13
WHERE pa11.[REGION_ID] = pa12.[REGION_ID]
    and pa11.[REGION_ID] = a13.[REGION_ID]
For a further discussion on the SQL Global Optimization VLDB property, refer to MicroStrategy Tech Note TN5200-802-0526.
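The pass-merging idea can be sketched in Python with sqlite3 (a simplified, pre-joined stand-in table; names and data are invented). Two aggregation passes with identical FROM, WHERE, and GROUP BY clauses collapse into one pass that computes both aggregates, and the ratio metric comes out the same either way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE CTR_SLS (REGION_ID INTEGER, CALL_CTR_ID INTEGER, "
            "TOT_DOLLAR_SALES REAL)")
cur.executemany("INSERT INTO CTR_SLS VALUES (?, ?, ?)",
                [(1, 10, 100.0), (1, 11, 300.0), (2, 12, 50.0)])

# Level 1: two separate aggregation passes over the same table.
sums = dict(cur.execute(
    "SELECT REGION_ID, sum(TOT_DOLLAR_SALES) FROM CTR_SLS "
    "GROUP BY REGION_ID"))
counts = dict(cur.execute(
    "SELECT REGION_ID, count(CALL_CTR_ID) FROM CTR_SLS "
    "GROUP BY REGION_ID"))

# Level 2: one merged pass computes both aggregates, since the FROM,
# WHERE, and GROUP BY clauses are identical.
merged = cur.execute(
    "SELECT REGION_ID, sum(TOT_DOLLAR_SALES), count(CALL_CTR_ID) "
    "FROM CTR_SLS GROUP BY REGION_ID ORDER BY REGION_ID").fetchall()

# Either way, Average Revenue = Total Revenue / Number of Call Centers.
avg_merged = {r: s / c for r, s, c in merged}
avg_split = {r: sums[r] / counts[r] for r in sums}
print(avg_merged)  # {1: 200.0, 2: 50.0}
```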
Default: Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery

DB2 UDB for OS/390: Where Exists (Select *...)
Microsoft Access 2000/2002/2003: Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery
Microsoft Excel 2000/2003: Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery
Netezza: Where (col1, col2...) in (Select s1.col1, s1.col2...)
Oracle: Where (col1, col2...) in (Select s1.col1, s1.col2...)
PostgreSQL: Where (col1, col2...) in (Select s1.col1, s1.col2...)
RedBrick: Where col1 in (Select s1.col1...) falling back to Exists (Select col1, col2...) for multiple column in
Teradata: Use Temporary Table, falling back to in (Select col) for correlated subquery
Notice that some options have a fallback action. In some scenarios the selected option does not work, so the SQL Engine must fall back to an approach that always works. The typical fallback scenario occurs when multiple columns are needed in the IN list but the database does not support multi-column IN lists; the Engine then uses a correlated subquery instead. For a further discussion of the Sub Query Type VLDB property, refer to MicroStrategy Tech Note TN5200-75x-0539.
Examples
WHERE EXISTS (Select *)
select a31.ITEM_NBR ITEM_NBR,
    a31.CLASS_NBR CLASS_NBR,
    sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
where (exists
    (select *
    from REGION_ITEM r21,
        LOOKUP_DAY r22
    where r21.CUR_TRN_DT = r22.CUR_TRN_DT
        and r22.SEASON_ID in (199501)
        and r21.ITEM_NBR = a31.ITEM_NBR
        and r21.CLASS_NBR = a31.CLASS_NBR))
group by a31.ITEM_NBR, a31.CLASS_NBR
WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT * ...) for multiple columns IN
select a31.ITEM_NBR ITEM_NBR,
    sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
where ((a31.ITEM_NBR) in
    (select r21.ITEM_NBR ITEM_NBR
    from REGION_ITEM r21,
        LOOKUP_DAY r22
    where r21.CUR_TRN_DT = r22.CUR_TRN_DT
        and r22.SEASON_ID in (199501)))
group by a31.ITEM_NBR
Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery
create table TEMP1 as
select r21.ITEM_NBR ITEM_NBR
from REGION_ITEM r21,
    LOOKUP_DAY r22
where r21.CUR_TRN_DT = r22.CUR_TRN_DT
    and r22.SEASON_ID in (199501)

select a31.ITEM_NBR ITEM_NBR,
    sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
    join TEMP1 a32 on a31.ITEM_NBR = a32.ITEM_NBR
group by a31.ITEM_NBR
WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2 ...) for multiple columns IN
select a31.ITEM_NBR ITEM_NBR,
    sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
where ((a31.ITEM_NBR) in
    (select r21.ITEM_NBR ITEM_NBR
    from REGION_ITEM r21,
        LOOKUP_DAY r22
    where r21.CUR_TRN_DT = r22.CUR_TRN_DT
        and r22.SEASON_ID in (199501)))
group by a31.ITEM_NBR
Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery
create table TEMP1 as
select r21.ITEM_NBR ITEM_NBR
from REGION_ITEM r21,
    LOOKUP_DAY r22
where r21.CUR_TRN_DT = r22.CUR_TRN_DT
    and r22.SEASON_ID in (199501)

select a31.ITEM_NBR ITEM_NBR,
    sum(a31.REG_SLS_DLR) REG_SLS_DLR
from REGION_ITEM a31
    join TEMP1 a32 on a31.ITEM_NBR = a32.ITEM_NBR
group by a31.ITEM_NBR
Examples
An example where it will not work:
In this case, the filter is removed regardless of this VLDB property setting, assuming there is no relationship between Country and Year defined in the schema. An example where it will work:
Report Filters:
FL02 = (Country, Quarter) in {(French, Q1 2002), (England, Q3 2003)}
FL01 = Country in {USA}
Template: Year
For the setting to work, filter FL02 can come from a joint element list (as above), a Metric Qualification/Relationship Filter with Country and Quarter as the output level, or a Report as Filter with Country and Quarter on the template of the Report as Filter. The SQL generated with each setting is as follows.
where (((s21.[COUNTRY_ID] = 4 and s22.[QUARTER_ID] = 20021) or (s21.[COUNTRY_ID] = 3 and s22.[QUARTER_ID] = 20023)) and s21.[COUNTRY_ID] in (1))))
Description:
Allows you to choose whether to select attribute forms that are on the template in the intermediate pass (if available).
Allows you to choose whether to select additional attributes (usually parent attributes) needed on the template when the join tree and their child attributes have already been selected in the Attribute Form Selection option for the intermediate pass.
Determines whether multiple insert statements are issued in the ODBC call and, if issued together, the string used to connect the multiple insert statements.
Allows you to choose whether to use a GROUP BY, and how the GROUP BY should be constructed, when working with a column that is a constant.
Allows you to determine the order in which datamart columns are created.
Lets you define the syntax pattern for Date data.
Determines how attributes are treated when they are not in the attribute weights list.

Possible Values:
Select ID form only; Select ID and other forms if they are on template and available in existing join tree (Default)
Select only the attributes needed; Select other attributes in current join tree if they are on template and their child attributes have already been selected
Pure select, no group by; Use max, no group by; Group by column (expression); Group by alias; Group by position
Columns created in order based on attribute weight; Columns created in order in which they appear on the template
User-defined

Default Value:
Select ID form only
User-defined
NULL
User-defined
Property: Disable Prefix in WH Partition Table
Description: Allows you to choose whether or not to use the prefix in partition queries. The prefix is always used with pre-queries.
Possible values: Use prefix in both warehouse partition pre-query and partition query; Use prefix in warehouse partition pre-query but not in partition query
Default value: Use prefix in both warehouse partition pre-query and partition query

Property: (SELECT DISTINCT / GROUP BY handling)
Description: If no aggregation is needed and the attribute defined on the table is not a primary key, tells the Analytical Engine whether to use SELECT DISTINCT, GROUP BY, or neither.
Possible values: Use DISTINCT; No DISTINCT, no GROUP BY; Use GROUP BY
Default value: Use DISTINCT

Property: GROUP BY ID Attribute
Description: Determines how to group by a selected ID column when an expression is performed on the ID column.
Default value: Group by expression

Property: (GROUP BY non-ID attribute handling)
Description: Determines how to handle columns for non-ID attributes.

Property: Insert Post String
Description: Determines the string that is inserted at the end of insert and implicit table creation statements.
Possible values: User-defined
Default value: NULL

Property: Insert Table Option
Description: Determines the string inserted after the table name in insert statements; analogous to Table Option.
Possible values: User-defined

Property: (Maximum digits in constant)
Description: Sets the maximum number of digits in a constant literal in an insert values statement (0 = no limit).
Default value: No limit

Property: (Same-definition metric handling)
Description: Determines how to handle metrics that have the same definition.

Property: Select Post String
Description: Sets the string to be appended to all SELECT statements generated by the Analytical Engine, for example, FOR FETCH ONLY.
Possible values: User-defined

Property: SQL Date Format
Description: Sets the format for dates in engine-generated SQL.
Default value: yyyy-mm-dd
Property: SQL Decimal Separator
Description: Used to change the decimal separator in SQL statements from a decimal point to a comma, for international database users.
Possible values: Use "." as decimal separator (ANSI standard); Use "," as decimal separator
Default value: Use "." as decimal separator (ANSI standard)

Property: SQL Hint
Description: This string is placed after the SELECT statement.
Possible values: User-defined
Default value: NULL

Property: SQL Timestamp Format
Description: Sets the format of the timestamp literal accepted in the Where clause.
Possible values: User-defined
Default value: yyyy-mm-dd hh:nn:ss

Property: Try to Use BigInt Data Type in Table Creation for Long Integers Up to the Following Threshold
Description: Determines whether to map long integers of a certain length as BigInt data types when MicroStrategy creates tables in a database.
Default value: Do not use BigInt

Property: UNION Multiple INSERT
Description: Allows the Analytical Engine to UNION multiple insert statements into the same temporary table.
Example
A report template contains the attributes Region and Store, and metrics M1 and M2. M1 uses the fact table FT1, which contains Store_ID, Store_Desc, Region_ID, Region_Desc, and F1. M2 uses the fact table FT2, which contains Store_ID, Store_Desc, Region_ID, Region_Desc, and F2. With the normal SQL Engine algorithm, the intermediate pass that calculates M1 selects Store_ID and F1, and the intermediate pass that calculates M2 selects Store_ID and F2. The final pass then joins these two intermediate tables together. But that is not enough: since Region is on the template, the engine must join upward to the region level and find the Region_Desc form, which requires joining either FT1 or FT2 again in the final pass. So with the original algorithm, either FT1 or FT2 is accessed twice. If these tables are big, as they usually are, performance can be very slow. On the other hand, if Store_ID, Store_Desc, Region_ID, and Region_Desc are picked up in the intermediate passes, neither FT1 nor FT2 needs to be joined in the final pass, thus boosting performance. For this reason, the following two properties are available in MicroStrategy:
Attribute Form Selection Option for Intermediate Pass Attribute Selection Option for Intermediate Pass Note the following:
These properties use bigger (wider) intermediate tables to save additional joins in the final pass, trading space for time. The two properties work independently; one does not influence the other. Each property has two values, and the default behavior is the original algorithm. When the property is enabled:
The SQL Engine selects additional attributes or attribute forms in the intermediate pass, when they are directly available.
The SQL Engine does not join additional tables to select more attributes or forms. So for intermediate passes, the number of tables to be joined is the same as when the property is disabled.
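The effect can be sketched with the FT1 example above. The table and column names follow the hypothetical example, so this is an illustration of the shape of the SQL, not actual engine output:

```sql
-- Property disabled: the intermediate pass for M1 selects only the ID and the fact
insert into ZZT_M1
select a11.STORE_ID, sum(a11.F1) WJXBFS1
from FT1 a11
group by a11.STORE_ID

-- Property enabled: forms and parent attributes already present in FT1 are
-- picked up here, so FT1 does not need to be joined again in the final pass
insert into ZZT_M1
select a11.STORE_ID, a11.STORE_DESC, a11.REGION_ID, a11.REGION_DESC,
       sum(a11.F1) WJXBFS1
from FT1 a11
group by a11.STORE_ID, a11.STORE_DESC, a11.REGION_ID, a11.REGION_DESC
```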
The Bulk Insert String property places the provided string in front of the INSERT statement. For Teradata, this property is set to ; to increase query performance. The string is added only for the INSERT INTO SELECT statements, not for the INSERT INTO VALUES statements generated by the Analytical Engine. Because the string applies to INSERT INTO SELECT statements, this property takes effect only during explicit creation of permanent or temporary tables.
Example
Bulk Insert String = ;
Examples
Pure select, no GROUP BY
insert into ZZTP00 select a11.QUARTER_ID QUARTER_ID, 0 XKYCGT, sum(a11.REG_SLS_DLR) WJXBFS1 from SALES_Q1_2002 a11 group by a11.QUARTER_ID insert into ZZTP00 select a11.QUARTER_ID QUARTER_ID, 1 XKYCGT, sum(a11.REG_SLS_DLR) WJXBFS1 from SALES_Q2_2002 a11 group by a11.QUARTER_ID
GROUP BY alias
insert into ZZTP00 select a11.QUARTER_ID QUARTER_ID, 0 XKYCGT, sum(a11.REG_SLS_DLR) WJXBFS1 from SALES_Q1_2002 a11 group by a11.QUARTER_ID, XKYCGT insert into ZZTP00 select a11.QUARTER_ID QUARTER_ID, 1 XKYCGT, sum(a11.REG_SLS_DLR) WJXBFS1 from SALES_Q2_2002 a11 group by a11.QUARTER_ID, XKYCGT
GROUP BY position
insert into ZZTP00 select a11.QUARTER_ID QUARTER_ID, 0 XKYCGT, sum(a11.REG_SLS_DLR) WJXBFS1 from SALES_Q1_2002 a11 group by a11.QUARTER_ID, 2 insert into ZZTP00 select a11.QUARTER_ID QUARTER_ID, 1 XKYCGT, sum(a11.REG_SLS_DLR) WJXBFS1 from SALES_Q2_2002 a11 group by a11.QUARTER_ID, 2
Columns created in order based on attribute weight: Datamart columns are created in an order based on their attribute weights. For more information about attribute weights, see Default Attribute Weight, page 793.
Columns created in order in which they appear on the template: Datamart columns are created in the same order as they appear on the report template.
Date Pattern
Date Pattern is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties, page 626. The Date Pattern property is used to add or alter a syntax pattern for handling date columns.
Example
Default: No extra syntax pattern for handling dates
Oracle: To_Date(#0)
Tandem: (d#0)
Use the Default Attribute Weight property to determine how attribute weights should be treated for attributes that are not in the attribute weights list. You can access the attribute weights list from the Project Configuration Editor. In the Project Configuration Editor, expand Report Definition and select SQL generation. From the Attribute weights section, select Modify to open the attribute weights list. The attribute weights list allows you to change the order of attributes used in the SELECT clause of a query. For example, suppose the Region attribute is placed higher on the attribute weights list than the Customer State attribute. When the SQL for a report containing both attributes is generated, Region is referenced in the SQL before Customer State. However, suppose another attribute, Quarter, also appears on the report template but is not included in the attribute weights list. In this case, you can select either of the following options within the Default Attribute Weight property to determine whether Quarter is considered highest or lowest on the attribute weights list:
Lowest (default): Attributes not in the attribute weights list are treated as having the lowest weight. With this setting, Quarter is considered to have a lighter attribute weight than the other two attributes, so it is referenced after Region and Customer State in the SELECT statement.

Highest: Attributes not in the attribute weights list are treated as having the highest weight. With this setting, Quarter is considered to have a higher attribute weight than the other two attributes, so it is referenced before Region and Customer State in the SELECT statement.
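Continuing the example, the setting changes only the order of the columns in the SELECT list. The aliases below are illustrative, not actual engine output:

```sql
-- Lowest (default): Quarter, absent from the weights list, comes last
select a11.REGION_ID, a12.CUST_STATE_ID, a13.QUARTER_ID, ...

-- Highest: Quarter, absent from the weights list, comes first
select a13.QUARTER_ID, a11.REGION_ID, a12.CUST_STATE_ID, ...
```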
When this property is enabled, the partition query no longer shares the prefix from the partition mapping table (PMT). Instead, the PBTNAME column (DB1.PBT1, DB2.PBT2, and so on) is used as the full partition base table (PBT) name. Even when this property is turned ON, the partition pre-query still applies a prefix, if there is one.
If there is aggregation, the Analytical Engine does a GROUP BY, not a DISTINCT. If there is no attribute (only metrics), the Analytical Engine does not do a DISTINCT. If there is a COUNT(DISTINCT ...) and the database does not support it, the Analytical Engine does a SELECT DISTINCT pass and then a COUNT(*) pass. If the database does not allow DISTINCT or GROUP BY for certain selected column data types, the Analytical Engine does not do it. If the select level is the same as the table key level and the table's true key property is selected, the Analytical Engine does not issue a DISTINCT.
When none of the above conditions are met, the Analytical Engine uses this property.
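For a simple attribute-element request against a table whose key is finer than the select level, the three values of this property correspond to SQL along the following lines (illustrative table and column names):

```sql
-- Use DISTINCT
select distinct a11.STORE_NBR from STORE_ITEM a11

-- No DISTINCT, no GROUP BY (duplicate rows may be returned)
select a11.STORE_NBR from STORE_ITEM a11

-- Use GROUP BY
select a11.STORE_NBR from STORE_ITEM a11 group by a11.STORE_NBR
```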
GROUP BY ID Attribute
GROUP BY ID Attribute is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties, page 626. This property determines how to group by a selected ID column when an expression is performed on the ID column. Each of the options is described below. The code fragment following each description replaces the section named group by ID in the following sample SQL statement.
select a22.STORE_NBR STORE_NBR, a22.MARKET_NBR * 10 MARKET_ID, sum(a21.REG_SLS_DLR) WJXBFS1 from STORE_DIVISION a21 join LOOKUP_STORE a22 on (a21.STORE_NBR = a22.STORE_NBR) where a22.STORE_NBR = 1 group by a22.STORE_NBR, group by ID
Group by expression: Group by the expression performed in the SELECT statement on the ID column.
a22.MARKET_NBR * 10
Group by column: Group by the column ID, ignoring the expression performed on the ID column.
a22.MARKET_NBR
Examples
Use Max
select a11.MARKET_NBR MARKET_NBR, max(a14.MARKET_DESC) MARKET_DESC, a11.CLASS_NBR CLASS_NBR, max(a13.CLASS_DESC) CLASS_DESC, a12.YEAR_ID YEAR_ID, max(a15.YEAR_DESC) YEAR_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from MARKET_CLASS a11 join LOOKUP_DAY a12 on (a11.CUR_TRN_DT = a12.CUR_TRN_DT) join LOOKUP_CLASS a13 on (a11.CLASS_NBR = a13.CLASS_NBR) join LOOKUP_MARKET a14 on (a11.MARKET_NBR = a14.MARKET_NBR) join LOOKUP_YEAR a15 on (a12.YEAR_ID = a15.YEAR_ID) group by a11.MARKET_NBR, a11.CLASS_NBR, a12.YEAR_ID
Use Group by
select a11.MARKET_NBR MARKET_NBR, a14.MARKET_DESC MARKET_DESC, a11.CLASS_NBR CLASS_NBR, a13.CLASS_DESC CLASS_DESC, a12.YEAR_ID YEAR_ID, a15.YEAR_DESC YEAR_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from MARKET_CLASS a11 join LOOKUP_DAY a12 on (a11.CUR_TRN_DT = a12.CUR_TRN_DT) join LOOKUP_CLASS a13 on (a11.CLASS_NBR = a13.CLASS_NBR) join LOOKUP_MARKET a14 on (a11.MARKET_NBR = a14.MARKET_NBR) join LOOKUP_YEAR a15 on (a12.YEAR_ID = a15.YEAR_ID) group by a11.MARKET_NBR, a14.MARKET_DESC, a11.CLASS_NBR, a13.CLASS_DESC, a12.YEAR_ID, a15.YEAR_DESC
Within these strings, two consecutive # characters are reduced to a single #. For example, to show three # characters in a statement, enter six # characters in the setting. You can produce any desired string with the right number of # characters. Using the # character is the same as using the ; character.
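As a sketch of the doubling rule, a post string entered with six # characters is emitted with three (the setting value and emitted SQL below are illustrative):

```sql
-- VLDB setting, as entered:  /* batch ###### */
-- Emitted in the SQL pass:   /* batch ### */
```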
Example
Insert into TABLENAME select A1.COL1, A2.COL2, A3.COL3 from TABLE1 A1, TABLE2 A2, TABLE3 A3 where A1.COL1=A2.COL1 and A2.COL4=A3.COL5 /*Insert Post String*/
Example
Insert into TABLENAME /*Insert Table Option*/
select A1.COL1, A2.COL2, A3.COL3 from TABLE1 A1, TABLE2 A2, TABLE3 A3 where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5
Examples
Database-specific setting
SQL Server: 28
Teradata: 18
Example
Select Post String = /* For Fetch Only */ select a11.STORE_NBR STORE_NBR, sum(a11.CLE_SLS_DLR) CLEARANCESAL from STORE_DIVISION a11 group by a11.STORE_NBR FOR FETCH ONLY
Example
Default Oracle Teradata
Examples
"." as the decimal separator
select a11.DEPARTMENT_NBR DEPARTMENT_NBR, a11.STORE_NBR STORE_NBR into #ZZTIS00H5K4MQ000 from HARI_COST_STORE_DEP a11 group by a11.DEPARTMENT_NBR, a11.STORE_NBR having sum(a11.COST_AMT) > 654.357
SQL Hint
The SQL Hint property is used for the Oracle SQL Hint pattern. This string is placed after the SELECT word in the Select statement. This property can be used to insert any SQL string that makes sense after the SELECT in a Select statement, but it is provided specifically for Oracle SQL Hints.
Example
SQL Hint = /*+ FULL */

Select /*+ FULL */ A1.STORE_NBR, max(A1.STORE_DESC) From LOOKUP_STORE A1 Where A1.STORE_NBR = 1 Group by A1.STORE_NBR
Example
Default DB2 RedBrick
Try to Use BigInt Data Type in Table Creation for Long Integers Up to the Following Threshold
Try to Use BigInt Data Type in Table Creation for Long Integers Up to the Following Threshold is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties, page 626. With this VLDB property you can determine whether long integers are mapped to a BigInt data type when MicroStrategy creates tables in the database. A datamart is an example of a MicroStrategy feature that requires MicroStrategy to create tables in a database. When long integers from databases are integrated into MicroStrategy, the BigDecimal data type is used to define the data in MicroStrategy. Long integers can be of various database data types, such as Number, Decimal, and BigInt. Because MicroStrategy does not use the BigInt data type by default when creating tables, BigInt data that has been integrated into MicroStrategy as BigDecimal can cause a data type mismatch between the originating database table that contained the BigInt and the database table created by MicroStrategy. You can use the following VLDB settings to support BigInt data types:
Do not use BigInt: Long integers are not mapped as BigInt data types when MicroStrategy creates tables in the database. This is the default behavior.
If you use BigInt data types in your database, this can cause a data type mismatch between the originating database table that contained the BigInt and the database table created by MicroStrategy.
Up to 18 digits: Long integers that have up to 18 digits are converted into BigInt data types.
This setting is a good option if you can ensure that your BigInt data uses no more than 18 digits. The maximum number of digits that a BigInt can use is 19. With this option, if your database contains BigInt data that uses all 19 digits, it is not mapped as a BigInt data type when MicroStrategy creates a table in the database. However, using this setting requires you to manually modify the column data type mapped to your BigInt data. You can achieve this by creating a column alias for the column of data in the Attribute Editor or Fact Editor in MicroStrategy. The column alias must have a data type of BigDecimal, a precision of 18, and a scale of zero. For steps to create a column alias to modify a column data type, see the MicroStrategy Project Design Guide.
Up to 19 digits: Long integers that have up to 19 digits are converted into BigInt data types.
Using this option enables BigInt data that uses up to 19 digits to be correctly mapped as a BigInt data type when MicroStrategy creates tables in the database. This option does not require you to create a column alias. However, this option can cause an overflow error if you have long integers that use exactly 19 digits and whose values are greater than the maximum allowed for a BigInt (9,223,372,036,854,775,807).
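The boundary follows from the BigInt range itself. Both literals below have 19 digits, but only the first fits (illustrative statements, not engine output):

```sql
-- Fits: equal to the BigInt maximum, 9,223,372,036,854,775,807
insert into T1 values (9223372036854775807)

-- Overflows: a 19-digit value above the BigInt maximum
insert into T1 values (9999999999999999999)
```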
Property: Alias Pattern
Description: Used to alter the pattern for aliasing column names. Automatically set for Microsoft Access users.
Possible values: User-defined
Default value: AS

Property: Attribute ID Constraint
Description: Defines the column constraints (for example, NULL or NOT NULL) put on the ID form of attributes.
Possible values: User-defined
Default value: NULL

Property: Column Pattern
Description: Used to alter the pattern for column names.
Possible values: User-defined

Property: Commit After Final Drop
Description: Determines whether to issue a COMMIT statement after the final DROP statement.
Possible values: No Commit after the final Drop statement; Commit after the final Drop statement
Property: Commit Level
Description: Sets when to issue a COMMIT statement after creating an intermediate table.
Possible values: No Commit; Post DDL; Post DML; Post DDL and DML
Default value: No Commit

Property: Create Post String (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
Description: Defines the string appended after the CREATE TABLE statement.
Possible values: User-defined
Default value: NULL

Property: Drop Temp Table Method
Description: Determines when to drop an intermediate object.

Property: Fallback Table Type
Description: Determines the type of table that is generated if the Analytical Engine cannot generate a derived table or common table expression.
Possible values: Permanent table
Default value: Permanent table

Property: (Hexadecimal character transformation)
Description: Allows string characters to be converted into the specific character encoding required for some Unicode implementations.
Possible values: Do not apply hexadecimal character transformation to quoted strings; Apply hexadecimal character transformation to quoted strings

Property: Intermediate Table Type
Description: Determines the type of intermediate table generated.
Possible values: Permanent table; Derived table; Common table expression; True temporary table; Temporary view

Property: Table Creation Type
Description: Determines the method used to create an intermediate table.
Possible values: Explicit table; Implicit table
Default value: Explicit table

Property: Table Descriptor (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
Description: Defines the string to be placed after the word TABLE in the CREATE TABLE statement.
Possible values: User-defined
Default value: NULL
Property: Table Option (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
Description: Defines the string to be placed after the table name in the CREATE TABLE statement.
Possible values: User-defined
Default value: NULL

Property: Table Prefix (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
Description: Defines the string to be added to a table name, for example, CREATE TABLE prefix.Tablename. (See the note below.)
Possible values: User-defined
Default value: NULL

Property: Table Qualifier (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
Description: Defines the key words placed immediately before TABLE, for example, CREATE volatile TABLE.
Possible values: User-defined
Default value: NULL

Property: Table Space (see Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String)
Description: String appended after the CREATE TABLE statement but before any Primary Index/Partition key definitions. (See the note below.)
Possible values: User-defined
Default value: NULL
To have the Analytical Engine populate dynamic information, insert the following syntax into the Table Prefix and Table Space strings:
!d inserts the date. !o inserts the report name. !u inserts the user name.
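For example, a Table Prefix containing !u qualifies intermediate tables with the executing user's name. For a user named JSMITH, the earlier CREATE TABLE sample would come out along these lines (illustrative, not engine output):

```sql
create table JSMITH.ZZTIS003RB6MD000 ( STORE_NBR NUMBER, CLEARANCESAL DOUBLE )
```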
Alias Pattern
Alias Pattern is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties, page 626. The Alias Pattern property allows you to alter the pattern for aliasing column names. Most databases do not need this pattern, because their column aliases simply follow the column name with only a space between them. However, Microsoft Access needs an AS between the column name and the given column alias. This pattern is automatically set for Microsoft Access users. This property is provided for customers using the Generic DBMS object because some databases may need the AS or another pattern for column aliasing.
Attribute ID Constraint
This property is available at the attribute level. You can access this property by opening the Attribute Editor, selecting the Tools menu, then choosing VLDB Properties. When creating intermediate tables in the explicit mode, you can specify the NOT NULL/NULL constraint during the table creation phase. This takes effect only when permanent or temporary tables are created in the explicit table creation mode. Furthermore, it applies only to the attribute columns in the intermediate tables.
Example
NOT NULL
create table ZZTIS003HHUMQ000 ( DEPARTMENT_NBR NUMBER(10, 0) NOT NULL, STORE_NBR NUMBER(10, 0) NOT NULL)
Column Pattern
Column Pattern is an advanced property that is hidden by default. For information on how to display this property, see Viewing and changing advanced VLDB properties, page 626. The Column Pattern property allows you to alter the pattern for column names. Most databases do not need this pattern altered. However, if you are using a case-sensitive database and need to add double quotes around the column name, this property allows you to do that.
Example
The standard column pattern is #0.#1. If double quotes are needed, the pattern changes to: "#0.#1"
Commit Level
The Commit Level property is used to issue COMMIT statements after Data Definition Language (DDL) and Data Manipulation Language (DML) statements. When this property is used in conjunction with the INSERT MID Statement, INSERT PRE Statement, or TABLE POST Statement VLDB properties, the COMMIT is issued before any of the custom SQL passes specified in those statements are executed. The only DDL statement after which a COMMIT is issued is the explicit CREATE TABLE statement. A COMMIT is also issued after DROP TABLE statements, even though they are DDL statements. The only DML statement after which a COMMIT is issued is the INSERT INTO TABLE statement. If the property is set to Post DML, the COMMIT is not issued after each individual INSERT INTO VALUES statement; instead, it is issued after all the INSERT INTO VALUES statements are executed. The Post DDL COMMIT only appears if the Intermediate Table Type VLDB property is set to Permanent tables or Temporary tables and the Table Creation Type VLDB property is set to Explicit mode. The Post DML COMMIT only appears if the Intermediate Table Type VLDB property is set to Permanent tables, Temporary tables, or Views. Not all database platforms support COMMIT statements, and some need special statements to be executed first, so this property should be used only in projects whose warehouse tables are in databases that support it.
Examples
Table Creation Type is set to Explicit
No Commit
create table ZZTIS00H8L8MQ000 ( DEPARTMENT_NBR NUMBER(10, 0), STORE_NBR NUMBER(10, 0)) tablespace users insert into ZZTIS00H8L8MQ000 select a11.DEPARTMENT_NBR DEPARTMENT_NBR, a11.STORE_NBR STORE_NBR from HARI_STORE_DEPARTMENT a11 group by a11.DEPARTMENT_NBR, a11.STORE_NBR having sum(a11.TOT_SLS_DLR) > 100000 select a11.DEPARTMENT_NBR DEPARTMENT_NBR, max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC, a11.STORE_NBR STORE_NBR, max(a13.STORE_DESC) STORE_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from HARI_STORE_DEPARTMENT a11, ZZTIS00H8L8MQ000 pa1, HARI_LOOKUP_DEPARTMENT a12, HARI_LOOKUP_STORE a13 where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR and a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and a11.STORE_NBR = a13.STORE_NBR group by a11.DEPARTMENT_NBR, a11.STORE_NBR
commit insert into ZZTIS00H8LHMQ000 select a11.DEPARTMENT_NBR DEPARTMENT_NBR, a11.STORE_NBR STORE_NBR from HARI_STORE_DEPARTMENT a11 group by a11.DEPARTMENT_NBR, a11.STORE_NBR having sum(a11.TOT_SLS_DLR) > 100000 select a11.DEPARTMENT_NBR DEPARTMENT_NBR, max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC, a11.STORE_NBR STORE_NBR, max(a13.STORE_DESC) STORE_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from HARI_STORE_DEPARTMENT a11, ZZTIS00H8LHMQ000 pa1, HARI_LOOKUP_DEPARTMENT a12, HARI_LOOKUP_STORE a13 where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR and a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and a11.STORE_NBR = a13.STORE_NBR group by a11.DEPARTMENT_NBR, a11.STORE_NBR
a11.STORE_NBR STORE_NBR from HARI_STORE_DEPARTMENT a11 group by a11.DEPARTMENT_NBR, a11.STORE_NBR having sum(a11.TOT_SLS_DLR) > 100000 commit select a11.DEPARTMENT_NBR DEPARTMENT_NBR, max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC, a11.STORE_NBR STORE_NBR, max(a13.STORE_DESC) STORE_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from HARI_STORE_DEPARTMENT a11, ZZTIS00H8LZMQ000 pa1, HARI_LOOKUP_DEPARTMENT a12, HARI_LOOKUP_STORE a13 where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR and a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and a11.STORE_NBR = a13.STORE_NBR group by a11.DEPARTMENT_NBR, a11.STORE_NBR
a11.STORE_NBR STORE_NBR, max(a13.STORE_DESC) STORE_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from HARI_STORE_DEPARTMENT a11, ZZTIS00H8LCMQ000 pa1, HARI_LOOKUP_DEPARTMENT a12, HARI_LOOKUP_STORE a13 where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR and a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and a11.STORE_NBR = a13.STORE_NBR group by a11.DEPARTMENT_NBR, a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000 commit select a11.DEPARTMENT_NBR DEPARTMENT_NBR, max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC, a11.STORE_NBR STORE_NBR, max(a13.STORE_DESC) STORE_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from HARI_STORE_DEPARTMENT a11, ZZTIS00H8M3MQ000 pa1, HARI_LOOKUP_DEPARTMENT a12, HARI_LOOKUP_STORE a13 where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR and a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and a11.STORE_NBR = a13.STORE_NBR group by a11.DEPARTMENT_NBR, a11.STORE_NBR
Examples
Do not apply hexadecimal character transformation to quoted strings
insert into mytable values ('A')
Where 4100 is the hexadecimal representation of the character A using UTF-8 Unicode encoding.
Default
True Temporary Tables Common Table Expression
Database: Default
Informix IDS and XPS: True Temporary Tables
SQL Server: True Temporary Tables
Oracle: True Temporary Tables
RedBrick: True Temporary Tables
Sybase ASE and IQ: True Temporary Tables
Teradata: Derived Table
Examples
The following output is from a DB2 UDB 7.x project.
Permanent Table
create table ZZIS03CT00 ( DEPARTMENT_NBR DECIMAL(10, 0), STORE_NBR DECIMAL(10, 0)) insert into ZZIS03CT00 select a11.DEPARTMENT_NBR DEPARTMENT_NBR, a11.STORE_NBR STORE_NBR from HSTORE_DEPARTMENT a11 group by a11.DEPARTMENT_NBR, a11.STORE_NBR having sum(a11.TOT_SLS_DLR) > 100000 select a11.DEPARTMENT_NBR DEPARTMENT_NBR, max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC, a11.STORE_NBR STORE_NBR, max(a13.STORE_DESC) STORE_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from HSTORE_DEPARTMENT a11
join ZZIS03CT00 pa1 on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR) join HLOOKUP_DEPARTMENT a12 on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR) join HLOOKUP_STORE a13 on (a11.STORE_NBR = a13.STORE_NBR) group by a11.DEPARTMENT_NBR, a11.STORE_NBR
Derived Table
select a11.DEPARTMENT_NBR DEPARTMENT_NBR, max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC, a11.STORE_NBR STORE_NBR, max(a13.STORE_DESC) STORE_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from HSTORE_DEPARTMENT a11 join (select a11.DEPARTMENT_NBR DEPARTMENT_NBR, a11.STORE_NBR STORE_NBR from HSTORE_DEPARTMENT a11 group by a11.DEPARTMENT_NBR, a11.STORE_NBR having sum(a11.TOT_SLS_DLR) > 100000 ) pa1 on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR) join HLOOKUP_DEPARTMENT a12 on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR) join HLOOKUP_STORE a13 on (a11.STORE_NBR = a13.STORE_NBR) group by a11.DEPARTMENT_NBR, a11.STORE_NBR
Temporary Table
declare global temporary table session.ZZIS03CU00( DEPARTMENT_NBR DECIMAL(10, 0), STORE_NBR DECIMAL(10, 0)) on commit preserve rows not logged insert into session.ZZIS03CU00 select a11.DEPARTMENT_NBR DEPARTMENT_NBR, a11.STORE_NBR STORE_NBR from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR, a11.STORE_NBR having sum(a11.TOT_SLS_DLR) > 100000 select a11.DEPARTMENT_NBR DEPARTMENT_NBR, max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC, a11.STORE_NBR STORE_NBR, max(a13.STORE_DESC) STORE_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from HSTORE_DEPARTMENT a11 join session.ZZIS03CU00 pa1 on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR) join HLOOKUP_DEPARTMENT a12 on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR) join HLOOKUP_STORE a13 on (a11.STORE_NBR = a13.STORE_NBR) group by a11.DEPARTMENT_NBR, a11.STORE_NBR
Views
create view ZZIS03CV00 (DEPARTMENT_NBR, STORE_NBR) as select a11.DEPARTMENT_NBR DEPARTMENT_NBR, a11.STORE_NBR STORE_NBR from HSTORE_DEPARTMENT a11 group by a11.DEPARTMENT_NBR, a11.STORE_NBR having sum(a11.TOT_SLS_DLR) > 100000 select a11.DEPARTMENT_NBR DEPARTMENT_NBR, max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC, a11.STORE_NBR STORE_NBR, max(a13.STORE_DESC) STORE_DESC, sum(a11.TOT_SLS_DLR) TOTALSALES from HSTORE_DEPARTMENT a11
join ZZIS03CV00 pa1 on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and a11.STORE_NBR = pa1.STORE_NBR) join HLOOKUP_DEPARTMENT a12 on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR) join HLOOKUP_STORE a13 on (a11.STORE_NBR = a13.STORE_NBR) group by a11.DEPARTMENT_NBR, a11.STORE_NBR
Examples
Explicit table
create table TEMP1 ( STORE_NBR INTEGER, TOT_SLS DOUBLE, PROMO_SLS DOUBLE) insert into TEMP1 select a21.STORE_NBR STORE_NBR, (sum(a21.REG_SLS_DLR) + sum(a21.PML_SLS_DLR)) TOT_SLS, sum(a21.PML_SLS_DLR) PROMO_SLS from STORE_DIVISION a21 where a21.STORE_NBR = 1
group by a21.STORE_NBR
Implicit table
create table TEMP1 as select a21.STORE_NBR STORE_NBR, (sum(a21.REG_SLS_DLR) + sum(a21.PML_SLS_DLR)) TOT_SLS, sum(a21.PML_SLS_DLR) PROMO_SLS from STORE_DIVISION a21 where a21.STORE_NBR = 1 group by a21.STORE_NBR
Table Prefix, Table Qualifier, Table Option, Table Descriptor, Table Space, & Create Post String
These properties can be used to customize the CREATE TABLE SQL syntax for any platform. All of these properties are reflected in the SQL statement only if the Intermediate Table Type VLDB property is set to Permanent Table. Customizing a CREATE TABLE statement is only possible for a permanent table. For all other valid Intermediate Table Type VLDB settings, the SQL does not reflect the values set for these properties. The location of each property in the CREATE TABLE statement is given below.
create /* Table Qualifier */ table /* Table Descriptor *//* Table Prefix */ZZTIS003RB6MD000 /* Table Option */ ( STORE_NBR NUMBER, CLEARANCESAL DOUBLE) /* Table Space */ /* Create Post String */
For platforms like Teradata and DB2 UDB 6.x and 7.x versions, the Primary Index or Partition Key SQL syntax is placed between the Table Space and Create Post String VLDB properties.
Property Set: Property = Value
Joins: Cartesian Join Evaluation = Reevaluate cartesian joins
Joins: Max Tables In Join = 15
Metrics: COUNT(column) Support = Use Count(*)
Metrics: Separate COUNT DISTINCT = Multiple count distinct, but count expression must be the same
Query Optimizations: MD Partition Prequery Option = Use constant in prequery
Query Optimizations: Sub Query Type = Where Exists (Select * ...)
Select/Insert: GROUP BY ID Attribute = Group by column
Select/Insert: SQL Date Format = yyyy-mm-dd
Select/Insert: TimeStamp Format = yyyy-mm-dd-hh.nn.ss
Tables: Intermediate Table Type = True temporary table
Database Platform
Property Set: Property = Value
Joins: Max Tables In Join = 15
Query Optimizations: MD Partition Prequery Option = Use constant in prequery
Query Optimizations: Sub Query Type = Where Exists (Select * ...)
Select/Insert: GROUP BY ID Attribute = Group by column
Select/Insert: SQL Date Format = yyyy-mm-dd
Select/Insert: TimeStamp Format = yyyy-mm-dd-hh.nn.ss
Tables: Fallback Table Type = True temporary table
Tables: Intermediate Table Type = Common table expression
Joins: Cartesian Join Evaluation = Reevaluate cartesian joins
Joins: Max Tables In Join = 97
Select/Insert: GROUP BY ID Attribute = Group by column
Select/Insert: SQL Date Format = yyyy-mm-dd
Select/Insert: TimeStamp Format = yyyy-mm-dd-hh.nn.ss
Select/Insert: UNION Multiple INSERT = Use UNION
Joins: Cartesian Join Evaluation = Reevaluate cartesian joins
Joins: Max Tables In Join = 97
Select/Insert: SQL Date Format = yyyy-mm-dd
Select/Insert: TimeStamp Format = yyyy-mm-dd-hh.nn.ss
Select/Insert: UNION Multiple INSERT = Use UNION
Tables: Intermediate Table Type = Common table expression
Indexing: Intermediate Table Index = Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem); a Primary Index (Teradata) or Partitioning Key (UDB) is created if the intermediate table index setting is set to create a primary index
Joins: Cartesian Join Evaluation = Reevaluate cartesian joins
Joins: Full Outer Join Support = Supported
Database Platform
Property Set
Query Optimizations Query Optimizations Tables Tables
Property
MD Partition Prequery Option Sub Query Type
Value
Use constant in prequery Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery True temporary table Common table expression Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem Create a primary index (Teradata) or partitioning key (UDB) if the intermediate table index setting is set to create a primary index
Indexing
Indexing
Joins Joins Query Optimizations Query Optimizations Tables Tables DB2 Information Integrator Indexing
Cartesian Join Evaluation Reevaluate cartesian joins Full Outer Join Support MD Partition Prequery Option Sub Query Type Supported Use constant in prequery Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery True temporary table Derived table Create a Primary Index (Teradata) or Partitioning Key (UDB) if the intermediate table index setting is set to create a primary index
Joins Select/Insert Select/Insert Select/Insert Generic DBMS Metrics Metrics HP Nonstop SQL/MP D42/D45 Joins Joins
Cartesian Join Evaluation Reevaluate cartesian joins SQL Date Format TimeStamp Format UNION Multiple INSERT NULL Check Zero Check Join Type Max Tables In Join yyyy-mm-dd yyyy-mm-dd-hh.nn.ss Use UNION Do nothing Do nothing SQL 89 Inner and Cross Join and SQL 92 Outer Join (Tandem) 15
Database Platform
Property Set
Metrics Metrics Query Optimizations Select/Insert Select/Insert
Property
COUNT Compound Attribute Separate COUNT DISTINCT Attribute Element Number Count Method Date Pattern SQL Date Format Date Pattern SQL Date Format Join Type COUNT Compound Attribute Separate COUNT DISTINCT Intermediate Table Type Table Creation Type Join Type COUNT Compound Attribute Separate COUNT DISTINCT SQL Global Optimization Intermediate Table Type Table Creation Type Join Type COUNT Compound Attribute GROUP BY ID Attribute TimeStamp Format Intermediate Table Type Table Creation Type Join Type
Value
COUNT(expression) disabled Multiple count distinct, but count expression must be the same Use ODBC cursor to calculate total element number {d'#0'} yyyy-mm-dd #0.[#1] yyyymmdd Join 89 Count expression disabled Multiple count distinct, only one count distinct per pass True temporary table Implicit Table Join 89 Count expression disabled Multiple count distinct, only one count distinct per pass Level 2: Level 1 + Merge Passes with Different SELECT True temporary table Implicit Table Join 89 Count expression disabled Group by column yyyy-mm-dd hh:nn:ss True temporary table Implicit Table Join 89
Hyperion Essbase
Select/Insert Select/Insert
Joins
Database Platform
Property Set
Metrics Query Optimizations Select/Insert Select/Insert Tables Tables
Property
Separate COUNT DISTINCT Sub Query Type
Value
No count distinct, use select distinct and count(*) instead Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery ###0## yyyy-mm-dd AS #0.[#1] Join 89 No count distinct, use select distinct and count(*) instead Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery ###0## yyyy-mm-dd AS #0.[#1] #0.[#1] yyyymmdd #0.[#1] yyyymmdd Join 89 No count distinct, use select distinct and count(*) instead Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery ###0## yyyy-mm-dd AS
Date Pattern SQL Date Format Alias Pattern Column Pattern Join Type Separate COUNT DISTINCT Sub Query Type
Date Pattern SQL Date Format Alias Pattern Column Pattern Date Pattern SQL Date Format Date Pattern SQL Date Format Join Type Separate COUNT DISTINCT Sub Query Type
Select/Insert Select/Insert
Select/Insert Select/Insert
Database Platform
Property Set
Tables
Property
Column Pattern Full Outer Join Support Max Tables In Join Max Digits In Constant SQL Date Format UNION Multiple INSERT Intermediate Table Type Table Creation Type Full Outer Join Support Max Tables In Join Set Operator Optimization SQL Global Optimization Max Digits In Constant SQL Date Format UNION Multiple INSERT Intermediate Table Type Table Creation Type Full Outer Join Support Max Tables In Join Max Digits In Constant SQL Date Format UNION Multiple INSERT Intermediate Table Type Table Creation Type
Value
#0.[#1] Supported 255 28 yyyy-mm-dd Use UNION True temporary table Implicit Table Supported 255 Enable Set Operator Optimization (if supported by database and [Sub Query Type]) Level 2: Level 1 + Merge Passes with Different SELECT 28 yyyy-mm-dd Use UNION True temporary table Implicit Table Supported 255 28 yyyy-mm-dd Use UNION True temporary table Implicit Table
Joins Joins Query Optimizations Query Optimizations Select/Insert Select/Insert Select/Insert Tables Tables
Database Platform
MySQL 5.0
Property Set
Indexing
Property
Intermediate Table Index
Value
Create table, insert into table, create index on intermediate table (all platforms other than Teradata, DB2 UDB EEE, Red Brick and Tandem) Level 2: Level 1 + Merge Passes with Different SELECT True temporary table Derived table Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem 4 Create a primary index (Teradata) or partitioning key (UDB) if the intermediate table index setting is set to create a primary index Supported WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...) Apply filter to passes touching warehouse tables and last join pass Use GROUP BY
SQL Global Optimization Fallback Table Type Intermediate Table Type Intermediate Table Index
Indexing Indexing
Full Outer Join Support Sub Query Type Apply Filter Options Distinct/Group By option (when no aggregation and not table key) Drop Temp Table Method Fallback Table Type Intermediate Table Type Intermediate Table Index
Do nothing True temporary table Derived table Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem 4 Create a primary index (Teradata) or partitioning key (UDB) if the intermediate table index setting is set to create a primary index Supported
Indexing Indexing
Joins
Database Platform
Property Set
Query Optimizations Select/Insert Select/Insert
Property
Sub Query Type Apply Filter Options Distinct/Group By option (when no aggregation and not table key) Drop Temp Table Method Fallback Table Type Intermediate Table Type Table Creation Type Join Type Max Metric Alias Size Sub Query Type Date Pattern GROUP BY Non-ID Attribute SQL Date Format Intermediate Table Type Table Creation Type Join Type Max Metric Alias Size Sub Query Type Date Pattern GROUP BY Non-ID Attribute SQL Date Format Intermediate Table Type Table Creation Type Join Type Max Metric Alias Size
Value
WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...) Apply filter to passes touching warehouse tables and last join pass Use GROUP BY
Tables Tables Tables Table Oracle 8i R2/R3 Enterprise Editions Joins Metrics Query Optimization Select/Insert Select/Insert Select/Insert Tables Tables Oracle 8i R2/R3 Standard Edition Joins Metrics Query Optimization Select/Insert Select/Insert Select/Insert Tables Tables Oracle 9i Joins Metrics
Do nothing True temporary table Derived table Implicit Table Join 89 30 WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...) To_Date('#0') Use Group By dd-mm-yyyy True temporary table Implicit Table Join 89 30 WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...) To_Date('#0') Use Group By dd-mm-yyyy True temporary table Implicit Table Join 89 30
Database Platform
Property Set
Query Optimization Select/Insert Select/Insert Select/Insert Tables Tables
Property
Sub Query Type Date Pattern GROUP BY Non-ID Attribute SQL Date Format Intermediate Table Type Table Creation Type Full Outer Join Support Join Type Max Metric Alias Size Sub Query Type Date Pattern GROUP BY Non-ID Attribute SQL Date Format Intermediate Table Type Table Creation Type Full Outer Join Support Join Type Max Metric Alias Size Set Operator Optimization SQL Global Optimization Sub Query Type Date Pattern GROUP BY Non-ID Attribute SQL Date Format Intermediate Table Type
Value
WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...) To_Date('#0', dd-mm-yyyy) Use Group By dd-mm-yyyy True temporary table Implicit Table Supported Join 89 30 WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...) To_Date('#0', dd-mm-yyyy) Use Group By dd-mm-yyyy True temporary table Implicit Table Supported Join 89 30 Enable Set Operator Optimization (if supported by database and [Sub Query Type]) Level 2: Level 1 + Merge Passes with Different SELECT WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...) To_Date('#0', dd-mm-yyyy) Use Group By dd-mm-yyyy True temporary table
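To illustrate how the Oracle Date Pattern To_Date('#0', dd-mm-yyyy) and the dd-mm-yyyy SQL Date Format combine when the Engine renders a date literal, a hypothetical WHERE clause (the table, column, alias, and date value are all illustrative; #0 is the placeholder the formatted date replaces) would come out along these lines:

```sql
select a11.ORDER_ID
from   ORDER_FACT a11
where  a11.ORDER_DATE = To_Date('15-03-2008', 'dd-mm-yyyy')
```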
Oracle 10g
Joins Joins Metrics Query Optimization Select/Insert Select/Insert Select/Insert Tables Tables
Oracle 10gR2
Joins Joins Metrics Query Optimizations Query Optimizations Query Optimization Select/Insert Select/Insert Select/Insert Tables
Database Platform
Property Set
Tables
Property
Table Creation Type Full Outer Join Support Set Operator Optimization SQL Global Optimization Sub Query Type Alias Pattern Fallback Table Type Intermediate Table Type
Value
Implicit Table Supported Enable Set Operator Optimization (if supported by database and [Sub Query Type]) Level 2: Level 1 + Merge Passes with Different SELECT WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...) AS True temporary table Derived table
Joins Query Optimizations Query Optimizations Query Optimization Tables Tables Tables
Redbrick 6.2
Cartesian Join Evaluation Reevaluate cartesian joins DSS Star Join Full Outer Join Support Sub Query Type Partial star join Supported WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2 ...) for multiple columns IN mm/dd/yyyy hh:nn:ss NOT NULL True temporary table
Select/Insert Tables Tables Redbrick 6.3 Joins Joins Joins Query Optimization
Cartesian Join Evaluation Reevaluate cartesian joins DSS Star Join Full Outer Join Support Sub Query Type Partial star join Supported WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2 ...) for multiple columns IN mm/dd/yyyy hh:nn:ss NOT NULL True temporary table
Maximum SQL/MDX Size 131072 Date Pattern SQL Date Format #0.[#1] yyyymmdd
Database Platform
Sybase Adaptive Server 12.x
Property Set
Joins Joins Tables Tables
Property
Join Type Max Tables In Join Intermediate Table Type Table Creation Type Max Columns in Index Max Tables In Join Merge Same Metric Expression Option SQL Date Format Fallback Table Type Intermediate Table Type Table Creation Type Join Type Max Tables In Join SQL Global Optimization Intermediate Table Type Table Creation Type Intermediate Table Index
Value
Join 89 50 True temporary table Implicit Table 1 16 Do not merge same metric expression yyyy-mm-dd True temporary table True temporary table Implicit Table Join 89 50 Level 2: Level 1 + Merge Passes with Different SELECT True temporary table Implicit Table Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem 16 Create a Primary Index (Teradata) or Partitioning Key (UDB) if the intermediate table index setting is set to create a primary index Supported SQL 89 Inner Join and SQL 92 Outer and Cross Join (Teradata) 64 30 Multiple count distinct, but count expression must be the same
Sybase IQ 12.x
Teradata V2R4.1
Indexing
Indexing Indexing
Full Outer Join Support Join Type Max Tables In Join Max Metric Alias Size Separate COUNT DISTINCT
Database Platform
Property Set
Query Optimizations Select/Insert Select/Insert Select/Insert Select/Insert Select/Insert Tables Tables
Property
Attribute Element Number Count Method Bulk Insert String Date Pattern Max Digits In Constant SQL Date Format UNION Multiple INSERT Fallback Table Type Intermediate Table Type
Value
Use ODBC cursor to calculate total element number ; DATE #0 18 yyyy-mm-dd Use UNION True temporary table Derived table
Teradata V2R5
Governing Indexing
Maximum SQL/MDX Size 1048576 Intermediate Table Index Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem 16 Create a Primary Index (Teradata) or Partitioning Key (UDB) if the intermediate table index setting is set to create a primary index Support SQL 89 Inner Join and SQL 92 Outer and Cross Join (Teradata) 64 30 Use ODBC cursor to calculate total element number Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery ; DATE #0 18 yyyy-mm-dd Use UNION True temporary table
Indexing Indexing
Joins Joins Joins Metrics Query Optimizations Query optimization Select/Insert Select/Insert Select/Insert Select/Insert Select/Insert Tables
Full Outer Join Support Join Type Max Tables In Join Max Metric Alias Size Attribute Element Number Count Method Sub Query Type
Bulk Insert String Date Pattern Max Digits In Constant SQL Date Format UNION Multiple INSERT Fallback Table Type
Database Platform
Property Set
Tables
Property
Intermediate Table Type
Value
Derived table
Teradata V2R5.1.x
Governing Indexing
Maximum SQL/MDX Size 1048576 Intermediate Table Index Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem 16 Create a Primary Index (Teradata) or Partitioning Key (UDB) if the intermediate table index setting is set to create a primary index Support SQL 89 Inner Join and SQL 92 Outer and Cross Join (Teradata) 64 30 Use ODBC cursor to calculate total element number Enable Set Operator Optimization (if supported by database and [Sub Query Type]) Level 2: Level 1 + Merge Passes with Different SELECT Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery ; DATE #0 18 yyyy-mm-dd Use UNION True temporary table Derived table
Indexing Indexing
Joins Joins Joins Metrics Query Optimizations Query Optimizations Query Optimizations Query optimization Select/Insert Select/Insert Select/Insert Select/Insert Select/Insert Tables Tables Teradata V2R6.x Governing
Full Outer Join Support Join Type Max Tables In Join Max Metric Alias Size Attribute Element Number Count Method Set Operator Optimization SQL Global Optimization Sub Query Type
Bulk Insert String Date Pattern Max Digits In Constant SQL Date Format UNION Multiple INSERT Fallback Table Type Intermediate Table Type
Database Platform
Property Set
Indexing
Property
Intermediate Table Index
Value
Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem 16 Create a Primary Index (Teradata) or Partitioning Key (UDB) if the intermediate table index setting is set to create a primary index Support SQL 89 Inner Join and SQL 92 Outer and Cross Join (Teradata) 64 30 Use ODBC cursor to calculate total element number Enable Set Operator Optimization (if supported by database and [Sub Query Type]) Level 2: Level 1 + Merge Passes with Different SELECT Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery ; DATE #0 18 yyyy-mm-dd Use UNION True temporary table Derived table
Indexing Indexing
Joins Joins Joins Metrics Query Optimizations Query Optimizations Query Optimizations Query optimization Select/Insert Select/Insert Select/Insert Select/Insert Select/Insert Tables Tables Teradata V2R6.1.x Governing Indexing
Full Outer Join Support Join Type Max Tables In Join Max Metric Alias Size Attribute Element Number Count Method Set Operator Optimization SQL Global Optimization Sub Query Type
Bulk Insert String Date Pattern Max Digits In Constant SQL Date Format UNION Multiple INSERT Fallback Table Type Intermediate Table Type
Maximum SQL/MDX Size 1048576 Intermediate Table Index Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem 16
Indexing
Database Platform
Property Set
Indexing
Property
Primary Index Control
Value
Create a Primary Index (Teradata) or Partitioning Key (UDB) if the intermediate table index setting is set to create a primary index Support SQL 89 Inner Join and SQL 92 Outer and Cross Join (Teradata) 64 30 Use ODBC cursor to calculate total element number Enable Set Operator Optimization (if supported by database and [Sub Query Type]) Level 2: Level 1 + Merge Passes with Different SELECT Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery ; DATE #0 18 yyyy-mm-dd Use UNION True temporary table Derived table
Joins Joins Joins Metrics Query Optimizations Query Optimizations Query Optimizations Query optimization Select/Insert Select/Insert Select/Insert Select/Insert Select/Insert Tables Tables
Full Outer Join Support Join Type Max Tables In Join Max Metric Alias Size Attribute Element Number Count Method Set Operator Optimization SQL Global Optimization Sub Query Type
Bulk Insert String Date Pattern Max Digits In Constant SQL Date Format UNION Multiple INSERT Fallback Table Type Intermediate Table Type
B
Introduction
This appendix provides reference information for the following security items:

Access Control List permissions for an object, page 844
Permissions for server governing and configuration, page 846
List of all privileges, page 847
Client-based privileges:
    Web Reporter privileges, page 849
    Web Analyst privileges, page 850
    Web Professional privileges, page 852
    Web MMT Option privilege, page 853
    Common privileges, page 854
    MicroStrategy Office privileges, page 854
    MicroStrategy Mobile privileges, page 855
    Desktop Analyst privileges, page 855
    Desktop Designer privileges, page 856
Administrator privileges:
    Architect privileges, page 858
    MicroStrategy Administrator privileges, page 858
    Integrity Manager privileges, page 859
    Administration privileges, page 859
Default user group privileges, page 861
Privileges for special user roles, page 862
Modify
Grouping
Full Control
Denied All
Default Custom...
Permissions Granted
Browse
Browse Read
View server definition properties
View statistics settings
Use the Job Monitor
Use the Database Connection Monitor
Use the User Connection Monitor
Use the Schedule Monitor
Use the Cache Monitor
Grouping
Administration
Permissions Granted
Browse Read Use Execute
Configuration
Default
Custom...: Allows you to create a custom combination of permissions; you can perform the tasks your custom selections allow.
Web Reporter privileges, page 849
Web Analyst privileges, page 850
Web Professional privileges, page 852
Web MMT Option privilege, page 853
Common privileges, page 854
MicroStrategy Office privileges, page 854
MicroStrategy Mobile privileges, page 855
Desktop Analyst privileges, page 855
Desktop Designer privileges, page 856
Architect privileges, page 858
MicroStrategy Administrator privileges, page 858
Integrity Manager privileges, page 859
Administration privileges, page 859
Privilege availability
Privileges can be assigned to users or groups only if the appropriate license has been purchased for a given product. A privilege is available if it is enabled in the User Editor, that is, if it can be selected and is not grayed out. If you have not purchased a license for a particular product, that product's privileges are grayed out in both the User Editor and the Security Role Editor. Certain privileges are marked with asterisks in the tables below, for the following reasons:
Privileges marked with * are included only if you have OLAP Services installed as part of the Intelligence Server. They are unavailable (grayed out) in the User Editor if you have not installed OLAP Services. Privileges marked with ** are included only if you have Report Services installed.
To determine your license information, use the License Manager tool to check whether OLAP Services or Report Services are available. For more information on using the User Editor, see Chapter 2, Setting Up User Security.
Privilege
Web user Web view History List
All MicroStrategy Web users that are licensed for MicroStrategy Report Services may view and interact with a document in Flash Mode. Certain interactions in Flash Mode have additional licensing requirements:
Users are required to license MicroStrategy Web Analyst to pivot row or column position in a grid or cross-tabular grid of data in Flash Mode. Users are required to license MicroStrategy Web Professional to modify the properties of Widgets used in a document in Flash Mode.
Privilege
Web add to History List
Web advanced drilling
Web alias objects
Web choose attribute form display
Web configure toolbars
Web create file location
Web create new report
Web create print location
Web drill on metrics
Web execute bulk export report
Web execute data mart reports
Web filter on selections
Web manage objects
Web modify Subtotals
Web pivot report
Web report details
Web report SQL
Web save to My Reports
Web save to Shared Reports
Web simple graph formatting
Web simultaneous execution
Web use locked headers
Web modify the list of report objects (use Object Browser - all objects)
Privilege
Web set column widths
Web use design mode
Web use report filter editor
Common privileges
The predefined MicroStrategy Web Reporter and Desktop Analyst user groups are assigned the common privileges by default.
Privilege
* Drill within Intelligent Cube Create application objects
Note: The appropriate editor privileges listed for Desktop privileges are also required.
Create new folder: create new folders
Create schema objects: create schema objects
Create shortcut: create shortcuts to objects
Schedule request: schedule a report
Use server cache: use the caches on the Intelligence Server
Privilege
Re-execute report against warehouse
Save custom autostyle
Send to e-mail
Use Data Explorer
Use Desktop
Use Grid Options
Use History List
Use Report Data Options
Use Report Editor
Note: If a user has this privilege but not the Use design view privilege (in Desktop Designer privileges), she can still create new reports from templates, but the Blank report option is not available.
Use Search Editor: use the search feature on all editors and Desktop
Use Thresholds Editor: use the Thresholds Editor
Define Freeform SQL report: define a new report using Freeform SQL, and see the Freeform SQL icon in the Create Report dialog box.
Define OLAP cube report: define a new report that accesses an OLAP cube.
Privilege
Define Query Builder report
Format graph
Modify the list of report objects (use Object Browser)
Use Consolidation Editor
Use Custom Group Editor
Use Data Mart Editor
Use design mode
Use Drill Map Editor
Use Find and Replace dialog
Use Formatting Editor
Use HTML Document Editor
Use Metric Editor
Use project documentation
Use Prompt Editor
Use Report Filter Editor
Use Subtotal Editor
Use Template Editor
Use VLDB Property Editor
View ETL information
Architect privileges
These privileges correspond to the functionality available to users of the Architect product, that is, project designers. The predefined MicroStrategy Architect group is assigned these privileges by default. License Manager counts any user who has any of these privileges as an Architect user.
Privilege
Bypass schema objects security access checks
Import function
Import OLAP cube
Use Architect editors
Note: A user cannot log into a project source in Object Manager unless she has this privilege on at least one project within that project source. Additionally, a user cannot open a project in Object Manager unless she has this privilege on that project.
Administration privileges
These privileges control access to the Administration features listed just below the project source's name. They also control access to options in the Administration menu.
Privilege
Bypass all object security access checks
Note: This privilege is required to use Project Duplicator, Project Mover, and Object Manager.
Create configuration objects: create configuration objects using Desktop
Note: This is a server-level privilege, to obtain server-level information. It cannot be broken out by project.
Use Cache Monitor: use the Cache Monitor dialog box
Use Cluster Monitor: use the Cluster Monitor dialog box
Use Database Connection Monitor: use the Database Connection Monitor dialog box
Use Database Instance Manager: use the Database Instance Manager dialog box
Use Job Monitor: use the Job Monitor dialog box
Use Project Monitor: use the Project Monitor dialog box
Privilege
Use Project Status editor
Use Schedule Manager
Use Schedule Monitor
Use Security Filter Manager
Use User Connection Monitor
Use User Manager
Web administration
In the case of the Web user groups, the Web Analyst inherits the privileges of the Web Reporter. The Web Professional inherits the privileges of both the Web Analyst and Web Reporter. Therefore, the Web Professional user group has the complete set of Web privileges.
In the case of Desktop user groups, the Desktop Designer inherits the privileges of the Desktop Analyst.
The System Administrators user group inherits the privileges of the System Monitors user group.
Default Privileges
Alias objects
Change user preferences
Choose attribute form display
Create new folder
Format graph
Modify report subtotals
Modify sorting
Modify list of report objects
Pivot report
Re-execute report against warehouse
Use data explorer
Use Design Mode
Use Desktop
Use formatting editor
Use grid options
Use History List
Use report data options
Use search editor
View SQL
LDAP Public group: No privileges are assigned to this group by default.
LDAP Users group: The common privileges are assigned to this group by default. For a list and description of these privileges, see Common privileges, page 854.
1 Log in to MicroStrategy Desktop. To assign most of the privileges in this section, you must log in as an administrator.
2 Expand the project source that contains the project(s) you want to assign privileges for, expand Administration, then expand User Manager.
3 Click on the user group that contains the user to whom you want to assign privileges.
4 On the right-hand side of Desktop, double-click on the user to whom you want to assign privileges. The User Editor opens.
5 Select the Project Access tab, and assign the appropriate permissions to the user based on the role you want them to be responsible for. Common user roles are described in the sections below.
6 When you are finished assigning privileges to this user, click OK to save your selections and close the User Editor.
Create application objects (under Common privileges, page 854)
Use Report Filter Editor (under Desktop Designer privileges, page 856 and Web Professional privileges, page 852)
To allow this (or another) role to apply security filters to other users in the system, grant the Use Security Filter Manager privilege (under Administration privileges, page 859). You may also need to change the Access Control List for the folders where this role must store security filters, so that this role has permission to modify the folders. See Access Control List permissions for an object, page 844.
C
ENTERPRISE MANAGER DATA MODEL AND OBJECT DEFINITIONS
Introduction
This appendix provides reference information for the following MicroStrategy Enterprise Manager items:
Enterprise Manager data model, page 866
Enterprise Manager statistics data dictionary, page 875
Enterprise Manager data warehouse tables, page 899
Enterprise Manager metadata tables, page 917
Enterprise Manager attributes, page 918
For more information about specific Enterprise Manager objects, use the Project Documentation link in the Enterprise Manager project. For information about configuring Enterprise Manager and how you can use it to help tune the MicroStrategy system, as well as information about setting up project documentation so it is available to networked users, see Analyzing system usage with Enterprise Manager, page 251.
Each table below lists column datatypes for Oracle, DB2, SQL Server, Teradata, and Access databases. Bold column names indicate primary keys and (I) indicates that the column is used in an index.
IS_SESSION_STATS, page 876
IS_PROJ_SESS_STATS, page 879
IS_DOCUMENT_STATS, page 880
IS_DOC_STEP_STATS, page 883
IS_REPORT_STATS, page 886
IS_REP_STEP_STATS, page 889
IS_CACHE_HIT_STATS, page 892
IS_REP_SQL_STATS, page 893
IS_SCHEDULE_STATS, page 895
IS_REP_SEC_STATS, page 896
IS_REP_COL_STATS, page 897
IS_REP_MANIP_STATS, page 898
IS_SESSION_STATS
Logs every Intelligence Server user session. This table is used when the User Sessions option is selected in the Statistics section of the Project Configuration editor. The IS_SESSION_STATS table does not contain any project-level information and is therefore not affected by statistics purges at the project level. For more information about statistics purges, see Improving database performance by purging statistics data, page 293.
Column | Description | Datatype
SESSIONID | Session object GUID | CHAR(32) in all databases
SERVERID | Server definition GUID | CHAR(32) in all databases
SERVERNAME | Server Instance Name | VARCHAR(255); TEXT(255) in Access
SERVERMACHINE | (Server machine name:port number) pair | VARCHAR(255); TEXT(255) in Access
PROJECTID | GUID of the project | CHAR(32) in all databases
CLIENTMACHINE | Client machine name, or IP address when the machine name is not available
SESSIONEVENT | Connection event or disconnection event: 0 = Project logout, 1 = Project login, 2 = Server sign out, 3 = Server sign in | NUMBER(10) in Oracle; INTEGER in the other databases
EVENTSOURCE | Source in which the SessionEvent originated: 1 = Desktop, 2 = Intelligence Server Administrator, 3 = Web Administrator, 4 = Intelligence Server, 5 = Project Upgrade, 6 = Web, 7 = Scheduler, 8 = Custom Application, 9 = Narrowcast Server, 10 = Object Manager, 11 = ODBO Provider, 12 = ODBO Cube Designer, 13 = Command Manager, 14 = Enterprise Manager, 15 = Command Line Interface, 16 = Project Builder, 17 = Configuration Wizard, 18 = MD Scan, 19 = Cache Utility, 20 = Fire Event, 21 = MicroStrategy Java admin clients, 22 = MicroStrategy Web Services, 23 = MicroStrategy Office, 24 = MicroStrategy tools | NUMBER(10) in Oracle; INTEGER in the other databases
RECORDTIME | Timestamp logged into the database according to the database's local system time | DATE in Oracle; DATETIME in SQL Server; TIMESTAMP in the other databases
SERVERINSTNAME | Reserved for future use | VARCHAR(255); TEXT(255) in Access
WEBMACHINE | Web server machine from which a web session originates | VARCHAR(255)
CONNECTTIME | Timestamp at which the session is opened | DATE in Oracle; DATETIME in SQL Server; TIMESTAMP in the other databases
DISCONNECTTIME | Timestamp at which the session is closed | DATE in Oracle; DATETIME in SQL Server; TIMESTAMP in the other databases
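Because every session event is logged as a row, IS_SESSION_STATS can be queried directly from the statistics database. A sketch, assuming only the columns documented above (the date range and grouping choice are illustrative):

```sql
-- Project logins per Intelligence Server machine for one day
select SERVERMACHINE,
       count(*) as LOGIN_COUNT
from   IS_SESSION_STATS
where  SESSIONEVENT = 1                      -- 1 = Project login
  and  CONNECTTIME >= DATE '2008-01-01'
  and  CONNECTTIME <  DATE '2008-01-02'
group by SERVERMACHINE
```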
IS_PROJ_SESS_STATS
Records statistics related to project sessions. This table is used when the Project Sessions option is selected in the Statistics section of the Project Configuration editor.
Column | Description | Datatype
SESSIONID | Session object GUID | CHAR(32) in all databases
SERVERID | Server definition GUID | CHAR(32) in all databases
SERVERNAME | Server Instance Name | VARCHAR(255); TEXT(255) in Access
SERVERMACHINE | The Intelligence Server machine name and IP address | VARCHAR(255) in all databases
USERID | GUID of the user performing the action | CHAR(32) in all databases
PROJECTID | Project GUID | CHAR(32) in all databases
Column
CONNECTTIME
Description
Timestamp at which the session is opened.
Oracle Datatype
DATE
DB2 Datatype
TIMESTA MP TIMESTA MP
Teradata Datatype
TIMESTA MP TIMESTA MP
Access Datatype
TIMESTA MP TIMESTA MP
DISCONNECTTI Timestamp at which ME the session is closed. EVENTNOTES Description of the SessionEvent Server/Project connection or disconnection Current timestamp when the record is inserted into the statistics database.
DATE
RECORDTIME
DATETIM E
DATE
TIMESTA MP
TIMESTA MP
TIMESTA MP
IS_DOCUMENT_STATS
Tracks document executions that pass through Intelligence Server. This table is used when the Document Jobs (Basic) option is selected in the Statistics section of the Project Configuration editor.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
JOBID | Job ID | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SESSIONID | GUID for user session | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERID | GUID for serverDef | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERNAME | Server Instance Name | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
SERVERMACHINE | Server machine name, or IP address | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
PROJECTID | GUID for project | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
USERID | GUID for user | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
REQUESTRECTIME | Timestamp when the request is received. An absolute time point. | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
REQUESTQUEUETIME | Time duration between the request receive time and the document job first step start. An offset. | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
STARTTIME | Time duration between the request receive time and the time the document job was created. An offset. | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
FINISHTIME | Time duration between the request receive time and the time the document job's last step finished. An offset. | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
EXECSTATUS | Status of execution: JOB_READY = 0; JOB_EXECUTING = 1; JOB_WAITING = 2; JOB_COMPLETED = 3; JOB_ERROR = 4; JOB_CANCELED = 5; JOB_STOPPED = 6; JOB_WAITING_ON_GOVERNER = 7; JOB_WAITING_FOR_AUTOPROMPT = 8; JOB_WAITING_FOR_PROJECT = 9; JOB_WAITING_FOR_CACHE = 10; JOB_WAITING_FOR_CHILDREN = 11; JOB_WAITING_FOR_FETCHING_RESULTS = 12 | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
EXECERRORCODE | Error code of the execution | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
 | Number of reports contained in the document | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
 | Was this document job canceled? | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
CACHEDINDICATOR | Was the document cached? | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
RECORDTIME | Timestamp logged into database, according to database local system time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
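Because REQUESTQUEUETIME, STARTTIME, and FINISHTIME are recorded as offsets of the absolute REQUESTRECTIME rather than as absolute times, absolute step timestamps are recovered by addition. A minimal sketch; the sample values, and the assumption that the offsets are expressed in seconds, are illustrative only:

```python
from datetime import datetime, timedelta

# Sketch: recover absolute times from IS_DOCUMENT_STATS-style offsets.
# REQUESTRECTIME is an absolute time point; STARTTIME and FINISHTIME are
# offsets from it. Values (and the seconds unit) are invented for illustration.
request_rec_time = datetime(2008, 5, 1, 9, 0, 0)   # absolute request receive time
start_offset, finish_offset = 2.5, 10.0            # offsets from REQUESTRECTIME

job_started = request_rec_time + timedelta(seconds=start_offset)
job_finished = request_rec_time + timedelta(seconds=finish_offset)
print(job_finished - job_started)  # 0:00:07.500000
```

The same arithmetic applies to the analogous offset columns in IS_REPORT_STATS.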
IS_DOC_STEP_STATS
Tracks each step in the document execution process. This table is used when the Document Jobs (Detailed) option is selected in the Statistics section of the Project Configuration editor.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
JOBID | Job ID | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
STEPSEQUENCE | Sequence number for a job's steps | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SESSIONID | User session GUID | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERID | GUID for server definition | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
STEPTYPE | Type of the step. For details, see Report and document steps, page 915. 1: MD object request step; 2: Close job; 3: SQL Engine server task; 4: Query Engine server task; 5: Analytical Engine server task; 6: Resolution server task; 7: Report net server task; 8: Element request step; 9: Get report instance; 10: Error message sender task; 11: Output message sender task; 12: Find report cache task; 13: Document execute step; 14: Document sender step; 15: Update report cache task; 16: Request execute step; 17: Datamart execute step; 18: Document data preparation; 19: Document formatting; 20: Document manipulation; 21: Apply view context; 22: Export engine | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
STARTTIME | Timestamp of the step start time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
FINISHTIME | Timestamp of the step finish time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
QUEUETIME | Time duration between last step finish and next step start, in milliseconds | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
CPUTIME | CPU time used during this step, in milliseconds | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
STEPDURATION | FinishTime minus StartTime, in milliseconds | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
RECORDTIME | Timestamp logged into database, according to database local system time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
SERVERMACHINE | (Server machine name:port number) pair | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
IS_REPORT_STATS
Tracks job level statistics information about every completed report execution that occurs in Intelligence Server. This table is used when the Report Jobs (Basic) option is selected in the Statistics section of the Project Configuration editor.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
JOBID | Job ID | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SESSIONID | Session GUID | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERNAME | Server Instance Name | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
SERVERMACHINE | Server machine name, or IP address when the machine name is not available | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
PROJECTID (I) | GUID for project | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
USERID | GUID for user | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
REPORTID | GUID for report | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
FILTERID | GUID for filter | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
TEMPLATEID | GUID for template. For an ADHOC report, the Template ID is created on the fly and there is no corresponding object with this GUID in the object lookup table. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
PARENTJOBID | JOBID for the parent document job, if the current job is a document job's child; -1 if the current job is NOT a document job's child | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
DBINSTANCEID | Physical DBInstance GUID | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
DBUSERID | DB User ID for the physical DBInstance | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
 | 1 if this job is a document job's child, otherwise 0 | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
REQUESTRECTIME | Timestamp when the request is received | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
 | Total queue time of all steps in this request | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
 | Time passed before the first step started. An offset of the RequestRecTime. | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
EXECFINISHTIME | Time passed when the last step is finished. An offset of the RequestRecTime. | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
SQLPASSES | Number of SQL passes | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
JOBSTATUS | Status of the job execution | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
JOBERRORCODE | Job Error Code. A long number indicating the type of error | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
CANCELINDICATOR | Was the job canceled? | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
ADHOCINDICATOR | Was the report ad hoc? This includes any executed job that is not saved in the project as a report (for example: drilling results, attribute element prompts, creating and running a report before saving it). | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
PROMPTINDICATOR | Was the report prompted? | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
DATAMARTINDICATOR | Did the report create a data mart? | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
ELEMENTLOADINDIC | Was the report a result of an element browse? | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
DRILLINDICATOR | Was the report the result of a drill? | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SCHEDULEINDICATOR | Was this report run from a scheduler? | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
CACHECREATEINDIC | Did the report create a cache? | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
PRIORITYNUMBER | QueryExecution Step priority | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
JOBCOST | User-supplied report cost | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
FINALRESULTSIZE | Number of rows in the report | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
RECORDTIME (I) | Timestamp logged into database, according to database local system time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
REPORTTYPE | An internal enumeration for the type of report. For future use. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
ERRORMESSAGE | The error message displayed to the user when an error is encountered. | | | | |
DRILLTEMPLATEUNIT | GUID of the object that was drilled from. For future use. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
NEWOBJECT | GUID of the object that was drilled to. For future use. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
DRILLTYPE | | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
IS_REP_STEP_STATS
Tracks each step in the report execution process. This table is used when the Report Jobs (Detailed) option is selected in the Statistics section of the Project Configuration editor.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
STEPSEQUENCE | Sequence number for a job's steps | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
JOBID (I) | Job ID | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SESSIONID (I) | User session GUID | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
STEPTYPE | Type of the step. For details, see Report and document steps, page 915. 1: MD object request step; 2: Close job; 3: SQL Engine server task; 4: Query Engine server task; 5: Analytical Engine server task; 6: Resolution server task; 7: Report net server task; 8: Element request step; 9: Get report instance; 10: Error message sender task; 11: Output message sender task; 12: Find report cache task; 13: Document execute step; 14: Document sender step; 15: Update report cache task; 16: Request execute step; 17: Datamart execute step; 18: Document data preparation; 19: Document formatting; 20: Document manipulation; 21: Apply view context; 22: Export engine | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
STARTTIME | Timestamp of the step start time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
FINISHTIME | Timestamp of the step finish time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
QUEUETIME | Time duration between last step finish and next step start, in milliseconds | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
CPUTIME | CPU time used during this step, in milliseconds | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
STEPDURATION | FinishTime minus StartTime, in milliseconds | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
RECORDTIME | Timestamp logged into database, according to database local system time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
SERVERMACHINE | (Server machine name:port number) pair | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
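Since STEPDURATION is FinishTime minus StartTime in milliseconds, summing it (along with QUEUETIME) per JOBID gives a rough breakdown of where each report job spent its time. A sketch against a reduced, in-memory copy of IS_REP_STEP_STATS with invented data:

```python
import sqlite3

# Sketch: per-job time totals from IS_REP_STEP_STATS, reduced to the
# columns used here. STEPTYPE values follow the enumeration above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE IS_REP_STEP_STATS (
    JOBID INTEGER, STEPSEQUENCE INTEGER, STEPTYPE INTEGER,
    QUEUETIME REAL, CPUTIME REAL, STEPDURATION REAL)""")
con.executemany("INSERT INTO IS_REP_STEP_STATS VALUES (?, ?, ?, ?, ?, ?)", [
    (101, 1, 3, 10.0, 40.0, 50.0),   # SQL Engine server task
    (101, 2, 4, 5.0, 200.0, 400.0),  # Query Engine server task
    (102, 1, 5, 0.0, 30.0, 30.0),    # Analytical Engine server task
])
rows = con.execute("""
SELECT JOBID, SUM(QUEUETIME) AS queue_ms, SUM(STEPDURATION) AS exec_ms
FROM IS_REP_STEP_STATS GROUP BY JOBID ORDER BY JOBID
""").fetchall()
print(rows)  # [(101, 15.0, 450.0), (102, 0.0, 30.0)]
```

The same aggregation works unchanged against IS_DOC_STEP_STATS, which shares these columns.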
IS_CACHE_HIT_STATS
Tracks job executions that hit the report cache. This table is used when the Caches option is selected in the Statistics section of the Project Configuration editor.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
CACHEINDEX | A sequential number for this table | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SESSIONID | GUID for user session | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERID | GUID for server definition | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
CACHEHITTYPE | Type of cache hit; see the CacheHitType and JobID combinations below | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
JOBID | Job ID for partial cache hit, or document parent job ID if the cache hit originated from a document child report. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
RECORDTIME | Timestamp logged into database, according to database local system time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
SERVERMACHINE | (Server machine name:port number) pair | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
The table below lists combinations of CacheHitType and JobID that can occur in the IS_CACHE_HIT_STATS table and what those combinations mean.
CacheHitType | JobID | Description
0 | -1 | For a normal report, full cache hit
0 | real JobID | For a normal report, partial cache hit
1 | parent JobID | For a child report from a document, full cache hit, so there is no child report job
2 | child JobID | For a child report from a document, partial cache hit; the child report has a job
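The combinations above can be expressed as a small classification routine; the function and return labels below are illustrative, not part of the MicroStrategy schema:

```python
# Sketch: classify an IS_CACHE_HIT_STATS record from its CacheHitType and
# JobID values, following the combination table above. Names are hypothetical.
def classify_cache_hit(cache_hit_type: int, job_id: int) -> str:
    if cache_hit_type == 0:
        # Normal report: -1 marks a full cache hit, a real Job ID a partial one.
        return ("full hit (normal report)" if job_id == -1
                else "partial hit (normal report)")
    if cache_hit_type == 1:
        # Document child, full cache hit: no child job exists, so JobID
        # holds the parent document's Job ID.
        return "full hit (document child; JobID is the parent job)"
    if cache_hit_type == 2:
        # Document child, partial cache hit: the child report has its own job.
        return "partial hit (document child; JobID is the child job)"
    raise ValueError(f"unexpected CacheHitType: {cache_hit_type}")

print(classify_cache_hit(0, -1))  # full hit (normal report)
```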
IS_REP_SQL_STATS
Enables access to the SQL for a given report execution. This table is used when the Job SQL option is selected in the Statistics section of the Project Configuration editor.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
JOBID | Job ID | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SESSIONID | GUID for user session | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERID | GUID for server def | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
 | | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
 | | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
 | | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
 | | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
 | | TEXT | VARCHAR2(4000) | LONG VARCHAR | VARCHAR(26000) | TEXT(255)
 | | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
 | | TEXT | VARCHAR2(4000) | LONG VARCHAR | VARCHAR(4000) | TEXT(255)
RECORDTIME | Timestamp logged into database, according to database local system time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
SERVERMACHINE | (Server machine name:port number) pair | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
IS_SCHEDULE_STATS
Tracks which reports have been run as the result of a schedule. This table is used when the Schedules option is selected in the Statistics section of the Project Configuration editor.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
JOBID | JOBID if HITCACHE = 0; CacheIndex if HITCACHE = 1. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SESSIONID | GUID for user session | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERID | GUID for server definition | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SCHEDULEID | GUID for trigger object | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
 | 0 for Report, 1 for Document | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
HITCACHE | 0 if the job does not hit the cache, 1 if it does. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
RECORDTIME | Timestamp logged into database, according to database local system time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
SERVERMACHINE | (Server machine name:port number) pair | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
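Because the JOBID column in IS_SCHEDULE_STATS changes meaning with HITCACHE (a report Job ID when 0, a CacheIndex when 1), joins against it must be conditional. A minimal sketch using SQLite, with the tables reduced to hypothetical layouts containing only the columns used here:

```python
import sqlite3

# Sketch: resolve scheduled executions from IS_SCHEDULE_STATS, joining to
# IS_REPORT_STATS when HITCACHE = 0 and to IS_CACHE_HIT_STATS when
# HITCACHE = 1. Table layouts are reduced; data is invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE IS_SCHEDULE_STATS (JOBID INTEGER, SCHEDULEID CHAR(32), HITCACHE INTEGER);
CREATE TABLE IS_REPORT_STATS (JOBID INTEGER, REPORTID CHAR(32));
CREATE TABLE IS_CACHE_HIT_STATS (CACHEINDEX INTEGER, PROJECTID CHAR(32));
INSERT INTO IS_SCHEDULE_STATS VALUES (101, 'SCHED-A', 0), (7, 'SCHED-B', 1);
INSERT INTO IS_REPORT_STATS VALUES (101, 'REP-1');
INSERT INTO IS_CACHE_HIT_STATS VALUES (7, 'PROJ-1');
""")
rows = con.execute("""
SELECT s.SCHEDULEID, r.REPORTID, c.PROJECTID
FROM IS_SCHEDULE_STATS s
LEFT JOIN IS_REPORT_STATS r    ON s.HITCACHE = 0 AND r.JOBID = s.JOBID
LEFT JOIN IS_CACHE_HIT_STATS c ON s.HITCACHE = 1 AND c.CACHEINDEX = s.JOBID
ORDER BY s.SCHEDULEID
""").fetchall()
print(rows)  # [('SCHED-A', 'REP-1', None), ('SCHED-B', None, 'PROJ-1')]
```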
IS_REP_SEC_STATS
Tracks executions that used security filters. This table is used when the Security Filters option is selected in the Statistics section of the Project Configuration editor.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
JOBID | Job ID | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SESSIONID | Session GUID | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERID | Serverdef GUID | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SECURITYFILTERID | Security Filter GUID | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
RECORDTIME | Timestamp logged into database, according to database local system time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
SERVERMACHINE | (Server machine name:port number) pair | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
IS_REP_COL_STATS
Tracks the column-table combinations used in the SQL during report executions. This table is used when the Columns/Tables option is selected in the Statistics section of the Project Configuration editor.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
SESSIONID | GUID for user session | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERID | GUID for server definition | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
TABLEID | GUID of the database tables used | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
COLUMNID | GUID of the columns used | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
JOBID | Report job ID | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
SQLCLAUSETYPEID | The SQL clause in which the column is being used | NUMERIC(38, 0) | NUMERIC(38, 0) | NUMERIC(38, 0) | NUMERIC(38, 0) | NUMERIC(38, 0)
COUNTER | The number of times a specific column/table/clause type combination occurs within a report execution. | NUMERIC(38, 0) | NUMERIC(38, 0) | NUMERIC(38, 0) | NUMERIC(38, 0) | NUMERIC(38, 0)
RECORDTIME | Timestamp logged into database, according to database local system time | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
COLUMNNAME | Description of the column used | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
SERVERMACHINE | (Server machine name:port number) pair | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | TEXT(255)
PROJECTID | GUID of the project. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
IS_REP_MANIP_STATS
Records report manipulation statistics related to user actions after a report has been executed. This table is reserved for future use.
Column | Description | SQL Server | Oracle | DB2 | Teradata | Access
SESSIONID | GUID of the current user session. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
JOBID | The GUID of the job. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
 | Reserved for future use. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
 | Type of manipulation: pivot, page-by, sorting, and so on. | INTEGER | NUMBER(10) | INTEGER | INTEGER | INTEGER
MANIPXML | Reserved for future use. | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255)
QUEUETIME | Aggregated queue time across all processing units used. | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
STARTTIME | Time when the user request is received. | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
FINISHTIME | Time when Intelligence Server finishes processing a user request. | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
 | Time spent executing a user action: FinishTime minus StartTime, in milliseconds. | FLOAT | FLOAT | DOUBLE | FLOAT | DOUBLE
RECORDTIME | Current timestamp when the record is inserted into the statistics database. | DATETIME | DATE | TIMESTAMP | TIMESTAMP | TIMESTAMP
SERVERID | Server definition GUID. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
PROJECTID | Project GUID. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
SERVERMACHINE | The Intelligence Server machine name and IP address. | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255) | VARCHAR(255)
USERID | GUID of the user performing the action. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
REPORTID | GUID of the report on which the user's action is performed. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
CHILDJOBID | Job ID resulting from SQL manipulation. Null for any other manipulation type. | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32) | CHAR(32)
Fact tables, page 900
Lookup tables, page 911
Transformation tables, page 913
Indicator lookup tables, page 914
Relationship tables, page 917

Temporary tables are created and used by the data loading process when data is migrated from the statistics tables to the Enterprise Manager warehouse. These temporary tables include:
Fact tables
IS_SESSION_FACT
Enables session concurrency analysis. Keeps data on each session for each hour of connectivity.
Source tables
IS_SESSION: Lookup table for session objects
DT_DAY: Lookup table for dates
TM_HOUR: Lookup table for hours
Column Name | Column Description
EM_RECORD_TS | Timestamp when information was recorded by IS into the _Stats tables.
EM_LOAD_TS | Timestamp when the Enterprise Manager ETL process began loading records.
IS_SESSION_ID | GUID of the session object.
IS_SERVER_ID | Integer ID of the server where the session was created.
EM_USER_ID | Integer ID of the user who created the session.
IS_CONNECT_TS | Timestamp of the beginning of the session (sign-in).
IS_DISCONNECT_TS | Timestamp of the end of the session (sign-out). NULL if the session is still open at the time of EM data load.
IS_TMP_DISCON_TS | Represents the temporary end of a session, if that session is still open. Used to calculate the session time.
IS_CONNEC_M_ID | Integer representation of the day and hour when the connection began. Format: YYYYMMDDHH (24 hours).
IS_DISCON_M_ID | Integer representation of the day and hour when the connection ended. Format: YYYYMMDDHH (24 hours).
IS_SESSION_TM_SEC | Duration within the hour, in seconds, of the session.
IS_SESSION_OPENED | Indicates (0/1) whether the session was opened within the hour (hour_id).
IS_SESSION_CLOSED | Indicates (0/1) whether the session was closed within the hour (hour_id).
EM_CONNECT_SOURCE | Integer ID of the source component through which the session was established (for example, Desktop, Web).
EM_DISCONN_REASON | Integer ID of the reason for disconnection (for example, timeout, logout). Always -1; information not yet available.
DAY_ID | Integer ID of the day. Format: YYYYMMDD.
HOUR_ID | Integer ID of the hour. Format: HH (24 hours).
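Since IS_SESSION_FACT keeps one row per session for each hour of connectivity, hourly concurrency reduces to a row count per (DAY_ID, HOUR_ID). A sketch against a reduced, in-memory copy of the table with invented data:

```python
import sqlite3

# Sketch: hourly session concurrency from IS_SESSION_FACT. Only the columns
# needed for the count are modeled; session S1 spans two hours, so it
# contributes one row to each hour.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IS_SESSION_FACT (IS_SESSION_ID TEXT, DAY_ID INTEGER, HOUR_ID INTEGER)")
con.executemany("INSERT INTO IS_SESSION_FACT VALUES (?, ?, ?)", [
    ("S1", 20080501, 9), ("S1", 20080501, 10),   # one session across two hours
    ("S2", 20080501, 10), ("S3", 20080501, 10),
])
rows = con.execute("""
SELECT DAY_ID, HOUR_ID, COUNT(DISTINCT IS_SESSION_ID) AS concurrent_sessions
FROM IS_SESSION_FACT GROUP BY DAY_ID, HOUR_ID ORDER BY DAY_ID, HOUR_ID
""").fetchall()
print(rows)  # [(20080501, 9, 1), (20080501, 10, 3)]
```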
IS_PROJECT_FACT
Represents the number of logins to a project within a given day, by user session and project. This table is created as a view from the source tables listed below.
Source tables
IS_SESSION_STATS: _Stats table for Intelligence Server to record session activity
IS_SERVER: Lookup table for servers
EM_USER: Lookup table for users
IS_PROJ: Lookup table for projects
Column Name | Column Description
EM_RECORD_TS | Timestamp when the information was recorded by IS into the _Stats table.
EM_LOAD_TS | Timestamp when the EM ETL process began loading records.
IS_SESSION_ID | GUID of the session object.
IS_SERVER_ID | Integer ID of the server where the session was created.
EM_USER_ID | Integer ID of the user who created the session.
IS_PROJ_ID | Integer ID of the project logged into.
IS_PROJ_LOGIN_NBR | Number of times the project was logged into.
DAY_ID | Integer ID of the day. Format: YYYYMMDD.
IS_DOC_STEP_FACT
Contains information on each processing step of a document execution. This table is created as a view from the source tables below.
Source tables
IS_DOC_STEP_STATS: _Stats table for Intelligence Server to record processing steps of document execution
IS_PROJ: Lookup table for projects
IS_DOCUMENT_STATS: _Stats table for Intelligence Server to record document executions
IS_SESSION: Lookup table for session objects
Column Name | Column Description
EM_RECORD_TS | Timestamp when the information was recorded by IS into the _Stats table.
EM_LOAD_TS | Timestamp when the EM ETL process began loading records.
IS_PROJ_ID | Integer ID of the project logged into.
IS_DOC_JOB_SES_ID | GUID of the session that created the cache if a cache was hit in this execution; otherwise, the current session (default behavior).
IS_DOC_JOB_ID | Integer ID of the document job execution.
IS_DOC_STEP_SEQ_ID | Integer ID of the document job execution step.
IS_DOC_STEP_TYP_ID | Integer ID of the document job execution step type.
IS_DOC_EXEC_ST_TS | Timestamp of the execution start.
IS_DOC_EXEC_FN_TS | Timestamp of the execution finish.
IS_DOC_QU_TM_MS | Queue duration in milliseconds.
IS_DOC_CPU_TM_MS | CPU duration in milliseconds.
IS_DOC_EXEC_TM_MS | Execution duration in milliseconds.
IS_DOC_FACT
Contains information on the execution of a document job. Primary key:
Source tables
IS_DOCUMENT_STATS: _Stats table for Intelligence Server to record document executions
EM_IS_LAST_UPD_2: Configuration table which drives the loading process (for example, data loading window)
EM_USER: Lookup table for users
IS_DOC: Lookup table for documents
IS_SESSION: Lookup table for session objects
Column Name | Column Description
EM_RECORD_TS | Timestamp when the information was recorded by IS into the _Stats table.
EM_LOAD_TS | Timestamp when the EM ETL process began loading records.
IS_SERVER_ID | Integer ID of the server where the session was created.
IS_SESSION_ID | GUID of the current session object.
IS_DOC_JOB_SES_ID | GUID of the session that created the cache if a cache was hit in this execution; otherwise, the current session (default behavior).
IS_DOC_JOB_ID | Integer ID of the document job execution.
IS_DOC_CACHE_IDX | Integer ID of the cache hit index; similar to Job ID but only for cache hits; -1 if no cache hit. Always 0; not yet available, documents not currently cached.
IS_CACHE_HIT_ID | Indicator (0/1) of whether the job hit a cache. Always 0; not yet available, documents not currently cached.
IS_CACHE_CREATE_ID | Indicates whether a cache was created. Always 0, not yet available.
IS_PROJ_ID | Integer ID of the project logged into.
EM_USER_ID | Integer ID of the user who created the session.
IS_DOC_ID | Integer ID of the document that was executed.
IS_DOC_REQ_TS | Timestamp of the execution request; request of the current session.
IS_DOC_EXEC_REQ_TS | Timestamp of the execution request; request time of the original execution request if a cache was hit, otherwise the current session's request time.
IS_DOC_EXEC_ST_TS | Timestamp of the execution start.
IS_DOC_EXEC_FN_TS | Timestamp of the execution finish.
IS_DOC_QU_TM_MS | Queue duration in milliseconds.
IS_DOC_CPU_TM_MS | CPU duration in milliseconds.
IS_DOC_EXEC_TM_MS | Execution duration in milliseconds.
IS_DOC_NBR_REPORTS | Number of reports contained in the document job execution.
IS_DOC_NBR_PU_STPS | Number of steps processed in the document job execution.
IS_DOC_NBR_PROMPTS | Number of prompts in the document job execution.
EM_JOB_STATUS_ID | Integer ID of the job execution status.
IS_JOB_ERROR_ID | Integer ID of the job's error message, if any.
IS_CANCELLED_ID | Indicator (0/1) of whether the job was cancelled.
IS_PROMPT_ID | Indicator (0/1) of whether the job was prompted.
DAY_ID | Integer ID of the day. Format: YYYYMMDD.
MINUTE_ID | Integer ID of the minute. Format: HHMM (24 hours).
IS_REP_FACT
Contains information about report job executions. Primary key:
Source tables
IS_REPORT_STATS: _Stats table containing information about report job executions
IS_SESSION: Lookup table for session objects
EM_IS_LAST_UPD_2: Configuration table which drives the loading process (for example, data loading window)
IS_REP: Lookup table for report objects
IS_TEMP: Lookup table for template objects
IS_FILT: Lookup table for filter objects
IS_REP_SEC_STATS: _Stats table containing information about job executions with security filters
IS_SCHEDULE_STATS: _Stats table containing information about job executions run by a schedule
IS_SCHED: Lookup table for schedule objects
IS_DOCUMENT_STATS: _Stats table containing information about document job executions
IS_DOC: Lookup table for document objects
IS_CACHE_HIT_STATS: _Stats table containing information about job executions that hit a cache
IS_DOC_FACT: Fact table containing information about document job executions
Column Name | Column Description
EM_RECORD_TS | Timestamp when the information was recorded by IS into the _Stats table.
EM_LOAD_TS | Timestamp when the EM ETL process began loading records.
IS_SERVER_ID | Integer ID of the server where the session was created.
IS_SESSION_ID | GUID of the current session object.
IS_REP_JOB_SES_ID | GUID of the session that created the cache if a cache was hit in this execution; otherwise, the current session (default behavior).
IS_REP_JOB_ID | Integer ID of the report job execution.
IS_REP_CACHE_IDX | Integer ID of the cache hit index; similar to Job ID but only for cache hits. -1 if no cache hit.
IS_CACHE_HIT_ID | Indicator (0/1) of whether the job hit a cache.
IS_CACHE_CREATE_ID | Indicates whether a cache was created.
EM_USER_ID | Integer ID of the user who created the session.
EM_DB_USER_ID | DB User used to log in to the warehouse.
IS_DB_INST_ID | Integer ID of the db instance object.
IS_PROJ_ID | Integer ID of the project logged in to.
IS_REP_ID | Integer ID of the report object.
IS_EMB_FILT_IND_ID | Indicates (0/1) whether the report filter is embedded.
IS_EMB_TEMP_IND_ID | Indicates (0/1) whether the report template is embedded.
IS_FILT_ID | Integer ID of the filter object.
IS_TEMP_ID | Integer ID of the template object.
IS_DOC_JOB_ID | Integer ID of the parent document execution if the current report is a child of a document. Otherwise, -1.
IS_DOC_ID | Integer ID of the parent document object if the current report is a child of a document. Otherwise, -2.
IS_REP_REQ_TS | Timestamp of the execution request; request of the current session.
IS_REP_EXEC_REQ_TS | Timestamp of the execution request; request time of the original execution request if a cache was hit, otherwise the current session's request time.
IS_REP_EXEC_ST_TS | Timestamp of the execution start.
IS_REP_EXEC_FN_TS | Timestamp of the execution finish.
IS_REP_QU_TM_MS | Queue duration in milliseconds.
IS_REP_CPU_TM_MS | CPU duration in milliseconds.
IS_REP_EXEC_TM_MS | Execution duration in milliseconds.
IS_REP_ELAPS_TM_MS | Difference between start time and finish time; includes time for prompt responses.
IS_REP_NBR_SQL_PAS | Number of SQL passes.
IS_REP_RESULT_SIZE | Number of rows in the result set.
IS_REP_SQL_LENGTH | Number of characters. Not yet available.
IS_REP_NBR_TABLES | Number of tables. Not yet available.
IS_REP_NBR_PU_STPS | Number of steps processed in the execution.
IS_REP_NBR_PROMPTS | Number of prompts in the report execution.
EM_JOB_STATUS_ID | Integer ID of the job execution status.
IS_JOB_ERROR_ID | Integer ID of the job's error message, if any.
IS_ERROR_IND_ID | Indicator (0/1) of whether the job got an error.
IS_DB_ERROR_IND_ID | Indicator (0/1) of whether the database returned an error.
IS_CANCELLED_ID | Indicator (0/1) of whether the job was cancelled.
IS_AD_HOC_ID | Indicator (0/1) of whether the job was created ad hoc.
IS_PROMPT_IND_ID | Indicator (0/1) of whether the job was prompted.
IS_DATAMART_ID | Indicator (0/1) of whether the job created a datamart.
IS_ELEM_LOAD_ID | Indicator (0/1) of whether the job was the result of an element load.
IS_DRILL_ID | Indicator (0/1) of whether the job was the result of a drill.
IS_SEC_FILT_IND_ID | Indicator (0/1) of whether the job had a security filter associated with it.
IS_SEC_FILT_ID | Integer ID of the security filter applied.
IS_SCHED_ID | Integer ID of the schedule that executed the job.
IS_SCHED_IND_ID | Indicator (0/1) of whether the job was executed by a schedule.
IS_REP_PRIO_NBR | Priority of the report execution.
IS_REP_COST_NBR | Cost of the report execution.
DAY_ID | Integer ID of the day. Format: YYYYMMDD.
MINUTE_ID | Integer ID of the minute. Format: HHMM (24 hours).
DRILLFROM | Integer ID of an attribute, metric, or other object that is drilled from.
DRILLFROM_OT_ID | Integer ID for the object type of the object that is drilled from.
DRILLTO | Integer ID of an attribute, template, or other object that is drilled to.
DRILLTO_OT_ID | Integer ID for the object type of the object that is drilled to.
DRILLTYPE | Integer flag indicating the type of drill performed (for example, drill to template, drill to attribute, and so on).
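Two analyses that this fact table is typically used for are average report execution time and cache-hit rate per project. A sketch against a reduced, in-memory copy of IS_REP_FACT (only the three columns used are modeled; the data is invented, and this is not the Enterprise Manager ETL itself):

```python
import sqlite3

# Sketch: per-project averages over IS_REP_FACT. IS_CACHE_HIT_ID is a 0/1
# indicator, so its average is the cache-hit fraction.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE IS_REP_FACT (
    IS_PROJ_ID INTEGER, IS_REP_EXEC_TM_MS REAL, IS_CACHE_HIT_ID INTEGER)""")
con.executemany("INSERT INTO IS_REP_FACT VALUES (?, ?, ?)", [
    (1, 1000.0, 0), (1, 3000.0, 1), (2, 500.0, 0), (2, 1500.0, 0),
])
rows = con.execute("""
SELECT IS_PROJ_ID,
       AVG(IS_REP_EXEC_TM_MS)       AS avg_exec_ms,
       AVG(IS_CACHE_HIT_ID) * 100.0 AS cache_hit_pct
FROM IS_REP_FACT GROUP BY IS_PROJ_ID ORDER BY IS_PROJ_ID
""").fetchall()
print(rows)  # [(1, 2000.0, 50.0), (2, 1000.0, 0.0)]
```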
IS_REP_SQL_FACT
Contains the SQL that is executed on the warehouse by report job executions. Created as a view from the source tables listed below.
Source tables
IS_REP_FACT: Contains information about report job executions IS_PROJ: Lookup table for projects IS_REP_SQL_STATS: _Stats table which contains SQL statement information
Column Name
EM_RECORD_TS EM_LOAD_TS IS_PROJ_ID IS_REP_JOB_SES_ID IS_REP_JOB_ID IS_PASS_SEQ_NBR IS_REP_EXEC_ST_TS IS_REP_EXEC_FN_TS IS_REP_EXEC_TM_MS IS_REP_SQL_STATEM IS_REP_SQL_LENGTH IS_REP_NBR_TABLES IS_PASS_TYPE_ID IS_REP_DB_ERR_MSG
Column Description
Timestamp when the information was recorded by IS into the _Stats table. Timestamp when the EM ETL process began loading records. Integer ID of the project logged into. GUID of the current session object. Integer ID of the report job execution. Integer ID of the sequence of the pass. Timestamp of the execution start. Timestamp of the execution finish. Execution duration in milliseconds. SQL statement. Length of SQL statement. Number of tables accessed by SQL statement. Integer ID of the type of SQL pass. Error returned from database; NULL if no error.
IS_REP_STEP_FACT
Contains information about the processing steps through which the report execution passes. Created as a view from the source tables listed below.
Source tables
IS_REP_STEP_STATS: _Stats table containing information about report job processing steps IS_REPORT_STATS: _Stats table containing information about report job executions IS_SESSION: Lookup table for session objects IS_PROJ: Lookup table for projects
EM_RECORD_TS: Timestamp when the information was recorded by IS into the _Stats table.
EM_LOAD_TS: Timestamp when the EM ETL process began loading records.
IS_PROJ_ID: Integer ID of the project logged into.
IS_REP_JOB_SES_ID: GUID of the current session object.
IS_REP_JOB_ID: Integer ID of the report job execution.
IS_REP_STEP_SEQ_ID: Integer ID of the sequence of the step.
IS_REP_STEP_TYP_ID: Integer ID of the type of step.
IS_REP_EXEC_ST_TS: Timestamp of the execution start.
IS_REP_EXEC_FN_TS: Timestamp of the execution finish.
IS_REP_QU_TM_MS: Queue duration in milliseconds.
IS_REP_CPU_TM_MS: CPU duration in milliseconds.
IS_REP_COL_FACT
Used to analyze which data warehouse tables and columns are accessed by MicroStrategy report jobs, by which SQL clause they are accessed (SELECT, FROM, and so on), and how frequently they are accessed. This fact table is at the level of a Report Job rather than at the level of each SQL pass executed to satisfy a report job request. The information available in this table can be useful for database tuning.
Source tables
This fact table is a view based on the following tables:
IS_COL
Column Name
EM_RECORD_TS EM_LOAD_TS IS_JOB_ID IS_SESSION_ID IS_SERVER_ID IS_TABLE_ID IS_COL_ID IS_COL_NAME SQL_CLAUSE_TYPE_I D COUNTER IS_PROJ_ID
Column Description
Timestamp when information was recorded by Intelligence Server into the _Stats tables. Timestamp when Enterprise Manager ETL process began loading records. Integer ID of the report job execution. GUID of the current session object. Integer ID of the server where the session was created. Integer ID of the physical database table that was used. Integer ID of the column in the database table that was used. Name of the column in the database table that was used. Integer ID of the type of SQL clause (SELECT, FROM, WHERE, and so on). The number of times a specific column / table / clause type combination occurs within a report execution. Integer ID of the project logged into.
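The COUNTER column can be thought of as a tally over distinct (table, column, clause type) combinations within one report execution. A hypothetical sketch of how such a tally is produced (the occurrence data here is illustrative, not the real logging format):

```python
from collections import Counter

# Hypothetical occurrences of (table, column, SQL clause) within one report job.
occurrences = [
    ("LU_REPORT", "REPORT_ID", "SELECT"),
    ("LU_REPORT", "REPORT_ID", "GROUP BY"),
    ("LU_REPORT", "PROFIT", "SELECT AGGREGATE"),
    ("LU_REPORT", "REPORT_ID", "SELECT"),
]

counter = Counter(occurrences)
# Each distinct combination becomes one fact row, with COUNTER as its tally.
for (table, column, clause), count in sorted(counter.items()):
    print(table, column, clause, count)
```

Aggregating these counts across jobs is what makes the table useful for database tuning, for example spotting columns that appear heavily in WHERE clauses and might benefit from an index.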
IS_SESSION_MONITOR
For future use. A view table that provides an overview of recent session activity.
Lookup tables
DT_DAY: Lookup table for Days in the Date hierarchy.
DT_MONTH: Lookup table for Months in the Date hierarchy.
DT_MONTH_OF_YR: Lookup table for Months of the Year in the Date hierarchy.
DT_QUARTER: Lookup table for Quarters in the Date hierarchy.
DT_QUARTER_OF_YR: Lookup table for the Quarters of the Year in the Date hierarchy.
DT_WEEKDAY: Lookup table for the Days of the Week in the Date hierarchy.
DT_YEAR: Lookup table for Years in the Date hierarchy.
EM_APP_SRV_MACHINE: Lookup table for Intelligence Server machines used in statistics.
EM_CLIENT_MACHINE: Lookup table for Client Machines used in the statistics.
EM_DB_USER: Lookup table for the database users used in the statistics.
EM_OWNER: Provides descriptive information about the owners being tracked. This table is a view on top of EM_USER.
EM_USER: Provides descriptive information about the users being tracked.
EM_USR_GP: Provides descriptive information about the user groups being tracked.
EM_WEB_SRV_MACHINE: Lookup table for the Web Server Machines used in the statistics.
IS_ATT: Provides descriptive information about the attributes being tracked.
IS_ATT_FORM: Provides descriptive information about the attribute forms being tracked.
IS_COL: Provides descriptive information about the columns being tracked.
IS_CONS: Provides descriptive information about the consolidations being tracked.
IS_CUST_GP: Provides descriptive information about the custom groups being tracked.
IS_DB_INST: Provides descriptive information about the database instances in the monitored Intelligence Servers.
IS_DB_TAB: Lookup table for the database tables being monitored.
IS_DOC: Provides descriptive information about the document objects being tracked.
IS_EVENT: Provides descriptive information about the events being tracked.
IS_FACT: Provides descriptive information about the facts being tracked.
IS_FILT: Provides descriptive information about the filters being tracked.
IS_HIER: Provides descriptive information about the hierarchies being tracked.
IS_MET: Provides descriptive information about the metrics being tracked.
IS_PROJ: Provides descriptive information about the projects being tracked.
IS_PROMPT: Provides descriptive information about the prompts being tracked.
IS_REP: Provides descriptive information about the reports being tracked.
IS_SCHED: Provides descriptive information about the schedules being tracked.
IS_SERVER: Provides descriptive information about the server definitions being tracked.
IS_SESSION: Lookup table for the session statistics logged by Intelligence Servers.
IS_SQL_CLAUSE_TYPE: Lookup table for SQL clause types; used to determine which SQL clause (SELECT, WHERE, GROUP BY, and so on) a particular column was used in during a report execution. SQL Clause Type attributes:
Select: Column was used in the SELECT clause, but was not aggregated, nor does it appear in a GROUP BY clause. For example, the a11.Report column in select a11.Report from LU_REPORT a11.
Select Group By: Column was used in the GROUP BY clause. For example, the a11.Report column in select a11.Report, sum(a11.Profit) from LU_REPORT group by a11.Report.
Select Aggregate: Column was used for aggregation. For example, the a11.Report column in select count(a11.Report) from LU_REPORT.
From: Column was used in a FROM clause.
Where: Column was used in a WHERE clause.
Order By: Column was used in an ORDER BY clause.
Provides descriptive information about the logical tables being monitored. Provides descriptive information about the templates being monitored. Provides descriptive information about the transformations being monitored. Lookup table for Hour in the Time hierarchy. Lookup table for Minute in the Time hierarchy. Lookup table for Second in the Time hierarchy. Lookup table for security filters.
Transformation tables
DT_MONTH_YTD: Transformation table to calculate the Year to Date values for Month.
DT_QUARTER_YTD: Transformation table to calculate the Year to Date values for Quarter.
TM_HOUR_DTH: Transformation table to calculate the Hour to Day values for Hour.
Function
Lookup table for the connection source of a session on Intelligence Server.
Lookup table for the reason of the disconnection of a session on Intelligence Server.
Lookup table for the existence status of objects.
Lookup table for the hidden status of objects.
Lookup table for the job status of job executions on Intelligence Server.
Lookup table for the Ad-Hoc indicator.
Lookup table for the Cache Creation indicator.
Lookup table for the Cache Hit indicator.
Lookup table for the canceled indicator.
Lookup table for the Datamart indicator.
Lookup table for the Database Error indicator.
Lookup table for the step types in document execution. For a list and explanation of values, see Report and document steps, page 915.
Lookup table for the Drill indicator.
Lookup table for the Element Load indicator.
Lookup table for the Error indicator.
Lookup table for the Drillable Hierarchy indicator.
Lookup table for the Job Error indicator.
Lookup table for the Job Priority indicator.
Lookup table for the Prompt indicator.
Lookup table for the SQL Pass types of report execution.
Lookup table for the step types of report execution. For a list and explanation of values, see Report and document steps, page 915.
Lookup table for the Schedule indicator.
Lookup table for the transformation mapping types.
Indicator lookup table for priority maps.
IS_REPTYPE_IND: Indicator lookup table for report type. Report types include:
-1 UNKNOWN: The server is unable to retrieve the report type.
0 RESERVED: Ad-hoc reports. May include other reports as well that are not persisted in the metadata at the point of execution.
1 RELATIONAL: All regular project reports.
2 MDX: Reports built from SAP BW, Essbase, Analysis Services, and other cube sources.
3 CUSTOM SQL FREE FORM: MicroStrategy Freeform SQL reports, in which the SQL is entered directly into the interface.
4 CUSTOM SQL WIZARD: MicroStrategy Query Builder reports.
5 FLAT FILE: Reserved for future use.
IS_DOCTYPE_IND: Indicator lookup table for document type.
IS_TABLETYPE_IND: Indicator lookup table for table type.
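The report type IDs above lend themselves to a simple decoding map when post-processing statistics data. A minimal sketch (the dictionary values come straight from the list above; the function name is illustrative):

```python
# Report type IDs documented for IS_REPTYPE_IND.
REPORT_TYPES = {
    -1: "UNKNOWN",               # server could not retrieve the type
    0: "RESERVED",               # ad-hoc reports
    1: "RELATIONAL",             # regular project reports
    2: "MDX",                    # SAP BW, Essbase, Analysis Services, other cubes
    3: "CUSTOM SQL FREE FORM",   # Freeform SQL reports
    4: "CUSTOM SQL WIZARD",      # Query Builder reports
    5: "FLAT FILE",              # reserved for future use
}

def report_type_name(type_id: int) -> str:
    """Decode an IS_REPTYPE_IND ID; unknown IDs fall back to UNKNOWN."""
    return REPORT_TYPES.get(type_id, "UNKNOWN")

print(report_type_name(2))  # MDX
```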
Task description
Requesting an object definition from the project metadata
Closing a job and removing it from the list of pending jobs
Executing the SQL generated for the report to retrieve data
Generating the SQL needed to retrieve data, based on the schema and the report definition
5: Analytical Engine - Applying analytical processing to the data retrieved from the data source
6: Resolution Server - Loading the definition of an object
7: Report Net Server - Transmitting the results of a report
8: Element Request - Attribute element browsing
9: Get Report Instance - Retrieving a report instance from the metadata
10: Error Message Send - Sending an error message
Task name and description
11: Output Message Send - Sending a message other than an error message
12: Find Report Cache - Searching or waiting for a report cache
13: Document Execution - Executing a document
14: Document Send - Transmitting a document
15: Update Report Cache - Updating report caches
16: Request Execute - Requesting the execution of a report
17: Datamart Execute - Executing a datamart report
18: Document Data Preparation - Constructing a document structure using data from the document's datasets
19: Document Formatting - Exporting a document to the requested format
20: Document Manipulation - Applying a user's changes to a document
21: Apply View Context - Reserved for future use
22: Export Engine - Exporting a document or report to PDF, plain text, Excel spreadsheet, or XML
Relationship tables
IS_ATT_ATT_FORM: Relationship table between Attribute and Attribute Form.
IS_ATT_HIER: Relationship table between Attribute and Hierarchy.
IS_COL_TABLE: Relationship table between Column and Table.
IS_MET_TEMP: Relationship table between Metric and Template.
IS_REP_DOC: Relationship table between Report and Document.
IS_TABLE_FACT: Relationship table between Table and Fact.
IS_TEMP_ATT: Relationship table between Template and Attribute.
IS_TRANS_ATT_TABLE: Joint child relationship table from Transformation to Attribute and Table.
IS_USR_GP_USER: Relationship table between User and User Group.
Note: This table models a recursive hierarchy as a flattened many-to-many relationship so that the number of jobs, sessions, and so on can be reported at the USER GROUP level. However, this can result in double counting if the USER GROUP hierarchy is not flat.
IS_USER_PROJ_SF: Relationship table for users/groups and associated security filters.
IS_SCHED_RELATE: Relationship table for schedules and reports.
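The double-counting caveat for IS_USR_GP_USER can be illustrated: if group membership is flattened so that a user appears under both a parent group and a child group, a per-group job count sums to more than the true total. A small sketch (group names, job counts, and table shape are illustrative, not the real schema):

```python
# Flattened many-to-many membership: each user is listed under every
# group that contains it, directly or via a parent group.
memberships = [
    ("alice", "Everyone"),   # inherited: Finance is inside Everyone
    ("alice", "Finance"),    # direct membership
    ("bob", "Everyone"),
]
jobs_by_user = {"alice": 10, "bob": 5}

# Count jobs at the group level over the flattened table.
jobs_by_group: dict[str, int] = {}
for user, group in memberships:
    jobs_by_group[group] = jobs_by_group.get(group, 0) + jobs_by_user[user]

print(jobs_by_group)                 # {'Everyone': 15, 'Finance': 10}
print(sum(jobs_by_group.values()))   # 25, though only 15 jobs actually ran
```

Alice's 10 jobs are counted once under Finance and again under Everyone, which is exactly the double counting the note warns about when the group hierarchy is not flat.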
Function
Defines all MicroStrategy components being monitored. The abbreviation specifies the prefix used on tables relevant to the component. When a new component is added to the MicroStrategy product line, it can be entered in this table for monitoring. Examples: Intelligence Server, Narrowcast Server
EM_ITEM: Defines all items in each component of the MicroStrategy product line being monitored. When a new item is added to a component, it can be entered in this table for monitoring, without any change to the migration code. This table also specifies the item's object type according to server and the abbreviation used in the lookup table name. Examples: Report, Service, User
EM_LOG: Stores logging information for Enterprise Manager data loads. The logging option is enabled from the Enterprise Manager console, Tools menu, Options selection.
Note: Data warehouse purges are not logged in this table. For more information about purging the data warehouse, see Improving warehouse performance by purging unused data, page 295.
EM_SQL: Provides the SQL necessary to insert, update, and delete a row from the lookup item table once the necessary information from the component API is available. If the SQL must be changed, make the change in this table (no changes in the code are necessary). This table also provides the SQL used to transform the logged statistics into the lookup tables. Examples: SQL statements to insert an attribute into the lookup attribute table in SQL Server 7.0
EM_PROPS: Shows properties of the Enterprise Manager application (for example: which projects and servers are currently being tracked).
EM_IS_LAST_UPDATE: Provides the Data Loading process with a working window that identifies the time period during which data should be moved into production area tables.
EM_ITEM_PROPS: Identifies properties being tracked on a given item for a given component. The SQL for the item is a concatenation of the general SQL in the EM_ITEM_SQL table and the column names and values stored in this table. Examples: Attribute Number of Parents, Hierarchy Drill Enabled
EM_RELATE_ITEM: Contains a list of many-to-many relationship tables and the MicroStrategy items they relate.
Application Objects attributes, page 919
Configuration Objects attributes, page 919
Data Load attributes, page 920
Date attributes, page 920
Document Job attributes, page 921
Indicators and Flags attributes, page 921
Report Job attributes, page 922
Schema Objects attributes, page 923
Session attributes, page 923
Time attributes, page 924
Consolidation: Lists all consolidations in projects that are set up to be monitored by Enterprise Manager.
Custom Group: Lists all custom groups in projects that are set up to be monitored by Enterprise Manager.
Document: Lists all documents in projects that are set up to be monitored by Enterprise Manager.
Filter: Lists all filters in projects that are set up to be monitored by Enterprise Manager.
Metric: Lists all metrics in projects that are set up to be monitored by Enterprise Manager.
Prompt: Lists all prompts in projects that are set up to be monitored by Enterprise Manager.
Report: Lists all reports in projects that are set up to be monitored by Enterprise Manager.
Security Filter: Lists all security filters in projects that are set up to be monitored by Enterprise Manager.
Template: Lists all templates in projects that are set up to be monitored by Enterprise Manager.
Configuration_Owner: Lists the owners of configuration objects.
DB Instance: Lists all database instances.
DB User: Lists all database users used in report executions.
Event: Lists all events being tracked.
Folder: Lists all folders within projects.
Intelligence Server Defn: Lists all Intelligence Server definitions.
Owner: Lists the owners of all objects.
Project: Lists all projects.
Schedule: Lists all schedules.
User: Lists all users being tracked.
User Group: Lists all user groups.
Data Load Finish Time: Displays the time-stamp of the end of the data load process for the projects that are being monitored.
Data Load Project: Lists all projects that are being monitored.
Data Load Start Time: Lists the time-stamp of the start of the data load process for the projects that are being monitored.
Date attributes
Calendar Week: Lists every calendar week from 2000-01-01 to 2010-12-31, as a number ranging from 1 to 575.
Day: Lists all days, from 1990 to 2010.
Month: Lists all months.
Month of Year: Lists all months in a specified year.
Quarter: Lists all quarters.
Quarter of Year: Lists all quarters of the year.
Week of Year: Lists all weeks in all the years from 2000 to 2010. Weeks in 2000 are represented as a number ranging from 200001 to 200053, weeks in 2001 are represented as a number ranging from 200101 to 200153, and so on.
Weekday: Lists all days of the week.
Year: Lists all years.
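The Week of Year keys above combine a year and a week number into a single integer (200001 through 200053 for the year 2000). The guide does not state the exact week-numbering rule, so the sketch below assumes ISO week numbering purely for illustration; the warehouse's own rule may differ:

```python
from datetime import date

def week_key(d: date) -> int:
    """Encode a date as year*100 + week, e.g. 200001..200053 for weeks in 2000.
    Assumes ISO week numbering; the actual warehouse convention may differ."""
    iso_year, iso_week, _ = d.isocalendar()
    return iso_year * 100 + iso_week

print(week_key(date(2000, 1, 10)))  # 200002 under ISO numbering
```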
Document Job: Indicates an execution of a document.
Document Job Step: Lists all possible steps of a document execution.
Ad-Hoc Indicator: Indicates whether or not an execution is ad hoc.
Cache Creation Indicator: Indicates whether or not an execution has created a cache.
Cache Hit Indicator: Indicates whether or not an execution has hit a cache.
Cancelled Indicator: Indicates whether or not an execution has been cancelled.
Configuration Object Exists Status: Indicates whether or not a configuration object exists.
Datamart Indicator: Indicates whether or not an execution created a datamart.
DB Error Indicator: Indicates whether or not an execution encountered a database error.
Disconnection Reason: Lists the reasons for a user being disconnected from an Intelligence Server.
Document Job Status: Lists the statuses of document job executions.
Document Job Step Type: Lists all possible steps of document job execution.
Document Type Indicator: Indicates the type (HTML or Report Services) of a document.
Drill Indicator: Indicates whether or not an execution is a result of a drill.
Element Load Indicator: Indicates whether or not an execution is a result of an element load.
Error Indicator: Indicates whether or not an execution encountered an error.
Hierarchy Drilling: Indicates whether or not a hierarchy is used as a drill hierarchy.
Job ErrorCode: Lists all the possible errors that can be returned during job executions.
Job Priority Map: Lists the priorities of job executions.
Job Priority Nbr: Enumerates the upper limit of the priority ranges for high, medium, and low priority jobs. Default values are 332, 666, and 999.
Object Exists Status: Indicates whether or not an object exists.
Object Hidden Status: Indicates whether or not an object is hidden.
Object_Creation_Date: Indicates the date on which an object was created.
Object_Creation_Weekofyear: Indicates the week of the year in which an object was created.
Prompt Indicator: Indicates whether or not an execution was prompted.
Report Job SQL Pass Type: Lists the types of SQL passes that the Intelligence Server generates.
Report Job Status: Lists the statuses of report executions.
Report Job Step Type: Lists all possible steps of report job execution.
Report Type Indicator: Indicates the type (XDA, relational, and so forth) of a report.
Schedule Indicator: Indicates whether or not an execution was scheduled.
Sql Clause Type: Lists the various SQL clause types used by the SQL Engine.
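Using the default Job Priority Nbr upper limits above (332, 666, 999), a job priority number maps to a band roughly as follows. This is an illustrative sketch, not product code; it assumes lower numbers mean higher priority, consistent with the high range ending at 332:

```python
def priority_band(priority: int,
                  high_max: int = 332,
                  medium_max: int = 666,
                  low_max: int = 999) -> str:
    """Map a job priority number to its band using the default upper limits."""
    if priority <= high_max:
        return "high"
    if priority <= medium_max:
        return "medium"
    if priority <= low_max:
        return "low"
    raise ValueError(f"priority {priority} outside 0-{low_max}")

print(priority_band(100), priority_band(500), priority_band(900))
# high medium low
```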
Drill_From_Obj: For report jobs resulting from a drilling action, indicates the object that was drilled from.
Drill_To_Obj: For report jobs resulting from a drilling action, indicates the object that was drilled to.
Report Job: Indicates an execution of a report.
Report Job SQL Pass: Indicates that SQL was executed during report execution.
Report Job Step Sequence: Lists the sequence number of a report job step.
Attribute: Lists all attributes in projects that are set up to be monitored by Enterprise Manager.
Attribute Form: Lists all attribute forms in projects that are set up to be monitored by Enterprise Manager.
Column: Lists all columns in projects that are set up to be monitored by Enterprise Manager.
DB Table: Lists all tables in projects that are set up to be monitored by Enterprise Manager.
Fact: Lists all facts in projects that are set up to be monitored by Enterprise Manager.
Hierarchy: Lists all hierarchies in projects that are set up to be monitored by Enterprise Manager.
Table: Lists all tables in projects that are set up to be monitored by Enterprise Manager.
Transformation: Lists all transformations in projects that are set up to be monitored by Enterprise Manager.
Session attributes
Client Machine: Lists all machines that have had users connect to the Intelligence Server.
Component: Lists all MicroStrategy components (servers).
Connection Source: Lists all connection sources to Intelligence Server.
Intelligence Server Machine: Lists all machines that have logged statistics as an Intelligence Server.
Session: Indicates a user connection to an Intelligence Server.
Session Connect Minute: Indicates the minute in which a session was started.
Session Disconnect Minute: Indicates the minute in which a session was ended.
Web Server Machine: Lists all machines used as web servers.
Time attributes
Day To Hour: Lists the 24 hours of the day. For example, 9, 10, 11, 12, 13, and so on.
Hour: Lists the hours in a day. For example, 09 AM - 10 AM, 10 AM - 11 AM, and so on.
Minute: Lists all the minutes in an hour. For example, if the hour specified is 10 AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM, and so on.
Second: Lists all seconds in the specified minute.
D
Introduction
As a MicroStrategy system administrator, you may be responsible for managing the MicroStrategy Web or Web Universal environments. Some of these tasks are done using the MicroStrategy Desktop interface, such as managing user and group privileges for Web users, or registering a project in 3-tier mode so it can be available in Web. However, there are other administrative parameters that are set using the MicroStrategy Web or Web Universal interface. In addition to the information in this appendix, each option in the MicroStrategy Web administration interface is documented in the Web Administrator online help. Click Help in the administration portion of MicroStrategy Web to access the administrator online help. Topics covered in this appendix include:
Using the MicroStrategy Web Administrator page, page 928
Defining project defaults, page 930
Integrating Narrowcast Server with MicroStrategy Web products, page 933
Enabling users to install MicroStrategy Office from Web, page 935
Using additional security features for MicroStrategy Web products, page 935
FAQs for configuring and tuning MicroStrategy Web products, page 946
Web Professional or Web Universal Professional
Web Analyst or Web Universal Analyst
Web Reporter or Web Universal Reporter
The privileges available in each edition are listed in Appendix B, Permissions and Privileges. For more information about which privileges can be granted to a given user under each edition, see List of all privileges, page 847. You can also print a report of all privileges assigned to each user based on license type; to do this, see Auditing your system for the proper licenses, page 154.
All MicroStrategy Web users that are licensed for MicroStrategy Report Services may view and interact with a document in Flash Mode. Certain interactions in Flash Mode have additional licensing requirements:
Users are required to license MicroStrategy Web Analyst to pivot row or column position in a grid or cross-tabular grid of data in Flash Mode.
Users are required to license MicroStrategy Web Professional to modify the properties of Widgets used in a document in Flash Mode.
A user assigned to an edition is entitled to a complete set, or identified subset, of the privileges listed for that edition. If a user is assigned to multiple user groups, the privileges of those groups are additive, and determine the edition usage of that particular user. For example, if a user is a member of both the Finance and the Accounting user groups, privileges for that user are equivalent to the cumulative set of privileges assigned to those two groups. One privilege, Web administration, can be assigned to any edition of Web user. This privilege allows the user to access the Administrator page to manage server connections, and to access the Project defaults link on the Preferences page to set defaults for all users. The MicroStrategy security model enables you to set up user groups that can have subgroups within them, thus creating a hierarchy. The following applies to the creation of user subgroups:
A child subgroup automatically inherits privileges assigned to its parent group. A child subgroup can be assigned other privileges in addition to inherited privileges.
User groups corresponding to the three editions of MicroStrategy Web products are predefined with the appropriate privilege sets. These user groups are available
under the User Group folder within the Administration folder for your project. You need project administration privileges to view and modify user group definitions. See your license agreement as you determine how each user is assigned to a given privilege set. MicroStrategy Web products provide three Web editions (Professional, Analyst, Reporter), defined by the privilege set assigned to each. Assigning privileges outside those designated for each edition changes the user's edition. For example, if you assign to a user in a Web Reporter group a privilege available only to a Web Analyst, MicroStrategy considers the user to be a Web Analyst user. Within any edition, privileges can be removed for specific users or user groups. For more information about security and privileges, see Chapter 2, Setting Up User Security. License Manager enables you to perform a self-audit of your user base and, therefore, helps you understand how your licenses are being used. For more information, see Verifying, auditing, and updating licenses, page 149.
You can also configure MicroStrategy Web products to recognize MicroStrategy Narrowcast Server and enable the Scheduled Delivery and Send Now options. For more information about enabling these features, see Integrating Narrowcast Server with MicroStrategy Web products, page 933.
assign the Web administration privilege to select users
use Microsoft IIS and Windows security to limit access to the page file
For information about controlling access to this page when using different Web servers, see the MicroStrategy Installation and Configuration Guide. Specifically, see the section entitled Launch mstrWebAdmin servlet or Controlling access to the Web Administrator page, according to the platform you are using.
You are logged in to a project and have the Web administration privilege. Your MicroStrategy Web product is not connected to any MicroStrategy Intelligence Servers. In this case, there is no way to tell whether you have the Web administration privilege because there is no Intelligence Server to verify your credentials. However, once you connect to an Intelligence Server, you do not see the link unless you log in to a project in which you have the Web administration privilege.
For steps on how to assign this privilege to a user, see the Desktop online help.
Using Microsoft IIS and Windows security for MicroStrategy Web products (ASP.NET)
Users without the Web administration privilege cannot access the Administrator page from within MicroStrategy Web. However, this does not prevent someone from simply typing the URL in a Web browser to navigate to the Administrator page. To prevent this from happening with the ASP.NET version of MicroStrategy Web products using Microsoft IIS, you must limit access to the file itself using Microsoft IIS and Windows security. The default location of the Administrator page file is Program Files/MicroStrategy/WebAspx/asp/Admin.aspx In the J2EE version, the Administrator page is a servlet and access to the servlet is controlled using the Web and application servers. The default location of the Administrator servlet varies depending on the platform you are using. For details, see the MicroStrategy Installation and Configuration Guide.
Any changes you make to the project defaults become the default settings for the current project or for all Web projects if you select the Apply to all projects on the current Intelligence Server (server name) option from the drop-down list. The project defaults include user preference options, which each user can override, and other project default settings accessible only to the administrator. For information on the History List settings, see Managing report caches: History List, page 221. If you, as an administrator, have problems setting project defaults, you may see a message such as: Error Saving Personal Preferences. Your project default preferences could not be saved. Your permissions on the Web folder or in its contents may be incorrect. To fix this, make sure you can write to the file AdminOptions.xml. By default, this file is located in C:\Program Files\Common Files\MicroStrategy\Log\WebConfig
When the administrator who is setting the Project defaults clicks Load Default Values, the original values shipped with the MicroStrategy Web products are loaded on the page. When users who are setting User preferences click Load Default Values, the project default values that the administrator set on the Project defaults pages are loaded.
The settings are not saved until you click Apply. If you select the Apply to all projects on the current Intelligence Server (server name) from the drop-down menu, the settings are applied to all projects, not just the one you are currently configuring.
Some of the following categories are only displayed under certain circumstances. For example, the Report Services link only appears if you have a license to use Report Services.
General
Folder Browsing
Grid display
Graph display
History List
Export
Print (PDF)
Drill mode
Prompts
Report Services
Security (see note below for Web Universal)
Project display (see note below for Web Universal)
Office
Each category comprises its own page and includes related settings that are accessible only to users with the Web administration privilege. For details on each setting, see the MicroStrategy Web online help for the Web Administrator.
MicroStrategy Narrowcast Server and MicroStrategy Web products can automatically share this drive for the local Administrators group. The Subscription Administrator service should run under an account that is a member of the local Administrators group. You can unshare the drive or folder after the system is configured. If you do not want to automatically share the drive, perform the steps listed here.
1 Modify the Admin.properties file located on the Subscription Engine machine: ..\Microstrategy\Narrowcast Server\Subscription Engine\build\server\
Modify the file contents so the corresponding two lines are as follows:
TransactionEngineLocation=machine_name:\\Subscription Engine\\build\\server
TransactionEngineLocation=MACHINE_NAME:/Subscription Engine/build/server
where machine_name is the name of the machine where the Subscription Engine is installed.
2 Share the folder where the Subscription Engine is installed for either the local Administrators group or for the account under which the Subscription Administrator service account runs. This folder must be shared as Subscription Engine.
You should ensure that the password for this account does not expire. If the Subscription Engine machine's drive is shared and unshared multiple times, the following Windows message displays: System Error: The network name was deleted. This message does not indicate a problem. Click OK to make the Subscription Administrator service functional.
Using firewalls
A firewall enforces an access control policy between two systems: it blocks certain network traffic while permitting the rest. Although the actual means by which this is accomplished vary widely, a firewall can be implemented in hardware, in software, or in a combination of the two. Firewalls are most frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. If you use MicroStrategy Web products over the Internet to access projects on an Intelligence Server, which is most likely on an intranet, a malicious user could exploit the security hole created by the connection between the two systems.
Therefore, in many environments and for a variety of reasons you may wish to put a firewall between your Web servers and the Intelligence Server or cluster. This does not pose any problems for the MicroStrategy system, but there are some things you need to know to ensure that the system functions as expected. Another common place for a firewall is between the Web clients and the Web servers. The following diagram shows how a MicroStrategy system might look with firewalls in both of these locations:
[Diagram: Web clients connect through an external firewall to the Web servers, which connect through an internal firewall to Intelligence Server and its metadata and data warehouse databases.]
Regardless of how you choose to implement your firewall(s), you must make sure that the client Web browsers can communicate with MicroStrategy Web products, that MicroStrategy Web products can communicate with Intelligence Server, and vice versa. To do this, certain communication ports must be open on the server machines and the firewalls must allow Web server and Intelligence Server communications to go through on those ports. Most firewalls have some way to specify this. Consult the documentation that came with your firewall solution for details.
To enable communication through a firewall
1 Client Web browsers communicate with MicroStrategy Web on port 80 (HTTP). So, if you have a firewall between your clients and MicroStrategy Web servers, you must make sure port 80 is allowed to send and receive requests through the firewall.
Depending on how you deployed, Web Universal may communicate on a different port number.
2 MicroStrategy Web products can communicate with MicroStrategy Intelligence Server using any port that is greater than 1024. By default, the port used is 34952. So, if you have a firewall between your Web servers and Intelligence Server, you must make sure port 34952 is allowed to send and receive TCP/IP requests through the firewall.
You can change this port number if you wish. To learn how, see the next procedure, To change the port through which MicroStrategy Web and Intelligence Server communicate, page 937.
3 The MicroStrategy Listener Service communicates with MicroStrategy Web products and Intelligence Server on port 30172. So, if you are using the Listener Service, you must make sure port 30172 is allowed to send and receive TCP/IP and UDP requests through the firewall.
You cannot change this port number.
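To verify that these ports are actually reachable through the firewall, a quick TCP connection test can help. The sketch below is illustrative only, not part of MicroStrategy; the host name is a placeholder, and the constants assume the default port numbers described above.

```python
import socket

# Default ports described in this guide (assumptions, not detected values).
WEB_HTTP_PORT = 80       # client browser -> MicroStrategy Web (HTTP)
ISERVER_PORT = 34952     # Web server -> Intelligence Server (configurable)
LISTENER_PORT = 30172    # MicroStrategy Listener Service (fixed)

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host name):
#   port_open("iserver.example.com", ISERVER_PORT)
```

A False result from inside the firewall usually means the service is down; a False result only from outside suggests the firewall rule is missing.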
To change the port through which MicroStrategy Web and Intelligence Server communicate
1 By default, MicroStrategy Web and Intelligence Server communicate with each other using port 34952 (Web Universal may use a different port depending on how you deployed it). If you wish to change this, you must change it for both the Web servers and the Intelligence Servers. The port numbers on both sides must match.
If you are using clusters, you must make sure that all machines in the Web server cluster can communicate with all machines in the Intelligence Server cluster.
To change the port number for Intelligence Server
2 In Desktop, log in to the project source that connects to the server whose port you want to change.
3 In the Service Manager, click Options.
4 On the Intelligence Server Options tab, type the port number you wish to use in the Port Number box. Save your changes.
5 A message appears telling you to restart the Intelligence Server. Click OK to continue.
6 Restart the Intelligence Server.
7 In Desktop, right-click the project source that connects to the Intelligence Server whose port number you changed and choose Modify Project Source. The Project Source Manager opens.
8 On the Connection tab, enter the new port number and click OK to save your changes.
You must update this port number for all project sources in your system that connect to this Intelligence Server.
To change the port number for MicroStrategy Web
9 Open the Administrator page in MicroStrategy Web.
10 If your MicroStrategy Web product is connected to the Intelligence Server whose port number you changed, click Disconnect to disconnect it. You cannot change the port while connected to an Intelligence Server.
It probably is not connected because the MicroStrategy Web product does not yet know the new port number you assigned to the Intelligence Server.
11 In the entry that corresponds to the appropriate Intelligence Server, click Modify (in the Properties column, all the way to the right). The Server property detail page opens.
12 In the Port box, type the port number you wish to use. This port number must match the port number you set for the Intelligence Server. An entry of 0 means use port 34952 (the default).
13 Click Save to save your changes. You can now connect to the Intelligence Server again.
2008 MicroStrategy, Inc.
If the port numbers for your MicroStrategy Web product and Intelligence Server do not match, you get an error when the MicroStrategy Web product tries to connect to the Intelligence Server. For detailed steps for any of the high-level steps listed above, see the Desktop online help.
1 Get a digital ID (also known as an authentication certificate or digital certificate) from a trusted third-party source (a certificate authority), which can verify your identity. You can get certificates from companies such as Verisign and Thawte, or you can create your own.
2 Once you have the certificate, install it on your Web server and enable SSL. Refer to your Web server documentation for information on installing the certificate and configuring the Web server to use SSL.
3 The client browsers need to have a corresponding certificate that enables them to communicate with the secure Web server. The secure Web server and browser automatically exchange certificate information when establishing a connection. If the certificate can be authenticated by a trusted certificate authority, the secure page opens automatically.
Once you have a secure Web server, all you need to do is use https:// to ensure secure communication between your Web clients and the Web server. For example, if the URL to access MicroStrategy Web in your environment is http://machine_name/microstrategy/asp, where machine_name is the name of the Web server, then after you enable SSL you would use the following URL instead: https://machine_name/microstrategy/asp

Most browsers come with a pre-defined list of trusted certificate authorities that includes Verisign and Thawte, and sometimes GeoTrust/Equifax. If you connect to a secure server authenticated by an authority not included in your browser's list (if you created your own certificate, for example), if any of the information does not match, or, in some cases, if you use an old browser that needs an update, you may get a security warning. In most cases you can proceed and have an encrypted connection using SSL.

When using SSL, MicroStrategy Web products communicate with the client on the port number designated by your Web server for SSL. The default port number for SSL may vary according to Web server, but it is often 443. Check your Web server's documentation for the default port number. If you have a firewall between your clients and Web servers and you are using SSL, you must make sure the appropriate port is allowed to send and receive requests through the firewall.
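The certificate exchange and trust check are handled by the SSL/TLS layer itself, not by MicroStrategy. As an illustration only (a Python sketch, not MicroStrategy code), a client that verifies a server certificate against the pre-installed trusted CA list behaves like this:

```python
import socket
import ssl

def open_secure(host, port=443, timeout=5.0):
    """Connect the way a browser does: perform a TLS handshake and verify
    the server certificate against the locally trusted CA list.
    Raises ssl.SSLCertVerificationError for a self-signed or otherwise
    untrusted certificate (the analogue of the browser security warning)."""
    ctx = ssl.create_default_context()   # loads the trusted CA bundle
    sock = socket.create_connection((host, port), timeout=timeout)
    return ctx.wrap_socket(sock, server_hostname=host)  # handshake here

# The default context requires a valid, verifiable certificate,
# mirroring the browser behavior described above:
strict = ssl.create_default_context().verify_mode == ssl.CERT_REQUIRED
print(strict)  # True
```

A self-created certificate fails this check unless its CA is added to the client's trust store, which is exactly why browsers show a warning in that case.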
Using cookies
A cookie is a piece of information that is sent to your Web browser, along with an HTML page, when you access a Web site or page. When a cookie arrives, your browser saves this information to a file on your hard drive. When you return to the site or page, some of the stored information is sent back to the Web server along with your new request. This information is usually used to remember details about what a user did on a particular site or page, to provide a more personal experience for the user. For example, you have probably visited a site such as Amazon.com and found that the site recognizes you. It may know that you have been there before, when you last visited, and maybe even what you were looking at the last time you visited.

MicroStrategy Web products use cookies for a wide variety of things; in fact, the application uses them for so many things that it cannot work without them. Cookies are used to hold information about user sessions, preferences, available projects, language settings, window sizes, and so on. For a complete and detailed reference of all cookies used in MicroStrategy Web and MicroStrategy Web Universal, see Appendix E, MicroStrategy Web Cookies. The sections below describe the cookie-related settings available in each product.
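The mechanism can be illustrated with a short, generic sketch (this is not MicroStrategy's actual code; the cookie name and subkey names are made up for the example). A server sets a cookie holding several subkeys, and reads them back from the next request:

```python
from http.cookies import SimpleCookie

# The server sets a cookie whose value packs several subkeys together.
out = SimpleCookie()
out["MstrWeb"] = "Toolbar=1&HelpSection=0"          # illustrative subkeys
set_cookie_header = out.output(header="Set-Cookie:")

# On a later request, the browser echoes the cookie back and the
# server unpacks the subkeys again.
incoming = SimpleCookie()
incoming.load(out["MstrWeb"].OutputString())
subkeys = dict(pair.split("=") for pair in incoming["MstrWeb"].value.split("&"))
print(subkeys)  # {'Toolbar': '1', 'HelpSection': '0'}
```

If the cookie is lost between requests, the server simply sees no entry and must fall back to defaults, which is the failure mode described for each cookie in Appendix E.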
Disable cookies: The application does not store any cookies. This means that no settings are stored in cookies; instead, the application stores persistable settings (for example, the open and closed state of a view filter) in the metadata. To make your application highly secure, you can select this option.

Enable cookies: The application stores some of your settings using browser cookies. This is the default option. Temporary settings stored in cookies are lost when you close the browser; these settings include the last page visited, filter and template definitions for report execution, shared report location, and so on.
Persistable settings are stored in cookies and can be restored when you visit the application in the future, for example when you open or close editors. If you enable cookies, you also have the option to enable or disable:
Store Intelligence Server session information in temporary cookies instead of on the Web server: This option specifies whether Intelligence Server session information is saved in cookies. Because the Intelligence Server session information is sensitive, turning on this option is not secure in most cases. It is typically checked only when a cluster is set up that does not automatically handle session replication. Session replication is the distribution of session information to the client instead of the Web server, so that the user can connect seamlessly to any of the Intelligence Server machines.
When you enable this option, temporary information such as the session ID of the Intelligence Server sessions is saved. If the Disable cookies option is selected, selecting this option does not save any information in cookies.
Using encryption
Encryption is the translation of data into a form that cannot be read without a key, for security purposes. The most common use of encryption is for information sent across a network, so that a malicious user cannot gain anything by intercepting a network communication. Sometimes information stored in or written to a file is also encrypted. The SSL technology described earlier is one example of an encryption technology. MicroStrategy Web products can use encryption in many places, but by default most of these options are disabled unless you enable them.
1 Go to the Administrator page.
2 At the top of the page or in the column on the left, click Security to see the security settings.
3 Select the Traffic between the Web and Iserver machines is encrypted check box.
4 Click Save to save your changes.

Now all communication between the Web server and Intelligence Server is encrypted.
For example, with Microsoft IIS, by default only the Internet guest user needs access to the virtual directory. This is the account under which all file access occurs for Web applications. In this case, the Internet guest user needs the following privileges to the virtual directory: read, write, read and execute, list folder contents, and modify. However, only the administrator of the Web server should have these privileges to the Admin folder in which the Web Administrator pages are located. When secured in this way, if users attempt to access the Administrator page, the application prompts them for the machine's administrator login ID and password.

In addition to the file-level security for the virtual directory and its contents, the Internet guest user also needs full control privileges to the Log folder in the MicroStrategy Common Files, located by default in C:\Program Files\Common Files\MicroStrategy. This ensures that any application errors that occur while a user is logged in can be written to the log files.

The file-level security described above is taken care of for you when you install the ASP.NET version of MicroStrategy Web using Microsoft IIS; these details are provided for your information. If you are using the J2EE version of MicroStrategy Web Universal, you may be using a different Web server, but most Web servers have similar security requirements. Consult the documentation for your particular Web server for information about file-level security requirements.
[Diagram: MicroStrategy Web communicates with Intelligence Server over TCP/IP. The MicroStrategy user login and password are encrypted when transferred over the network to Intelligence Server. Application security is based on the MicroStrategy user login, with either standard or NT authentication. If the MicroStrategy user login is mapped to a warehouse user and password, that password is encrypted in the metadata, and the metadata password is encrypted in the NT registry. Intelligence Server connects to the data warehouse database and the metadata database through ODBC.]
How do time-out settings in MicroStrategy Web and Intelligence Server affect MicroStrategy Web users?
There are three settings related to session time-out that may affect MicroStrategy Web users.

First, in the Intelligence Server Configuration, under Governing - General, the Web user session idle time determines the number of seconds a user can remain idle before being logged out of Intelligence Server.

Second, in the web.config file, located by default in C:\Program Files\MicroStrategy\Web ASPx, the timeout setting determines the length of time (in minutes) after which the .NET session object is released if it has not been accessed. This time-out is independent of the Intelligence Server time-out above. The section of the web.config file containing the timeout setting is as follows:

<sessionState mode="InProc"
    stateConnectionString="tcpip=127.0.0.1:42424"
    sqlConnectionString="data source=127.0.0.1;user id=sa;password="
    cookieless="false" timeout="20" />
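A deployment or audit script that needs to read this value can parse web.config with any XML parser. The following is a minimal sketch under stated assumptions: the fragment below is a simplified, hypothetical web.config, not a complete one.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical web.config fragment; values are placeholders.
FRAGMENT = """
<configuration>
  <system.web>
    <sessionState mode="InProc" cookieless="false" timeout="20" />
  </system.web>
</configuration>
"""

root = ET.fromstring(FRAGMENT)
state = root.find("./system.web/sessionState")
timeout_minutes = int(state.get("timeout"))  # the .NET session time-out
print(timeout_minutes)  # 20
```

In a real deployment you would call ET.parse() on the actual file path instead of parsing a string.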
This setting does not affect Web Universal, since it does not use .NET architecture. The third setting is the MicroStrategy Web Administration setting Allow automatic login if session is lost. This setting can be found in the Security section of the Web Administration page. It enables users to be automatically reconnected to Intelligence Server if the session is lost. This setting does not automatically reconnect the .NET session object. The following table demonstrates how these settings interact in various combinations.
Intelligence Server time-out   web.config time-out   Result
45 minutes                     20 minutes            User must log back in
20 minutes                     45 minutes            User must log back in
20 minutes                     45 minutes            User is automatically logged back in
20 minutes                     45 minutes            User must log back in
Clustering multiple Web servers together improves performance. For more information, see Clustering multiple machines in the system, page 329. You can modify certain settings on the MicroStrategy Web server machine or in the application for best performance. Details for MicroStrategy Web and Web Universal follow:
increase the server machine's Java Virtual Machine heap size. For information on doing this, see MicroStrategy Tech Note TN5000-071-0001.
E
MICROSTRATEGY WEB COOKIES
Introduction
This appendix provides detailed information for all cookies used in:
MicroStrategy Web (see Cookies in MicroStrategy Web, page 950) MicroStrategy Web Universal (see Cookies in MicroStrategy Web Universal, page 961)
In addition to listing the different cookies used in the application, this appendix describes the potential consequences for each cookie if it gets lost during the HTTP request process.
Cookie subkeys
Subkey  Purpose

Tkn     Stores the Intelligence Server session ID.
PWD     Stores the password used to log in. An Administrator Preference determines whether the password is encrypted or not (not encrypted by default). The password is saved here if the user has not chosen to save the password on the Web browser.
NT      Indicates whether the user logged in with Windows authentication. 0 means the user was not logged in using Windows credentials. 1 means the user was logged in using Windows credentials.
Gst     Indicates whether the user logged in as Guest. 0 means the user was not logged in as Guest. 1 means the user was logged in as Guest.
Subkey  Purpose

Prv1    Stores the first set of User Privileges as a hexadecimal value (denoted by &H). Possible values are as follows:
        PRIVILEGE_WEBADMINISTRATOR = &H1
        PRIVILEGE_WEBUSER = &H2
        PRIVILEGE_WEBVIEWHISTORYLIST = &H4
        PRIVILEGE_WEBREPORTMANIPULATIONS = &H8
        PRIVILEGE_WEBCREATENEWREPORT = &H10
        PRIVILEGE_WEBOBJECTSEARCH = &H20
        PRIVILEGE_WEBCHANGEUSEROPTIONS = &H40
        PRIVILEGE_WEBSAVEREPORT = &H80
        PRIVILEGE_WEBDRILLANYWHERE = &H100
        PRIVILEGE_WEBEXPORT = &H200
        PRIVILEGE_WEBPRINTMODE = &H400
        PRIVILEGE_WEBDELETE = &H800
        PRIVILEGE_WEBPUBLISH = &H1000
        PRIVILEGE_WEBREPORTDETAILS = &H2000
        PRIVILEGE_WEBREPORTSQL = &H4000
        PRIVILEGE_WEBADDHISTORYLIST = &H00008000
        PRIVILEGE_WEBCHANGEVIEWMODE = &H10000
        PRIVILEGE_WEBDRILL = &H20000
        PRIVILEGE_WEBDRILLONMETRICS = &H40000
        PRIVILEGE_WEBCHANGESTYLE = &H80000
        PRIVILEGE_WEBSCHEDULING = &H100000
        PRIVILEGE_WEBSIMULTANEOUSEXECUTION = &H200000
        PRIVILEGE_WEBSORT = &H400000
        PRIVILEGE_WEBSWITCHPAGEBY = &H800000
        PRIVILEGE_WEBSAVETEMPLATEFILTER = &H1000000
        PRIVILEGE_WEBFILTERONSELECTION = &H2000000
        PRIVILEGE_WEBUSEREPORTFILTEREDITOR = &H4000000
        PRIVILEGE_WEBCREATEDERIVEDMETRICS = &H8000000
        PRIVILEGE_WEBMODIFYSUBTOTALS = &H10000000
        PRIVILEGE_WEBUSEREPORTOBJECTSWINDOW = &H20000000
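Because Prv1 holds a bitwise OR of these flags, an individual privilege can be tested with a bitwise AND. The sketch below is illustrative, not MicroStrategy code; the example stored value is made up.

```python
# A few of the Prv1 flags listed above, as Python integers (&H1 == 0x1).
PRIVILEGE_WEBADMINISTRATOR = 0x1
PRIVILEGE_WEBUSER = 0x2
PRIVILEGE_WEBVIEWHISTORYLIST = 0x4
PRIVILEGE_WEBEXPORT = 0x200

def has_privilege(prv1, flag):
    """True if the Prv1 bitmask contains the given privilege flag."""
    return (prv1 & flag) != 0

# Hypothetical stored value: a Web user who may export reports.
prv1 = PRIVILEGE_WEBUSER | PRIVILEGE_WEBEXPORT
print(has_privilege(prv1, PRIVILEGE_WEBEXPORT))           # True
print(has_privilege(prv1, PRIVILEGE_WEBVIEWHISTORYLIST))  # False
```

The same test applies to the Prv2 subkey described next, which simply continues the flag list in a second bitmask.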
Subkey  Purpose

Prv2    Stores the second set of User Privileges as a hexadecimal value (denoted by &H). Possible values are as follows:
        PRIVILEGE_WEBFORMATTINGEDITOR = &H40000001
        PRIVILEGE_WEBSCHEDULEEMAIL = &H40000002
        PRIVILEGE_WEBSENDNOW = &H40000004
        PRIVILEGE_WEBMODIFYREPORTLIST = &H40000008
        PRIVILEGE_WEBUSEDESIGNMODE = &H40000010
        PRIVILEGE_WEBALIASOBJECTS = &H40000020
        PRIVILEGE_WEBCONFIGURETOOLBARS = &H40000040
        PRIVILEGE_WEBUSEQUERYFILTEREDITOR = &H40000080
        PRIVILEGE_WEBREEXECUTEREPORTAGAINSTWH = &H40000100
        PRIVILEGE_WEBSIMPLEGRAPHFORMATTING = &H40000200
        PRIVILEGE_WEBUSELOCKEDHEADERS = &H40000400
        PRIVILEGE_WEBSETCOLUMNWIDTHS = &H40000800
Lng                 Indicates the locale ID associated with the Intelligence Server session.
ReturnURL           Stores the URL Query String of the last report, document, or folder visited or executed.
ReturnName          Stores the metadata object name of the last report, document, or folder visited or executed.
StartPageURL        Stores the URL Query String of the default start page for the Intelligence Server project.
StartPageName       Stores the Web Application Page or Feature name of the default start page for the project. For example, Shared Reports, My Reports, and so on.
WrkSet              Indicates if the user has privileges to use the Working Set.
Template            Stores the DssObjectId of the last template chosen by the user (by clicking it while browsing folders inside the Web application).
Filter              Stores the DssObjectId of the last filter chosen by the user (by clicking it while browsing folders inside the Web application).
ShowFilterTemplate  Indicates whether to allow the user to execute filter and template combinations.
Categories          Indicates the number of categories for the graph mode.
Series              Indicates the number of series for the graph mode.
ShRptsID            The folder ID used for the "Shared Reports" folder.
ShRprtsName         The name of the folder used for the "Shared Reports" folder.
Utf                 Indicates if the client is in Unicode mode.
Ver                 Stores the Intelligence Server version number.
CurrProjID          Stores the current project ID.
Project information
MSTRPrj_<Server_name>_<Project_Name> This cookie stores information relevant to a specific project. This is a permanent cookie. Information stored in this cookie is available even if the web browser is closed or the client's Web session expires. If this cookie is not present at request time, the application automatically logs the user in again, using the information contained in the URL Query String and the Connection Info cookie. This happens because the application believes the user is trying to connect to a new project even if this is not true. Since the User ID is saved in this cookie, if the cookie is not present, the user ID label that shows up on the Desktop page is lost. The user preferences are also saved in this cookie so they too will be lost.
Cookie subkeys
Subkey     Purpose

Uid        Stores the user ID last used to log in to this project.
PWD        Stores the password used to log in, if the user decided to save the password in the Web browser. The value of this cookie is never encrypted.
Disp_Mode  Indicates whether or not the user had the outline mode option on the report page selected the last time he used the application.
aa - io    This set of cookies (from the aa subkey to the io subkey) represents the user preferences. Each user preference has been assigned one subkey. See the Preferences section at the end of this appendix for a complete list of preferences and their codes.
Current language
Lng Indicates the locale ID used in the current or last-visited project. This is a temporary cookie, created at login time. If the cookie is lost in the middle of a session, the application recovers this information from the session information cookie.
GUI settings
MstrWeb This cookie stores information specific to the look and feel of the application. This is a temporary cookie. If the cookie is not present at request time, the values it holds take the default values.
Cookie subkeys
Subkey       Purpose

Toolbar      Indicates whether the left toolbar is visible or not. 0 means the toolbar is not shown to the user. 1 means the toolbar is shown to the user.
HelpSection  Indicates whether the help section on the left toolbar is visible or not. 0 means the help section is not shown to the user. 1 means the help section is shown to the user.
Connection information
ConnectionInfo This cookie stores the information relevant to the currently connected Intelligence Server. This is a temporary cookie. If the information in this cookie is lost during a request, the application logs the user in again with the information contained on the URL Query String. This happens because the application believes the user is trying to connect to a new project even if this is not true.
Cookie subkeys
Subkey       Purpose

CurrServer   Stores the name of the Intelligence Server.
CurrProject  Stores the name of the Intelligence Server's project.
CurrPort     Stores the Intelligence Server port number.
different cookies. If this cookie is not received at request time, it is created from the Clusters collection in the XML API.
Cookie subkeys
Subkey       Purpose

P<n>SERVER   Stores the Intelligence Server name.
P<n>PROJECT  Stores the Intelligence Server's project name.
P<n>PORT     Stores the Intelligence Server's port number.
P<n>ALIAS    Stores the project alias of the project. This alias is set via the Preferences page.
Cookie subkeys
Subkey   Purpose

aa - io  This set of cookies (from the aa subkey to the io subkey) represents the user preferences. Each user preference has been assigned one subkey. See the Preferences section at the end of this appendix for a complete list of preferences and their codes.
Cached preferences
CurrUsrOpt
When the user enters the application and logs in to a specific project, all the applicable preferences, whether from the Administrator Preferences, the Project Defaults, or the User Preferences, are combined and cached in this cookie. This is a temporary cookie. The application assumes all preferences are cached at login time. If the information contained in this cookie is lost, the application might behave unpredictably, depending on the preference being read: all check box preferences behave as FALSE or cleared, and numeric preferences might raise an error.
Cookie subkeys
Subkey   Purpose

aa - io  This set of cookies (from the aa subkey to the io subkey) represents the user preferences. Each user preference has been assigned one subkey. See the Preferences section at the end of this appendix for a complete list of preferences and their codes.
Preferences
For reference, this section lists the different preferences and their corresponding codes.
Code Preference
aa GRID_STYLE_OPTION
Code Preference
da DROPDOWN_START_ PAGE_URL_OPTION CURRENT_START_ PAGE_NAME_OPTION CURRENT_START_ PAGE_URL_OPTION NEW_START_PAGE_ NAME_OPTION NEW_START_PAGE_ URL_OPTION ACTUAL_START_PAGE _URL_OPTION ACTUAL_START_PAGE _NAME_OPTION
Code Preference
fv EXPORT_PDF_NUM_ COLS_PER_PAGE_ OPTION PROMPT_ON_EXPORT_ PDF_OPTION ACCESSIBILITY_ OPTION PAGE_BY_EDITOR_ OPTION FILTER_EDITOR_ OPTION OB_SEARCH_ID_ OPTION FORMULA_BAR_ OPTION
ab ac ad ae af ag
USE_DEFAULT_GRID_ STYLE_OPTION GRID_ROWS_OPTION GRID_COLUMNS_ OPTION LOCALE_OPTION EXPORT_FORMAT_ OPTION USER_HEADER_ OPTION
db dc dd de df dg
fw fx fy fz ga gb
Code Preference
ah USER_FOOTER_ OPTION PRINT_ROWS_OPTION PRINT_COLUMNS_ OPTION MAX_PRINT_ROWS_ OPTION MAX_PRINT_COLUMNS _OPTION START_PAGE_OPTION CANCEL_JOBS_ OPTION
Code Preference
dh PROMPT_ADMIN_ CANCEL_JOBS_ OPTION
Code Preference
gc REPORT_TOOLBAR_ OPTION WORKING_SET_ OPTION EXPORT_PDF_MAX_ ROWS_PER_PAGE_ OPTION EXPORT_PDF_MAX_ COLS_PER_PAGE_ OPTION PRINT_HEADER_ FOOTER_OPTION TOP_EDITOR_OPTION VIEW_FILTER_EDITOR_ OPTION EXPORT_ALL_HEADER S_AS_TEXT_OPTION ENCRYPT_CONNECTIO N_OPTION OB_FOLDER_ID_ OPTION OB_SEARCH_TEXT_ OPTION OB_START_OBJECT_ OPTION FILTER_SHOW_EDIT_ OPTION FILTER_AUTO_APPLY_ OPTION FORMULABAR_OPTION PRINT_HEADER_LEFT_ OPTION PRINT_HEADER_ CENTER_OPTION Not used; HTML reserved character
ai aj
di dj
PROMPT_ADMIN_ gd DELETE_JOBS_OPTION ALLOW_USER_EXPORT ge _TEMP_FILES_OPTION ALLOW_USER_USE_ DHTML_OPTION KEEP_PARENT_ OPTION USE_EXPORT_TEMP_ FILES_OPTION WORKING_SET_SIZE_ OPTION DRILL_PRIVILEGE_ OPTION gf
ak
dk
al am an ao ap aq ar as at au av aw
dl dm do
gg gh gi gj
DELETE_JOBS_OPTION dp SAVE_PWD_OPTION AUTHENTICATION_ OPTION INBOX_EXPIRES_ OPTION INBOX_MAX_JOBS_ OPTION INBOX_MAX_SIZE_ OPTION GRAPH_SIZE_OPTION GRAPH_HEIGHT_ OPTION GRAPH_WIDTH_ OPTION OBJECT_BROWSING_ OPTION EXPORT_NEW_ WINDOW_OPTION dq dr ds dt du dv dw dx
USE_DHTML_PROMPTS gk _OPTION USE_DHTML_VALUE_ OPTION SEARCH_WORKING_ SET_SIZE_OPTION GRAPH_FORMAT_ OPTION PREVIEW_NEW_ WINDOW_OPTION EXPORT_VALUES_AS_ TEXT_OPTION REFRESH_METHOD_ OPTION PRINT_GRID_AND_ GRAPH_TOGETHER_ OPTION gl gm gn go gp gq gr
ax ay
dy dz
Code Preference
az PAGE_NUMBERING_ OPTION PAGE_LAYOUT_ OPTION ADMIN_HEADER_ OPTION ADMIN_FOOTER_ OPTION MAX_EXPORT_COLS_ OPTION GRID_CSS_OPTION MAX_EXPORT_ROWS_ OPTION MAX_EXPORT_ROWS_ TEXT_OPTION
Code Preference
ea EXPORT_COLS_ SPREADSHEET_ OPTION EXPORT_GRAPH_ FORMAT_OPTION
Code Preference
gu PRINT_HEADER_RIGHT _OPTION PRINT_FOOTER_LEFT_ OPTION PRINT_FOOTER_ CENTER_OPTION PRINT_FOOTER_RIGHT _OPTION FORMAT_SEARCH_ID_ OPTION FORMAT_BAR_OPTION DD_MENU_OPTION
ba bb
eb ec
gv
EXPORT_GRAPH_AND_ gw GRID_FORMAT_ OPTION POP_UP_DRILL_ OPTION PROMPTS_ON_ONE_ PAGE_OPTION REQUIRED_PROMPTS_ FIRST_OPTION GRAPH_CATEGORIES_ AND_SERIES_FROM_ DESKTOP_OPTION GRAPH_CATEGORIES_ OPTION GRAPH_SERIES_ OPTION EXPORT_PLAINTEXT_ DELIMITER_OPTION SEARCH_OBJECTS_ OPTION DOCUMENT_EXPORT_ FORMAT_OPTION ALLOW_USER_SET_ GRAPH_SETTINGS_ OPTION REPORT_TAB_OPTION REUSE_MESSAGE_ FOR_SCHEDULED_ REPORTS_OPTION gx gy gz ha
bc bd bf bg
ed ef eg eh
bh bi bk bm bn bo
ei
hb hc hd he hf hg
UNICODE_OPTION REPORT_BAR_OPTION CHANGE_EXPIRED_ PWD_OPTION PRINT_CELLS_PER_ BLOCK_OPTION PRINT_GRAPHS_PER_ BLOCK_OPTION PRINT_MARGIN_LEFT_ OPTION PRINT_MARGIN_RIGHT _OPTION PRINT_MARGIN_TOP_ OPTION PRINT_MARGIN_ BOTTOM_OPTION PRINT_HEADER_SIZE_ OPTION
DEFAULT_AUTHENTICA ej TION_OPTION PIVOT_MODE_OPTION INBOX_SORT_OPTION CANCEL_JOBS_ PROMPT_OPTION DELETE_JOBS_ PROMPT_OPTION USE_SECURITY_ PLUGIN_OPTION SECURITY_PLUGIN_ CLASS_OPTION SECURITY_PLUGIN_ FREQ_OPTION PROMPT_ON_PRINT_ OPTION ek el em en
bp bq
eo ep
hh hi
br bs
eq er
Code Preference
bt bu bw by bz ca cb cc cd ce cf cg ch ci cj ck cl
Code Preference
ATTRIBUTE_DISPLAY_ IN_GRIDS_OPTION ALLOWED_FILE_ EXTENSION_OPTION MAXIMUM_FILE_SIZE_ TO_UPLOAD_OPTION WARNING_LEVEL_ OPTION
Code Preference
hl hm hn ho PRINT_FOOTER_SIZE_ OPTION PRINT_POPUP_PRINT_ DIALOG_OPTION PRINT_COVER_PAGE_ OPTION PRINT_FIT_ROW_TO_ PAGE_OPTION PRINT_FIT_COL_TO_ PAGE_OPTION PRINT_SCALING_ OPTION PRINT_SHRINK_FONT_ OPTION SERVER_LOCALE_ OPTION HIDE_MYREPORTS_ OPTION IFRAME_YES_NO_ OPTION ALLOW_SEAMLESS_ LOGIN_OPTION SECURE_OPTION WORDWRAP_ ATTRIBUTES_OPTION ENABLE_PDF_EXPORT _OPTION PRINT_EXPAND_ PAGEBY_OPTION ENABLE_LOCK_ HEADERS_OPTION ENABLE_ADD_ EMBEDDED_FILTER_ OPTION SHARED_REPORTS_ FOLDER_ID_NAME_ OPTION
PROMPT_ON_EXPORT_ es OPTION EXECUTION_MODE_ OPTION EXECUTION_WAIT_ TIME_OPTION PRINT_FILTER_ DETAILS_OPTION EXPORT_FILTER_ DETAILS_OPTION ELE_PROMPT_BLOCK_ COUNT_OPTION EXPORT_SECTION_ OPTION SHOW_DATA_OPTION SHOW_TOOLBAR_ OPTION SHOW_HELP_OPTION WAIT_TIME_IN_WAIT_ PAGE_OPTION et eu ev ew ex ey ez fa fb fc
SUBSCRIPTION_SORT_ hs OPTION AUTOMATIC_PAGE_BY _OPTION USE_TIMESTAMP_ VALUE_OPTION FONT_FAMILY_OPTION SORT_METRICS_PAGE _BY_OPTION OVERWRITE_CSS_ FONT_OPTION FILTER_TEMPLATE_ OPTION SUBSCRIPTION_ OPTION FILTER_ON_ SELECTION_OPTION PAGE_BY_DHTML_ OPTION START_PAGE_LAYOUT _OPTION ht hu hv hx hy hz ia ib ic
cm
fj
id
Code Preference
cn SEARCH_TIMEOUT_ OPTION DEFAULT_OBJECT_ NAME_OPTION
Code Preference
fk START_PAGE_NAME_ OPTION
Code Preference
ie ENABLE_HIDDEN_ FORMS_ADV_SORT_ OPTION SAVE_REPORT_ SETTINGS_OPTION DISABLE_RMC_DRILL_ OPTION SHOW_EVAL_LEVEL_ VIEW_FILTER_OPTION ENABLE_COLUMN_ WIDTHS_OPTION HIGH_SECURY_CHECK _OPTION TIME_ZONE_OPTION OFFICE_VERSION_ OPTION EXPORT_PDF_NUMBER _CELLS_OPTION DEFAULT_HEADER_ WIDTH_OPTION DEFAULT_GRID_WIDTH _OPTION
co cp cq cr cs ct cv
fl
MAX_PROJECT_NAME_ fm OPTION PROJECT_ALIAS_ OPTION MAX_PROJECT_TABS_ OPTION SHOW_TAB_OPTION ADD_TO_MY_HISTORY _LIST_OPTION ORIENTATION_ PREVIEW_OPTION PAPER_SIZE_PREVIEW _OPTION fn fo fp fq fr
EXPORT_PDF_SHRINK_ ik GRAPH_OPTION EXPORT_PDF_GRID_ GRAPH_SAME_PAGE_ OPTION EXPORT_PDF_MAX_ ROWS_OPTION EXPORT_PDF_MAX_ COLS_OPTION EXPORT_PDF_NUM_ ROWS_PER_PAGE_ OPTION il
cw cx cz
fs
im in io
Permanent cookies: used for holding information that, by design, should be remembered at all times, regardless of whether the user has logged out of a project or closed the browser.
Temporary cookies: remembered only while the user keeps the same browser session open. The information saved in these cookies is used for all projects that a user might access in the same browser session. These cookies are deleted when the browser is closed.

Project cookies: similar to temporary cookies, but these differ from project to project. These cookies are deleted when a user logs out of MicroStrategy Web (even though the browser may remain open).
Permanent cookies, page 962 Temporary cookies, page 967 Project cookies, page 968
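What makes a cookie permanent rather than temporary is simply whether an expiry attribute is sent with it: a browser discards cookies that have none when it closes. The generic sketch below illustrates this (the cookie names and values are hypothetical, not MicroStrategy's actual cookies):

```python
from http.cookies import SimpleCookie

jar = SimpleCookie()

# Permanent cookie: an explicit lifetime makes the browser persist it
# to disk, so it survives browser restarts.
jar["MSTRPrj_demo"] = "Uid=jsmith"                # hypothetical values
jar["MSTRPrj_demo"]["max-age"] = 365 * 24 * 3600  # one year, in seconds

# Temporary (session) cookie: no expiry attribute, so the browser
# discards it when the window is closed.
jar["ConnectionInfo"] = "CurrPort=34952"

print(jar["MSTRPrj_demo"]["max-age"])   # 31536000
print(jar["ConnectionInfo"]["max-age"]) # empty string: no expiry set
```

Project cookies behave like temporary cookies at the HTTP level; the application itself clears them at logout rather than relying on an expiry attribute.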
Permanent cookies
Server location
sLoc This cookie holds the identifier of the locale the application uses by default when rendering HTML content. It is used by the Admin servlet and the Session Manager to display the appropriate localized interface to the user. It should be a copy of the actual preference saved in the metadata; it is overwritten if the value found in the metadata is different from the one stored in the cookie.
actual preference saved in the metadata. It is overwritten each time the user logs in if the value found in the metadata is different from the one stored in the cookie.
Cancel requests
cr Stores whether the user wants jobs requested during a session that are still pending execution to be canceled at logout. This is used each time the user logs in to a new project. It should be a copy of the actual preference saved in the metadata. It is overwritten each time the user logs in if the value found in the metadata is different from the one stored in the cookie.
the bean. If the bean name changes in the pageConfig.xml file, the name of the cookie is changed (and a leftover ob cookie is kept, but is not used).
Help display
hlp Determines whether the Help section is shown. This is used by all pages that display the Need Help link (most pages in the application). All pages that have a collapsible help section share the same value, to ensure consistency: if the section is hidden on one page, it is hidden on the rest.
Determines whether the report toolbar is displayed when a report (grid, graph, or both) is accessed. It is used by the ReportFrameBean, which checks whether its child report toolbar is displayed (value is BrowserSettings.BROWSER_SETTING_OPENED).
Debug flags
dbf
This cookie stores the debug flags, which determine the debug information to include when transforming a bean. The information is rendered as a comment in the HTML and usually includes the bean's XML, formal parameter values, and request keys. This cookie is set by adding debug=value to the URL.
Page-by display
pbs This is the cookie for the browser setting that determines whether or not to display the page-by section. The ReportFrameBean modifies this cookie each time the user toggles the display.
iFrame display
iFrameVisible This cookie indicates whether the hidden IFrame used for report manipulations is visible to the user on the HTML pages. It is intended mainly for debugging purposes; users should not see the IFrame in regular use.
Temporary cookies
Features state
mstrFeat Used to save the state of the Features component in a cookie. This cookie is used only when the Administrator has allowed users to save this type of information in user cookies instead of session variables. This cookie is deprecated as of MicroStrategy Web version 7.3.1.
SessionManager state
mstrSmgr Used to save the state of the SessionManager component in a cookie. It contains all the information about the Intelligence Server sessions that a user opens during a browser session. This cookie is only used when the Administrator has allowed users to save this type of information in user cookies instead of session variables.
Project cookies
Last page name
lpn Saves the name of the page that was last accessed successfully. It is used by the Cancel feature so the application knows which page to return to when the user cancels an action.
Return to page
rtn Name of the page that is the target of a Return To link. It is used by the Return To feature so the application knows which page to return to when the user clicks this link.
Return to state
rts Saves the minimal state of the beans on the page that is the target of a Return To link. It is used by the Return To feature to restore the state of the objects on that page when the user clicks this link.
Return to title
rtt Saves the title of the page that is the target of a Return To link. It is used by the Return To feature so the application knows the title to display as the link label.
Last folder
lf Name of the browser setting that represents the last folder visited.
Filter ID
fid
Used by the TemplateFilterExecBean to identify the currently selected filter to be used in filter/template execution.
Template ID
tid Used by the TemplateFilterExecBean to identify the currently selected template to be used in filter/template execution.
F
DIAGNOSTICS AND PERFORMANCE LOGGING
The components listed below represent only a subset of the components available for logging diagnostics in the Diagnostics and Performance Logging tool.
Attribute Editor, Consolidation Editor, Filter Editor, Hierarchy Editor, Metric Editor, Partition Editor

Trace Level
For each of these editors, all the components perform Function Level Tracing, except for DSS Components, which can also perform Explorer and Component Tracing.

Components
DSS ColumnEditor, DSS CommonDialogsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS ExpressionboxLib, DSS ExtensionEditor, DSS FactEditor
DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS DateLib, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS ExpressionboxLib, DSS FilterLib, DSS FTRContainerLib, DSS ObjectsSelectorLib, DSS PromptEditorsLib, DSS PromptsLib
DSS CommonDialogsLib, DSS EditorContainer, DSS EditorManager, DSS HierarchyEditor
DSS CommonDialogsLib, DSS Components, DSS DimtyEditorLib, DSS EditorContainer, DSS EditorManager, DSS ExpressionboxLib, DSS MeasureEditorLib, DSS PromptsLib, DSS PropertiesControlsLib
DSS CommonDialogsLib, DSS Components, DSS DataSliceEditor, DSS EditorContainer, DSS EditorManager, DSS FilterLib, DSS PartitionEditor
Print Schema, Project Duplication, Project Upgrade, Prompt Editor

Trace Level
Function Level Tracing. For the Prompt Editor, all the components perform Function Level Tracing, except for DSS Components, which can also perform Explorer and Component Tracing.

Components
DSS AttributeCreationWizard, DSS FactCreationWizard, DSS ProgressIndicator, DSS ProjectCreationLib, DSS WHCatalog
DSS AsynchLib, DSS ProgressIndicator, DSS ProjectUpgradeLib, DSS SchemaManipulation
DSS AsynchLib, DSS ProgressIndicator, DSS ProjectUpgradeLib, DSS SchemaManipulation
DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS PromptEditorsLib, DSS PromptStyles, DSS SearchEditorLib
Components
DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS DateLib, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS ExportLib, DSS ExpressionboxLib, DSS FilterLib, DSS FTRContainerLib, DSS GraphLib, DSS GridLib, DSS ObjectsSelectorLib, DSS PageByLib, DSS PrintGraphInterface, DSS PrintGridInterface, DSS PromptEditorsLib, DSS PromptsLib, DSS PropertySheetLib, DSS RepDrillingLib, DSS RepFormatsLib, DSS RepFormsLib, DSS ReportControl, DSS ReportDataOptionsLib, DSS ReportSortsLib, DSS ReportSubtotalLib

Trace Level
All the components perform Function Level Tracing, except for DSS Components, which can also perform Explorer and Component Tracing.

Server Administration
DSS AdminEditorContainer, DSS DatabaseInstanceWizard, DSS DBConnectionConfiguration, DSS DBRoleConfiguration, DSS DiagnosticsConfiguration, DSS EventsEditor, DSS PriorityMapEditor, DSS PrivilegesEditor, DSS ProjectConfiguration, DSS SecurityRoleEditor, DSS SecurityRoleViewer, DSS ServerConfiguration, DSS UserEditor, DSS VLDBEditor

Table Editor
DSS CommonDialogsLib, DSS EditorContainer, DSS EditorManager, DSS TableEditor
Components
DSS CommonDialogsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS ExportLib, DSS FTRContainerLib, DSS GraphLib, DSS GridLib, DSS PageByLib, DSS PrintGraphInterface, DSS PrintGridInterface, DSS PromptsLib, DSS PropertySheetLib, DSS RepDrillingLib, DSS RepFormatsLib, DSS RepFormsLib, DSS ReportControl, DSS ReportDataOptionsLib, DSS ReportSortsLib, DSS ReportSubtotalLib

Transformation Editor
DSS CommonDialogsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS ExpressionboxLib, DSS TransformationEditor

DSS CommonDialogsLib, DSS DatabaseInstanceWizard, DSS DBRoleConfiguration, DSS SchemaManipulation, DSS WHCatalog

Trace Level
All the components perform Function Level Tracing, except for DSS Components, which can also perform Explorer and Component Tracing.
Client Connection
Trace levels: Authentication Tracing, Session Tracing, Data Source Tracing, Data Source Enumerator Tracing, Report Source Tracing, Element Source Tracing

Element Browsing
DSS DBElementServer

Object Browsing
DSS MDServer, DSS ObjectServer, DSS ReportNetClient

Trace levels for these components: Element Source Tracing, Object Tracing, Access Tracing, SQL Tracing, Content Source Tracing, Object Source Tracing, Report Source Tracing, Process Tracing
G
COMMAND MANAGER RUNTIME
Introduction
MicroStrategy Command Manager can perform various MicroStrategy administrative and application development tasks by means of a simple scripting language. For detailed information about Command Manager, see Automating processes using Command Manager, page 464.

Command Manager Runtime, a lightweight version of Command Manager for bundling with OEM applications, is also available. For more information about Command Manager Runtime, see Using Command Manager with OEM software, page 473.

This appendix provides a list of the commands available for Command Manager Runtime. Topics include:
2008 MicroStrategy, Inc.
Executing a script with Command Manager Runtime, page 980
Syntax reference guide, page 982
Cache management statements, page 983
Connection mapping management statements, page 989
DBConnection management statements, page 992
DBInstance management statements, page 996
DBLogin management statements, page 1000
Event management statements, page 1003
Miscellaneous statements, page 1005
Project configuration management statements, page 1007
Project management statements, page 1011
Schedule management statements, page 1013
Server configuration management statements, page 1018
Server management statements, page 1022
To execute a script with Command Manager Runtime, call the Command Manager Runtime executable, cmdmgrlt.exe, with the following parameters. If the project source name, the input file, or an output file contains a space in its name or path, you must enclose the name in double quotes.
Connection (required)
Connect to a project source: -n ProjectSourceName -u UserName -p Password
Note: If -p is omitted, Command Manager Runtime assumes a null password.

Script input (required)
Specify the script file to be executed: -f InputFile

Script output options (optional)
Begin each log file with a header containing information such as the version of Command Manager used: -h
Print instructions in each log file and on the console: -i
Print tech support error and exit codes in each log file and on the console: -e
Display script output on the console: -showoutput
Note: You can omit one or more of these parameters. For example, if you only want to log error messages, use only the -of parameter.
A full list of parameters can be accessed from a command prompt by entering cmdmgrlt.exe /?.
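Putting the required parameters together, a typical invocation might look like the following sketch (the project source name, credentials, and script path are hypothetical placeholders; names containing spaces are enclosed in double quotes, as noted above):

```
cmdmgrlt.exe -n "My Project Source" -u Administrator -p MyPassword -f "C:\scripts\config.scp" -h -i -e -showoutput
```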
Each statement entry in this reference has four sections. The first section provides a brief description of the statement's function. The second contains the statement's syntax, with reserved words in UPPER CASE and identifiers in italics. The third describes identifiers and other included tokens that are not reserved words, along with their associated restrictions. The fourth provides an example statement.
Some symbols used in the syntax reference and in the outlines are not part of the syntax at all, but have specific meanings related to the statement:
Tokens between parentheses and separated by a pipe ( | ) are exclusive options; only one of them may be used in a statement. Tokens in italics are identifiers and should be replaced by the desired parameter; they may carry additional restrictions, which are given in the individual statement syntax guides. Identifiers that share a name but carry an index (1, 2, ..., n) are of the same type; each represents a separate parameter and may have additional restrictions based on that type.
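As a worked reading of this notation, consider the cache-expiration clause CACHEEXP (NEVER | [IN] number_of_hours HOURS) from the report caching statement. The parenthesized, pipe-separated group means exactly one alternative is chosen; square brackets (used throughout this reference, by the usual convention, for optional tokens) mean that IN may be omitted; and the italicized number_of_hours is an identifier to be replaced with an integer. All of the following are therefore valid expansions:

```
CACHEEXP NEVER
CACHEEXP 24 HOURS
CACHEEXP IN 24 HOURS
```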
| FALSE)] [ENABLEPROMPTEDCACHING (TRUE | FALSE)] [ENABLENONPROMPTEDCACHING (TRUE | FALSE)] [CREATECACHESPERUSER (TRUE | FALSE)] [CREATECACHESPERDBLOGIN (TRUE | FALSE)] [CREATECACHESPERDBCONN (TRUE | FALSE)] [CACHEEXP (NEVER | [IN] number_of_hours HOURS)]; where:
project_name is the name of the project in which report server caching is to be altered, of type string, between double quotes (" "). cache_file_directory is the directory for cache files, of type string, between double quotes (" "). number_of_Kb is the maximum number of kilobytes to be used in RAM for report caching, of type integer. number_of_caches is the maximum number of caches to be stored, of type integer. number_of_hours is the number of hours after which the report cache expires, of type integer.
Example
ALTER REPORT CACHING IN PROJECT "MicroStrategy Tutorial" ENABLED CACHEFILEDIR ".\Caches\JSMITH4" MAXRAMUSAGE 10240 CREATECACHESPERUSER FALSE CACHEEXP IN 24 HOURS MAXNOCACHES 10000;
where:
project_name is the name of the project in which the object server caching is to be modified, of type string, between double quotes (" "). number_of_Kb is the maximum number of kilobytes of RAM to be used for object caching, of type integer.
Example
ALTER OBJECT CACHING IN PROJECT "MicroStrategy Tutorial" MAXRAMUSAGE 10240;
project_name is the name of the project in which the element server caching is to be modified, of type string, between double quotes (" "). number_of_Kb is the maximum number of kilobytes of RAM to be used for element caching, of type integer.
Example
ALTER ELEMENT CACHING IN PROJECT "MicroStrategy Tutorial" MAXRAMUSAGE 10240;
project_name is the name of the project for which the caching properties are to be listed, of type string, between double quotes (" ").
Example
LIST ALL PROPERTIES FOR REPORT CACHING IN PROJECT "MicroStrategy Tutorial";
project_name is the name of the project for which caching is to be purged, of type string, between double quotes (" ").
Example
PURGE ELEMENT CACHING IN PROJECT "MicroStrategy Tutorial";
cache_name is the name of the report cache to be purged/deleted, of type string, between double quotes (" "). project_nameN is the project from which the invalid report caches are to be deleted, of type string, between double quotes (" ").
Example
DELETE REPORT CACHE "Customer List" IN PROJECT "MicroStrategy Tutorial";
cache_name is the name of the cache to be expired, of type string, between double quotes (" "). project_name is the project from which the report cache is to be expired, of type string, between double quotes (" ").
Example
EXPIRE ALL REPORT CACHES IN PROJECT "Enterprise Manager";
cache_name is the name of the cache for which the properties are to be listed, of type string, between double quotes (" "). project_name is the project from which the report cache properties are to be listed, of type string, between double quotes (" ").
Example
LIST ALL PROPERTIES FOR REPORT CACHE "Top 10 Reports" IN PROJECT "Enterprise Manager";
project_name is the project from which the report caches are to be listed, of type string, between double quotes (" ").
Example
LIST ALL REPORT CACHES FOR PROJECT "MicroStrategy Tutorial";
WH_table_name is the warehouse table name that has changed, and is to be used to invalidate all report caches depending on this table, of type string, between double quotes (" "). cache_name is the name of the report cache to be invalidated, of type string, between double quotes (" "). project_name is the project from which the report caches are to be invalidated, of type string, between double quotes (" ").
Example
INVALIDATE ALL REPORT CACHES WHTABLE "PMT_INVENTORY" IN PROJECT "MicroStrategy Tutorial";
login_name is the login name of an existing user to whom the connection map is to apply, of type string, between double quotes (" "). dbinstance_name is the name of an existing database instance (role) that will be used for the connection map, of type string, between double quotes (" "). dbconnection_name is the name of an existing database connection to be used for the mapping, of type string, between double quotes (" "). dblogin_name is the name of an existing database login to be used for the mapping, of type string, between double quotes (" "). project_name is the name of the project in which the connection map is to be used, of type string, between double quotes (" ").
Example
CREATE CONNECTION MAP FOR USER "palcazar" DBINSTANCE "DBRole1" DBCONNECTION "DBConn1" DBLOGIN "DBLogin1" ON PROJECT "3-Tier Project";
ALTER CONNECTION MAP FOR USER login_name DBINSTANCE dbinstance_name [DBCONNECTION dbconnection_name] [DBLOGIN dblogin_name] ON PROJECT project_name; where:
login_name is the login name of an existing user to whom the connection map applies, of type string, between double quotes (" "). dbinstance_name is the name of an existing database instance (role) to be used for the connection map, of type string, between double quotes (" "). dbconnection_name is the name of an existing database connection to be used for the mapping, of type string, between double quotes (" "). dblogin_name is the name of an existing database login to be used for the mapping, of type string, between double quotes (" "). project_name is the name of the project in which the connection map is to be used, of type string, between double quotes (" ").
Example
ALTER CONNECTION MAP FOR USER "palcazar" DBINSTANCE "DBRole1" DBCONNECTION "DBConn1" DBLOGIN "DBLogin1" ON PROJECT "3-Tier Project";
where:
login_name is the login name of an existing user that has a connection map, of type string, between double quotes (" "). dbinstance_name is the name of an existing database instance (role) that is being used for the connection map, of type string, between double quotes (" "). project_name is the name of the project in which the connection map is active, of type string, between double quotes (" ").
Example
DELETE CONNECTION MAP FOR USER "palcazar" DBINSTANCE "DBRole1" ON PROJECT "3-Tier Project";
[USEPARAMQUERIES (TRUE | FALSE)] [MAXCANCELATTEMPT number_of_seconds] [MAXQUERYEXEC number_of_seconds] [MAXCONNATTEMPT number_of_seconds] [CHARSETENCODING (MULTIBYTE | UTF8)] [TIMEOUT number_of_seconds] [IDLETIMEOUT number_of_seconds]; where:
dbconnection_name is the name of the database connection to be created, of type string, between double quotes (" "). odbc_data_source_name is an ODBC data source name local to the machine, of type string, between double quotes (" "). default_login_name is the name of the default database login to be used with the connection, of type string, between double quotes (" "). number_of_seconds is the length of time for those properties that require a time, in seconds, of type integer.
Example
CREATE DBCONNECTION "DBConn1" ODBCDSN "SQLSERVER_WH" DEFAULTLOGIN "DBLogin1" DRIVERMODE MULTIPROCESS EXECMODE ASYNCHRONOUS STATEMENT MAXCANCELATTEMPT 10 MAXQUERYEXEC 20 TIMEOUT 50 IDLETIMEOUT 100;
FALSE)] [USEPARAMQUERIES (TRUE | FALSE)] [MAXCANCELATTEMPT new_number_of_seconds] [MAXQUERYEXEC new_number_of_seconds] [MAXCONNATTEMPT new_number_of_seconds] [CHARSETENCODING (MULTIBYTE | UTF8)] [TIMEOUT new_number_of_seconds] [IDLETIMEOUT new_number_of_seconds]; where:
dbconnection_name is the name of the database connection to be altered, of type string, between double quotes (" "). new_dbconnection_name is the new name of the database connection to be altered, of type string, between double quotes (" "). new_odbc_data_source_name is the new ODBC data source name local to the machine that the connection is to use, of type string, between double quotes (" "). new_default_login_name is the name of the new default database login to be used with the connection, of type string, between double quotes (" "). new_number_of_seconds is the new length of time for those properties that require a time, in seconds, of type integer.
Example
ALTER DBCONNECTION "DBConn1" TIMEOUT 100;
where:
dbconnection_name is the name of the database connection to be deleted, of type string, between double quotes (" ").
Example
DELETE DBCONNECTION "DBConn1";
property_nameN is the name of the property to be listed for the database connection, from the following list: ID, NAME, ODBCDSN, DEFAULTLOGIN, DRIVERMODE, EXECMODE, USEEXTENDEDFETCH, USEPARAMQUERIES, MAXCANCELATTEMPT, MAXQUERYEXEC, MAXCONNATTEMPT, CHARSETENCODING, TIMEOUT, IDLETIMEOUT. dbconnection_name is the name of the database connection for which to list properties, of type string, between double quotes (" ").
Example
LIST ALL PROPERTIES FOR DBCONNECTION "DBConn1";
Example
LIST ALL DBCONNECTIONS;
dbinstance_name is the name of the DBInstance to be created, used as a primary DBInstance or used to specify a data mart, of type string, between double-quotes (" "). dbconnection_type is the type of the database connection (DBMS), of type string, between double quotes (" ").
dbconnection_name is the DBConnection to be used in this DBInstance, of type string, between double quotes (" "). description is the description of the DBInstance, of type string, between double quotes (" "). database_name is the name of the database for the intermediate table storage, of type string, between double quotes (" "). tablespace_name is the table space for the intermediate table storage, of type string, between double quotes (" "). dbinstance_name2 is the name of the DBInstance, the new data mart, or the DBInstance to be used as primary, of type string, between double quotes (" "). table_prefix is the prefix to be used in this DBInstance, of type string, between double quotes (" "). no_high_connections is the number of connections to be established for high priority jobs, of type integer. no_medium_connections is the number of connections to be established for medium priority jobs, of type integer. no_low_connections is the number of connections to be established for low priority jobs, of type integer.
Example
CREATE DBINSTANCE "Production Database" DBCONNTYPE "Microsoft SQL Server 2000" DBCONNECTION "Data" DESCRIPTION "Production Database" HIGHTHREADS 10 MEDIUMTHREADS 5 LOWTHREADS 7;
new_dbconnection_name] [DESCRIPTION new_description] [DATABASE new_database_name] [TABLESPACE new_tablespace_name] [PRIMARYDBINSTANCE new_dbinstance_name] [DATAMART new_dbinstance_name] [TABLEPREFIX new_table_prefix] [HIGHTHREADS new_no_high_connections] [MEDIUMTHREADS new_no_medium_connections] [LOWTHREADS new_no_low_connections]; where:
dbinstance_name is the DBInstance to be modified, of type string, between double quotes (" "). new_dbinstance_name is the new name of the DBInstance, the new data mart, or the DBInstance to be used as primary, of type string, between double quotes (" "). new_dbconnection_type is the type of the database connection (DBMS), of type string, between double quotes (" "). new_dbconnection_name is the new DBConnection to be used in this DBInstance, of type string, between double quotes (" "). new_description is the new description of the DBInstance, of type string, between double quotes (" "). new_database_name is the new name of the database for the intermediate table storage, of type string, between double quotes (" "). new_tablespace_name is the new table space for the intermediate table storage, of type string, between double quotes (" "). new_table_prefix is the new prefix to be used in this DBInstance, of type string, between double quotes (" "). new_no_high_connections is the new number of connections to be established for high priority jobs, of type integer. new_no_medium_connections is the new number of connections to be established for medium priority jobs, of type integer.
new_no_low_connections is the new number of connections to be established for low priority jobs, of type integer.
Example
ALTER DBINSTANCE "Production Database" DBCONNTYPE "Oracle 8i" DATABASE "Production" TABLESPACE "managers" HIGHTHREADS 8;
dbinstance_name is the name of the DBInstance to be deleted, of type string, between double quotes (" ").
Example
DELETE DBINSTANCE "Production Database";
dbinstance_name is the name of the DBInstance whose properties are to be listed, of type string, between double quotes (" ").
Example
LIST ALL PROPERTIES FOR DBINSTANCE "Production Database";
Example
LIST ALL DBINSTANCES;
dblogin_name is the name of the new database login, of type string, between double quotes (" ").
database_login is the database user to be used by the database login, of type string, between double quotes (" "). database_password is the database password to be used by the database login, of type string, between double quotes (" ").
Example
CREATE DBLOGIN "DbLogin1" LOGIN "dbadmin" PASSWORD "dbadmin";
dblogin_name is the name of the database login to be modified, of type string, between double quotes (" "). new_dblogin_name is the new name of the database login, of type string, between double quotes (" "). new_database_login is the database user to be used by the database login, of type string, between double quotes (" "). new_database_password is the database password to be used by the database login, of type string, between double quotes (" ").
Example
ALTER DBLOGIN "DbLogin1" NAME "DbLogin2" LOGIN "dbadmin" PASSWORD "dbadmin";
dblogin_name is the name of the database login to be deleted, of type string, between double quotes (" ").
Example
DELETE DBLOGIN "DbLogin1";
property_nameN is the name of the property to be listed for the database login, from the following list: ID, NAME, LOGIN. dblogin_name is the name of the database login for which properties are to be listed, of type string, between double quotes (" ").
Example
LIST NAME, LOGIN FOR DBLOGIN "DbLogin1";
Example
LIST ALL DBLOGINS;
event_name is the name of the event to be triggered, of type string, between double quotes (" ").
Example
TRIGGER EVENT "My Daily Report";
event_name is the name of the event to be created, of type string, between double quotes (" "). description is the description of the event, of type string, between double quotes (" ").
Example
CREATE EVENT "Database Load" DESCRIPTION "Database Load";
event_name is the name of the event to be altered, of type string, between double quotes (" "). new_event_name is the new name of the event, of type string, between double quotes (" "). new_description is the new description of the event, of type string, between double quotes (" ").
Example
ALTER EVENT "Database Load" NAME "DBMS Load";
event_name is the name of the event to be deleted, of type string, between double quotes (" ").
Example
DELETE EVENT "DBMS Load";
Example
LIST ALL EVENTS;
Miscellaneous statements
Start service statement, page 1005 Stop service statement, page 1006
service_name is the service name (NOT the display name) to be started, of type string, between double quotes (" "). machine_name is the network name of the machine running a service, of type string, between double quotes (" ").
Example
START SERVICE "MAPPING" IN "HOST_MSTR";
service_name is the service name (NOT the display name) to be stopped, of type string, between double quotes (" "). machine_name is the network name of the machine running a service, of type string, between double quotes (" ").
Example
STOP SERVICE "MAPPING" IN "HOST_MSTR";
where:
project_description is the description of the project, of type string, between double quotes (" "). html_input_file is the name of the Windows file that contains the HTML or plain text to be used as input for the project status, of type string, between double quotes (" "). warehouse_name is the warehouse for the project, of type string, between double quotes (" "). folder_path is the location of the HTML document directory, of type string, between double quotes (" "). number_of_attribute_elements is the maximum number of elements to display when browsing attributes, of type integer. number_of_seconds is the maximum report execution time in seconds, of type integer. number_of_report_result_rows is the maximum number of report result rows, of type integer. number_of_element_rows is the maximum number of element rows, of type integer. number_of_intermediate_result_rows is the maximum number of intermediate result rows, of type integer. number_of_jobs_per_user_acct is the maximum number of jobs per user account, of type integer. number_of_jobs_per_user_session is the maximum number of jobs per user session, of type integer. number_of_executing_jobs_per_user is the maximum number of executing jobs per user, of type integer. number_of_jobs_per_project is the maximum number of jobs per project, of type integer. number_of_user_sessions_per_project is the maximum number of user sessions per project, of type integer.
drill_map is the default project drill map to be assigned to the configuration, including the location path, of type string, between double quotes (" "). report_template is the default template for the report object, of type string, between double quotes (" "). template_template is the default template for the template object, of type string, between double quotes (" "). metric_template is the default template for the metric object, of type string, between double quotes (" "). project_name is the project for which the configuration will be modified, of type string, between double quotes (" ").
Example
ALTER PROJECT CONFIGURATION DESCRIPTION "MicroStrategy Tutorial Project (version 7.2)" WAREHOUSE "Tutorial Data" DOCDIRECTORY "C:\Program Files\MicroStrategy\Tutorial" MAXNOATTRELEMS 1000 USEWHLOGINEXEC FALSE MAXREPORTEXECTIME 600 MAXNOREPORTRESULTROWS 32000 MAXNOELEMROWS 32000 MAXNOINTRESULTROWS 32000 MAXJOBSUSERACCT 100 MAXJOBSUSERSESSION 100 MAXEXECJOBSUSER 100 MAXJOBSPROJECT 1000 MAXUSERSESSIONSPROJECT 500 IN PROJECT "MicroStrategy Tutorial";
ALTER PROJECT CONFIGURATION PROJDRILLMAP "\Public Objects\Drill Maps\MicroStrategy Tutorial Drill Map" REPORTTPL "Blank Report" REPORTSHOWEMPTYTPL FALSE TEMPLATETPL "Default Template" TEMPLATESHOWEMPTYTPL FALSE METRICTPL "Default Metric" METRICSHOWEMPTYTPL FALSE IN PROJECT "MicroStrategy Tutorial";
dbinstance_name is the available data mart database instance to be added to the project, of type string, between double quotes (" "). project_name is the project for which the data mart database instances will be added, of type string, between double quotes (" ").
Example
ADD DBINSTANCE "Extra Tutorial Data" TO PROJECT "MicroStrategy Tutorial";
dbinstance_name is the data mart database instance to be removed from the project, of type string, between double quotes (" "). project_name is the project for which the data mart database instance will be removed, of type string, between double quotes (" ").
Example
REMOVE DBINSTANCE "Tutorial Data" FROM PROJECT "MicroStrategy Tutorial";
project_name is the name of the project to be resumed, of type string, between double quotes (" ").
Example
RESUME PROJECT "MicroStrategy Tutorial";
project_name is the name of the project to be loaded, of type string, between double quotes (" ").
Example
LOAD PROJECT "MicroStrategy Tutorial";
project_name is the name of the project to be unloaded, of type string, between double quotes (" ").
Example
UNLOAD PROJECT "MicroStrategy Tutorial";
project_name is the name of the project to be updated, of type string, between double quotes (" ").
Example
UPDATE PROJECT "MicroStrategy Tutorial";
project_name is the name of the project to be deleted, of type string, between double quotes (" ").
Example
DELETE PROJECT "MicroStrategy Tutorial";
(month_of_year1 | month_of_year2 | ... | month_of_year12))) EXECUTE (time_of_day | ALL DAY EVERY number (MINUTES | HOURS [START AFTER MIDNIGHT number MINUTES]))); where:
schedule_name is the name of the schedule to be created, of type string, between double quotes (" ").
description is the description of the schedule, of type string, between double quotes (" ").
start_date is the start date that determines the first day the schedule will be active, of type date (mm/dd/yyyy).
end_date is the end date of the schedule, of type date (mm/dd/yyyy). The schedule will never be triggered again after this date.
event_name is the name of the event to be linked to an event-triggered schedule, of type string, between double quotes (" "). The schedule is activated when this specific event occurs.
number is the number of days, weeks, months, minutes, and hours, or the number of the day within a month, to be specified for a time-based schedule, of type integer. In general this number specifies an interval, for example, every number weeks. When used with a month it can specify a specific date, for example, March number.
day_of_weekN is the day of the week to be specified for a time-based schedule, from the following list: SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY.
month_of_yearN is the month of the year to be specified for a time-based schedule, from the following list: JANUARY, FEBRUARY, MARCH, APRIL, MAY, JUNE, JULY, AUGUST, SEPTEMBER, OCTOBER, NOVEMBER, DECEMBER.
time_of_day is the time of the day (24-hour format) to be specified for a time-based schedule, of type time (hh:mm).
Example
CREATE SCHEDULE "Database Load" STARTDATE 09/10/2002 ENDDATE NEVER TYPE EVENTTRIGGERED EVENTNAME "Database Load";
CREATE SCHEDULE "Schedule1" STARTDATE 09/10/2002 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY 5 DAYS EXECUTE 10:00;
CREATE SCHEDULE "Schedule2" STARTDATE 09/10/2002 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 5 MINUTES;
CREATE SCHEDULE "Schedule3" STARTDATE 09/10/2002 ENDDATE NEVER TYPE TIMETRIGGERED WEEKLY EVERY 5 WEEKS ON MONDAY, TUESDAY, WEDNESDAY EXECUTE 18:00;
CREATE SCHEDULE "Schedule4" STARTDATE 09/10/2002 ENDDATE 09/27/02 TYPE TIMETRIGGERED MONTHLY DAY 3 OF EVERY 5 MONTHS EXECUTE ALL DAY EVERY 5 HOURS START AFTER MIDNIGHT 10 MINUTES;
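The ordinal yearly form (FIRST | SECOND | THIRD | FOURTH | LAST day-of-week OF month) shown in the ALTER SCHEDULE syntax can also be useful at creation time. The following sketch is illustrative only: it assumes CREATE SCHEDULE accepts the same YEARLY clause as ALTER SCHEDULE, and the schedule name is hypothetical.

```
CREATE SCHEDULE "Schedule5" STARTDATE 09/10/2002 ENDDATE NEVER TYPE TIMETRIGGERED YEARLY FIRST MONDAY OF JANUARY EXECUTE 09:00;
```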
new_number | (FIRST | SECOND | THIRD | FOURTH | LAST) (day_of_week1 | day_of_week2 | ... | day_of_week7) OF (month_of_year1 | month_of_year2 | ... | month_of_year12))) [EXECUTE (new_time_of_day | ALL DAY EVERY new_number (MINUTES | HOURS [START AFTER MIDNIGHT new_number MINUTES]))])]; where:
schedule_name is the name of the schedule to be altered, of type string, between double quotes (" ").
new_schedule_name is the new name of the schedule, of type string, between double quotes (" ").
new_description is the new description of the schedule, of type string, between double quotes (" ").
new_start_date is the new start date that determines the first day the schedule will be active, of type date (mm/dd/yyyy).
new_end_date is the new end date of the schedule, of type date (mm/dd/yyyy). The schedule will never be triggered again after this date.
new_event_name is the new name of the event to be linked to an event-triggered schedule, of type string, between double quotes (" "). The schedule is activated when this specific event occurs.
new_number is the new number of days, weeks, months, minutes, or hours, or the number of the day within a month, to be specified for a time-based schedule, of type integer.
day_of_weekN is the day of the week to be specified for a time-based schedule, from the following list: SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY.
month_of_yearN is the month of the year to be specified for a time-based schedule, from the following list: JANUARY, FEBRUARY, MARCH, APRIL, MAY, JUNE, JULY, AUGUST, SEPTEMBER, OCTOBER, NOVEMBER, DECEMBER.
new_time_of_day is the new time of the day (24-hour format) to be specified for a time-based schedule, of type time (hh:mm).
Example
ALTER SCHEDULE "Schedule8" NAME "NewSchedule1" DESCRIPTION "1" STARTDATE 09/10/2002 ENDDATE NEVER TYPE TIMETRIGGERED YEARLY LAST WEDNESDAY OF MAY EXECUTE 15:30;
schedule_name is the name of the schedule to be deleted, of type string, between double quotes (" ").
Example
DELETE SCHEDULE "Schedule4";
schedule_name is the name of the schedule for which its properties are to be listed, of type string, between double quotes (" ").
Example
LIST PROPERTIES FOR SCHEDULE "Schedule1";
Example
LIST ALL SCHEDULES;
[LICENSECHECKTIME time_of_day]
[HISTORYDIR folder_path]
[MAXNOMESSAGES number_of_messages]
[MESSAGELIFETIME number_of_days]
[MAXNOJOBS number_of_jobs]
[MAXNOCLIENTCONNS number_of_client_connections]
[IDLETIME number_of_seconds]
[WEBIDLETIME number_of_seconds]
[MAXNOXMLCELLS number_of_XML_cells]
[MAXNOXMLDRILLPATHS number_of_XML_drill_paths]
[ENABLEWEBTHROTTLING (TRUE | FALSE)]
[MAXMEMUSAGE percentage]
[MINFREEMEM percentage]
[ENABLEMEMALLOC (TRUE | FALSE)]
[MAXALLOCSIZE number_of_MBs]
[ENABLEMEMCONTRACT (TRUE | FALSE)]
[MINRESERVEDMEM number_of_MBs]
[MEMIDLETIME number_of_seconds]
[WORKSETDIR folder_path]
[MAXRAMWORKSET number_of_KBs];
where:
description is the description to apply to the server, of type string, between double quotes (" ").
number_of_threads is the number of threads to apply to the server, of type integer.
number_of_minutes is the number of minutes to apply to the different properties, of type integer.
number_of_seconds is the number of seconds to apply to the different properties, of type integer.
time_of_day is the time of the day (24-hour format) to be specified for the license check process, of type time (hh:mm).
folder_path is the directory that the server is to use for the History List, of type string, between double quotes (" ").
number_of_messages is the maximum number of History List messages that the server is to store, of type integer.
number_of_days is the number of days that a History List message will be valid, of type integer.
number_of_jobs is the maximum number of concurrent jobs that the server is to execute, of type integer.
number_of_client_connections is the maximum number of concurrent user connections that the server is to accept, of type integer.
number_of_XML_cells is the maximum number of XML cells that the server is to generate for a given report, of type integer.
number_of_XML_drill_paths is the maximum number of XML drill paths that the server is to generate for a given report, of type integer.
percentage is the minimum percentage that the server is to have for free memory, of type integer.
number_of_MBs is the number of megabytes to apply to the different properties, of type integer.
number_of_KBs is the maximum number of kilobytes that the server is to use on memory, of type integer.
Example
ALTER SERVER CONFIGURATION MAXCONNECTIONTHREADS 5 BACKUPFREQ 0 USENTPERFORMANCEMON TRUE USEMSTRSCHEDULER TRUE BALSERVERTHREADS FALSE HISTORYDIR ".\INBOX\dsmith" MAXNOMESSAGES 10 MAXNOJOBS 10000 MAXNOCLIENTCONNS 500 WEBIDLETIME 0 MAXNOXMLCELLS 500000 MAXNOXMLDRILLPATHS 100 MINFREEMEM 0;
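The memory-governing settings listed in the syntax above can be adjusted the same way. The following sketch is illustrative, not prescriptive: it combines MAXMEMUSAGE, MINFREEMEM, ENABLEMEMCONTRACT, MINRESERVEDMEM, and MEMIDLETIME with placeholder values that should be tuned for your environment.

```
ALTER SERVER CONFIGURATION MAXMEMUSAGE 90 MINFREEMEM 5 ENABLEMEMCONTRACT TRUE MINRESERVEDMEM 256 MEMIDLETIME 600;
```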
where:
property_nameN is the name of the configuration property to be listed for the MicroStrategy Intelligence Server, from the following list: MAXCONNECTIONTHREADS, BACKUPFREQ, USENTPERFORMANCEMON, USEMSTRSCHEDULER, SCHEDULERTIMEOUT, BALSERVERTHREADS, HISTORYDIR, MAXNOMESSAGES, MAXNOJOBS, MAXNOCLIENTCONNS, IDLETIME, WEBIDLETIME, MAXNOXMLCELLS, MAXNOXMLDRILLPATHS, MAXMEMUSAGE, MINFREEMEM, DESCRIPTION, CACHECLEANUPFREQ, MESSAGELIFETIME, ENABLEWEBTHROTTLING, ENABLEMEMALLOC, MAXALLOCSIZE, ENABLEMEMCONTRACT, MINRESERVEDMEM, MEMIDLETIME, WORKSETDIR, MAXRAMWORKSET.
Example
LIST ALL PROPERTIES FOR SERVER CONFIGURATION;
Example
APPLY RUN TIME SETTINGS;
start_date is the start date for the statistics to be purged, of type date (mm/dd/yyyy).
end_date is the end date for the statistics to be purged, of type date (mm/dd/yyyy).
timeout_secs is the time-out in seconds for the purging process, of type integer.
Example
PURGE STATISTICS FROM 4/1/2002 TO 5/1/2002 TIMEOUT 15;
where:
machine_name is the network name of the machine running MicroStrategy Intelligence Server, of type string, between double quotes (" ").
This command causes Command Manager to lock the schema. To unlock the schema, use the following command: UNLOCK CONFIGURATION FORCE;
Example
START SERVER IN "d_smith";
machine_name is the network name of the machine running MicroStrategy Intelligence Server, of type string, between double quotes (" ").
Example
STOP SERVER IN "d_smith";
where:
machine_name is the network name of the machine running MicroStrategy Intelligence Server, of type string, between double quotes (" ").
This command causes Command Manager to lock the schema. To unlock the schema, use the following command: UNLOCK CONFIGURATION FORCE;
Example
RESTART SERVER IN "d_smith";
machine_name is the network name of the machine running MicroStrategy Intelligence Server, of type string, between double quotes (" ").
Example
LIST PROPERTIES FOR SERVER "d_smith";
Example
LIST ALL SERVERS;
GLOSSARY
access control list (ACL) A list of users and groups and the access permissions that each has for an object.

active user A user who logs in to a MicroStrategy system. When a user logs in to the system, a user session is established and remains open until the user logs out of the system or the system logs the user out. Users who are logged in but are not doing anything still consume some resources on Intelligence Server.

application object A MicroStrategy object used to provide analysis of and insight into relevant data. Application objects are developed in MicroStrategy Desktop and they are the building blocks for reports and documents. Application objects include these object types: report, document, template, filter, metric, custom group, consolidation, prompt.

authentication The system process of validating user login information. A login ID and password are compared against an authorized list, and if a match is detected, specific access rights and application privileges are granted to the user.
cache A special data store holding recently accessed information for quick future access. This is normally done for frequently requested reports, which execute faster because they need not run against the data warehouse. Results from the data warehouse are stored separately and can be used by new job requests that require the same data. In the MicroStrategy environment, when a user runs a report for the first time, the job is submitted to the database for processing. However, if the results of that report are cached, the results can be returned immediately, without having to wait for the database to process the job, the next time the report is run.

child dependency Occurs when an object uses other objects in its definition.
See also parent dependency.
cluster A collection of two or more machines that provide services to a common set of users. Each machine in the cluster is called a node.
See also clustering.
clustering A configuration strategy that provides uninterrupted access to data, enhanced scalability, and increased performance for users.
See also cluster.
concurrent users Users who execute reports or use the system in one way or another at the same time.

configuration object A MicroStrategy object appearing in the system layer and usable across multiple projects. Configuration objects include (among others) these object types: users, database instances, database login IDs, schedules.
connection borrowing Occurs when Intelligence Server executes a job on a lower priority connection because no connections that correspond to the job's priority are available at execution time. High priority jobs can run on high, medium, and low priority connections. Medium priority jobs can run on medium and low priority connections.

connection mapping The process of mapping MicroStrategy users to database connections and database logins. For MicroStrategy users to execute reports, they must be mapped to a database connection and database login.

cookie A piece of information that is sent to your Web browser, along with an HTML page, when you access a Web site or page. This information is usually used to remember details about what a user did on a particular site or page for the purpose of providing a more personal experience for the user.

data source name (DSN) Provides connectivity to a database through an ODBC driver. A DSN generally contains the host machine name or IP address, instance name, database name, directory, database driver, user ID, password, and other information. The exact information included in the DSN varies by DBMS. Once you create a DSN for a particular database, you can use it in an application to call information from the database.
There are three types of DSNs:
System DSN: can be used by anyone who has access to the machine. DSN information is stored in the registry.
User DSN: is created for a specific user. Also stored in the registry.
File DSN: DSN information is stored in a text file with a .DSN extension.
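For illustration, a minimal File DSN is a plain-text file such as the following. The driver name and connection keywords here are placeholders and vary by DBMS and ODBC driver; consult your driver's documentation for the exact keywords it supports.

```ini
[ODBC]
DRIVER=SQL Server
SERVER=warehouse01
DATABASE=tutorial_dw
UID=mstr_login
```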
data warehouse 1. A database, typically very large, containing the historical data of an enterprise. Used for decision support or business intelligence, it organizes data and allows coordinated updates and loads.
2. A copy of transaction data specifically structured for query, reporting, and analysis.
database connection Stores all data warehouse-specific connection information, such as DSN, driver mode, and SQL execution mode, as well as connection caching information.

database instance 1. A MicroStrategy object created in MicroStrategy Desktop that represents a connection to the warehouse. A database instance specifies warehouse connection information, such as the data warehouse DSN, login ID and password, and other data warehouse-specific information.
2. Database server software running on a particular machine. Though it is technically possible to have more than one instance running on a machine, there is usually only one instance per machine.
database login Stores the login ID and password that MicroStrategy Intelligence Server uses to connect to a particular database.
See also:
login ID password
database management system (DBMS) A collection of programs that enables you to store, modify, and extract information from a database.

dataset A MicroStrategy report that retrieves data from the data warehouse or cache. It is used to define the data available on a document.

diagnostics The process of logging and analyzing the information on the operational performance of Intelligence Server.
distinguished name (DN) The unique identifier of an entry in the LDAP directory.

document 1. A container for objects representing data coming from one or more reports, as well as positioning and formatting information. A document is used to format data from multiple reports in a single display of presentation quality.
2. The MicroStrategy object that supports the functionality defined in (1).
document instance Facilitates the processing of a document through Intelligence Server, just as a report instance is used for reports. It contains the report instances for all the dataset reports and therefore has access to everything that may be included in the dataset reports, including prompts, formats, and so on.

element browsing The process of navigating through hierarchies of attribute elements. For example, viewing the list of months in a year.

element cache Most-recently used lookup table elements that are stored in memory on Intelligence Server or MicroStrategy Desktop machines so they can be retrieved more quickly.

encryption The translation of data into a sort of secret code for security purposes.

Extraction, Transformation, and Loading (ETL) 1) The process used to populate a data warehouse from disparate existing database systems. 2) Third-party software used to facilitate such a process.

failover support Ensures that a business intelligence system remains available for use in the event of an application or hardware failure. Clustering provides failover support in two ways: load distribution and request recovery.
firewall A type of technology that enforces an access control policy between two systems. It can be thought of as something that exists to block certain network traffic while permitting other network traffic.

group (short name for user group) A collection of users, such as Everyone, System Administrators, LDAP users, and so on. Groups provide a convenient way for managing a large number of users. You can assign privileges to groups as well as permissions to objects.

high watermark (HWM) A value used by Intelligence Server to calculate the available memory for a new memory request, for both virtual memory and virtual bytes (if it has exceeded an acceptable level). The HWM represents the highest value that the sum of private bytes and outstanding memory contracts may reach before triggering memory request idle mode. This defaults to 90%, but you can specify a lower value in the Maximum use of virtual address space (%) setting.
See also low watermark (LWM).
history cache Report results saved by a specific user for future reference via the History List.

History List A folder where users put report results for future reference.

HTML document 1) A container for formatting, displaying, and distributing multiple grid and graph reports from a single request.
2) The MicroStrategy object that supports such functionality.
HTML document instance Facilitates the processing of the HTML document through Intelligence Server (just as a report instance is used for processing reports). It contains the report instances for all the child reports, the XML results for the child reports, and any prompt information that may be included in the child reports.
idle time The time during which a user stops actively using a session, for example, not using the project, not creating or executing reports.

inbox synchronization The process of synchronizing inboxes across all nodes in the cluster so that all the nodes contain the same History List messages.

integrity test A test performed in Integrity Manager. Reports from a base project are executed and you are informed as to which reports failed to execute. Depending on the type of integrity test, those reports may be compared against reports from another project, or against reports from a previously established baseline.

job A request to the system, created by users submitting requests from Desktop, Web, Narrowcast Server, Intelligence Server's internal scheduler, or a custom-coded application. Common requests include report execution, object browsing, element browsing, Report Services document execution, and HTML document execution. Job processing involves several procedures, depending on the specific request.

job priority Defines the order in which jobs are processed.

Lightweight Directory Access Protocol (LDAP) An open standard Internet protocol running over TCP/IP, designed to maintain and work with large directory services. An LDAP directory can be used to centrally manage users in a MicroStrategy environment by implementing LDAP authentication.

load balancing A strategy aimed at achieving even distribution of MicroStrategy Web user sessions across MicroStrategy Intelligence Servers. MicroStrategy achieves four-tier load balancing by incorporating load balancers into MicroStrategy Web.

login ID A text string usually entered along with a password during login; sometimes called a user name.
low watermark (LWM) A value used by Intelligence Server to calculate the available memory for a new memory request, for both virtual memory and virtual bytes. The low watermark is set as 95 percent of the high watermark.
See also high watermark (HWM).
managed object A schema object unrelated to the project schema, which is created by the system and stored in a separate system folder. Managed objects are used to map data to attributes, metrics, hierarchies, and other schema objects for Freeform SQL, Query Builder, and OLAP cube reports.

matching cache Report results retained for the purpose of being reused by the same report requests later on.

matching-history cache A matching cache with at least one History List message referencing it.

memory request idle mode The mode in which Intelligence Server denies requests for memory until its memory usage drops below the low watermark.

message lifetime Determines how long (set in days) messages can exist in a user's History List.

metadata A repository whose data associates the tables and columns of a data warehouse with user-defined attributes and facts to enable the mapping of the business view, terms, and needs to the underlying database structure. Metadata can reside on the same server as the data warehouse or on a different database server. It can even be held in a different RDBMS.

metadata synchronization The process of synchronizing object caches across all nodes in a cluster.

node Each machine in a cluster.
object browsing The process of retrieving objects from the metadata by expanding or selecting a folder in MicroStrategy Desktop or Web.

object cache A recently used object definition stored in memory on MicroStrategy Desktop and MicroStrategy Intelligence Server.

ODBC See Open Database Connectivity.

ODBC driver A software routine that translates MicroStrategy Intelligence Server requests into commands that the DBMS understands.

Open Database Connectivity (ODBC) An open standard with which client computers can communicate with relational database servers. Client machines make a connection to a particular logical database, on a particular physical database server, using a particular ODBC driver.

orphan session An entry in the statistics database that indicates that a session was initiated in Intelligence Server, but no information was recorded when the session ended.

parent dependency Occurs when an object is used as part of the definition of other objects.
See also child dependency.
password Preserves user account integrity in an application. Many applications can associate both a password and a password hint with each user.

permissions Define the degree of control users have over objects.
personal page execution A type of Narrowcast Server implementation that executes one multi-page report for all users in a segment and then uses this single report to provide personalized content (pages) for different users. All users have their reports executed under the context of the same Intelligence Server user, so individual security profiles are not maintained. However, load on the Intelligence Server may be significantly lower than for personal report execution (PRE) in some cases.
See also personal report execution.
personal report execution A type of Narrowcast Server implementation that executes a separate report for each set of users with unique personalization. Users may have reports executed under the context of the corresponding Intelligence Server user if desired. Using this option, security profiles defined in MicroStrategy Desktop are maintained. However, if there are many users who all have unique personalization, this option can place a large load on Intelligence Server.
See also personal page execution.
physical warehouse schema A detailed graphic representation of your business data as it is stored in the data warehouse. It organizes the logical data model in a method that makes sense from a database perspective.
See also schema.
private bytes The current number of bytes a process has allocated that cannot be shared with other processes.
See also virtual bytes.
privilege Defines what types of operations certain users and user groups can perform in the MicroStrategy system. For example, which objects a given user can create and which applications and editors he can use.
project 1) The MicroStrategy object in which you define all of the schema and application objects, which together provide for a flexible reporting environment. A project is the highest-level intersection of a data warehouse, metadata repository, and user community, containing reports, filters, metrics, and functions.
2) An object containing the definition of a project, as defined in (1). The project object is specified when requesting the establishment of a session.
project source Defines a connection to the metadata database and is used by various MicroStrategy products to access projects. A direct project source is a two-tier connection directly to a metadata repository. A server project source is a three-tier connection to a MicroStrategy Intelligence Server. One project source can contain many projects, and the administration tools found at the project source level are used to monitor and administer all projects in the project source.

report cache A result set from an executed report that is stored on MicroStrategy Intelligence Server.

report cost An arbitrary value you can assign to a report to help determine its priority in relation to other requests.

report instance A container for all objects and information needed and produced during report execution, including templates, filters, prompt answers, generated SQL, report results, and so on. It is the only object that is referenced when executing a report, being passed from one special server to another as execution progresses.

schedule A MicroStrategy object that contains information specifying when a task is to be executed.

scheduling A MicroStrategy Intelligence Server feature that is used to automate specific tasks.
schema 1) The set of tables in a data warehouse associated with a logical data model. The attribute and fact columns in those tables are considered part of the schema itself.
2) The layout or structure of a database system. In relational databases, the schema defines the tables, the fields in each table, and the relationships between fields and tables.
schema object A MicroStrategy object, usually created by a project designer, that relates the information in the logical data model and physical warehouse schema to the MicroStrategy environment. These objects are developed in MicroStrategy Architect, which can be accessed from MicroStrategy Desktop. Schema objects directly reflect the warehouse structure and include attributes, facts, functions, hierarchies, operators, partition mappings, tables, and transformations.

secure sockets layer (SSL) An encryption technology that encodes the communication between a Web browser and Web server so that only the recipient can read it.

security filter A qualification associated with a user or user group that is applied to all queries executed by that user or group.

security role A MicroStrategy object that is used to store a particular grouping of privileges that you can apply to users or groups from project to project.

security view A feature of most relational databases that restricts a user's access to the data so he can view only a subset of it.

server definition An instance of MicroStrategy Intelligence Server and all of its configuration settings.

server object A configuration-level object in the metadata called server definition. It contains governing settings that apply at the server level, a list of projects registered on the server, connection information to the metadata repository, and so on.
server state dump (SSD) A collection of information related to the current state of the Intelligence Server that is written to a log file.

system prompt A special type of prompt that does not require an answer from the user. A system prompt is answered automatically by the system. For example, the User Login system prompt is answered automatically with the login name of the user who runs the report. System prompts can be used in filters and metric expressions.

task list A list of tasks that must be accomplished to complete a job within Intelligence Server.

user A person who can log in to a MicroStrategy system, create and own objects such as reports, execute reports, and take advantage of all the other features in the system.

user address space (UAS) Sometimes referred to as virtual address space. Independent of virtual memory and of finite size. It is measured per process on the machine (such as the MSTRSVR.exe Intelligence Server application). By definition, in a 32-bit operating system, virtual bytes is limited to 4 GB (2^32). By default, the Windows operating system divides this into two parts: the UAS and the System Address Space (SAS). The UAS is, in this case, for Intelligence Server to store data and code, while the SAS is for the operating system's use.

user group Group for short. A collection of users.

user profile What the user is doing when he or she is logged in to the system.

user session Established when each user connects to Intelligence Server from a MicroStrategy client (Web, Desktop, Narrowcast Server, and so on).
virtual bytes The limit associated with Intelligence Server's virtual address space allocation is the committed address space (memory actually being used by a process) plus the reserved address space (memory reserved for potential use by a process).
See also private bytes.
virtual memory The amount of physical memory (RAM) plus the disk page file (also called the swap file).

VLDB property A group of settings used to control SQL syntax or behavior for different DBMS platforms. VLDB properties initialize the SQL generation standards for each DBMS platform and allow you to optimize SQL generation for your data warehouse configuration.

VLDB settings Settings that affect the way MicroStrategy Intelligence Server interacts with the data warehouse to take advantage of the unique optimizations that different databases offer. Each VLDB property has two or more VLDB settings.

working set A collection of messages that reference in-memory report instances. A message is added to the working set when a user executes a report or retrieves a message from his or her Inbox.

XML cache A report cache in XML format that is created and available for use on the Web.
INDEX
Numerics
6.X database authentication mode 67
copying 496
Architect privileges 858
architecture overview 1
archiving History List messages 229
assigning
  permissions 131
  privileges individually 126
  privileges with security role 126
AT command. See Windows AT command.
auditing for license compliance 154
authentication
  defined on 55
  implementing Windows 58
  LDAP 69
  LDAP binding 92
  LDAP password comparison 92
  modes 55
  standard 57
  troubleshooting 584
authentication mode
  6.X database 67
  anonymous 68
automating a process with Command Manager 464
A
access control
  application functionality 124
  to data via the application 105
  to data via the database 101
Access Control List
  defined on 130
  assigning 131
  controlling application access 133
  Object Manager 514
  User Merge Wizard 139
ACL. See Access Control List.
activation, software 153
active user defined on 374
Administration privileges 859
administration tasks, scheduling 461
analyzing statistics 297
anonymous authentication mode 68
application
  development life cycle 478
  developer 481
  recommended 479
application object defined on 15
B
baseline-versus-baseline integrity test 533
baseline-versus-project integrity test 533
BigInt, supporting 805
C
cache
  defined on 169
  centralized cache file 340
  database connection 20
  element. See element cache.
  object. See object cache.
  purging for a project 240
  report. See report cache.
  sharing 340
  types of 169
Cache Monitor 180, 246
callstack 573
catalog. See project.
centralized cache file 340
certificate, digital 939
child dependency
  defined on 500
  locating 500
  Object Manager 503
cluster
  defined on 329
  See also clustering.
Cluster Monitor 248
clustering
  defined on 329
  architecture 333
  benefits 329
  components 331
  deleting Intelligence Server from 249
  Intelligence Server machines 344
  joining Intelligence Server 249
  monitoring 248
  removing Intelligence Server from 249
requirements 332 sharing History List across a cluster 338 sharing information between nodes 337 sharing metadata across a cluster 337 Web 331 combining users 137 Command Manager 464 running 466 scheduling with Windows AT command 465 comma-separated value. See CSV. common privileges 854 comparing reports 530 baseline-versus-baseline comparison 533 baseline-versus-project comparison 533 project-versus-project comparison 533 viewing differences 546 complying with license 154 concurrent user defined on 374 configuration object defined on 15 copying 496 moving 496 configuring system architecture 430 conflict resolution default action 512 connection borrowing defined on 395 connection mapping defined on 105 example 146 purpose 106 User Merge Wizard 141 cookie defined on 940 available projects information 955 cached preferences 957 connection information 955 current language 954
default user name 953 global user preferences 956 GUI settings 954 preferences 957 project information 953 session information 950 copying application object 496 configuration object 496 object 492 project 479, 485 resolving a conflict when 510 schema object 496 CPU affinity 158 MicroStrategy Web 163 CPU license managing 149 verifying 152 creating group 54 integrity test 536 log file 565 schedule 452 user 54 CSV, exporting as 44
D
data access connection mapping 105 passthrough execution 109 security filter 111 security view 102 splitting a fact table by column 104 splitting a fact table by row 102 data access security MicroStrategy measures 105 RDBMS measures 101
data loading manual 288 scheduling Enterprise Manager 280 troubleshooting 610 data mart supporting BigInt 805 data provider. See project source. data warehouse defined on 2 authentication by 65 connection mapping 105 connection thread 388 DBMS upgrading 23 design and tuning 431 Enterprise Manager 255, 899 ETL 24 leveraging the security view 106 limiting connection threads 579 monitoring connection to 244 monitoring usage trends 251 navigating in 14 optimizing connection threads 392 performance 431 placement 433 platform 431 setting optimal number of connection threads 389 SQL maximum size 388 SQL timeout per pass 387 timestamp 271 troubleshooting 557 data warehouse connection, governing 390 database authentication mode 65 database connection cache 20 Enterprise Manager 285 governing summary 446 information in the SSD 574
restricting element caching 212 database connection cache 20 governing 393 Database Connection Monitor 244 database connection thread governing 390 optimizing 392 prioritizing 394 timeout 393 dataset defined on 38 deactivation, software 153 default location for the Web log file 570 default options loading 931 Web 930 deleting element cache 214 History List messages 232 object cache 219 deployment, sizing 368 designing a Narrowcast Server application 448 Desktop Analyst privileges 855 Desktop Designer privileges 856 diagnostics defined on 558 common customizations 562 configuring 561 configuring a performance counter 565 log file anatomy 567 sample log file 568 Diagnostics and Performance Logging tool 558, 971 digital certificate 939 disabling element caching 204 object caching 217 report caching 179
single report caching 199 disconnecting database 244 user 243 Distinguished Name defined on 83, defined on 589 distinguished name defined on 70 document defined on 37 Enterprise Manager instance 38 system performance and 439 testing execution of 532 document instance defined on 38 Drill mode. See page-by. drill path, XML 134 duplicating a project 479, 485
E
element browsing defined on 35 job processing steps 34 element cache 201, defined on 201 deleting 214 enabling 204 limiting attribute elements 211 limiting memory usage 210 limiting number of elements 205 matching algorithm 204 optimizing 208 pool 203 restricting by database connection 212 restricting by database login 213 settings 215 storage location 203 terminology 202 element caching disabling 204 element incremental fetch optimizing 208
2008 MicroStrategy, Inc.
setting 205 element request 202 enabling element caching 204 single report caching 199 encryption defined on 942 metadata 14 with SSL 939 Enterprise Manager 251 administering 282 Administrator user group 259 architecture 254 attribute description 918 best practices 252 best practices for installing and configuring 258 configuring 257, 258 console 272 data load, scheduling 280 data model 866 data warehouse 262 data warehouse in DB2 259 data warehouse table 899 database connection 285 DB2 for the data warehouse 259 document in EMAdmin user group 282 Freeform SQL and 318 indicator lookup table 914 installing 257, 258 log statistics 266 lookup table 911 metadata database 261 metadata table 917 permissions 299 project documentation 253 project, using 297 real-time analysis 317, 318
relationship table 917 renaming a project 287 statistics table 875 statistics table, creating 263 statistics troubleshooting 608 table 865 timestamp 271 transformation table 913 troubleshooting 608 user 282 ETL. See Extraction, Transformation, and Loading. event triggering automatically 456 triggering manually 457 event-based schedule 455 Everyone user group 51 example connection mapping 146 fact table partitioning 146 MCM 427 memory depletion 427 partitioned fact table 146 passthrough execution 144 security 142 standard authentication 143, 146 Windows authentication 144 Excel file with formatting, exporting as 45 exception 584 execute permission 134 executing a job 27 as a variable 370 canceling 239 current job 238 governing 385 governing by project 386 governing by user 385 prioritizing 394
seeing the current job 238 SSD information 575 executing an integrity test 537 command line 543 Integrity Manager Wizard 537 execution idle project mode 326 execution time, governing 383 expiring a report cache 188 export size, governing 409 exporting CSV 44 Excel file with formatting 45 PDF 45 plain text 44 report from Web 43, 409 Extraction, Transformation, and Loading defined on 24 enabling a user to view 26 viewing for a report 26
F
fact table partitioning as a security measure 102 example 146 failover 330 failover support defined on 330 fatal exception 584 firewall defined on 935 Freeform SQL 297 Enterprise Manager and 318 full idle project mode 327
G
governing database connection cache 393 database connection thread 390 execution time 383 export size 409 History List 379 jobs per project 386 jobs per user 385 memory for Web requests 429 number of jobs 385, 387 report size 400 request 382 results delivery 405 settings 439 SQL 387 user profile 377 user resources 379 user session 374 working set 380 XML cell 408 XML drill path 410 graph in Integrity Manager 547 group defined on 51 creating 54 default 51 importing 54 merging 137 groups importing LDAP groups 83 LDAP search filters 81 linking LDAP groups 89 Guest user group 52

H
hierarchy, unbalanced or ragged 622 high watermark defined on 424 history cache defined on 174 Cache Monitor 248 History List defined on 221 accessing 224
archiving 229 backing up 223 caching and 223 clustered environment and 224 deleting messages 232 governing 379 lifetime 232 managing 231 retrieving archived 230 settings 233 sharing across a cluster 338 storage location 222 HTML document defined on 40 instance 41 job processing steps 40 HTML document instance defined on 41 HWM. See high watermark.
I
idle time defined on 376 idling a project 240 IIS, tuning 947 importing group 54 user 54 user group 54 inbox synchronization defined on 338 inbox, SSD information 574 Inbox. See History List. incremental fetch in Web 407 index file for the report cache 176 inheriting privileges 127, 860 installation, silent 475 Integrity Manager 530 creating an integrity test 536 graph 547 integrity test creation 536
loading saved test settings 538 prerequisites for using 531 privileges 859 prompted report and 540 report data 546 report execution output 548 saving test settings 538 test results, viewing 544 viewing differences in reports 546 viewing saved results of a test 548, 550 viewing test results 544 Integrity Manager Wizard 532, 537 integrity test defined on 532, 532 baseline-versus-baseline 533 baseline-versus-project 533 creating 536 executing from command line 543 executing with Integrity Manager Wizard 537 loading 538 project-versus-project 533 report status 544 saving 538 single-project 532 viewing results of 544 viewing saved results 548, 550 Intelligence Server clustering 249 component, MCM 419 components information in the SSD 576 CPU type and speed 412 CPUs, number of 412 diagnostics 558 governing 440 job processing 27 logging 558 number of CPUs 412
request types 27 running mode 3 runtime memory use 418 SSD components information 576 SSD version information 571 starting 10 starting as a service 5 starting as an application 9 starting automatically 12 startup memory use 416 startup, troubleshooting 557 stopping 11 troubleshooting 557 unregistering as a service 4 version in SSD 571 Intelligent Cube report 437 job processing steps 32 privileges 377 international support xxx invalidating a report cache 184, 246
J
job defined on 27 execution 388 governing 387 Job Monitor 238 job priority defined on 394 setting 396 variables 396 job processing steps 27 element browsing 34 graph report 32 HTML document 40 Intelligent Cube report 32 object browsing 33 report 28 report with a prompt 31

L
latency and project failover 357 LDAP defined on 69 anonymous/guest users 94 authentication 69 binding authentication 92 clear text 75 database passthrough 95 directory defined on 69 group search filters 81 host 74 importing at login 85 importing users and groups 83 IP address 74 linking users and groups 89 password comparison only 92 port 75 SDK 71 SDK on UNIX/Linux 73 search root 78 server defined on 69 SSL 75 SSL certificates 75 user group 52 user search filters 80 LDAP authentication FAQ 591 troubleshooting 586 LDAP user group 52 license auditing 154 check 151
CPU 152 managing 149 time to run license check 151 updating 156 verifying named user 150 license compliance, CPU affinity and 159 License Manager 152 Lightweight Directory Access Protocol. See LDAP. linked warehouse login. See passthrough execution. Linux install LDAP SDK 73 load balancing defined on 330 loaded project mode 324 loading project 240 project defaults 931 saved Integrity Manager test settings 538 saved report cache 190 local cache file 339 locking a project 494 Log Destination Editor 566 log file creating 565 diagnostics 567 location 567 reading 567 viewing in the Monitor 569 logging configuration 558 login ID, using to restrict element caching 213 low watermark defined on 424 LWM. See low watermark.
M
machine size recommendations 368
managed object defined on 551 deleting from a project 553 managing license 149 object individually 495 scheduled request 463 matching caches defined on 173 matching-history cache defined on 174 maximum single allocation size 421 maximum use of virtual address space 421 MCM criteria for granting a request 422 example 427 governing Intelligence Server memory use 419 granting or denying memory requests 422 logic flow 423 managing virtual address space fragmentation 425 maximum use of virtual address space 421, 424 memory request idle time 422, 426 memory watermarks 424 minimum reserved memory 421, 424 preventing memory depletion 583 single memory allocation governing 421 tasks governed 419 using 420 XML and 419 memory 414, 577 governing for Web requests 429 SSD information 572 top consumers 578 Memory Contract Manager. See MCM. memory depletion example 427
overview 576 preventing 583 SSD information 573 memory request idle mode defined on 426 memory request idle time 422 recommended value 426 merging groups 137 projects 517 users 137 message lifetime defined on 232 metadata defined on 14 connecting to 17 fixing inconsistencies in 592 sharing across a cluster 337 synchronization 337 metadata synchronization defined on 337 microcube. See caching. MicroStrategy Administrator privileges 858 MicroStrategy Integrity Manager 530 MicroStrategy Mobile privileges 855 MicroStrategy Office privileges 854 MicroStrategy Project Comparison Wizard 526 MicroStrategy Project Merge 517 MicroStrategy user group. See user group. MicroStrategy User Merge Wizard 137 MicroStrategy user. See user. MicroStrategy Web CPU affinity feature 163 managing 925 migrating an object 492 minimum reserved memory 421 Monitor console 569 monitoring cluster 248
database connection 244 job 238 machine performance 249 MicroStrategy system 251 project 240 report cache 246 schedule 245 user connection 243 moving a configuration object 496
N
named user license managing 149 setting when to verify 151 verifying 150 Narrowcast Server application design 448 connecting to Intelligence Server 450 information source 450 system resource use and 447 network bandwidth 434 monitoring 435 performance considerations 432 recommended configuration 433 network address of the temp client 239, 244 node defined on 329
O
object browsing defined on 33 job processing steps 33 object cache defined on 216 deleting 219 disabling 217 limiting memory usage 218 matching algorithm 217
summary of settings 220 Object Manager 495 copying an object 492 handling a child dependency 503 handling an Access Control List 514 international settings 508 parent dependency 502 Project Merge versus 493 resolving a conflict 510 saving a dependent object 504 starting 498 table migration 515 timestamp 507 OLAP Cube VLDB property 622 OLAP Services and system performance 437 Oracle 8i 24, 596 orphan session defined on 290
P
package. See project. page-by and system performance 436 parent dependency defined on 502 partitioned fact table 102 example 146 passthrough execution 109 example 144 PDF, exporting as 45 performance counters and diagnostics 564 Performance Monitor 249 permissions defined on 128 assigning 131 grouping 844 list of 845 server object 846 use and execute 134 personal page execution defined on 448
personal report execution defined on 448 personalized drill path 134 physical disk utilization 413 physical warehouse schema defined on 14 plain text, exporting as 44 preferences, Web 932 primary server 248 prioritizing database connection thread 394 private bytes defined on 414 privileges defined on 124 Administration 859 Architect 858 assigning 124 common 854 default user group 861 denying 126 Desktop Analyst 855 Desktop Designer 856 governing with 377 granting 126 inherited 860 inheriting 127 Integrity Manager 859 list of 847 merging in User Merge 139 MicroStrategy Administrator 858 MicroStrategy Mobile 855 MicroStrategy Office 854 Web Analyst 850 Web MMT Option 853 Web Professional 852 Web Reporter 849 processing results 400 processor affinity 158 project defined on 16 cache purging 240 comparing 526
comparing reports between 533 copying 479, 485 default options for Web 930 duplicating 479, 485 governing summary 444 idling 240 loading 240 locking 494 merging 517 monitoring 240 renaming 287 resuming 240 SSD information 572, 573 state information in the SSD 573 testing 530 tracking 529 unloading 240 unlocking 494 Web default options 930 Project Comparison Wizard 526 comparing objects between two projects 527 project comparison process 527 using 527 project documentation in Enterprise Manager 253 project failover support 357 Project Merge 517 command 521 conflict resolution 525 Object Manager versus 493 project merge process 518 resolving a conflict 525 scheduling with Windows AT command 524 using 519 Project Merge Wizard 519 conflict resolution 525
merging projects 520 resolving a conflict 525 XML 494 project mode 323 execution idle 326 full idle 327 loaded 324 request idle 325 unloaded 324 Project Monitor 240 project source defined on 16 project-versus-project integrity test 533 prompt Integrity Manager 540 resolving 31 system 122 system performance and 439 prompted report in Integrity Manager 540 Public user group 52 purging report cache 189 statistics 294
Q
query model. See project.
R
ragged hierarchy 622 real-time analysis and Enterprise Manager 317 report cache. See report cache. comparing from two baselines 533 comparing with a baseline 533 comparing with another report 530 cost 398 Enterprise Manager 297
instance 28 prompted in Integrity Manager 540 scheduling 457 size 400 system performance and 436 testing execution of 532 report cache defined on 171 configuring settings 190 deleting 187 disabling for a project 179 disabling for a single report 199 enabling for a single report 199 expiring 188 file format 175 history 174 index file 176 invalidating 184, 246 loading from disk 190, 246 location 175 managing 183 matching 173 matching algorithm 176 matching-history 174 memory usage 195 monitoring 180, 246 project settings 193 purging 189 scheduling an update 185 settings 190, 200 sharing options in cluster 339 status 247 types 173 unloading to disk 189, 247 updating on a schedule 458 viewing for one project 247 XML 174 report comparison tool 530 report complexity and system resource
use 404 report cost defined on 398 report instance defined on 28 Report Services document. See document. report size, governing 400 request governing 382 types 27 request idle project mode 325 resolving a prompt 31 response file configuring 474 installing 474 result row limit troubleshooting 598 results delivery 405 processing 400 results summary of integrity tests 544 resuming a project 240
S
saving Integrity Manager test settings 538 Scan MD tool 596 schedule defined on 452 creating 452 event-based 455 monitoring 245 SSD information 574 time-triggered 453 User Merge Wizard 141 Schedule Monitor 245 scheduled e-mail enabling with Web 933 impact on system resources 449 privilege 378 scheduled request 463 scheduling defined on 451
administration tasks 461 Command Manager script 465 Project Merge to run 524 report 457 using the command prompt 280 schema object defined on 15 copying 496 scope of analysis. See Intelligent Cube report. script 464 Search Export feature 529 secure sockets layer. See SSL. security authentication 55 controlling application functionality 124 example 142 permissions 128 privileges 124 security role 126 setting up 49 single sign-on 50 security filter defined on 110 merging multiple 120 Security Filter Manager 240 User Merge Wizard 140 Security Filter Manager 240 security role defined on 126 User Merge Wizard 140 security view defined on 102 purpose 102 send now enabling with Web 933 impact on system resources 449 privilege 378 server definition defined on 15 Intelligence Server startup 10 permissions 846
SSD information 571, 572 server object defined on 846 server state dump defined on 570 contents 571 Service Manager 5 setting job priority 396 setting up security 49 sharing caches 339, 340 silent installation 475 single memory allocation governing 421 single sign-on 50 single-project integrity test 532 software activation 153 software deactivation 153 SQL, limiting reports SQL size 387 SSD. See server state dump. SSL defined on 939 certificates 75 encryption 939 LDAP 75 troubleshooting in LDAP 588 using with Web 939 standard authentication 57 example 143, 146 starting Intelligence Server 10 Intelligence Server automatically 12 statistics analyzing 297 options 283 purging 294 setting the project to log 266 troubleshooting 608 using in Enterprise Manager 251 status of cache 247 stopping Intelligence Server 11 support international xxx
support. See technical support. system activity summary 302 system architecture 1 configuring 430 system prompt defined on 122 system resources governing 410 system usage trend, analyzing 297
T
table Enterprise Manager 865 migrating with Object Manager 515 partitioning 102 task list defined on 28 technical support xxxi temp client 239, 244 testing document execution 532 project 530 report execution 532 timestamp 271 migrated object 507 time-triggered schedule 453 transformer model. See project. triggering an event automatically 456 manually 457 troubleshooting 556 data loading 610 report result rows 598 tuning 368, 369, 372 runtime capacity variables 370 system design 371 system requirements 369
U
unbalanced hierarchy 622
universe. See project or metadata. UNIX install LDAP SDK 73 unknown exception 584 unloaded project mode 324 unloading project 240 report cache to disk 189 unlocking a project 494 updating a license 156 upgrading Enterprise Manager 282 VLDB properties 23 usage trend analysis 297 use permission 134 user defined on 50 assigning permissions 132 creating 54 disconnecting 243 importing 54 merging 137 profile 377 resources 379 sharing an account 56 user address space defined on 415 User Connection Monitor 243 user group defined on 51 default 51 Everyone 51 Guest 52 importing 54 LDAP 52 Public 52 User Merge Wizard 137 ACL 139 connection mapping 141 group membership 139 privileges 139
process 138 profile folder 139 running 141 schedule 141 security filter 140 security role 140 user mode process dump 584 user preferences, Web 932 user session defined on 373 as a variable 370 governing 374 information in the SSD 575 users anonymous/guest for LDAP 94 importing LDAP users 83 LDAP search filters 80 linking LDAP users 89
V
verifying a named user license 150 viewing results of an integrity test 544 virtual bytes defined on 415 virtual memory defined on 414 VLDB properties defined on 22 governing summary 447 upgrading 23

W
Web 570 Administrator page 928 default options 930 edition 926 log file 570 user settings 932 Web Analyst privileges 850 Web MMT Option privileges 853 Web Professional privileges 852 Web Reporter privileges 849 Windows AT command Command Manager 465 Enterprise Manager data load 280 Project Merge 524 Windows authentication example 144 implementing 58 troubleshooting 585 Windows Performance Monitor 249 memory 415 working set defined on 380 governing 380 History List and 234

X
XML cache 174 converting to HTML for Web 43 delta and reports 380 drill path, governing 410 drill path, personalized 134 MCM and 419 Project Merge Wizard 494 results delivery and 406 size 408 XML cache defined on 174 Cache Monitor 248