Note – The executable names listed above may change with the official release, with the
exception of the JDK.
Post Install Utilities and Configuration
• Copy the Oracle 12c Migration tool from the Install home to your 11g environment:
Default install location – Oracle_Middleware_Home\bi\migration-tool\jlib\bi-migration-tool.jar
Example - C:\Oracle\Middleware\Oracle_Home\bi\migration-tool\jlib\bi-migration-tool.jar
• bi-migration-tool.jar parameters:
• Expected arguments:
out <oracle 11g home> <domain home> <output export bundle path>
(Export from an existing system)
in <oraclehome> <domainhome> <export bundle> <service instance name>
(Import into an existing system)
get <pluginname> <export bundle>
(Retrieve data produced by a plugin)
put <pluginname> <new data file> <new export bundle> <existing export bundle>
(Repack jar, replacing data from a plugin)
• Export a migration bundle from the 11g system:
java -jar \path\to\12c\bi-migration-tool.jar
out
\path\to\11g\mwhome\Oracle_BI1
\path\to\11g\mwhome\user_projects\domains\bifoundation_domain
\path\to\my-export-bundle.jar
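Written as a single line (the paths below are the placeholders from the bullets above; substitute your real locations), the export call can be sketched as:

```shell
# Placeholder paths; substitute your real 12c tool and 11g home locations.
TOOL=/path/to/12c/bi-migration-tool.jar
BI_HOME=/path/to/11g/mwhome/Oracle_BI1
DOMAIN=/path/to/11g/mwhome/user_projects/domains/bifoundation_domain
BUNDLE=/path/to/my-export-bundle.jar

# Echo the assembled one-line command; drop 'echo' to run it for real.
echo java -jar "$TOOL" out "$BI_HOME" "$DOMAIN" "$BUNDLE"
```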
• Copy or move the exported 11g jar file to the machine where the 12c instance resides, if it differs from the 11g machine
• Import bundle into 12c system:
java -jar \path\to\12c\oraclehome\bi-migration-tool\jlib\bi-migration-tool.jar
in
\path\to\12c\oraclehome
\path\to\12c\oraclehome\user_projects\domains\bi
\path\to\my-export-bundle.jar
service1 or ssi
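The import, sketched as a single line with placeholder paths (the last argument is the service instance name, e.g. ssi, the 12c default):

```shell
# Placeholder paths; substitute your real 12c locations.
TOOL=/path/to/12c/oraclehome/bi-migration-tool/jlib/bi-migration-tool.jar
ORACLE_HOME=/path/to/12c/oraclehome
DOMAIN=/path/to/12c/oraclehome/user_projects/domains/bi
BUNDLE=/path/to/my-export-bundle.jar
SI=ssi   # service instance name; 'ssi' is the 12c default

# Echo the assembled one-line command; drop 'echo' to run it for real.
echo java -jar "$TOOL" in "$ORACLE_HOME" "$DOMAIN" "$BUNDLE" "$SI"
```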
• Example:
c:\Java\jdk1.8.0_45\bin>java -jar C:\Oracle\Middleware\Oracle_Home\bi\migration-tool\jlib\bi-migration-tool.jar
in
C:\Oracle\Middleware\Oracle_Home
C:\Oracle\Middleware\Oracle_Home\user_projects\domains\bi
C:\2.Builds\BIEE\12c\11g_bars\11.1.1.9.0_SampleAppLite.jar
service1 or ssi
NOTE: The examples show line breaks between parameters only for readability. Do not use line breaks in actual calls.
Example for Linux installations
Copy the Oracle 12c Migration tool from the Install home to your 11g environment:
• Copy or move the exported 11g jar file to the machine where the 12c instance resides, if it differs from the 11g machine
• Import bundle into 12c system:
• Example:
/scratch/aime/Java/jdk1.8.0_45/bin>java -jar /scratch/aime/Oracle_Home/bi/migration-tool/jlib/bi-migration-tool.jar
in
/scratch/aime/Oracle_Home
/scratch/aime/Oracle_Home/user_projects/domains/bi
/scratch/aime/Builds/BIEE/12c/11g_bars/11.1.1.9.0_SampleAppLite.jar
service1 or ssi
NOTE: The examples show line breaks between parameters only for readability. Do not use line breaks in actual calls.
Example using the Migration Script
• You may use a migrated 11g jar file when running the BI config script
‘…\mwhome\bi\bin\config.cmd’ or ‘…/mwhome/bi/bin/config.sh’.
• Create the 11g migration jar file using the appropriate steps above.
• Run '…\mwhome\bi\bin\config.cmd' or '…/mwhome/bi/bin/config.sh', select
the required settings, and provide the necessary parameters.
• Choose 'Single Instance' and select the required components (Essbase,
BIEE and/or BI Publisher).
• Choose to create the Schemas, or use existing schemas if previously
created via RCU.
• Either use the Default Port assignments 9500 - 9999, or change to
another range.
• Choose to use export bundle and browse to the location of the migrated
11g jar file:
NOTE: Password is the password for the migrated 11g RPD
• Optionally save Response file
• Optionally save the Configuration info to a file.
Manually create or drop the required schemas
for an OBIEE install using RCU
Note:
A fresh schema is required for an OBIEE 12c install. You cannot
reuse a pre-existing OBIEE 12c schema.
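RCU can also be driven from the command line. The sketch below only echoes a hypothetical silent-mode invocation; the flags, connect string, and component name are assumptions to verify against your RCU version's -help output before use:

```shell
# Hypothetical RCU silent-mode call; every flag below is an assumption.
RCU=/path/to/oracle_common/bin/rcu   # adjust to your install
# Echo rather than execute; remove 'echo' only after verifying the flags.
echo "$RCU" -silent -createRepository \
     -connectString dbhost:1521:orcl \
     -dbUser sys -dbRole sysdba \
     -schemaPrefix DEV -component BIPLATFORM
```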
Usage Tracking
Usage tracking settings are no longer managed by
Enterprise Manager. To enable usage tracking, it is
necessary to modify the corresponding settings in
NQSConfig.ini manually.
Usage Tracking
There are 9 new columns in the usage tracking tables in OBIEE 12c compared to
11.1.1.7, but most of them already existed in 11.1.1.9.
In S_NQ_ACCT:
• ECID varchar2(1024) added in 11.1.1.9
corresponds to ECID in biserver-diagnostic.log
• TENANT_ID varchar2(128) added in 11.1.1.9
tenant id, used in multitenancy
• SERVICE_NAME varchar2(128) added in 11.1.1.9
service name, used in multitenancy
• SESSION_ID number(10,0) added in 11.1.1.9
corresponds to biserver session for use in analyzing user behavior by session
• HASH_ID varchar2(128) added in 11.1.1.9
logical query hash id, joins to s_nq_db_acct.hash_id
• TOTAL_TEMP_KB number(20,0) added in 12c
total temp space used by a query during execution
• RESP_TIME_SEC number(10,0) added in 12c
the time the server took before it started fetching records. This is the response time the end user
experiences, with a few early records displayed on the dashboard while the server continues fetching more.
In S_NQ_DB_ACCT:
• HASH_ID varchar2(128) added in 11.1.1.9
logical query hash id, joins to s_nq_acct.hash_id
• PHYSICAL_HASH_ID varchar2(128) added in 11.1.1.9
used for tracing physical queries to the backend database
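Because HASH_ID appears in both tables, logical and physical query records can be correlated. A hypothetical query (column names taken from the lists above; run it in your SQL client against the usage tracking schema):

```shell
# Emit the SQL for reference; paste it into your SQL client.
cat <<'SQL'
SELECT a.ecid, a.session_id, d.physical_hash_id
FROM   s_nq_acct     a
JOIN   s_nq_db_acct  d ON d.hash_id = a.hash_id;
SQL
```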
Usage Tracking Direct Database Inserts
In NQSConfig.INI, modify the following parameters:
• ENABLE – turns usage tracking on or off, off by default
• PHYSICAL_TABLE_NAME – usage tracking table as defined in the .rpd file
• CONNECTION_POOL – connection pool for usage tracking table as defined in
the .rpd file
For example:
###############################################################################
#
# Usage Tracking Section
#
# Collect usage statistics on each logical query submitted to the
# server.
#
###############################################################################
[USAGE_TRACKING]
ENABLE = YES;
#==============================================================================
# Parameters used for writing data to a flat file (i.e. DIRECT_INSERT = NO).
#
# Note that the directory should be relative to the instance directory.
# In general, direct insert is preferred over flat files. If you are working in
# a cluster, it is strongly recommended that you use direct insert. If there is
# only one Oracle BI Server instance, then you may use flat file data.
# The directory is then assumed relative to the process instance. For example,
# "UTData" is resolved to
# "$(ORACLE_INSTANCE)/bifoundation/OracleBIServerComponent/<instance_name>/UTData".
STORAGE_DIRECTORY = "<directory path>";
CHECKPOINT_INTERVAL_MINUTES = 5;
FILE_ROLLOVER_INTERVAL_MINUTES = 30;
CODE_PAGE = "ANSI"; # ANSI, UTF8, 1252, etc.
Usage Tracking Direct Database Inserts
#==============================================================================
DIRECT_INSERT = YES;
#==============================================================================
# Parameters used for inserting data into a table (i.e. DIRECT_INSERT = YES).
#
# Init-Block Tracking Options are commented out and as a result disabled.
# To enable Init-Block Tracking Feature, define the two parameters for
# Init-Block, INIT_BLOCK_TABLE_NAME and INIT_BLOCK_CONNECTION_POOL.
#
PHYSICAL_TABLE_NAME = "UsageTracking"."server1_biplatform"."S_NQ_ACCT";
CONNECTION_POOL = "UsageTracking"."Connection Pool";
# INIT_BLOCK_TABLE_NAME = "<Database>"."<Catalog>"."<Schema>"."<Table>" ;
# INIT_BLOCK_CONNECTION_POOL = "<Database>"."<Connection Pool>" ;
BUFFER_SIZE = 250 MB;
BUFFER_TIME_LIMIT_SECONDS = 5;
NUM_INSERT_THREADS = 5;
MAX_INSERTS_PER_TRANSACTION = 5 ;
JOBQUEUE_SIZE_PER_INSERT_THREADPOOL_THREAD = 100; #default is 100 while 0 means unlimited.
THROW_INSERT_WHEN_JOBQUEUE_FULL = NO; # Default is no.
#
#==============================================================================
• Usage tracking tables and columns need to be defined in the .rpd file. If only the tables are
defined, an error is recorded in obisn-diagnostic.log (formerly nqsserver.log).
• If usage tracking has previously been defined in the .rpd, the easiest modification is to add
the small number of new columns to the .rpd manually.
Hot deployment of the OBIS RPD
metadata file
• There is a command line tool to upload and download an OBIS
RPD in 12c:
• ‘data-model-cmd.cmd’ at ‘…\user_projects\domains\bi\bitools\bin’.
• You run this in a cmd window at
‘…\user_projects\domains\bi\bitools\bin’.
• This tool has several other options as well; run it with ‘-H’ for a listing.
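A hypothetical download call is sketched below. Every flag shown is an assumption for illustration only; confirm the exact subcommand and flag names with the tool's -H help on your system:

```shell
# Echoed sketch only; subcommand and flags are assumptions, not verified syntax.
echo ./data-model-cmd.sh downloadrpd -O /tmp/current.rpd \
     -SI ssi -U weblogic -S bihost.example.com -P 9502
```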
• Currently, there is no equivalent utility to upload/download the
OBIPS Webcat.
Hosting the BIEE metadata on a shared
network location for clustering
• The process is to:
• Move/copy the entire ‘bidata’ directory in ‘/user_projects/domains/bi/’ to shared
storage.
• Update the config file bi-environment.xml
(DOMAIN_HOME/config/fmwconfig/bienv/core/) with the location of this new ‘singleton
data directory’.
• Example on Windows:
• Copy the host ‘bidata’ folder to the target share folder on the network share machine.
• On the network windows box, share the target share folder to the appropriate user(s).
• On the BIEE Windows server, map a network drive to the share location created in step 2.
• Update the bi-environment.xml <bi:singleton-data-directory> tag with the share created in step 3 – e.g.:
<bi:singleton-data-directory>Z:\bidata</bi:singleton-data-directory>.
• Stop/start the BIEE stack.
• Everything under the bidata directory then resides on the shared directory/NAS. One of the
changes in 12c is a clean separation of metadata and configuration in the deployment, which is
why copying just the bidata directory is sufficient.
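The same relocation sketched on Linux, simulated under /tmp so the commands are runnable as-is (substitute your real DOMAIN_HOME and network share in practice):

```shell
# Simulated under /tmp; substitute your real DOMAIN_HOME and share.
DOMAIN_HOME=/tmp/demo/user_projects/domains/bi
SHARE=/tmp/demo/share
mkdir -p "$DOMAIN_HOME/bidata" "$SHARE"
cp -r "$DOMAIN_HOME/bidata" "$SHARE/"
echo "bidata relocated to $SHARE/bidata"
# Next: point <bi:singleton-data-directory> in
# $DOMAIN_HOME/config/fmwconfig/bienv/core/bi-environment.xml at the new
# location, then stop/start the BIEE stack.
```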
Legend in Stacked Bar Graph
(11g vs. 12c comparison screenshots)
To check the point in time entropy available on a server, run this command:
• cat /proc/sys/kernel/random/entropy_avail
• Anything below 500 is at risk of running out of entropy.
Increasing the entropy has been seen to dramatically decrease service start up
times. The following resources explain how this may be accomplished:
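The check above can be wrapped with the ~500 threshold mentioned (the rng-tools/rngd suggestion in the comment is an assumption about your distribution's packages):

```shell
# Check available entropy against the ~500 threshold mentioned above.
avail=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$avail" -lt 500 ]; then
  # rngd from rng-tools is a common remedy (assumption about your distro).
  echo "WARNING: entropy low ($avail); consider running rngd from rng-tools."
else
  echo "entropy OK ($avail)"
fi
```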
Linux 64 bit
• Explode '.../Oracle_Home/bi/bifoundation/advanced_analytics/r-installer.tar.gz':
tar -xvzf ./r-installer.tar.gz
• Change Directory to 'RInstaller'
• Modify proxy.txt to use the appropriate proxy:
The default is proxy=http://[proxy-host:proxy-port]. Modify this to the site-specific proxy.
• Install R:
./RInstaller.sh install (default install path is '/usr/lib64/R')
• Install required R packages:
./RInstaller.sh installpackages
• Start R and execute the Sys.BlasLapack function:
> Sys.BlasLapack()
$vendor
[1] "R internal BLAS and LAPACK"
$nthreads
[1] 1
• Modify NQSConfig.ini section 'Advance Analytics Script Section' parameters - e.g.:
[ ADVANCE_ANALYTICS_SCRIPT ]
# R EXECUTABLE PATH
# Specify the script executable binary path.
# R_EXECUTABLE_PATH = "/usr/bin/R";
R_EXECUTABLE_PATH = "/usr/lib64/R/bin/R"; # Include the actual executable 'R'
# R COMMAND ARGS
# Specify the script executable command line arguments.
R_COMMAND_ARGS = "--no-restore --no-save --no-timing";
# Max Number of R Process that can be active at any given point in time
R_MAX_PROCESS = 20;
# THE CONNECTION POOL HAS TO BE SET IF Advanced Analytics NEED TO RUN ON THE DATABASE (eg: ORE)
# CONNECTION_POOL = "<Database>"."<Connection Pool>";
• After downloading the rpms from the above links, perform an 'rpm -ivh
<rpm_name>' as the root user for each rpm:
• rpm -ivh texlive-epsf-svn21461.2.7.4-32.el7.noarch.rpm
• rpm -ivh texinfo-tex-5.1-4.el7.x86_64.rpm
R> Sys.BlasLapack()
$vendor
[1] "Intel Math Kernel Library (Intel MKL)"
$nthreads
[1] -1
• The returned value of $vendor indicates that MKL has replaced the BLAS and LAPACK that are native to R.
• The returned value of nthreads indicates the number of threads to be used by MKL. By default all available
threads are used ($nthreads= -1).
• Optional - You can change the number of threads to be used by MKL by editing the system environment
variable MKL_NUM_THREADS
• If MKL_NUM_THREADS does not exist, then you must create it at:
• Control Panel > System and Security > System > Advanced system settings > Environment Variables > System variables
• After setting MKL_NUM_THREADS to 3, the output of Sys.BlasLapack shows a value of 3 for $nthreads.
R> Sys.BlasLapack()
$vendor
[1] "Intel Math Kernel Library (Intel MKL)"
$nthreads
[1] 3
• Modify NQSConfig.ini section 'Advance Analytics Script Section' parameters - e.g.:
[ ADVANCE_ANALYTICS_SCRIPT ]
# R EXECUTABLE PATH
# Specify the script executable binary path.
# R_EXECUTABLE_PATH = "/usr/bin/R";
R_EXECUTABLE_PATH = "C:/Program Files/R/R-3.1.1/bin/x64/R"; # Include the actual executable 'R'
# R COMMAND ARGS
# Specify the script executable command line arguments.
R_COMMAND_ARGS = "--no-restore --no-save --no-timing";
# Max Number of R Process that can be active at any given point in time
R_MAX_PROCESS = 20;
# EXECUTION TARGET WHERE SCRIPT GETS EXECUTED
# Defaults to Mid Tier R. The other targets are ORE, etc
TARGET = "R";
• Successful connection:
• Now when importing a new data source, JDBC (Direct Driver) and JDBC (JNDI) Connection
Types are available:
Oracle BITech Demo YouTube channel
• https://www.youtube.com/user/OracleBITechDemos