Best Practices
Washington Systems Center – Oracle
Ralf Schmidt-Dannert
dannert@us.ibm.com
IBM
IBM’s statements regarding its plans, directions, and intent are subject to change
or withdrawal without notice and at IBM’s sole discretion.
The development, release, and timing of any future features or functionality described for
our products remains at our sole discretion.
Know the AIX tuning parameters and their “best practice” values for
an Oracle database server.
– Memory
– CPU
– I/O
– Network
– Oracle Patches
– Miscellaneous
[Diagram: AIX physical memory, simplified view – per memory pool, free lists and used lists for 4KB, 64KB, 16MB and 16GB pages; 4KB/64KB conversion is automatic (psmd process, *1), 16MB pages come from DSO (*2) or manual pre-allocation, and pages can be paged out to paging space on disk.]
(*1) Only when large amounts of memory are requested at once and there are not enough free pages on the 4KB / 64KB free lists.
(*2) IBM AIX Dynamic System Optimizer (DSO) “MPSS” is a chargeable feature pre AIX 7.2. 16MB pages generated by DSO are handled differently from pre-allocated / non-pageable 16MB pages!
© Copyright IBM Corporation 2018
[Chart: 4K – 64K – 16MB Page Dynamics (April 2013) – MB used and free over time for 4KB, 64KB and 16MB pages]
Note: DSO 16MB page conversion is currently only reported in svmon.
AIX Memory Management Concepts
[Chart: % physical memory used over time – free memory vs. file cache. File cache is always 4KB memory pages!]
Definitions:
• lrud = VMM page stealing process = LRU Daemon (1 per memory pool)
• numperm, numclient = # pages currently used for filesystem buffer cache
• maxperm, maxclient = target maximum # pages to use for filesystem buffer cache
• free pages = # pages immediately available to satisfy new memory requests
vmo Parameters:
• minperm% = target min % real memory for filesystem buffer cache
• maxperm%, maxclient% = target max % real memory for filesystem buffer cache
• minfree = target minimum number of free memory pages
• maxfree = number of free memory pages at which lrud stops stealing pages
When does lrud (for a given memory pool and page size) start?
• When free pages < minfree (4K and 64K pages)
• When (maxclient - numclient) < minfree (4K pages only)
When does lrud stop?
• When free pages > maxfree (4K and 64K pages)
• When (maxclient – numclient) > maxfree (4K pages only)
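The start/stop rules above can be sketched as a small decision helper. This is a simplified model of the documented thresholds, not AIX source; the combination of the two 4K-page conditions is an assumption.

```python
# Simplified model of when lrud (the VMM page-stealing daemon) starts and
# stops for one memory pool, per the rules above. All values are page counts.

def lrud_should_start(free_pages, minfree, page_size="4K",
                      maxclient=None, numclient=None):
    """lrud starts when free pages drop below minfree (4K and 64K pools),
    or, for 4K pools only, when (maxclient - numclient) < minfree."""
    if free_pages < minfree:
        return True
    if page_size == "4K" and maxclient is not None and numclient is not None:
        return (maxclient - numclient) < minfree
    return False

def lrud_should_stop(free_pages, maxfree, page_size="4K",
                     maxclient=None, numclient=None):
    """lrud stops when free pages rise above maxfree (4K and 64K pools);
    for 4K pools, (maxclient - numclient) > maxfree must also hold."""
    if free_pages <= maxfree:
        return False
    if page_size == "4K" and maxclient is not None and numclient is not None:
        return (maxclient - numclient) > maxfree
    return True
```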
• AIX 7.2, 7.1 and 6.1 defaults are acceptable for most workloads
• Consider increasing if vmstat ‘fre’ column frequently approaches zero, or if “vmstat –s” shows
significantly increasing “free frame waits” over time
Example:
10-way LPAR with SMT-4 enabled, with maxpgahead=8 and j2_maxPageReadAhead=128 and 2 memory pools:
minfree = 2400 = max(960, (120 x 10 x 4)) / 2
maxfree = 4960 = 2400 + ((max(128, 8) x 10 x 4) / 2)
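The same arithmetic, generalized (a sketch of the sizing formulas implied by the example; parameter names mirror the vmo/ioo tunables):

```python
# Sketch of the minfree/maxfree sizing formulas used in the example above.
# lcpus = logical CPUs (cores x SMT threads), mempools = # of memory pools.

def minfree(lcpus, mempools):
    # target minimum number of free pages, per memory pool
    return max(960, 120 * lcpus) // mempools

def maxfree(lcpus, mempools, maxpgahead=8, j2_max_page_read_ahead=128):
    # lrud stops stealing once free pages exceed this value
    return (minfree(lcpus, mempools)
            + (max(maxpgahead, j2_max_page_read_ahead) * lcpus) // mempools)

# 10-way LPAR with SMT-4 (40 logical CPUs) and 2 memory pools:
print(minfree(40, 2))   # 2400
print(maxfree(40, 2))   # 4960
```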
JFS2 utilizes two caches – one for inodes and one for metadata
Caches grow in size until the maximum size is reached, before cache slots are reused
Default values are tuned for a file server!
Note: *1 Default values pre AIX 7.1 are 400 (10%), 400 (4%)
Workloads with large memory footprints and low spatial locality may perform poorly due to Segment
Lookaside Buffer (SLB) faults
• May consume up to 20% of total execution time for some workloads
Architectural trend toward smaller SLB sizes can exacerbate SLB related performance issues:
• POWER6 has 64 SLB entries – 20 for kernel, 44 for user processes – allowing 11GB of accessible memory before
incurring SLB faults
• POWER7/POWER8/POWER9 have 32 SLB entries – 20 for kernel, 12 for user processes – allowing 3GB of accessible
memory before incurring SLB faults with default segment sizes
Supported in AIX 6.1 TL8 and AIX 7.1 TL2 on IBM Power 7/7+; AIX 7.1 TL04 SP2 and AIX 7.2 TL01 SP2
on POWER8 and AIX 7.2 TL2 SP2 on POWER9
MPSS transparently supports Oracle SGA page size conversion from 4K/64K pages to 16MB pages;
only shared memory supported
Unit: page
-------------------------------------------------------------------------------
Pid Command Inuse Pin Pgsp Virtual
16711740 oracle 10199406 10144 325 10180604
– REF1: Hardware provided reference point identifying sets of resources that are near each other. e.g. socket in scale-out
servers or node in scale-up servers.
– SRAD: A Scheduler Resource Affinity Domain, i.e. an individual group of processors that all reside on the same chip
– MEM: The amount of local memory (in Megabytes) allocated to the SRAD
– CPU: The logical CPUs within the SRAD, e.g. with SMT4 enabled, 0-3 would be for the first physical CPU, 4-7 would be for
the second physical CPU, etc…
Most AIX 7.2, AIX 7.1 and AIX 6.1 parameters are configured by default to be ‘correct’ for most workloads
When migrating from AIX 5.3 to AIX 6.1, AIX 7.1 or AIX 7.2, existing parameter override settings in AIX 5.3 will be transferred to the AIX 6.1 or later environment
– After migration, review/verify that parameter values are properly set
[Diagram: Oracle instance memory – the SGA, background processes (RVWR, PMON, SMON, D000, …) and server processes each with their own PGA, and the DB files on disk.]
• SGA is shared among processes
• PGA is private to an individual server or background process
Computational memory:
• Some used for AIX kernel processing
• Some used by Oracle/client executable programs
• Includes Oracle SGA and PGA memory
AMM dynamic resizing of the shared pool can cause a fair amount of “cursor: pin S” wait time. One
strategy to minimize this is to set minimum sizes for the memory areas you particularly care about.
In addition, you can change how frequently AMM analyzes and adjusts the memory distribution.
See Metalink note 742599.1 (_memory_broker_stat_interval).
12c: inmemory_size
– When the Oracle “In-Memory” option is used, specifies the size within the SGA to reserve for “In-Memory” objects pinned in memory
– This parameter is not dynamic in 12.1, but can be dynamically increased in 12.2 or later
– Not to be confused with the keep cache
Recommended:
1. Use SGA_TARGET and SGA_MAX_SIZE rather than MEMORY_TARGET and MEMORY_MAX_TARGET
2. Most environments should use 64K pages rather than pinned 16M pages
3. If you do pin the SGA, make sure you also pin the kernel with vmm_klock_mode=2
Note:
MEMORY_TARGET/MEMORY_MAX_TARGET are not hard limits and Oracle can utilize significantly more memory for
PGA if needed. With Oracle 12c, Oracle also seems to take configured paging space into account as “memory” when
calculating the active limits.
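A minimal init-parameter sketch following recommendations 1–3 above. The sizes are illustrative placeholders only, not recommendations:

```
# spfile/init.ora sketch -- illustrative values only
sga_target           = 24G
sga_max_size         = 32G
pga_aggregate_target = 8G
lock_sga             = FALSE    # let AIX use 64K pages for the SGA
# memory_target / memory_max_target intentionally left unset

# Only if you do pin the SGA (discouraged for most environments):
#   lock_sga = TRUE, plus on AIX:  vmo -p -o vmm_klock_mode=2
```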
[Example: computed limits of 263GB and 210GB – greater than physical memory!]
SGA_MAX_SIZE and LOCK_SGA implications (12c, 11g, 10.2.4.0+)
LOCK_SGA=false Preferred
• Oracle dynamically allocates memory for the SGA only as needed up to the
size specified by SGA_TARGET
• SGA_TARGET may be dynamically increased, up to SGA_MAX_SIZE
• 64K pages automatically used for SGA if supported in the environment.
– If needed, 4K (or 16M) pages are converted to 64K pages.
– Down-conversion of 16M pages to 64K pages is only triggered at DB
startup if needed.
– After startup, additional unused 16M pages are not converted, even if not
enough 4K or 64K pages are available → potential for paging to paging
space.
Note: If you utilize environment variable ORACLE_SGA_PGSZ to set SGA memory page size manually,
then Oracle will allocate all memory specified via sga_max_size at startup! Memory is not pinned.
LOCK_SGA=true Discouraged
• Oracle pre-allocates all memory as specified by SGA_MAX_SIZE and pins it in memory,
even if it’s not all usable (i.e. SGA_TARGET < SGA_MAX_SIZE)
• If sufficient 16M pages are available, those will be used. Otherwise, all the SGA
memory will be allocated from 64K (if supported) or 4K pages (if 64K pages are not
supported). If needed, 4K or 16M pages will be converted to 64K pages, but 16M
pages are never automatically created.
• If a value for SGA_MAX_SIZE is specified larger than the amount of available memory
for computational pages, the system can become unresponsive due to system paging.
• If the specified SGA_MAX_SIZE is much larger than the currently available pages on
the combined 64K and 16M page free lists, the database startup may fail with error:
“IBM AIX RISC System/6000 Error: 12: Not enough space”. In this case re-try to start
the database.
64K pages – available with POWER5+ and later & AIX 5.3 TL4+ (preferred for the Oracle DB!)
– Can be paged to paging space
– Can be converted to 4K pages if not enough 64K pages are available
– Can be utilized for application code, data and stack as well, if specified for Oracle
– Kernel page size used in AIX 6.1, AIX 7.1 and AIX 7.2 (can be configured)
– In 11g and later, Oracle will automatically use 64K pages for the SGA if supported by the system
– May also be used for program data, text and stack areas by setting:
export LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K oracle
16GB (Huge Pages) – available with POWER5+ and later & AIX 5.3 TL4+ and later AIX releases
– Must be explicitly preconfigured and reserved, even if not being used
– Configured via the HMC; requires the physical server to be powered off
– No automatic conversion; any change in assignment to an LPAR requires at minimum the involved LPAR to be powered off
– Requires at minimum 3 additional 16GB pages above what is specified via sga_max_size
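Per the last bullet, a quick sizing sketch. The “+3 additional pages” comes from the bullet above; rounding the SGA up to whole 16GB pages is an assumption:

```python
import math

def huge_pages_needed(sga_max_size_gb):
    """Number of 16GB huge pages to reserve: enough whole pages to cover
    sga_max_size, plus at least 3 additional 16GB pages (per the slide)."""
    return math.ceil(sga_max_size_gb / 16) + 3

print(huge_pages_needed(100))  # 10 pages -> 160GB reserved for a 100GB SGA
```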
Agenda
– Memory
– CPU
– I/O
– Network
– Oracle Patches
– Miscellaneous
[Diagram: a 12-core physical shared pool hosting VIOS, IBM i and Linux partitions across multiple shared processor pools (max cap 5, 6 and 4 processors) plus CUoD cores; per-core SMT view showing busy and idle threads, with secondary and tertiary threads unfolding progressively more slowly.]
P = Primary SMT thread, S = Secondary SMT thread, T = Tertiary SMT thread
vpm_throughput_core_threshold: Specifies the number of cores that must be unfolded before the vpm_throughput_mode parameter
is honored (default: 1). If fewer processors are unfolded, the system behaves as if vpm_throughput_mode were set to 1.
CPU Related Oracle Parameters
– Degree of Parallelism
• Can be set at the user level, table level, or query level
• Restricted by PARALLEL_MAX_SERVERS
• Default setting = 1
• Default degree = (CPU_COUNT * PARALLEL_THREADS_PER_CPU)
Set PARALLEL_THREADS_PER_CPU=1 (at least with SMT 4 or SMT 8, potentially SMT 2 as well)
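To see why PARALLEL_THREADS_PER_CPU matters with SMT, a sketch of the default-DOP formula above. On AIX, CPU_COUNT counts logical CPUs (cores × SMT threads); the core/thread counts below are illustrative:

```python
def default_dop(cores, smt_threads, parallel_threads_per_cpu):
    """Default degree of parallelism = CPU_COUNT * PARALLEL_THREADS_PER_CPU,
    where CPU_COUNT is the number of logical CPUs (cores x SMT threads)."""
    cpu_count = cores * smt_threads
    return cpu_count * parallel_threads_per_cpu

# 8 cores with SMT-4: PARALLEL_THREADS_PER_CPU=2 would yield DOP 64,
# while the recommended setting of 1 keeps DOP at the logical-CPU count.
print(default_dop(8, 4, 2))  # 64
print(default_dop(8, 4, 1))  # 32
```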
Micropartitioning Guidelines
– Virtual CPUs (vCPUs) should always be <= physical processors in shared CPU pool
– Use default processor folding behavior unless IBM AIX support recommends otherwise
CAPPED
– vCPUs should be the nearest integer >= capping limit
UNCAPPED
– vCPUs should be set to the max peak demand requirement
– Preferably, number of vCPUs should not be more than 1.5x to 2x entitlement
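The capped/uncapped sizing guidelines above can be expressed as a small helper (a sketch of the stated rules only; real sizing should also consider workload profiles):

```python
import math

def suggest_vcpus(capped, entitlement, peak_demand=None):
    """Sketch of the vCPU sizing guidelines above.
    CAPPED: nearest integer >= the capping limit (entitlement).
    UNCAPPED: size for max peak demand, preferably <= 2x entitlement."""
    if capped:
        return math.ceil(entitlement)
    vcpus = math.ceil(peak_demand)
    preferred_max = math.ceil(2 * entitlement)
    return min(vcpus, preferred_max)  # cap at ~2x entitlement per guideline

print(suggest_vcpus(capped=True, entitlement=3.2))                  # 4
print(suggest_vcpus(capped=False, entitlement=4, peak_demand=6))    # 6
print(suggest_vcpus(capped=False, entitlement=2, peak_demand=10))   # 4
```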
DLPAR considerations
Oracle 10g/11g/12c
– Oracle CPU_COUNT dynamically recognizes change in # cpus (physical and logical)
– Max CPU_COUNT limited to 3x CPU_COUNT at instance startup
This restriction does not exist in 12c
Monitoring
– mpstat -s
– nmon -> ‘c’ – this is an estimated value
RAC and Oracle Clusterware Best Practices and Starter Kit (AIX) 811293.1
– Older versions of this document recommended – incorrectly – disabling VP folding
– Document has been modified to correctly reflect that current TL levels should be used for support of processor folding
[Graph: processor frequency vs. load level. The graph is for example only; actual results will vary based on system model, system configuration, supported processor core count, and active processor core count.]
Determinism
– Under nominal environmental conditions, the same workload running on the same system configuration will result in the same performance
– Memory
– CPU
– I/O
– Network
– Oracle Patches
– Miscellaneous
[Diagram: AIX I/O queuing layers – LVM pbufs, hdisk queue_depth, and adapter num_cmd_elems]
Note: RAW LV are not supported with Oracle 12c databases, except as “devices” used in ASM.
• Use fast (rather than delayed) fail over for multipath environments:
# chdev -l vscsi0 -a vscsi_err_recov=fast_fail
• Allow the client adapter to check the health of the VIO server vscsi path:
# chdev -l vscsi0 -a vscsi_path_to=30
• queue_depth on the client LPAR vscsi disks and VIO server hdisks should match
• Calculate the max # of LUNs for a VSCSI adapter and configure this # or fewer. If more
LUNs are needed, create additional VSCSI adapters.
Max LUNs = (# command elements − # cmd elems reserved for adapter) / (queue_depth + 3 cmd elems per LUN)
= (512 − 2) / (queue_depth + 3)
Example:
queue_depth = 32: (512 − 2) / (32 + 3) = maximum 14 LUNs per VSCSI adapter
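The LUN arithmetic above as a one-line helper (512 command elements and 2 reserved are the values from the example; actual adapters may differ):

```python
def max_vscsi_luns(queue_depth, cmd_elems=512, reserved_for_adapter=2):
    """Max LUNs per VSCSI adapter: each LUN consumes queue_depth + 3
    command elements out of (cmd_elems - reserved_for_adapter)."""
    return (cmd_elems - reserved_for_adapter) // (queue_depth + 3)

print(max_vscsi_luns(32))  # 14 LUNs per VSCSI adapter at queue_depth=32
```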
From “vmstat -v” (counter → remedy):
0 paging space I/Os blocked with no psbuf → if blocked on psbuf, stop paging or add more paging spaces
2484 filesystem I/Os blocked with no fsbuf → if blocked on fsbuf (JFS), increase numfsbufs (ioo, restricted) to 1568
0 client filesystem I/Os blocked with no fsbuf → if blocked on client fsbuf (NFS/Veritas), increase nfso nfs_vX_pdts and nfs_vX_vm_bufs values (“X” = 2, 3, or 4)
0 external pager filesystem I/Os blocked with no fsbuf → if blocked on JFS2 fsbuf: 1) increase j2_dynamicBufferPreallocation (ioo) to 128 or higher; 2) if that is not sufficient, increase j2_nBufferPerPagerDevice (ioo, restricted) to 2048 and unmount/remount the JFS2 filesystems
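A small helper to pull the blocked-I/O counters out of `vmstat -v` output (line shapes per the excerpt above; a sketch, not a complete parser):

```python
import re

def blocked_counters(vmstat_v_output):
    """Extract '<count> ... blocked with no <buf>' lines from vmstat -v."""
    counters = {}
    for line in vmstat_v_output.splitlines():
        m = re.match(r"\s*(\d+)\s+(.+ blocked with no \w+)", line)
        if m:
            counters[m.group(2).strip()] = int(m.group(1))
    return counters

# Sample taken from the excerpt above:
sample = """\
 0 paging space I/Os blocked with no psbuf
 2484 filesystem I/Os blocked with no fsbuf
 0 client filesystem I/Os blocked with no fsbuf
 0 external pager filesystem I/Os blocked with no fsbuf
"""
counts = blocked_counters(sample)
print(counts["filesystem I/Os blocked with no fsbuf"])  # 2484
```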
Old Wisdom
Isolate files based on function and/or usage
– Manually intensive effort
– Leads to I/O hotspots over time that impact throughput capacity and
performance
New Wisdom
Stripe objects across as many physical disks as possible
– Minimal manual intervention
– Evenly balanced I/O across all available physical components
– Good average I/O response time and object throughput capacity with no
hotspots
Implementation Options:
– ASM and GPFS do this automatically within a given disk group or file system
– Can be implemented with conventional Volume Managers and file systems
Example…
2. Stripe or spread individual objects across multiple LUNs (hdisks) for
maximum distribution
– Each object is spread across 4 LUNs, each from different array (16 drives)
[Diagram: AIX storage layout – a volume (disk) group or IBM GPFS spanning hdisks 1–4, each mapped to a LUN on a different hardware-striped array]
Note: ASM, AIX LVM with FS, or GPFS cannot share the same hdisks.
[Diagram: Oracle instance memory and processes – Shared Pool, DB Buffer Cache, In-Memory Area, Redo Log Buffer and Flashback Log in the SGA; background processes (RVWR, PMON, SMON, LGWRn, D000) and user server processes each with their own PGA; control files, DB files and Oracle binaries on disk.]
[Table: storage options – (JFS) / JFS2, RAW LV, GPFS, ASM, ACFS (11.2.0.2), ACFS (12.2)* – versus file types: database files, redo log files, control files, archive log files, and Oracle binaries.]
[Chart: comparison of redo logs on HDD vs. redo logs on FlashSystems]
4K redo log option in 11.2.0.3+ (can benefit Flash, with JFS2 or ASM)
Watch out for redo wastage reported in AWR.
Set database instance parameter “_disk_sector_size_override”=TRUE
Then add 4K logfiles and delete the old 512-byte log files:
SQL> alter database add logfile ‘+RECO’ size 5G blocksize 4096;
For Oracle 12.2 or later also see:
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ostmg/create-diskgroups.html#GUID-CACF13FD-1CEF-4A2B-BF17-DB4CF0E1800C
Improve Database IO service time with Host Level Striping
ASM
– Stripes by default when multiple LUNs are configured per ASM disk group
– 10gR2: stripe size is 128K (fine-grained) or the Allocation Unit (AU) size (coarse-grained)
– 11g+: stripe size = Allocation Unit (AU) size, default = 1 MB
– The AU size can be changed at the disk group level; for example, 4MB or 8MB for data warehouse type workloads
New Wisdom:
Depending on workload, a higher hit% may provide significant improvements
– For a given workload with a buffer hit% of 98%, a 1% increase (to 99%) will reduce
physical I/O requests by 50%
– Reducing IOPS typically also improves response time for remaining I/Os
– In many cases, adding server memory may be cheaper than adding I/O subsystem
cache memory or short-stroking disks
Evaluate impact of increasing db_cache_size on physical I/O
Monitor for and address potential impact:
– Increased logical read rates and higher peak CPU demand due to reduced I/O wait
time (increase CPU capacity as appropriate to benefit from reduced IO wait time)
– System paging due to memory shortage (add physical memory as necessary)
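The miss-rate arithmetic behind the first bullet, as a quick check:

```python
def physical_io_reduction(old_hit_pct, new_hit_pct):
    """Fraction of physical I/O requests eliminated when the buffer cache
    hit ratio improves: misses drop from (100-old)% to (100-new)%."""
    old_miss = 100.0 - old_hit_pct
    new_miss = 100.0 - new_hit_pct
    return 1.0 - new_miss / old_miss

print(physical_io_reduction(98, 99))  # 0.5 -> 50% fewer physical I/Os
```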
[Chart: IOPS vs. Buffer Cache Size (GB), measured at 6.5GB and 101.5GB cache sizes]
– Release Behind Write (RBW): memory pages are released (made available for stealing) after they are written to disk
– Release Behind Read (RBR): memory pages are released after they are read from cache
– No Access Time (NOATIME): do not update the last-accessed time when a file is accessed
Note that access to a single data file is illustrated. There is one independent “inode lock” per file
to control concurrent access.
When using DIO/CIO, FS buffer cache isn’t used. Consider the following Oracle database changes:
Increase db_cache_size
Increase db_file_multiblock_read_count (With 11gR2, 12c use database default!)
AIX APAR IV76026: CIO/DIO ENABLED FILESYSTEMS CAN CRASH THE SYSTEM WITH ISI_PROC (affects AIX 6.1, 7.1 and 7.2 releases)
Oracle Binaries
• Don’t use CIO or DIO
• Use NOATIME to reduce ‘getcwd’ overhead
Oracle parameters
disk_asynch_io = TRUE
filesystemio_options = {ASYNCH | SETALL}
db_writer_processes (typically let default)
db_writer_io_slaves (do not set when using AIO)
Syntax / Description
– rendev -l <original name> -n <new name>
– The device entry under /dev will be renamed corresponding to <new name>
– Certain devices such as /dev/console, /dev/mem, /dev/null, and others that are identified only with /dev special
files cannot be renamed
– Command will fail for any device that does not have both a Configure and an Unconfigure method
– Any name that is 15 characters or less and not already used in the system can be used
If used to rename hdisk devices for ASM use, it is recommended that you keep the "hdisk" prefix, as
this will allow the default ASM discovery string to match the renamed hdisks. Corresponding rhdisk is
renamed as well.
Example:
# rendev -l hdisk10 -n hdiskASM10
# ls /dev/*ASM*
/dev/hdiskASM10
/dev/rhdiskASM10
Syntax:
lkdev [ -l <Name> -a | -d [ -c <Text> ] ]
<Name>     Name of the device to be changed (required)
-a         Locks the specified device
-d         Unlocks the specified device
-c <Text>  Specifies a text label of up to 64 printable characters with no embedded spaces
Examples:
– To enable the lock for the hdiskASM10 disk device and create a text label, enter the following command:
# lkdev -l hdiskASM10 -a -c ASMdisk
– To remove the lock for the hdisk1 disk device and remove the text label, enter the following command:
# lkdev -l hdiskASM10 -d
Note:
The text label of a locked device cannot be changed! Instead, the device must
first be unlocked and then locked again with the new text label specified.
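Per the note, relabeling a locked device is a two-step sequence. A sketch using the example device from above; the new label text is a hypothetical placeholder:

```shell
# The label of a locked device cannot be changed in place:
# unlock first, then lock again with the new label.
lkdev -l hdiskASM10 -d                     # unlock (removes the old label)
lkdev -l hdiskASM10 -a -c ASMdisk_DATA01   # lock again with the new label
```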
– lkdev with no parameters will return a list of all locked devices, including any defined label
information
# lkdev
hdiskASM10 asmdisk
The lspv command has been extended to display the device status of a locked device.
$ lspv
hdisk0 00f623c450d9a96f rootvg active
hdisk1 00f623c469960b72 orabinvg active
hdiskASM10 none asmdisk locked
Recommendation
Increase asm_hbeatiowait to 120 seconds to prevent this issue from occurring.
Applies to Oracle Database – Enterprise Edition – Version 11.2.0.3 to 12.1.0.1 [Release 11.2 to 12.1] on any platform
Oracle In-Memory – Impact on CPU / IO for data warehouse queries
Row format:
• 726GB fact table
• CPU bound
• Peak 5.5GB/s read
• Sustained > 2.5GB/s
• 12TB of data read from disk!
With In-Memory (IM format):
• 171GB compressed In-Memory fact table
• CPU bound
• Peak 0.11MB/s read
• 8MB of data read from disk!
Agenda
– Memory
– CPU
– I/O
– Network
– Oracle Patches
– Miscellaneous
Generally appropriate parameters for 1 or 10 Gigabit Ethernet Oracle public network interfaces:
– tcp_sendspace = 262144
– tcp_recvspace = 262144
– rfc1323 = 1
Set tcp_nodelay on the network adapter (do not set tcp_nodelayack via “no”):
– Useful for RAC interconnect and/or LAN connected application server and database
– chdev -l <enX> -a tcp_nodelay=1
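A sketch of applying the interface-specific options above with AIX `chdev` (en0 is an example device name; verify attribute support on your adapter first):

```shell
# Interface-specific network options on the Oracle public interface
chdev -l en0 -a tcp_sendspace=262144 -a tcp_recvspace=262144 -a rfc1323=1
chdev -l en0 -a tcp_nodelay=1
lsattr -El en0 | grep -E 'tcp_(send|recv)space|rfc1323|tcp_nodelay'   # verify
```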
– Memory
– CPU
– I/O
– Network
– Oracle Patches
– Miscellaneous
Source: Oracle
Agenda
AIX Configuration/Tuning for Oracle
– Memory & CPU
– I/O
– Network
– Oracle Patches
– Miscellaneous
IBM HIPER APAR - ORA 600 ERRORS AND ORACLE CORE DUMPS AFTER AIX SP UPGRADE
PROBLEM SUMMARY:
The thread_cputime or thread_cputime_fast interfaces can cause invalid data in the
FP/VMX/VSX registers if the thread page faults in this function.
iFix / APAR information (affected AIX levels and fixed-in levels): ftp://aix.software.ibm.com/aix/ifixes/
/etc/security/limits
– Set to “-1” for everything except core for Oracle, grid and root users
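A sketch of the corresponding `/etc/security/limits` stanza, per the bullet above (-1 = unlimited; the core value shown is an illustrative non-unlimited placeholder; repeat the stanza for the grid and root users):

```
# /etc/security/limits stanza sketch for the oracle user
oracle:
        fsize = -1
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1
        core = 2097151
```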
Environment variables:
– AIXTHREAD_SCOPE=S
– LDR_CNTRL settings – See Oracle 12.1.x and 11.2.0.4 Database Performance Considerations with AIX on POWER8
technical notes white paper (WP102608) for more details on how to set it
Time synchronization – For RAC environments, use the xntpd “-x” flag
Things to Consider Before Upgrading to 11.2.0.3 to Avoid Poor Performance or Wrong Results – 1392633.1
Things to Consider Before Upgrading to 11.2.0.4 to Avoid Poor Performance or Wrong Results – 1645862.1
RAC and Oracle Clusterware Best Practices and Starter Kit (AIX) 811293.1
Oracle Database on UNIX AIX, HP-UX, etc Unix Operating Systems Installation and Configuration Requirements Quick
Reference 169706.1
Best Practices: Proactively Avoiding Database and Query Performance Issues – 1482811.1
Recommended Bundle patch for AIX and 11.2.0.3 with critical fixes – 1528081.1
Recommended Bundle patch for AIX and 11.2.0.4 with critical fixes – 2022567.1
Recommended Bundle patch for AIX and 12.1.0.2 with critical fixes – 2022559.1
IBM Key Resources
Oracle 12.1.x and 11.2.0.4 Database Performance Considerations with AIX on POWER8
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102608
IBM Power System, AIX and Oracle Database Performance Considerations (for Oracle versions up to 11.2.0.3)
https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102171
Oracle Database 11g and 12c on IBM Power Systems S924, S922 and S914 with POWER9 processors
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102750
# svmon -G
size inuse free pin virtual
memory 1179648 926225 290287 493246 262007
pg space 1572864 5215
U.S. Government Users Restricted Rights — use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM.

Information in these presentations (including information relating to products that have not yet been announced by IBM) has been reviewed for accuracy as of the date of initial publication and could include unintentional technical or typographical errors. IBM shall have no responsibility to update this information. This document is distributed “as is” without any warranty, either express or implied. In no event shall IBM be liable for any damage arising from the use of this information, including but not limited to, loss of data, business interruption, loss of profit or loss of opportunity. IBM products and services are warranted per the terms and conditions of the agreements under which they are provided.

IBM products are manufactured from new parts or new and used parts. In some cases, a product may not be new and may have been previously installed. Regardless, our warranty terms apply.

Any statements regarding IBM's future direction, intent or product plans are subject to change or withdrawal without notice.

customers have used IBM products and the results they may have achieved. Actual performance, cost, savings or other results in other operating environments may vary.

References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business.

Workshops, sessions and associated materials may have been prepared by independent session speakers, and do not necessarily reflect the views of IBM. All materials and discussions are provided for informational purposes only, and are neither intended to, nor shall constitute, legal or other guidance or advice to any individual participant or their specific situation.

It is the customer’s responsibility to ensure its own compliance with legal requirements and to obtain advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulatory requirements that may affect the customer’s business and any actions the customer may need to take to comply with such laws. IBM does not provide legal advice or represent or warrant that its services or products will ensure that the customer follows any law.