
Tegile Best Practices for Oracle Databases

Pg. 1  
Best Practices Guide
Oracle Database on Tegile IntelliFlash

Contents

Executive Summary .............................................................................................................................................. 3


Disclaimer ............................................................................................................................................................. 3
About This Document ............................................................................................................................................ 3
Quick Start Guide .................................................................................................................................................. 4
LUN Sizing Recommendations .............................................................................................................. 5
Tegile IntelliFlash™ Storage Array Setup ............................................................................................................. 6
Pools ...................................................................................................................................................... 6
Projects .................................................................................................................................................. 6
LUNs ...................................................................................................................................................... 9
LUN block size for REDO Logs. .......................................................................................................... 10
Compression and Deduplication .......................................................................................................... 10
Linux OS Setup ................................................................................................................................................... 11
Linux Multipathing ................................................................................................................................ 11
Adding Multipath Aliases ..................................................................................................................... 11
LUN permissions and UDEV rules ....................................................................................................... 13
Oracle Grid Install (ASM) .................................................................................................................................... 15
Using Multiple Arrays with Oracle ASM ............................................................................................... 15
ASM Redundancy Options .................................................................................................................. 15
Best Practice for Multi-Array Configurations ........................................................................................ 15
Oracle DB Install ................................................................................................................................................. 15
Deploying Oracle Databases in VMware vSphere virtualization ........................................................................ 16
LUN Creation Guidelines ..................................................................................................................... 16
Virtual Machine Creation Guidelines ................................................................................................... 17
Hypervisor tuning ................................................................................................................................. 18
Linux Guest Configuration ................................................................................................................... 19
Tegile LUNs setup for Oracle Database in a VMware vSphere environment ..................................................... 20
Project and LUN Parameters ................................................................................................ 20
How to create Tegile Snapshots for Oracle Database ........................................................................................ 21
Tegile Snapshot creation ..................................................................................................................... 21
How to create clones of Oracle Database for test-dev from Tegile Snapshots................................................... 22
Tegile Clone Creation .......................................................................................................................... 22
Additional References ......................................................................................................................................... 23
Tegile Best Practices and Reference Architectures for vSphere ......................................................... 23
Additional References for Oracle in VMware Environments ................................................. 23
Appendix ............................................................................................................................................................. 24
multipath.conf for Tegile arrays with 2.x firmware or older .................................................................. 24
multipath.conf for Tegile arrays with 3.x firmware or newer ................................................................ 24


Executive Summary

This document describes the process for installing an Oracle 12cR1 single-instance database on a
Red Hat or OEL 6- or 7-compatible operating system using Tegile flash storage. For the purposes of
this document, Oracle 12.1.0.2 and Oracle Linux 6.7 were used; however, Oracle 11gR2 and earlier
versions of Linux have very similar, if not identical, setup methods. Any version-specific differences
in procedure are called out in the document. The test system was a 2-socket, 12-core (6 cores per
socket) server with 48GB of memory, connected via 8Gb Fibre Channel to a Tegile T3700 all-flash
array running firmware version 2.1.3.5 in an active/active controller configuration.

In order to take advantage of the extreme performance characteristics of Tegile flash storage, the
Oracle Automatic Storage Management (ASM) volume manager is used to achieve raw performance
(as opposed to using a file system).

Disclaimer

Note that this document describes the process for building a generic system and does not take
into account individual customer’s requirements for security, performance, resilience and other
operational aspects that may be relevant. Customers with existing operational guidelines should
treat those guidelines with higher priority – and where any advice in this document conflicts with
existing policies those policies should be adhered to. Tegile does not accept liability for any
issues experienced as a result of following this document.

About This Document

This document details each step necessary to complete the installation process, along with
examples and expected outputs. Experienced users may find this level of detail unnecessary,
so a "Quick Start" section showing only the high-level steps is also included.


Quick Start Guide

This section shows a high-level summary of the steps required to complete the installation:

1. Create LUNs from Tegile’s GUI (see section Storage Array Setup)

2. Install the oracle-rdbms-server-12cR1-preinstall package using yum (for 11gR2, use the
oracle-rdbms-server-11gR2-preinstall package)

3. Install and configure the device mapper multipathing software – note that there are
specific device details required when adding entries into the multipath.conf file for Tegile
arrays (see section Multipathing)

4. Add aliases in the multipath.conf file for each LUN presented from Tegile arrays (see
section Add Multipath Aliases)

5. Create UDEV rules to handle LUNs presented from Tegile arrays – note that again there
are specific configuration settings which must be set using these UDEV rules (see
section LUN permissions and UDEV rules)

6. Create the Oracle Grid Infrastructure (see section Oracle Grid Install)

7. Create the Oracle Database (see section Oracle DB Install)

8. Create a separate ASM disk group for the redo logs, following the redo log guidelines (see section LUN block size for REDO Logs)


High Level Recommendations

Tegile makes the following recommendations for the use of Oracle software with Tegile arrays:

• Oracle Database and Grid Infrastructure (ASM) software of version 11g Release 2 or
later is recommended.

• Databases placed on Tegile all-flash arrays should have a database block size of 4K or
greater (e.g. the default value of 8K is acceptable).

LUN Sizing Recommendations

The design of Tegile arrays allows a single LUN to deliver the full performance capability of
each active controller. However, because this capability is so high, many operating systems
exhibit bottlenecks at the OS queue level if a single LUN is used. For this reason, Tegile
recommends using multiple LUNs in groups of eight per array (four per active controller when
active/active is configured) for each data storage point (e.g. ASM diskgroup or filesystem).

• ASM diskgroups containing database DATA or fast recovery areas should comprise 8
LUNs spread equally across active controllers.

• If multiple arrays are used, the above recommendation should be adapted to allow a
minimum of 8 LUNs per diskgroup spread over all arrays. For example, a +DATA
diskgroup spread over four arrays would have a minimum of 2 LUNs per array (1 LUN
per controller), making 8 LUNs in total.

• For locations containing files which are infrequently accessed (e.g. database parameter
files, +GRID diskgroups etc.), it is recommended to place them on a mirrored disk group
for redundancy.

• To avoid the unnecessary overhead of ASM rebalances, Tegile recommends that
customers do not add LUNs to a striped diskgroup to increase capacity, but instead
increase the size of the existing LUNs.

• Unless there are specific use cases driving a smaller block size on the Tegile LUNs,
“Database – 8K Block Size” should be used to ensure maximum performance from the
array as well as maximized compression results when compression is enabled. This
recommendation should be used for Bare-Metal Environments (i.e. Linux OS running
without a Hypervisor).
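Growing an existing LUN rather than adding a new one involves a few host-side steps once the LUN has been resized on the array. The sketch below is our own, not from this guide: the multipath alias reuses an example from the multipathing section, nothing runs until the function is called, and the ASM step must be executed separately as the grid owner.

```shell
#!/bin/sh
# Sketch only: host-side follow-up after an existing Tegile LUN has been
# grown on the array. The alias name is a hypothetical example.
grow_asm_lun() {
    map="$1"    # multipath alias, e.g. a_data01_8k_125GB

    # 1. Rescan every SCSI path so the kernel sees the new LUN size
    for rescan in /sys/class/scsi_device/*/device/rescan; do
        echo 1 > "$rescan"
    done

    # 2. Ask device-mapper multipath to resize the map
    multipathd -k"resize map $map"

    # 3. Then, as the grid owner in ASM:
    #      SQL> ALTER DISKGROUP DATA RESIZE ALL;
}
```

Because ASM sizes disks itself, no rebalance is triggered by a pure resize, which is the point of this recommendation.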


Tegile IntelliFlash™ Storage Array Setup

Tegile is pioneering a new generation of affordable, feature-rich storage arrays that are
dramatically faster and deliver more effective capacity than traditional arrays. The Tegile
all-flash array uses an active-active controller architecture to give an Oracle environment
the highest level of array performance while maintaining a fully redundant, highly available
system. The following array setup takes this architecture into account when creating LUNs
to be presented to ASM for the highest-performing design.

In the 2.1.3.5 version of the Tegile T3700 array GUI, there are five IP addresses assigned for
managing the array: two IPMI addresses (one per controller), two management addresses for
managing the controllers individually, and one HA address for managing the entire array. In
an active-active configuration the HA address can be used to provision storage on both
controllers. If the array were configured active-passive, each controller would need to be
managed via its individual MGMT address.

Pools

Clicking the Data menu item at the top of the GUI shows the pool-a and pool-b pools. A pool
can be understood as the storage associated with each controller. By selecting a pool, the
storage available on that particular controller can be provisioned in terms of projects, LUNs,
and file systems.

Projects

Projects are an elegant way to encapsulate a group of related LUNs and their shared
characteristics. By grouping LUNs into a project, activities such as snapshot scheduling and
clone creation can be managed from a single place for the entire group of LUNs. Furthermore,
a default set of parameters such as networking settings, block sizes and compression
algorithms can be defined so that LUNs created under the project inherit those settings.


For Oracle databases, the following best practices should be followed for project creation:
1) Provide a project name and select Generic as the purpose. (Future versions of the
GUI will incorporate these Oracle best practice settings into a template.) Select a
networking protocol.

2) Based on your specific requirements for the LUNs to be created, complete the FC Target
Group information accordingly.


3) Complete the Initiator Group settings.

4) The next screen presents the data reduction options: deduplication and compression. By
default, Oracle databases are not good candidates for data deduplication, as each
database block is unique due to its header and DB storage metadata. Compression,
however, is a very valid selection with negligible performance impact. For the best
performance while still providing adequate levels of compression, lz4 should remain
selected as the compression algorithm.

5) Configure the Snapshot Policy for your environment.

6) Review and finish.

LUNs
As per the High Level Recommendations section earlier in this document, a total of 9 LUNs will
be created in this best practice exercise. However, if an FRA (Fast Recovery Area) were also
configured, this number would increase to 17 LUNs: 1 for the grid infrastructure files (ASM),
8 for the +DATA diskgroup and 8 for the +FRA diskgroup. Adopt a meaningful LUN naming
methodology to easily identify devices on the Oracle host. The naming methodology
demonstrated here uses the format pool letter_usage_blocksize_LUNsize, e.g. a_grid01_8k_5GB.
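The naming convention above is easy to script when creating many LUNs. The helper below is a hypothetical illustration of ours (the function name, counts and sizes are examples, not part of this guide); it simply prints names in the pool_usage_blocksize_size format.

```shell
#!/bin/sh
# Hypothetical helper: print LUN names following the convention above
# (pool letter_usage_blocksize_LUNsize). Counts and sizes are examples.
lun_names() {
    pool="$1"; usage="$2"; bs="$3"; size="$4"; count="$5"
    i=1
    while [ "$i" -le "$count" ]; do
        printf '%s_%s%02d_%s_%s\n' "$pool" "$usage" "$i" "$bs" "$size"
        i=$((i + 1))
    done
}

# Four +DATA LUNs per controller, matching the layout used in this guide:
lun_names a data 8k 125GB 4    # a_data01_8k_125GB ... a_data04_8k_125GB
lun_names b data 8k 125GB 4
```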

Unless there are specific use cases driving a smaller block size on the Tegile LUNs, “Database –
8K Block Size” should be used to ensure maximum performance from the array as well as
maximized compression results when compression is enabled.

1) Create the single small LUN for the Oracle Grid infrastructure files on one of the
controllers. This example shows this occurring on the “a” controller or “pool” in the orcl-
micro1 “Project”.

2) Create the remainder of the database LUNs following a similar naming convention for the
+DATA and +FRA (if necessary) disk groups. The final configuration will appear as
below.

Pool-a


Pool-b
LUN block size for REDO Logs.
Redo logs are transactional journals: each transaction is recorded in the redo logs, which are flushed
to disk at regular intervals determined by multiple factors beyond the scope of this document.
It is recommended to create a separate ASM disk group with redundancy and assign multiple LUNs
to it (up to 8). When creating LUNs for redo logs, use a larger LUN block size (between 64K and
128K), disable deduplication on these LUNs, and set "LOGBIAS=Latency".
To determine the ideal LUN block size for redo logs, an AWR report snapshot can show the highest
block size count for your database and the redo wastage. If AWR analysis shows that the LUN block
size for your redo logs is not ideal and there is too much redo wastage, you can create new LUNs
with a different block size, add them to a new ASM disk group, create new redo logs on the new disk
group, and then drop the old redo logs.
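As a rough illustration of the AWR analysis above, redo wastage can be expressed as a percentage of all redo written. The helper below is our own (the statistic names "redo size" and "redo wastage" come from AWR/V$SYSSTAT; the sample figures are made up).

```shell
#!/bin/sh
# Hypothetical helper: express "redo wastage" as a percentage of all redo
# written, given the "redo size" and "redo wastage" statistics in bytes.
redo_wastage_pct() {
    awk -v s="$1" -v w="$2" 'BEGIN { printf "%.1f\n", 100 * w / (s + w) }'
}

redo_wastage_pct 900000000 100000000    # prints 10.0
```

A persistently high percentage is the signal, per the text above, to revisit the redo log LUN block size.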

Compression and Deduplication


Inline deduplication and compression enhance usable capacity well beyond raw capacity,
reducing overall storage capacity requirements by as much as 50 percent according to Tegile
customer deployments in the field. However, due to the nature of Oracle database blocks and
the underlying data, Oracle deployments are NOT well suited to data deduplication, so this
option should be avoided for standard Oracle installations. Compression, on the other hand,
is a valid and powerful way of reducing the overall capacity requirements of the database.
Different compression algorithms have different characteristics; for best practice purposes
the lz4 algorithm should be used for database workloads. Refer to the chart below for other
algorithms and their characteristics.


Linux OS Setup
Follow the Oracle installation guide for setting up the Oracle server environment. On Oracle
Enterprise Linux and RHEL, install the prerequisite package for the appropriate Oracle version:

oracle-rdbms-server-12cR1-preinstall (12cR1) or oracle-rdbms-server-11gR2-preinstall (11gR2)

Linux Multipathing

Multipathing software provides resilience and performance benefits when multiple paths exist
between storage devices and servers. In the case of Fibre Channel storage solutions there will
usually be multiple paths through the Fibre Channel network over which LUNs can be presented
from storage. The multipathing software detects which duplicate paths correspond to each
underlying physical device so that they can be combined into a single virtual device. The
primary benefit of this virtual device is that any underlying path failure can be tolerated
provided at least one path remains available: the multipathing software detects failed paths
and re-issues any failed I/O requests on a remaining active path in a manner that is transparent
to the caller. This transparency is essential for Oracle software such as ASM and the database,
which are unaware of the multipathing layer and have no built-in functionality to perform the
same task.

An additional benefit of multipathing software is the lower latency, which can be gained by
spreading I/O requests over numerous underlying paths. This is of particular importance when
using high performance storage such as Tegile flash arrays.

Adding Multipath Aliases


By default, the multipath virtual devices corresponding to LUNs presented from Tegile arrays
have generic names that may not be useful to administrators. For reasons of manageability,
Tegile recommends renaming these devices to names that are more obviously associated with
their corresponding target. Possible naming conventions include the use of the array name or
the intended ASM disk (e.g. "DATA1").

Each LUN presented from Tegile has a unique identifier (WWID). These identifiers are used to
create the user-friendly aliases in the multipath configuration file, so a list of the existing
LUNs is needed; the command multipath -ll shows all existing devices known to the multipathing
software:

[root ~]# multipath -ll | grep TEGILE | sort   (partial listing)

3600144f0d16d890000005653461a0010 dm-3 TEGILE,ZEBI-FC
3600144f0d16d89000000565346320011 dm-5 TEGILE,ZEBI-FC
3600144f0d16d89000000565346490012 dm-6 TEGILE,ZEBI-FC
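Turning such a listing into alias stanzas by hand is tedious for many LUNs; the sketch below automates it. The function name and the alias suffix are our own examples, not from this guide, and should be adapted to the naming convention chosen earlier.

```shell
#!/bin/sh
# Hypothetical helper: convert "WWID dm-N" lines (as produced by the
# multipath -ll pipeline above) into alias stanzas for the multipaths
# section of /etc/multipath.conf. The prefix and 8k/125GB suffix are
# examples only.
wwids_to_stanzas() {
    prefix="$1"
    i=1
    while read -r wwid rest; do
        printf 'multipath {\n    wwid %s\n    alias %s%02d_8k_125GB\n}\n' \
            "$wwid" "$prefix" "$i"
        i=$((i + 1))
    done
}

printf '%s\n' \
    '3600144f0d16d890000005653461a0010 dm-3' \
    '3600144f0d16d89000000565346320011 dm-5' | wwids_to_stanzas a_data
```

The generated stanzas can be pasted into the multipaths section shown next.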


Based on these values, entries should be added to the /etc/multipath.conf file:

defaults {
    polling_interval 5
    path_grouping_policy multibus
    failback immediate
    user_friendly_names yes
    max_fds 8192
}
devices {
    device {
        vendor "TEGILE"
        product "ZEBI-FC"           # 2.x Fibre Channel
        #product "ZEBI-ISCSI"       # 2.x iSCSI
        #product "INTELLIFLASH"     # 3.x Fibre Channel & iSCSI
        hardware_handler "1 alua"
        path_selector "round-robin 0"
        path_grouping_policy "group_by_prio"
        no_path_retry 10
        dev_loss_tmo 50
        path_checker tur
        prio alua
        failback 30
        rr_min_io 128
    }
}
multipaths {
    # example of setting user-defined names for multipath devices
    multipath {
        wwid 3600144f0d16d89000000563d35810008
        alias a_data01_8k_125GB
    }
    multipath {
        wwid 3600144f0d16d89000000563d35960009
        alias a_data02_8k_125GB
    }
}

NOTE: the listing above is for Tegile arrays running 2.x code. If the array is running 3.x or
newer code, the only difference is that the product entry changes from product "ZEBI-FC"
to product "INTELLIFLASH".
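After a firmware upgrade, the product string can be switched with a single sed edit. This is demonstrated on a scratch file; on a real host the target is /etc/multipath.conf (keep the .bak backup that sed creates).

```shell
#!/bin/sh
# Switching the device stanza when a Tegile array moves from 2.x to 3.x
# firmware. Shown on a scratch copy, not on /etc/multipath.conf itself.
conf=$(mktemp)
printf 'vendor "TEGILE"\nproduct "ZEBI-FC"\n' > "$conf"

sed -i.bak 's/product "ZEBI-FC"/product "INTELLIFLASH"/' "$conf"

grep product "$conf"    # product "INTELLIFLASH"
rm -f "$conf" "$conf.bak"
```

Follow the edit with the multipath -F / multipath -v2 flush-and-reload shown below.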

The final step in the process is to flush the device mapper and instruct multipath to pick up the
new user-defined configuration:
[root ~]# multipath -F
[root ~]# multipath -v2


The devices now exist in the /dev/mapper directory as expected:

[root ~]# ls -l /dev/mapper/   (partial listing)


lrwxrwxrwx 1 root root 7 Nov 23 09:49 a_data01_8k_125GB -> ../dm-3
lrwxrwxrwx 1 root root 7 Nov 23 09:49 a_data02_8k_125GB -> ../dm-5
lrwxrwxrwx 1 root root 7 Nov 23 09:49 a_data03_8k_125GB -> ../dm-6
lrwxrwxrwx 1 root root 7 Nov 23 09:49 a_data04_8k_125GB -> ../dm-7

lrwxrwxrwx 1 root root 7 Nov 23 06:04 a_grid_8k_5GB -> ../dm-2


lrwxrwxrwx 1 root root 7 Nov 23 09:49 b_data01_8k_125GB -> ../dm-8
lrwxrwxrwx 1 root root 7 Nov 23 09:49 b_data02_8k_125GB -> ../dm-9
lrwxrwxrwx 1 root root 8 Nov 23 09:49 b_data03_8k_125GB -> ../dm-10
lrwxrwxrwx 1 root root 8 Nov 23 09:49 b_data04_8k_125GB -> ../dm-11

LUN permissions and UDEV rules

The I/O scheduler determines the way in which block I/O operations are submitted to storage.
There are a number of different I/O schedulers available in the Linux kernel by default, but a
common theme in their behavior is the aim of reducing the impact of hard drive "seek time".
Most work by assigning I/O operations to queues and then reordering them to reduce the amount
of time that disk heads spend moving between locations. On enterprise Linux kernels such as
RHEL and Oracle Linux 6, the cfq scheduler is enabled by default. Flash memory has no seek
time and exhibits latencies that are frequently below a millisecond, so there is minimal gain
from using this scheduler. Tests have consistently shown a significant increase in performance
when switching to the simpler noop scheduler.

In order to set all Tegile devices to use these values, a new UDEV rule must be created. UDEV is
the Linux device manager which dynamically creates and maintains the device files found in the
/dev directory. UDEV uses a number of rules files located in the /etc/udev/rules.d directory, so to
make this change a new file should be created. The name of the file – and its contents – will be
dependent on the version of Linux in use.

Red Hat Enterprise Linux 6 / Oracle Linux 6

Create a file with the name 50-tegile.rules:
[root ~]# vi /etc/udev/rules.d/50-tegile.rules

This file will contain the following UDEV rules (take care not to introduce any additional carriage
returns; this syntax is very sensitive):

### /etc/udev/rules.d/50-tegile.rules (code levels 2.x & 3.x) #######
### This example is for 2.x FC. For 2.x iSCSI use SYSFS{model}=="ZEBI-ISCSI"
### For 3.x FC and iSCSI, use SYSFS{model}=="INTELLIFLASH*"

# RH 6: set scheduler and queue depth for Tegile SCSI devices
KERNEL=="sd*[!0-9]|sg*", BUS=="scsi", SYSFS{vendor}=="TEGILE", SYSFS{model}=="ZEBI-FC", RUN+="/bin/sh -c 'echo noop > /sys/$devpath/queue/scheduler && echo 128 > /sys/$devpath/queue/nr_requests'"

# Set owner to oracle:dba for Tegile multipath devices
KERNEL=="dm-[0-9]*", ENV{DM_UUID}=="mpath-3600144f0*", OWNER:="oracle", GROUP:="dba", MODE:="660", RUN+="/bin/sh -c 'echo noop > /sys/$devpath/queue/scheduler && echo 128 > /sys/$devpath/queue/nr_requests'"

Finally, the UDEV subsystem must be told to reread and apply the new rules:
[root ~]# udevadm control --reload-rules
[root ~]# udevadm trigger

Check that the new rules have taken effect by confirming that the owner of the /dev/dm*
Tegile devices has changed to oracle:dba:
[root ~]# ls -l /dev/dm*   (partial listing)

brw-rw---- 1 oracle dba 252, 10 Nov 23 09:49 /dev/dm-10


brw-rw---- 1 oracle dba 252, 11 Nov 23 09:49 /dev/dm-11
brw-rw---- 1 oracle dba 252, 2 Nov 23 10:18 /dev/dm-2
brw-rw---- 1 oracle dba 252, 3 Nov 23 09:49 /dev/dm-3
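A small sanity check along the same lines: the helper below (our own, hypothetical) parses one line of `ls -l` output and confirms the oracle:dba ownership that the UDEV rule should have applied.

```shell
#!/bin/sh
# Hypothetical check: given one line of `ls -l /dev/dm*` output, confirm
# the oracle:dba ownership applied by the UDEV rule above.
owned_by_oracle_dba() {
    set -- $1                       # split the ls -l line into fields
    [ "$3" = oracle ] && [ "$4" = dba ]
}

line='brw-rw---- 1 oracle dba 252, 2 Nov 23 10:18 /dev/dm-2'
if owned_by_oracle_dba "$line"; then echo OK; else echo WRONG-OWNER; fi
# prints OK
```

The scheduler setting can be verified similarly by reading /sys/block/dm-*/queue/scheduler, which should show noop selected.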


Oracle Grid Install (ASM)

The procedure for installation of Oracle Automatic Storage Management and the creation of diskgroups
follows the standard process described in the Oracle documentation.

Using Multiple Arrays with Oracle ASM


For configurations using multiple Tegile arrays with Oracle Automatic Storage Management,
additional design considerations exist. Further assistance with this design process can be sought
from the Tegile technical organization.

ASM Redundancy Options


The standard recommendation for ASM redundancy is to use the EXTERNAL option, i.e. where
data will not be mirrored by ASM. However, for multi-array configurations there are situations
where NORMAL or HIGH redundancy may be preferable:
• Oracle Real Application Clusters extended cluster configurations where data must be
mirrored across sites
• Configurations which require the highest level of availability where data will be mirrored
across multiple arrays (thereby adding ASM redundancy on top of Tegile’s RAID feature)
• Diskgroups containing the critical Oracle files listed in the following section Best Practice
for Multi-Array Configurations

Best Practice for Multi-Array Configurations


For any multi-array configuration Tegile recommends that the following critical Oracle files should
be mirrored across more than one array:
• Oracle Database control files
• Oracle Database online redo logs
• Oracle Clusterware voting disks
• Oracle Clusterware cluster registry (OCR)
This can be achieved either through the creation of multiplexed files (for example by placing
online redolog members on multiple arrays) or through the use of ASM mirroring (the creation of
a single ASM diskgroup across multiple arrays using NORMAL or HIGH redundancy).

Oracle DB Install
To achieve optimum performance, there are three elements to consider when configuring the
Oracle database to run on NAND flash storage:
• Database block size (set by parameter db_block_size): Allowable values for this
parameter in Oracle are 2K, 4K, 8K (the default), 16K and 32K. In order to ensure optimal
performance, values of 8K or greater should always be used with Tegile arrays.
• Online redo log block size: by default, this is 512 bytes. Note that this value is what Oracle
discovers when querying the geometry of the LUN, not the LUN block size; the LUN block size for
redo logs was discussed in a previous section.
• Database creation: there are no special procedures required when creating databases on Tegile
arrays.

Deploying Oracle Databases in VMware vSphere virtualization

LUN Creation Guidelines


When creating LUNs on Tegile arrays to be used by ASM in virtual machines running Linux and
Oracle RDBMS, the golden rules are:

• Create multiple LUNs (at least 4 for ASM data)
• Create 2 LUNs for GRID (ASM)
• Create ASM LUNs within a project for snapshot and cloning/backup purposes
• Always select thin-provisioned LUNs
• Select the purpose "Virtual Server" for VMDK/VMFS datastores (do not select Database with a
lower block size)
• Select the intended protocol: FC or iSCSI
• Leave deduplication disabled; do not enable it
• LZ4 compression is enabled by default; leave it enabled
• After bringing the LUN under ESX control as a VMFS datastore, create one thin VDISK per LUN.
For performance reasons, do not span multiple VDISKs over one VMFS datastore created on one
Tegile LUN.


There are situations where the customer wants to use Raw Device Mappings (RDM) rather than
VMDKs; the advantages and disadvantages of each are listed below.

Raw Device Mapping (RDM)
  Advantages: legacy approach with easy P-to-V migration; array snapshots can be used; the
  hypervisor is completely bypassed.
  Disadvantages: a VM using an RDM cannot be live-migrated; storage cannot be migrated using
  Storage vMotion; SIOC (Storage I/O Control) cannot be used.

VMFS datastore (VMDK/VDISK)
  Advantages: array snapshots can be used; hypervisor latency is minimal with proper tuning;
  vMotion, Storage vMotion and SIOC can be used; vSphere Replication using the Tegile SRA.
  Disadvantages: no known disadvantages.

Virtual Machine Creation Guidelines


The following guidelines are highly recommended when deploying Oracle RDBMS under Linux in a
virtual machine on the VMware hypervisor.

Virtual machine hardware version: always use the latest; version 8 or 9 is recommended depending
on the version of ESXi.

PVSCSI (VMware Paravirtual) vs LSI Logic controller: PVSCSI is recommended, as it uses less vCPU
and is more efficient. PVSCSI queue depths are configurable to 256 per device and 1024 per
adapter; refer to VMware Knowledge Base article 1010398.

VDISK layout: add each VDISK used for ASM to a different SCSI ID and controller, so that I/O is
spread over multiple virtual SCSI controllers for better performance. For example:
  Add ASM-DATA1 to scsi(1,0)
  Add ASM-DATA2 to scsi(2,0)
  Add ASM-DATA3 to scsi(3,0)
  Add ASM-DATA4 to scsi(0,1)

LUN UUIDs: configure the VM to present true UUIDs for LUNs as seen by Linux. This is only
required if RDM LUNs are exposed to a virtual machine running Linux and Oracle RDBMS.


Hypervisor tuning
When using VMDKs for ASM, ensure that the FC and iSCSI tunables are set on the hypervisor for
optimal performance. It is highly advisable to use the Tegile vSphere Plugin to set these
tunables. The list below gives the parameters and commands that can be used in lieu of the
vSphere Plugin. These commands vary slightly between vSphere releases; the syntax provided is
for vSphere 5.5.

• HBA queue depth (QLogic, Emulex and Brocade): 256. Requires a reboot. Refer to VMware KB
article 1267.

• Maximum outstanding disk requests for virtual machines: 64. No reboot required. Refer to
VMware KB article 1268.

• Multipath policy: round-robin. No reboot required. Refer to VMware KB article 2069356.

• IOPS tunable: no reboot required. This value should be tuned to find what yields the best
performance for a given workload.

• Maximum queue depth for software iSCSI: 8192. Requires a reboot:
  esxcli system module parameters set -p "iscsivmk_HostQDepth=8192 iscsivmk_LunQDepth=1024" -m iscsi_vmk

• iSCSI jumbo frames: MTU 9000. Requires a reboot. Refer to VMware KB article 1007654.

Knowledge Base articles can be retrieved from
http://kb.vmware.com/selfservice/microsites/microsite.do by using the KB article number.

Linux Guest Configuration

Always install VMware Tools inside the Linux guest.

Guest Operating System Disk Timeout for RDM and VMware Virtual Disks

On a Linux VM, add a UDEV rule with the following entry:

DRIVERS=="sd", SYSFS{TYPE}=="0|7|14", RUN+="/bin/sh -c 'echo 180 > /sys$$DEVPATH/timeout'"

Change PVSCSI queue depth

Add the parameters vmw_pvscsi.cmd_per_lun=254 and vmw_pvscsi.ring_pages=32 to the GRUB
configuration, or create a new boot image after putting them in a new file /etc/modprobe.d/pvscsi.
This change requires a reboot; verify it afterwards using the commands below:
$ cat /sys/module/vmw_pvscsi/parameters/cmd_per_lun
$ cat /sys/module/vmw_pvscsi/parameters/ring_pages
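On RHEL/OEL 7-style systems, the GRUB change above can be made by appending the parameters to GRUB_CMDLINE_LINUX in /etc/default/grub, followed by grub2-mkconfig and a reboot. The snippet below demonstrates the edit on a scratch copy; the existing "rhgb quiet" options are an assumed example.

```shell
#!/bin/sh
# Assumed example: append the PVSCSI module parameters to the kernel
# command line in a copy of /etc/default/grub. On a real host, edit the
# real file and then run: grub2-mkconfig -o /boot/grub2/grub.cfg
grub=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="rhgb quiet"' > "$grub"

sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32"/' "$grub"

cat "$grub"
rm -f "$grub"
```

After rebooting, the two `cat /sys/module/vmw_pvscsi/...` commands above confirm the values took effect.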
UDEV Parameters inside the Guest

Set the I/O scheduler to noop and nr_requests to 128.

### /etc/udev/rules.d/51-tegile.rules ####### (this example is for VMware VDisks only)
# RH 6: set scheduler and queue depth for VMware virtual disks
KERNEL=="sd*[!0-9]|sg*", BUS=="scsi", SYSFS{vendor}=="VMware", SYSFS{model}=="Virtual Disk", RUN+="/bin/sh -c 'echo noop > /sys/$devpath/queue/scheduler && echo 128 > /sys/$devpath/queue/nr_requests'"
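The udev rule above can be staged and sanity-checked before installing it. The sketch below writes the rule to a temporary file and greps for the two tunables; the /tmp path is illustrative, and the final copy to /etc/udev/rules.d requires root.

```shell
# Stage the Tegile udev rule in a temporary location first (illustrative path).
# The quoted 'EOF' keeps $devpath literal for udev to substitute at event time.
cat > /tmp/51-tegile.rules <<'EOF'
# RH 6: set scheduler and queue depth for VMware virtual disks
KERNEL=="sd*[!0-9]|sg*", BUS=="scsi", SYSFS{vendor}=="VMware", SYSFS{model}=="Virtual Disk", RUN+="/bin/sh -c 'echo noop > /sys/$devpath/queue/scheduler && echo 128 > /sys/$devpath/queue/nr_requests'"
EOF

# Sanity check: both tunables should be present in the staged rule
grep -c 'noop' /tmp/51-tegile.rules
grep -c 'nr_requests' /tmp/51-tegile.rules

# Install and reload (requires root):
#   cp /tmp/51-tegile.rules /etc/udev/rules.d/ && udevadm control --reload
```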


Tegile LUN Setup for Oracle Database in a VMware vSphere Environment

Project and LUN Parameters
This section provides an overview of how to set up Tegile LUNs for vSphere with Oracle RDBMS.
Tegile storage configuration is unique in its use of Projects, which allow a user to group LUNs into different buckets offering parameter inheritance, target and initiator grouping, and the ability to snapshot and clone a group of LUNs in a Project. The table below highlights a typical Project configuration for vSphere.

Project Name                      Template        Purpose                               Compress  Dedup  DRAM Cache  SSD Cache  Logbias     Block size
vSphere-Server-Boot-LUNS          Virtual Server  LUNs for booting the ESXi server      LZ4       ON     meta        meta       Throughput  32K
vSphere-Virtual-MC-OS-Datastores  Virtual Server  Datastore for hosting VMs running     LZ4       ON     all         all        Latency     32K
                                                  Oracle on Linux; this datastore
                                                  will have multiple VMs
Oracle-ASM-Datastores             Virtual Server  LUNs used only for ASM Data/Logs;     LZ4       ON     all         all        Latency     32K
                                                  each datastore maps to only one
                                                  VDISK

The figure below shows a typical Project schema and how snapshots and clones work:

1. Project for boot-VMs is on CTLR-B. It could be on a hybrid Pool.
2. Project for OS-VMs is on CTLR-A. It is recommended to be on an all-Flash Pool.
3. Projects for Oracle LUNs are on CTLR-A. They are recommended to be on an all-Flash Pool.
4. VM1 boots from Datastore OS-VMS
5. Project Oracle-LUNS-VM1 has 7 LUNs – 4 DATA, 2 GRID and 1 REDO
6. These LUNS are brought into ESXi control as VMFS datastores.
7. Then they are exported to VM1 as virtual disks.


How to Create Tegile Snapshots for Oracle Database

Tegile Snapshot Creation
Referencing the diagram above, a Tegile snapshot can be taken for the Project Oracle-ASM-VM1. This takes an instant snapshot of all the LUNs in that Project.

A space-optimized snapshot can be triggered from the Project properties in the GUI or via a REST API call to the array. If quiesce is turned on, the snapshot will be synchronously crash-consistent across all LUNs.
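As a sketch of the REST path, a project-level snapshot might be triggered with a call like the one below. The endpoint URL, credentials, and JSON field names are hypothetical placeholders, not the documented Tegile API; consult the REST API guide for your array firmware for the actual resource paths.

```shell
# Hypothetical endpoint and payload -- the /api/v1/... path and field names
# are illustrative placeholders, not the documented Tegile REST API.
curl -k -u admin:password \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"project": "Oracle-ASM-VM1", "snapshotName": "oracle-snap-01", "quiesce": true}' \
  "https://array.example.com/api/v1/projects/Oracle-ASM-VM1/snapshots"
```

Setting the quiesce flag in the request corresponds to the crash-consistency behavior described above.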


How to Create Clones of Oracle Database for Test-Dev from Tegile Snapshots

Tegile Clone Creation
A clone of all the LUNs can be created for test-dev use via the GUI or a REST API call.
Select the snapshot, click “Clone”, and click YES.

Provide a clone name and click “inherit settings”. This makes the clone LUNs available to the same ESXi server.

The clone LUNs can be brought into VM3 as a test-dev environment. These are space-optimized clones, and multiple such test-dev copies can be created. This can also be automated using the REST API.
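Clone creation can be automated the same way. As before, the endpoint and field names below are hypothetical placeholders for whatever the array's REST API actually exposes; verify them against the Tegile REST API documentation.

```shell
# Hypothetical endpoint and payload; "inheritSettings" mirrors the GUI's
# "inherit settings" option described above.
curl -k -u admin:password \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"snapshotName": "oracle-snap-01", "cloneName": "testdev-clone-01", "inheritSettings": true}' \
  "https://array.example.com/api/v1/projects/Oracle-ASM-VM1/clones"
```

Wrapping calls like this in a script is what enables refreshing multiple test-dev copies on a schedule.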


Additional References

Tegile Best Practices and Reference Architectures for vSphere

** Tegile Best Practices for VMware vSphere


** Tegile and Oracle Reference Architecture with Cisco UCS

Additional References for Oracle in VMware Environments

** Oracle Databases on VMware – Best Practices Guide.


** Oracle Databases on VMware – High Availability Guidelines.
** Oracle Databases High Availability on VMware vSphere.
** Oracle Databases on VMware – Workload Characterization.
** Oracle Databases on VMware – RAC Deployment Guide.


Appendix

multipath.conf for Tegile arrays with 2.x firmware or older


defaults {
    polling_interval      5
    path_grouping_policy  multibus
    failback              immediate
    user_friendly_names   yes
    max_fds               8192
}
devices {
    device {
        vendor                "TEGILE"
        product               "ZEBI-FC"
        hardware_handler      "1 alua"
        path_selector         "round-robin 0"
        path_grouping_policy  "group_by_prio"
        no_path_retry         10
        dev_loss_tmo          50
        path_checker          tur
        prio                  alua
        failback              30
        rr_min_io             128
    }
}
multipaths {
    multipath {
        wwid   3600144f0d16d89000000563d35ad000a
        alias  b_data01_8k_125GB
    }
    multipath {
        wwid   3600144f0d16d89000000563d35810008
        alias  b_data02_8k_125GB
    }
}

multipath.conf for Tegile arrays with 3.x firmware or newer


defaults {
    polling_interval      5
    path_grouping_policy  multibus
    failback              immediate
    user_friendly_names   yes
    max_fds               8192
}
devices {
    device {
        vendor                "TEGILE"
        product               "INTELLIFLASH"
        hardware_handler      "1 alua"
        path_selector         "round-robin 0"
        path_grouping_policy  "group_by_prio"
        no_path_retry         10
        dev_loss_tmo          50
        path_checker          tur
        prio                  alua
        failback              30
        rr_min_io             128
    }
}
multipaths {
    multipath {
        wwid   3600144f0d16d89000000563d35ad000a
        alias  b_data01_8k_125GB
    }
    multipath {
        wwid   3600144f0d16d89000000563d35810008
        alias  b_data02_8k_125GB
    }
}
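After installing either variant of /etc/multipath.conf, the configuration can be reloaded and verified with the standard device-mapper-multipath tooling. These commands require root on the Linux host.

```shell
# Reload multipathd so the new configuration takes effect
service multipathd reload    # or: systemctl reload multipathd

# Re-scan and rebuild the multipath maps
multipath -r

# Verify the topology: the aliases defined above (b_data01_8k_125GB,
# b_data02_8k_125GB) should appear with their path groups and priorities
multipath -ll
```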

