
Oracle Best Practices on Compellent Storage Center

Dell | Compellent Technical Best Practices

2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell. Trademarks used in this text: Dell™, the DELL™ logo, and Compellent™ are trademarks of Dell Inc.

Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own. March 2011 Rev. A



Contents
Introduction
    Customer Support
General Syntax
Document Revision
Audience
Overview of Compellent Storage Center
    Data Instant Replay (DIR)
    Data Progression
    Dynamic Capacity (Thin Provisioning)
    Consistency Group
Storage Setup and Configuration
    Disk Drives & RAID Recommendations
    RAID levels and Data Progression for Oracle databases
    Recommended Disk Configuration Layouts
    ASM Disk Group Devices
    Filesystem
Database Setup and Configuration
    Putting it all Together
    Using Compellent Data Progression and Data Instant Replay Features
Oracle RAC Tested Configuration
    Network Setup
    Network Configuration
Conclusion
Appendix 1: Example of ASM installation on Linux
Appendix 2: 11g R2 with Multipath Setting


Introduction
This white paper describes best practices for running Oracle databases (single instance or RAC) on Compellent Storage Center. Oracle performance tuning is beyond the scope of this paper; see the Oracle Database Performance Tuning Guide at www.oracle.com for more in-depth information on tuning your database.

Table 1. Benefits of Oracle on Compellent Storage Center

- Lower total cost of ownership (TCO): Reduces acquisition, administration, and maintenance costs
- Greater manageability: Ease of use, implementation, provisioning, and management
- Simplified RAC implementation: Provides shared storage (raw or filesystems)
- High availability and scalability: Clustering provides higher levels of data availability and the combined processing power of multiple servers for greater throughput and scalability
- Compellent Information Life Cycle (ILM) benefits: Provides tiered storage, dynamic capacity, data progression, thin provisioning, instant replay (snapshots) and more

Customer Support
Compellent provides live support at 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week, 365 days a year. For additional support, email Compellent at support@compellent.com. Compellent responds to emails during normal business hours.


General Syntax
Table 1. Conventions

Item                                               Convention
Menu items, dialog box titles, field names, keys   Bold
Mouse click required                               Click
User input                                         Monospace font
User typing required                               Type:
System response to commands                        Blue
Output omitted for brevity                         <snipped>
Website addresses                                  http://www.dell.com
Email addresses                                    name@dell.com

Document Revision
Table 2. Revision History

Date        Revision   Description
5/26/2011   A          Initial draft

Audience
This paper is intended for database administrators, system administrators, and storage administrators who need to understand how to configure Oracle databases on Dell Compellent Storage Center. Readers should be familiar with Compellent Storage Center and have prior experience with the following:

- Oracle 10g and 11g Real Application Clusters (RAC)
- SAN technologies (general understanding)
- Automatic Storage Management (ASM)


Overview of Compellent Storage Center


Data Instant Replay (DIR)
Data Instant Replay is best compared to "snapshot" technology: the ability to create point-in-time copies where further changes to a volume of data are journaled in a way that allows the volume to be rolled back to its state when the point-in-time copy was taken. These point-in-time copies can be mounted as volumes of data for partial data restores as well as full volume restores. Point-in-time copies have various other uses in an organization, but for the sake of this paper we will focus on backup and recovery. Data Instant Replay is one of many features that Compellent offers. It gives you the ability to take a point-in-time snapshot of your volumes (filesystems), and there is no limit on the number of Instant Replays taken.

Data Progression
Data Progression is another feature that Compellent offers. It allows your least frequently used data to migrate to lower-tier (cheaper SATA) disks, thereby saving space on the higher-tier (more expensive FC/SAS) disks. With the Data Progression feature enabled on the Compellent system, you don't have to worry about disk space taken up by data that has not been accessed. You can change the number of days on a volume that tells Data Progression when to move the data; the default is 12 days. Conversely, if the data has been accessed for a certain number of cycles, that data will migrate back to higher-tier disks for performance. This can also be changed based on your preference.

Dynamic Capacity (Thin Provisioning)


With traditional storage systems, administrators must purchase, allocate, and manage capacity up front, speculating where to place storage resources and creating large, underutilized volumes with long-term growth built in. This practice leaves the majority of disk space allocated yet unused, and only available to specific applications. Compellent Dynamic Capacity eliminates the allocated-but-unused capacity that is an unfortunate by-product of traditional storage allocation methods.

Compellent's thin provisioning, called Dynamic Capacity, delivers the highest storage utilization possible by eliminating allocated but unused capacity. Dynamic Capacity completely separates storage allocation from utilization, enabling users to create any size virtual volume up front yet consume physical capacity only when data is written by the application.

Consistency Group
The Compellent Consistency Group feature allows storage administrators to take a snapshot of an Oracle database atomically. When creating a snapshot of a running Oracle database using storage functionality, you must ensure that all storage volumes (LUNs) that make up your database are snapped atomically because of multiplexed control files and redo log files. Oracle writes to multiplexed control files and redo log files concurrently, so without a consistency group you cannot create a usable snapshot of a running database. Without a consistency group, in order to create a usable snapshot of an online database, all control files must reside in one volume and all redo log files must reside in one volume (the same volume or another); they cannot be spread across volumes, whether a file system or Oracle ASM is used.

The Consistency Group feature gives you the ability to create a usable snapshot of an online database with control files and redo log files spread across volumes, which safeguards against a single point of failure. You can also create a re-startable copy of an Oracle database with a Consistency Group without having to put the database in hot backup mode. This scenario is similar to a power outage on the database server: at restart, Oracle performs crash recovery, rolling forward any changes that did not make it to the data files and rolling back changes that had not committed. However, roll-forward recovery using archive logs to a point in time after the re-startable copy is created is NOT supported.

When creating a volume on Compellent Storage Center, data redundancy is guaranteed at the disk level whether or not the control files and redo log files are multiplexed at different locations. However, if the control files and redo log files are not spread across mount points and the operating system cannot access the mount point that holds them for whatever reason, your Oracle database will stop functioning. But if you spread those files across mount points without the Consistency Group feature, then you cannot create a functional snapshot of your database online.

A Consistency Group replay profile should be created for all respective volumes of a database.


Storage Setup and Configuration


Disk Drives & RAID Recommendations
Table 2. Disk Drives & RAID Recommendations

Description               RAID10 SSD        RAID10 FC/SAS 15K   RAID10 FC/SAS 10K   RAID5 FC/SAS 15K   RAID5 FC/SAS 10K   All RAID FC/SAS/SATA
Data files                OK (W)            Recommended (W)     OK (W)              DP                 DP                 DP
Control files             Recommended (W)   OK (W)              OK (W)              Not Required       Not Required       Not Required
Online Redo Logs          Recommended (W)   OK (W)              OK (W)              Avoid              Avoid              Avoid
Archived Redo Logs        Not Required      Recommended (W)     OK (W)              Not Required       Not Required       DP
Flashback Recovery Area   Not Required      Recommended (W)     OK (W)              Not Required       Not Required       DP
OCR files / Voting Disk   Not Required      Recommended (W)     OK (W)              Avoid              Avoid              Avoid

Abbreviations: W = Writes; DP = Data Progression.

If Fast Track is licensed, it is enabled by default and utilized behind the scenes; no manual configuration is required.

Drives with a higher RPM provide higher overall random-access throughput and shorter response times than drives with a lower RPM. Because of their better performance, SAS or Fibre Channel drives at 15K rpm are always recommended for storing Oracle datafiles and online redo logs. Serial ATA and lower-cost Fibre Channel drives have slower rotational speeds and are therefore recommended for Oracle archived redo logs and the flashback recovery area with Data Progression.


RAID levels and Data Progression for Oracle databases


Solid State Disks (SSD)
Before implementing SSD on Compellent Storage Center for Oracle databases, you must determine whether the database in question warrants the high performance that SSD provides. Since the price per gigabyte of SSD is higher than SAS or Fibre Channel, you need to carefully evaluate your database performance. Compellent recommends using SSD RAID10 for database online redo logs if the database is transactional and I/O constrained. Data warehouse databases should not use SSD for online redo logs, as you will see no performance gain unless the whole database resides on SSD, which may be very costly depending on how large the database is. If using SSD for online redo logs, do not configure Data Progression on this volume, as you will not gain any space back due to the nature of online redo logs.

SAS or Fibre Channel Disks


Compellent highly recommends using RAID10 with 15K rpm disks for datafile volumes, online redo log volumes (if SSD is not available), and archived log volumes for all production databases. For non-production systems, RAID10 with 10K rpm disks is sufficient. Whether you are using 15K rpm or 10K rpm disks, the same rules apply to Data Progression and Data Instant Replay. Refer to Tables 4A and 4B under the Data Progression and Data Instant Replay section for more information on how to configure the settings appropriately.


Recommended Disk Configuration Layouts


Oracle with Cooked File Systems

Storage Profile (all volumes on FC/SAS 15K, Tier 1, writable data RAID 10, replay data RAID 5/5, except as noted):

- VOL1: System, Sysaux, Temp, and Undo tablespaces; frequently accessed data files; control files; online redo logs
- VOL2: Other tablespaces (X, Y, Z); frequently accessed data files; multiplexed control files; multiplexed online redo logs
- VOL3 (SATA, Tier 3, RAID 5/5): Archived redo logs; flashback logs; RMAN backup sets


Oracle with SSD Storage Profile


Storage Profile:

- VOL1 (SSD, Tier 1, writable data RAID 10, replay data RAID 10): Online redo logs; control files
- VOL2 (SSD, Tier 1): Multiplexed online redo logs; multiplexed control files
- VOL3 (FC/SAS 15K, Tier 2, writable data RAID 10, replay data RAID 5/5): System, Sysaux, Temp, and Undo tablespaces; frequently accessed data files
- VOL4 (FC/SAS 15K, Tier 2): Other tablespaces (X, Y, Z); frequently accessed data files
- VOL5 (SATA, Tier 3, RAID 5/5): Archived redo logs; flashback logs; RMAN backup sets


Oracle with Automatic Storage Management (ASM)

Storage Profile:

- ASM Data Disk Group: VOL1 and VOL2 (FC/SAS 15K, Tier 1, writable data RAID 10, replay data RAID 5/5): Frequently accessed data files; control files; online redo logs
- ASM FRA Disk Group: VOL1 and VOL2 (SATA, Tier 3, RAID 5/5): Archived redo logs; flashback logs; multiplexed control files; multiplexed online redo log files; RMAN backup sets

Note: If your Storage Center is on 4.x code, consistency groups are not available when using Data Instant Replay for snapshots. In order to take a snapshot of your database while it is running, you need to set up your Oracle database online redo log files and control files on the same volume; you cannot multiplex your online redo logs and control files across volumes. If your Storage Center is on 5.x code, there is no such restriction. Also note that you need to test to determine the number of LUNs required for your database for an optimal configuration in terms of performance, since every operating system is different.


Table 3. ASM Disk Group Layout

ASM Disk Group   Contents                  RAW Device Support   ASMLib Support
+DATA            Oracle Datafiles          Yes                  Yes
+FLASH           Flashback Recovery Area   Yes                  Yes

ASM Disk Group Devices


Compellent Storage Center supports RAW devices with or without the ASMLib service.

- Oracle recommends that generally no more than two disk groups be maintained and managed per RAC cluster or single ASM instance.
- Select External Redundancy when creating an ASM disk group.
- When creating LUNs for an ASM disk group, make sure the LUNs share the same disk characteristics (e.g., FC 15K rpm or 10K rpm, and 146GB or 300GB). LUNs should not contain mixed-speed drives.
- When creating multiple LUNs for an ASM disk group, make the LUNs the same size to avoid imbalance.
- Create larger LUNs to reduce LUN management overhead and allow for future growth. Since Compellent Storage Center provides thin provisioning, disk space is only consumed when data is written to it.
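The equal-size recommendation above can be verified before adding LUNs to a disk group. A minimal sketch follows; the device paths in the example are hypothetical, and plain files are accepted so the check can be exercised without block devices:

```shell
# Confirm that all LUNs destined for one ASM disk group are the same size,
# since unequal LUNs unbalance ASM striping.
check_equal_sizes() {
    count=$(for d in "$@"; do
                if [ -b "$d" ]; then
                    blockdev --getsize64 "$d"    # real block device
                else
                    stat -c %s "$d"              # regular file (for testing)
                fi
            done | sort -u | wc -l)
    if [ "$count" -eq 1 ]; then
        echo "OK: all LUN sizes match"
    else
        echo "WARNING: LUN sizes differ"
    fi
}

# Example (hypothetical multipath aliases):
# check_equal_sizes /dev/mapper/asm_data1 /dev/mapper/asm_data2
```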

Filesystem
Compellent Storage Center supports raw devices and various filesystems, including:

- JFS & JFS2
- UFS
- VxFS
- EXT2, EXT3 & EXT4
- ReiserFS
- NTFS
- OCFS
- etc.


Database Setup and Configuration


Direct I/O and Async I/O
Oracle recommends using direct I/O and asynchronous I/O. Direct I/O bypasses the filesystem buffer cache, which reduces CPU overhead on your server; it is very beneficial to Oracle's log writer, both in terms of throughput and latency. Async I/O is beneficial for datafile I/O. When mounting filesystems for your Oracle database, make sure you understand how your operating system supports direct I/O: some operating systems require certain mount options to enable direct I/O or async I/O.
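As one example of such mount options, VxFS can request direct I/O at mount time; the option names below are specific to VxFS (other filesystems differ), and the device and mount point are hypothetical:

```
# /etc/fstab excerpt (VxFS; device and mount point are hypothetical)
/dev/vx/dsk/oradg/oravol  /u02/oradata  vxfs  convosync=direct,mincache=direct  0 2
```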

Putting it all Together


When provisioning storage for your Oracle database server, take the following into consideration before deciding how to set up and configure the Compellent Storage Center:

- Operating system type (UNIX, Linux, Windows, etc.)
- File system type (NTFS, VxFS, JFS, Ext3, UFS, etc.)
- Number of Oracle databases per server
- Database type (OLTP, Data Warehouse, Reporting, etc.)
- Database usage (heavy, medium, light)
- Archived log mode or no archived log mode
- Database block size

For a small OLTP workload, you would create one volume dedicated to your data files, one volume dedicated to your online redo logs, and one volume dedicated to your archived redo logs. You also need to create other volumes for your Oracle software and any non-Oracle data. Refer to Table 2 of this document to decide which RAID level you should use for each volume.

When creating volumes for your Oracle server, you do not have to use any software striping at the operating system level; the data in a Compellent volume is automatically striped according to the RAID level you have selected for the volume. However, for a larger workload database you should create multiple LUNs and use an operating system striping mechanism (e.g., LVM, VxVM) to get better performance from multiple disk queues at the OS level.
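OS-level striping along these lines might look like the following with LVM2 on Linux, shown as a root session rather than a runnable script; the multipath device names, volume group name, stripe count, and 64KB stripe size are all hypothetical:

```
# pvcreate /dev/mapper/oravol1 /dev/mapper/oravol2
# vgcreate oradatavg /dev/mapper/oravol1 /dev/mapper/oravol2
# lvcreate -i 2 -I 64 -l 100%FREE -n oradatalv oradatavg
# mkfs -t ext3 /dev/oradatavg/oradatalv
# mount -o noatime /dev/oradatavg/oradatalv /u02/oradata
```

The lvcreate -i option sets the number of stripes (one per physical volume here) and -I sets the stripe size in kilobytes.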

For a more complex database, you might have to create multiple volumes dedicated to your data files, for example if you separate your data tablespaces from your index tablespaces.

Again, when creating these volumes, refer to your operating system manual on how to mount them with direct I/O and async I/O options; this will benefit your database performance. Also remember to set filesystemio_options to SETALL or DIRECTIO in your initialization parameter file.
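Setting that parameter might look like this in SQL*Plus as SYSDBA, assuming the instance uses an spfile; the change takes effect at the next instance restart:

```
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
```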

When creating volumes for your databases, it is recommended that you initially create the volumes (datafiles, archived redo logs, flash recovery area) larger than needed, to allow for future database growth. Since Compellent Storage Center provisions storage dynamically, disk space is not taken up until data is actually written. This way you can create your tablespaces with the AUTOEXTEND parameter and not worry about running out of disk space in that volume.
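A tablespace created along these lines might look like the following; the datafile path and sizes are examples only:

```
CREATE TABLESPACE app_data
  DATAFILE '/u02/oradata/orcl/app_data01.dbf' SIZE 1G
  AUTOEXTEND ON NEXT 256M MAXSIZE UNLIMITED;
```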

Compellent Data Instant Replay is perfect for Oracle backup and recovery; there is no limit on the number of replays (snapshots) taken on the Compellent Storage Center. To perform an online backup (your database must run in archive log mode), put the database in online backup mode, take a replay of your data file, redo log, and archived log volumes, and then end the online backup. You can mount these replays on your backup server and send the data to tape, or simply leave the replays unexpired until you have copied them to tape for offsite storage. If you use export and import frequently, you should create a separate volume dedicated to your export dump files.
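The backup-mode bracketing is sketched below in SQL*Plus; the replay itself is taken on the Storage Center between the BEGIN and END statements:

```
ALTER DATABASE BEGIN BACKUP;
-- take the Data Instant Replay of all database volumes here
ALTER DATABASE END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;
```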

For more information on Oracle backup and recovery, please refer to the document Oracle Backup & Recovery for Compellent Storage Center.

Using Compellent Data Progression and Data Instant Replay Features


In order to utilize Data Progression effectively, you need to use Data Instant Replay whether or not backup and recovery is required. For Data Progression to work effectively, a replay (snapshot) should be taken at least once a week on all volumes. Below are the recommended Data Progression settings for your Oracle database volumes.


There are two different configurations: one with SSD and one without.

In an SSD configuration, SSD becomes Tier 1, 15K and 10K rpm disks become Tier 2, and SATA is Tier 3.

Table 4A. SSD & Data Progression & Data Instant Replay

Tier                      Writeable Data (RAID 10)                       Replay Data (RAID 5/5, RAID 5/9)
Tier 1 (SSD)              Online Redo Logs / Control Files               -
Tier 2 (15K/10K FC/SAS)   Datafiles / Archived Logs / FRA / OCR & VOTE   Data Files
Tier 3 (SATA)             -                                              Archived Logs / FRA

In a non-SSD configuration, 15K rpm disks are Tier 1, 10K rpm disks become Tier 2, and SATA is still Tier 3.

Table 4B. Without SSD & Data Progression & Data Instant Replay

Tier                  Writeable Data (RAID 10)                       Replay Data (RAID 5/5, RAID 5/9)
Tier 1 (15K FC/SAS)   Datafiles / Archived Logs / FRA / OCR & VOTE   Data Files
Tier 2 (10K FC/SAS)   -                                              Data Files
Tier 3 (SATA)         -                                              Archived Logs / FRA

Based on the above two tables, for optimal database performance you should create volumes with the following settings:

For SSD:

- Datafile volumes: refer to the Recommended Disk Configuration Layout.
- Online redo log volumes: refer to the Recommended Disk Configuration Layout.
- Archived redo log volumes: refer to the Recommended Disk Configuration Layout.
- Flash Recovery Area: refer to the Recommended Disk Configuration Layout.
- OCR & VOTE disk: create a new Storage Profile, select only RAID10 in the RAID Levels Used section, select Tier 2 in the Storage Tiers Used section, and apply it to the volume.

For Non-SSD:

- Datafile volumes: refer to the Recommended Disk Configuration Layout.
- Online redo log volumes: refer to the Recommended Disk Configuration Layout.
- Archived redo log volumes: refer to the Recommended Disk Configuration Layout.
- Flash Recovery Area: refer to the Recommended Disk Configuration Layout.
- OCR & VOTE disk: create a new Storage Profile, select only RAID10 in the RAID Levels Used section, select Tier 1 in the Storage Tiers Used section, and apply it to the volume.


Oracle RAC Tested Configuration


One configuration was tested, consisting of a two-node cluster running Oracle RAC 11g Release 2 with ASM on Oracle Enterprise Linux.

Figure: Two-node Oracle RAC configuration. Each node connects over Fibre Channel through a Cisco MDS9124 FC switch to the Compellent Storage Center. The public IP network runs through a Cisco Catalyst 2970 network switch, and the private RAC interconnect runs through a dedicated interconnect switch.

Network Setup
Table 5. Network Setup

VLAN ID or Separate Switches   Description        CRS Setting
1 or switch A                  Client Network     Public
2 or switch B                  RAC Interconnect   Private

Network Configuration
If possible, configure jumbo frames for the private network. Note: when configuring jumbo frames, you need to configure them on all legs of the RAC interconnect network (i.e., servers, switches, etc.). If configuring jumbo frames is not possible, configure the interconnect network with at least a 1Gbps link.
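On Oracle Enterprise Linux, for instance, jumbo frames could be set per interface as sketched below; the interface name and addresses are assumptions, and the switch ports must be configured for the same MTU:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 (private interconnect)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.10.1
NETMASK=255.255.255.0
MTU=9000
ONBOOT=yes
```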


Conclusion
Compellent SAN technology combined with Oracle software makes a very attractive storage platform for Oracle databases. Running Oracle Database (single instance or RAC) on Compellent Storage Center provides the availability, scalability, manageability, and performance your database applications need.


Appendix 1: Example of ASM installation on Linux


Step 1: Make sure the following RPMs are installed.

# rpm -qa | grep asm
oracleasmlib-2.0.4-1.el5
oracleasm-2.6.18-194.0.0.0.3.el5-2.0.5-1.el5
oracleasm-support-2.1.3-1.el5

Step 2: Configure oracleasm.

# cd /etc/init.d
# ./oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [grid]:
Default group to own the driver interface [oinstall]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Step 3: Modify /etc/sysconfig/oracleasm for multipathing


Change ORACLEASM_SCANORDER to "dm" and ORACLEASM_SCANEXCLUDE to "sd", i.e.:

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dm"

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"


Step 4: In Compellent Storage Center, create two new volumes for the ASM diskgroups DATA and FRA and map to server.

Step 5: Scan new LUNs and configure multipathing.

# vi /etc/multipath.conf

multipaths {
        multipath {
                wwid  36000d3100003d0000000000000001f10
                alias asm_data
        }
        multipath {
                wwid  36000d3100003d0000000000000001f11
                alias asm_fra
        }
}

Step 6: Label ASM disks via oracleasm


# cd /etc/init.d
# ./oracleasm createdisk DATA /dev/mapper/asm_data
Marking disk "DATA" as an ASM disk:                        [  OK  ]
# ./oracleasm createdisk FRA /dev/mapper/asm_fra
Marking disk "FRA" as an ASM disk:                         [  OK  ]

Step 7: Proceed with the 10g or 11g Oracle software installation. Please refer to the installation documentation at www.oracle.com/technetwork/indexes/documentation/index.html

Appendix 2: 11g R2 with Multipath Setting


##############################################
##                                          ##
##  Oracle 11gR2 RAC with Multipath Setting ##
##                                          ##
##############################################

RAC NODES - Eldorado & Lynx


#############################
## /etc/modprobe.conf file ##
#############################

- Eldorado
[root@eldorado ~]# cat /etc/modprobe.conf
alias scsi_hostadapter ata_piix
alias scsi_hostadapter1 qla2xxx
alias eth0 bnx2
alias eth1 bnx2
options qla2xxx qlport_down_retry=5

- Lynx
[root@lynx ~]# cat /etc/modprobe.conf
alias scsi_hostadapter ata_piix
alias scsi_hostadapter1 qla2xxx
alias eth0 bnx2
alias eth1 bnx2
options qla2xxx qlport_down_retry=5

############################################################################
## After changing the modprobe.conf file, you need to recreate the initrd ##
############################################################################

- cd /boot
- cp -p initrd-2.6.18-194.0.0.0.3.el5.img initrd-2.6.18-194.0.0.0.3.el5.img.bak
- mkinitrd -f initrd-$(uname -r).img $(uname -r)
- reboot

## Verify
[root@eldorado ~]# cat /sys/module/qla2xxx/parameters/qlport_down_retry
5
[root@lynx ~]# cat /sys/module/qla2xxx/parameters/qlport_down_retry
5

##############################
## /etc/multipath.conf file ##
##############################

- Eldorado
[root@eldorado ~]# cat /etc/multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated

defaults {
        udev_dir                /dev
        polling_interval        10
        selector                "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            /bin/true
        path_checker            readsector0
        rr_min_io               100
        max_fds                 8192
        rr_weight               priorities
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}

blacklist {
        wwid 26353900f02796769
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
}

multipaths {
        multipath {
                wwid  36000d310000069000000000000000df8
                alias grid
        }
        multipath {
                wwid  36000d310000069000000000000000df7
                alias ocr
        }
}

- Lynx
[root@lynx ~]# cat /etc/multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated

defaults {
        udev_dir                /dev
        polling_interval        10
        selector                "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            /bin/true
        path_checker            readsector0
        rr_min_io               100
        max_fds                 8192
        rr_weight               priorities
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}

blacklist {
        wwid 26353900f02796769
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
}

multipaths {
        multipath {
                wwid  36000d310000069000000000000000df9
                alias grid
        }
        multipath {
                wwid  36000d310000069000000000000000df7
                alias ocr
        }
}

For more information, please refer to the Dell Compellent Linux Best Practices document.

