
Performance improvements using Concurrent I/O on HP-UX 11i v3 with OnlineJFS 5.0.1 and the HP-UX 11i Logical Volume Manager


Technical white paper

Table of contents

Abstract
Introduction
Purpose
Audience
Terminology
   Balanced Configuration
   I/O subsystem
   Logical Volume Manager
   OnlineJFS (License for Veritas Filesystem Bundle)
   Transactions per Second (TPS)
   Veritas Filesystem (VxFS)
Concurrent I/O
   Use cases of Concurrent I/O
   Enabling Concurrent I/O
VxFS Mount Options
   cio
   delaylog
Experimental configuration
   System configuration
   Storage configuration
   Software configuration
   DB configuration
   Swingbench configuration
Test results
   Result 1: Single instance user scalability
   Result 2: Single instance varying read/write ratio
Observations
   Generalized results
   Test results and production environments
   How to obtain license to use Concurrent I/O on LVM volumes
Conclusions

Abstract
For many customers, HP-UX 11i v3 is the ideal operating system for mission-critical environments. Often an organization's most critical applications require the highest throughput and have the largest footprint of database instances. HP customers have several storage deployment options for these mission-critical databases, and tradeoffs exist that require careful consideration during the planning phase. For example, an Oracle database that runs on a filesystem is easier to manage than one that uses raw devices or volumes, but raw volumes or devices are sometimes chosen because filesystems can introduce some level of performance degradation. Fortunately, the Concurrent I/O option with OnlineJFS 5.0.1 is instrumental in bridging this gap in performance, making the filesystem an even more appealing solution. Near-raw performance has previously been available through filesystems on HP-UX 11i with the HP Storage Management Suites and Oracle Disk Manager (ODM); this is an ideal choice for Oracle RAC deployments where Veritas VxVM volumes are used with OnlineJFS. With the 5.0.1 version of OnlineJFS for HP-UX 11i, HP now supports the use of Concurrent I/O with databases where customers are using the HP-UX 11i Logical Volume Manager (LVM) as the underlying volume manager. This white paper quantifies the relative performance that can be achieved in an Oracle single-instance database environment when using:

• OnlineJFS (which is the full version of VxFS) with Concurrent I/O
• VxFS without Concurrent I/O (VxFS Plain)
• Oracle ASM
• Raw LVM volumes, rather than a filesystem
Benchmarking was run using Swingbench on Oracle 11gR1 patchset version 11.1.0.7. All known bottlenecks in the system were removed, so the only factor that limited transactions per minute (at a high workload) was the performance of the I/O subsystem or the CPU capacity of the Oracle server. The results show that using Concurrent I/O provides a valuable improvement in the performance of filesystem-based environments. For the majority of HP-UX 11i customers, the manageability provided by deploying a database in a filesystem-based environment significantly outweighs the small performance degradation associated with the filesystem. In single-instance environments where LVM is the underlying volume manager and OnlineJFS 5.0.1 is used, HP recommends the Concurrent I/O mount option for applications that perform their own file-level locking.

Introduction
Traditionally, database administrators implemented databases on raw devices or raw volumes to get the best performance from their hardware. Some loss of manageability is incurred in those implementations, but better performance was the key factor in choosing raw devices or raw volumes over filesystems. Today, the majority of UNIX customers use extent-based filesystems because of their ability to significantly reduce administrative effort and planned downtime by providing advanced manageability features, many of which can be used online.

HP-UX 11i v3 offers filesystem and volume management capabilities that provide all the benefits of a filesystem without forcing customers to sacrifice I/O performance compared to raw. In releases earlier than OnlineJFS 5.0.1, HP supported the use of Concurrent I/O and ODM for near-raw filesystem performance only where Veritas Volume Manager (VxVM) is the underlying volume manager. In the 5.0.1 version of OnlineJFS, HP now supports the Concurrent I/O mount option where LVM is the underlying volume manager. For single-instance environments, this represents a new option for HP-UX 11i customers. This configuration creates another deployment option for enterprise applications that must run with exceptional availability while providing a foundation that is reliable, easy to manage, and makes the most efficient use of the hardware platform it runs on. This white paper explains the operational characteristics of Concurrent I/O and presents the results of a performance evaluation of Concurrent I/O with Oracle 11g Database.

Purpose
This white paper presents the results of tests with a single-instance Oracle database in order to help system architects make informed decisions when choosing an I/O subsystem configuration. We focus on a single-instance Oracle database in a variety of configurations, to quantify the performance differences on filesystems and raw LVM volumes and to determine the relative performance improvements that can be achieved using Concurrent I/O. Although the tests are performed on an Oracle database, the results may be appropriately generalized for other databases that do their own file-level locking such as Informix, DB2, MaxDB, and Sybase.

Note: HP recommends that a chosen solution be tested to verify that it meets the expected peak performance needs using the actual target application. Doing this in an environment that is identical to the production environment is the most effective way to estimate system behavior.

Audience
This white paper is for HP customers, presales personnel, and field personnel wanting to understand the possible performance benefits of the Concurrent I/O feature in HP OnlineJFS 5.0.1 (B3929GB). The reader is expected to have knowledge of the following:

• Oracle databases
• Filesystems for HP-UX 11i (VxFS)
• HP-UX 11i Logical Volume Manager (LVM)
• Performance benchmarking

Terminology
Balanced Configuration
An end-to-end system tuned so that there are no bottlenecks to limit performance. For example, the backend storage is configured so that it will not limit performance, and Oracle can take full advantage of available CPU resources.

I/O subsystem
One of the four tested configurations:

• VxFS Plain filesystem
• VxFS filesystem with Concurrent I/O, as provided by HP OnlineJFS 5.0.1
• Raw LVM volumes
• Oracle ASM

Logical Volume Manager


A subsystem used to manage disk storage that provides a level of abstraction over the traditional view of disks and physical partitions. It gives the system administrator more flexibility in allocating storage for applications and users.

OnlineJFS (License for Veritas Filesystem Bundle)


OnlineJFS is the name of the licensed (full) version of the Veritas Filesystem, which includes the online manageability features such as online defragmentation, online filesystem growth and shrink, and filesystem intent log resize.

Transactions per Second (TPS)


A measure of the efficiency of the system in converting CPU cycles into work. As the workload increases, the transaction rate increases, provided the system is not otherwise limited.

Veritas Filesystem (VxFS)


In these experiments, VxFS v5.0.1 is used.

Concurrent I/O
Concurrent I/O allows multiple processes to read from or write to the same file without blocking other read(2) or write(2) calls. POSIX semantics requires read and write calls on a file to be serialized with other read and write calls: a read call returns either the data as it was before a concurrent write or the data as it is after that write completes. With the VX_CONCURRENT advisory set, the read and write operations are not serialized, as in the case of a character device. This advisory is generally used by applications that require high performance for accessing data and do not perform overlapping writes to the same file. It is the responsibility of the application or the running threads to coordinate the write activities to the same file when using Concurrent I/O.

Concurrent I/O requirements

• With Concurrent I/O, the read and write operations are not serialized. It is the responsibility of the application or the running threads to coordinate the write activities and verify that they are written to non-overlapping blocks of the same file.
• To gain maximum throughput, the application must perform non-overlapping writes to the same file. Performance increases if application write offsets are block-aligned and I/O sizes are multiples of the device block size.
• Concurrent I/O bypasses inode locking; hence, the application (or the database used) must have its own inode-locking (serialization) mechanism for multiple writers.
• The starting file offset must be aligned to a 1024-byte boundary. The ending file offset must be aligned to a 1024-byte boundary, or the length must be a multiple of 1024 bytes.
Note: If the Concurrent I/O alignment requirements are not met, the I/O defaults to data-synchronous I/O, which can cause performance degradation.
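Oracle data files normally satisfy these requirements, because Oracle block sizes (2 KB to 32 KB) are multiples of 1024 bytes. A minimal, hedged way to confirm the relevant sizes on a given system is sketched below; the volume name is a placeholder, fstyp(1M) -v simply dumps the VxFS superblock fields (including the filesystem block size), and the sqlplus query shows the database block size.

# fstyp -v /dev/vgora/rlvoradata | grep -i bsize
# echo "show parameter db_block_size" | sqlplus -s "/ as sysdba"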

Use cases of Concurrent I/O


• Applications that require high performance for accessing data and do not perform overlapping writes to the same file.
• Workloads with a high write ratio, where the writes are parallel and at different offsets.
• Databases (such as Oracle) that have their own serialization mechanism for multiple writers.
• Applications that have concurrent reader/writer processes accessing the same file and need to bypass the inode lock.
• Applications that issue large disjoint writes on the same file.

Enabling Concurrent I/O


Concurrent I/O can be enabled in the following ways:

• By using the -o cio mount option. The read(2) and write(2) operations on all of the files in that filesystem will use Concurrent I/O.

Steps (for new filesystems created using OnlineJFS 5.0.1):

# mount -F vxfs -o cio <device_special_file> <mount_point>

Steps (for existing filesystems that were mounted without the Concurrent I/O option or created with older VxFS versions): such filesystems must be unmounted and mounted again with -o cio to enable Concurrent I/O. Please note that the remount option should not be used.

1. Unmount the filesystem:

   # umount <mount_point>

2. Upgrade to VxFS 5.0.1, with OnlineJFS 5.0.1 installed on the system. Please refer to the Veritas 5.0.1 Installation Guide on http://docs.hp.com for detailed upgrade instructions.

3. Mount the filesystem with the -o cio option:

   # mount -F vxfs -o cio,<other_options_as_needed> <device_special_file> <mount_point>
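To make the option persistent across reboots, an equivalent entry can be placed in /etc/fstab. This is a sketch only; the volume, mount point, and companion options below are illustrative assumptions, not the exact configuration used in this paper.

/dev/vgora/lvoradata  /oradata  vxfs  delaylog,cio  0  2

After mounting, mount -v (or the contents of /etc/mnttab) should list cio among the active options for the mount point.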

Concurrent I/O is a licensed feature of VxFS. If -o cio is specified but the feature is not licensed, the mount command prints an error message and terminates the operation without mounting the filesystem.

• By specifying the VX_CONCURRENT advisory flag for the file descriptor in the VX_SETCACHE ioctl command. Only the read(2) and write(2) calls occurring through this file descriptor use Concurrent I/O. The read and write operations occurring through other file descriptors for the same file still follow POSIX semantics.

Disabling Concurrent I/O

The Concurrent I/O option cannot be disabled through a remount. To disable Concurrent I/O, the filesystem must be unmounted and mounted again without the -o cio option.

VxFS Mount Options


When performing the benchmark tests, it was important to consider the mount options used for the Oracle data files and then to keep those mount options consistent across the different measurements. The mount options used in these experiments are as follows:

ioerror=mwdisable
This option (metadata write disable) is the default policy for handling I/O errors. On metadata write errors, the filesystem is disabled; otherwise, it is degraded.

cio
The cio (Concurrent I/O) option specifies the filesystem to be mounted for concurrent readers and writers. Concurrent I/O is a licensed feature of VxFS. If cio is specified, but the feature is not licensed, the mount command prints an error message and terminates the operation without mounting the filesystem.

delaylog
The default logging mode is delaylog. In delaylog mode, the effects of most system calls other than write(2), writev(2), and pwrite(2) are guaranteed to be persistent approximately 15 to 20 seconds after the system call returns to the application. Contrast this with the behavior of most other filesystems in which most system calls are not persistent until approximately 30 seconds or more after the call has returned. Fast filesystem recovery works with this mode.
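Putting these options together, a data-file filesystem in a configuration such as this would typically be mounted as shown below. The device and mount point are placeholders, and because ioerror=mwdisable and delaylog are the defaults, listing them explicitly is optional; they are spelled out here only to make the effective options visible.

# mount -F vxfs -o ioerror=mwdisable,delaylog,cio /dev/vgora/lvoradata /oradata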

Experimental configuration
Many factors can affect the performance of a single-instance Oracle database installation. The first objective was to find a balanced configuration and to establish a setup where the overall throughput of the system was limited only by the CPU or by the I/O subsystem being studied.

Swingbench was used to generate a load that would stress the I/O subsystem. It consists of a load generator, a coordinator, and a cluster overview. The software enables a load to be generated and the transactions/response times to be charted. Swingbench generates loads for the Oracle RDBMS and can be used to demonstrate and test technologies such as Real Application Clusters, online table rebuilds, standby databases, online backup and recovery, and so on. The code that ships with Swingbench includes two benchmarks: OrderEntry and CallingCircle. OrderEntry is based on the oe schema that ships with Oracle9i and later Oracle releases. It has been modified so that Spatial, Intermedia, and the Oracle9i schemas do not need to be installed. It can be run continuously (that is, until you run out of space). It introduces heavy contention on a small number of tables and is designed to stress interconnects and memory. It is installed using the oewizard located in the bin directory. CallingCircle simulates the SQL that is generated for an online Telco application. It requires data files to be generated and copied from the database server to the load generator before each run. Both benchmarks are heavily CPU intensive.

For this benchmark, we used the HP OnlineJFS for Veritas Filesystem 5.0.1 Bundle (part number HP B3929GB). For this experiment, Swingbench's (v2.3.0.422) OrderEntry schema, with transactions representing a typical OLTP environment, was used. It models the classic order entry stress test with a profile similar to the TPC-C benchmark. This version models an online order entry system in which users sign up and log on before purchasing goods. Load profile: static PL/SQL with a small table (INVENTORY) that is heavily updated.

• Select: 50%
• Insert: 30%
• Update: 20%
• Delete: 0%


Data size was 50 GB and indexes were 150 GB, for a total of 200 GB for data plus indexes. We enabled two 11gR1 database features, the flashback area and log archiving, to facilitate faster database recovery after each run. The TPS is a measure of how well the system can convert user CPU cycles into effective throughput. Different workloads with different physical read/write ratios were tested to measure performance.
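As a rough sketch of how these two features are enabled in 11gR1 (the recovery area location and size shown are assumptions, and the database must be mounted but not open when switching to ARCHIVELOG mode and turning flashback on):

# sqlplus -s "/ as sysdba" <<EOF
ALTER SYSTEM SET db_recovery_file_dest_size=100G SCOPE=SPFILE;
ALTER SYSTEM SET db_recovery_file_dest='/flash_area' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;
EOF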

Figure 1: The experimental configuration used for the benchmark

System configuration
Model: BL870c Integrity Blade Server
Processors: 4 Intel Itanium 2 9100 series processors (1.59 GHz, 18 MB), 532 MT/s bus, CPU version A1; 8 logical processors (2 per socket)
RAM: 64 GB

Storage configuration
Internal disks: 2 × 72 GB
External storage: An HP EVA 8100 array with 2 controllers and 6 disk enclosures (2C6D configuration) was connected. The benchmark used 56 disk spindles, each of 146 GB, running at 15k rpm.
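The paper does not detail the LVM layout, but a representative sequence for presenting EVA LUNs to Oracle through LVM on HP-UX 11i v3 is sketched below; the agile device names, group minor number, volume size, and stripe parameters are all illustrative assumptions.

# pvcreate /dev/rdisk/disk10
# pvcreate /dev/rdisk/disk11
# mkdir /dev/vgora
# mknod /dev/vgora/group c 64 0x010000
# vgcreate /dev/vgora /dev/disk/disk10 /dev/disk/disk11
# lvcreate -i 2 -I 64 -L 51200 -n lvoradata /dev/vgora
# newfs -F vxfs -o largefiles /dev/vgora/rlvoradata

The resulting logical volume can then be used raw (/dev/vgora/rlvoradata) or mounted with VxFS, with or without the cio option, which is how the different I/O configurations in this paper differ.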

Software configuration
Operating system: HP-UX 11i v3, September 2009 OEUR
Database: Oracle 11gR1 11.1.0.7, Enterprise Edition, single instance

DB configuration
• ASM disk groups with external redundancy, and raw LVM volumes
• OLTP workload with an SGA size of 32 GB
• The Oracle parameter memory_target was used to set the SGA size
• Three database instances: one using ASM, one using VxFS, and the other using raw LVM
• The flashback database restore feature of Oracle 11gR1 was used to start each run with a consistent set of data
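A minimal sketch of how such a per-run reset is typically done with flashback database; the restore point name is hypothetical and would have been created once, before the first run, with CREATE RESTORE POINT before_benchmark GUARANTEE FLASHBACK DATABASE.

# sqlplus -s "/ as sysdba" <<EOF
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_benchmark;
ALTER DATABASE OPEN RESETLOGS;
EOF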

Swingbench configuration
• Swingbench 2.3
• OLTP workload
• Varying numbers of users
• 200 GB database size
• 30-minute run duration
• sar(1M) was used to determine the I/O load; sar(1M) samples cumulative activity counters (such as CPU, cache, and disk usage) in the operating system at given intervals of time.
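As a concrete illustration, disk and CPU activity for a 30-minute run can be captured by starting sar in the background with 60 samples at 30-second intervals; sar -d reports per-device activity (including %busy) and sar -u reports CPU utilization. The output file locations are arbitrary.

# sar -d 30 60 > /var/tmp/sar_disk.out &
# sar -u 30 60 > /var/tmp/sar_cpu.out &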

Test results
In this section, we compare the performance numbers and throughputs achieved under each of the tested I/O configurations. During the course of all benchmark runs, identical I/O subsystem configurations were set up using the same server hardware and disk array storage configuration for the database server. Also, environment factors such as kernel parameters, Oracle initialization parameters, database sizing, and log file sizes were the same. For ease of comparison, all performance throughputs are presented through a series of graphs. In addition, OnlineJFS and VxFS version 5.0.1 were used, and the server was running HP-UX 11i v3 DC-OE, September 2009 OEUR.

Result 1: Single instance user scalability


In this section, we present the performance numbers achieved under different I/O configurations with varying numbers of users. Figure 2 compares performance for workloads of 25 to 400 users. Table 1 depicts the TPS numbers. For 200 clients and above, the measured TPS leveled out at about 3400. This gradual leveling shows that the system is bound by the number of disks and the number of paths to each disk. Another factor is raw contention for the disks, as many users were trying to update and read the same blocks. sar(1M) output showed 100% utilization of the disks for each test run. With adequate storage and network bandwidth, higher TPS could be achieved.

Figure 2: Comparison of TPS with increasing number of users


Table 1
No. of Users   Raw LVM   CIO    ASM    VxFS Plain
25             756       716    716    700
50             1353      1304   1222   1100
100            2140      2022   2107   1604
200            3149      3109   3034   2314
400            3494      3384   3381   2751

From the above results, we can see that Concurrent I/O gives near-raw performance. As the load increases, ASM and Concurrent I/O perform at almost the same level. On average, the throughput of VxFS Plain was 30%–40% lower than raw LVM volumes, whereas Concurrent I/O was 5%–10% lower than raw LVM volumes. The poor performance of VxFS Plain is largely due to severe lock contention and the serialization of reads and writes on a file. VxFS Plain encounters excessive amounts of blocking due to serialization, whereas Concurrent I/O does not serialize reads and writes, which results in faster throughput and parallel execution of read(2) and write(2) calls from different users. Concurrent I/O and raw LVM volume throughputs show very low variation at higher workloads, whereas VxFS Plain shows high variation, which reflects the effect of severe lock contention.

Result 2: Single instance varying read/write ratio


In this section, we present the performance numbers achieved when the number of users is kept constant and the read/write request ratio is changed. By varying the read/write ratio, we try to simulate different application scenarios. For all the cases mentioned here, the OLTP workload as explained in section Experimental Configuration was used.

Case 1: Typical OLTP load
The results in Figure 3 show performance numbers for a typical OLTP workload. Table 2 depicts the TPS numbers. In this experiment, the number of users was kept constant at 400. This case covers a general read/write ratio. Some example operations for this case are Customer registrations, Order products, Process orders, Browse products, and Browse orders running in parallel.
Bytes read/write ratio: 81/19
I/O requests read/write ratio: 96/4

Figure 3: Comparison of TPS for typical OLTP Load


Table 2
Type of I/O    TPS
Raw LVM        3494
CIO            3384
ASM            3381
VxFS Plain     2751

Case 2: High read ratio
The results in Figure 4 show performance numbers for a typical read-intensive application. Table 3 depicts the TPS numbers. In this experiment, the number of users was kept constant at 200. This case shows the throughput achieved by read-intensive operations such as Browse products and Browse orders.
Bytes read/write ratio: 99/1
I/O requests read/write ratio: 99/1


Figure 4: Comparison of TPS under loads with high read ratio


Table 3
Type of I/O    TPS
Raw LVM        7696
CIO            7614
ASM            7605
VxFS Plain     6878

Case 3: Typical production environment read/write ratio
The results in Figure 5 show performance numbers for a typical application in a production environment. This case depicts a general read/write ratio. Table 4 depicts the TPS numbers. In this experiment, the number of users was kept constant at 200. This case shows the throughput achieved by operations such as Order products, Process orders, and Customer registrations running in parallel.
Bytes read/write ratio: 36/64
I/O requests read/write ratio: 80/20


Figure 5: Comparison of TPS under typical production environment loads


Table 4
Type of I/O    TPS
Raw LVM        2572
CIO            2570
ASM            2560
VxFS Plain     2039

Case 4: High write ratio
The results in Figure 6 show performance numbers for a typical write-intensive application. Table 5 depicts the TPS numbers. In this experiment, the number of users was kept constant at 200. This case shows the throughput achieved by operations such as Customer registrations or insert operations running in parallel.
Bytes read/write ratio: 9/91
I/O requests read/write ratio: 23/67


Figure 6: Comparison of TPS under loads with high write ratio


Table 5
Type of I/O    TPS
Raw LVM        527
CIO            525
ASM            520
VxFS Plain     515

Observations
The experiments did not intend to use specially modified or specially tuned versions of the Oracle database to achieve the upper limit of possible TPS numbers. Hence, off-the-shelf components were used (for example, the standard operating environment and standard versions of the Oracle database). By carefully removing bottlenecks and keeping the test configuration consistent, the goal was to compare the relative performance of each I/O subsystem.

The results show a significant improvement in TPS when Concurrent I/O is used with single-instance Oracle, compared to VxFS Plain (cached I/O). In a filesystem without Concurrent I/O, a small number of data files limits the parallelism that the Oracle processes can achieve in performing useful work. Without Concurrent I/O, using Base-VxFS 5.0.1 mount options, HP-UX 11i write file-locking serializes I/O activity, limiting performance. When Concurrent I/O is enabled, performance improves because multiple processes can read from or write to the same file without blocking other read(2) or write(2) calls.

In Figure 7, we compare the maximum throughputs achieved while running the performance tests and provide an illustration of how the different I/O configurations perform relative to each other. This provides an overall picture of how each I/O configuration performs when compared to raw LVM performance.


Figure 7: Overall comparison of TPS

(Bars show best, average, and least performance for each I/O configuration, expressed as a percentage of raw LVM performance; see Table 6.)

Table 6
Type of I/O    Best performance   Average performance   Least performance
LVM Raw        100                100                   100
CIO            99.9               98.6                  96.8
ASM            99.5               98.4                  96.7
VxFS Plain     89.3               83.2                  72

Generalized results
1. Concurrent I/O performs at up to 99.9% of raw LVM volume performance, with averages in the range of 93%–99%.
2. ASM performs at up to 99.5% of raw LVM volume performance, with averages in the range of 96%–99%.
3. VxFS Plain (cached I/O) performs at up to 89.3% of raw LVM volume performance, with averages in the range of 72%–89%.


Test results and production environments


Note that OnlineJFS 5.0.1 and Base-VxFS 5.0.1 are supported only on HP-UX 11i v3 and later releases of HP-UX 11i. Additionally, HP-UX 11i v3 must be updated to the September 2009 release before installing either Base-VxFS or OnlineJFS 5.0.1. Base-VxFS 4.1 and 5.0 are included in the HP-UX 11i Base Operating Environment (with 5.0.1 now available for download), while OnlineJFS 4.1 and 5.0 (the full version of VxFS) are included in all other v3 Operating Environments (with 5.0.1 now available on independent media). All customers utilizing OnlineJFS who are on support today have the right to use OnlineJFS 5.0.1. All HP-UX 11i v3 customers on support without existing OnlineJFS licenses can either purchase a v3 Virtual Server Environment Operating Environment (VSE-OE), High Availability Operating Environment (HA-OE), or Data Center Operating Environment (DC-OE), or, if using a Base Operating Environment (BOE), purchase OnlineJFS 5.0.1 separately (SKU B3929GB).

In production environments, performance improvements might be gained through an iterative process of measuring, analyzing, and resolving each bottleneck encountered. Actual performance can also depend on the workload pattern and server configuration. A 5%–10% performance difference was measured for Concurrent I/O versus a raw LVM volume configuration. From the test results, we can see that Concurrent I/O is best suited for read-intensive applications and for applications with read/write ratios in the range of 90/10 to 60/40. In a production environment, this difference may be higher or lower depending on other factors. It is likely that other applications will be active, consuming CPU cycles and disk bandwidth, which might also reduce the transaction rate. Nevertheless, the test results provide a clear view of how well the different subsystems behave in a controlled lab environment. They isolate other factors that could mask the comparative performance of individual I/O subsystems and provide insight into the benefits of using Concurrent I/O.

How to obtain license to use Concurrent I/O on LVM volumes


1. All customers who are using OnlineJFS on HP-UX 11i v3 today and are on support have the right to upgrade to OnlineJFS 5.0.1, where Concurrent I/O can be enabled over LVM volumes. As of the September 2010 HP-UX 11i update, the version of the filesystem that installs by default is 5.0, so customers wanting to use version 5.0.1 will need to obtain the independent media. Customers who use Software Update Manager (SUM) receive email or hardcopy notification of the right to use the new version, at which point they should request the codeword and physical media online. Non-SUM customers receive hardcopy notification by default and must fax or email a request for the codeword and physical media.

2. For new licenses of the HP-UX 11i BOE, customers will need to purchase B3929GB or B3929GBE (the electronically downloadable version) to obtain OnlineJFS 5.0.1. The HP-UX 11i VSE-OE, HA-OE, and DC-OE include the right to use OnlineJFS, so a free upgrade to 5.0.1 is provided for customers who are on support. For the complementary high availability and virtualization technologies, we recommend customers utilize the VSE-OE, HA-OE, or DC-OE when making new purchase decisions.
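As a quick sanity check before relying on the cio option, the installed filesystem bundle and Veritas licenses can be inspected as sketched below. B3929GB is the bundle named above; the grep pattern and the vxlicrep utility (part of the Veritas licensing tools) are assumptions and may differ by installation.

# swlist B3929GB
# swlist -l product | grep -i -e vxfs -e onlinejfs
# vxlicrep

The definitive check remains the mount itself: as noted earlier, a mount with -o cio fails with an error message if the Concurrent I/O feature is not licensed.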


Conclusions
As measured in the benchmarks, Concurrent I/O can significantly improve the performance of a filesystem-based Oracle database installation. The key findings of this white paper are as follows:

• Raw LVM volumes showed the highest level of performance.
• A performance degradation of 30%–35% is seen when using a filesystem environment without Concurrent I/O, compared to the performance of raw LVM volumes.
• If Concurrent I/O is used with the filesystem, as provided in OnlineJFS 5.0.1, the performance degradation is on average 5%–10% compared to raw LVM volumes. Thus, mounting the filesystem with Concurrent I/O provides performance very close to that of raw volumes.

As would be expected, raw LVM volumes provide the highest transaction rate. However, where the manageability of a filesystem is a key factor, Concurrent I/O provides a valuable bridge, and the decision in balancing performance versus manageability is much less difficult. For clustered environments such as those utilizing Oracle RAC, HP Storage Management Suite offerings with ODM are the ideal choice for near-raw performance on HP-UX 11i. For single-instance environments where I/O requirements are high, Concurrent I/O provides a new option wherever applications are able to do their own file-level locking.

HP welcomes your input. Please give us comments about this white paper, or suggestions for mass storage or related documentation, through our technical documentation feedback website: http://docs.hp.com/en/feedback.html

To know how you can make informed decisions when choosing an I/O subsystem configuration, visit: http://h71028.www7.hp.com/enterprise/w1/en/os/hpux11i-fsvm-learn-more.html

Copyright 2010–2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Intel Itanium is a trademark of Intel Corporation in the U.S. and other countries. Oracle is a registered trademark of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group. 4AA1-5719ENW, Created May 2010; Updated August 2011, Rev. 3
