
An Oracle White Paper October 2009

RAC System Load Testing Tools Revision 2.1


Contents
Contents

Introduction
RAC Pre-Installation Testing
    I/O Subsystem Testing
    Network Testing
RAC Validation Testing
    Swingbench Overview
    Hammerora Overview
    Cluster Stress Testing
Production Simulation Testing
Conclusion


Introduction
The key to a successful deployment of a Real Application Clusters (RAC) system is a solid implementation plan that includes thorough testing and validation throughout the entire deployment process. Each completed step in the deployment process should have a subsequent test that validates the component before the build moves on to the next step. Integrating testing into the deployment process in this way ensures that each component is functioning properly and meeting its expected levels of performance before the next phase begins, so components that fail validation testing can be corrected without impact to dependent components. This concept applies from hardware and software validation all the way through to final application validation/acceptance testing. For example, the Oracle software stack depends on the proper functionality and acceptable performance of the OS, network, and I/O subsystems, while final application validation/acceptance testing depends on the proper functionality and acceptable performance of the Oracle software stack and database.

The purpose of this paper is to provide guidance on what to test, and on what tools are available to perform that testing, so that testing can be integrated into the overall deployment process. The paper is broken down into three major sections: RAC Pre-Installation Testing, RAC Validation Testing, and Production Simulation Testing.

RAC Pre-Installation Testing


The shared I/O subsystem and the private network can be thought of as the heart and soul of a RAC system: without stability and performance in both of these components, it is simply not possible to sustain life within a cluster. Testing these components for proper configuration before installing the RAC software is therefore a critical success factor, and one that is too often overlooked when building a new cluster. When testing these components it is important to verify not only their functionality but also that they perform within the expectations for each particular component. For example, if the private interconnect runs on Gigabit Ethernet across a single channel, the expectation using the TCP/IP protocol is a transfer rate of about 125MB/s across the private network. If the actual maximum throughput is 60MB/s, the private network is functional but is not performing at the expected level. Even if the RAC technology stack can be successfully installed on top of this private network, the database will likely not perform at the desired levels and cluster stability will likely be compromised. This is an issue to address prior


to installing the RAC technology stack. The following sections cover how to test these key components before the RAC software stack is installed.

I/O Subsystem Testing


The overall objective of testing the I/O subsystem before building out a cluster is to ensure proper configuration and that performance is within expected levels. This allows corrective action to be taken before RAC is installed, and it provides the ability to predict database I/O performance without the complexities of the database, application servers, load testing suites, and so on. Oracle Orion is the preferred method of testing the I/O subsystem for servers that will be running an Oracle database. Orion can simulate database-type workloads using the actual I/O stack that the Oracle database uses, without any Oracle database software having to be installed. Orion measures performance in terms of IOPS, MBPS, and I/O response time in milliseconds on the following types of database workloads:

- Small Random I/O workloads to simulate OLTP types of transactions. This type of workload consists of single-block reads and/or writes of a given block size (default of 8k).
- Large Sequential I/O workloads to simulate DSS types of transactions. This type of workload consists of large sequential read and/or write streams of a given size (default of 1MB).
- Large Random I/O workloads to simulate multi-user sequential I/O. This type of workload consists of large random read and/or write streams of a given size (default of 1MB).
- Mixed Workloads, which combine Small Random I/O with Large Sequential or Large Random I/O workloads.

Each of the above workloads can be run at different levels to increase or decrease the amount of I/O performed. When performing Orion testing it is highly recommended to customize the test to simulate the I/O activity of the database that will run on the new cluster. Oracle Orion is available for download on the Oracle Technology Network (OTN). The results of an Orion test are captured in five files:

- summary.txt - Details the input parameters used to run the test as well as an overall summary: the Maximum Large MBPS, the Maximum Small IOPS, and the Minimum Small latency.
- trace.txt - Raw, unprocessed data for the test.


- mbps.csv - Comma-separated value file containing a two-dimensional table of the MBPS details of the run. At a database level, MBPS correlates to multi-block I/O requests. These operations are typically performed in DSS types of database operations such as full table scans and parallel queries.
- iops.csv - Comma-separated value file containing a two-dimensional table of the IOPS details of the run. At a database level, IOPS generally play a factor in OLTP types of workloads where single-block reads are dominant over multi-block reads.
- lat.csv - Comma-separated value file containing a two-dimensional table of the latency details of the run. As a general recommendation, the read latency on database files should fall within the 10ms range.
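As an illustration, a basic run might look like the following sketch; the device paths are placeholders, and the flags assume the Orion command-line syntax documented on OTN:

    # Build the LUN list: one raw device per line, in a file named
    # <testname>.lun. The devices below are placeholders for the LUNs
    # that will hold database files.
    printf '%s\n' /dev/sdc /dev/sdd /dev/sde /dev/sdf > iotest.lun

    # "normal" mode tests combinations of small (8k) random and large (1MB)
    # random I/O, matching the run shown in Figures 1 through 3 below; the
    # summary, trace, and CSV files are written alongside the .lun file
    ./orion -run normal -testname iotest -num_disks 4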

The CSV formatting of the detailed output files allows the results to be graphed in Microsoft Excel for ease of data analysis. Figures 1 through 3 below show the latency, MBPS, and IOPS output from an Orion test run in Normal mode (Large Random I/O and Small Random I/O) with the expectation of deploying an OLTP database on the cluster.

Figure 1 Orion Latency
[Figure 1: Orion latency charts, RAID 1 DMX-3. Left panel: Random I/O Latency (Read Response Time) Detail - latency in milliseconds for large 1MB I/Os as concurrent large I/Os increase, plotted at 1, 5, 10, and 15 concurrent small I/Os. Right panel: Random I/O Latency (Read Response Time) - latency in milliseconds for small 8KB I/Os at 1-15 outstanding requests.]

The left side of Figure 1 shows that the Random Large I/O latency increases as the number of concurrent large I/Os and small I/Os increases, but the latency never exceeds 8ms. The right side of Figure 1 shows the increase in latency based on the number of concurrent small I/O requests; this latency never exceeds 5.8ms.


Figure 2 Orion MBPS


[Figure 2: Orion MBPS charts, RAID 1 DMX-3. Left panel: Random I/O Transfer Speed Detail - MBPS for large 1MB I/Os as concurrent large I/Os increase, plotted at 0, 5, 10, and 15 concurrent small I/Os. Right panel: Random I/O Transfer Speed - MBPS for small 8KB I/Os at 1-16 outstanding requests.]

The left side of Figure 2 shows that the Random Large I/O MBPS increases linearly as the number of concurrent large I/Os and small I/Os increases. The right side of Figure 2 shows a slight decrease in MBPS as the number of concurrent small I/O requests increases, which is somewhat expected due to the increase in workload.

Figure 3 Orion IOPS
[Figure 3: Orion IOPS charts, RAID 1 DMX-3. Left panel: Random IOPS (I/Os per second) Detail - IOPS for large 1MB I/Os as concurrent large I/Os increase, plotted at 1, 5, 10, and 15 concurrent small I/Os. Right panel: Random IOPS (I/Os per second) - IOPS for small 8KB I/Os at 1-15 outstanding requests.]

The left side of Figure 3 shows a slight decrease in the Random Large I/O IOPS as the number of concurrent large I/Os and small I/Os increases. The right side of Figure 3 shows a linear increase in IOPS as the number of concurrent small I/O requests increases. The statistics gathered by Orion are influenced by a number of factors, such as the disk layout in the storage unit (RAID level, disks per array, storage array cache), the type of connectivity to the storage array (iSCSI over gig-e, 2Gbps fabric, 4Gbps fabric), and the number of paths to the storage unit. Success in testing the I/O subsystem with Orion is measured in the following terms:


1. Are the I/O numbers within the expected range for the given hardware?

2. If the database that will run on the cluster is a DSS type of database, are the MBPS results high enough to yield the desired performance of that database?

3. If the database that will run on the cluster is an OLTP database, are the IOPS results high enough to handle its expected workload?

4. Is the I/O latency within expectations? Well-performing Oracle databases generally have access times for a given Oracle datafile in the range of 10ms.

5. If using multi-path I/O, is the load balanced across available channels, and can a failed channel fail over to a surviving channel without service interruption? (See the sketch at the end of this section.)

6. Were there any I/O related errors reported in any logs within the I/O technology stack (OS, SAN/NAS switch, storage array)?

Given that it is much easier to correct I/O subsystem issues before the storage is in use by Oracle RAC, it is highly recommended to take corrective action on potential issues before continuing the build process.
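For items 5 and 6, a minimal sketch of the checks on Linux follows; it assumes device-mapper multipath and default syslog locations, so adjust for the MPIO stack and logging in use:

    # Verify all paths are active and that I/O is balanced across channels
    multipath -ll

    # While Orion is running, pull a path (cable or switch port), confirm I/O
    # continues on the surviving path, then scan the logs for the failure
    grep -iE 'i/o error|path.*fail' /var/log/messages
    dmesg | grep -iE 'scsi|i/o error'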

Network Testing
A common misconception about the private network (cluster interconnect) used by a RAC system is that it is merely used for a heartbeat. When a RAC database is running on the cluster, it uses the cluster interconnect to maintain cache coherency and to perform cache fusion operations. In pure network terms, entire database blocks (sized by the database block size, default 8k), multiplied by the number of blocks read in a single multi-block read (defined by the database multi-block read count, default 16), can potentially (and are likely to) be transferred across the interconnect at any given time. For example, in a 2-node cluster a user issues a query on instance 1 requiring a full table scan of TABLE_A (which is 64 blocks). The database blocks needed to fulfill this query are NOT in the buffer cache on instance 1, but they ARE in the buffer cache on instance 2. Since it is cheaper from a performance perspective to perform network I/O as opposed to physical I/O, the 64 database blocks are pulled from instance 2's cache into instance 1's cache to fulfill the query request. In this case the 64 blocks are transferred in 4 reads of 128KB each (assuming an 8KB database block size and a multi-block read count of 16). This is a simplistic and somewhat worst-case


scenario, but it does show that any compromise in the performance or reliability of the private interconnect of a RAC system will result in poor performance and instability within the cluster. For the private network to support the traffic generated by a RAC cluster while maintaining cluster stability and performance, all of the components involved in the network must work in harmony, allowing the network to perform error-free at its expected optimum Gigabit Ethernet levels. Ideally this performance and stability check is performed before the RAC software is installed, to minimize the need for, and complexity of, configuration changes once the RAC software is in place. There are many utilities available on the Internet to test network throughput and response time; this document focuses on a utility called Netperf. Netperf is free of charge and can be compiled on virtually any platform. Netperf consists of two pieces: a client-side piece that provides the brains of the testing to be performed, and a server-side piece that simply listens and responds to the requests made by the client piece. The tests possible with Netperf are:

- TCP Stream Performance (this is the default test)
- UDP Stream Performance
- DLPI Connection Oriented Stream Performance
- DLPI Connectionless Stream Performance
- UNIX Domain Stream Socket Performance
- UNIX Domain Datagram Socket Performance
- Fore ATM API Stream Performance
- TCP Request/Response Performance
- UDP Request/Response Performance
- DLPI Connection Oriented Request/Response Performance
- DLPI Connectionless Request/Response Performance
- UNIX Domain Stream Request/Response Performance
- UNIX Domain Datagram Request/Response Performance
- Fore ATM API Stream Request/Response Performance

Details on each of the above tests can be found in the Netperf user's manual, located on the Netperf Training Website. Netperf source is available for download on the netperf.org Website.
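As a sketch of a typical interconnect test session (the address is a placeholder for the remote node's interconnect IP, and the flags assume netperf 2.x):

    # On the remote node, start the Netperf server-side listener
    netserver

    # From the local node, drive the tests across the private interconnect
    netperf -H 10.0.0.2 -l 60 -t TCP_STREAM          # TCP bandwidth
    netperf -H 10.0.0.2 -l 60 -t UDP_STREAM          # UDP bandwidth
    netperf -H 10.0.0.2 -l 60 -t TCP_RR -- -r 1,1    # TCP 1-byte request/response
    netperf -H 10.0.0.2 -l 60 -t UDP_RR -- -r 1,1    # UDP 1-byte request/response

    # For the _RR tests, round-trip latency in microseconds is roughly
    # 1,000,000 / (the reported transaction rate per second)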


Assuming a default configuration for Oracle RAC, with TCP used for Clusterware communication and UDP used for RDBMS traffic, it is recommended to test the bandwidth and the request/response latency of both the TCP and UDP protocols over the private interconnect. Additional testing may include TCP throughput and request/response over the public interface.

Figure 4 Netperf TCP Stream Test

As Figure 4 shows, the average TCP throughput was 114.12 MB per second. On Gigabit Ethernet the maximum possible throughput is about 125 MB per second, minus the overhead of TCP over Ethernet (~5-5.5%) and wire latency (dependent on infrastructure), which means that 114.12 MB per second is in keeping with the expected throughput.

Figure 5 Netperf TCP Request/Response Test

As Figure 5 shows, the TCP transaction rate was 6018.04 per second for 1-byte messages. Dividing 1 second by the transaction rate gives the round-trip latency for each 1-byte message: approximately 166 microseconds. Again this falls in line with the expected latency of Gigabit Ethernet on the given hardware.

Figure 6 Netperf UDP Stream Test

As Figure 6 shows, the average UDP throughput was 115.72 MB per second. On Gigabit Ethernet the maximum possible throughput is about 125 MB per second, minus the overhead of UDP over Ethernet (~4.9-5.3%) and wire latency (dependent on infrastructure), which means that 115.72 MB per second is in keeping with the expected throughput.

Figure 7 Netperf UDP Request/Response Test


As Figure 7 shows, the UDP transaction rate was 6328.72 per second for 1-byte messages. Dividing 1 second by the transaction rate gives the round-trip latency for each 1-byte message: approximately 158 microseconds. Again this falls in line with the expected latency of Gigabit Ethernet on the given hardware.

Gigabit network throughput and latency are influenced by several factors such as shared PCI bus speed, switch latency, length of cabling, and so on. That said, a private interconnect implemented per best practices (dedicated redundant switches) on server-class hardware should achieve ~95% of the advertised bandwidth, with latency in the 150-200 microsecond range for a 1-byte message. Success in network testing is measured as follows:

1. Do the test results (bandwidth and latency) fall within the expected ranges for Gigabit Ethernet?

2. If using a redundant fault-tolerant NIC configuration, can a failed path fail over to the surviving path without service interruption?

3. If using an active/active redundant NIC configuration, is traffic being directed according to the active/active implementation specifications?

4. Are the network interfaces reporting errors or dropped packets during testing? (See the sketch at the end of this section.)

5. Are any errors or issues being reported at the network protocol level (netstat -s)?

6. Were there any network related errors reported in any logs within the network technology stack (OS or switch)?

It is highly recommended to investigate and correct any potential issues before beginning the installation of the RAC software stack. This approach avoids having to reconfigure the RAC software because of underlying network configuration changes, and it significantly reduces the potential for a failed installation.
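For items 4 and 5, the interface and protocol counters can be checked on Linux as in this sketch, where eth1 is a placeholder for the private interconnect NIC:

    # Per-interface error and drop counters (these should not grow during testing)
    ifconfig eth1 | grep -E 'errors|dropped|overruns'

    # Protocol-level counters: watch for IP reassembly failures and fragment drops
    netstat -s | grep -iE 'reassembl|fragment'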

RAC Validation Testing


Once the Oracle RAC software stack has been installed and a database has been built, the next step is to validate the integrity, resiliency, stability and performance of the cluster. RAC validation testing should be considered part of the cluster build


process; the personnel involved will be those involved in the actual build of the cluster (system administrators, storage administrators, network administrators, and DBAs). Validation testing is to be performed BEFORE handing the cluster over to the application team. This keeps the focus solely on the cluster and allows the technical team who built it to identify and resolve potential issues prior to application involvement. The goal of this testing is to minimize the issues encountered during Production Simulation Testing, resulting in a higher degree of confidence in the new cluster among management and the application team. At a high level, the following test objectives are to be included in RAC Validation testing:

NOTE: Details on each of the following objectives are found in the associated document entitled RAC System Test Plan Outline.

- Cluster Stress Testing - Generate a database workload that achieves approximately 70% server resource utilization per cluster node. This allows for identification of system bottlenecks, cluster bottlenecks, cluster stability issues, and so on. The I/O and network related data should be compared to the data collected in the pre-installation testing to ensure the cluster is delivering everything it is capable of. Any identified issues should be resolved before moving to the next test.
- Cluster Destructive Testing - Validate that typical types of system failures are properly handled by the cluster. For example, if redundant network interface cards are in place for the cluster interconnect, unplugging the network cable from the active card should have no impact on the cluster. It is recommended that Cluster Destructive Testing be performed under a light to moderate load.
- RAC Scale Up/Down Procedures - Test the ability to add and remove nodes from the cluster. Aside from the obvious "is it possible" objective, this test provides the added benefit of training the administration team on the procedures necessary to perform this task.
- ASM Testing - Test the ability to perform ASM administration tasks such as adding and removing disks, managing ASM objects (datafiles, etc.), and using ASM tools (asmcmd, rman, dbms_file_transfer). Again, aside from the obvious "is it possible" objective, this test trains the administration team on the procedures necessary to perform this task.
- OCFS2 Testing (Linux only) - Test the ability to perform OCFS2 administration tasks such as creating and mounting OCFS2 file systems. Again, aside from the obvious "is it possible" objective, this test trains the administration team on the procedures necessary to perform this task.

NOTE: Though RAC Validation testing includes all of the above objectives, the Cluster Stress Testing objective is the focus of this paper. The concepts set forth in the demonstrated Cluster Stress Testing are applicable across all testing objectives.

The preferred approach to RAC Validation testing is to perform it without the complexity of the actual application. This means that some tool must be employed to generate load on the database, and this is where tools such as Swingbench and Hammerora come in. Both come with canned applications which can be used to conduct their respective stress tests. The use of these tools in conjunction with their canned applications carries the following advantages:

- Known expected results. For example, with Swingbench, upon a node eviction the Swingbench benchmark application will continue to run, and the transactions on the failed node will receive ORA-1013 and/or TNS-12152 errors.
- Minimal to no involvement of testing teams. Tests can be conducted by DBAs.
- DBAs and system administrators can collect and analyze the statistical data gathered during testing. Potential issues can be investigated and resolved prior to handing the cluster over to a testing team for testing of the actual application that will run on the database.
- Provides the administration team (DBAs, system administrators, etc.) with time to become familiar with the tasks required to maintain, tune, and troubleshoot issues on a RAC system.
- Provides the DBAs and system administrators peace of mind prior to running the actual application against the cluster.

Swingbench Overview

Swingbench is a simple yet flexible Java-based load generation tool designed to load test and benchmark Oracle 9i, 10g, and 11g databases. Swingbench 2.3 comes packaged with four supplied benchmarks:


- Order Entry - An order entry benchmark based on the 10g OE sample schema. This is a TPC-C-like benchmark with an approximately 60/40 read/write ratio. This is the default benchmark.
- Calling Circle - A telco-based self-service application with a read/write ratio of 70/30.
- Stress Test - A simple insert, update, delete, and select test based on the OE schema, with a read/write ratio of 50/50.
- Sales History

In addition to the four supplied benchmarks, Swingbench provides the ability to write custom transactions, through PL/SQL using the supplied PL/SQL stubs or through Java for the more advanced user. The Swingbench application has three user interfaces:

- charbench - A character-based front end to the Swingbench kernel.
- minibench - A scaled-down version of the default graphical application.
- Swingbench - The full-scale GUI version of the application, allowing for graphical manipulation of the configuration and detailed real-time graphing of the generated load.

Each of the above applications can be run in a simple client/server load test or in a more complex clustered implementation of multiple Swingbench applications using the Cluster Coordinator and Cluster Overview portions of the application. Swingbench is extremely lightweight, allowing it to be run from laptops, PCs, or full-blown servers. As mentioned earlier, Swingbench is Java based, which enables it to be run on virtually any platform. Swingbench documentation and downloads are available on the Swingbench Website. Though it is entirely possible to run Swingbench from the database server itself, it is recommended to run the utility from a separate system; this provides more accurate results by eliminating the resource overhead of the utility on the database server(s).
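As an illustration, a character-mode Order Entry run might look like the following sketch; the connect string, credentials, and configuration file path are placeholders, and the flags are an assumption based on the Swingbench 2.3 documentation:

    # 50 concurrent users against the SOE schema for a 1-hour run;
    # //rac-scan/orcl stands in for the cluster's service connect string
    ./charbench -c sample/soeconfig.xml \
                -cs //rac-scan/orcl \
                -u soe -p soe \
                -uc 50 \
                -rt 1:00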
Hammerora Overview

Hammerora is an open source load generation tool written in TCL to perform load testing on Oracle 8i, 9i and 10g databases. Hammerora comes with two supplied benchmarks:


- TPC-C - An OLTP order-entry benchmark, much like the Order Entry benchmark within Swingbench.
- TPC-H - A DSS benchmark.

In addition to the two supplied benchmarks, Hammerora provides the ability to convert a 10046 trace (SQL trace) into a runnable TCL script. This is a very powerful feature that allows for the simulation of custom applications on a given database. Much like Swingbench, the above Hammerora load tests can be run in a standalone configuration or in a clustered configuration to allow for increased workloads. The lightweight nature of the utility enables benchmarking to be run from laptops, PCs, or full-blown servers. Hammerora binary installations are provided for the Linux x86, Linux x86_64, and Windows x86 platforms. The source code is available for download, allowing the utility to be compiled and run on other platforms. Hammerora documentation and downloads are available on the SourceForge Hammerora Website. In order to provide the most accurate results it is recommended to run Hammerora from a system outside of the cluster being tested.
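To feed Hammerora's trace-conversion feature, a 10046 trace is first captured from a representative application session. A minimal sketch, issued from SQL*Plus within the session to be traced (the trace file identifier is arbitrary):

    -- Tag the trace file so it is easy to find in user_dump_dest
    ALTER SESSION SET tracefile_identifier = 'hammerora_capture';
    -- Level 4 includes bind variable values in the trace
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 4';

    -- ... run a representative slice of the application workload here ...

    ALTER SESSION SET EVENTS '10046 trace name context off';

The resulting trace file can then be loaded into Hammerora and converted to a runnable TCL script.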

Cluster Stress Testing


Calibrate the load testing tool of choice to produce a load of ~70% of maximum capacity per RAC node; this can be achieved by adjusting the think time and the number of users in the generated load. Depending on the power of the cluster nodes it may be necessary to use multiple load generators; both Swingbench and Hammerora provide this functionality. Details on how to calibrate the load can be found on each load generator's website. Once the load testing tool has been calibrated to produce the desired stress-test load on the cluster, the load should be allowed to run for a set amount of time; it is generally recommended to perform a few 1-hour runs and 1 or 2 extended multi-hour (overnight) runs. The extended multi-hour runs are very important in that they give the hardware components and software stack enough time for potential issues to manifest.
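While calibrating, per-node utilization can be watched with standard OS tools, as in this sketch (it assumes the Linux sysstat package is installed):

    # Sample CPU utilization every 5 seconds for 1 minute on each node;
    # adjust user count and think time until utilization hovers near 70%
    sar -u 5 12

    # Run-queue depth and load averages over the same window
    sar -q 5 12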


During each iteration of the stress test it is essential that the appropriate diagnostic data and performance metrics are collected and/or monitored. These metrics and diagnostic data include, but are not limited to:

- Database alert logs - Look for errors that relate to the cluster and/or the RDBMS itself; application errors caused by the load generator should be ignored.
- CRS logs - Review the Clusterware alert log and the crsd, cssd, and evmd logs/traces for anomalies such as missed checkins.
- OS logs - Review the OS logs for issues.
- OS statistics - These statistics should be collected and saved for later analysis. OSWatcher (found under MetaLink Note 301137.1) or Oracle IPD/OS (available for Windows and Linux on OTN) can be used to facilitate this.
- Interconnect bandwidth utilization - Ensure saturation is not occurring. This is most easily checked from the switch port statistics, but it can also be observed at the server level (sar -n DEV <freq sec> <iterations> on Linux). Keep in mind that these stats are averaged over the specified interval, so on gig-e, if the average is ~45MBps, saturation may be occurring on occasion. Saturation of the interconnect will likely result in lost blocks at the database level.
- Network interface dropped packets - Dropped packets lead to instability within the cluster as well as performance problems (ifconfig on Linux).
- TCP/IP and UDP statistics - Look for packet reassembly failures and fragment drops (netstat -s on Linux).
- Server CPU utilization - Review user CPU time vs. sys and I/O wait.
- Server I/O throughput - Ensure the read/write times fall within the expected range.
- Database statistics - Keep in mind, the goal is not to tune the load generator application; it is to ensure the cluster is stable and performing optimally under load. The following statistics should be collected using AWR or Statspack:
    - I/O related wait events (e.g. db file scattered read, db file sequential read, log file parallel write). These I/O times should be consistent with the I/O statistics at the server level and fall in line with the Orion test results from the pre-installation testing.
    - Global Cache statistics - Assuming interconnect saturation is not a factor, are these values within the expected range? A general rule of thumb is < 10ms.
    - Global Cache lost blocks - Ensure the number of lost blocks is close to 0.
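A hedged sketch of capturing some of the OS-level metrics above across a one-hour run follows; the interface name and sampling intervals are placeholders, and OSWatcher automates much of this:

    # Snapshot protocol and interface error counters before the run
    netstat -s    > netstat_before.txt
    ifconfig eth1 > ifconfig_before.txt    # eth1 = interconnect NIC (placeholder)

    # Sample NIC throughput and system activity for the duration (30s x 120 samples)
    sar -n DEV 30 120 > sar_net.log &
    vmstat 30 120     > vmstat.log  &

    # ... run the one-hour load here ...

    # Snapshot the counters again and diff against the "before" files
    netstat -s    > netstat_after.txt
    ifconfig eth1 > ifconfig_after.txt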

Upon completion of each stress test, perform a detailed review of the data gathered during the run. The pre-installation testing recorded measurements of the expected I/O performance and expected interconnect network performance; these measurements should be compared against the database statistics to ensure that the cluster database is achieving what the underlying hardware is capable of delivering. For example, if the Orion average read response time for sequential I/O was 10ms, the database statistic db file sequential read should be in the 10ms range.
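A minimal sketch of pulling the corresponding database-level numbers, run from SQL*Plus as a DBA user; the gv$ views and statistic names are as found in 10g/11g, though exact names vary slightly by version:

    -- Average wait (ms) for the key I/O events on each instance; compare
    -- these against the Orion latencies from pre-installation testing
    SELECT inst_id, event,
           ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
    FROM   gv$system_event
    WHERE  event IN ('db file sequential read',
                     'db file scattered read',
                     'log file parallel write')
    ORDER  BY inst_id, event;

    -- Global cache lost blocks should stay near zero
    SELECT inst_id, name, value
    FROM   gv$sysstat
    WHERE  name LIKE 'gc%lost'
    ORDER  BY inst_id, name;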

Any potential issues are to be investigated and resolved before continuing to the next phase of testing.

The cluster should now have proven itself stable under load, and all potential issues discovered during stress testing should have been investigated and resolved. The remaining cluster validation tests should be performed as described in the associated "RAC System Test Plan Outline" document. The RAC System Test Plan Outline provides details on each test to be performed, how to perform it, and its expected results. When performing the destructive tests provided in the outline it is recommended to use one of the load testing tools in this section to provide a light to moderate load on the cluster; this will produce more accurate test results, as failures are not likely to surface on a system without load. At this phase of testing it is essential that corrective action be taken on any test whose results do not match those in the RAC System Test Plan Outline before continuing to Production Simulation Testing. This approach maintains a high degree of confidence in the cluster among management and the application teams by minimizing the likelihood of cluster, OS, and hardware related issues during Production Simulation Testing.

Production Simulation Testing


Production Simulation Testing is usually the last step in the testing process prior to go-live. At this stage the cluster should already have been proven stable, reliable, and resilient to common hardware/OS failures, leaving the remaining focus on how the production application interacts with the RAC database. Production Simulation Testing often carries the most visibility of any step in the deployment process. For this reason it is very important that the cluster has been proven stable, reliable, and performing within its expectations through Cluster Validation Testing. Production Simulation Testing is to be performed with the following objectives:

- End-to-end application performance review and tuning.
- End-to-end application functionality review.
- Application resiliency to cluster related failures, e.g. node evictions, instance evictions, etc.
- Final determination of the production readiness of the application, end to end.


There are several tools available to facilitate end-to-end Production Simulation Testing, the most popular being HP LoadRunner (formerly Mercury LoadRunner). LoadRunner provides the ability to simulate a production load on just about any type of application through the use of virtual users. The virtual users, which can range from the hundreds into the thousands, simulate production activity by logging into the application and performing transactions as a real application user would through the application UI. The response times of the virtual users are recorded and displayed in graphical format within the LoadRunner UI. Diagnostic probes are available to analyze performance and identify bottlenecks within virtually every tier of the application infrastructure (web tier, application server tier, etc.). HP LoadRunner does require licensing; more details can be found on the HP LoadRunner Website.

With the introduction of 11gR1 came a database option called Real Application Testing. Within the Real Application Testing option is an extremely flexible yet powerful load testing feature called Database Replay. Database Replay allows for the capture of database workloads from production databases on versions 9.2.0.8 and higher. These captured workloads can then be replayed on Oracle 11gR1 and higher databases with the exact same characteristics as the captured workload. Database Replay workloads operate purely at the database level, so no application tier components are necessary to provide the capture/replay functionality. With its ability to replay real production workloads as they ran on the production system itself, the Database Replay feature is a key success factor in migrations to RAC, database upgrades, hardware upgrades, and just about any other change that could occur at the database level. More information on Real Application Testing can be found in the Oracle Real Application Testing Data Sheet.
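As a sketch, starting a capture on the source database uses the documented DBMS_WORKLOAD_CAPTURE package; the directory path and capture name below are placeholders:

    -- Placeholder OS path that will hold the capture files
    CREATE DIRECTORY capture_dir AS '/u01/app/oracle/capture';

    -- Capture one hour of production workload (duration is in seconds);
    -- the capture is later preprocessed and replayed on the new cluster
    BEGIN
      DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
        name     => 'peak_workload',
        dir      => 'CAPTURE_DIR',
        duration => 3600);
    END;
    /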

Conclusion
Successful deployment of a Real Application Clusters system requires a solid implementation plan that includes thorough testing and validation throughout the entire deployment process. As the deployment progresses into Production Simulation Testing, it is essential that the cluster has been proven stable, reliable, and performing at the expected levels. RAC related issues encountered during Production Simulation Testing are often highly visible, resulting in a lack of confidence in the system. It is the job of the DBAs and system administrators to mitigate this risk by ensuring that the proper deployment and testing procedures are included in the implementation plan and are followed through to completion before the newly deployed system is handed over for Production Simulation Testing. The tools mentioned in this paper can be used to facilitate the testing performed throughout the deployment process. These tools, with the exception of


HP LoadRunner and Real Application Testing, carry the added benefit of being free of charge. Integrating testing with these (or similar) tools into the overall deployment plan will help ensure a successful deployment of Real Application Clusters.


RAC System Load Testing Tools
October 2009
Author: Bryan Vongray

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright © 2009, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
