
Best Practices Guide

QLogic FC HBA in an EMC Environment

QLogic Press
Version 1.0, July 2007

Abstract
This guide discusses the best practices for planning, deploying, maintaining, and performance tuning a QLogic 4Gb Fibre Channel HBA in a SAN connected to EMC storage arrays.

Copyright 2003-2007 QLogic Corporation. All rights reserved. THE INFORMATION PROVIDED IN THIS DOCUMENT IS PROVIDED AS IS WITHOUT WARRANTY OF ANY KIND, INCLUDING ANY WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, INTEROPERABILITY, OR COMPATIBILITY. All of the Partners' products are warranted in accordance with the agreements under which the warranty for the products are provided. Unless otherwise specified, the product manufacturer, supplier, or publisher of non-Partner products provides warranty, service, and support directly to you. THE PARTNERS MAKE NO REPRESENTATIONS OR WARRANTIES REGARDING THE PARTNERS PRODUCTS OR NON-PARTNER PRODUCTS, AND NO WARRANTY IS PROVIDED FOR EITHER THE FUNCTIONALITY OR PROBLEM RESOLUTION OF ANY PRODUCTS. The inclusion of the Partners' products on an interoperability list is not a guarantee that they will work with the other designated storage products. In addition, not all software and hardware combinations created from compatible components will necessarily function properly together. The following list includes products developed or distributed by companies other than the Partners. The Partners do not provide service or support for the non-Partner products listed, but do not prohibit these products from being used together with Partner storage products. During problem debug and resolution, the Partners may require that hardware or software additions be removed from products to provide problem determination and resolution on the supplied hardware/software. For support issues regarding non-Partner products, please contact the manufacturer of the product directly. This information could include technical inaccuracies or typographical errors. The Partners do not assume any liability for damages caused by such errors as this information is provided AS IS for convenience only; readers use this information at their own risk and should confirm any information contained herein with the associated vendor. 
Changes are periodically made to the content of this document. These changes will be incorporated in new editions of the document. The Partners may make improvements and/or changes in the product(s) and/or the program(s) described in this document at any time without notice. Any references in this information to non-Partner websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this interoperability guide, and the use of those websites is at your own risk. Information concerning non-Partner products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. The Partners have not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to those products. Questions about the capabilities of non-Partner products should be addressed to the suppliers of those products. All statements regarding the Partners' future direction or intent are subject to change or withdrawal without notice and represent goals and objectives only. This information is for planning purposes only; any use of the information contained herein is at the user's sole risk. The information herein is subject to change before the products described become available. QLogic, the QLogic logo, QLogic Press, the QLogic Press logo, Powered by QLogic, QLA, SANblade, SAN Pro, SANsurfer, SANsurfer Management Suite, and SANtrack are trademarks or registered trademarks of QLogic Corporation in the United States, other countries, or both. EMC and PowerPath are trademarks or registered trademarks of EMC Corporation in the United States, other countries, or both. Microsoft and Windows are trademarks or registered trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of others. 
The QLogic home page on the Internet can be found at http://www.qlogic.com. QLogic Corporation 26650 Aliso Viejo Parkway Aliso Viejo, California 92656 Tel: 949.389.6000 Fax: 949.389.6114

Table of Contents
Introduction ............................................................................................................... 9
    Audience .............................................................................................................. 9
    What are Host Bus Adapter Best Practices? ....................................................... 9
    Supported HBAs ................................................................................................ 10
    How this Guide is Organized ............................................................................. 10
Statement of Services and Support ...................................................................... 11
SAN Planning and Deployment Best Practices .................................................... 13
    Using a 4Gb HBA in a 2Gb FC SAN .................................................................. 13
    High Availability Best Practices ......................................................................... 14
        Choosing One or More Single/Dual/Quad Port HBAs ................................... 14
        High Availability Features of the QLogic HBA ............................................... 14
        Connecting the SAN for High Availability ...................................................... 14
        Validating Failover Configurations ................................................................ 15
    Choosing the Right Port on the Switch for Host and Storage Ports .................. 16
    Considerations for Tape Access via an HBA ..................................................... 16
    Check Interoperability of SAN Components ...................................................... 16

HBA Installation Best Practices ............................................................................ 19
    HBA Handling Best Practices ............................................................................ 19
    Verifying the HBA Installation ............................................................................ 19
        Use Fast!UTIL to Verify HBA BIOS and Connectivity ................................... 20
        Verify that the Host Can See the HBA .......................................................... 20
        Verify that the Host Can See the LUNs ........................................................ 21
    Latest HBA Driver Versions ............................................................................... 21
    Understand the LED Scheme for QLogic HBAs ................................................ 22

HBA Configuration Best Practices........................................................................ 23

HBA Performance Tuning Best Practices ............................................................ 25
    Introduction ........................................................................................................ 25
    Understanding Application Workloads .............................................................. 25
        Transaction Based Processes (IOPs) ........................................................... 26
        Throughput Based Processes (MBps) .......................................................... 26
        Differences Between Transaction Based and Throughput Based Workloads ... 27
        Characterization of Common Application Workloads ................................... 27
    HBA Parameters that Impact HBA Performance ............................................... 28
    HBA Performance Tuning in Microsoft Windows ............................................... 28
        Execution Throttle ......................................................................................... 29
    HBA Performance Tuning in VMware ESX Server ............................................ 31
        HBA Queue Depth ......................................................................................... 31
        Maximum Scatter/Gather List ........................................................................ 38
        Maximum Sectors .......................................................................................... 40
    Monitoring Performance ..................................................................................... 40
        Gathering Host Server Data .......................................................................... 41
        Gathering Fabric Network Data ..................................................................... 42
    Choosing the Optimal PCI Slot for Your HBA .................................................... 43
    General SAN Considerations for Performance .................................................. 46
        Fencing High Performance Applications ....................................................... 46
        Minimize ISL .................................................................................................. 46
        Upgrade to 4Gb ............................................................................................. 46
        Set Data Rate to Auto-negotiate ................................................................... 46
        Fan-In Considerations ................................................................................... 46
        Avoiding RSCNs ............................................................................................ 46
        Linear Scaling of HBAs ................................................................................. 47
    I/O Load Balancing ............................................................................................ 47

Fibre Channel Security Best Practices ................................................................ 49
    Setting a Password for QLogic SANsurfer FC HBA Manager ........................... 49
    Generic Fibre Channel SAN Security ................................................................ 50

Appendix A QLogic HBA Tasks and Tools Sheet ................................................. 51
Appendix B QLogic HBA LED Scheme ................................................................ 55
References ............................................................................................................. 57
QLogic Press Review ............................................................................................. 59
QLogic Company Information ............................................................................... 61


Introduction
QLogic 4Gb Fibre Channel (FC) host bus adapters (HBAs) are designed and developed to provide industry-leading performance, simplify management, and offer fully interoperable solutions in the most demanding enterprise storage area network (SAN) environments. This guide, QLogic FC HBA in an EMC Environment, presents a compilation of best practices to apply when using QLogic FC HBAs in an EMC SAN.

Audience
This guide is intended for SAN architects, IT administrators, and storage system professionals who currently use, or are considering using, a QLogic HBA in a Fibre Channel SAN.

What are Host Bus Adapter Best Practices?


Best practices are process-oriented steps that are planned and prioritized to facilitate a certain activity. Using best practices helps ensure the highest quality of service (QoS) and provides guidelines on how to best utilize your HBA in a SAN.

Please note that these practices are recommendations, not requirements. Not following them does not affect whether your solution is supported by QLogic or EMC, and not all recommendations apply to every scenario. QLogic believes that its customers will benefit from reviewing these recommendations before making any implementation decision. As with virtually any best practice recommendation, implementation inevitably involves some tradeoffs; the recommendations in this guide include discussions of those potential tradeoffs.

Every organization's requirements for a Fibre Channel SAN are different. The requirements are frequently driven by factors such as the type of business in which the organization engages, the type of data stored in the SAN, the organization's customer base, and so on. All these factors must be carefully evaluated in conjunction with these best practices to develop an appropriate plan for deploying and using a SAN, including specific considerations as to how the data is stored and managed in a Fibre Channel SAN. This set of best practices enables administrators to quickly and easily enhance the value of existing storage resources with little or no investment.


Supported HBAs
This guide supports the following QLogic 4Gb FC HBAs used in EMC SAN environments. These QLogic SANblade HBAs are collectively referred to as QLogic HBAs or adapters throughout this guide.
QLogic SANblade HBA   FC Rate   Host Bus      Number of Ports
QLA2460-E             4Gb       PCI-X 2.0     1
QLA2462-E             4Gb       PCI-X 2.0     2
QLE2460-E             4Gb       PCI Express   1
QLE2462-E             4Gb       PCI Express   2

How this Guide is Organized


The contents of this guide are described in the following paragraphs.

SAN Planning and Deployment Best Practices (page 13). This section discusses what to consider before implementing a SAN in your organization, including helping you choose the right equipment and making the right interconnections.

HBA Installation Best Practices (page 19). This section discusses best practices that you can implement to verify HBA installation, verify that there are no conflicts, choose the right driver, etc.

HBA Configuration Best Practices (page 23). This section discusses EMC-recommended HBA parameter configurations for optimal operation in an EMC environment.

HBA Performance Tuning Best Practices (page 25). This section discusses various HBA driver and firmware parameters that you can modify to enhance the performance of the HBA. This section also discusses how server and slot types can affect HBA performance.

Fibre Channel Security Best Practices (page 49). This section discusses how to implement a secure SAN using the available features of the QLogic HBA.

Appendix A QLogic HBA Tasks and Tools Sheet (page 51). This appendix lists the various common tasks a SAN administrator performs on an HBA and the tools that can help complete the tasks.

Appendix B QLogic HBA LED Scheme (page 55). This appendix defines the blink patterns of the various LEDs on the back of a QLogic HBA.


Statement of Services and Support


The QLogic SAN Pro Service and Support Program allows customers to choose from a variety of service plans designed to meet specific business requirements. SAN Pro covers a diverse range of services, including remote technical assistance, on-site support, and advanced hardware replacement. Best of all, no matter which plan is selected, customer satisfaction is guaranteed by QLogic and its network of authorized service partners.

QLogic switch products allow a wide range of organizations to exploit the power of a SAN. Whether it's a fast-growing small firm implementing a network with 10-20 devices or a Fortune 100 corporation creating a large infrastructure with thousands of devices, the QLogic SANtrack Service and Support Program effectively addresses either set of business requirements.

The SAN Pro Service and Support Program offers a diverse range of services, including SAN Pro Prime, SAN Pro Select, and SAN Pro Exchange. Customers may choose among the services that best meet the demands of their business. Most importantly, customers are assured complete satisfaction since QLogic and its qualified partners fully guarantee all products and services.

For additional information on support, please see the QLogic website at http://support.qlogic.com/support/service_programs.asp. For warranty information, please visit QLogic at http://support.qlogic.com/support/warranty.asp.


SAN Planning and Deployment Best Practices


Using a 4Gb HBA in a 2Gb FC SAN
As a best practice, use a 4Gb HBA even if the EMC SAN has 2Gb components (switch or storage), for the following reasons:

- QLogic 4Gb FC HBA technology is backward compatible with 2Gb and 1Gb infrastructures.
- QLogic 4Gb FC HBAs are offered at very competitive pricing compared to 2Gb FC HBAs.
- User needs and data volumes continue to grow. Consequently, choosing and deploying a 4Gb FC HBA enables better investment protection to meet future needs.
- QLogic 4Gb FC HBAs offer significantly higher performance, enhanced product and data reliability, and superior scalability (refer to the white paper Benefits of 4Gb HBA in 2Gb SANs, posted on http://www.qlogic.com/knowledgecenter/briefs_papers.asp).

A summary of the benefits of 4Gb FC HBAs versus 2Gb FC HBAs is described in the table below.
Feature                               2Gb       4Gb       QLogic 4Gb Benefits

Higher Performance
  Maximum IOPs/port                   40,000    150,000   Higher performance
  Intelligent Interleaved DMA         No        Yes       Enhanced Return on Investment (ROI) in mixed speed SANs
  Dual Read DMA                       No        Yes       Superior performance through faster parallel processing
  Out of Order Frame Reassembly       No        Yes       Reduces congestion and improves network performance

Superior Scalability
  Virtualization (N_Port Virtual ID)  No        Yes       Higher availability and security
  VSAN                                No        Yes       Improved QoS and lower TCO

Enhanced Reliability
  T10 CRC Support                     No        Yes       Provides end-to-end data integrity
  Overlapping Protection Domains      No        Yes       Higher data reliability


High Availability Best Practices


This section describes the various High Availability features of the QLogic HBAs and explains how to choose between single-, dual-, and quad-port HBAs. In addition, the section provides best practices on how to connect the HBA port to a switch to attain High Availability and avoid a single point of failure (of an HBA).

Choosing One or More Single/Dual/Quad Port HBAs


Use the following guidelines when deciding how many ports per HBA you need for a server:

- To achieve High Availability, there must be more than one physical HBA in a server. Two single-port HBAs provide better reliability and performance than one dual-port HBA.
- The second-best option to multiple HBAs is a single HBA with multiple ports. Use this option when server PCI slot space can accommodate only one HBA.
- When using a SAN for backup, it is best to have a dedicated third HBA. The third HBA allows the storage HBAs to perform their tasks while providing dedicated bandwidth to the backup application.

High Availability Features of the QLogic HBA


The QLogic HBAs provide several High Availability features. To best utilize these features, use EMC PowerPath software. EMC PowerPath is a server-resident solution that works closely with the QLogic HBA to enhance performance and application availability. It combines multiple-path I/O capabilities, automatic load balancing, and path failover functions into one integrated package.

Connecting the SAN for High Availability


It is recommended that all SANs be set up so that a single point of failure (failure of any single component of the SAN) does not disrupt application I/O. Redundant HBAs, switches, storage processors, and fabrics should be used with EMC PowerPath to achieve a highly available SAN. The EMC PowerPath documentation summarizes the various connectivity options and their benefits; choose the option you want to deploy in your enterprise.


Validating Failover Configurations


Once the host environment has been successfully configured for failover, including setting up redundant paths and path management software, QLogic recommends that the host be tested to confirm that failover and failback work correctly. This testing should be done in the deployment stage. The following scenarios should be individually tested while there is active I/O at the host. In each case, verify that the LUNs fail over to the alternate path; then, following a successful failover, revive/reconnect the failed component and verify that failback works as expected.

- Disconnect the fiber cable connecting the HBA to the switch.
- Disconnect the fiber cable connecting the switch to the storage system.
- Disable or physically remove the HBA from the host.
- Disable one of the storage controllers.


Choosing the Right Port on the Switch for Host and Storage Ports
In addition to validating the failover configuration, if the switch being used contains multiple ASICs, QLogic recommends connecting each host/storage pair to the same ASIC. This prevents use of the shared internal data transfer bus and reduces switch latency. Fault tolerance must be considered alongside performance: for example, if a host has two HBAs, each accessing its own storage port, do not attach both HBAs, both storage ports, or all of the HBA and storage ports to the same switch ASIC.
[Figure: two example topologies showing hosts with HBAs connected through FC switch ASICs to the array storage processors SPA and SPB.]

Considerations for Tape Access via an HBA


QLogic recommends that a separate HBA and switch zone be used in the fabric for tape access, for the following reasons:

- Separate zoning keeps streaming tape traffic segregated from standard disk traffic in the SAN.
- Zoning out the tape HBA and the tape port on the switch decreases the impact of tape reset/rewind issues on disk traffic.

Check Interoperability of SAN Components


Today's IT environments incorporate products from many different vendors and sources, with the possibility of millions of combinations. To take advantage of this breadth of products and to use the best product for the task at hand (whether servers, storage, operating systems, or other components), it is critical to know:

- Which products have been rigorously tested with all of the others and are known to work properly.
- What configuration information and best practices are available to ensure that these products actually work together.
- Where you can go for support of a mixed-vendor installation.


EMC recommends using the E-Lab Interoperability Navigator to obtain this information. Follow the links from https://elabnavigator.emc.com/. QLogic also provides a comprehensive SANtrack interoperability guide. This guide is available in electronic or print form from http://www.qlogic.com/interopguide/.


HBA Installation Best Practices


This section provides a list of best practices that QLogic recommends to ensure that your QLogic HBA is installed correctly, the right drivers are loaded, and each of these steps is completed without error. EMC recommends using the Customer Procedure Generator software to get step-by-step information about setting up a supported configuration. QLogic provides EMC Installation and Configuration Guides for various operating systems at http://support.qlogic.com/support/oem_emc.asp. QLogic recommends using these manuals to complement the step-by-step instructions generated by the EMC Customer Procedure Generator.

HBA Handling Best Practices


To minimize the possibility of ESD-related damage, EMC strongly recommends using both a workstation antistatic mat and an ESD wrist strap. Observe the following precautions to avoid ESD-related problems:

- Leave the HBA in its antistatic bag until you are ready to install it in the system.
- Always use a properly fitted and grounded wrist strap or other suitable ESD protection when handling the HBA, and observe proper ESD grounding techniques.
- Hold the HBA by the edge of the PCB or mounting bracket, not the connectors.
- Place the HBA on a properly grounded antistatic work surface pad when the HBA is out of its protective antistatic bag.

Verifying the HBA Installation


QLogic recommends that every HBA installation be verified to guarantee that the HBA and drivers have been installed successfully and are working properly. After you have installed the QLogic HBA as per EMC guidelines and connected the HBA port to a storage array, follow these instructions to determine if your QLogic FC HBA and its driver have been correctly installed.


Use Fast!UTIL to Verify HBA BIOS and Connectivity


While the host server is booting, the QLogic BIOS banner displays. At this time, press CTRL+Q to enter Fast!UTIL. Use the Fast!UTIL menu options to scan for Fibre Channel devices and view serial number and WWPN information. Verify that the HBA can see the EMC storage array LUNs. Detailed instructions on how to use Fast!UTIL can be found at http://www.qlogic.com/ in the EMC section.

Verify that the Host Can See the HBA


Verify that the host can see the QLogic HBA. Do one of the following, depending on your operating system (OS):

- In Windows, check that the QLogic HBA is visible in the Windows Device Manager under the SCSI and RAID Controllers section.
- In Linux, run /sbin/lspci or /usr/bin/lsdev to verify that the QLogic HBA is among the list of devices.
- In Solaris, use the configuration administration command cfgadm to check the status of the fc-fabric occupant.
- In VMware, log into the Management User Interface (Virtual Infrastructure Client or VirtualCenter), navigate to the Configuration tab for the host, and then proceed to Storage Adapters. Verify that the QLogic HBA appears in the list of adapters.
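As an illustrative sketch of the Linux check, the snippet below greps a sample lspci line for the QLogic vendor string. The device string shown is a typical example of what an ISP2432-based QLE246x adapter reports; the PCI address and revision are hypothetical. On a real host, run /sbin/lspci directly instead of using a sample string.

```shell
# Sample lspci output line (hypothetical PCI address); on a real Linux
# host, run /sbin/lspci and look for a "Fibre Channel" entry from QLogic.
sample_lspci="0a:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)"

if echo "$sample_lspci" | grep -iq "qlogic"; then
    echo "QLogic HBA detected"
fi
```

If the grep finds no match on a real host, the HBA is not visible on the PCI bus; reseat the card or check the slot before investigating drivers.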


Verify that the Host Can See the LUNs


Verify that the host can see all the LUNs that were presented to the QLogic HBA ports (the ones that were visible at the HBA BIOS level in Use Fast!UTIL to Verify HBA BIOS and Connectivity on page 20). Do one of the following, depending on your OS:

- In Windows, either navigate to Computer Management and then Disk Management and verify that the LUNs can be seen as disks, or use the QLogic SANsurfer Management application to view the LUNs.
- In Linux, execute the cat /proc/scsi/scsi command to see a list of LUNs presented to the Linux host via the QLogic HBA. The information displayed by this command is not dynamic and does not reflect state changes caused by fabric changes. Use the QLogic Linux tool ql-scan-lun.sh to rescan the Fibre Channel fabric for new LUNs; no reboot is required.
- In Solaris, use the format command to see a list of LUNs.
- In VMware ESX, log into the Management User Interface (Virtual Infrastructure Client or VirtualCenter) and navigate to the Configuration tab for the host. From there, go to Storage Adapters and verify that the LUNs can be seen. Try a rescan if you think new LUNs may have been added since the host was booted.
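The Linux check above can be sketched as follows. The sample text mimics one entry from /proc/scsi/scsi for a LUN presented by an EMC array (DGC is the vendor string EMC CLARiiON arrays report; the host/ID numbers are hypothetical). On a real host, run cat /proc/scsi/scsi directly and count the entries.

```shell
# Illustrative parse of /proc/scsi/scsi-style output; on a real Linux
# host run: cat /proc/scsi/scsi
sample_listing='Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: DGC      Model: RAID 5           Rev: 0324
  Type:   Direct-Access                    ANSI SCSI revision: 04'

# Each presented LUN produces one "Host:" line in the listing.
lun_count=$(echo "$sample_listing" | grep -c '^Host:')
echo "LUNs visible: $lun_count"
```

Comparing this count against the number of LUNs masked to the HBA's WWPN on the array is a quick way to spot zoning or LUN-masking mistakes.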

Latest HBA Driver Versions


Ensure that the latest HBA driver is installed on all the HBAs in your environment. The latest driver versions can be downloaded from http://support.qlogic.com/support/oem_emc.asp. QLogic drivers follow the unified driver model, where the firmware is bundled along with the driver package; this prevents any possible mismatch between the driver and firmware versions, which in turn reduces the number of software components a SAN administrator has to manage.
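On Linux, one way to confirm which driver version is currently loaded is to query the module metadata; qla2xxx is the standard module name for the QLogic FC driver family, but the version string below is a sample only, not a recommended release. On a real host, run `modinfo qla2xxx | grep '^version'` and compare against the EMC-approved version.

```shell
# Hypothetical modinfo output line; the version string is a sample only.
# On a real host run: modinfo qla2xxx | grep '^version'
sample_modinfo="version:        8.01.07-k1"

# Extract just the version field for comparison against the approved list.
echo "$sample_modinfo" | awk '{ print $2 }'
```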


Understand the LED Scheme for QLogic HBAs


See Appendix B QLogic HBA LED Scheme (page 55) to identify the HBA LEDs and learn how the LED colors and blink patterns indicate HBA status and connected components. Use this information after you have successfully installed the QLogic HBA, and as a starting point in all of your SAN link troubleshooting exercises.


HBA Configuration Best Practices


EMC recommends that you use the EMC E-Lab Interoperability Navigator to obtain EMC-approved HBA driver and NVRAM settings for optimal operation. See the EMC/QLogic published documents, available at http://powerlink.emc.com/, to obtain EMC-approved HBA settings. NOTE: In most cases, the default HBA parameter settings are also the recommended values. For a detailed explanation of each parameter, refer to the appropriate guide for your operating system.


HBA Performance Tuning Best Practices


Introduction
SAN performance depends on many factors, including workload, hardware, vendor, HBA types, RAID levels, cache sizes, and stripe sizes. This section describes performance topics as they apply to the QLogic Fibre Channel HBAs. The HBA is more than a piece of the SAN performance puzzle; it is the internal plumbing of the SAN. SAN performance tuning is not dependent on just the HBA level performance tuning; QLogic recommends that the SAN administrator look into all factors that influence SAN performance. This section also describes the different workloads generated by enterprise applications, their impact on performance, and how performance can be addressed by configuration and parameter modifications. Managing SAN performance means meeting the demands of current workloads as well as ensuring the ability of applications to scale and handle larger workloads and peak demands. To ensure adequate performance, administrators must be able to monitor and measure system performance. This section describes the tools available to an administrator to measure and analyze performance, and shows how to use the tools effectively. When tuning a SAN for performance, it is important to understand the system's operation and architectural design from end-to-end, so that you can match components such as I/Os per second (IOPS) and queuing. End-to-end refers to all operations and components of the entire system, from handling an application request to the underlying support software, the physical HBA, the SAN fabric components, the disk array controllers, and the disk drive. Without an understanding of the supporting components, you might make blind changes, or changes to one area of the system without considering all others. These changes not only yield no overall system performance improvement, but may make things worse. This section provides insight into many of these architectural considerations, and offers advice on how to avoid common mistakes. 
Recommendations made in this section should enable an optimal balance between performance, management, and functionality.

NOTE: QLogic recommends that performance tuning be conducted in a test SAN environment, and that I/O performance be monitored, before implementing the changes in a live SAN environment. Performance monitoring and tuning is a continual process.

Understanding Application Workloads


In general, there are two types of data workload (processing):

- Transaction based
- Throughput based

These two workloads are very different, and must be planned for differently. Knowing and understanding how your host servers and applications handle their workload is an important part of being successful with your storage configuration efforts and the resulting performance of the QLogic HBA.

QLOGIC FC HBA IN AN EMC ENVIRONMENT VERSION 1.0, JULY 2007

PAGE 25


Transaction based and throughput based processing are types of workload. The workload is the total amount of work performed at the storage server, measured by the following formula:

Workload = [transactions (number of host IOPS)] * [throughput (amount of data sent in one I/O)]

Since a storage server can sustain a given maximum workload, this formula shows that when the number of host transactions increases, the throughput decreases. Conversely, if the host is sending large volumes of data with each I/O, the number of transactions decreases. A workload characterized by a high number of transactions (IOPS) is called a transaction based workload. A workload characterized by large I/Os is called a throughput based workload.

These two workload types are conflicting in nature, and consequently require different configuration settings across all parts of the storage solution. Generally, I/O (and therefore application) performance is best when the I/O activity is evenly spread across the entire I/O subsystem. On the Windows platform, the I/O transfer size of an application can be determined using Perfmon. The following sections further describe each workload type.
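The trade-off implied by this formula can be sketched numerically. The following is a hypothetical illustration; the fixed workload ceiling and the specific IOPS figures are assumptions for the example, not measured values:

```python
def workload_mb_per_s(iops, io_size_kb):
    """Workload = transactions (IOPS) * throughput (data sent per I/O)."""
    return iops * io_size_kb / 1024.0

# Against a fixed storage-server ceiling, IOPS and I/O size trade off:
# 20,000 IOPS at 8KB moves the same amount of data as 2,500 IOPS at 64KB.
transaction_style = workload_mb_per_s(20000, 8)   # 156.25 MB/s
throughput_style = workload_mb_per_s(2500, 64)    # 156.25 MB/s
assert transaction_style == throughput_style
```

This is why a transaction based host and a throughput based host can saturate the same array at very different IOPS counts.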

Transaction Based Processes (IOPs)


High performance in transaction based environments cannot be achieved with a low-cost storage server model (one with a small number of physical drives). Transaction processing rates are heavily dependent on the number of back-end drives available for the controller to use for parallel processing of initiator/host generated I/Os. Transaction-intense applications also generally use a small, random data block pattern to transfer data. With this type of data pattern, having more back-end drives enables more host I/Os to be processed simultaneously, because read cache is far less effective and the misses must be retrieved from disk.

In many cases, slow transaction performance problems can be traced directly to hot files that cause a bottleneck on a critical component (such as a single physical disk). This situation can occur even when the overall storage server has a fairly light workload. Bottlenecks can be very difficult and frustrating to resolve: because workload content can change continually throughout the day, they can be mysterious in nature, appearing and disappearing or moving from one location to another over time.

Generally, I/O (and therefore application) performance is best when the I/O activity is evenly spread across the entire I/O subsystem. While transaction based workloads reduce block contention, they have a relatively large host CPU overhead.

Throughput Based Processes (MBps)


Throughput based workloads are seen with applications or processes that require large amounts of data to be sent, and generally use large sequential blocks to reduce disk latency. Generally, only a small number of drives (20-28) are needed to reach maximum throughput rates with the EMC storage servers. In this environment, read operations use the cache to stage greater chunks of data at a time, which improves overall performance.


Throughput rates are heavily dependent on the storage server's internal bandwidth. Throughput based workloads have a relatively low host CPU overhead.

Differences Between Transaction Based and Throughput Based Workloads


While no single enterprise application can be classified as having a solely transaction or throughput based workload, a large percentage of an enterprise application's workload will be one or the other. Use this predominant workload type to decide how to tune your QLogic HBA for optimum performance. With the QLogic HBA, these two workload types have different parameter settings to optimize their specific workload environments. These settings are not limited to the storage server; they span the entire solution.

With care and consideration, you can achieve very good performance with both workload types sharing the same QLogic HBA. However, portions of the storage server configuration will be tuned to better serve one workload or the other. For maximum performance of both workloads, QLogic recommends using two separate QLogic HBAs, each tuned for its specific workload type, instead of sharing a single HBA to serve both workloads at the same time. However, datacenter budgets may dictate which model you choose.

Characterization of Common Application Workloads


The following table characterizes the most common workloads in an enterprise. This is a generalized characterization; use these definitions as a starting point and refine them based on your environment.

Table 1. Workload Characterization

Block Size     | Workload Type        | Enterprise Application
4, 8, and 16KB | Transaction oriented | Oracle, Microsoft Exchange & SQL, Lotus Domino, Web Server, File Server
16-32KB        | Throughput oriented  | Log files
32-64KB        | Throughput oriented  | Backup and recovery; video and audio streaming

Oracle I/O Characteristics


Table 2. Oracle I/O Characteristics

Subsystem                           | I/O Block Size              | Access Type | I/O Type      | Workload Type
OLTP Log                            | Up to 60KB (sector aligned) | Sequential  | Read or Write | Primarily throughput
OLTP Data                           | 8KB                         | Random      | Read          | Transactional
OLTP Lazy Write                     | 8KB-256KB                   | Random      | Write         | Primarily transactional
OLTP Checkpoint                     | 8KB-256KB                   | Random      | Write         | Primarily transactional
Read ahead (DSS, table/index scans) | 8KB-256KB                   | Sequential  | Read          | Primarily throughput
Bulk insert                         | 8KB-128KB                   | Sequential  | Write         | Primarily transactional


Microsoft Exchange I/O Characteristics


Table 3. Microsoft Exchange I/O Characteristics

Subsystem           | I/O Block Size | Access Type | I/O Type                       | Workload Type
Database Store .EDB | 4KB            | Random      | Equally split reads and writes | Transactional
Database Store .STM | 32-64KB        | Random      | Equally split reads and writes | Throughput
Database Store .LOG | 4KB            | Sequential  | Write                          | Transactional

HBA Parameters that Impact HBA Performance


The following table lists the programmable HBA driver and HBA parameters (also called NVRAM parameters). Modifying these parameters may impact HBA performance. This section provides a brief description of each parameter, and shows its default value and range. The following sections discuss several of these parameters in more detail.

HBA Parameter                    | Definition                                                                                                          | Default Value | Range                                  | OS Support
Execution Throttle               | Specifies the maximum number of I/O commands allowed to execute on an HBA port.                                     | 256           | 1-256                                  | Windows
Frame Size                       | Specifies the size of a Fibre Channel frame per I/O.                                                                | 2048          | 512-2048                               | All
Fibre Channel Data Rate          | Specifies the HBA adapter data rate. When set to Auto, the adapter auto-negotiates the data rate with the connecting SAN device. | Auto          | 1 (Auto), 2 (1Gb), 3 (2Gb), 4 (4Gb)    | All
Maximum Queue Depth              | Specifies the maximum number of I/O commands allowed to execute/queue on an HBA port.                               | 32            | 1-65535                                | VMware ESX
Maximum Scatter Gather List Size | Specifies the size of the list of DMA items reported to the SCSI mid-level per I/O request.                         | 32            | 1-255                                  | VMware ESX
Maximum Sectors                  | Specifies the maximum number of disk sectors reported to the SCSI mid-level per I/O request.                        | 512           | 512, 1024, 2048                        | VMware ESX

The following sections focus on tuning HBA performance in Windows and VMware ESX environments.

HBA Performance Tuning in Microsoft Windows


In Microsoft Windows environments, the following three HBA parameters affect HBA performance: Execution Throttle, Frame Size, and Fibre Channel Data Rate. Of these, the Frame Size and Fibre Channel Data Rate default settings are pre-set to 2112 bytes (2048 bytes + headers) and auto-negotiate to provide the best possible performance in any environment. Therefore, Execution Throttle is the only HBA parameter that you can tune to improve HBA performance in a Windows environment.


Execution Throttle
What is Execution Throttle?
Execution Throttle is an HBA parameter that controls the maximum number of I/O commands executing on any one HBA port. When a port's execution throttle is reached, no new commands are executed until a current command finishes executing. The valid range for this setting is 1-256. For Microsoft Windows environments, this parameter is set to the default value of 16 or to an OEM-specific value. With a value of 16, at any point in time, one HBA port can have a maximum of 16 I/O commands executing; any additional I/O commands must wait before they can be scheduled in the HBA. The value of the Execution Throttle parameter can be increased to allow the HBA to execute more commands, or decreased to reduce the amount of I/O an HBA port can put on the wire.

Tuning Execution Throttle


In a SAN configuration with three or more servers accessing the same storage array, QLogic recommends changing the default Execution Throttle value for each HBA. By default, all QLogic EMC 4Gb FC HBAs have their Execution Throttle value set to the maximum; if you decide to change this default value, use the following guidelines to derive a new value.

To calculate the new Execution Throttle, first determine whether all servers carry the same I/O load.

If all servers carry the same I/O load, calculate the value by dividing 250 by the number of servers in the SAN, and set each HBA in the SAN to the calculated value. For example, in a four-server configuration, divide 250 by 4 to arrive at 62.5; the Execution Throttle value for each HBA is 62. Assign the value of 62 to all HBAs.

If some of the servers carry heavier I/O loads, first calculate the Execution Throttle value by dividing 250 by the number of servers, and then adjust the values so that servers with higher I/O loads have higher Execution Throttle values and servers with lower I/O loads have lower Execution Throttle values. For example, in a four-server configuration, you can assign the value of 72 to the HBAs in the server with the highest I/O load, the value of 52 to the HBAs in the second server, and the value of 62 to the HBAs in the remaining two servers.

When calculating the server Execution Throttle value, consider the following:

- EMC recommends setting all HBAs in a server to the same Execution Throttle value.
- When adding the values, treat each server as having a single value. If the adapters in the first server are set to 72, the adapters in the second server are set to 52, and the adapters in the third and fourth servers are set to 62, the sum is 248.
- In most environments, it is acceptable to set the value of Execution Throttle to its maximum for all HBAs in the SAN. 
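The 250-divided-by-servers rule can be expressed as a small helper. This is a sketch of the guide's arithmetic; the function and variable names are illustrative, not part of any QLogic tool:

```python
def equal_execution_throttle(num_servers, budget=250):
    """Evenly split the aggregate throttle budget of 250 across servers,
    dropping any fraction (for example, 250 / 4 = 62.5 -> 62)."""
    return budget // num_servers

assert equal_execution_throttle(4) == 62

# Unequal loads: hand-tuned per-server values must still sum to no more
# than the 250 budget (the four-server example above sums to 248).
weighted = [72, 52, 62, 62]
assert sum(weighted) <= 250
```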
In general, increasing the value of the execution throttle will have a more profound increase in performance for transaction-oriented applications as compared to throughput-oriented applications. The effects of changing the execution throttle of a QLogic HBA are most noticeable when the application workload type is transaction oriented.


How to Change the Execution Throttle


The Execution Throttle value for each port of an HBA can be easily changed with the QLogic SANsurfer FC HBA Manager application (or the SANsurfer command line interface (CLI)) on Windows environments (see the figure below).

Key Notes on Tuning Execution Throttle


CAUTION: When experimenting with Execution Throttle, consider the following:

- There is a risk of flooding the storage array with I/O from just one HBA port that has a high Execution Throttle. This flooding can prevent or delay access to the storage device from other HBA ports.
- If the Execution Throttle value is too high, attached servers may report an I/O timeout issued by the QLogic Fibre Channel device driver in the data section of the Windows Event Viewer System log.
- In many cases, the speed of the HBA is limited by the speed of the storage array. Do not increase the Execution Throttle to a value at which the I/O overwhelms the storage array.


HBA Performance Tuning in VMware ESX Server


A virtual machine, no different from a physical server, hosts an operating system and runs applications. Tuning a physical server is similar to tuning a virtual machine environment, but there are a few considerations and special options. This section describes the QLogic HBA parameters that apply to the VMware ESX Server and can impact the performance of the virtual machines hosted by the server. There are three such HBA parameters:

- Maximum Queue Depth
- Maximum Sectors
- Maximum Scatter Gather Entries

HBA Queue Depth


What is HBA Queue Depth?
HBA queue depth is an HBA driver parameter that refers to the number of HBA SCSI command buffers that can be allocated by an HBA port. It determines the maximum number of outstanding commands that can execute on any one HBA port. When the QLogic driver loads, it registers the value of the queue depth with the SCSI mid-level, which tells the SCSI layer above the HBA driver how many commands the HBA driver is willing to accept and process. The default is 32 for ESX 3.0.0 and 16 for ESX 2.5.x. The valid queue depth range is 1-256, but the best value for your environment depends on the following factors:

- The total number of LUNs exposed through the storage array (target) ports
- The queue depth of the storage array (target) port
Parameter Description | Denoted in Driver by | Default in ESX 2.5.x | Default in ESX 3.0.x | Range
HBA Max Queue Depth   | ql2xmaxqdepth        | 16                   | 32                   | 1-256

The effects of changing the HBA Max Queue Depth parameter are most noticeable when the application workload type is transaction oriented.

Tuning HBA Queue Depth for your Environment


When determining the right value of HBA queue depth for your environment, consider the effect of the HBA queue depth on other SAN components. This section describes the relationship between the HBA queue depth and the target device queue depth, and discusses whether increasing the HBA queue depth can increase performance without adversely affecting other SAN components.


I/O Loading Fundamentals


This section presents the fundamentals of I/O loading using EMC disk array examples; the model should extend to other devices in the SAN as well. Loading is presented from the target device port loading perspective, with host and HBA factors that affect the storage array's optimal utilization. The concepts and variables presented are standard SCSI terminology; the terminology may change depending on the host operating system. For example, in Solaris, the QLogic driver queue depth per LUN is the variable sd_max_throttle; in VMware, it is the variable ql2xmaxqdepth.

When determining loading in any mass storage SAN setup, it is necessary to understand the factors involved; these factors are at the SCSI layer level in both the initiator (host) and the target (device), and they provide an I/O flow control mechanism between the initiator and the target. The host flow control variable is the queue depth per LUN; the device flow control variable is the target port queue depth. The diagram above shows the placement of these variables in the I/O architecture.

The following five main variables assess the loading factor on a device port:

P = The number of host paths connected to that array device port.

q = Queue depth per LUN on the hosts (for the host port); that is, the maximum I/Os outstanding per LUN from the host at any given time. This is a QLogic FC driver/SCSI driver parameter.

L = The number of LUNs configured on the array device port as seen by a particular host path.

T = The maximum queue depth per array device port, which signifies the maximum number of I/Os outstanding (that can be handled) on that port at a given time. The value for T is typically 2048 for EMC arrays.

Q = Queue depth per HBA port; that is, the maximum I/Os that an HBA can have outstanding at an instant. This parameter is needed only if q (queue depth per LUN) is not available.


These variables relate to each other through the following equation:

T >= P * q * L

That is, the target port queue depth should be greater than or equal to the product of host paths, queue depth per LUN, and the number of LUNs configured. For heterogeneous hosts connected to the same port on the device, use the following equation:

T >= Host OS 1 (P * q * L) + Host OS 2 (P * q * L) + ... + Host OS n (P * q * L)

Key Considerations

The target port queue depth value (2048 for many EMC arrays) used in the equations above is on a per-port basis; that is, 2048 outstanding (simultaneous) I/Os at a time. Therefore, one EMC controller with four ports can handle 2048 x 4 = 8192 outstanding I/Os at a time without any port overloading, and two controllers can handle 16384 outstanding I/Os. Check the specifications of your EMC array to get the exact number.

Not abiding by the above equations can result in either under-utilizing the target device (when the offered I/O load is well below T) or flooding it (when the offered I/O load exceeds T). Flooding the target device queue results in a QFULL condition. A QFULL is an I/O throttling response sent by the storage array SCSI layer to an HBA port to notify the port that the array's I/O processing limit has been reached and it cannot accept any more I/Os until it completes its current set.

In VMware ESX, when a QFULL condition is received from the target, the HBA driver typically decreases the LUN's maximum queue depth to the minimum value (usually 1). This value throttles I/O to the target port. When the target stops issuing QFULL conditions, the HBA driver gradually increases the LUN queue depth value, thereby slowly increasing I/O to the target port. Excessive QFULL conditions will drastically decrease the performance of your SAN connection. If you suspect flooding, you can enable the Extended Error Logging flag in the HBA parameters to view extended logging and see whether a QFULL condition is being received by the HBA.
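The loading equation lends itself to a quick sanity check. The sketch below is illustrative: function names and the example host counts are assumptions, and T = 2048 is the typical EMC per-port value quoted above:

```python
def host_port_load(paths, lun_queue_depth, luns):
    """P * q * L: outstanding I/Os one host can drive at a target port."""
    return paths * lun_queue_depth * luns

def target_port_safe(target_queue_depth, host_loads):
    """T >= sum of per-host (P * q * L); False means the port risks QFULL."""
    return target_queue_depth >= sum(host_loads)

# Two hosts, each with 2 paths, queue depth 32 per LUN, and 10 LUNs:
loads = [host_port_load(2, 32, 10)] * 2          # 640 each, 1280 total
assert target_port_safe(2048, loads)             # within the 2048 budget
assert not target_port_safe(2048, loads * 2)     # 2560 would flood the port
```

Running this kind of check before raising a queue depth shows whether the extra outstanding I/O still fits within the target port's budget.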

Determining HBA Queue Utilization


Use the VMware ESX tool esxtop to view the current HBA queue utilization while I/O is active on the HBA ports. Navigate your way through esxtop to find disk statistics. A man esxtop command issued from the ESX console provides detailed information on its usage.


The figure below shows the output of the storage statistics section of esxtop while I/O is active. LQLEN shows the current HBA queue depth set in the QLogic HBA driver. A value of LOAD > 1 indicates that the host application is placing more data in the HBA queue than its current size can handle. A system with this issue can benefit from an increase in the HBA queue depth. Note that any increase in the HBA queue depth should be guided by the equation discussed in I/O Loading Fundamentals on page 32.

The figure below shows the effect of increasing the HBA queue depth. This result of esxtop has been captured after increasing the HBA queue depth from its default value of 32 to 64. Note that the LOAD is < 1 and there is a significant increase in the READS/s operations, which means that performance has increased.


How to Change the HBA Queue Depth


To change the queue depth of a QLogic HBA in VMware ESX, follow these steps:

1. Log on to the VMware ESX console as root.
2. Create a copy of /etc/vmware/esx.conf so you have a backup copy.
3. Edit the file /etc/vmware/esx.conf in your favorite editor.
4. Locate the following entry:

   /vmkmodule[0002]/module = "qla2300_707.o"
   /vmkmodule[0002]/options = ""

5. Modify the entry as shown, where xx is the queue depth value:

   /vmkmodule[0002]/module = "qla2300_707.o"
   /vmkmodule[0002]/options = "ql2xmaxqdepth=xx"

6. Save the file.
7. Reboot the VMware ESX Server.


How to Verify the Queue Depth Change


The current QLogic HBA queue depth and any changes to it can be verified by one of the following methods.

1. View the entries for the QLogic FC adapter driver in the /proc file system.
   a. Execute the command cat /proc/scsi/qla2300/x at the VMware ESX console, where x is the port number of the QLogic HBA. The port number is typically 0 for a single-port HBA, and 0 and 1 for a dual-port HBA.
   b. Look for the string "Device queue depth". The value assigned to this field is a hexadecimal representation of the currently configured queue depth for the HBA port specified in the command.
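Because the /proc value is reported in hexadecimal, a small script can translate it to decimal. This is a sketch; the exact field layout of the proc output is assumed from the description above, so adjust the pattern to match your driver version's output:

```python
import re

def parse_device_queue_depth(proc_text):
    """Extract the 'Device queue depth' hex value from the output of
    cat /proc/scsi/qla2300/<x> and convert it to decimal."""
    match = re.search(r"Device queue depth\s*=?\s*0x([0-9A-Fa-f]+)", proc_text)
    return int(match.group(1), 16) if match else None

# 0x20 hexadecimal is 32 decimal -- the ESX 3.0.x default.
assert parse_device_queue_depth("Device queue depth = 0x20") == 32
```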


2. Look at the VMware kernel log files.
   a. After logging on to the VMware ESX console, view the file /var/log/vmkernel in your favorite editor.
   b. Look for the string "Queue depth". The value assigned to this field is a decimal representation of the currently configured queue depth for the specified HBA port.

Key Notes on Tuning the HBA Queue Depth


Consider the following when tuning the HBA queue depth:

- For QLogic driver version 6.04, the default setting for the HBA queue depth is 16; for versions 6.07 and 7.x (in Virtual Infrastructure 3), the default is 32.
- Increasing the queue depth in the QLogic Fibre Channel HBA above 65 has little effect, because the maximum parallel execution (or queue depth) of SCSI operations is 64. Experiment with your SAN setup and determine the value that best suits your environment.
- Increasing the HBA queue depth will likely result in an increased number of I/O operations that can be placed in the HBA queue for servicing, consequently increasing the CPU utilization of the host.
- Consider the I/O performance of disks or any other I/O device between the storage controller port and the disks. Array controller I/O queue clearance depends on these devices, which allow outstanding I/Os to be processed from the host side. The number of disks, cache settings, RAID type used, and so on should be considered along with the I/O performance of the QLogic HBA on EMC arrays.


Maximum Scatter/Gather List


What is Scatter/Gather List?
Scatter/gather is a QLogic driver feature for processing DMA data transfers that are written to noncontiguous areas of memory. A scatter/gather list is a list of vectors, each of which gives the location and length of one segment in the overall read or write request. Each QLogic Fibre Channel adapter has an on-board DMA controller that is programmed to support scatter/gather operations.

Any I/O operation performed by an application to a disk first reaches a block device driver in the host operating system, which stores the data contained in the I/O into locations in memory. The data for a single I/O operation processed from an application can span multiple memory locations. The block device driver passes these memory locations and their lengths to the HBA driver via the SCSI layers. A scatter/gather list is stored as a table in host kernel memory; the table lists all of these memory locations and the length of each location. The HBA driver then passes this table to the firmware on the physical adapter, which in turn performs a DMA operation to retrieve the data.

The QLogic HBA parameter HBA Max SG List defines the size of this table. The larger the table, the more entries it can store and process; the larger the I/O operation, the more scatter/gather entries are needed to store the I/O data. The following table specifies the VMware ESX specific parameter name, defaults, and range. This HBA driver parameter is not available on Windows operating systems.
Parameter Description | Denoted in Driver by | Default in ESX 2.5.x | Default in ESX 3.0.x | Range
HBA Max SG List       | ql2xmaxsgs           | 32                   | 32                   | 1-255

Tuning the Maximum Scatter/Gather List


Based on the maximum scatter/gather list concept, larger I/O operations (throughput based) will conceptually benefit from an increased value of the HBA Max SG List parameter. However, setting the scatter/gather list to a very large value (even for large I/O block sizes) may adversely affect performance, as a very large scatter/gather list may put an undue load on the HBA firmware, host memory, and the DMA engine. QLogic recommends experimenting by increasing the scatter/gather list for large I/Os or decreasing it for small I/Os, then measuring the performance gain. Do not expect a big performance jump, as the default value for the scatter/gather list is pre-optimized for most application environments. QLogic does not recommend setting the value of the scatter/gather list to its maximum value (255) for any application environment.
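As a rough model of why larger I/Os need larger lists: in the worst case, each non-contiguous memory page of a transfer consumes one scatter/gather entry. This is a simplification for illustration only; the driver's actual entry accounting and the 4KB page size are assumptions:

```python
import math

def sg_entries_worst_case(io_size_bytes, page_size=4096):
    """Upper bound on scatter/gather entries for one I/O, assuming one
    entry per (possibly non-contiguous) page of the transfer."""
    return math.ceil(io_size_bytes / page_size)

# A 256KB I/O can need up to 64 entries -- above the default list size
# of 32 -- while a 64KB I/O fits comfortably within the default.
assert sg_entries_worst_case(256 * 1024) == 64
assert sg_entries_worst_case(64 * 1024) <= 32
```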


How to Change Maximum Scatter/Gather List


To change the HBA Max SG List parameter of a QLogic HBA in VMware ESX, perform the following steps:

1. Log on to the VMware ESX console as root.
2. Create a copy of /etc/vmware/esx.conf so you have a backup copy.
3. Edit the file /etc/vmware/esx.conf in your favorite editor.
4. Locate the following entry:

   /vmkmodule[0002]/module = "qla2300_707.o"
   /vmkmodule[0002]/options = ""

5. Modify the entry as follows, where xx is the new HBA Max SG List value:

   /vmkmodule[0002]/module = "qla2300_707.o"
   /vmkmodule[0002]/options = "ql2xmaxsgs=xx"

   If you need to change more than one option, separate the options with spaces in the options field. For example:

   /vmkmodule[0002]/module = "qla2300_707.o"
   /vmkmodule[0002]/options = "ql2xmaxsgs=xx ql2xmaxqdepth=64"

6. Save the file.
7. Reboot the VMware ESX Server.

How to Verify the Max Scatter/Gather List Change


Any change in the QLogic HBA Max SG List parameter can be verified by looking at the VMware kernel log files, as follows:

1. After logging on to the VMware ESX console, view the file /var/log/vmkernel in your favorite editor.
2. Look for the string sg_tablesize. The value assigned to this field is a decimal representation of the currently configured size of the scatter/gather list, as shown in the following screenshot.


Maximum Sectors
This QLogic HBA driver parameter defines the maximum number of disk sectors for a LUN that are reported by the HBA driver to the SCSI mid-level. The value of this parameter is used by the block device driver to obtain a DMA transaction size for each I/O operation. The following table specifies the HBA Max Sectors parameter name, defaults, and range. This HBA driver parameter is not available on Windows operating systems.
Parameter Description | Denoted in Driver by | Default in ESX 2.5.x | Default in ESX 3.0.x | Range
HBA Max Sectors       | ql2xmaxsectors       | 512                  | 512                  | 512, 1024, 2048

QLogic recommends experimenting with increasing the HBA Max Sectors parameter for large I/Os or decreasing it for small I/Os, then measuring the performance gain. Do not expect a big performance jump, as the default value for the HBA Max Sectors parameter is pre-optimized for most application environments. QLogic does not recommend setting the value of HBA Max Sectors above 2048 in any application environment. The recommended procedure for changing and verifying the HBA Max Sectors parameter is similar to the procedure for changing the HBA Max SG List detailed in HBA Performance Tuning in VMware ESX Server on page 31. When verifying a successful change, look for the string max_sectors in the vmkernel file.
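Since the parameter counts disk sectors, the implied ceiling on a single DMA transfer is easy to compute. This sketch assumes the standard 512-byte sector size; the function name is illustrative:

```python
def max_dma_transfer_bytes(max_sectors, sector_size=512):
    """Largest single DMA transfer implied by the HBA Max Sectors value."""
    return max_sectors * sector_size

# The default of 512 sectors allows 256KB per I/O; the 2048 maximum, 1MB.
assert max_dma_transfer_bytes(512) == 256 * 1024
assert max_dma_transfer_bytes(2048) == 1024 * 1024
```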

Monitoring Performance
To determine where a performance problem exists or where a potential problem will occur, it is important to gather data from all components of the storage solution. It is not uncommon to be misled by a single piece of information and misdiagnose the cause of poor system performance, only to realize that another component of the system is causing the problem. This section looks at the utilities, tools, and monitors available to help you analyze what is happening within your SAN environment.

Transaction performance is generally perceived to be poor when the following conditions occur:

- Random read/write operations exceed 20ms (without write cache).
- Random write operations exceed 2ms with the cache enabled.
- I/Os are queuing up in the operating system I/O stack (due to a bottleneck).

Throughput performance is generally perceived to be poor when the disk capability is not being reached. Causes of this can stem from the following situations:

- With read operations, read-ahead is being limited, preventing higher amounts of immediate data from being available.
- I/Os are queuing up in the operating system I/O stack (due to a bottleneck).

Most operating systems, Fibre Channel adapters, Fibre Channel switches, and disk arrays provide a host of utilities (add-on and built-in) to help you monitor I/O performance.
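The rule-of-thumb latency thresholds above can be encoded as a simple check when scripting against collected statistics. The thresholds are the guide's values; the function and parameter names are illustrative:

```python
def transaction_perf_poor(rand_io_ms, cached_write_ms, os_queue_backlog):
    """True when any warning sign is present: random reads/writes over
    20ms without write cache, cached random writes over 2ms, or I/Os
    queuing in the operating system I/O stack."""
    return rand_io_ms > 20 or cached_write_ms > 2 or os_queue_backlog

assert transaction_perf_poor(25, 1, False)        # slow uncached I/O
assert transaction_perf_poor(10, 3, False)        # slow cached writes
assert not transaction_perf_poor(10, 1.5, False)  # within thresholds
```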


QLogic recommends breaking down the measurement into the following three parts:

- Gathering host server data
- Gathering fabric network data
- Gathering EMC storage server data

Gathering Host Server Data


Most operating systems (OSs) have several built-in utilities to measure I/O performance and the effect of this performance on other host server components (CPU consumption, memory utilization, etc.). The following table summarizes many of the commonly used tools.
- SANsurfer (Windows, Linux, Solaris): QLogic HBA performance monitor. Key statistics: HBA port IOPS, MBps, error statistics. Available from http://support.qlogic.com/support/oem_emc.asp.
- perfmon (Windows): Windows built-in real-time performance monitoring tool, with the ability to set alerts for various thresholds. Key statistics: CPU, disk, memory, network. Launch from Administrative Tools > Performance or by entering perfmon at the command line.
- esxtop (VMware): Displays ESX Server resource utilization statistics in real time. Key statistics: CPU, disk, memory, network. Run /usr/bin/esxtop.
- Virtual Center/Virtual Infrastructure Client (VMware): Performance tabs for these VMware applications display performance charts per virtual machine and host server. Key statistics: virtual machine CPU, disk, memory, network. Connect to http://<ip address of host>/.
- vmkusage (VMware 2.x): Historical graphs that show physical server and virtual machine system statistics. View at http://<ip address of host>/vmkusage.
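Whichever tool you use, the headline IOPS and MBps figures are derived the same way: sample a cumulative counter twice and divide the delta by the interval. The sketch below shows that arithmetic generically; the counter names are illustrative, not from any specific tool:

```python
# Generic sketch: monitors such as SANsurfer or perfmon derive IOPS and
# MBps by sampling cumulative counters twice and dividing the delta by
# the sampling interval. Counter names below are illustrative only.

def io_rates(sample1, sample2, interval_s):
    """sample1/sample2: dicts with cumulative 'ios' and 'bytes' counters."""
    d_ios = sample2["ios"] - sample1["ios"]
    d_bytes = sample2["bytes"] - sample1["bytes"]
    return {
        "iops": d_ios / interval_s,
        "mbps": d_bytes / interval_s / 1e6,
    }

before = {"ios": 100_000, "bytes": 4_000_000_000}
after = {"ios": 101_000, "bytes": 4_200_000_000}
print(io_rates(before, after, 10.0))  # 100 IOPS, 20 MBps
```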



Gathering Fabric Network Data


One of the easiest ways of measuring and monitoring I/O performance in a SAN is to look at I/O statistics at the heart of the SAN: the Fibre Channel switch. These statistics provide a view of SAN performance from both the initiator and the target. QLogic recommends looking at the FC switch statistics as the first step in any performance monitoring exercise. QLogic Fibre Channel switches have a built-in performance monitor agent. Use the SANsurfer Performance Viewer to connect to a Fibre Channel switch and monitor I/O performance for one or more switch ports of your choice. The viewer provides real-time graphs of current I/O workloads for easy comparison.
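The raw numbers behind such a viewer reduce to simple arithmetic: two samples of a port's byte counter against the port's line rate give its utilization. The sketch below is our own illustration of that calculation, assuming roughly 100/200/400 MB/s of payload per direction for 1/2/4 Gb FC:

```python
# Illustrative calculation: given two samples of a switch port's transmit
# byte counter, estimate link utilization against the port's line rate.
# Payload figures (~400 MB/s per direction at 4Gb) are approximate.

LINE_RATE_MBPS = {1: 100, 2: 200, 4: 400}  # approximate payload MB/s

def port_utilization(bytes_t0, bytes_t1, interval_s, gbps=4):
    """Percent utilization of one direction of an FC link."""
    mb_per_s = (bytes_t1 - bytes_t0) / interval_s / 1e6
    return 100.0 * mb_per_s / LINE_RATE_MBPS[gbps]

# e.g. 2 GB transferred over 10 s on a 4Gb port -> ~50% utilized
print(round(port_utilization(0, 2_000_000_000, 10.0), 1))
```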



Gathering Disk Array Performance Data


All EMC storage arrays have built-in capabilities for performance monitoring and measurement. These tools provide a wealth of information about how I/O is being processed by the storage array: which disks are taking the most I/O hits, what the cache hit percentage is, what the split is between read and write operations across the array, and so on. QLogic recommends installing and using EMC Navisphere Manager for CLARiiON arrays to supplement any performance monitoring done in your SAN. Navisphere Manager fully integrates Navisphere Analyzer, which provides detailed real-time information about your array and system performance, along with an easy-to-use data record and playback facility for reviewing the captured data. Navisphere Analyzer lets you automatically record data metrics from selected arrays, LUNs, and storage processors. You can set the schedule of when to record the data, where to store the results, and from which hosts to gather data. These features allow you to investigate performance patterns and trends before problems occur, and provide a powerful tool for capacity planning and intelligent load balancing. See http://germany.emc.com/pdf/products/navisphere/navisphere_analyzer.pdf for more information about EMC Navisphere Analyzer.

Choosing the Optimal PCI Slot for Your HBA


A typical enterprise server may have several add-on cards (SCSI, VGA, LAN, HBA, etc.) and a variety of slots to house these cards. In a shared-bus environment like PCI, it is imperative that the right PCI slot be chosen to host the QLogic HBA; the type and placement of the slot makes a significant difference in HBA performance. The following table matches each QLogic HBA to the appropriate slot, and this section will help you choose the right slot in a server with multiple prospective slots. Table 4. EMC-Supported QLogic HBAs
HBA         PCI Slot          Bus Length  Power  Slot Key
QLA2460-E   PCI-X 2.0         64-bit      3.3V   3.3V
QLA2462-E   PCI-X 2.0         64-bit      3.3V   3.3V
QLE2460-E   PCI Express 1.0a  x4 lane     3.3V   n/a
QLE2462-E   PCI Express 1.0a  x4 lane     3.3V   n/a



A typical motherboard I/O diagram is depicted below; it shows the maximum available bandwidth in each direction for various slot types and the difference between these slot types.

General Rules for PCI and PCI-X HBAs


A PCI bus operates at the clock rate of the slowest card installed on that bus. Therefore, for optimum performance, PCI/PCI-X cards of different speeds should not be installed on the same bus. Many PCI/PCI-X slots share the bus bandwidth; closely spaced PCI slots most likely share the same PCI bus and bridge (the figure below shows an example). QLogic recommends that you obtain the I/O Diagram for your host server from its manufacturer to understand which PCI/PCI-X slots share a PCI bridge and bus. For example, on an HP Integrity rx5670 server, a common 66-MHz bus is shared between slots 4 and 5, slots 6 and 7, and slots 8 and 9. To maximize PCI bus bandwidth, populate only one slot on each bus and leave the second slot empty. For example, install cards in slots 4, 6, and 8, but leave slots 5, 7, and 9 for non-performance critical adapters.
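The slot-selection rule above can be sketched as a few lines of code. The slot-to-bus mapping below mirrors the rx5670 example from the text (bus names are our own labels):

```python
# Sketch of the "one performance-critical card per shared bus" rule:
# given a slot-to-bus map, keep only the lowest-numbered slot per bus
# for HBAs, leaving the sibling slot for non-critical adapters.
# Bus labels are illustrative; the slot pairing follows the rx5670 example.

SLOT_TO_BUS = {4: "busA", 5: "busA", 6: "busB", 7: "busB", 8: "busC", 9: "busC"}

def preferred_slots(slot_to_bus):
    """Pick the lowest-numbered slot on each shared bus."""
    chosen = {}
    for slot in sorted(slot_to_bus):
        chosen.setdefault(slot_to_bus[slot], slot)  # first slot per bus wins
    return sorted(chosen.values())

print(preferred_slots(SLOT_TO_BUS))  # [4, 6, 8]
```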



Match slot speeds with device/adapter speeds; slower or non-performance-critical adapters should take up the slower PCI/PCI-X slots. HBAs without a slot key can be inserted into any PCI slot; you can insert a 64-bit HBA into a 32-bit PCI slot if no 64-bit PCI-X slots are available. In this case, however, the data transmission rate is limited to standard PCI speed, so avoid such HBA placements.

General Rules for PCI Express (PCIe) HBAs


Unlike PCI/PCI-X, which divides bandwidth between all devices on the bus, PCI Express provides each device with its own dedicated data pipeline. Data is sent serially in packets through pairs of transmit and receive signals called lanes, which enable 250 MBps of bandwidth per direction, per lane. Multiple lanes can be grouped together into x1 (by-one), x2, x4, x8, x12, x16, and x32 lane widths to increase bandwidth to the slot. This allows a QLogic PCIe HBA to be placed in any available PCIe slot that can fit the HBA connector without any performance impediment. A QLogic PCIe x4 FC HBA will not fit in an x1 or an x2 slot, but will fit into an x4, x8, or x16 slot. An x4 slot is the right slot for this kind of HBA; placing an x4 HBA into an x8 or x16 slot will not make it run faster, and only wastes a high-performance slot that could be used for a higher-performance device. The bar chart below compares Fibre Channel HBA capabilities with the PCI technologies hosting the HBA. Use it as a reference to see whether you are limiting the performance of the HBA by putting it in a slower slot. (8Gb data is a future projection; PCI Express denotations are for Gen1, 1.0a.)
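The lane arithmetic above can be checked with a back-of-the-envelope calculation. This sketch assumes the Gen1 figure of 250 MBps per lane per direction from the text and roughly 400 MBps per direction for a 4Gb FC port:

```python
# Back-of-the-envelope check that a PCIe Gen1 slot can feed a 4Gb FC HBA.
# Gen1 moves ~250 MBps per lane per direction (per the text); a 4Gb FC
# port carries roughly 400 MBps per direction.

PCIE_GEN1_MBPS_PER_LANE = 250

def slot_limits_hba(lanes, ports, fc_mbps_per_port=400):
    """True if the slot's bandwidth is below the HBA's aggregate FC bandwidth."""
    return lanes * PCIE_GEN1_MBPS_PER_LANE < ports * fc_mbps_per_port

print(slot_limits_hba(lanes=4, ports=1))  # x4 slot, single-port 4Gb HBA: False
print(slot_limits_hba(lanes=1, ports=1))  # hypothetical x1 slot: True
```

An x4 slot (1000 MBps per direction) comfortably exceeds even a dual-port 4Gb HBA's ~800 MBps, which is why the x4 connector on the QLE246x is a good match.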

FC Interfaces Bar Chart



General SAN Considerations for Performance


Fencing High Performance Applications
It is good practice to fence a high-performance, demanding application away from other applications by assigning it a dedicated SAN fabric. If this fencing is not implemented, high-performance applications can overwhelm other applications competing for the same storage resource over the same port.

Minimize ISL
QLogic recommends that you locate the HBA and the storage ports it will access on the same switch. Otherwise, try to minimize ISLs and decrease the number of hops. The more hops the data takes, the slower it reaches its destination.

Upgrade to 4Gb
QLogic recommends replacing all 2Gb Fibre Channel adapters with QLogic 4Gb adapters, as 4Gb Fibre Channel adapters have many advantages over 2Gb adapters: besides performance, they offer new features, high availability, and better troubleshooting capabilities.

Set Data Rate to Auto-negotiate


EMC recommends that all Fibre Channel ports (HBAs, switches, and storage ports) be set to auto-negotiate the Fibre Channel speed. Any other setting can result in a slow connection. All QLogic HBAs come from the factory set to auto-negotiate.

Fan-In Considerations
Consider the ratio of storage ports bound to a single QLogic HBA port; this ratio is called the fan-in. Ideally, the fan-in is 1:1: one storage port is bound to a single HBA port. If the application accessing the storage ports is not bandwidth intensive, the fan-in can be relaxed to 3:1 or beyond, depending on the application and the time-of-day load. Visit powerlink.emc.com to review EMC-recommended fan-in and fan-out ratios for popular operating systems and EMC arrays.
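A fan-in audit is easy to automate from a zoning export. The sketch below is illustrative; the data structures and WWPN-style names are hypothetical, not output from any EMC or QLogic tool:

```python
# Illustrative fan-in check: count storage ports zoned to each HBA port
# and flag ratios beyond a chosen limit. Data structures are hypothetical.

def fan_in(zoning, limit=3):
    """zoning: dict mapping HBA port name -> list of storage port names.
    Returns the HBA ports whose fan-in exceeds `limit`, with their counts."""
    return {hba: len(ports) for hba, ports in zoning.items()
            if len(ports) > limit}

zones = {
    "hba1": ["spA0"],                          # 1:1, the ideal ratio
    "hba2": ["spA0", "spA1", "spB0", "spB1"],  # 4:1, over a 3:1 limit
}
print(fan_in(zones))  # {'hba2': 4}
```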

Avoiding RSCNs
Assign a unique Fibre Channel switch zone between the server initiator and the storage controller processor ports. That is, for each WWN (each initiator HBA), regardless of the number of ports, there should be one unique zone. This zoning strategy isolates an initiator from Registered State Change Notification (RSCN) disruptions, which can hamper performance.
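The single-initiator zoning strategy above can be sketched as a small generator: one zone per initiator WWPN, each containing that initiator plus the storage ports it needs. The WWPNs and zone-naming convention below are made up for illustration:

```python
# Sketch of single-initiator zoning: build one zone per initiator WWPN.
# Zone names and WWPNs here are hypothetical examples.

def single_initiator_zones(initiators, storage_ports):
    """Return {zone_name: [members]} with exactly one zone per initiator."""
    return {
        f"zone_{i}": [wwpn] + list(storage_ports)
        for i, wwpn in enumerate(initiators)
    }

zones = single_initiator_zones(
    ["10:00:00:00:c9:aa:aa:aa", "10:00:00:00:c9:bb:bb:bb"],
    ["50:06:01:60:00:00:00:01"],
)
print(len(zones))  # 2 zones, one per initiator
```

Because each initiator sits alone in its zone, an RSCN triggered by one host's login or logout is not delivered to unrelated initiators.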



Linear Scaling of HBAs


It is possible to achieve higher performance by scaling the number of HBAs in a host. When installing more than one HBA on a server, QLogic recommends that you follow the slot identification guidelines in HBA Parameters that Impact HBA Performance (page 28) to take advantage of the full capacity of two or more QLogic HBAs. The graph below shows that total bus throughput increases with the number of devices for different interfaces.

I/O Load Balancing


You can maximize the performance of an application by spreading the I/O load across multiple paths, arrays, and device adapters in the storage unit. When trying to balance the load within the storage unit, placement of application data is the determining factor. The following resources, in relative order of importance, are the most critical to balance:
- Activity to the RAID disk groups. Use as many RAID disk groups as possible for critical applications. Many performance bottlenecks occur because a few disks are overloaded. Spreading an application across multiple RAID disk groups ensures that as many disk drives as possible are available.
- Activity to the device adapters. When selecting RAID disk groups within a cluster for a critical application, spread them across separate device adapters.
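The spreading advice can be illustrated with a simple round-robin layout: assign an application's LUNs across all available RAID disk groups in turn, so no single group absorbs the whole workload. All names below are illustrative:

```python
# Round-robin sketch of the load-spreading advice above. LUN and RAID
# group names are hypothetical; this is not a tool-specific algorithm.

def spread_luns(luns, raid_groups):
    """Return {raid_group: [luns]} assigned round-robin."""
    layout = {rg: [] for rg in raid_groups}
    for i, lun in enumerate(luns):
        layout[raid_groups[i % len(raid_groups)]].append(lun)
    return layout

print(spread_luns(["lun0", "lun1", "lun2", "lun3"], ["rg0", "rg1"]))
# {'rg0': ['lun0', 'lun2'], 'rg1': ['lun1', 'lun3']}
```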



- Activity to the Fibre Channel ports. EMC recommends the use of PowerPath, an EMC automated, non-disruptive solution, to load balance I/O between paths in your SAN. Visit www.emc.com to learn more about PowerPath. PowerPath is not available for VMware ESX Server. NOTE: With PowerPath, servers are continually tuned to adjust to changing application loads, because channel directors/storage processors write to cache instead of disk.


Fibre Channel Security Best Practices


As more business-critical data is consolidated into SANs, protecting this data has come to the forefront as one of the key best practices that every SAN administrator must implement. To effectively protect a storage area network, it is important to understand which actions increase security and which create potential loopholes, as well as the impact of these actions on the rest of the environment. As with any other best practice, the "best" approach to implementing security depends on the business and the current needs of the organization, along with an assessment of the risk to the data being protected. This section defines a series of best practice recommendations for enhancing the security of Fibre Channel SANs. The focus is on QLogic Fibre Channel host bus adapters, associated software, and how security relates to generic SAN best practices. NOTE: Fibre Channel security is only as strong as the weakest component in the SAN; typically, hosts that are connected to a SAN constitute the weakest link. Therefore, protecting hosts from intruders should take priority over protecting other components. Today's hosts may run hundreds of different applications, OSs, and driver components, many with their own security considerations. It is essential to keep yourself and your hosts abreast of the latest security patches for all components.

Setting a Password for QLogic SANsurfer FC HBA Manager


QLogic SANsurfer FC HBA Manager software runs on a host server and allows you to install, configure, troubleshoot, and analyze an installed QLogic HBA. Many SANsurfer features are password protected. The default password for these features is config, and it is known by almost anybody who manages a SAN or has access to the internet and a search engine. Leaving the SANsurfer password at its default value means that anyone with access to the host server could easily change key HBA parameters and potentially breach the security of the SAN. QLogic recommends that you set both host and application access passwords for SANsurfer by performing the following steps:
1. In the SANsurfer FC HBA Manager main window HBA tree, select the host for which you want to set the application access password.
2. Click the Security tab. The Security tabbed page displays, as shown in the following figure.



Generic Fibre Channel SAN Security


The EMC white paper Security for Fibre Channel Storage Area Networks provides a series of best practice recommendations for enhancing the security of a Fibre Channel SAN; the paper focuses on general SAN technologies as opposed to specific products. EMC recommends that you tailor these security guidelines for your environment. You can find this white paper at http://powerlink.emc.com/.


Appendix A QLogic HBA Tasks and Tools Sheet


This appendix provides:
- Task: Information (page 51)
- Task: Monitoring/Diagnostics (page 52)
- Task: Configuration (page 53)
- Task: Maintenance (page 54)
- Task: Other (page 54)

Task: Information

Tools (and supported operating systems):
- SANsurfer FC HBA Manager: Windows, Solaris, NetWare, Mac, Linux
- SANsurfer FC CLI: Windows, Solaris, NetWare, Mac, Linux
- Windows Install Wizard: Windows
- FlasUTIL: Any
- FAST!UTIL: Any
- EfiUTIL: Any
- Linux Tools: Linux

Subtasks:
- Serial number
- Model/type
- OptionROM (BIOS/FCode/EFI) version
- Firmware version
- Driver version
- Fibre Channel specific data: WWN, loop ID
- Target information
- VPD displays
- Graphic SAN topology



Task: Monitoring/Diagnostics

Tools: SANsurfer FC HBA Manager, SANsurfer FC CLI, Windows Install Wizard, FlasUTIL, FAST!UTIL, EfiUTIL, Linux Tools

Subtasks:
- Real-time status of the HBAs
- On-demand Fibre Channel link status
- Read/write buffer test
- Loopback test
- SFP health
- Verifying NVRAM, flash
- Email notification of fabric events
- Error/alarm reporting
- Statistics
- Snapshot
- Low-level disk commands
- LUN state transition

Notes:
a. With loopback plug.
b. Use Linux tool ql-hba-snapshot.sh.
c. Use Linux tool ql-lun-state.sh.



Task: Configuration

Tools: SANsurfer FC HBA Manager, SANsurfer FC CLI, Windows Install Wizard, FlasUTIL, FAST!UTIL, EfiUTIL, Linux Tools

Subtasks:
- Driver settings
- NVRAM settings
- Persistent binding
- LUN masking
- Updating factory NVRAM defaults
- BOOT from SAN device
- Scan for Fibre Channel devices
- Remote HBA management
- SCSI command timeout value
- Failover
- Device replacement
- Option ROM config
- iiDMA setting

Notes:
a. Use Linux tool ql-scan-lun.sh.
b. Capability limited to Linux tools ql-lun-state.sh and ql-set-cmd-timeout.sh.
c. Use Linux tool ql-set-cmd-timeout.sh.



Task: Maintenance

Tools: SANsurfer FC HBA Manager, SANsurfer FC CLI, Windows Install Wizard, FlasUTIL, FAST!UTIL, EfiUTIL, Linux Tools

Subtasks:
- Driver installation/update
- Change boot code/Option ROM (BIOS, EFI, FCode)
- Copying the current flash and NVRAM contents
- Installation reports

Task: Other

Tools: SANsurfer FC HBA Manager, SANsurfer FC CLI, Windows Install Wizard, FlasUTIL, FAST!UTIL, EfiUTIL, Linux Tools

Subtasks:
- Scripting
- Generate XML
- Generate a summary of the current configuration of the local host/HBA/devices


Appendix B QLogic HBA LED Scheme


The following LED scheme applies to the QLA2460-E, QLA2462-E, QLE2460-E, and QLE2462-E HBAs:

Activity                              Yellow LED (4 Gbps)  Green LED (2 Gbps)  Amber LED (1 Gbps)
Power Off                             Off                  Off                 Off
Power On (before firmware init.)      On                   On                  On
Power On (after firmware init.)       Flashing             Flashing            Flashing
Firmware Error                        Yellow, Green, and Amber LEDs flashing alternately
Online, 1 Gbps link / I/O activity    Off                  Off                 On/Flashing
Online, 2 Gbps link / I/O activity    Off                  On/Flashing         Off
Online, 4 Gbps link / I/O activity    On/Flashing          Off                 Off
Beacon                                Flashing             Off                 Flashing
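As a quick reference, the LED states in this appendix can be expressed as a lookup table. This encoding is our own; the firmware-error case (all LEDs flashing alternately) is a time-varying pattern and is omitted from the static map:

```python
# Our own encoding of the LED scheme above: map (yellow, green, amber)
# LED states to the activity they indicate for the QLx246x HBAs.
# The firmware-error state (alternating flash) is not a static triple.

LED_STATES = {
    ("off", "off", "off"): "Power off",
    ("on", "on", "on"): "Power on (before firmware init)",
    ("flashing", "flashing", "flashing"): "Power on (after firmware init)",
    ("off", "off", "on/flashing"): "Online, 1 Gbps link / I/O activity",
    ("off", "on/flashing", "off"): "Online, 2 Gbps link / I/O activity",
    ("on/flashing", "off", "off"): "Online, 4 Gbps link / I/O activity",
    ("flashing", "off", "flashing"): "Beacon",
}

print(LED_STATES[("off", "on/flashing", "off")])
```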



References
- EMC CLARiiON High Availability (HA) Best Practices, July 2005, at http://powerlink.emc.com/
- EMC Fibre Channel and iSCSI with QLogic Host Bus Adapter in the Linux v2.6.x Kernel Environment and the v8.x-Series Drivers, Rev A04, at http://powerlink.emc.com/
- EMC Fibre Channel and iSCSI with QLogic Host Bus Adapter in the Windows Environment, Rev A13, at http://powerlink.emc.com/
- EMC Network Storage Topology Guide, Rev A05, at http://powerlink.emc.com/
- EMC Security for Fibre Channel Storage Area Networks Best Practices Planning, August 2006, at http://powerlink.emc.com/
- QLogic Corporation EMC Approved Software website section at http://support.qlogic.com/support/oem_emc.asp
- QLogic SANblade 2200 and 2300 Series Troubleshooting Guide for EMC, April 2003, at http://www.qlogic.com/
- VMware Performance Tuning Best Practices for ESX Server 3, Technical Note, at http://vmware.com/
- VMware SAN System Design and Deployment Guide, March 2007, at http://vmware.com/
- VMware Technology Network (VMTN) at http://vmware.com/
- http://www.techtarget.com/ (several storage connectivity related articles)



QLogic Press Review


The QLogic Press authors value your feedback. We are especially interested in situations where a QLogic Press publication has made a difference. Please review the document, addressing value, subject matter, structure, depth, and quality as appropriate. Please use one of the following methods to provide feedback so we can make this document better suit your needs:

- Visit the QLogic website at http://www.qlogic.com/go/qlogic-press-review and complete the online form; or
- Print both pages of this Press Review form and fax the completed pages to (949) 389-6114.

Your Contact Information


The data you provide here may be used to provide you with information from QLogic about our products, services, or activities.
Name:
Title:
Company Name:
Email Address:
Phone Number:
Address:
City, State, Zip:

Please identify yourself as belonging to one of the following groups:


Customer / Business Partner / Solution Developer / QLogic Employee / Other (please specify):

QLogic Press Document Title

Please rate your overall satisfaction with the document:


Very Satisfied / Satisfied / Somewhat Satisfied / Not Satisfied



Review Comments

What other subjects would you like to see QLogic Press address?


QLogic Company Information


Powered by QLogic
QLogic is a leading supplier of high-performance storage networking solutions, producing the controller chips, host bus adapters (HBAs), and fabric switches that are the backbone of storage networks for most Global 2000 corporations. The company delivers a broad and diverse portfolio of products that includes Fibre Channel HBAs, blade server embedded Fibre Channel switches, Fibre Channel stackable switches, iSCSI HBAs, iSCSI routers, and storage services platforms for enabling advanced storage management applications. The company is also a leading supplier of InfiniBand switches and InfiniBand host channel adapters for the emerging High Performance Computing Cluster (HPCC) market. QLogic products are delivered to small-to-medium businesses and large enterprises around the world via its channel partner community. QLogic products are also powering solutions from leading companies like Cisco, Dell, EMC, Hitachi Data Systems, HP, IBM, NEC, Network Appliance, and Sun Microsystems. QLogic is a member of the S&P 500 Index. To learn more about QLogic, visit the QLogic website at www.qlogic.com or refer to the contact information on the following page.



Contacting QLogic
Corporate Headquarters
QLogic Corporation
26650 Aliso Viejo Parkway
Aliso Viejo, CA 92656
Phone: 949.389.6000
Fax: 949.389.6009

UK Sales Office
QLogic Corporation
Surrey Technology Centre
40 Occam Road
Guildford GU2 5YG
Surrey, UK
Phone: (44) 1483-295825
Fax: (44) 1483-295827

Germany Sales Office
QLogic Corporation
Terminalstr. Mitte 18
85356 München, Germany
Phone: (49) 89 97007-427

QLogic Asia Sales Office
QLogic Corporation
23F, 105 Dun Hua S. Road, Sec 2
Taipei 106, Taiwan, R.O.C.
Phone: 886-2-2755-0000 Ext. 501

Partner Programs
Channel Programs: 877.975.6442, reseller@qlogic.com
Interoperability Testing: solutions@qlogic.com
Business Alliances Programs: 949.389.6557, santrackpartner@qlogic.com

Sales Education and Technical Training


Technical Training: tech.training@qlogic.com
Sales Training: sales.training@qlogic.com

Sales Information
800.662.4471

Technical Support
952.932.4040 support@qlogic.com


Best Practices

QLogic FC HBA in an EMC Environment JULY 2007

About QLogic
QLogic is a leading supplier of high-performance storage networking solutions, producing the controller chips, host bus adapters (HBAs), and fabric switches that are the backbone of storage networks for most Global 2000 corporations. The company delivers a broad and diverse portfolio of products that includes Fibre Channel HBAs, blade server embedded Fibre Channel switches, Fibre Channel stackable switches, iSCSI HBAs, iSCSI routers, and storage services platforms for enabling advanced storage management applications. The company is also a leading supplier of InfiniBand switches and InfiniBand host channel adapters for the emerging High Performance Computing Cluster (HPCC) market. QLogic products are delivered to small-to-medium businesses and large enterprises around the world via its channel partner community. QLogic products are also powering solutions from leading companies like Cisco, Dell, EMC, Hitachi Data Systems, HP, IBM, NEC, Network Appliance, and Sun Microsystems. QLogic is a member of the S&P 500 Index. For more information go to http://www.qlogic.com. Recent accolades include:
- S&P 500 Index
- Forbes 200 Best Small Companies
- Fortune's 100 Fastest-Growing Companies
- Business Week Hot Growth Company
- Network Computing Well-Connected
- 2005 Storage Magazine SAN Product of the Year
- 2005 SANbox 5602 InfoStor and ASNP Most Valuable Product Finalist

Corporate Headquarters
QLogic Corporation
26650 Aliso Viejo Parkway
Aliso Viejo, CA 92656
949.389.6000

Europe Headquarters
QLogic (UK) Ltd.
Surrey Technology Centre
40 Occam Road
Guildford
Surrey GU2 7YG UK
+44 (0)1483 295825

FC0054602-00 Rev A 7/07
