Abstract
This guide discusses the best practices for planning, deploying, maintaining, and performance tuning a QLogic 4Gb Fibre Channel HBA in a SAN connected to EMC storage arrays.
Copyright 2003-2007 QLogic Corporation. All rights reserved. THE INFORMATION PROVIDED IN THIS DOCUMENT IS PROVIDED AS IS WITHOUT WARRANTY OF ANY KIND, INCLUDING ANY WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, INTEROPERABILITY, OR COMPATIBILITY. All of the Partners' products are warranted in accordance with the agreements under which the warranty for the products are provided. Unless otherwise specified, the product manufacturer, supplier, or publisher of non-Partner products provides warranty, service, and support directly to you. THE PARTNERS MAKE NO REPRESENTATIONS OR WARRANTIES REGARDING THE PARTNERS PRODUCTS OR NON-PARTNER PRODUCTS, AND NO WARRANTY IS PROVIDED FOR EITHER THE FUNCTIONALITY OR PROBLEM RESOLUTION OF ANY PRODUCTS. The inclusion of the Partners' products on an interoperability list is not a guarantee that they will work with the other designated storage products. In addition, not all software and hardware combinations created from compatible components will necessarily function properly together. The following list includes products developed or distributed by companies other than the Partners. The Partners do not provide service or support for the non-Partner products listed, but do not prohibit these products from being used together with Partner storage products. During problem debug and resolution, the Partners may require that hardware or software additions be removed from products to provide problem determination and resolution on the supplied hardware/software. For support issues regarding non-Partner products, please contact the manufacturer of the product directly. This information could include technical inaccuracies or typographical errors. The Partners do not assume any liability for damages caused by such errors as this information is provided AS IS for convenience only; readers use this information at their own risk and should confirm any information contained herein with the associated vendor. 
Changes are periodically made to the content of this document. These changes will be incorporated in new editions of the document. The Partners may make improvements and/or changes in the product(s) and/or the program(s) described in this document at any time without notice. Any references in this information to non-Partner websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this interoperability guide, and the use of those websites is at your own risk. Information concerning non-Partner products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. The Partners have not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to those products. Questions about the capabilities of non-Partner products should be addressed to the suppliers of those products. All statements regarding the Partners' future direction or intent are subject to change or withdrawal without notice and represent goals and objectives only. This information is for planning purposes only; any use of the information contained herein is at the user's sole risk. The information herein is subject to change before the products described become available. QLogic, the QLogic logo, QLogic Press, the QLogic Press logo, Powered by QLogic, QLA, SANblade, SAN Pro, SANsurfer, SANsurfer Management Suite, and SANtrack are trademarks or registered trademarks of QLogic Corporation in the United States, other countries, or both. EMC and PowerPath are trademarks or registered trademarks of EMC Corporation in the United States, other countries, or both. Microsoft and Windows are trademarks or registered trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of others. 
The QLogic home page on the Internet can be found at http://www.qlogic.com. QLogic Corporation 26650 Aliso Viejo Parkway Aliso Viejo, California 92656 Tel: 949.389.6000 Fax: 949.389.6114
Table of Contents
Introduction
    Audience
    What are Host Bus Adapter Best Practices?
    Supported HBAs
    How this Guide is Organized
Statement of Services and Support
SAN Planning and Deployment Best Practices
    Using a 4Gb HBA in a 2Gb FC SAN
    High Availability Best Practices
        Choosing One or More Single/Dual/Quad Port HBAs
        High Availability Features of the QLogic HBA
        Connecting the SAN for High Availability
        Validating Failover Configurations
    Choosing the Right Port on the Switch for Host and Storage Ports
    Considerations for Tape Access via an HBA
    Check Interoperability of SAN Components
Appendix A QLogic HBA Tasks and Tools Sheet............................................. 51 Appendix B QLogic HBA LED Scheme.............................................................. 55 References............................................................................................................... 57 QLogic Press Review ............................................................................................. 59 QLogic Company Information ............................................................................... 61
Introduction
QLogic 4Gb Fibre Channel (FC) host bus adapters (HBAs) are designed and developed to provide industry-leading performance, simplify management, and offer fully interoperable solutions in the most demanding enterprise storage area network (SAN) environments. This guide, QLogic FC HBA in an EMC Environment, presents a compilation of best practices to apply when using QLogic FC HBAs in an EMC SAN.
Audience
This guide is intended for SAN architects, IT administrators, and storage system professionals who currently use, or are considering using, a QLogic HBA in a Fibre Channel SAN.
Supported HBAs
This guide covers the following QLogic 4Gb FC HBAs used in EMC SAN environments. These QLogic SANblade HBAs are collectively referred to as QLogic HBAs or adapters throughout this guide.
QLogic SANblade HBA   FC Rate   Host Bus      Number of Ports
QLA2460-E             4Gb       PCI-X 2.0     1
QLA2462-E             4Gb       PCI-X 2.0     2
QLE2460-E             4Gb       PCI Express   1
QLE2462-E             4Gb       PCI Express   2
Feature comparison, 2Gb vs. 4Gb HBAs:

Category               Feature                              2Gb   4Gb   Benefit
Superior Scalability   Virtualization (N_Port Virtual ID)   No    Yes   Higher availability and security
Superior Scalability   VSAN                                 No    Yes   Improved QoS and lower TCO
Enhanced Reliability   T10 CRC Support                      No    Yes   Provides end-to-end data integrity
Enhanced Reliability   Overlapping Protection Domains       No    Yes   Higher data reliability
SAN Planning and Deployment Best Practices

High Availability Best Practices
Choosing the Right Port on the Switch for Host and Storage Ports
In addition to validating failover configuration, performance is a consideration when choosing switch ports. If the switch being used contains multiple ASICs, QLogic recommends connecting each host/storage pair to the same ASIC; this avoids the shared internal data transfer bus and reduces switch latency. Fault tolerance matters as well: if a host has two HBAs, each accessing its own storage port, do not attach both HBAs, both storage ports, or all of the HBA and storage ports to the same switch ASIC.
[Figure: Two hosts, each with HBAs connected through separate FC switch ASICs to array storage ports SPA and SPB]
Check Interoperability of SAN Components
EMC recommends using the E-Lab Interoperability Navigator to obtain this information. Follow the links from https://elabnavigator.emc.com/. QLogic also provides a comprehensive SANtrack interoperability guide. This guide is available in electronic or print form from http://www.qlogic.com/interopguide/.
HBA Installation Best Practices

Understand the LED Scheme for QLogic HBAs
Transaction-based and throughput-based processing are types of workload. The workload is the total amount of work performed at the storage server, and is measured through the following formula:

    Workload = [transactions (number of host IOPS)] * [throughput (amount of data sent in one I/O)]

Since a storage server can sustain a given maximum workload, the formula shows that as the number of host transactions increases, the amount of data sent per I/O must decrease. Conversely, if the host is sending large volumes of data with each I/O, the number of transactions decreases. A workload characterized by a high number of transactions (IOPS) is called a transaction-based workload. A workload characterized by large I/Os is called a throughput-based workload. These two workload types are conflicting in nature, and consequently require different configuration settings across all parts of the storage solution. Generally, I/O (and therefore application) performance is best when the I/O activity is evenly spread across the entire I/O subsystem. On the Windows platform, the I/O transfer size of an application can be determined using Perfmon. The following sections further describe each workload type.
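The inverse relationship the formula describes can be sketched numerically. The 400 MB/s ceiling and the I/O sizes below are illustrative assumptions, not measured limits of any particular array:

```python
# Illustration of the workload formula from this section:
#   workload (MB/s) = IOPS * I/O size
# The ceiling figure below is hypothetical, not a limit of any real array.

def max_iops(max_workload_mb_s: float, io_size_kb: float) -> float:
    """Transactions/s sustainable at a fixed workload ceiling."""
    return max_workload_mb_s * 1024 / io_size_kb

CEILING_MB_S = 400.0  # assumed storage-server workload ceiling

# Small I/Os -> transaction-based workload: many IOPS, little data per I/O.
print(max_iops(CEILING_MB_S, 4))    # 4 KB I/Os  -> 102400.0 IOPS
# Large I/Os -> throughput-based workload: few IOPS, much data per I/O.
print(max_iops(CEILING_MB_S, 256))  # 256 KB I/Os -> 1600.0 IOPS
```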
Throughput rates are heavily dependent on the storage server's internal bandwidth. Throughput-based workloads have a relatively low host CPU overhead.
Typical enterprise application workload types:

Workload Type          Enterprise Application
Transaction oriented   Oracle, Microsoft Exchange & SQL, Lotus Domino, Web Server, File Server
Throughput oriented    Log files
Throughput oriented    Backup and recovery; video and audio streaming

Database workloads break down as follows:

Database Workload                     Access Type   I/O Type                         Workload Type
OLTP Data                             Random        Equally split reads and writes   Transactional
OLTP Lazy Write                       Random        Equally split reads and writes   Throughput
OLTP Checkpoint                       Sequential    Write                            Transactional
Read ahead (DSS, table/index scans)   Sequential    Read or Write                    Primarily throughput
Bulk insert                           Sequential    Write                            Throughput

HBA Parameters that Impact HBA Performance
The following HBA parameters impact performance:

Execution Throttle: Specifies the maximum number of I/O commands allowed to execute on an HBA port. Default: 256. Range: 1-256. OS support: Windows.
Frame Size: Specifies the size of a Fibre Channel frame per I/O. Default: 2048. Range: 512-2048. OS support: All.
Data Rate: Specifies the HBA adapter data rate. When set to Auto, the adapter auto-negotiates the data rate with the connecting SAN device. Default: Auto. Range: 1 (Auto), 2 (1Gb), 3 (2Gb), 4 (4Gb). OS support: All.

Maximum Queue Depth: Specifies the maximum number of I/O commands allowed to execute/queue on an HBA port. Default: 32.
Maximum Scatter Gather List Size: Specifies the size of the list of DMA items that are reported to the SCSI mid-level per I/O request. Default: 32.
Maximum Sectors: Specifies the maximum number of disk sectors that are reported to the SCSI mid-level per I/O request. Default: 512.
The following sections focus on tuning HBA performance in Windows and VMware ESX environments.
HBA Performance Tuning in Microsoft Windows
Execution Throttle
What is Execution Throttle?
Execution throttle is an HBA parameter that controls the maximum number of I/O commands executing on any one HBA port. When a port's execution throttle is reached, no new commands are executed until a current command finishes executing. The valid range for this setting is 1-256. For Microsoft Windows environments, this parameter is set to the default value of 16 or to an OEM-specific value. With this value, at any point in time one HBA port can have a maximum of 16 I/O commands executing; any further I/O commands must wait before they can be scheduled in the HBA. The value of the Execution Throttle parameter can be increased to allow the HBA to execute more commands concurrently, or decreased to limit the amount of I/O an HBA port can put on the wire.
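The throttling behavior can be illustrated with a deliberately simplified model (equal service times, batch-at-a-time scheduling). The real driver overlaps command completions, so treat this only as intuition for why the setting matters:

```python
import math

# Simplified model of Execution Throttle (not the driver's actual scheduler):
# commands beyond the throttle wait, so draining a burst of equal-cost I/Os
# proceeds in batches of at most `throttle` commands at a time.

def drain_time_ms(commands: int, throttle: int, service_ms: float) -> float:
    """Approximate time to complete `commands` I/Os, batch-at-a-time."""
    return math.ceil(commands / throttle) * service_ms

# 128 outstanding commands, 5 ms each:
print(drain_time_ms(128, 16, 5.0))   # throttle of 16 -> 40.0 ms
print(drain_time_ms(128, 64, 5.0))   # raised throttle of 64 -> 10.0 ms
```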
HBA Performance Tuning in VMware ESX Server

HBA Max Queue Depth
Denoted in driver by: ql2xmaxqdepth
Range: 1-256
The effects of changing the HBA Max Queue Depth parameter are most noticeable when the application workload type is transaction oriented.
When determining loading in any mass storage SAN setup, it is necessary to understand the factors used; these factors are at the SCSI layer level in both the initiator (host) and the target (device), and they provide an I/O flow control mechanism between the initiator and the target. The host flow control variable is the queue depth per LUN; the device flow control variable is the target port queue depth. The diagram shows the placement of these variables in the I/O architecture. The following five main variables assess the loading factor on a device port:

P = The number of host paths connected to that array device port.
q = Queue depth per LUN on the hosts (for the host port); i.e., the maximum I/Os outstanding per LUN from the host at any given time. This is a QLogic FC driver/SCSI driver parameter.
L = The number of LUNs configured on the array device port as seen by a particular host path.
T = The maximum queue depth per array device port, which signifies the maximum number of I/Os outstanding (that can be handled) on that port at a given time. The value for T is typically 2048 for EMC arrays.
Q = Queue depth per HBA port; i.e., the maximum I/Os that an HBA can have outstanding at an instant. This parameter is needed only if q (queue depth per LUN) is not available.
These variables relate to each other through the following equation:

    T >= P * q * L

That is, the target port queue depth should be greater than or equal to the product of the number of host paths, the per-LUN queue depth, and the number of LUNs configured. For heterogeneous hosts connected to the same port on the device, use the following equation:

    T >= Host OS 1 (P * q * L) + Host OS 2 (P * q * L) + ... + Host OS n (P * q * L)

Key Considerations

The target port queue depth value (2048 for many EMC arrays) used in the equations above is on a per-port basis; that is, 2048 outstanding (simultaneous) I/Os at a time. Therefore, one EMC controller with four ports can handle 2048 x 4 = 8192 outstanding I/Os at a time without any port overloading, and two controllers can handle 16384. Check the specifications of your EMC array to get the exact number.

Not abiding by the above equations can result in either under-utilizing the target device (aggregate host load well below T) or flooding it (aggregate host load above T). Flooding the target device queue results in a QFULL condition. A QFULL is an I/O throttling response sent by the storage array's SCSI layer to an HBA port to notify the port that its I/O processing limit has been reached and it cannot accept any more I/Os until it completes its current set. In VMware ESX, when a QFULL condition is received from the target, the HBA driver typically decreases the LUN's maximum queue depth to the minimum value (usually 1), throttling I/O to the target port. When the target stops issuing QFULL conditions, the HBA driver gradually increases the LUN queue depth value, thereby slowly increasing I/O to the target port. Excessive QFULL responses will drastically decrease the performance of your SAN connection. If you suspect flooding, you can enable the Extended Error Logging flag in the HBA parameters to view extended logging and see whether a QFULL is being received by the HBA.
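The loading equations above can be wrapped in a small helper for sanity-checking a planned configuration. The host figures in the example are hypothetical:

```python
# Sketch of the loading check from this section's equation T >= P * q * L,
# summed over heterogeneous hosts sharing one array device port.

def port_load(hosts):
    """hosts: iterable of (P, q, L) = paths, per-LUN queue depth, LUNs."""
    return sum(P * q * L for (P, q, L) in hosts)

def check_port(T, hosts):
    load = port_load(hosts)
    if load > T:
        return f"overloaded: {load} > {T} (risk of QFULL)"
    return f"ok: {load} <= {T}"

# Hypothetical example against an EMC-style target port queue depth of 2048.
hosts = [(2, 32, 10),   # host 1: 2 paths, q=32, 10 LUNs -> 640
         (2, 32, 20)]   # host 2: 2 paths, q=32, 20 LUNs -> 1280
print(check_port(2048, hosts))  # ok: 1920 <= 2048
```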
The figure below shows the output of the storage statistics section of esxtop while I/O is active. LQLEN shows the current HBA queue depth set in the QLogic HBA driver. A LOAD value greater than 1 indicates that the host application is placing more data in the HBA queue than its current size can handle; a system with this issue can benefit from an increase in the HBA queue depth. Note that any increase in the HBA queue depth should be guided by the equation discussed in I/O Loading Fundamentals earlier in this chapter.
The figure below shows the effect of increasing the HBA queue depth. This esxtop output was captured after increasing the HBA queue depth from its default value of 32 to 64. Note that LOAD is now less than 1 and there is a significant increase in READS/s operations, which means that performance has improved.
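Assuming LOAD is roughly the ratio of outstanding I/Os to the configured queue depth (an interpretation of the esxtop statistic for illustration, not its documented formula), the tuning decision described above can be sketched as:

```python
# Rough interpretation of the esxtop LOAD statistic discussed above,
# assuming LOAD ~= outstanding I/Os / configured queue depth (LQLEN).

def load(outstanding_ios: int, queue_depth: int) -> float:
    return outstanding_ios / queue_depth

def suggest(outstanding_ios: int, queue_depth: int) -> str:
    if load(outstanding_ios, queue_depth) > 1.0:
        return "LOAD > 1: consider raising queue depth (subject to T >= P*q*L)"
    return "LOAD <= 1: queue depth is adequate"

print(suggest(48, 32))  # default depth of 32 is overloaded
print(suggest(48, 64))  # after raising the depth to 64
```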
Look at the VMware kernel log files:
a. View the file /var/log/vmkernel in your favorite editor after logging on to the VMware ESX console.
b. Look for the string "Queue depth". The value assigned to this field is a decimal representation of the currently configured queue depth for the specified HBA port.
HBA Max SG List
Denoted in driver by: ql2xmaxsgs
Range: 1-255
Modify the entry as follows, where xx is the new HBA Max SG List value:
/vmkmodule[0002]/module = "qla2300_707.o" /vmkmodule[0002]/options = "ql2xmaxsgs=xx"
If you need to change more than one option, separate them with spaces in the options field. For example:
/vmkmodule[0002]/module = "qla2300_707.o" /vmkmodule[0002]/options = "ql2xmaxsgs=xx ql2xmaxqdepth=64"
Maximum Sectors
This QLogic HBA driver parameter defines the maximum number of disk sectors for a LUN that are reported by the HBA driver to the SCSI mid-level. The value of this parameter is used by the block device driver to obtain a DMA transaction size for each I/O operation. The following table specifies the HBA Max Sectors parameter name, defaults, and range. This HBA driver parameter is not available on Windows operating systems.
HBA Max Sectors
Denoted in driver by: ql2xmaxsectors
Range: 512, 1024, 2048
QLogic recommends experimenting with increasing the HBA Max Sectors parameter for large I/Os or decreasing it for small I/Os, then measuring for a performance gain. Do not expect a big performance jump, as the default value for the HBA Max Sectors parameter is pre-optimized for most application environments. QLogic does not recommend setting the value of HBA Max Sectors above 2048 in any application environment. The recommended procedure for changing and verifying the HBA Max Sectors parameter is similar to the procedure for changing the HBA Max SG List, described earlier in this chapter. When verifying a successful change, look for the string max_sectors in the vmkernel file.
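Assuming the conventional 512-byte disk sector (an assumption, since the guide does not state the sector size), the maximum DMA transfer implied by each valid ql2xmaxsectors value works out as follows:

```python
# The driver derives a DMA transaction size from ql2xmaxsectors; assuming
# a 512-byte disk sector, the maximum transfer per I/O is sectors * 512.

def max_dma_bytes(ql2xmaxsectors: int) -> int:
    if ql2xmaxsectors not in (512, 1024, 2048):
        raise ValueError("valid ql2xmaxsectors values: 512, 1024, 2048")
    return ql2xmaxsectors * 512

for sectors in (512, 1024, 2048):
    print(sectors, "->", max_dma_bytes(sectors) // 1024, "KB per I/O")
# 512 -> 256 KB, 1024 -> 512 KB, 2048 -> 1024 KB
```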
Monitoring Performance
To determine where a performance problem exists, or where a potential problem will occur, it is important to gather data from all the components of the storage solution. It is not uncommon to be misled by a single piece of information and misdiagnose the cause of poor system performance, only to realize that another component of the system is causing the problem. This section looks at what utilities, tools, and monitors are available to help you analyze what is happening within your SAN environment.

Transaction performance is generally perceived to be poor when the following conditions occur:
- Random read/write operations exceed 20ms (without write cache).
- Random write operations exceed 2ms with the cache enabled.
- I/Os queue up in the operating system I/O stack (due to a bottleneck).

Throughput performance is generally perceived to be poor when the disk capability is not being reached. Causes of this can stem from the following situations:
- With read operations, read-ahead is being limited, preventing higher amounts of immediate data from being available.
- I/Os queue up in the operating system I/O stack (due to a bottleneck).

Most operating systems, Fibre Channel adapters, Fibre Channel switches, and disk arrays provide a host of utilities (add-on and built-in) to help you monitor I/O performance.
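The rule-of-thumb latency thresholds above can be captured in a small helper. The 2 ms and 20 ms figures come from this section, while the function name and shape are illustrative:

```python
# Sketch encoding the rule-of-thumb latency thresholds from this guide;
# the figures are guidance from this document, not a formal standard.

def transaction_latency_ok(latency_ms: float, write_cache: bool) -> bool:
    """Random I/O latency judged against the guide's thresholds."""
    threshold_ms = 2.0 if write_cache else 20.0
    return latency_ms <= threshold_ms

print(transaction_latency_ok(15.0, write_cache=False))  # True: under 20 ms
print(transaction_latency_ok(5.0, write_cache=True))    # False: over 2 ms
```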
QLogic recommends breaking down the measurement into the following three parts:
- Gathering host server data
- Gathering fabric network data
- Gathering EMC storage server data
QLogic HBA performance monitor
  OS: Windows, Linux, Solaris
  Key statistics: HBA port IOPS, MBps, error statistics
  Usage: http://support.qlogic.com/support/oem_emc.asp

perfmon
  OS: Windows
  Description: Windows built-in real-time performance monitoring tool; can set alerts for various thresholds.
  Key statistics: CPU, disk, memory, network
  Usage: Administrative Tools > Performance, or enter perfmon at the command line

esxtop
  OS: VMware
  Description: Displays ESX Server resource utilization statistics in real time.
  Key statistics: CPU, disk, memory, network
  Usage: /usr/bin/esxtop

VMware performance tabs
  OS: VMware
  Description: Performance tabs for VMware applications display performance charts per virtual machine and host server.
  Key statistics: Virtual machine CPU, disk, memory, network statistics

VMware 2.x historical graphs
  OS: VMware 2.x
  Description: Historical graphs that show physical server and virtual machine system statistics.
  Key statistics: Virtual machine CPU, disk, memory, network statistics
Choosing the Optimal PCI Slot for Your HBA
PCI Slot           Bus Width   Power   Slot Key
PCI-X 2.0          64-bit      3.3V    3.3V
PCI-X 2.0          64-bit      3.3V    3.3V
PCI Express 1.0a   x4 lane     3.3V    n/a
PCI Express 1.0a   x4 lane     3.3V    n/a
A typical motherboard I/O diagram is depicted below; it shows the maximum available bandwidth in each direction for various slot types and the difference between these slot types.
Match slot speeds with device/adapter speeds; slower or non-performance-critical devices should occupy the slower PCI/PCI-X slots. HBAs without a slot key can be inserted into any PCI slot; you can insert a 64-bit HBA into a 32-bit PCI slot if no 64-bit PCI-X slots are available. In this case, however, the data transmission rate is limited to standard PCI speed, so avoid such HBA placements.
General SAN Considerations for Performance
Minimize ISL
QLogic recommends that you locate the HBA and the storage ports it will access on the same switch. Otherwise, try to minimize ISLs and decrease the number of hops. The more hops the data takes, the slower it reaches its destination.
Upgrade to 4Gb
QLogic recommends replacing all 2Gb Fibre Channel adapters with QLogic 4Gb adapters, as 4Gb Fibre Channel adapters have many advantages over 2Gb adapters. Besides performance, they offer new features, higher availability, and better troubleshooting capabilities.
Fan-In Considerations
Consider the ratio of storage ports bound to a single QLogic HBA port; this ratio is called the Fan-In. Ideally, the Fan-In is 1:1; that is, one storage port is bound to a single HBA port. If the application accessing the storage ports is not bandwidth intensive, the Fan-In can be relaxed to 3:1 or beyond, depending on the application and time-of-day load. Visit powerlink.emc.com to review EMC-recommended Fan-In and Fan-Out ratios for popular operating systems and EMC arrays.
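A back-of-the-envelope check of a planned Fan-In ratio might look like the sketch below. The ~400 MB/s usable bandwidth figure for a 4Gb FC port and the per-port demands are assumptions for illustration:

```python
# Back-of-the-envelope fan-in check: can one HBA port's bandwidth cover the
# combined demand of the storage ports bound to it? Numbers are assumptions.

HBA_PORT_MB_S = 400.0  # ~4Gb FC usable bandwidth, roughly 400 MB/s

def fan_in_ok(storage_port_demands_mb_s):
    """True if the summed demand of the bound storage ports fits the HBA port."""
    return sum(storage_port_demands_mb_s) <= HBA_PORT_MB_S

print(fan_in_ok([350.0]))                # 1:1, bandwidth-intensive app -> True
print(fan_in_ok([120.0, 120.0, 120.0]))  # 3:1 relaxed fan-in -> True
print(fan_in_ok([350.0, 350.0]))         # 2:1 with heavy load -> False
```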
Avoiding RSCNs
Assign a unique Fibre Channel switch zone between the server initiator and the storage controller processor ports. That is, for each WWN (each initiator HBA), regardless of the number of ports, there should be one unique zone. This zoning strategy isolates an initiator from Registered State Change Notification (RSCN) disruptions, which can hamper performance.
EMC recommends using PowerPath, EMC's automated, non-disruptive load-balancing solution, to balance I/O between paths in your SAN. Visit www.emc.com to learn more about PowerPath. PowerPath is not available for VMware ESX Server. NOTE: With PowerPath, servers are continually tuned to adjust to changing application loads, because channel directors/storage processors write to cache instead of disk.
Fibre Channel Security Best Practices

Generic Fibre Channel SAN Security

Appendix A: QLogic HBA Tasks and Tools Sheet
Task: Information

Tools and supported operating systems:
- SANsurfer FC HBA Manager: Windows, Solaris, NetWare, Mac, Linux
- SANsurfer FC CLI: Windows, Solaris, NetWare, Mac, Linux
- FlasUTIL: Any
- FAST!UTIL: Any
- EfiUTIL: Any
- Linux Tools: Linux

Subtasks:
- Serial number
- Model/type
- OptionROM (BIOS/FCode/EFI) version
- Firmware version
- Driver version
- Fibre Channel specific data: WWN, loop ID
- Target information
- VPD displays
- Graphic SAN topology
Task: Monitoring/Diagnostics

Tools: SANsurfer FC HBA Manager, SANsurfer FC CLI, FlasUTIL, FAST!UTIL, EfiUTIL, Linux Tools

Subtasks:
- Real-time status of the HBAs
- On-demand Fibre Channel link status
- Read/write buffer test
- Loopback test (a)
- SFP health
- Verifying NVRAM, flash
- Email notification of fabric events
- Error/alarm reporting
- Statistics
- Snapshot (b)
- Low-level disk commands
- LUN state transition (c)

Notes:
a. With loopback plug.
b. Use Linux tool ql-hba-snapshot.sh.
c. Use Linux tool ql-lun-state.sh.
Task: Configuration

Tools: SANsurfer FC HBA Manager, SANsurfer FC CLI, FlasUTIL, FAST!UTIL, EfiUTIL, Linux Tools

Subtasks:
- Driver settings
- NVRAM settings
- Persistent binding
- LUN masking
- Updating factory NVRAM defaults
- BOOT from SAN device
- Scan for Fibre Channel devices (a)
- Remote HBA management
- SCSI command timeout value (c)
- Failover
- Device replacement
- Option ROM config
- iiDMA setting

Notes:
a. Use Linux tool ql-scan-lun.sh.
b. Capability limited to Linux tools ql-lun-state.sh and ql-set-cmd-timeout.sh.
c. Use Linux tool ql-set-cmd-timeout.sh.
Task: Maintenance

Tools: SANsurfer FC HBA Manager, SANsurfer FC CLI, FlasUTIL, FAST!UTIL, EfiUTIL, Linux Tools

Subtasks:
- Driver installation/update
- Change boot code/Option ROM (BIOS, EFI, FCode)
- Copying the current flash and NVRAM contents
- Installation reports

Task: Other

Tools: SANsurfer FC HBA Manager, SANsurfer FC CLI, FlasUTIL, FAST!UTIL, EfiUTIL, Linux Tools

Subtasks:
- Scripting
- Generate XML
- Generate a summary of the current configuration of the local host/HBA/devices
Appendix B: QLogic HBA LED Scheme

The HBA's yellow, green, and amber LEDs indicate the following activities:
- Power Off: all LEDs off
- Power On (before firmware initialization)
- Power On (after firmware initialization)
- Firmware Error: yellow, green, and amber LEDs flash alternately
- Online, 1 Gbps link / I/O activity: one LED on/flashing, the others off
- Online, 2 Gbps link / I/O activity: one LED on/flashing, the others off
- Online, 4 Gbps link / I/O activity: one LED on/flashing, the others off
- Beacon: flashing
References
- EMC CLARiiON High Availability (HA) Best Practices, July 2005, http://powerlink.emc.com/
- EMC Fibre Channel and iSCSI with QLogic Host Bus Adapter in the Linux v2.6.x Kernel Environment and the v8.x-Series Drivers, Rev A04, http://powerlink.emc.com/
- EMC Fibre Channel and iSCSI with QLogic Host Bus Adapter in the Windows Environment, Rev A13, http://powerlink.emc.com/
- EMC Network Storage Topology Guide, Rev A05, http://powerlink.emc.com/
- EMC Security for Fibre Channel Storage Area Networks Best Practices Planning, August 2006, http://powerlink.emc.com/
- QLogic Corporation EMC Approved Software website section, http://support.qlogic.com/support/oem_emc.asp
- QLogic SANblade 2200 and 2300 Series Troubleshooting Guide for EMC, April 2003, http://www.qlogic.com/
- VMware Performance Tuning Best Practices for ESX Server 3, Technical Note, http://vmware.com/
- VMware SAN System Design and Deployment Guide, March 2007, http://vmware.com/
- VMware Technology Network (VMTN), http://vmware.com/
- http://www.techtarget.com/: several storage connectivity related articles
QLogic Press Review

Visit the QLogic website at http://www.qlogic.com/go/qlogic-press-review and complete the online form, or print both pages of this Press Review form and fax the completed pages to (949) 389-6114.
Review Comments
What other subjects would you like to see QLogic Press address?
Contacting QLogic
Corporate Headquarters: QLogic Corporation, 26650 Aliso Viejo Parkway, Aliso Viejo, CA 92656. Phone: 949.389.6000. Fax: 949.389.6009.
UK Sales Office: QLogic Corporation, Surrey Technology Centre, 40 Occam Road, Guildford GU2 5YG, Surrey, UK. Phone: (44) 1483-295825. Fax: (44) 1483-295827.
Germany Sales Office: QLogic Corporation, Terminalstr. Mitte 18, 85356 München, Germany. Phone: (49) 89 97007-427.
QLogic Asia Sales Office: QLogic Corporation, 23F, 105 Dun Hua S. Road, Sec 2, Taipei 106, Taiwan, R.O.C. Phone: 886-2-2755-0000 Ext. 501.
Partner Programs
Channel Programs: 877.975.6442, reseller@qlogic.com
Interoperability Testing: solutions@qlogic.com
Business Alliances Programs: 949.389.6557, santrackpartner@qlogic.com
Sales Information
800.662.4471
Technical Support
952.932.4040 support@qlogic.com
About QLogic
QLogic is a leading supplier of high-performance storage networking solutions, producing the controller chips, host bus adapters (HBAs) and fabric switches that are the backbone of storage networks for most Global 2000 corporations. The company delivers a broad and diverse portfolio of products that includes Fibre Channel HBAs, blade server embedded Fibre Channel switches, Fibre Channel stackable switches, iSCSI HBAs, iSCSI routers and storage services platforms for enabling advanced storage management applications. The company is also a leading supplier of InfiniBand switches and InfiniBand host channel adapters for the emerging High Performance Computing Cluster (HPCC) market. QLogic products are delivered to small-to-medium businesses and large enterprises around the world via its channel partner community. QLogic products are also powering solutions from leading companies like Cisco, Dell, EMC, Hitachi Data Systems, HP, IBM, NEC, Network Appliance and Sun Microsystems. QLogic is a member of the S&P 500 Index. For more information go to http://www.qlogic.com. Recent accolades include:
- S&P 500 Index
- Forbes 200 Best Small Companies
- Fortune's 100 Fastest Growing Companies
- Business Week Hot Growth Company
- Network Computing Well Connected
- 2005 Storage Magazine SAN Product of the Year
- 2005 SANbox 5602 InfoStor and ASNP Most Valuable Product Finalist
Corporate Headquarters
QLogic Corporation