
System Administration Guide: Multipath Configuration

For Solaris 10 Operating System

Sun Microsystems, Inc. www.sun.com

Part No. 819-0139-10 January 2005 Submit comments about this document at: http://www.sun.com/hwdocs/feedback

Copyright 2004 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved. Sun Microsystems, Inc. has intellectual property rights relating to technology that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at http://www.sun.com/patents and one or more additional patents or pending patent applications in the U.S. and in other countries. This document and the product to which it pertains are distributed under licenses restricting their use, copying, distribution, and decompilation. No part of the product or of this document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers. Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and in other countries, exclusively licensed through X/Open Company, Ltd. Sun, Sun Microsystems, the Sun logo, AnswerBook2, docs.sun.com, OpenBoot, Solstice DiskSuite, Sun StorEdge, and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and in other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and in other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. The OPEN LOOK and Sun Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. 
Sun holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license agreements. Netscape Navigator is a trademark or registered trademark of Netscape Communications Corporation in the United States and other countries. U.S. Government Rights - Commercial use. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions of the FAR and its supplements. DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.


Contents

Preface

1. Multipathing Overview
   Fibre Channel and Multipathing Features
      Multipath Driver Features
   Supported Standards

2. Configuring Multipathing Software
   Configuring Multipathing
      Task Summary To Configure Multipathing
      Reconfiguration Reboot Requirements
   Enabling Multipathing Globally
      To Configure Multipathing Globally
   Enabling Multipathing on a Per Port Basis
      Considerations for Per-Port Configuration
      To Configure Multipathing by Port
      To Configure a Single PCI HBA
      To Configure a Dual PCI HBA
      To Configure an SBus HBA
   Configuring Automatic Failback for Sun StorEdge Arrays
      To Configure Automatic Failback Capability
      To Disable Automatic Failback Capability
   Configuring Third-Party Symmetric Storage Devices for Multipathing
      Considerations for Third-Party Device Configuration
      To Configure Third-Party Devices
   Configuring Solstice DiskSuite or Solaris Volume Manager for Multipathing
      Solstice DiskSuite or Solaris Volume Manager Configuration Overview
      Example of Migrating Mirrored Devices
      To Migrate Two-Way Mirrored Metadevices

3. Configuring SAN Devices
   SAN Device Considerations
   Adding SAN Devices
      To Add a SAN Device
   Fabric Device Node Configuration
      Ensuring that LUN-Level Information Is Visible
      To Detect Fabric Devices Visible on a Host
   Configuring Device Nodes Without Multipathing Enabled
      To Configure an Individual Device Without Multipathing
      To Configure Multiple Devices Without Multipathing
   Configuring Device Nodes With Multipathing Enabled
      To Configure Individual Devices With Multipathing
      To Configure Multiple Devices With Multipathing

4. Configuring Multipathing Support for Storage Boot Devices
   About the stmsboot Command
   Considerations for stmsboot Operations
   Displaying Potential /dev Device Name Changes
      To Display Potential /dev Device Name Changes
   Enabling Multipathing on the Boot Controller Port
      To Enable Multipathing on the Boot Controller Port
   Disabling Multipathing on the Boot Controller Port
      To Disable Multipathing on the Boot Controller Port

5. Configuring IPFC SAN Devices
   Loading IPFC
   IPFC Considerations
   Determining Fibre Channel Adapter Port Instances
      To Determine Port Instances from the WWN
      To Determine Port Instances From the Physical Device Path
      To Plumb an IPFC Instance
   Invoking and Configuring IPFC
      To Start a Network Interface Manually
      To Configure the Host for Automatic Plumbing Upon Reboot

6. Unconfiguring Fabric Devices
   Unconfiguring Fabric Devices
      To Unconfigure a Fabric Device
      To Unconfigure All Fabric Devices on a Fabric-Connected Host Port
      To Unconfigure a Fabric Device Associated With Multipathing Enabled Devices
      To Unconfigure One Path to a Multipathing Device
      To Unconfigure All Fabric-Connected Devices With Multipathing Enabled

A. Multipathing Configuration Samples
   About Multipathing Configuration Samples
   Configuration without Multipathing
      Utility: format
   Configuration with Multipathing
      Utility: format
      Utility: luxadm probe
      Utility: luxadm display
      Utility: luxadm failover

B. Supported FC-HBA API

C. Zones and Ports
   Zone Types
   Port Types

D. Implementing Sun StorEdge SAN Software Dynamic Reconfiguration
   Dynamic Reconfiguration
   Dynamic Reconfiguration and Fabric Devices
      To Remove a Fabric Device Before Dynamic Reconfiguration
      To Maintain a Fabric Device Configuration With Dynamic Reconfiguration
      To Reconfigure Fabric Devices With Dynamic Reconfiguration
      To Reconfigure the Sun Enterprise 10000 Server With a Fabric Connection

E. Multipathing Troubleshooting
   System Crashes During Boot Operations
      System Crashes During or After a Boot Enable Operation
      System Crashes During or After a Multipathing Boot Disable Operation
   Multipathing Is Not Running Properly
      luxadm display and luxadm failover Commands Fail
      Sun StorEdge T3, 6x20, or 3900 Arrays Do Not Show
      System Failed During Boot With scsi_vhci Attachment
      Connected Sun StorEdge A5200 Arrays Appear Under Physical Paths in Format
   System and Error Messages

Index

Preface
The System Administration Guide: Multipath Configuration provides an overview of the Solaris 10 Operating System (Solaris OS) software that is used to configure multipathing, with an explanation of how to install and then configure the software. This guide is intended for system, storage, and network administrators who create and maintain storage area networks (SANs) and have a high level of expertise in the management and maintenance of SANs.

Before You Read This Book


Before you read this book, review the late-breaking information described in the Solaris 10 Operating System Release Notes.


How This Book is Organized

Chapter 1 provides an overview of SAN and multipathing features and guidelines.

Chapter 2 provides an overview of the configuration process.

Chapter 3 explains how to configure multipathing devices.

Chapter 4 describes how to configure multipathing for a storage boot device.

Chapter 5 explains how to configure IPFC SAN devices.

Chapter 6 describes the steps required to unconfigure the multipathing software.

Appendix A provides configuration samples.

Appendix B lists the HBA API library commands.

Appendix C provides zone information.

Appendix D describes dynamic reconfiguration.

Appendix E provides a troubleshooting guide.

Using UNIX Commands


This document may not contain information on basic UNIX commands and procedures such as shutting down the system, booting the system, and configuring devices. See one or more of the following for this information:

- Solaris Handbook for Sun Peripherals
- AnswerBook2 online documentation for the Solaris Operating System


Typographic Conventions
Typeface    Meaning                                     Examples

AaBbCc123   The names of commands, files, and           Edit your .login file.
            directories; on-screen computer output      Use ls -a to list all files.
                                                        % You have mail.

AaBbCc123   What you type, when contrasted with         % su
            on-screen computer output                   Password:

AaBbCc123   Book titles, new words or terms, words      Read Chapter 6 in the User's Guide.
            to be emphasized; command-line              These are called class options.
            variables, to be replaced with a real       You must be superuser to do this.
            name or value                               To delete a file, type rm filename.

Shell Prompts
Shell                                   Prompt

C shell                                 machine_name%
C shell superuser                       machine_name#
Bourne shell and Korn shell             $
Bourne shell and Korn shell superuser   #


Related Documentation
Product: Solaris 10 Operating System

Title                                                   Part Number

Solaris 10 Operating System Release Notes               817-0552-nn
System Administration Guide: Basic Administration       817-1985-nn
System Administration Guide: Advanced Administration    817-0403-nn
System Administration Guide: IP Services                816-4554-nn
System Administration Guide: Network Services           817-4555-nn
System Administration Guide: Devices and File Systems   817-5093-nn

Contacting Sun Technical Support


If you need help installing or using this product, contact your service provider. If you have a support contract with Sun, call 1-800-USA-4SUN, or go to: http://www.sun.com/service/contacting/index.html

Accessing Sun Documentation


You can view, print, or purchase a broad selection of Sun documentation, including localized versions, at:
http://www.sun.com/documentation

A broad selection of Sun system documentation is located at:
http://www.sun.com/products-n-solutions/hardware/docs


A complete set of Solaris documentation and many other titles are located at: http://docs.sun.com

Sun Welcomes Your Comments


Sun is interested in improving its documentation and welcomes your comments and suggestions. You can submit your comments by going to:
http://www.sun.com/hwdocs/feedback

Please include the title and part number of your document with your feedback: System Administration Guide: Multipath Configuration, part number 819-0139-10


CHAPTER 1

Multipathing Overview
This chapter provides an overview of the multipathing capabilities of the Solaris 10 Operating System (Solaris OS). This information will prove helpful during installation and configuration of the software. This chapter contains the following sections:

- Fibre Channel and Multipathing Features on page 1
- Supported Standards on page 4

Fibre Channel and Multipathing Features


The Solaris 10 OS enables Fibre Channel (FC) connectivity for Solaris hosts. The software resides on the server and identifies the storage and switch devices on your SAN. It allows you to attach either loop or fabric SAN storage devices while providing a standard interface with which to manage them. This multipathing functionality was previously available through the Sun StorEdge SAN Foundation Software and Sun StorEdge Traffic Manager products. You can manage multiple redundant paths to storage for enhanced reliability and performance, as well as implement third-party multipathing solutions, while using the Solaris 10 OS. The Solaris 10 multipathing stack is compliant with the Sun Common SCSI Architecture driver model, and allows Sun or third-party volume managers, file systems, and multipathing software to operate seamlessly. Solaris 10 Fibre Channel and multipathing provide the following key features:

- Dynamic storage recovery - The Solaris 10 OS automatically recognizes devices and any modifications made to device configurations. This makes devices available to the system without requiring you to reboot or manually change information in configuration files.

- Persistent device naming - Devices that are configured within the SAN maintain their device naming through reboots and reconfiguration. The only exception is tape devices, found in /dev/rmt, which are persistent and will not change unless they are removed or later regenerated.

- FCAL support - OpenBoot PROM (OBP) commands used on servers can access Fibre Channel-Arbitrated Loop (FCAL) attached storage for scanning the Fibre Channel loop.

- Fabric booting - 1-Gbit and 2-Gbit host bus adapters (HBAs) supported by Sun can boot from fabric devices as well as non-fabric devices. Fabric topologies with Fibre Channel switches provide higher speed, more connections, and port isolation.

- Configuration management - Management can be performed with the cfgadm(1M) and luxadm(1M) commands. These commands control host-level access to devices and allow you to configure hosts to see only necessary devices, providing an alternative to switch zoning.

- T11 FC-HBA library - What was previously known as the Storage Networking Industry Association (SNIA) Fibre Channel HBA (FC-HBA) library is now known as the T11 FC-HBA library. The T11 FC-HBA library application programming interface (API) enables management of Fibre Channel HBAs and provides a standards-based interface for other applications (such as Sun StorEdge Enterprise Storage Manager Topology Reporter) to gather information about the SAN's HBAs, switches, and storage. Man pages for common FC-HBAs are included in the Solaris 10 OS. For additional information on Fibre Channel specifications (FC-MI), refer to http://www.t11.org. Appendix B lists functions supported by Sun's implementation of the T11 FC-HBA API.

Multipath Driver Features


The Solaris 10 OS includes multipath client driver software. Once enabled, this driver software can be configured to implement a set of interfaces for host controller interface (HCI) drivers, known as the multipath driver interface (MDI). In addition to providing the multipath driver interface, the driver also provides the following features:

- Path management - The Solaris 10 OS multipathing software dynamically manages the paths to any storage devices it supports. There are no configuration files to manage or databases to keep current for supported devices. The addition or removal of paths to a device is done automatically when a path is brought online or removed from service. This allows hosts configured with multipathing to begin with a single path to a device and add more host controllers, increasing bandwidth and RAS, without changing device names or modifying applications.

- Single device instances - Multipathing is fully integrated with the Solaris OS. This allows the software to display multipath devices as single device instances instead of as one device, or device link, per path. This reduces the cost of managing complex storage architectures, since it enables utilities such as format(1M), or higher-level applications such as Solaris Volume Manager, to access one representation of a storage device instead of a separate device for each path.

- Failover support - Implementing higher levels of RAS requires redundant host connectivity to storage devices. Solaris 10 OS multipathing drivers manage failures of storage paths, or hosts going offline, while maintaining host I/O connectivity through available secondary paths. This enables applications to continue operating in the event of host disconnection or downstream path failure. Failed paths can be automatically re-enabled through dynamic path management.

- Symmetrical/Asymmetrical Device Support - The Solaris 10 OS can manage all symmetrical devices. Asymmetrical devices have proprietary commands that require additional individual device support. Solaris 10 supports all Sun asymmetrical devices, for example, the T3/6120. Using a switch in front of an asymmetrical device allows you to have a pool of preferred symmetric paths, as well as a pool of inactive asymmetric paths.

- I/O load balancing - In addition to providing simple failover support, the software can use any active paths to a storage device to send and receive I/O. With I/O routed through multiple host connections, bandwidth can be increased by the addition of host controllers. The Solaris 10 OS uses a round-robin load-balancing algorithm, by which individual I/O requests are routed to active host controllers in a series, one after the other.

- Queue depth - SCSI storage arrays present storage to a host in the form of a LUN. LUNs have a finite set of resources available, such as the amount of data that can be stored, as well as the number of active commands that a device or LUN can process at one time. The number of active commands that can be issued before a device blocks further I/O is known as queue depth. When multipathing software is enabled, a single queue is created for each LUN regardless of the number of distinct paths it may have to the host. This avoids overtaxing storage array resources and allows upper-layer drivers and programs to use the LUN effectively without flooding it with I/O requests.

- Dynamic reconfiguration - The Solaris 10 OS supports Solaris Dynamic Reconfiguration (DR).
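The round-robin policy can be illustrated with a toy shell sketch. This is conceptual only, not driver code; the request and path names (io1, pathA, and so on) are hypothetical stand-ins, and the real dispatching happens inside the scsi_vhci driver.

```shell
# Toy model of the round-robin policy: successive I/O requests are
# dispatched to the active paths in turn.
set -- pathA pathB pathC        # stand-ins for active paths to one LUN
n=$#
i=0
for req in io1 io2 io3 io4 io5; do
  eval "path=\${$(( i % n + 1 ))}"   # pick paths 1, 2, 3, 1, 2, ...
  echo "$req -> $path"
  i=$((i + 1))
done
```

With three paths, the fourth request wraps around to the first path, which is how additional host controllers increase aggregate bandwidth.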


Supported Standards
The Solaris 10 OS is based on open standards for communicating with devices and device management, ensuring interoperability with other standards-based devices and software. The following standards are supported by the Solaris 10 OS:

- T10 standards, including SCSI-2, SAM, FCP, SPC, and SBC
- T11.3 Fibre Channel standards, including FC-PH, FC-AL, FC-LE, and FC-GS
- T11.5 storage management standards, including FC-HBA
- IETF standards, including RFC 2625


CHAPTER 2

Configuring Multipathing Software


This chapter contains the following sections:

- Configuring Multipathing on page 5
- Enabling Multipathing Globally on page 6
- Enabling Multipathing on a Per Port Basis on page 9
- Configuring Automatic Failback for Sun StorEdge Arrays on page 14
- Configuring Third-Party Symmetric Storage Devices for Multipathing on page 16
- Configuring Solstice DiskSuite or Solaris Volume Manager for Multipathing on page 18

Configuring Multipathing
Multipathing is provided by a driver that runs in the Solaris Operating System (Solaris OS) environment. Multipathing is disabled by default in the Solaris OS on SPARC based systems, but is enabled by default in the Solaris OS on x86 based systems. If you are using another multipathing application, see the documentation for that application.

Note These software features are not available for parallel SCSI devices but are
available for Fibre Channel disk devices. Multipathing is not supported on tape drives or libraries, or on FC over IP. Configuration of the multipathing software depends on how you intend to use your system. The software can be configured to control all Fibre Channel HBAs supported by Sun as listed in the Sun Solaris 10 Operating System Release Notes. You can also configure multipathing for third-party symmetric storage devices to use the Solstice DiskSuite software, Solaris Volume Manager software, or third-party volume management and multipathing software.

Task Summary To Configure Multipathing


TABLE 2-1 summarizes the tasks involved in configuring multipathing.
TABLE 2-1    Task Summary for Configuration of Multipathing

Task                                        See
1. Configure multipathing on all ports      See Enabling Multipathing Globally on page 6.
   globally, or on individual ports.        See Enabling Multipathing on a Per Port Basis on page 9.
                                            See Reconfiguration Reboot Requirements on page 6.
2. Configure other options.                 See Configuring Third-Party Symmetric Storage Devices for Multipathing on page 16.
                                            See Configuring Solstice DiskSuite or Solaris Volume Manager for Multipathing on page 18.
3. Configure boot controller devices.       See Configuring Multipathing Support for Storage Boot Devices on page 45.
4. Perform a reconfiguration reboot         See Reconfiguration Reboot Requirements on page 6.
   (if necessary).

Reconfiguration Reboot Requirements


You must perform a reconfiguration reboot whenever the following occurs:

- You change the scsi_vhci.conf or fp.conf files.
- In non-fabric environments, you change the mp_support field for Sun StorEdge T3, 6x20, and 3900 arrays. For further information about Sun StorEdge T3, 6x20, and 3900 arrays, refer to the documentation that came with your system.

Unless you explicitly enabled or disabled the software on a specific port, the global multipathing settings apply.

Enabling Multipathing Globally


Multipathing can be configured to control and provide load balancing for a subset of the HBAs supported by Sun. This section describes the files and parameters that are used to configure multipathing.


To Configure Multipathing Globally

1. Using any text editor, edit the /kernel/drv/fp.conf file as displayed in Example of fp.conf file on page 8.

2. To enable multipathing globally, change the value of mpxio-disable to no.
   On the Solaris 10 OS on SPARC based systems, mpxio-disable is set to yes by default, which means multipathing is disabled. With the Solaris OS on x86 based systems, mpxio-disable is set to no by default, which means that multipathing is enabled.

3. (Optional) Enable multipathing support for third-party symmetric devices.
   Refer to Configuring Third-Party Symmetric Storage Devices for Multipathing on page 16.

4. Save the /kernel/drv/fp.conf file.

5. Using any text editor, edit the /kernel/drv/scsi_vhci.conf file as displayed in Example of scsi_vhci.conf file on page 9.
   Do not change the name and class definitions.

6. If you want the multipathing software to use all the available paths for load balancing, leave the load-balance field set to the default of round-robin. Otherwise, change the definition to none.

7. Save the /kernel/drv/scsi_vhci.conf file.

8. Perform one of the following steps.

- If you want to enable multipathing on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45.
- If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands.

# touch /reconfigure
# shutdown -g0 -y -i6
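The mpxio-disable edit described above can also be scripted. The sketch below demonstrates the substitution on a throwaway copy of the file; on a real system you would edit /kernel/drv/fp.conf itself (after making a backup) and then perform the reconfiguration reboot shown above.

```shell
# Demonstrate the mpxio-disable edit on a scratch copy, not the live file.
demo=/tmp/fp.conf.demo
printf 'mpxio-disable="yes";\n' > "$demo"    # stands in for /kernel/drv/fp.conf
# Enable multipathing globally by switching the property value to "no".
sed 's/^mpxio-disable="yes";/mpxio-disable="no";/' "$demo" > "$demo.new" &&
  mv "$demo.new" "$demo"
grep '^mpxio-disable' "$demo"
```

The sed expression anchors on the start of the line so it changes only the uncommented global setting, not the commented examples elsewhere in the file.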


CODE EXAMPLE 2-1

Example of fp.conf file

# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Sun Fibre Channel Port driver configuration
#
#ident "%Z%%M% %I% %E% SMI"
#
name="fp" class="fibre-channel" port=0;
name="fp" class="fibre-channel" port=1;
#
# To generate the binding-set specific compatible forms used to address
# legacy issues the scsi-binding-set property must be defined. (do not remove)
#
scsi-binding-set="fcp";
#
# List of ULP modules for loading during port driver attach time
#
load-ulp-list="1","fcp";
#
# Force attach driver to support hotplug activity (do not remove the property)
#
ddi-forceattach=1;
#
# I/O multipathing feature (MPxIO) can be enabled or disabled using
# mpxio-disable property. Setting mpxio-disable="no" will activate
# I/O multipathing; setting mpxio-disable="yes" disables the feature.
#
# To globally enable MPxIO on all fp ports set:
# mpxio-disable="no";
#
# To globally disable MPxIO on all fp ports set:
# mpxio-disable="yes";
#
# You can also enable or disable MPxIO per port basis. Per port settings
# override the global setting for the specified ports.
# To disable MPxIO on port 0 whose parent is /pci@8,600000/SUNW,qlc@4 set:
# name="fp" parent="/pci@8,600000/SUNW,qlc@4" port=0 mpxio-disable="yes";
#
# NOTE: If you just want to enable or disable MPxIO on all fp ports, it is
# better to use stmsboot(1M) as it also updates /etc/vfstab.
#
mpxio-disable="yes";

System Administration Guide: Multipath Configuration January 2005

CODE EXAMPLE 2-2

Example of scsi_vhci.conf file

# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#pragma ident "@(#)scsi_vhci.conf 1.8 04/03/07 SMI"
#
name="scsi_vhci" class="root";
#
# Load balancing global configuration: setting load-balance="none" will cause
# all I/O to a given device (which supports multipath I/O) to occur via one
# path. Setting load-balance="round-robin" will cause each path to the device
# to be used in turn.
#
load-balance="round-robin";
#
# Force load driver to support hotplug activity (do not remove this property).
#
ddi-forceattach=1;
#
# Automatic failback configuration
# possible values are auto-failback="enable" or auto-failback="disable"
auto-failback="disable";
#
# For enabling MPxIO support for 3rd party symmetric device need an
# entry similar to following in this file. Just replace the "SUN     SENA"
# part with the Vendor ID/Product ID for the device, exactly as reported by
# Inquiry cmd.
#
# device-type-scsi-options-list =
# "SUN     SENA", "symmetric-option";
#
# symmetric-option = 0x1000000;
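The third-party entry described in the comments above can be generated from a Vendor ID/Product ID pair. The sketch below writes to a scratch file; the "ACME"/"FastDisk" IDs are hypothetical examples, and the vendor ID field is padded to 8 characters, matching the fixed-width vendor field reported by SCSI inquiry data (as the "SUN     SENA" example shows).

```shell
# Build a hypothetical third-party symmetric-device entry in a scratch file.
vid="ACME"         # hypothetical Vendor ID (8-byte field in inquiry data)
pid="FastDisk"     # hypothetical Product ID
printf 'device-type-scsi-options-list =\n"%-8s%s", "symmetric-option";\nsymmetric-option = 0x1000000;\n' \
  "$vid" "$pid" > /tmp/vhci-entry.demo
cat /tmp/vhci-entry.demo
```

On a real system, the generated lines would be appended (uncommented) to /kernel/drv/scsi_vhci.conf, followed by a reconfiguration reboot.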

Enabling Multipathing on a Per Port Basis


Although multipathing may be globally enabled, individual per-port multipathing settings have priority over the global settings. This section covers the following topics:

- Considerations for Per-Port Configuration on page 10


- To Configure Multipathing by Port on page 10

Considerations for Per-Port Configuration


Before you start configuring the software by port, consider the following:

- Load balancing is controlled by the global load-balance variable and cannot be applied on a per-port basis.
- If a storage device is managed or controlled by a volume manager that is not supported by Sun, the multipathing software must be disabled on that port.
- Devices with multipathing software enabled are enumerated under /devices/scsi_vhci. Devices with multipathing software disabled are enumerated under their physical path names.
- With the multipathing software installed, all paths to a storage device must be configured with multipathing software either enabled or disabled.
- Configuring multipathing software by port enables the software to co-exist with other multipathing solutions like Alternate Pathing (AP), VERITAS Dynamic Multipathing (DMP), or EMC PowerPath. However, storage devices and paths should not be shared between multipathing software and other multipathing solutions.

To Configure Multipathing by Port


This section provides an overview of the steps needed to configure the software on a per port basis. For specific steps, refer to:

- To Configure a Single PCI HBA on page 12
- To Configure a Dual PCI HBA on page 13
- To Configure an SBus HBA on page 14

1. Log in as superuser.


Determine the HBAs that you want the multipathing software to control. For example, to select the desired device, perform an ls -l command on /dev/fc. The following is an example of the command output.
lrwxrwxrwx 1 root root 49 Apr 17 18:14 fp0 -> ../../devices/pci@6,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx 1 root root 49 Apr 17 18:14 fp1 -> ../../devices/pci@7,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx 1 root root 49 Apr 17 18:14 fp2 -> ../../devices/pci@a,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx 1 root root 49 Apr 17 18:14 fp3 -> ../../devices/pci@b,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx 1 root root 50 Apr 17 18:14 fp4 -> ../../devices/pci@12,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx 1 root root 56 Apr 17 18:14 fp5 -> ../../devices/pci@13,2000/pci@2/SUNW,qlc@4/fp@0,0:devctl
lrwxrwxrwx 1 root root 56 Apr 17 18:14 fp6 -> ../../devices/pci@13,2000/pci@2/SUNW,qlc@5/fp@0,0:devctl
lrwxrwxrwx 1 root root 56 Apr 17 18:14 fp7 -> ../../devices/sbus@7,0/SUNW,qlc@0,30400/fp@0,0:devctl

Note The fp7 entry is an SBus HBA. The fp5 and fp6 entries include two pci elements, which indicates a dual PCI HBA. The remaining entries have no additional PCI bridges and are single PCI HBAs.
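The inspection described in the note can be expressed as a small shell function. This is a sketch: the case patterns simply look at the bus elements in the device path, and the paths below are the fp0, fp6, and fp7 examples from the listing above.

```shell
# Classify an fp device path by its bus structure, the way the note does by eye.
classify() {
  case $1 in
    */sbus@*)      echo "SBus HBA" ;;
    */pci@*/pci@*) echo "dual PCI HBA" ;;    # two pci bridges in the path
    */pci@*)       echo "single PCI HBA" ;;
  esac
}
classify /devices/sbus@7,0/SUNW,qlc@0,30400/fp@0,0:devctl     # fp7
classify /devices/pci@13,2000/pci@2/SUNW,qlc@5/fp@0,0:devctl  # fp6
classify /devices/pci@6,2000/SUNW,qlc@2/fp@0,0:devctl         # fp0
```

The pattern order matters: the dual-PCI pattern must be tested before the single-PCI pattern, since any dual path also matches the single pattern.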

Chapter 2

Configuring Multipathing Software


2. Open the /kernel/drv/fp.conf configuration file and explicitly enable or disable an HBA. Add the property "mpxio-disable" to the HBA definition:

To enable multipathing on the port, set "mpxio-disable" to no.
To disable multipathing on the port, set "mpxio-disable" to yes.

3. Perform one of the following steps:

If you want to enable multipathing on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45.

If you do not want to enable multipathing on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands.
# touch /reconfigure
# shutdown -g0 -y -i6

To Configure a Single PCI HBA


The following steps display the configuration of a sample single PCI HBA.

fp0 -> ../../devices/pci@6,2000/SUNW,qlc@2/fp@0,0:devctl

This sample HBA entry, from the ls -l command output shown in To Configure Multipathing by Port on page 10, indicates a single PCI HBA.

1. Add the following lines to the fp.conf file:

To explicitly enable multipathing software on this port, add the following:

name="fp" parent="/pci@6,2000/SUNW,qlc@2" port=0 mpxio-disable="no";

To explicitly disable multipathing software on this port, add the following:

name="fp" parent="/pci@6,2000/SUNW,qlc@2" port=0 mpxio-disable="yes";

2. Save and exit the file.

3. Perform one of the following steps:

If you want to enable multipathing software on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45.


If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands:
# touch /reconfigure
# shutdown -g0 -y -i6

To Configure a Dual PCI HBA


The following steps display the configuration of a sample dual PCI HBA.

fp6 -> ../../devices/pci@13,2000/pci@2/SUNW,qlc@5/fp@0,0:devctl

This sample HBA entry, from the ls -l command output shown in To Configure Multipathing by Port on page 10, indicates a dual PCI HBA.

1. Add the following lines to the fp.conf file:

To explicitly enable multipathing software on this port, add the following:


name="fp" parent="/pci@13,2000/pci@2/SUNW,qlc@5" port=0 mpxio-disable="no";

To explicitly disable multipathing software on this port, add the following:


name="fp" parent="/pci@13,2000/pci@2/SUNW,qlc@5" port=0 mpxio-disable="yes";

2. Save and exit the file.

3. Perform one of the following steps:

If you want to enable multipathing software on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45.

If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands.
# touch /reconfigure
# shutdown -g0 -y -i6


To Configure an SBus HBA


The following steps display the configuration of a sample SBus HBA.

fp7 -> ../../devices/sbus@7,0/SUNW,qlc@0,30400/fp@0,0:devctl

This HBA entry, from the ls -l command output shown in To Configure Multipathing by Port on page 10, indicates an SBus HBA.

1. Add the following lines to the fp.conf file:

To explicitly enable multipathing software on this port, add the following:


name="fp" parent="/sbus@7,0/SUNW,qlc@0,30400" port=0 mpxio-disable="no";

To explicitly disable multipathing software on this port, add the following:


name="fp" parent="/sbus@7,0/SUNW,qlc@0,30400" port=0 mpxio-disable="yes";

2. Save and exit the file.

3. Perform one of the following steps:

If you want to enable multipathing software on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45.

If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands.
# touch /reconfigure
# shutdown -g0 -y -i6

Configuring Automatic Failback for Sun StorEdge Arrays


You can configure the scsi_vhci.conf file so that primary paths fail back automatically when they become available after being in an OFFLINE state. This feature is applicable to Sun StorEdge T3+, 3900, and 6x20 arrays.


For information on configuring your Sun StorEdge T3+, 3900 or 6x20 arrays, refer to the documentation that came with your system.

To Configure Automatic Failback Capability

1. Open the scsi_vhci.conf file in your text editor.

2. Enable the automatic failback property.

auto-failback = enable;

3. Save and exit the scsi_vhci.conf file.

When automatic failback is enabled, the following message is printed in the /var/adm/messages file upon boot-up:

/scsi_vhci (scsi_vhci0): Auto-failback capability enabled through scsi_vhci.conf file.

4. Perform a reconfiguration reboot so that the changes take effect.

# touch /reconfigure
# shutdown -g0 -y -i6

To Disable Automatic Failback Capability

1. Open the scsi_vhci.conf file in your text editor.

2. Disable the automatic failback property.

auto-failback = disable;

3. Save and exit the scsi_vhci.conf file.

4. Perform a reconfiguration reboot so that the changes take effect.

# touch /reconfigure
# shutdown -g0 -y -i6


Configuring Third-Party Symmetric Storage Devices for Multipathing


The Solaris 10 Operating System software can provide multipathing and load balancing to third-party symmetric storage devices, enabling greater device heterogeneity in your SAN or direct-connect storage environment.

Note Before configuring any third-party device, ensure that it is supported. Refer to your third-party user documentation, or your third-party vendor, for information on the proper product and vendor IDs, modes, and various settings required by the multipathing software.

Considerations for Third-Party Device Configuration


Before you configure third-party devices for multipathing, be aware of the following:

To use this functionality, you must edit parameters in the scsi_vhci.conf and fp.conf files.

You will need the storage device's vendor ID and product ID. You can obtain the values for the vendor_id and product_id variables in the scsi_vhci.conf file by using the format command followed by the inquiry option on your system. See the format(1M) man page.

Symmetric storage devices must be able to support the following functionality or commands. Refer to your third-party vendor for information.

Make all paths available for I/O, because paths will be accessed in a round-robin fashion.
Support the REPORT LUNS SCSI command.
Support the SCSI INQUIRY command.

To Configure Third-Party Devices

1. Open the fp.conf file in a text editor such as vi(1M).


2. Enable mpxio.
mpxio-disable="no";

3. Save and exit the fp.conf file.

4. Open the scsi_vhci.conf file in a text editor such as vi(1M).

5. Add the vendor_id and product_id properties.
The vendor ID (v_id) must be eight characters long. You must specify all eight characters; trailing characters are spaces. Tabs are not allowed. The product ID (prod_id) can be up to 16 characters long; trailing characters are blanks or spaces. Tabs are not allowed.

device-type-scsi-options-list =
"v_id prod_id ", "symmetric-option";
symmetric-option = 0x1000000;

Replace the variables with appropriate values for your system. For example:

device-type-scsi-options-list =
"ven-a pid_a_upto_here", "symmetric-option",
"ven-b pid_b_upto ", "symmetric-option",
"ven-c pid_c ", "symmetric-option";
symmetric-option = 0x1000000;
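The padding rules in Step 5 are easy to get wrong by hand. The following sketch builds a correctly padded entry string with printf; "ACME" and "SuperDisk" are hypothetical vendor and product IDs used only for illustration.

```shell
# Emit one device-type-scsi-options-list entry with the required padding:
# the vendor ID space-padded to exactly 8 characters, the product ID to 16.
make_entry() {
  vid="$1"
  pid="$2"
  printf '"%-8.8s%-16.16s", "symmetric-option"\n' "$vid" "$pid"
}

make_entry "ACME" "SuperDisk"
# prints: "ACME    SuperDisk       ", "symmetric-option"
```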

6. Enable load balancing.
If you want the multipathing software to use all the available paths for load balancing, leave the load-balance field set to the default of round-robin. Otherwise, change the definition to none.

7. Save and exit the scsi_vhci.conf file.

8. Perform one of the following steps:

If you want to enable multipathing software on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45.

If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands:
# touch /reconfigure
# shutdown -g0 -y -i6
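Taken together, Steps 2 through 6 amount to fragments like the following in the two configuration files. The vendor and product IDs here ("ACME", "SuperDisk") are placeholders for illustration; substitute the values reported by the format inquiry option for your device.

```
# /kernel/drv/fp.conf
mpxio-disable="no";

# /kernel/drv/scsi_vhci.conf
load-balance="round-robin";
device-type-scsi-options-list =
    "ACME    SuperDisk       ", "symmetric-option";
symmetric-option = 0x1000000;
```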


After you configure the scsi_vhci.conf file, confirm that the device nodes are created under the /devices/scsi_vhci directory. If the devices were not created, check the vendor_id and product_id fields in the scsi_vhci.conf file.

Configuring Solstice DiskSuite or Solaris Volume Manager for Multipathing


Solstice DiskSuite or Solaris Volume Manager can be reconfigured to enable the Solaris 10 multipathing driver to control the HBAs that are supported by Sun. Using the stmsboot command, you can determine the mapping from the new multipathing device path names to the pre-multipathing device path names. This mapping is used to reconfigure Solstice DiskSuite or Solaris Volume Manager to recognize the correct devices under the new multipathing device path names. For more information concerning Solstice DiskSuite or Solaris Volume Manager, check the online manuals at:
http://docs.sun.com

Solstice DiskSuite or Solaris Volume Manager Configuration Overview


The following is an overview of the steps to complete the configuration process. A detailed example of this process follows.

Note Backing up your data prior to performing any reconfiguration is recommended.

1. Determine the pre-multipathing device path names.
Use the metadb command to get a list of devices used to store the volume manager configuration. Use the metastat -p command to get the disk-to-metadevice mapping.


2. Unconfigure Solstice DiskSuite or Solaris Volume Manager for devices under multipathing control.
Unmount the metadevices that will be under control of the multipathing software, and then clear the metadevices using the metaclear command. Clear the metadevice database using the metadb -d -f command. This command takes the list of disks, output from the metadb command you performed in Step 1, as arguments.

3. Determine the multipathing device name to pre-multipathing device name mapping.
Use the stmsboot -L command output to display the potential multipathing path and device name changes, and the luxadm display command to determine which pre-multipathing device names are combined under each multipathing device name.

4. Enable multipathing.

5. Reconfigure Solstice DiskSuite or Solaris Volume Manager.
Create metadevices using multipathing path names with the metainit command.

Caution For RAID-5 devices, be sure to use the -k option to prevent initialization of the disks.

Example of Migrating Mirrored Devices


The following example shows how to migrate two-way mirrored metadevices. In this example:

A Sun StorEdge T3 partner pair is connected to the host, and mp_support on the Sun StorEdge T3 array is set to rw.
Multipathing software is initially disabled.
Four LUNs of equal size exist on the partner pair.
Two metadb replicas exist on LUN 0 (c2t1d0) and LUN 1 (c2t1d1).
d10 and d11 are the submirror metadevices created on LUN 2 (c2t1d2) and LUN 3 (c2t1d3). d14 is the mirror of d10 and d11.


To Migrate Two-Way Mirrored Metadevices


1. Check the metadevice paths.

# metastat -p
d10 1 2 /dev/dsk/c2t1d2s1 /dev/dsk/c2t1d2s6
d11 1 2 /dev/dsk/c2t1d3s1 /dev/dsk/c2t1d3s6
d14 -m d10 d11


2. Save the pre-multipathing device information. Collect and save the output of stmsboot -L, format, metadb, metastat, and metastat -p commands.
# stmsboot -L

See To Display Potential /dev Device Name Changes on page 48 for an example of stmsboot -L output.
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
          /pci@1f,4000/scsi@3/sd@0,0
       1. c2t1d0 <T300 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0/ssd@w50020f2300000225,0
       2. c2t1d1 <T300 cyl 34145 alt 2 hd 20 sec 128>
          /pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0/ssd@w50020f2300000225,1
       3. c2t1d2 <SUN-T300-0117 cyl 34145 alt 2 hd 32 sec 128>
          /pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0/ssd@w50020f2300000225,2
Specify disk (enter its number):

# metadb
        flags           first blk       block count
     a m  pc luo        16              1034            /dev/dsk/c2t1d0s3
     a    pc luo        1050            1034            /dev/dsk/c2t1d0s3
     a    pc luo        16              1034            /dev/dsk/c2t1d1s3
     a    pc luo        1050            1034            /dev/dsk/c2t1d1s3

# metastat -p
d14 -m d10 d11 1
d10 1 2 c2t1d2s1 c2t1d2s6 -i 32b
d11 1 2 c2t1d3s1 c2t1d3s6 -i 32b


# metastat
d14: Mirror
    Submirror 0: d10
      State: Okay
    Submirror 1: d11
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 524288 blocks

d10: Submirror of d14
    State: Okay
    Size: 524288 blocks
    Stripe 0: (interlace: 32 blocks)
        Device      Start Block  Dbase  State  Hot Spare
        c2t1d2s1    0            No     Okay
        c2t1d2s6    0            No     Okay

d11: Submirror of d14
    State: Okay
    Size: 524288 blocks
    Stripe 0: (interlace: 32 blocks)
        Device      Start Block  Dbase  State  Hot Spare
        c2t1d3s1    0            No     Okay
        c2t1d3s6    0            No     Okay


3. Unconfigure the volume manager without losing the data.

a. Clear the submirrors and mirror devices.

# metaclear d14
d14: Mirror is cleared
# metaclear d11
d11: Concat/Stripe is cleared
# metaclear d10
d10: Concat/Stripe is cleared

b. Clear the meta database replicas.


# metadb -d -f -c 2 /dev/dsk/c2t1d0s3 /dev/dsk/c2t1d1s3

c. Remove or comment out the /etc/lvm/md.tab entries.


# d10 1 2 /dev/dsk/c2t1d2s1 /dev/dsk/c2t1d2s6
# d11 1 2 /dev/dsk/c2t1d3s1 /dev/dsk/c2t1d3s6
# d14 -m d10 d11

4. Enable the multipathing software as described earlier in this chapter. See Enabling Multipathing Globally on page 6 or Enabling Multipathing on a Per Port Basis on page 9.


5. Determine the multipathing software device name to pre-multipath device name mapping. The output of stmsboot -L, format, luxadm probe, and luxadm display shows the paths for each device. See To Display Potential /dev Device Name Changes on page 48 for an example of stmsboot -L output.
# format
AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
          /pci@1f,4000/scsi@3/sd@0,0
       1. c4t60020F20000002253B220F99000D348Cd0 <SUN-T300-0117 cyl 34145 alt 2 hd 32 sec 128>
          /scsi_vhci/ssd@g60020f20000002253b220f99000d348c
       2. c4t60020F20000002253B220FC000086944d0 <SUN-T300-0117 cyl 34145 alt 2 hd 32 sec 128>
          /scsi_vhci/ssd@g60020f20000002253b220fc000086944
       3. c4t60020F20000002253B220FD400071CD8d0 <SUN-T300-0117 cyl 34145 alt 2 hd 32 sec 128>
          /scsi_vhci/ssd@g60020f20000002253b220fd400071cd8
Specify disk (enter its number):

# luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
  Node WWN:50020f20000001f6  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60020F20000002253B22101D00029FC9d0s2
  Node WWN:50020f2000000225  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60020F20000002253B220FD400071CD8d0s2
  Node WWN:50020f20000001f6  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60020F20000002253B220FC000086944d0s2
  Node WWN:50020f2000000225  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60020F20000002253B220F99000D348Cd0s2

For each entry in luxadm probe, get the device path information.


(The following luxadm display output shows a single LUN only.)


# luxadm display /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 35846.125 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d44002e60f0002d8ca:c,raw
   Controller           /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address      50020f23000007d4,7
    Host controller port WWN  200000017380a77e
    Class               primary
    State               ONLINE
   Controller           /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000007a4,7
    Host controller port WWN  210000e08b0a9675
    Class               secondary
    State               STANDBY


6. Correlate the pre-multipathing device information collected in Step 2 with the luxadm display path output, and identify the appropriate LUN.
For example, find the c2t1d0 device. c2t1d0 is LUN 0 of the Sun StorEdge T3 partner pair, as seen in the format output in Step 2.

1. c2t1d0 <T300 cyl 34145 alt 2 hd 24 sec 128>
   /pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0/ssd@w50020f2300000225,0

From the luxadm display output of each device above, check the paths, controller, and device address fields.

/dev/rdsk/c4t60020F20000002253B220F99000D348Cd0s2
/devices/scsi_vhci/ssd@g60020f20000002253b220f99000d348c:c,raw
 Controller       /devices/pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0
 Device Address   50020f2300000225,0

Thus, in the above path:

c2t1d0 is now /dev/rdsk/c4t60020F20000002253B220F99000D348Cd0
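Once the old-to-new name correlation is established, it can be applied mechanically when rebuilding /etc/lvm/md.tab. The following is a hypothetical sketch: the two mapping pairs below are examples only, and yours must come from your own stmsboot -L and luxadm display correlation.

```shell
# Rewrite pre-multipathing disk names to their scsi_vhci equivalents.
# Slice suffixes (s1, s6, ...) are preserved because only the cXtYdZ
# part of each name is replaced.
map_names() {
  sed -e 's|c2t1d2|c4t60020F20000002253B220FC000086944d0|g' \
      -e 's|c2t1d3|c4t60020F20000002253B220FD400071CD8d0|g'
}

echo 'd10 1 2 /dev/dsk/c2t1d2s1 /dev/dsk/c2t1d2s6' | map_names
# prints: d10 1 2 /dev/dsk/c4t60020F20000002253B220FC000086944d0s1 /dev/dsk/c4t60020F20000002253B220FC000086944d0s6
```

On a live system the filter would be run over the saved md.tab entries, then the result verified by eye before use.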

Alternatively, the stmsboot -L command output also shows the device name change mapping.

7. Reconfigure the volume manager under the multipathing software by re-creating /etc/lvm/md.tab with the new device names.

a. Re-create the metadb database replicas with the new device names.
# metadb -a -f -c 2 /dev/rdsk/c4t60020F20000002253B220F99000D348Cd0s2 /dev/rdsk/c4t60020F20000002253B220FC000086944d0s2

b. Similarly, identify the metadevices with new device names by using the metastat -p command.


c. Re-create the submirrors and the mirror.
The best method for re-creating mirrors and submirrors is to create a one-way mirror and then, after verifying that the data is good, attach the second submirror.

# metainit d10
# metainit d11
# metainit d14 -m d10
d14: Mirror is setup

d. Verify that all the data is now available in the one-way mirror device.
d14 contains the same data as earlier, before multipathing was enabled on the same mirror.

e. Attach the second submirror.

# metattach d14 d11
d14: Mirror is setup


CHAPTER 3

Configuring SAN Devices


This chapter provides information about configuring SAN devices and contains the following sections:

SAN Device Considerations on page 29
Adding SAN Devices on page 30
Fabric Device Node Configuration on page 31
Configuring Device Nodes Without Multipathing Enabled on page 33
Configuring Device Nodes With Multipathing Enabled on page 40

SAN Device Considerations


Before you configure the multipathing software, do the following.

In previous releases of Solaris, device paths had to be configured before use by using the cfgadm -c configure command. If a path was not configured, the storage could not be accessed and was not visible to disk utilities such as format. In Solaris 10, paths are configured automatically, so the separate cfgadm step is not necessary and all attached storage is visible to the host. Likewise, storage need not be unconfigured.

If you wish to use a Solaris Volume Manager (SVM) metadevice on a fabric-attached disk, or a mirrored fabric boot device, you must use automatic path configuration. Manual configuration can be restored by adding the line enable_tapesry=on to /kernel/drv/fcp.conf.

Configure ports and zones according to the vendor-specific documentation for storage and switches.


LUN masking enables specific LUNs to be seen by specific hosts. See your vendor-specific storage documentation that describes masking.

Turn off power management on servers connected to the SAN to prevent unexpected results as one server attempts to power down a device while another attempts to gain access. See power.conf(1M) for details about power management.

Connect arrays and other storage devices to the SAN, with or without multipathing capability.

Adding SAN Devices


Adding and removing SAN devices requires knowledge of the following commands:

luxadm(1M)
format(1M)
fsck(1M)
newfs(1M)
cfgadm(1M) and cfgadm_fp(1M)

Note If you use the format command when multipathing is enabled, you will see only one instance of a device identifier for each LUN. Without multipathing, you will see one identifier for each path.
The cfgadm and cfgadm_fp commands are used most frequently to configure storage devices on a SAN. Refer to the appropriate man page for detailed instructions about how to use each command. Appendix A contains information about these commands.

To Add a SAN Device

1. Create the LUN or LUNs desired.

2. If necessary, apply LUN masking for HBA control on the SAN device.

3. Connect the storage device to the system.

4. If necessary, create port-based or WWN zones on the switch on the SAN device.


5. If necessary, configure all paths to the storage device by using the cfgadm -c configure command on all the host bus adapters (HBAs) that have a path to the storage device.
The cfgadm -c configure command creates device nodes. This step is necessary if the storage device is accessed by a host port connected to a fabric port and cfgadm is enabled.

6. Run the fsck or newfs commands on the device, if it is used for file systems.

7. Mount any existing file systems available on the storage device's LUNs or disk groups.
You might need to run the fsck command to repair any errors in the LUNs listed in the /etc/vfstab file.

Fabric Device Node Configuration


After you configure the hardware in your direct-attach system or SAN, you must ensure that the hosts recognize the switches and devices. This section explains host recognition of fabric devices, also known as 24-bit Fibre Channel addressing devices, on the SAN.

After configuring the devices, ports, and zones in your SAN, you need to make sure that the host is aware of the devices and their switch connections. You can have up to 16 million fabric devices connected together on a SAN with Fibre Channel support.

This section is limited to the operations required from the perspective of the Solaris Operating System. It does not cover other aspects, such as device availability and device-specific management. If devices are managed by other software, such as a volume manager, refer to the volume manager product documentation for additional instructions.

Ensuring that LUN-Level Information Is Visible


Use the cfgadm -al -o show_scsi_lun <controller_id> command to identify LUN-level information. If you issue this command immediately after a system boots up, the output might not show the Fibre Channel Protocol (FCP) SCSI LUN-level information. The information does not appear because the storage device drivers, such as the ssd and st drivers, are not yet loaded on the running system. Use the modinfo command to check whether the drivers are loaded. After the drivers are loaded, the LUN-level information is visible in the cfgadm output.
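The driver-loaded check can be illustrated with plain text filtering. The sample lines below only mimic the shape of modinfo output (the module IDs and addresses are made up); on a live system you would pipe modinfo itself through the same filter.

```shell
# Report whether a named driver appears in the (sample) modinfo listing.
# Field 6 of each modinfo line is the module name.
sample=' 40 1334f98 1b1c8 209   1  ssd (SCSI SSA/FCAL Disk Driver)
 41 1350e20  c2a8  33   1  st (SCSI tape Driver)'

driver_loaded() {
  printf '%s\n' "$sample" | awk -v d="$1" '$6 == d { found=1 } END { exit !found }'
}

driver_loaded ssd && echo "ssd loaded"
# prints: ssd loaded
```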

To Detect Fabric Devices Visible on a Host


This section provides an example of the procedure for detecting fabric devices using Fibre Channel host ports c0 and c1. This procedure also shows the device configuration information that is displayed with the cfgadm(1M) command.

Note In the following examples, only failover path attachment point IDs (Ap_Ids) are listed. The Ap_Ids displayed on your system depend on your system configuration.

1. Become superuser.

2. Display the information about the attachment points on the system.
# cfgadm -l
Ap_Id   Type        Receptacle  Occupant      Condition
c0      fc-fabric   connected   unconfigured  unknown
c1      fc-private  connected   configured    unknown

In this example, c0 represents a fabric-connected host port, and c1 represents a private, loop-connected host port. Use the cfgadm(1M) command to manage the device configuration on fabric-connected host ports. By default, the device configuration on private, loop-connected host ports is managed by a host using the Solaris Operating System. 3. Display information about the host ports and their attached devices.
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   unconfigured  unknown
c0::50020f2300006077  disk        connected   unconfigured  unknown
c0::50020f23000063a9  disk        connected   unconfigured  unknown
c0::50020f2300005f24  disk        connected   unconfigured  unknown
c0::50020f2300006107  disk        connected   unconfigured  unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c1::220203708b8d45f2  disk        connected   configured    unknown
c1::220203708b9b20b2  disk        connected   configured    unknown


Note The cfgadm -l command displays information about Fibre Channel host ports. Also use the cfgadm -al command to display information about Fibre Channel devices. The lines that include a port world wide name (WWN) in the Ap_Id field associated with c0 represent a fabric device. Use the cfgadm configure and unconfigure commands to manage those devices and make them available to hosts using the Solaris Operating System. The Ap_Id devices with port WWNs under c1 represent private-loop devices that are configured through the c1 host port.

Configuring Device Nodes Without Multipathing Enabled


This section describes fabric device configuration tasks on a host that does not have multipathing enabled. The procedures in this section use specific devices as examples to illustrate how to use the cfgadm(1M) command to detect and configure fabric devices. The devices attached to the fabric-connected host port are not configured by default and so are not available to the host using the Solaris Operating System. Use the cfgadm(1M) configure and unconfigure commands to manage device node creation for fabric devices. See the cfgadm_fp(1M) man page for additional information. The procedures in this section show how to detect fabric devices that are visible on a host and to configure and make them available to a host using the Solaris Operating System. The device information that you supply and that is displayed by the cfgadm(1M) command depends on your system configuration.
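When many fabric devices are present, it can help to filter the cfgadm -al listing down to the unconfigured fabric Ap_Ids before running cfgadm -c configure on each. The sketch below runs the filter over captured sample text that mirrors the listings in this section; on a live system you would pipe cfgadm -al itself.

```shell
# Select Ap_Ids that name a device (they contain "::") and whose Occupant
# column reads "unconfigured".
sample='c0                   fc-fabric  connected  unconfigured  unknown
c0::50020f2300006077 disk       connected  unconfigured  unknown
c0::50020f23000063a9 disk       connected  unconfigured  unknown
c1::220203708b69c32b disk       connected  configured    unknown'

printf '%s\n' "$sample" | awk '$1 ~ /::/ && $4 == "unconfigured" { print $1 }'
# prints:
# c0::50020f2300006077
# c0::50020f23000063a9
```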

To Configure an Individual Device Without Multipathing


This sample procedure describes how to configure a fabric device that is attached to the fabric-connected host port c0.

1. Become superuser.


2. Identify the device to be configured.


# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   unconfigured  unknown
c0::50020f2300006077  disk        connected   unconfigured  unknown
c0::50020f23000063a9  disk        connected   unconfigured  unknown
c0::50020f2300005f24  disk        connected   unconfigured  unknown
c0::50020f2300006107  disk        connected   unconfigured  unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c1::220203708b8d45f2  disk        connected   configured    unknown
c1::220203708b9b20b2  disk        connected   configured    unknown

3. Configure the fabric device.


# cfgadm -c configure c0::50020f2300006077

4. Verify that the selected fabric device is configured.


# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   configured    unknown
c0::50020f2300006077  disk        connected   configured    unknown
c0::50020f23000063a9  disk        connected   unconfigured  unknown
c0::50020f2300005f24  disk        connected   unconfigured  unknown
c0::50020f2300006107  disk        connected   unconfigured  unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c1::220203708b8d45f2  disk        connected   configured    unknown
c1::220203708b9b20b2  disk        connected   configured    unknown

Notice that the Occupant column for both c0 and c0::50020f2300006077 displays as configured, indicating that the c0 port has a configured occupant and that the c0::50020f2300006077 device is configured. Use the show_scsi_lun option to display FCP SCSI LUN information for multi-LUN SCSI devices.


CODE EXAMPLE 3-1 shows that the physical device connected through Ap_Id c0::50020f2300006077 has four LUNs configured. The device is now available on the host using the Solaris Operating System. The paths represent each SCSI LUN in the physical device represented by c0::50020f2300006077.

CODE EXAMPLE 3-1   show_scsi_lun Output Showing Four LUNs

# cfgadm -al -o show_scsi_lun c0
Ap_Id                   Type       Receptacle  Occupant    Condition
c0                      fc-fabric  connected   configured  unknown
c0::50020f2300006077,0  disk       connected   configured  unknown
c0::50020f2300006077,1  disk       connected   configured  unknown
c0::50020f2300006077,2  disk       connected   configured  unknown
c0::50020f2300006077,3  disk       connected   configured  unknown

CODE EXAMPLE 3-2 is an example of the luxadm(1M) output.


CODE EXAMPLE 3-2   luxadm Output for Four Devices and a Single Array

# luxadm display 50020f2300006077
# luxadm display /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2

DEVICE PROPERTIES for disk: /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 35846.125 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d44002e60f0002d8ca:c,raw
   Controller           /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address      50020f23000007d4,7
    Host controller port WWN  200000017380a77e
    Class               primary
    State               ONLINE
   Controller           /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000007a4,7
    Host controller port WWN  210000e08b0a9675
    Class               secondary
    State               STANDBY

DEVICE PROPERTIES for disk: 50020f23000007a4
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 9217.688 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):


CODE EXAMPLE 3-2   luxadm Output for Four Devices and a Single Array (Continued)

  /dev/rdsk/c14t60020F20000007D43EE74C7B00047E6Cd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d43ee74c7b00047e6c:c,raw
   Controller           /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address      50020f23000007d4,2
    Host controller port WWN  200000017380a66e
    Class               primary
    State               ONLINE
   Controller           /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000007a4,2
    Host controller port WWN  210000e08b0a9675
    Class               secondary
    State               STANDBY

DEVICE PROPERTIES for disk: 50020f23000007a4
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 9217.688 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D43EE74C630001A25Fd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d43ee74c630001a25f:c,raw
   Controller           /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address      50020f23000007d4,1
    Host controller port WWN  200000017380a66e
    Class               primary
    State               ONLINE
   Controller           /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000007a4,1
    Host controller port WWN  210000e08b0a9675
    Class               secondary
    State               STANDBY

Chapter 3

Configuring SAN Devices



DEVICE PROPERTIES for disk: 50020f23000007a4
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 9217.688 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D43EE74C93000612EBd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d43ee74c93000612eb:c,raw
   Controller           /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address      50020f23000007d4,3
    Host controller port WWN 200000017380a66e
    Class               primary
    State               ONLINE
   Controller           /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000007a4,3
    Host controller port WWN 210000e08b0a9675
    Class               secondary
    State               STANDBY

To Configure Multiple Devices Without Multipathing


This procedure describes how to configure all unconfigured fabric devices that are attached to a fabric-connected host port. The port used in the example is c0. Before you begin, identify the devices that are visible to the host by using the procedure To Detect Fabric Devices Visible on a Host on page 32.

1. Become superuser.


2. Identify the devices to be configured.


# cfgadm -al
Ap_Id                  Type        Receptacle   Occupant     Condition
c0                     fc-fabric   connected    unconfigured unknown
c0::50020f2300006077   disk        connected    unconfigured unknown
c0::50020f23000063a9   disk        connected    unconfigured unknown
c0::50020f2300005f24   disk        connected    unconfigured unknown
c0::50020f2300006107   disk        connected    unconfigured unknown
c1                     fc-private  connected    configured   unknown
c1::220203708b69c32b   disk        connected    configured   unknown
c1::220203708ba7d832   disk        connected    configured   unknown
c1::220203708b8d45f2   disk        connected    configured   unknown
c1::220203708b9b20b2   disk        connected    configured   unknown

3. Configure all of the unconfigured devices on the selected port.


# cfgadm -c configure c0

Note This operation repeats the configure operation for each individual device on c0. This can be time-consuming if the number of devices on c0 is large.

4. Verify that all devices on c0 are configured.
# cfgadm -al
Ap_Id                  Type        Receptacle   Occupant     Condition
c0                     fc-fabric   connected    configured   unknown
c0::50020f2300006077   disk        connected    configured   unknown
c0::50020f23000063a9   disk        connected    configured   unknown
c0::50020f2300005f24   disk        connected    configured   unknown
c0::50020f2300006107   disk        connected    configured   unknown
c1                     fc-private  connected    configured   unknown
c1::220203708b69c32b   disk        connected    configured   unknown
c1::220203708ba7d832   disk        connected    configured   unknown
c1::220203708b8d45f2   disk        connected    configured   unknown
c1::220203708b9b20b2   disk        connected    configured   unknown
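The check in Step 4 can also be scripted. The sketch below is an illustration only, not part of the original procedure: because cfgadm exists only on Solaris, a few sample lines of its output are embedded as data, and on a live host you would pipe the output of cfgadm -al into awk instead.

```shell
# Sample `cfgadm -al` lines (Ap_Id, Type, Receptacle, Occupant,
# Condition), embedded here for illustration only.
sample_output='c0                    fc-fabric  connected  configured   unknown
c0::50020f2300006077  disk       connected  configured   unknown
c0::50020f23000063a9  disk       connected  unconfigured unknown'

# Count attachment points on port c0 whose Occupant column is not
# "configured"; zero means the configure operation covered everything.
unconfigured=$(printf '%s\n' "$sample_output" |
    awk '$1 ~ /^c0/ && $4 != "configured" { n++ } END { print n + 0 }')

echo "unconfigured occupants on c0: $unconfigured"
```

A nonzero count indicates that some devices on the port still need to be configured.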

Use the show_scsi_lun command to display FCP SCSI LUN information for multi-LUN SCSI devices.


CODE EXAMPLE 3-2 shows that the physical devices connected through Ap_Id c0::50020f2300006077 have four LUNs configured. CODE EXAMPLE 3-3 shows that the physical devices represented by c0::50020f2300006077 and c0::50020f2300006107 each have four LUNs configured. The physical devices represented by c0::50020f23000063a9 and c0::50020f2300005f24 each have two LUNs configured.
CODE EXAMPLE 3-3

show_scsi_lun Output for Multiple LUNs and Two Devices

# cfgadm -al -o show_scsi_lun c0
Ap_Id                    Type        Receptacle   Occupant     Condition
c0                       fc-fabric   connected    configured   unknown
c0::50020f2300006077,0   disk        connected    configured   unknown
c0::50020f2300006077,1   disk        connected    configured   unknown
c0::50020f2300006077,2   disk        connected    configured   unknown
c0::50020f2300006077,3   disk        connected    configured   unknown
c0::50020f23000063a9,0   disk        connected    configured   unknown
c0::50020f23000063a9,1   disk        connected    configured   unknown
c0::50020f2300005f24,0   disk        connected    configured   unknown
c0::50020f2300005f24,1   disk        connected    configured   unknown
c0::50020f2300006107,0   disk        connected    configured   unknown
c0::50020f2300006107,1   disk        connected    configured   unknown
c0::50020f2300006107,2   disk        connected    configured   unknown
c0::50020f2300006107,3   disk        connected    configured   unknown
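The per-device LUN counts described above can be derived from a show_scsi_lun listing with a short awk program. This is a hedged sketch: the Ap_Id column is embedded as sample data because cfgadm itself is available only on Solaris.

```shell
# Sample Ap_Id column from `cfgadm -al -o show_scsi_lun c0` output,
# embedded here for illustration only.
sample_apids='c0::50020f2300006077,0
c0::50020f2300006077,1
c0::50020f2300006077,2
c0::50020f2300006077,3
c0::50020f23000063a9,0
c0::50020f23000063a9,1'

# Each Ap_Id ends in ",LUN"; strip the suffix and count the LUNs
# configured behind each device port WWN.
lun_counts=$(printf '%s\n' "$sample_apids" |
    awk -F, '{ luns[$1]++ } END { for (d in luns) print d, luns[d] }' |
    sort)

echo "$lun_counts"
```

On a live host you would feed the Ap_Id column of the real cfgadm output into the same pipeline.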

Configuring Device Nodes With Multipathing Enabled


This section describes how to perform fabric device configuration steps on a host that has the multipathing software enabled. The devices that are attached to fabric-connected HBA ports are not configured by default. These devices are thus not available to the host using the Solaris Operating System when a host port is initially connected to a fabric. The procedures in this section illustrate steps to detect fabric devices that are visible on a host and to configure them as multipathing devices to make them available to the host using the Solaris Operating System. The device information that you supply, and that is displayed by the cfgadm(1M) command, depends on your system configuration. For more information on the cfgadm command, see the cfgadm_fp(1M) and cfgadm(1M) man pages.


To Configure Individual Devices With Multipathing


This sample procedure uses fabric-connected host ports c0 and c2 to configure fabric devices as multipath devices on a host that has the multipathing software enabled.

Note The cfgadm -c configure command for fabric devices is the same whether or not multipathing is enabled, but the result is different. When multipathing is enabled, the host using the Solaris Operating System creates device node and path information that includes multipathing information.

1. Become superuser.

2. Identify the port WWN of the device to be configured as a multipath device. Look for devices on a fabric-connected host port, marked as fc-fabric. These are the devices you can configure with the cfgadm -c configure command.
CODE EXAMPLE 3-4

cfgadm Listing of Fabric and Private-Loop Devices

# cfgadm -al
Ap_Id                  Type        Receptacle   Occupant     Condition
c0                     fc-fabric   connected    unconfigured unknown
c0::50020f2300006077   disk        connected    unconfigured unknown
c0::50020f23000063a9   disk        connected    unconfigured unknown
c1                     fc-private  connected    configured   unknown
c1::220203708b69c32b   disk        connected    configured   unknown
c1::220203708ba7d832   disk        connected    configured   unknown
c1::220203708b8d45f2   disk        connected    configured   unknown
c1::220203708b9b20b2   disk        connected    configured   unknown
c2                     fc-fabric   connected    unconfigured unknown
c2::50020f2300005f24   disk        connected    unconfigured unknown
c2::50020f2300006107   disk        connected    unconfigured unknown

In CODE EXAMPLE 3-4, the c0::50020f2300006077 and c2::50020f2300006107 Ap_Ids represent the same storage device, reached through different port WWNs on the storage device controllers. The c0 and c2 host ports are enabled for use by multipathing.

3. Configure the fabric device and make multipathing devices available to the host.
# cfgadm -c configure c0::50020f2300006077 c2::50020f2300006107


4. Verify that the selected devices are configured.


# cfgadm -al
Ap_Id                  Type        Receptacle   Occupant     Condition
c0                     fc-fabric   connected    configured   unknown
c0::50020f2300006077   disk        connected    configured   unknown
c0::50020f23000063a9   disk        connected    unconfigured unknown
c1                     fc-private  connected    configured   unknown
c1::220203708b69c32b   disk        connected    configured   unknown
c1::220203708ba7d832   disk        connected    configured   unknown
c1::220203708b8d45f2   disk        connected    configured   unknown
c1::220203708b9b20b2   disk        connected    configured   unknown
c2                     fc-fabric   connected    configured   unknown
c2::50020f2300005f24   disk        connected    unconfigured unknown
c2::50020f2300006107   disk        connected    configured   unknown

Notice that the Occupant column for c0 and c0::50020f2300006077 specifies configured, which indicates that the c0 port has at least one configured occupant and that the c0::50020f2300006077 device is configured. The same change has been made to c2 and c2::50020f2300006107.

When the configure operation completes without error, multipathing-enabled devices are created on the host using the Solaris Operating System. If the physical device represented by c0::50020f2300006077 and c2::50020f2300006107 has multiple SCSI LUNs configured, each LUN is configured as a multipathing device. CODE EXAMPLE 3-5 shows that two LUNs are configured through c0::50020f2300006077 and c2::50020f2300006107. Each Ap_Id is associated with a path to those multipath devices.
CODE EXAMPLE 3-5

show_scsi_lun Output for Two LUNs on a Device

# cfgadm -al -o show_scsi_lun c0::50020f2300006077 c2::50020f2300006107
Ap_Id                    Type   Receptacle   Occupant     Condition
c0::50020f2300006077,0   disk   connected    configured   unknown
c0::50020f2300006077,1   disk   connected    configured   unknown
c2::50020f2300006107,0   disk   connected    configured   unknown
c2::50020f2300006107,1   disk   connected    configured   unknown


To Configure Multiple Devices With Multipathing


Before you configure or remove device nodes, identify the fabric devices by using the procedure To Detect Fabric Devices Visible on a Host on page 32.

In this example, an Ap_Id on a fabric-connected host port is a path to a multipath device. For example, all devices with a path through c2 are to be configured, but none through c0. c2 is an attachment point from the host to the fabric, whereas c2::50020f2300006107 is an attachment point from the storage to the fabric. A host detects all the storage devices in a fabric for which it is configured.

Configuring an Ap_Id for a multipathing device that has already been configured through another Ap_Id results in an additional path to the previously configured device. Note that a new Solaris device is not created in this case. A Solaris device is created only the first time an Ap_Id for a corresponding multipathing device is configured.

1. Become superuser.

2. Identify the fabric-connected host port to be configured.


# cfgadm -al
Ap_Id                  Type        Receptacle   Occupant     Condition
c0                     fc-fabric   connected    configured   unknown
c0::50020f2300006077   disk        connected    configured   unknown
c0::50020f23000063a9   disk        connected    configured   unknown
c1                     fc-private  connected    configured   unknown
c1::220203708b69c32b   disk        connected    configured   unknown
c1::220203708ba7d832   disk        connected    configured   unknown
c1::220203708b8d45f2   disk        connected    configured   unknown
c1::220203708b9b20b2   disk        connected    configured   unknown
c2                     fc-fabric   connected    unconfigured unknown
c2::50020f2300005f24   disk        connected    unconfigured unknown
c2::50020f2300006107   disk        connected    unconfigured unknown

Devices represented by Ap_Ids c0::50020f2300006077 and c2::50020f2300006107 are two paths to the same physical device, with c0::50020f2300006077 already configured.

3. Configure the unconfigured devices on the selected port.

# cfgadm -c configure c2

Note This operation repeats the configure operation for each individual device on c2. This can be time-consuming if the number of devices on c2 is large.

4. Verify that all devices on c2 are configured.
# cfgadm -al
Ap_Id                  Type        Receptacle   Occupant     Condition
c0                     fc-fabric   connected    configured   unknown
c0::50020f2300006077   disk        connected    configured   unknown
c0::50020f23000063a9   disk        connected    configured   unknown
c1                     fc-private  connected    configured   unknown
c1::220203708b69c32b   disk        connected    configured   unknown
c1::220203708ba7d832   disk        connected    configured   unknown
c1::220203708b8d45f2   disk        connected    configured   unknown
c1::220203708b9b20b2   disk        connected    configured   unknown
c2                     fc-fabric   connected    configured   unknown
c2::50020f2300005f24   disk        connected    configured   unknown
c2::50020f2300006107   disk        connected    configured   unknown

Notice that the Occupant column of c2, and of all the devices under c2, is marked as configured.

The show_scsi_lun command displays FCP SCSI LUN information for multiple-LUN SCSI devices.


CHAPTER 4

Configuring Multipathing Support for Storage Boot Devices


This chapter provides information about configuring storage boot devices using the stmsboot command. It contains the following sections:

About the stmsboot Command on page 45
Enabling Multipathing on the Boot Controller Port on page 48
Disabling Multipathing on the Boot Controller Port on page 51

About the stmsboot Command


The stmsboot command allows you to enable or disable multipathing for boot devices, placing them under the control of the scsi_vhci virtual controller. To display the /dev links that the stmsboot command would create, use the stmsboot -l or stmsboot -L command. The stmsboot command applies to boot devices connected to Fibre Channel HBAs supported by Sun and to storage supported by multipathing. You cannot use this feature if the boot device is a SCSI or an IDE device.

Note Do not add or remove devices from your configuration while enabling or disabling boot capability with the stmsboot command.


The syntax of the stmsboot command is as follows:


stmsboot [-e | -d | -L | -l controller-number]

Option                Description

-e                    Enable the enumeration of devices connected to the boot
                      controller port under scsi_vhci (the multipathing
                      virtual controller). Devices on the boot controller port
                      will be enumerated under the scsi_vhci virtual
                      controller and will be controlled by multipathing.

-d                    Disable the enumeration of devices connected to the
                      boot controller port under scsi_vhci. Devices on the
                      boot controller port will be enumerated directly under
                      the physical controller and will not be controlled by
                      multipathing.

-L                    Display the device name changes, from non-multipathing
                      device names to multipathing device names, that would
                      occur on all controllers if multipathing were
                      configured.

-l controller-number  Display the device name changes, from non-multipathing
                      device names to multipathing device names, that would
                      occur for the given controller if multipathing were
                      configured on that controller.

For additional information about the stmsboot command, see the stmsboot(1M) man page.

Considerations for stmsboot Operations


Before performing stmsboot operations, be aware of the following:

Caution After an stmsboot enable operation, do not remove /dev links of any device names that are not multipathing enabled devices for the boot device. This includes, but is not limited to, executing commands such as devfsadm -C. Even though these boot device links are stale, they are needed if you want to disable the stmsboot features later. Removing these links might cause your system to lose the necessary links to the devices that are not multipathing enabled, in which case the system will not boot. You then need to manually create the links by using the mknod command.

You must back up data on the root device prior to performing the stmsboot command.

You must first enable the multipathing software to use any stmsboot command.


Plan to perform an immediate reconfiguration reboot of the host after enabling or disabling multipathing on a boot controller port. Do not perform other system tasks or other activities until you perform the reconfiguration reboot.

Once enabled, the stmsboot feature has a dependency on the boot path. Upon successful completion of the stmsboot -e command, a per-port mpxio-disable entry for the boot controller path is added automatically to the /kernel/drv/fp.conf file. Do not change the boot path manually.

Use the stmsboot -d command to disable a configured boot path. This command automatically updates both the eeprom boot-device and the /kernel/drv/fp.conf file's port-related mpxio-disable entry. This command then allows the system to boot by using the selected path.

If your system fails during an enabling or disabling stmsboot operation, your original /etc/path_to_inst, /etc/vfstab, /etc/system, and /etc/dumpadm.conf files are saved with .sav file name extensions in the /var/tmp directory. The saved files can be useful in recovering from an unexpected system crash.

If any Sun StorEdge T3, 3900, or 6x20 arrays are connected to the boot controller port, modify the settings when prompted to do so by the stmsboot command.

After an stmsboot enable or disable operation, device names change. Disable any volume manager that is actively using devices connected to the boot controller port, and reconfigure these devices with the new multipathing device names. Some applications that use device names that are not multipathing enabled will need to be modified.

Displaying Potential /dev Device Name Changes


You can determine what the potential device name changes will be before performing an stmsboot command. This allows you to modify applications that use the boot controller port after the system is rebooted. To help ease the transition to newly-named boot controller devices, save the display of the device name changes for the boot controller port.


To Display Potential /dev Device Name Changes


Display the potential /dev links for all controllers by using the stmsboot -L command.

# stmsboot -L
Version: 1.8
non-STMS device name    STMS device name
/dev/rdsk/c1t64d0       /dev/rdsk/c9t20000020371A1D48d0
/dev/rdsk/c1t65d0       /dev/rdsk/c9t20000020371A13F3d0
/dev/rdsk/c4t64d0       /dev/rdsk/c9t20000020371A1D48d0
/dev/rdsk/c4t65d0       /dev/rdsk/c9t20000020371A13F3d0

Display the potential /dev links for one controller by using the stmsboot -l controller-number command. For example, to display the device name changes for the boot controller port, obtain the device name (cxtydz) corresponding to the root (/) directory from the /etc/vfstab file, where x is the controller number. In this example, x is equal to 3.

# stmsboot -l 3
Version: 1.8
non-STMS device name    STMS device name
/dev/rdsk/c3t64d0       /dev/rdsk/c9t20000020371A1D48d0
/dev/rdsk/c3t65d0       /dev/rdsk/c9t20000020371A13F3d0
/dev/rdsk/c3t66d0       /dev/rdsk/c9t20000020371A15D0d0
/dev/rdsk/c3t67d0       /dev/rdsk/c9t20000020371A1605d0
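One practical use of the two-column stmsboot -L listing is previewing how names in files such as /etc/vfstab would read after the change. The sketch below builds a sed rename script from the listing and applies it to a copy of a vfstab line; both the listing and the vfstab fragment are made-up sample data, and on a real system stmsboot itself rewrites the system files for you.

```shell
# Sample two-column `stmsboot -L` listing: non-STMS name, STMS name.
listing='/dev/rdsk/c1t64d0 /dev/rdsk/c9t20000020371A1D48d0
/dev/rdsk/c1t65d0 /dev/rdsk/c9t20000020371A13F3d0'

# Hypothetical vfstab fragment, for illustration only.
vfstab='/dev/dsk/c1t64d0s0 /dev/rdsk/c1t64d0s0 / ufs 1 no -'

# Turn each "old new" pair into two sed substitutions: one for the
# raw (rdsk) form and one for the block (dsk) form of the name.
script=$(printf '%s\n' "$listing" | awk '{
    old = $1; new = $2
    print "s|" old "|" new "|g"
    gsub("rdsk", "dsk", old); gsub("rdsk", "dsk", new)
    print "s|" old "|" new "|g"
}')

preview=$(printf '%s\n' "$vfstab" | sed "$script")
echo "$preview"
```

The preview shows how the non-STMS names in the sample line would read under their STMS equivalents, without touching any system file.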

Enabling Multipathing on the Boot Controller Port


To enable multipathing on the boot controller port, perform the following procedure.


To Enable Multipathing on the Boot Controller Port


Before you enable multipathing, see Considerations for stmsboot Operations on page 46.

1. Become superuser.

2. If multipathing is not enabled, perform the procedure described in Enabling Multipathing Globally on page 6.

3. If any applications are running on any device connected to the boot controller port, deactivate them.

4. Verify that no devlinks or disks commands are running. If these commands are running, wait until they are finished.

# ps -elf | grep devlinks
# ps -elf | grep disks

5. Enable the multipathing software on the boot controller port.


# stmsboot -e
Version: 1.8
WARNING!!! Important system files will be modified. The devfsadmd
daemon will be killed. New STMS links for devices connected to the
boot controller port will be created. Please make sure that there
are no processes running devfsadm, devlinks, or disks from now
until the system is rebooted. This includes, but is not limited to,
initiating DR events.
Do you want to continue ? [y/n] (default: n)

6. Type y to continue. The software then prompts you to perform a reconfiguration reboot.


Boot controller is c1
Listed below are the non-STMS device names and their corresponding
new STMS device names.
/dev/rdsk/c1t4d0   /dev/rdsk/c9t60020F200000024A3CDAA4BC00051BE5d0
/dev/rdsk/c1t4d1   /dev/rdsk/c9t60020F200000024A3DCBC93E00095B12d0
/dev/rdsk/c1t4d2   /dev/rdsk/c9t60020F200000024A3DCBC95700049C40d0

WARNING!!! There is at least one Sun StorEdge T3/T3B, or 6x20, that
is not in "mpxio" mode. For the system to function properly, please
set mp_support to "mpxio" mode on the device(s) now and press any
key to continue.

A reconfiguration reboot is mandatory for system sanity. While you
may choose to complete other tasks prior to initiating the reboot,
doing so is not recommended.
Reboot the system now ? [y/n] (default:y)

Note In case of an unexpected error, the stmsboot operation fails, indicates an error condition, and exits. Some useless device links might have been created, but the system files remain unmodified. Correct the error and repeat the operation.

If Sun StorEdge T3, 3900, or 6x20 arrays are connected to the boot controller port, the software prompts you to change the mp_support setting on the storage array appropriately. Refer to your array documentation for additional information.

Note Do not complete other tasks before rebooting your system.


7. Type y to perform a reconfiguration reboot of your system.

8. If necessary, modify applications to use the new multipathing device names.


Disabling Multipathing on the Boot Controller Port


Disabling multipathing on the boot controller port changes the device names from multipathing-recognized device names to device names that are not recognized by multipathing. If any applications are running on any device connected to the boot controller port, deactivate them. You might need to modify some applications after the system is rebooted so that they use the device names that are not recognized by multipathing. The stmsboot disable operation has no effect on your system if you did not previously perform an stmsboot enable operation.

To Disable Multipathing on the Boot Controller Port

1. Become superuser.

2. Verify that no devlinks or disks commands are running. If these commands are running, wait until they have finished.

# ps -elf | grep devlinks
# ps -elf | grep disks

3. Disable device configuration.


# stmsboot -d
Version: 1.5
WARNING!!! Important system files will be modified. The devfsadmd
daemon will be killed. Please make sure that there are no processes
running devfsadm, devlinks, or disks from now on until the system
is rebooted. This includes, but is not limited to, initiating DR
events.
Do you want to continue ? [y/n] (default: n)


4. Type y to continue.
Please choose a device path [1 - 2]
WARNING: Devices connected to the selected device path will be
de-enumerated from scsi_vhci
1) /devices/pci@1f,4000/pci@4/SUNW,qlc@5/fp@0,0
2) /devices/pci@1f,4000/pci@2/SUNW,qlc@4/fp@0,0
Choice [default: 1]:

Note In case of an unexpected error, the stmsboot operation fails, indicates an error condition, and exits. Correct the error and repeat the operation.

5. Type the device path and press Return. This example shows the results for choice 2. The software then prompts you to perform a reconfiguration reboot.

Note If Sun StorEdge T3, 3900, or 6x20 arrays are connected to the boot controller port, the software will prompt you to change the mp_support setting on the storage array appropriately. To ensure that failover works, select the ONLINE path.

Choice [default: 1]: 2
Updated eeprom boot-device to boot through the selected path
/pci@8,600000/SUNW,qlc@2/fp@0,0/disk@w21000004cf721119,0
WARNING!!! There is at least one Sun StorEdge T3/T3B or 6x20 in
"mpxio" mode. For the sane operation of the system, please set
mp_support to "rw" (or any appropriate mode other than "mpxio") on
the device(s) now and press any key to continue.

A reconfiguration reboot is mandatory for system sanity. While you
may choose to complete other tasks prior to initiating the reboot,
doing so is not recommended.
Reboot the system now ? [y/n] (default:y)

6. Type y to perform a reconfiguration reboot of your system.


CHAPTER 5

Configuring IPFC SAN Devices


To configure IP over Fibre Channel (IPFC), you should already have ensured that the hosts recognize the switch and all attached devices. This chapter describes host recognition of IPFC devices and the implementation of IP over Fibre Channel in a SAN. The IPFC driver is based on RFC 2625 and allows IP traffic to run over Fibre Channel. This chapter contains the following topics:

Loading IPFC on page 53
Invoking and Configuring IPFC on page 58

Loading IPFC
Configuration of IPFC depends on the instance of fp, the host bus adapter (HBA) port driver. If multiple HBAs are present, plumb IPFC manually after identifying the fp instance on which IP should be plumbed. The following sections describe the tasks required for identifying fp instances:

IPFC Considerations on page 53
To Plumb an IPFC Instance on page 57

IPFC Considerations
IPFC devices are supported for use with Network File System (NFS) software, Network Attached Storage (NAS) devices, and Sun StorEdge Network Data Replicator (Sun SNDR) software or Sun StorEdge Availability Suite 3.1 remote mirror software.


TABLE 5-1 shows the supported features available for IPFC.

TABLE 5-1    IPFC (NFS/NAS and SNDR) Supported Features

Feature                                  Supported
Cascading                                Yes, with fabric zones only
Zone type                                Fabric zone (with the HBA configured as
                                         an F-port point-to-point connection)
Maximum number of device ports per zone  253

The following restrictions apply:

IPFC is not supported on 1-Gbit Sun switches.

Promiscuous mode is not supported, so the snoop(1M) utility cannot be used.

Multicasting is supported through broadcasting only.

You must assign the IP address of the IPFC port to a subnet different from that of the Ethernet interfaces on the same system.

Network cards using IPFC cannot be used as routers. The /etc/notrouter file must be present on the host.

With IPFC, storage devices and hosts should be in separate zones. The storage device should have one path to one zone and another path to another zone for failover and redundancy. The host can have more than one path to a specified zone, and it should have at least one path to each zone so that it can see the respective storage.

Any standard network command can be used after IPFC is attached. There are no usage differences when these commands (telnet, ping, or ftp) are used in an Ethernet setup.

Turn off power management on servers connected to the SAN to prevent unexpected results as one server attempts to power down a device while another attempts to gain access. See power.conf(1M) for details about power management.
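The subnet restriction above can be checked by comparing the network address of the IPFC port with that of each Ethernet interface. The sketch below is an illustration only; the addresses and netmask are made-up examples rather than values from any procedure in this guide.

```shell
# Compute the network address of an IP under a netmask, octet by octet.
network() {
    # $1 = dotted-quad IP address, $2 = dotted-quad netmask
    set -- $(IFS=.; echo $1) $(IFS=.; echo $2)
    echo "$(( $1 & $5 )).$(( $2 & $6 )).$(( $3 & $7 )).$(( $4 & $8 ))"
}

ipfc_net=$(network 192.9.201.10 255.255.255.0)    # hypothetical IPFC address
ether_net=$(network 192.9.200.70 255.255.255.0)   # hypothetical Ethernet address

if [ "$ipfc_net" != "$ether_net" ]; then
    echo "OK: IPFC subnet $ipfc_net differs from Ethernet subnet $ether_net"
else
    echo "WARNING: both interfaces share subnet $ipfc_net"
fi
```

If the two network addresses match, the IPFC port violates the restriction and should be renumbered into a different subnet.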

Determining Fibre Channel Adapter Port Instances


There are two basic ways to determine Fibre Channel adapter port instances to which IP can be plumbed. The first way requires that you know the world wide name (WWN) of the card. The second way requires you to know the physical location of the card. The procedures include:

To Determine Port Instances from the WWN on page 55


To Determine Port Instances From the Physical Device Path on page 55

To Determine Port Instances from the WWN


1. Become superuser.

2. Determine the fp driver instances in your system. In the example below, there are four instances (0 through 3) of fp present in the system.

# prtconf -v | grep fp
fp (driver not attached)
fp, instance #0
fp (driver not attached)
fp, instance #1
fp (driver not attached)
fp, instance #2
fp (driver not attached)
fp, instance #3

3. Manually load IPFC on each desired fp instance. Use the ifconfig fcip0 plumb command, where the 0 in fcip0 represents the desired fp instance number. For example:
# ifconfig fcip0 plumb

When the command is successful, a message appears on both the console and in the messages file. For example:
Sep 13 15:52:30 bytownite ip: ip: joining multicasts failed (7) on fcip0 - will use link layer broadcasts for multicast

If no other error message is displayed, manual plumbing has succeeded.

To Determine Port Instances From the Physical Device Path


1. Determine the HBA PCI adapter slot and the I/O board PCI slot. You need this information to perform the calculation in Step 2. For example, assume you have an array with an HBA card located in PCI adapter slot 5, and the PCI adapter is in slot 1 of the I/O board. 2. Determine the instance number.


a. Use an editor to search for the fp driver binding name in the /etc/path_to_inst file. Entries have fp on the line.

Note Determine the correct entry by finding the hardware path described in your server hardware manual or Sun System Handbook. The Sun System Handbook is available at http://sunsolve.sun.com/handbook_pub/.
b. Narrow the search by using the I/O board and slot information from Step 1.

Note The following method of deriving the Solaris device path of an HBA from its physical location in a server might not work for all Sun server hardware.

i. Multiply the PCI adapter slot number by the number of adapter ports. For example, if the HBA has two ports, multiply by 2. Using the array with an HBA in PCI adapter slot 5, multiply 5 by 2 to get 10.

ii. Add the PCI adapter I/O board slot number to the number derived in Step i. Using an HBA in PCI adapter slot 5 and PCI slot 1 of the I/O board, add 1 to 10 for a sum of 11.

iii. Convert the number derived in Step ii to hexadecimal. The number 11 converts to b in hexadecimal.

iv. Search for the fp entry with pci@hex, where hex is the number you derived in Step iii.
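The slot arithmetic in Step 2 can be sketched in shell. The slot numbers below match the example in the text; the two-port HBA count is taken from that example and would change for other adapters.

```shell
pci_adapter_slot=5   # HBA's slot on the PCI adapter (from the example)
io_board_slot=1      # PCI adapter's slot on the I/O board (from the example)
ports_per_hba=2      # dual-port HBA, as assumed in the example

# Multiply the adapter slot by the port count, add the I/O board slot,
# then convert to hexadecimal to get the pci@<hex> path component.
decimal=$(( pci_adapter_slot * ports_per_hba + io_board_slot ))
hex=$(printf '%x' "$decimal")

echo "search /etc/path_to_inst for: pci@$hex"
```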
CODE EXAMPLE 5-1 shows a single Fibre Channel network adapter device path. TABLE 5-2 describes the elements of the device path.
CODE EXAMPLE 5-1

PCI Single Fibre Channel Network Adapter Device Path

"/pci@b,2000/SUNW,qlc@2/fp@0,0" 7 "fp"

TABLE 5-2    PCI Single Fibre Channel Network Adapter /etc/path_to_inst Device Path Entry

Entry Item            Entry Value
Physical Name         /pci@b,2000/SUNW,qlc@2/fp@0,0
Instance Number       7
Driver Binding Name   fp


3. Manually plumb each FP instance. Use the ifconfig <interface_number> plumb command. In this example, the value of <interface_number> is fcip7.
# ifconfig fcip7 plumb

When the command is successful, a message appears on both the console and in the messages file. For example:
Sep 13 15:52:30 bytownite ip: ip: joining multicasts failed (7) on fcip0 - will use link layer broadcasts for multicast

To Plumb an IPFC Instance


Each FP instance on the system has an entry in /dev/fc. If HBAs have been removed, some stale links might exist. Use this procedure to load and plumb IPFC.

1. For each entry in /dev/fc, issue a luxadm -e dump_map command to view all the devices that are visible through that HBA port:
# luxadm -e dump_map /dev/fc/fp0
Pos  Port_ID  Hard_Addr  Port WWN          Node WWN          Type
0    610100   0          210000e08b049f53  200000e08b049f53  0x1f (Unknown Type)
1    620d02   0          210000e08b02c32a  200000e08b02c32a  0x1f (Unknown Type)
2    620f00   0          210000e08b03eb4b  200000e08b03eb4b  0x1f (Unknown Type)
3    620e00   0          210100e08b220713  200100e08b220713  0x1f (Unknown Type,Host Bus Adapter)
# luxadm -e dump_map /dev/fc/fp1
No FC devices found. - /dev/fc/fp1

2. Based on the list of devices, determine which destination HBAs are visible to the remote host with which you want to establish IPFC communications. In the example for this procedure, the destination HBAs have port IDs 610100 and 620d02. The originating HBA's port ID is 620e00.
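The sorting in Step 2 can be sketched by filtering the dump_map rows. This is an illustration only: because luxadm is Solaris-only, a few sample rows are embedded as data, and the awk programs simply separate the entry tagged as a Host Bus Adapter (the originating port) from the candidate destinations.

```shell
# Sample rows from `luxadm -e dump_map /dev/fc/fp0`, embedded for
# illustration: Pos, Port_ID, Hard_Addr, Port WWN, Node WWN, Type.
map='0 610100 0 210000e08b049f53 200000e08b049f53 0x1f (Unknown Type)
1 620d02 0 210000e08b02c32a 200000e08b02c32a 0x1f (Unknown Type)
3 620e00 0 210100e08b220713 200100e08b220713 0x1f (Unknown Type,Host Bus Adapter)'

# The row tagged "Host Bus Adapter" is the originating port; every
# other row is a potential destination for IPFC traffic.
origin=$(printf '%s\n' "$map" | awk '/Host Bus Adapter/ { print $2 }')
dests=$(printf '%s\n' "$map" | awk '!/Host Bus Adapter/ { print $2 }')

echo "originating port ID: $origin"
echo destination port IDs: $dests
```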


3. List the physical path of the originating HBA port from which you can see the destination HBA port, where originating-hba-link is a variable for the link determined in the preceding steps.

# ls -l /dev/fc/fporiginating-hba-link

For example, here 0 is the number for the originating-hba-link:


# ls -l /dev/fc/fp0 lrwxrwxrwx 1 root root 51 Sep 4 08:23 /dev/fc/fp0 -> ../../devices/pci@8,600000/SUNW,qlc@1/fp@0,0:devctl

4. Search for the physical path identified in Step 3. You must remove the leading ../../devices from the path name output. For example:
# grep pci@8,600000/SUNW,qlc@1/fp@0,0 /etc/path_to_inst "/pci@8,600000/SUNW,qlc@1/fp@0,0" 0 "fp"

5. Determine the fp instance for the originating HBA port from the output of the command in Step 4. The instance number precedes fp in the output. In the following example output, the instance number is 0.
"/pci@8,600000/SUNW,qlc@1/fp@0,0" 0 "fp"

6. Use the instance number from Step 5 to load IPFC and plumb the IPFC interface. In this example, the instance is 0.
# ifconfig fcip0 plumb
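Steps 3 through 6 above can be sketched as a small script. In the sketch below, the helper name is hypothetical and the /etc/path_to_inst line format is the one shown in Step 4; adjust the device path for your system:

```shell
# Derive the fcip interface name (e.g. fcip0) for an HBA port by looking up
# its fp instance number in /etc/path_to_inst.
get_fcip_interface() {
  devpath="$1"        # e.g. /pci@8,600000/SUNW,qlc@1/fp@0,0
  path_to_inst="$2"   # normally /etc/path_to_inst
  # Lines look like: "/pci@8,600000/SUNW,qlc@1/fp@0,0" 0 "fp"
  inst=$(grep -F "\"$devpath\"" "$path_to_inst" | awk '{print $2}')
  [ -n "$inst" ] && echo "fcip$inst"
}

# Usage (on a live Solaris system):
#   ifconfig "$(get_fcip_interface /pci@8,600000/SUNW,qlc@1/fp@0,0 /etc/path_to_inst)" plumb
```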

Invoking and Configuring IPFC


After installation, start IPFC manually with the ifconfig command. You can also configure the host so that the IPFC network interface starts automatically on subsequent reboots. This section describes both procedures:

58

System Administration Guide: Multipath Configuration January 2005

To Start a Network Interface Manually on page 59
To Configure the Host for Automatic Plumbing Upon Reboot on page 59

To Start a Network Interface Manually


Use this procedure when you want to plumb IPFC with specific netmask values and get the IPFC interface up and running.

1. Use the ifconfig command with the appropriate interface. Ask your network administrator for an appropriate IP address and netmask information. For example, to enable an IPFC interface associated with fp instance 0 and an IP address of 192.9.201.10, type:
# touch /etc/notrouter
# ifconfig fcip0 inet 192.9.201.10 netmask 255.255.255.0 up

The ifconfig command is described in more detail in the ifconfig(1M) man page.

2. Use the command ifconfig -a to verify the network is functioning. The output of ifconfig -a should look like this:
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
fcip0: flags=1001843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,IPv4> mtu 1500 index 2
        inet 192.9.201.10 netmask ffffff00 broadcast 192.9.201.255
        ether 0:e0:8b:1:3c:f7
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.9.200.70 netmask ffffff00 broadcast 192.9.200.255
        ether 8:0:20:fc:e9:49
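A script can confirm that the interface came up by checking for the UP flag in the ifconfig -a output. This is a sketch; the helper name is hypothetical and the output format assumed is the one shown above:

```shell
# Return success if the named interface shows the UP flag in ifconfig -a output.
# Reads the output on stdin so it can also be tested against captured text.
iface_is_up() {
  name="$1"
  awk -v ifc="$name:" '$1 == ifc && $2 ~ /<[^>]*UP/ {found=1} END {exit !found}'
}

# Usage (on a live system):
#   ifconfig -a | iface_is_up fcip0 && echo "fcip0 is up"
```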

To Configure the Host for Automatic Plumbing Upon Reboot


Each network interface must have an /etc/hostname.interface file that defines the host name or IP address associated with it. For example, the IPFC network interface fcip0 has a file named /etc/hostname.fcip0.

1. Manually create a /etc/hostname.interface file with a text editor so it contains a single line that identifies the host name or interface IP address.

2. Use a text editor to make any additional entries to the /etc/inet/hosts file. The Solaris installation program creates the /etc/inet/hosts file with minimum entries; you must manually make additional entries with a text editor. (See the hosts(4) man page for additional information.) The /etc/inet/hosts file contains the hosts database: the host names and the primary network interface IP addresses, as well as the IP addresses of other network interfaces attached to the system and of any other network interfaces that the machine must know about.
CODE EXAMPLE 5-2 shows an example of an /etc/inet/hosts file.
CODE EXAMPLE 5-2   sun1 machine /etc/inet/hosts File

127.0.0.1     localhost loghost
192.9.200.70  sun1   #This is the local host name
192.9.201.10  fcip0  #Interface to network 192.9.201.10

3. Edit the /etc/nsswitch.conf file so that all uncommented entries have the word files before any other name service. The /etc/nsswitch.conf file specifies which name service to use for a particular machine. CODE EXAMPLE 5-3 shows an example of an /etc/nsswitch.conf file.
CODE EXAMPLE 5-3   sun1 machine /etc/nsswitch.conf File

hosts: files nis
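The ordering rule in Step 3 can be checked mechanically. This sketch (the helper name is hypothetical) succeeds only when the hosts entry lists files as the first name service:

```shell
# Succeeds (exit 0) if the hosts entry in an nsswitch.conf-style file lists
# "files" as the first name service, fails otherwise.
hosts_files_first() {
  awk '$1 == "hosts:" {found=1; ok=($2 == "files")}
       END {exit (found && ok) ? 0 : 1}' "$1"
}

# Usage:
#   hosts_files_first /etc/nsswitch.conf || echo "hosts entry does not list files first"
```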


CHAPTER 6

Unconfiguring Fabric Devices


This chapter provides information about unconfiguring fabric devices. It includes:

Unconfiguring Fabric Devices on page 61

Unconfiguring Fabric Devices


Before you unconfigure a fabric device, stop all activity to the device and unmount any file systems on the fabric device. See the administration documentation for the Solaris Operating System for unmounting instructions. If the device is under any volume manager's control, see the documentation for your volume manager before unconfiguring the device.

To Unconfigure a Fabric Device


This procedure describes how to unconfigure a fabric device that is attached to the fabric-connected host port c0.

1. Become superuser.


2. Identify the device to be unconfigured. Only devices on a fabric-connected host port can be unconfigured.
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant    Condition
c0                    fc-fabric   connected   configured  unknown
c0::50020f2300006077  disk        connected   configured  unknown
c0::50020f23000063a9  disk        connected   configured  unknown
c1                    fc-private  connected   configured  unknown
c1::220203708b69c32b  disk        connected   configured  unknown
c1::220203708ba7d832  disk        connected   configured  unknown

3. Unconfigure the fabric device.


# cfgadm -c unconfigure c0::50020f2300006077

4. Verify that the selected fabric device is unconfigured.


# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   configured    unknown
c0::50020f2300006077  disk        connected   unconfigured  unknown
c0::50020f23000063a9  disk        connected   configured    unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
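The verification in Step 4 can also be done programmatically by parsing the cfgadm -al columns. This sketch (the helper name is hypothetical) assumes the five-column layout shown above, where the fourth column is Occupant:

```shell
# Print the Occupant state for a given Ap_Id from cfgadm -al output
# (read on stdin, so it can be tested against captured text).
occupant_state() {
  awk -v id="$1" '$1 == id {print $4}'
}

# Usage (on a live system):
#   cfgadm -al | occupant_state c0::50020f2300006077
```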

To Unconfigure All Fabric Devices on a Fabric-Connected Host Port


This procedure describes how to unconfigure all configured fabric devices that are attached to a fabric-connected host port.

1. Become superuser.


2. Identify the fabric devices to be unconfigured. Only devices on a fabric-connected host port can be unconfigured.
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant    Condition
c0                    fc-fabric   connected   configured  unknown
c0::50020f2300006077  disk        connected   configured  unknown
c0::50020f23000063a9  disk        connected   configured  unknown
c1                    fc-private  connected   configured  unknown
c1::220203708b69c32b  disk        connected   configured  unknown
c1::220203708ba7d832  disk        connected   configured  unknown

3. Stop all activity to each fabric device on the selected port and unmount any file systems on each fabric device. If the device is under any volume manager's control, see the documentation for your volume manager before unconfiguring the device.

4. Unconfigure all of the configured fabric devices on the selected port.

# cfgadm -c unconfigure c0

Note - This operation repeats the unconfigure operation of an individual device for all the devices on c0. This can be time-consuming if the number of devices on c0 is large.

5. Verify that all the devices on c0 are unconfigured.
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   unconfigured  unknown
c0::50020f2300006077  disk        connected   unconfigured  unknown
c0::50020f23000063a9  disk        connected   unconfigured  unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown

Notice that the Occupant column of c0 and all the fabric devices attached to it are displayed as unconfigured.
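Because unconfiguring a port with many devices can be time-consuming, it can be useful to count how many occupants are still configured before and after the operation. A sketch, again assuming the cfgadm -al column layout shown above (the helper name is hypothetical):

```shell
# Count Ap_Ids under a controller (e.g. c0) whose Occupant column reads
# "configured". Reads cfgadm -al output on stdin.
count_configured() {
  awk -v c="$1" 'index($1, c "::") == 1 && $4 == "configured" {n++} END {print n+0}'
}

# Usage (on a live system):
#   cfgadm -al | count_configured c0
```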


To Unconfigure a Fabric Device Associated With Multipathing Enabled Devices


This procedure shows fabric-connected host ports c0 and c2 to illustrate how to unconfigure fabric devices associated with multipathing devices.

1. Become superuser.

2. Identify the port WWN of the fabric device to be unconfigured.

# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant    Condition
c0                    fc-fabric   connected   configured  unknown
c0::50020f2300006077  disk        connected   configured  unknown
c0::50020f23000063a9  disk        connected   configured  unknown
c1                    fc-private  connected   configured  unknown
c1::220203708b69c32b  disk        connected   configured  unknown
c1::220203708ba7d832  disk        connected   configured  unknown
c2                    fc-fabric   connected   configured  unknown
c2::50020f2300005f24  disk        connected   configured  unknown
c2::50020f2300006107  disk        connected   configured  unknown

In this example, the c0::50020f2300006077 and c2::50020f2300006107 Ap_Ids represent different port WWNs for the same device associated with a multipathing device. The c0 and c2 host ports are enabled for use by the multipathing software.

3. Stop all device activity to each fabric device on the selected port and unmount any file systems on each fabric device. If the device is under any volume manager's control, see the documentation for your volume manager for maintaining the fabric device.

4. Unconfigure the fabric devices associated with the multipathing device. Only devices on a fabric-connected host port can be unconfigured through the cfgadm -c unconfigure command.
# cfgadm -c unconfigure c0::50020f2300006077 c2::50020f2300006107

Note - You can remove a device from up to eight paths individually, as in the example command cfgadm -c unconfigure c0::1111 c1::2222 c3::3333, and so on. As an alternative, you can remove an entire set of paths from the host, as in the example cfgadm -c unconfigure c0.


5. Verify that the selected devices are unconfigured.


# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   configured    unknown
c0::50020f2300006077  disk        connected   unconfigured  unknown
c0::50020f23000063a9  disk        connected   configured    unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c2                    fc-fabric   connected   configured    unknown
c2::50020f2300005f24  disk        connected   configured    unknown
c2::50020f2300006107  disk        connected   unconfigured  unknown

Notice that the Ap_Ids c0::50020f2300006077 and c2::50020f2300006107 are unconfigured. The Occupant column of c0 and c2 still displays those ports as configured because they have other configured occupants. The multipath devices associated with the Ap_Ids c0::50020f2300006077 and c2::50020f2300006107 are no longer available to the host using the Solaris Operating System. The following two multipath devices are removed from the host:
/dev/rdsk/c6t60020F20000061073AC8B52D000B74A3d0s2
/dev/rdsk/c6t60020F20000061073AC8B4C50004ED3Ad0s2

To Unconfigure One Path to a Multipathing Device


In contrast to the procedure in the preceding section, this procedure shows how to unconfigure one path, c2::50020f2300006107, and leave the other path, c0::50020f2300006077, configured. Only devices on a fabric-connected host port can be unconfigured through the cfgadm -c unconfigure command.

1. Become superuser.


2. Identify the Ap_Id of the multipathing device to be unconfigured.


# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant    Condition
c0                    fc-fabric   connected   configured  unknown
c0::50020f2300006077  disk        connected   configured  unknown
c0::50020f23000063a9  disk        connected   configured  unknown
c1                    fc-private  connected   configured  unknown
c1::220203708b69c32b  disk        connected   configured  unknown
c1::220203708ba7d832  disk        connected   configured  unknown
c2                    fc-fabric   connected   configured  unknown
c2::50020f2300005f24  disk        connected   configured  unknown
c2::50020f2300006107  disk        connected   configured  unknown

In this example, the c0::50020f2300006077 and c2::50020f2300006107 Ap_Ids represent different port WWNs for the same device.

3. Unconfigure the Ap_Id associated with the multipathing device.

Note - If the Ap_Id represents the last configured path to the device, stop all activity to the path and unmount any file systems on it. If the multipathing device is under any volume manager's control, see the documentation for your volume manager for maintaining the fabric device.

In the example that follows, the path represented as c2::50020f2300006107 is unconfigured, and c0::50020f2300006077 remains configured to show how you can unconfigure just one of multiple paths for a multipathing device.
# cfgadm -c unconfigure c2::50020f2300006107

4. Verify that the selected path c2::50020f2300006107 is unconfigured.


# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   configured    unknown
c0::50020f2300006077  disk        connected   configured    unknown
c0::50020f23000063a9  disk        connected   configured    unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c2                    fc-fabric   connected   configured    unknown
c2::50020f2300005f24  disk        connected   configured    unknown
c2::50020f2300006107  disk        connected   unconfigured  unknown


The multipathing devices associated with that Ap_Id are still available to a host using the Solaris Operating System through the other path, represented by c0::50020f2300006077. A device can be connected to multiple Ap_Ids and an Ap_Id can be connected to multiple devices.
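When a device is reachable through several Ap_Ids, you can list which paths remain configured for a given port WWN before deciding whether it is safe to unconfigure another one. A sketch over cfgadm -al output (the helper name is hypothetical):

```shell
# Print every Ap_Id whose WWN part matches the argument and whose Occupant
# column is "configured". Reads cfgadm -al output on stdin.
configured_paths() {
  awk -v wwn="$1" '$1 ~ ("::" wwn "$") && $4 == "configured" {print $1}'
}

# Usage (on a live system):
#   cfgadm -al | configured_paths 50020f2300006077
```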

To Unconfigure All Fabric-Connected Devices With Multipathing Enabled


An Ap_Id on a fabric-connected host port is a path to a multipathing device. When a multipathing device has multiple Ap_Ids connected to it, the device is still available to the host using the Solaris Operating System after you unconfigure an Ap_Id. After you unconfigure the last Ap_Id, no additional paths remain and the multipathing device is unavailable to the host using the Solaris Operating System. Only devices on a fabric-connected host port can be unconfigured.

1. Become superuser.

2. Identify the devices to be unconfigured.

# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant    Condition
c0                    fc-fabric   connected   configured  unknown
c0::50020f2300006077  disk        connected   configured  unknown
c0::50020f23000063a9  disk        connected   configured  unknown
c1                    fc-private  connected   configured  unknown
c1::220203708b69c32b  disk        connected   configured  unknown
c1::220203708b9b20b2  disk        connected   configured  unknown
c2                    fc-fabric   connected   configured  unknown
c2::50020f2300005f24  disk        connected   configured  unknown

3. Unconfigure all of the configured devices on the selected port.


# cfgadm -c unconfigure c2

Note - This operation repeats the unconfigure command of an individual device for all devices on c2. This can be time-consuming if the number of devices on c2 is large.


4. Verify that all devices on c2 are unconfigured. Notice that the Occupant column lists c2 and all the devices attached to c2 as unconfigured.
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   configured    unknown
c0::50020f2300006077  disk        connected   configured    unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c2                    fc-fabric   connected   unconfigured  unknown
c2::50020f2300005f24  disk        connected   unconfigured  unknown
c2::50020f2300006107  disk        connected   unconfigured  unknown


APPENDIX A

Multipathing Configuration Samples


This appendix describes two different types of configurations:

A system that does not use multipathing driver software
A system that uses multipathing driver software

Topics in this section include:


About Multipathing Configuration Samples on page 69
Configuration without Multipathing on page 70
Configuration with Multipathing on page 71

About Multipathing Configuration Samples


Multipathing device enumeration differs from legacy device enumeration in that only one device path is shown per device, regardless of the number of paths. Storage devices such as the Sun StorEdge A3x00FC have their own multipathing solution. Multipathing can coexist with such storage devices. However, these devices are not enumerated under multipathing and work the same as if multipathing were not installed.

Case Study: A host has the following storage attached:

Sun StorEdge T3 partner pair with 4 LUNs.
Sun StorEdge A5200 with both A and B loop connected.


Configuration without Multipathing


Before multipathing was installed and configured, the Sun StorEdge T3 mp_support was set to rw. The output is different from the configuration that includes multipathing.

Utility: format
The output for a configuration that does not use multipathing is as follows:
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
  0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
     /pci@1f,4000/scsi@3/sd@0,0
  1. c2t3d0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
     /pci@1f,4000/SUNW,qlc@2/fp@0,0/ssd@w50020f23000042d4,0
  2. c2t3d1 <SUN-T300-0116 cyl 34145 alt 2 hd 24 sec 128>
     /pci@1f,4000/SUNW,qlc@2/fp@0,0/ssd@w50020f23000042d4,1
  3. c2t3d2 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
     /pci@1f,4000/SUNW,qlc@2/fp@0,0/ssd@w50020f23000042d4,2
  4. c2t3d3 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
     /pci@1f,4000/SUNW,qlc@2/fp@0,0/ssd@w50020f23000042d4,3
  5. c3t4d0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
     /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300003fad,0
  6. c3t4d1 <SUN-T300-0116 cyl 34145 alt 2 hd 24 sec 128>
     /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300003fad,1
  7. c3t4d2 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
     /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300003fad,2
  8. c3t4d3 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
     /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300003fad,3
  9. c4t68d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
     /pci@1f,2000/pci@1/SUNW,qlc@4/fp@0,0/ssd@w22000020371a1862,0
  ...
Specify disk (enter its number): ^D


Configuration with Multipathing


After multipathing is installed and configured, Sun StorEdge T3 mp_support is set to mpxio.

Utility: format
The output of format looks as follows:
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
  0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
     /pci@1f,4000/scsi@3/sd@0,0
  1. c6t60020F20000042D43ADCBC4E000C41E2d0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
     /scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2
  2. c6t60020F20000042D43B0E926A000AA3FCd0 <SUN-T300-0116 cyl 34145 alt 2 hd 24 sec 128>
     /scsi_vhci/ssd@g60020f20000042d43b0e926a000aa3fc
  3. c6t60020F20000042D43B2753510008C9DFd0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
     /scsi_vhci/ssd@g60020f20000042d43b2753510008c9df
  4. c6t60020F20000042D43B275377000877DDd0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
     /scsi_vhci/ssd@g60020f20000042d43b275377000877dd
  5. c6t20000020371A1862d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
     /scsi_vhci/ssd@g20000020371a1862
Specify disk (enter its number): ^D

Consider the following notes:

Devices enumerated under multipathing have a /scsi_vhci/ssd entry.
The first four scsi_vhci entries are the four T3 LUNs, which now have a long name that is a Global Unique Identifier (GUID). Only one entry per LUN is visible instead of two paths.
The next scsi_vhci entry is a disk in the Sun StorEdge A5200 array.
There is only one controller number assigned for all the devices encapsulated under multipathing.
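The GUID embedded in a /scsi_vhci device path can be extracted for cross-referencing with luxadm output. A sketch assuming the ssd@g<guid> naming shown above (the helper name is hypothetical):

```shell
# Extract the GUID from a /scsi_vhci device path such as
#   /scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2
guid_of() {
  expr "$1" : '.*ssd@g\([0-9a-f]*\)'
}

# Usage:
#   guid_of /scsi_vhci/ssd@g20000020371a1862
```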


Use luxadm display to identify the mapping between device entries without multipathing and the device entries with multipathing.

Utility: luxadm probe


The luxadm probe command now shows the WWN for the Sun StorEdge A5200 and the GUID for the T3.
# luxadm probe
Found Enclosure: SENA  Name:f  Node WWN:50800200000777d0  Logical Path:/dev/es/ses0
Found Fibre Channel device(s):
  Node WWN:50020f20000042d4  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t60020F20000042D43B275377000877DDd0s2
  Node WWN:50020f20000042d4  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t60020F20000042D43B2753510008C9DFd0s2
  Node WWN:50020f20000042d4  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t60020F20000042D43B0E926A000AA3FCd0s2
  Node WWN:50020f20000042d4  Device Type:Disk device
    Logical Path:/dev/rdsk/c6t60020F20000042D43ADCBC4E000C41E2d0s2

Utility: luxadm display


The luxadm display command has been enhanced for multipathing. For each entry in format or luxadm probe, the luxadm display output indicates the following:

The number of paths to the storage device.
The mapping of the paths prior to multipathing and after multipathing (under the Path(s) controller and device address).
The state of each path:

ONLINE indicates the active path(s) through which IO is going to the device. If there is more than one ONLINE path, multipathing uses load balancing (such as a round-robin scheme) or single IO to the device, depending on the setting of the load-balance variable in the /kernel/drv/scsi_vhci.conf file.
STANDBY indicates the path is available if an ONLINE path fails or is switched to another state. There can be many STANDBY paths. If a STANDBY path is chosen to be the active path for routing IO, its status changes to ONLINE.
OFFLINE indicates the path(s) previously existed but is not available now.

The class type of each path:

PRIMARY: This path is the preferred path for routing IO.


SECONDARY: This path is the next priority path after PRIMARY.

If a Sun StorEdge T3 partner pair configuration is used, two paths exist for each LUN. One path is ONLINE and the other is STANDBY. In this configuration, IO to the LUN is active on its ONLINE path. If this path fails, the STANDBY path becomes the ONLINE path. However, if the first path later becomes available and auto-failback was disabled throughout the configuration, it goes into STANDBY mode instead of ONLINE, thus saving an expensive failover operation. Use the luxadm failover command to bring the restored path back to ONLINE (see the luxadm man pages). There can be more than one primary and secondary path to a Sun StorEdge T3, 39x0, or 6x20 device in a SAN environment. As can be seen from the following two luxadm display outputs of some T3 LUNs, PRIMARY can be STANDBY and SECONDARY can be ONLINE for one LUN, whereas SECONDARY is STANDBY and PRIMARY is ONLINE for another LUN on the same physical paths.
# luxadm display 50020f23000007a4
DEVICE PROPERTIES for disk: 50020f23000007a4
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 9217.688 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D43EE74C4B0000C910d0s2
  /devices/scsi_vhci/ssd@g60020f20000007d43ee74c4b0000c910:c,raw
   Controller                /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address           50020f23000007d4,0
    Host controller port WWN 200000017380a66e
    Class                    primary
    State                    ONLINE
   Controller                /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address           50020f23000007a4,0
    Host controller port WWN 210000e08b0a9675
    Class                    secondary
    State                    STANDBY


# luxadm display /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 35846.125 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d44002e60f0002d8ca:c,raw
   Controller                /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address           50020f23000007d4,7
    Host controller port WWN 200000017380a66e
    Class                    primary
    State                    ONLINE
   Controller                /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address           50020f23000007a4,7
    Host controller port WWN 210000e08b0a9675
    Class                    secondary
    State                    STANDBY


In the case of the Sun StorEdge A5200 in the current configuration, there are two paths. Both are ONLINE and IO is load balanced across both paths. If a path fails, the second path continues the IO. If the failed path comes back, it returns to the ONLINE state and resumes participating in IO transfer if load balancing is enabled.
# luxadm display /dev/rdsk/c6t20000020371A1862d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c6t20000020371A1862d0s2
  Status(Port A):       O.K.
  Status(Port B):       O.K.
  Vendor:               SEAGATE
  Product ID:           ST136403FSUN36G
  WWN(Node):            20000020371a1862
  WWN(Port A):          21000020371a1862
  WWN(Port B):          22000020371a1862
  Revision:             114A
  Serial Num:           LT0187150000
  Unformatted capacity: 34732.891 MBytes
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0xffff
  Location:             In the enclosure named: f
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c6t20000020371A1862d0s2
  /devices/scsi_vhci/ssd@g20000020371a1862:c,raw
   Controller           /devices/pci@1f,2000/pci@1/SUNW,qlc@4/fp@0,0
    Device Address      22000020371a1862,0
    Class               primary
    State               ONLINE
   Controller           /devices/pci@1f,2000/pci@1/SUNW,qlc@5/fp@0,0
    Device Address      21000020371a1862,0
    Class               primary
    State               ONLINE
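The Class and State pairs in luxadm display output can be summarized per path, which is handy when checking many LUNs. A sketch that assumes the listing format shown above (the helper name is hypothetical):

```shell
# Print "class state" for each path listed in luxadm display output
# (read on stdin). Pairs each Class line with the State line that follows it.
path_states() {
  awk '$1 == "Class" {cls=$2} $1 == "State" {print cls, $2}'
}

# Usage (on a live system):
#   luxadm display /dev/rdsk/c6t20000020371A1862d0s2 | path_states
```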

Utility: luxadm failover


The luxadm failover command is used to fail over LUNs from primary to secondary paths and vice versa. In the case of a Sun StorEdge T3 partner pair, 6x20 HA, or 39x0 configuration, when a failover happens (for example, a cable is removed from one storage controller), the LUNs owned by that controller are failed over to the alternate controller.


After a failure of one path is corrected, the LUN does not fail back to the original configuration automatically if auto-failback is set to "disable" in /kernel/drv/scsi_vhci.conf. A luxadm failover subcommand must be issued to perform failover to the original configuration. The failover process occurs as follows:

1. The original state of a Sun StorEdge T3 LUN is obtained with the luxadm display command.
# luxadm display /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  Status(Port A):       O.K.
  Status(Port B):       O.K.
  Vendor:               SUN
  Product ID:           T300
  WWN(Node):            50020f20000042d4
  WWN(Port A):          50020f23000042d4
  WWN(Port B):          50020f2300003fad
  Revision:             0117
  Serial Num:           Unsupported
  Unformatted capacity: 51220.500 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  /devices/scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2:c,raw
   Controller           /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000042d4,0
    Class               primary
    State               ONLINE
   Controller           /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0
    Device Address      50020f2300003fad,0
    Class               secondary
    State               STANDBY

Note The primary path is ONLINE and the secondary path is STANDBY.


2. The cable is pulled from T3 controller 50020f23000042d4. Failover is triggered: the primary path is OFFLINE and the secondary path is ONLINE. The LUN status is now degraded.
# luxadm display /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  ..
  Path(s):
  /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  /devices/scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2:c,raw
   Controller           /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000042d4,0
    Class               primary
    State               OFFLINE
   Controller           /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0
    Device Address      50020f2300003fad,0
    Class               secondary
    State               ONLINE

3. The cable is reinserted into T3 controller 50020f23000042d4. The device state becomes optimal, but failover is not triggered. The primary path comes up as STANDBY and the secondary path is still ONLINE.
# luxadm display /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  ..
  Path(s):
  /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  /devices/scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2:c,raw
   Controller           /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000042d4,0
    Class               primary
    State               STANDBY
   Controller           /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0
    Device Address      50020f2300003fad,0
    Class               secondary
    State               ONLINE


4. Type the luxadm failover command to fail over to the primary path.


# luxadm failover primary /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
# luxadm display /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  ..
  Path(s):
  /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  /devices/scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2:c,raw
   Controller           /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000042d4,0
    Class               primary
    State               ONLINE
   Controller           /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0
    Device Address      50020f2300003fad,0
    Class               secondary
    State               STANDBY

This triggers the failover. The primary path becomes ONLINE and the secondary path becomes STANDBY, which is equivalent to the original state in Step 1.

5. To verify the failover operation, display the properties using the luxadm display command.
# luxadm display /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  ..
  Path(s):
  /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  /devices/scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2:c,raw
   Controller           /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0
    Device Address      50020f23000042d4,0
    Class               primary
    State               ONLINE
   Controller           /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0
    Device Address      50020f2300003fad,0
    Class               secondary
    State               STANDBY

For more details on the luxadm failover command, see the man pages.
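With auto-failback disabled, deciding whether a manual failback is needed amounts to checking whether the primary path is in STANDBY. A sketch of that check (the helper name is hypothetical; it parses the Class and State lines of luxadm display output):

```shell
# Succeeds (exit 0) if the primary path of a device is in STANDBY, that is,
# a manual failback with "luxadm failover primary <device>" is needed.
# Reads luxadm display output on stdin.
needs_failback() {
  awk '$1 == "Class" {cls=$2}
       $1 == "State" && cls == "primary" {found=1; standby=($2 == "STANDBY")}
       END {exit (found && standby) ? 0 : 1}'
}

# Usage (on a live system):
#   luxadm display "$dev" | needs_failback && luxadm failover primary "$dev"
```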


APPENDIX B

Supported FC-HBA API


This appendix lists the supported FC-HBA API. For further information regarding the API, refer to Fibre Channel and Multipathing Features on page 1 under FC-HBA and FCSM Packages.
TABLE 0-1   Supported and Unsupported FC-HBA Interfaces

SNIA API                              Software Support
HBA_GetVersion                        Yes
HBA_LoadLibrary                       Yes
HBA_FreeLibrary                       Yes
HBA_GetNumberofAdapters               Yes
HBA_GetAdapterName                    Yes
HBA_OpenAdapter                       Yes
HBA_CloseAdapter                      Yes
HBA_GetAdapterAttributes              Yes
HBA_GetAdapterPortAttributes          Yes
HBA_GetDiscoveredPortAttributes       Yes
HBA_GetPortAttributesbyWWN            Yes
HBA_SendCTPassThru                    Yes
HBA_SendCTPassThruV2                  Yes
HBA_RefreshInformation                Yes
HBA_GetFcpTargetMapping               Yes
HBA_SendScsiInquiry                   Yes
HBA_SendReportLuns                    Yes
HBA_SendReadCapacity                  Yes


TABLE 0-1   Supported and Unsupported FC-HBA Interfaces (Continued)

SNIA API                              Software Support
HBA_GetPortStatistics                 Yes
HBA_ResetStatistics                   No
HBA_GetFcpPersistentBinding           No
HBA_GetEventBuffer                    No
HBA_SetRNIDMgmtInfo                   Yes
HBA_GetRNIDMgmtInfo                   Yes
HBA_SendRNID                          Yes
HBA_SendRNIDV2                        Yes
HBA_ScsiInquiryV2                     Yes
HBA_ScsiReportLUNsV2                  Yes
HBA_ScsiReadCapacityV2                Yes
HBA_OpenAdapterByWWN                  Yes
HBA_RefreshAdapterConfiguration       Yes
HBA_GetVendorLibraryAttributes        Yes
HBA_GetFcpTargetMappingV2             Yes
HBA_SendRPL                           No
HBA_SendRPS                           No
HBA_SendSRL                           No
HBA_SendLIRR                          No
HBA_SendRLS                           Yes
HBA_RemoveCallback                    Yes
HBA_RegisterForAdapterEvents          Yes
HBA_RegisterForAdapterAddEvents       Yes
HBA_RegisterForAdapterPortEvents      Yes
HBA_RegisterForAdapterPortStatEvents  No
HBA_RegisterForTargetEvents           Yes
HBA_RegisterForAdapterLinkEvents      No
HBA_RegisterForAdapterTargetEvents    Yes
HBA_GetFC4Statistics                  No
HBA_GetFCPStatistics                  No


TABLE 0-1   Supported and Unsupported FC-HBA Interfaces (Continued)

SNIA API                              Software Support
HBA_GetBindingCapability              No
HBA_GetBindingSupport                 No
HBA_SetBindingSupport                 No
HBA_SetPersistentBindingV2            No
HBA_GetPersistentBindingV2            No
HBA_RemovePersistentBinding           No
HBA_RemoveAllPersistentBindings       No



APPENDIX C

Zones and Ports


Understanding zoning and port usage is fundamental to understanding the use of configuration rules with the supported hardware. This appendix explains the use of zones and ports in preparation for the next section, which covers the configuration rules. Topics covered include:

Zone Types on page 83
Port Types on page 84

Zone Types
Zoning is a function of the switch that allows segregation of devices by ports or world wide names (WWNs). You can create zones for a variety of reasons, such as security, simplicity, performance, or dedication of resources. The Solaris 10 OS software supports both industry-standard port-based and WWN-based zones. See your third-party vendor documentation for more information. There are two main types of zones:

- Name Server Zones - NS zones use fabric protocols to communicate with Fibre Channel devices. Each NS zone defines which ports or devices receive NS information. The Sun StorEdge T3 array with firmware level 1.18.02 or higher and the Sun StorEdge T3+ array with firmware level 2.1.04 or higher support translation loop (TL) port connections. The Sun StorEdge T3+ array with firmware level 2.1.04 or higher also supports fabric connections. FL ports are supported only for Sun StorEdge L180/L700 tape libraries. Refer to your switch documentation for more information.

- Segmented Loop Zones - The Solaris 10 OS software does not support Segmented Loop (SL) zones or ports.


Port Types
TABLE C-1 lists port types for Sun switches.

TABLE C-1  Sun Switch Port Types

Port Type  Description            Supported Devices
TL Ports   Translation loop       Storage devices connected to the Sun switch only.
FL Ports   Public loop            Sun StorEdge L180/L700 tape libraries.
F Ports    Point-to-point fabric  Host bus adapters, storage devices.
E Ports    Inter-switch port      Cascaded switches acting as ISLs, which are
                                  configured initially in fabric port mode.
G Ports    General ports          Automatically configured to E or F ports to
                                  support switches or fabric devices. All switch
                                  ports should be set to G-port, except for tape
                                  libraries that do not support F-port; see GL
                                  ports below.
Gx Ports   Public loop or         Automatically configured to FL or G ports to
           general ports          support hosts or switches (Sun StorEdge Network
                                  2 Gbit McDATA Sphereon 4300 and 4500 switches).
GL Ports   General loop ports     Automatically configured to FL, E, or F ports
                                  to support public loop, point-to-point, or
                                  switch devices. This port type is used only for
                                  setting L180/L700 tape libraries to FL.

When an array is configured, the host port is connected to an F port and the array is connected to an F or TL port on the switch. The TL (translation loop) port represents eight-bit addressing devices as 24-bit addressing devices and vice versa.
TABLE C-2  Port Types for Storage Devices

Port Type                           Storage Device
loop (TL port - Sun switches only)  Sun StorEdge T3 array
loop or fabric (F port)             Sun StorEdge T3+ array, Sun StorEdge 39x0 array
fabric                              Sun StorEdge 69x0 array
loop or fabric                      Sun StorEdge 6x20 array
loop                                Sun StorEdge 99x0 array
loop or fabric                      STK 9840b tape drive, STK 9940b tape drive
loop                                STK 9840 tape drive
loop, public loop (FL port)         Sun StorEdge L180/L700 tape libraries

Although you can connect a Sun StorEdge T3 array with a TL port, the host bus adapter recognizes it as a fabric device. Sun StorEdge T3+ arrays and the Sun StorEdge 3510FC, 39x0, 69x0, and 99x0 storage arrays should be connected with F ports as 24-bit addressing devices for fabric connectivity. The STK 9840B tape drive requires F ports when connected to 2-Gbit switches. Sun StorEdge L5500/L6000 libraries are not connected to the SAN.


APPENDIX D

Implementing Sun StorEdge SAN Software Dynamic Reconfiguration


When you want to modify your system configuration with Dynamic Reconfiguration (DR), you must change the device configuration for SAN-based devices prior to implementing DR.

Dynamic Reconfiguration
Dynamic reconfiguration (DR) works differently for non-fabric and fabric devices. With previously configured FC-AL devices, DR happens automatically upon addition or removal of devices to a host I/O port. With multipathing enabled, the Solaris Operating System host configures the devices as multipathing devices.

Dynamic Reconfiguration and Fabric Devices


When configuring a fabric device, you will need to modify the devices before implementing DR. Previously configured fabric devices are not automatically reconfigured when DR is implemented. When you remove a system component on which switch-connected host ports reside, and then add that system component back to a host through DR operations, the fabric device configurations are not persistent. The discussion of on-demand node creation in these sections applies to fabric devices, such as a host port connected to an F port on a switch and an array connected to an F port or TL port on a switch.


To Remove a Fabric Device Before Dynamic Reconfiguration


Unconfigure the fabric devices that were configured through host ports on the system component with on-demand node creation.

If multipathing is not enabled, see "Unconfiguring Fabric Devices" on page 61. If multipathing is enabled, see "To Unconfigure a Fabric Device Associated With Multipathing Enabled Devices" on page 64.

To Maintain a Fabric Device Configuration With Dynamic Reconfiguration

1. Reconfigure the device through on-demand node creation.

2. Perform DR operations according to the instructions in the documentation for the host.

To Reconfigure Fabric Devices With Dynamic Reconfiguration

1. Add the system component and make it available to the host.

2. Reconfigure the device(s) through on-demand node creation.

If multipathing is not enabled, see "Configuring Device Nodes Without Multipathing Enabled" on page 33. If multipathing is enabled, see "Configuring Device Nodes With Multipathing Enabled" on page 40.
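Before reconfiguring, it helps to see which attachment points are fabric-connected. The sketch below filters cfgadm-style output for fc-fabric attachment points; the listing here is a canned sample (the Ap_Ids are invented), not output from a live system where you would pipe `cfgadm -al` directly:

```shell
# Pick the fc-fabric attachment points out of a sample cfgadm -al listing,
# reporting each Ap_Id with its occupant state.
listing='Ap_Id                Type         Receptacle   Occupant     Condition
c0                   scsi-bus     connected    configured   unknown
c1                   fc-fabric    connected    configured   unknown
c2                   fc-fabric    connected    unconfigured unknown'
echo "$listing" | awk '$2 == "fc-fabric" { print $1, $4 }'
# prints: c1 configured
#         c2 unconfigured
```

On a live Solaris host the same filter would run as `cfgadm -al | awk '$2 == "fc-fabric" { print $1, $4 }'`.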

To Reconfigure the Sun Enterprise 10000 Server With a Fabric Connection


The following procedure gives the sequence of operations for a Sun Enterprise 10000 server board with a fabric connection.

1. Unconfigure the fabric devices on fabric-connected host ports on the board to be detached.


2. Start the DR detach operations for the board. See the Sun Enterprise 10000 Dynamic Reconfiguration Configuration Guide.

3. Start the DR attach operations when the board is ready. See the Sun Enterprise 10000 Dynamic Reconfiguration Configuration Guide.

4. Configure any fabric devices on the attached boards. See the sections in Chapter 3 that explain how to recognize the storage devices on the host. On the newly attached board, the devices could be the same or completely new devices.


APPENDIX E

Multipathing Troubleshooting
This appendix provides solutions to potential problems that may occur while running multipathing. It contains the following sections:

- "System Crashes During Boot Operations" on page 92
- "System Crashes During or After a Boot Enable Operation" on page 92
- "System Crashes During or After a Multipathing Boot Disable Operation" on page 94
- "Sun StorEdge T3, 6x20, or 3900 Arrays Do Not Show" on page 96
- "System Failed During Boot With scsi_vhci Attachment" on page 97
- "Connected Sun StorEdge A5200 Arrays Appear Under Physical Paths in Format" on page 97
- "System and Error Messages" on page 98


System Crashes During Boot Operations


If your system fails during the boot enable (stmsboot -e) or disable (stmsboot -d) operation, your original /etc/path_to_inst, /etc/vfstab, /etc/system, and /etc/dumpadm.conf files are saved with .sav file name extensions in the /var/tmp directory. Other important debug information is also saved in this directory.

System Crashes During or After a Boot Enable Operation


If your system crashes or panics during or after a boot enable (stmsboot -e) operation with messages similar to the following:

Cannot assemble drivers for root /scsi_vhci/ssd@g20000004cf721119:a
Cannot mount root on /scsi_vhci/ssd@g20000004cf721119:a fstype ufs

or

cannot stat /dev/rdsk/...

then check the following:

1. If the boot device is a Sun StorEdge T3 RAID array LUN, did you change the mp_support setting to mpxio mode? If the change was not made before the system reboot, make it now, then shut down and restart the system.

2. Are you specifying a different boot path manually? stmsboot has a dependency on the path through which the system boots. See Chapter 4, Configuring Multipathing Support for Storage Boot Devices.

3. Did you have a per-port mpxio-disable entry corresponding to the boot controller port in the qlc.conf file? If so, remove the corresponding qlc.conf entry.

4. Boot the system from another disk or over the network, and mount the boot device on /mnt.


- Verify that /mnt/etc/vfstab is the same as /var/tmp/vfstab.sav. Are all the boot controller device names shown as multipathing device names, compared with /var/tmp/vfstab.sav?
- Verify that /mnt/etc/path_to_inst is the same as /var/tmp/path_to_inst.sav. This file should contain a unique ssd instance for each scsi_vhci/ssd@gGUID device.
- The /mnt/etc/system file should have a rootdev entry for the boot device of scsi_vhci/ssd@gGUID. For example:

rootdev:/scsi_vhci/ssd@g20000004cf721119:a

- The /mnt/kernel/drv/fp.conf file should have the boot controller port entry set to mpxio-disable="no".
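The file comparisons above can be scripted. This is a sketch only: the helper name is made up, and the scratch files merely stand in for /mnt/etc/vfstab and /var/tmp/vfstab.sav:

```shell
# compare_with_saved: report whether a live file still matches the copy
# that stmsboot saved with a .sav extension.
compare_with_saved() {
    if cmp -s "$1" "$2"; then
        echo "match: $1"
    else
        echo "DIFFERS: $1 (inspect with: diff $1 $2)"
    fi
}

# Demonstration with scratch files standing in for the real paths.
tmp=$(mktemp -d)
printf '/dev/dsk/c1t0d0s0 / ufs 1 no -\n' > "$tmp/vfstab.sav"
cp "$tmp/vfstab.sav" "$tmp/vfstab"
compare_with_saved "$tmp/vfstab" "$tmp/vfstab.sav"   # reports a match
echo '/dev/dsk/c2t0d0s0 extra entry' >> "$tmp/vfstab"
compare_with_saved "$tmp/vfstab" "$tmp/vfstab.sav"   # reports DIFFERS
rm -rf "$tmp"
```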

5. To restore the system to the "stmsboot disabled" state, use the following commands:

# cp /mnt/var/tmp/vfstab.sav /mnt/etc/vfstab
# cp /mnt/var/tmp/system.sav /mnt/etc/system
# cp /mnt/var/tmp/dumpadm.conf.sav /mnt/etc/dumpadm.conf

Note that path_to_inst.sav is not restored because the system remembers the device names. Remove the per-port mpxio-disable entry corresponding to the boot controller port in the /mnt/kernel/drv/fp.conf file, and reboot your system.
# touch /reconfigure
# shutdown -g0 -y -i6
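Locating a per-port mpxio-disable entry before editing it out can be sketched as below. The file contents, parent path, and port number are hypothetical examples of the per-port entry format; on a real system you would edit /mnt/kernel/drv/fp.conf (or qlc.conf) itself rather than a scratch copy:

```shell
# Build a scratch file with a global entry plus a hypothetical per-port
# mpxio-disable entry, then show and remove the per-port line.
conf=$(mktemp)
cat > "$conf" <<'EOF'
mpxio-disable="no";
name="fp" parent="/pci@8,600000/SUNW,qlc@1" port=0 mpxio-disable="yes";
EOF
grep -n 'mpxio-disable="yes"' "$conf"        # locate the per-port entry
sed '/port=0 mpxio-disable="yes"/d' "$conf" > "$conf.new"
grep -c 'mpxio-disable="yes"' "$conf.new"    # prints 0 once it is removed
rm -f "$conf" "$conf.new"
```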

If these steps do not resolve the problem, contact Sun Services.


System Crashes During or After a Multipathing Boot Disable Operation


If your system crashes or panics during or after a multipathing boot disable (stmsboot -d) operation with messages similar to the following:

Cannot assemble drivers for root /dev/rdsk/....
Cannot mount root on /dev/rdsk/.... fstype ufs

or

cannot stat /dev/rdsk/...

then check the following:

1. If the boot device is a Sun StorEdge T3 RAID array LUN, did you change the mp_support setting to rw or none? If the change was not made before the system reboot, make it now, then shut down and restart the system.

2. Are you specifying a different boot path manually? stmsboot has a dependency on the path through which the system boots. See Chapter 4, Configuring Multipathing Support for Storage Boot Devices.

3. Did you have a per-port mpxio-disable entry corresponding to the boot controller port in the qlc.conf file? If so, remove the corresponding qlc.conf entry.

4. Boot the system from another disk or over the network and mount the boot device on /mnt.

- The /mnt/etc/vfstab file should indicate the device names that are not multipathing enabled, instead of multipathing device names.
- The /mnt/etc/system file should not have a rootdev entry.
- The /mnt/kernel/drv/scsi_vhci.conf file should not have the boot controller port entry.


5. To restore the system to the stmsboot-enabled state, use the following commands:

# cp /mnt/var/tmp/vfstab.sav /mnt/etc/vfstab
# cp /mnt/var/tmp/system.sav /mnt/etc/system
# cp /mnt/var/tmp/dumpadm.conf.sav /mnt/etc/dumpadm.conf

Note that path_to_inst.sav is not restored because the system remembers the device names. Add the per-port mpxio-disable="no" entry corresponding to the boot controller port in the /mnt/kernel/drv/fp.conf file, and reboot your system:

# touch /reconfigure
# shutdown -g0 -y -i6

If these steps do not resolve the problem, contact Sun Services.

Multipathing Is Not Running Properly


The first item to check is whether multipathing has been installed correctly. Verify that the multipathing drivers (that is, scsi_vhci and mpxio) are loaded, with the help of the modinfo command. (See the modinfo man page.)

# modinfo | grep mpxio
23 102193e5 84cb 1 mpxio (MDI Library)
# modinfo | grep scsi_vhci
121 781ea000 6a20 225 1 scsi_vhci (Sun Multiplexed SCSI vHCI)

If modinfo output does not show that the drivers are loaded, verify that the following binaries have been loaded in the proper directories:

- /kernel/drv/scsi_vhci, /kernel/drv/sparcv9/scsi_vhci (64-bit)
- /kernel/misc/mpxio, /kernel/misc/sparcv9/mpxio (64-bit)
- /kernel/drv/scsi_vhci.conf
- /kernel/drv/fp.conf
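A quick presence check over that list can be scripted as follows. The helper name is invented, and the demonstration runs against a scratch directory rather than a live root; point the function at / (or at /mnt for a boot disk mounted for repair) to check a real system:

```shell
# check_mpxio_files: report which of the required driver files are
# missing under the given root directory.
check_mpxio_files() {
    root="$1"
    missing=0
    for f in kernel/drv/scsi_vhci kernel/drv/sparcv9/scsi_vhci \
             kernel/drv/scsi_vhci.conf kernel/drv/fp.conf; do
        if [ ! -f "$root/$f" ]; then
            echo "missing: /$f"
            missing=1
        fi
    done
    if [ "$missing" -eq 0 ]; then
        echo "all required files present"
    fi
}

# Demonstration: a scratch root that contains only fp.conf, so three of
# the four files are reported missing.
demo=$(mktemp -d)
mkdir -p "$demo/kernel/drv"
touch "$demo/kernel/drv/fp.conf"
check_mpxio_files "$demo"
rm -rf "$demo"
```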


If these binaries are not present in the specified directories, then multipathing software was not installed properly. Repeat the installation process, making sure you are logged in as superuser.

luxadm display and luxadm failover Commands Fail


If the multipathing software is running properly, but the luxadm display and luxadm failover commands aren't working, use the luxadm fcode_download -p command to ensure that the system sees all your pHCIs. If you don't see one or more of the pHCIs, their FCode patch is out of date. An alternative way to confirm this is by examining the device path. Notice the scsi@4 in the example below. Make sure that none of the device paths looks like the following:

/devices/pci@9d,600000/pci@1/scsi@4/fp@0,0:devctl

To correct this problem, download the latest FCode software.
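The device-path inspection described above lends itself to a small filter. This sketch runs over a canned list of sample paths (the second path is a typical current-FCode form, assumed for illustration); on a live system you would feed it the controller paths under /devices:

```shell
# Flag controller paths that still contain a scsi@N node (stale FCode)
# instead of the expected fp@ form.
paths='/devices/pci@9d,600000/pci@1/scsi@4/fp@0,0:devctl
/devices/pci@8,600000/SUNW,qlc@1/fp@0,0:devctl'
echo "$paths" | while read -r p; do
    case $p in
        */scsi@*) echo "stale FCode path: $p" ;;
        *)        echo "ok: $p" ;;
    esac
done
```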

Sun StorEdge T3, 6x20, or 3900 Arrays Do Not Show


A Sun StorEdge T3, 6x20, or 3900 array can be correctly configured for multipathing with the latest firmware revision. Be sure to set the master settings as follows:
hostname:/:<1> sys mp_support mpxio

While checking the array setup, confirm that the LPC version is also current. After the device is configured, it is advisable to perform a reconfiguration reboot or equivalent.


System Failed During Boot With scsi_vhci Attachment


This failure is most likely due to an incomplete installation: the installation failed to provide an entry for scsi_vhci in the name_to_major database, or created duplicate entries for the same major number with different driver name references. If you do not see any installation log errors, call Sun Support.

Connected Sun StorEdge A5200 Arrays Appear Under Physical Paths in Format
Consider the following:

- The Sun StorEdge A5200 array is not fabric supported, but can be connected using FC-AL. Solaris 10 OS software supports the Sun StorEdge A5200 and A3500FC arrays and FC tape devices.
- SL zones contain SL ports only. SL ports are not supported in the Solaris 10 OS 4.x release but were in earlier releases.
- Check whether the Sun StorEdge A5200 class devices are connected to an HBA without multipathing support.
- Check whether multipathing is disabled under /kernel/drv/fp.conf.
- Check whether the system is booting from the Sun StorEdge A5200 disks. When you boot from a multipathing device, all devices under the pHCI with the boot device are enumerated under scsi_vhci. Make sure the pHCI has mpxio-disable="no" set in the multipathing file /kernel/drv/qlc.conf or /kernel/drv/fp.conf.

Appendix E

Multipathing Troubleshooting

97

System and Error Messages


The following messages might appear in the course of operation.

When the automatic failback feature is enabled through the configuration file, you should receive the following message:

Auto-failback capability enabled through scsi_vhci.conf file.

If automatic failback succeeds, the following message is logged:

Auto failback operation succeeded for devices.

If automatic failback fails, the following message is logged:

Auto failback operation failed for device.

If an externally initiated failover of a Sun StorEdge T3 or T3+ array has been observed, the following message is logged:

Waiting for externally initiated failover to complete

After an stmsboot enable (stmsboot -e) operation, the following message corresponding to the boot controller port (here, fp2) should not appear once the system is booted through the stmsboot-enabled device on the fp2 path:

mpxio: [ID 284422 kern.info] /pci@8,600000/SUNW,qlc@1/fp@0,0 (fp2) multipath capabilities disabled: controller for root filesystem cannot be multipathed.


Index

Numerics
24-bit addressing devices, 84
24-bit Fibre Channel addressing devices, 31

A
Ap_Id, 66
array port, 84

B
broadcasting, 54

C
cfgadm
  -al, 32
  -c configure, 31
  -fp, 40
  -l, 32
cfgadm -al, 39, 66
cfgadm -al -o show_FCP_dev, 40
cfgadm -c configure, 39
cfgadm(1M), 33
configuration
  examples, 70
  examples, luxadm command, 72, 75, 76
  fabric devices, 33
configuring
  HBAs, 13, 14
  third-party devices, 16

D
device
  activity, 64
  configuration, 33
  node creation, 31
  node discovery, 38
  removal, 64
  storage, 31
device node creation, 40
device node removal, 40
disabling ports, 10
drivers, 31

E
eight-bit addressing devices, 84
enabling ports, 10
error messages, 98

F
F_port, 84
fabric connectivity, 85
fabric-connected host ports, 32
fc-fabric, 41
Fibre Channel Protocol, 31

H
host port, 84

I
IPFC
  guidelines, 53
  SCSI LUN level information, 31

L
loading drivers, 31
LUN
  level information, 31
  recognition, 31
luxadm, 35

M
modinfo, 31
multicasting, 54

N
NAS, 53
NFS, 53

P
physical device representation, 42
port, 84
promiscuous mode, 54

S
show_scsi_lun, 40
SNDR, 53
snoop, 54
ssd driver, 31
st driver, 31
Sun StorEdge 39x0 series, 85
Sun StorEdge 69x0 series, 85
Sun StorEdge 99x0 series, 85
support contact, xii

T
TL port, 84
translation loop port, 84

U
unconfiguring
  a single path, 66
  devices, 61
  multipathed devices, 65
  multiple devices, 67
unconfiguring devices, 64

Z
zoning, 83
