Part No. 819-0139-10 January 2005 Submit comments about this document at: http://www.sun.com/hwdocs/feedback
Copyright 2004 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved. Sun Microsystems, Inc. has intellectual property rights relating to technology that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at http://www.sun.com/patents and one or more additional patents or pending patent applications in the U.S. and in other countries. This document and the product to which it pertains are distributed under licenses restricting their use, copying, distribution, and decompilation. No part of the product or of this document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers. Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and in other countries, exclusively licensed through X/Open Company, Ltd. Sun, Sun Microsystems, the Sun logo, AnswerBook2, docs.sun.com, OpenBoot, Solstice DiskSuite, Sun StorEdge, and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and in other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and in other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. The OPEN LOOK and Sun Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. 
Sun holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license agreements. Netscape Navigator is a trademark or registered trademark of Netscape Communications Corporation in the United States and other countries. U.S. Government Rights - Commercial use. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions of the FAR and its supplements. DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
Contents

Preface

1. Multipathing Overview
     Fibre Channel and Multipathing Features
     Multipath Driver Features
     Supported Standards

2. Configuring Multipathing
     Task Summary
     To Configure Multipathing
     Reconfiguration Reboot Requirements
     Enabling Multipathing Globally
     Enabling Multipathing on a Per Port Basis
         To Configure Multipathing by Port
         To Configure a Single PCI HBA
         To Configure a Dual PCI HBA
         To Configure an SBus HBA
     Configuring Automatic Failback for Sun StorEdge Arrays
     Configuring Third-Party Symmetric Storage Devices for Multipathing
         Considerations for Third-Party Device Configuration
     Configuring Solstice DiskSuite or Solaris Volume Manager for Multipathing
         Solstice DiskSuite or Solaris Volume Manager Configuration Overview
         Example of Migrating Mirrored Devices

3. Configuring Multipathing Devices
     To Configure an Individual Device Without Multipathing
     To Configure Multiple Devices Without Multipathing
     To Configure Individual Devices With Multipathing
     To Configure Multiple Devices With Multipathing

4. Configuring Multipathing Support for Storage Boot Devices
     About the stmsboot Command
     To Display Potential /dev Device Name Changes

5. Configuring IPFC SAN Devices
     IPFC Considerations

6. Unconfiguring Multipathing
     To Unconfigure a Fabric Device Associated With Multipathing Enabled Devices
     To Unconfigure One Path to a Multipathing Device

A. Configuration Samples
     Utility: format
     Utility: luxadm failover

B. Supported FC-HBA API

C. Zones and Ports
     Zone Types
     Port Types

D. Dynamic Reconfiguration
     To Remove a Fabric Device Before Dynamic Reconfiguration
     To Maintain a Fabric Device Configuration With Dynamic Reconfiguration
     To Reconfigure Fabric Devices With Dynamic Reconfiguration
     To Reconfigure the Sun Enterprise 10000 Server With a Fabric Connection

E. Multipathing Troubleshooting
     System Crashes During or After a Multipathing Boot Disable Operation
     Multipathing Is Not Running Properly
     luxadm display and luxadm failover Commands Fail
     Sun StorEdge T3, 6x20, or 3900 Arrays Do Not Show
     Connected Sun StorEdge A5200 Arrays Appear Under Physical Paths in Format
Preface
The System Administration Guide: Multipath Configuration provides an overview of the Solaris 10 Operating System (Solaris OS) software that is used to configure multipathing, with an explanation of how to install and then configure the software. This guide is intended for system, storage, and network administrators who create and maintain storage area networks (SANs) and have a high level of expertise in the management and maintenance of SANs.
Chapter 1 provides an overview of SAN and multipathing features and guidelines.
Chapter 2 provides an overview of the configuration process.
Chapter 3 explains how to configure multipathing devices.
Chapter 4 describes how to configure multipathing for a storage boot device.
Chapter 5 explains how to configure IPFC SAN devices.
Chapter 6 describes the steps required to unconfigure the multipathing software.
Appendix A provides configuration samples.
Appendix B lists the HBA API library commands.
Appendix C provides zone information.
Appendix D describes dynamic reconfiguration.
Appendix E provides a troubleshooting guide.
Typographic Conventions
Typeface    Meaning                                          Examples

AaBbCc123   The names of commands, files, and directories;   Edit your .login file.
            on-screen computer output                        Use ls -a to list all files.
                                                             % You have mail.

AaBbCc123   What you type, when contrasted with on-screen    % su
            computer output                                  Password:

AaBbCc123   Book titles, new words or terms, words to be     Read Chapter 6 in the User's Guide.
            emphasized                                       These are called class options.
                                                             You must be superuser to do this.
                                                             To delete a file, type rm filename.
Shell Prompts
Shell                                   Prompt

C shell                                 machine_name%
C shell superuser                       machine_name#
Bourne shell and Korn shell             $
Bourne shell and Korn shell superuser   #
Related Documentation
Title

Solaris 10 Operating System Release Notes
System Administration Guide: Basic Administration
System Administration Guide: Advanced Administration
System Administration Guide: IP Services
System Administration Guide: Network Services
Solaris Administration Guide: Devices and File Systems
A complete set of Solaris documentation and many other titles are located at: http://docs.sun.com
Chapter 1
Multipathing Overview
This chapter provides an overview of the multipathing capabilities of the Solaris 10 Operating System (Solaris OS). This information will prove helpful during installation and configuration of the software. This chapter contains the following sections:
Dynamic storage recovery - The Solaris 10 OS automatically recognizes devices and any modifications made to device configurations. Devices become available to the system without requiring you to reboot or manually change information in configuration files.

Persistent device naming - Devices configured within the SAN maintain their device names through reboots and reconfiguration. The only exception is tape devices, found in /dev/rmt, whose names do not change unless they are removed and later regenerated.

FCAL support - OpenBoot PROM (OBP) commands used on servers can access Fibre Channel-Arbitrated Loop (FCAL) attached storage for scanning the Fibre Channel loop.

Fabric booting - 1-Gbit and 2-Gbit host bus adapters (HBAs) supported by Sun can boot from fabric devices as well as non-fabric devices. Fabric topologies with Fibre Channel switches provide higher speed, more connections, and port isolation.

Configuration management - Management can be performed with the cfgadm(1M) and luxadm(1M) commands. These commands control host-level access to devices and allow you to configure hosts to see only necessary devices, providing an alternative to switch zoning.

T11 FC-HBA library - What was previously known as the Storage Networking Industry Association (SNIA) Fibre Channel HBA (FC-HBA) library is now known as the T11 FC-HBA library. The T11 FC-HBA library application programming interface (API) enables management of Fibre Channel HBAs and provides a standards-based interface for other applications (such as Sun StorEdge Enterprise Storage Manager Topology Reporter) that can be used to gather information about the SAN's HBAs, switches, and storage. Man pages for common FC-HBAs are included in the Solaris 10 OS. For additional information on Fibre Channel specifications (FC-MI), refer to http://www.t11.org. Appendix B lists functions supported by Sun's implementation of the T11 FC-HBA library.
Path management - The Solaris 10 OS multipathing software dynamically manages the paths to any storage devices it supports. There are no configuration files to manage or databases to keep current for supported devices. The addition or removal of paths to a device is done automatically when a path is brought online or removed from service. This allows hosts configured with multipathing to begin with a single path to a device and add more host controllers, increasing bandwidth and RAS, without changing device names or modifying applications.
Single device instances - Multipathing is fully integrated with the Solaris OS. This allows the software to display multipath devices as single device instances instead of as one device, or device link, per path. This reduces the cost of managing complex storage architectures, since it enables utilities such as format(1M), or higher-level applications such as Solaris Volume Manager, to access one representation of a storage device instead of a separate device for each path.

Failover support - Implementing higher levels of RAS requires redundant host connectivity to storage devices. Solaris 10 OS multipathing drivers manage storage path failures, including hosts going offline, while maintaining host I/O connectivity through available secondary paths. This enables applications to continue operating in the event of host disconnection or downstream path failure. Failed paths can be automatically re-enabled through dynamic path management.

Symmetrical/asymmetrical device support - The Solaris 10 OS can manage all symmetrical devices. Asymmetric devices have proprietary commands that require additional individual device support. Solaris 10 supports all Sun asymmetric devices, for example, the T3/6120. Using a switch in front of an asymmetric device allows you to have a pool of preferred symmetric paths, as well as a pool of inactive asymmetric paths.

I/O load balancing - In addition to providing simple failover support, the software can use any active paths to a storage device to send and receive I/O. With I/O routed through multiple host connections, bandwidth can be increased by the addition of host controllers. The Solaris 10 OS uses a round-robin load-balancing algorithm, by which individual I/O requests are routed to active host controllers in a series, one after the other.

Queue depth - SCSI storage arrays present storage to a host in the form of a LUN. LUNs have a finite set of resources available, such as the amount of data that can be stored, as well as the number of active commands that a device or LUN can process at one time. The number of active commands that can be issued before a device blocks further I/O is known as queue depth. When multipathing software is enabled, a single queue is created for each LUN regardless of the number of distinct paths it may have to the host. This prevents unnecessary use of storage array resources and allows upper-layer drivers and programs to use the LUN effectively without flooding it with I/O requests.

Dynamic reconfiguration - The Solaris 10 OS supports Solaris Dynamic Reconfiguration (DR).
Supported Standards
The Solaris 10 OS is based on open standards for communicating with devices and device management, ensuring interoperability with other standards-based devices and software. The following standards are supported by the Solaris 10 OS:
T10 standards, including SCSI-2, SAM, FCP, SPC, and SBC
T11.3 Fibre Channel standards, including FC-PH, FC-AL, FC-LE, and FC-GS
T11.5 storage management standards, including FC-HBA
IETF standards, including RFC 2625
Chapter 2

    Configuring Multipathing on page 5
    Enabling Multipathing Globally on page 6
    Enabling Multipathing on a Per Port Basis on page 9
    Configuring Automatic Failback for Sun StorEdge Arrays on page 14
    Configuring Third-Party Symmetric Storage Devices for Multipathing on page 16
    Configuring Solstice DiskSuite or Solaris Volume Manager for Multipathing on page 18
Configuring Multipathing
Multipathing is provided by a driver that runs in the Solaris Operating System (Solaris OS) environment. Multipathing is disabled by default for the Solaris OS on SPARC based systems, but is enabled by default for the Solaris OS on x86 based systems. If you are using another multipathing application, see the documentation for that application.
Note - These software features are not available for parallel SCSI devices, but are available for Fibre Channel disk devices. Multipathing is not supported on tape drives or libraries, or on FC over IP.

Configuration of the multipathing software depends on how you intend to use your system. The software can be configured to control all Fibre Channel HBAs supported by Sun, as listed in the Sun Solaris 10 Operating System Release Notes. You can also configure multipathing for third-party symmetric storage devices, for use with the Solstice DiskSuite software, Solaris Volume Manager software, or third-party volume management and multipathing software.
See Reconfiguration Reboot Requirements on page 6.
See Enabling Multipathing Globally on page 6.
See Enabling Multipathing on a Per Port Basis on page 9.
See Configuring Third-Party Symmetric Storage Devices for Multipathing on page 16.
See Configuring Solstice DiskSuite or Solaris Volume Manager for Multipathing on page 18.
See Configuring Multipathing Support for Storage Boot Devices on page 45.
You change the scsi_vhci.conf or fp.conf files.
In non-fabric environments, you change the mp_support field for Sun StorEdge T3, 6x20, and 3900 arrays. For further information about Sun StorEdge T3, 6x20, and 3900 arrays, refer to the documentation that came with your system.

Unless you explicitly enable or disable the software on a specific port, the global multipathing settings apply.
1. Using any text editor, edit the /kernel/drv/fp.conf file as displayed in Example of fp.conf file on page 8.
2. To enable multipathing globally, change the value of mpxio-disable to no.
On the Solaris 10 OS for SPARC based systems, mpxio-disable is set to yes by default, which means multipathing is disabled. With the Solaris OS on x86 based systems, mpxio-disable is set to no by default, which means that multipathing is enabled.
3. (Optional) Enable multipathing support for third-party symmetric devices. Refer to Configuring Third-Party Symmetric Storage Devices for Multipathing on page 16.
4. Save the /kernel/drv/fp.conf file.

1. Using any text editor, edit the /kernel/drv/scsi_vhci.conf file as displayed in Example of scsi_vhci.conf file on page 9. Do not change the name and class definitions.
2. If you want the multipathing software to use all the available paths for load balancing, leave the load-balance field set to the default of round-robin. Otherwise, change the definition to none.
3. Save the /kernel/drv/scsi_vhci.conf file.
4. Perform one of the following steps.
If you want to enable multipathing on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45. If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands.
# touch /reconfigure
# shutdown -g0 -y -i6
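The fp.conf edit in the procedure above amounts to rewriting a single property. A minimal sketch using sed on a scratch copy of the file (illustrative only; the real file is /kernel/drv/fp.conf and requires superuser access to modify):

```shell
# Illustrative only: operate on a scratch copy rather than the live
# /kernel/drv/fp.conf file.
conf=$(mktemp)
printf '%s\n' 'mpxio-disable="yes";' > "$conf"

# Flip the global MPxIO setting from disabled to enabled.
sed 's/^mpxio-disable="yes";$/mpxio-disable="no";/' "$conf" > "$conf.new"
mv "$conf.new" "$conf"

grep mpxio-disable "$conf"
```

On a live system, the change takes effect only after the reconfiguration reboot shown above.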
# Copyright 2004 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Sun Fibre Channel Port driver configuration
#
#ident  "%Z%%M% %I% %E% SMI"
#
name="fp" class="fibre-channel" port=0;
name="fp" class="fibre-channel" port=1;
#
# To generate the binding-set specific compatible forms used to address
# legacy issues the scsi-binding-set property must be defined. (do not remove)
#
scsi-binding-set="fcp";
#
# List of ULP modules for loading during port driver attach time
#
load-ulp-list="1","fcp";
#
# Force attach driver to support hotplug activity (do not remove the property)
#
ddi-forceattach=1;
#
# I/O multipathing feature (MPxIO) can be enabled or disabled using
# mpxio-disable property. Setting mpxio-disable="no" will activate
# I/O multipathing; setting mpxio-disable="yes" disables the feature.
#
# To globally enable MPxIO on all fp ports set:
# mpxio-disable="no";
#
# To globally disable MPxIO on all fp ports set:
# mpxio-disable="yes";
#
# You can also enable or disable MPxIO on a per port basis. Per port settings
# override the global setting for the specified ports.
# To disable MPxIO on port 0 whose parent is /pci@8,600000/SUNW,qlc@4 set:
# name="fp" parent="/pci@8,600000/SUNW,qlc@4" port=0 mpxio-disable="yes";
#
# NOTE: If you just want to enable or disable MPxIO on all fp ports, it is
# better to use stmsboot(1M) as it also updates /etc/vfstab.
#
mpxio-disable="yes";
# Copyright 2004 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
#pragma ident   "@(#)scsi_vhci.conf 1.8 04/03/07 SMI"
#
name="scsi_vhci" class="root";
#
# Load balancing global configuration: setting load-balance="none" will cause
# all I/O to a given device (which supports multipath I/O) to occur via one
# path. Setting load-balance="round-robin" will cause each path to the device
# to be used in turn.
#
load-balance="round-robin";
#
# Force load driver to support hotplug activity (do not remove this property).
#
ddi-forceattach=1;
#
# Automatic failback configuration
# possible values are auto-failback="enable" or auto-failback="disable"
auto-failback="disable";
#
# For enabling MPxIO support for 3rd party symmetric device need an
# entry similar to following in this file. Just replace the "SUN     SENA"
# part with the Vendor ID/Product ID for the device, exactly as reported by
# Inquiry cmd.
#
# device-type-scsi-options-list =
# "SUN     SENA", "symmetric-option";
#
# symmetric-option = 0x1000000;
Load balancing is controlled by the global load-balance= variable and cannot be applied on a per-port basis.
If a storage device is managed or controlled by a volume manager that is not supported by Sun, the multipathing software must be disabled on that port.
Devices with multipathing software enabled are enumerated under /devices/scsi_vhci. Devices with multipathing software disabled are enumerated under physical path names.
With the multipathing software installed, all paths to a storage device must be configured with multipathing software either enabled or disabled.
Configuring multipathing software by port enables the software to co-exist with other multipathing solutions like Alternate Pathing (AP), VERITAS Dynamic Multipathing (DMP), or EMC PowerPath. However, storage devices and paths should not be shared between the multipathing software and other multipathing solutions.
To Configure a Single PCI HBA on page 12
To Configure a Dual PCI HBA on page 13
To Configure an SBus HBA on page 14
1. Log in as superuser.
Determine the HBAs that you want the multipathing software to control. For example, to select the desired device, perform an ls -l command on /dev/fc. The following is an example of the command output.
lrwxrwxrwx   1 root  root  49 Apr 17 18:14 fp0 -> ../../devices/pci@6,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root  root  49 Apr 17 18:14 fp1 -> ../../devices/pci@7,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root  root  49 Apr 17 18:14 fp2 -> ../../devices/pci@a,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root  root  49 Apr 17 18:14 fp3 -> ../../devices/pci@b,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root  root  50 Apr 17 18:14 fp4 -> ../../devices/pci@12,2000/SUNW,qlc@2/fp@0,0:devctl
lrwxrwxrwx   1 root  root  56 Apr 17 18:14 fp5 -> ../../devices/pci@13,2000/pci@2/SUNW,qlc@4/fp@0,0:devctl
lrwxrwxrwx   1 root  root  56 Apr 17 18:14 fp6 -> ../../devices/pci@13,2000/pci@2/SUNW,qlc@5/fp@0,0:devctl
lrwxrwxrwx   1 root  root  56 Apr 17 18:14 fp7 -> ../../devices/sbus@7,0/SUNW,qlc@0,30400/fp@0,0:devctl
Note - The fp7 entry is an SBus HBA. The fp5 and fp6 entries include two /pci elements, which indicates a dual PCI HBA. The remaining entries have no additional PCI bridges and are single PCI HBAs.
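The classification rule in the note can be expressed as a small script. A sketch that works on sample path strings like those in the listing above (it inspects strings rather than live /dev/fc links):

```shell
# Classify an fp device path: an sbus@ segment means an SBus HBA; two
# pci@ segments (an extra PCI bridge) mean a dual PCI HBA; one pci@
# segment means a single PCI HBA.
classify() {
    case "$1" in
        */sbus@*) echo "SBus HBA" ;;
        *) n=$(printf '%s\n' "$1" | grep -o 'pci@' | wc -l)
           if [ "$n" -ge 2 ]; then
               echo "dual PCI HBA"
           else
               echo "single PCI HBA"
           fi ;;
    esac
}

classify "/devices/pci@6,2000/SUNW,qlc@2/fp@0,0:devctl"
classify "/devices/pci@13,2000/pci@2/SUNW,qlc@4/fp@0,0:devctl"
classify "/devices/sbus@7,0/SUNW,qlc@0,30400/fp@0,0:devctl"
```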
2. Open the /kernel/drv/fp.conf configuration file and explicitly enable or disable an HBA. Add the property "mpxio-disable" to the HBA definition:
To enable multipathing on the port, set "mpxio-disable" to no. To disable multipathing on the port, set "mpxio-disable" to yes.
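A per-port override is one line appended to fp.conf, keyed by the HBA's parent path and port number. A sketch on a scratch copy (illustrative only; the parent path below is the example used in the fp.conf comments, and the real file is /kernel/drv/fp.conf):

```shell
# Illustrative only: append a per-port mpxio-disable override to a
# scratch copy of fp.conf. The parent path and port come from the
# /dev/fc listing for the HBA you want to control.
conf=$(mktemp)
parent="/pci@8,600000/SUNW,qlc@4"
port=0

printf 'name="fp" parent="%s" port=%d mpxio-disable="yes";\n' \
    "$parent" "$port" >> "$conf"

cat "$conf"
```

Per-port entries override the global mpxio-disable setting for the named ports only.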
If you want to enable multipathing on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45. If you do not want to enable multipathing on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands.
# touch /reconfigure
# shutdown -g0 -y -i6
2. Save and exit the file.
3. Perform one of the following steps:
If you want to enable multipathing software on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45.
If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands:
# touch /reconfigure
# shutdown -g0 -y -i6
2. Save and exit the file.
3. Perform one of the following steps:
If you want to enable multipathing software on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45. If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands.
# touch /reconfigure
# shutdown -g0 -y -i6
2. Save and exit the file.
3. Perform one of the following steps:
If you want to enable multipathing software on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45. If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands.
# touch /reconfigure
# shutdown -g0 -y -i6
For information on configuring your Sun StorEdge T3+, 3900 or 6x20 arrays, refer to the documentation that came with your system.
1. Open the scsi_vhci.conf file in your text editor.
2. Enable the automatic failback property.

auto-failback="enable";
3. Save and exit the scsi_vhci.conf file.

When automatic failback is enabled, the following message is printed in the /var/adm/messages file upon boot-up:
1. Open the scsi_vhci.conf file in your text editor.
2. Disable the automatic failback property.

auto-failback="disable";
3. Save and exit the scsi_vhci.conf file.
4. Perform a reconfiguration reboot so that the changes take effect.
# touch /reconfigure
# shutdown -g0 -y -i6
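Both the enable and disable procedures above rewrite the same auto-failback property. A sketch of that edit using sed on a scratch copy of scsi_vhci.conf (illustrative only; the real file is /kernel/drv/scsi_vhci.conf):

```shell
# Illustrative only: toggle the auto-failback property on a scratch
# copy of scsi_vhci.conf.
conf=$(mktemp)
printf '%s\n' 'auto-failback="disable";' > "$conf"

set_failback() {
    # $1 is "enable" or "disable", the two values the property accepts.
    sed 's/^auto-failback="[a-z]*";$/auto-failback="'"$1"'";/' "$conf" > "$conf.new"
    mv "$conf.new" "$conf"
}

set_failback enable
grep auto-failback "$conf"
```

On a live system, the change takes effect only after the reconfiguration reboot shown above.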
Note - Before configuring any third-party device, ensure that it is supported. Refer to your third-party user documentation, or to the third-party vendor, for information on the proper product and vendor IDs, modes, and other settings required by the multipathing software.
To use this functionality, you must edit parameters in the scsi_vhci.conf and fp.conf files. You will need the storage device's vendor ID and product ID. You can obtain the values for the vendor_id and product_id variables in the scsi_vhci.conf file by using the format command followed by the inquiry option on your system. See the format(1M) man page. Symmetric storage devices must support the following functionality or commands. Refer to your third-party vendor for information.
Make all paths available for I/O, because paths are accessed in a round-robin fashion.
Support the REPORT LUNS SCSI command.
Support the SCSI INQUIRY command.
2. Enable mpxio.
mpxio-disable="no";
3. Save and exit the fp.conf file.
4. Open the scsi_vhci.conf file in a text editor such as vi(1M).
5. Add the vendor_id and product_id properties.
The vendor ID (v_id) must be eight characters long. You must specify all eight characters, padding with trailing spaces. Tabs are not allowed. The product ID (prod_id) can be up to 16 characters long, padded with trailing blanks or spaces. Tabs are not allowed.
device-type-scsi-options-list=
Replace the variables with appropriate values for your system. For example:
device-type-scsi-options-list=
"ven-a   pid_a_upto_here", "symmetric-option",
"ven-b   pid_b_upto     ", "symmetric-option",
"ven-c   pid_c          ", "symmetric-option";
symmetric-option=0x1000000;
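The eight-character, space-padded vendor ID field described in Step 5 can be produced with printf field-width padding. A sketch (the vendor and product strings below are hypothetical placeholders, not real device IDs):

```shell
# Build one device-type-scsi-options-list entry: the vendor ID is
# space-padded to exactly 8 characters, then the product ID follows
# immediately, as the scsi_vhci.conf format requires.
make_entry() {
    printf '"%-8s%s", "symmetric-option",' "$1" "$2"
}

make_entry ACME SuperArray1000
```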
6. Enable load balancing.
If you want the multipathing software to use all the available paths for load balancing, leave the load-balance field set to the default of round-robin. Otherwise, change the definition to none.
7. Save and exit the scsi_vhci.conf file.
8. Perform one of the following steps:
If you want to enable multipathing software on the boot controller port, see Configuring Multipathing Support for Storage Boot Devices on page 45. If you do not want to enable multipathing software on the boot controller port at this time, perform a reconfiguration reboot now to shut down and restart your system by using the following commands:
# touch /reconfigure
# shutdown -g0 -y -i6
After you configure the scsi_vhci.conf file, confirm that the device nodes are created under the /devices/scsi_vhci directory. If the devices were not created, check the vendor_id and product_id entries in the scsi_vhci.conf file.
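The check above is simply a directory listing. A sketch of it against a mock directory tree (illustrative only; on the configured host you would list the real /devices/scsi_vhci directory):

```shell
# Illustrative only: a mock tree stands in for the live /devices
# hierarchy so the non-empty-directory check itself can be shown.
root=$(mktemp -d)
mkdir -p "$root/devices/scsi_vhci"
: > "$root/devices/scsi_vhci/ssd@g60020f20000002253b220f99000d348c"

if [ -n "$(ls -A "$root/devices/scsi_vhci")" ]; then
    status="multipath device nodes present"
else
    status="no nodes: re-check the scsi_vhci.conf entries"
fi
echo "$status"
```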
2. Unconfigure Solstice DiskSuite or Solaris Volume Manager for devices under multipathing control.
Unmount the metadevices that will be under control of the multipathing software, and then clear the metadevices using the metaclear command. Clear the metadevice database using the metadb -d -f command, which takes the list of disks output from the metadb command you performed in Step 1 as a list of arguments.
3. Determine the multipathing device name to pre-multipathing device name mapping.
Use the stmsboot -L command output to display the potential multipathing path and device name changes, and the luxadm display command to determine which pre-multipathing device names are combined under each multipathing device name.
4. Enable multipathing.
5. Reconfigure Solstice DiskSuite or Solaris Volume Manager.
Create metadevices using multipathing path names with the metainit command.
A Sun StorEdge T3 partner pair is connected to the host, and mp_support on the Sun StorEdge T3 array is set to rw.
Multipathing software is initially disabled.
Four LUNs of equal size exist on the partner pair.
Two metadb replicas exist on LUN 0 (c2t1d0) and LUN 1 (c2t1d1).
d10 and d11 are the submirror metadevices created on LUN 2 (c2t1d2) and LUN 3 (c2t1d3). d14 is the mirror of d10 and d11.
2. Save the pre-multipathing device information.
Collect and save the output of the stmsboot -L, format, metadb, metastat, and metastat -p commands.
# stmsboot -L
See To Display Potential /dev Device Name Changes on page 48 for an example of stmsboot -L output.
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
          /pci@1f,4000/scsi@3/sd@0,0
       1. c2t1d0 <T300 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0/ssd@w50020f2300000225,0
       2. c2t1d1 <T300 cyl 34145 alt 2 hd 20 sec 128>
          /pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0/ssd@w50020f2300000225,1
       3. c2t1d2 <SUN-T300-0117 cyl 34145 alt 2 hd 32 sec 128>
          /pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0/ssd@w50020f2300000225,2
Specify disk (enter its number):
# metadb
        flags           first blk       block count
     a m  pc luo        16              1034            /dev/dsk/c2t1d0s3
     a    pc luo        1050            1034            /dev/dsk/c2t1d0s3
     a    pc luo        16              1034            /dev/dsk/c2t1d1s3
     a    pc luo        1050            1034            /dev/dsk/c2t1d1s3
# metastat -p
d14 -m d10 d11 1
d10 1 2 c2t1d2s1 c2t1d2s6 -i 32b
d11 1 2 c2t1d3s1 c2t1d3s6 -i 32b
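When mapping these metadevices to new multipathing names, it helps to pull out the physical component slices first. A sketch that extracts them from the metastat -p output shown above, copied here as sample text:

```shell
# Extract the physical component slices (cXtYdZsN) from sample
# "metastat -p" output; these are the names that change when
# multipathing is enabled.
out=$(mktemp)
cat > "$out" <<'EOF'
d14 -m d10 d11 1
d10 1 2 c2t1d2s1 c2t1d2s6 -i 32b
d11 1 2 c2t1d3s1 c2t1d3s6 -i 32b
EOF

slices=$(grep -o 'c[0-9]*t[0-9]*d[0-9]*s[0-9]*' "$out" | sort -u)
echo "$slices"
```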
# metastat d14: Mirror Submirror 0: d10 State: Okay Submirror 1: d11 State: Okay Pass: 1 Read option: roundrobin (default) Write option: parallel (default) Size: 524288 blocks d10: Submirror of d14 State: Okay Size: 524288 blocks Stripe 0: (interlace: 32 blocks) Device Start Block Dbase State c2t1d2s1 0 No Okay c2t1d2s6 0 No Okay d11: Submirror of d14 State: Okay Size: 524288 blocks Stripe 0: (interlace: 32 blocks) Device Start Block Dbase State c2t1d3s1 0 No Okay c2t1d3s6 0 No Okay
3. Unconfigure the volume manager without losing the data. a. Clear the submirrors and mirror devices.
# metaclear d14
d14: Mirror is cleared
# metaclear d11
d11: Concat/Stripe is cleared
# metaclear d10
d10: Concat/Stripe is cleared
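Step 2 of the overview also calls for clearing the state database replicas with the metadb -d -f command before enabling multipathing; a sketch using the replica slices reported by metadb in Step 2:

```shell
# metadb -d -f /dev/dsk/c2t1d0s3 /dev/dsk/c2t1d1s3
```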
4. Enable the multipathing software as described earlier in this chapter. See Enabling Multipathing Globally on page 6 or Enabling Multipathing on a Per Port Basis on page 9.
5. Determine the multipathing device name to pre-multipathing device name mapping. The output of the stmsboot -L, format, luxadm probe, and luxadm display commands shows the paths for each device. See To Display Potential /dev Device Name Changes on page 48 for an example of stmsboot -L output.
# format
AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
          /pci@1f,4000/scsi@3/sd@0,0
       1. c4t60020F20000002253B220F99000D348Cd0 <SUN-T300-0117 cyl 34145 alt 2 hd 32 sec 128>
          /scsi_vhci/ssd@g60020f20000002253b220f99000d348c
       2. c4t60020F20000002253B220FC000086944d0 <SUN-T300-0117 cyl 34145 alt 2 hd 32 sec 128>
          /scsi_vhci/ssd@g60020f20000002253b220fc000086944
       3. c4t60020F20000002253B220FD400071CD8d0 <SUN-T300-0117 cyl 34145 alt 2 hd 32 sec 128>
          /scsi_vhci/ssd@g60020f20000002253b220fd400071cd8
Specify disk (enter its number):
# luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
  Node WWN:50020f20000001f6  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60020F20000002253B22101D00029FC9d0s2
  Node WWN:50020f2000000225  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60020F20000002253B220FD400071CD8d0s2
  Node WWN:50020f20000001f6  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60020F20000002253B220FC000086944d0s2
  Node WWN:50020f2000000225  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t60020F20000002253B220F99000D348Cd0s2

For each entry in the luxadm probe output, get the device path information.
6. Correlate the non-multipathing device information collected in Step 1 by matching the paths in the luxadm display output. Identify the appropriate LUN. For example, find the c2t1d0 device. c2t1d0 is LUN 0 of the Sun StorEdge T3 partner pair, as seen from the format output in Step 2.
1. c2t1d0 <T300 cyl 34145 alt 2 hd 24 sec 128> /pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0/ssd@w50020f2300000225,0
From the luxadm display output of each device above, check the path, Controller, and Device Address fields.
/dev/rdsk/c4t60020F20000002253B220F99000D348Cd0s2
/devices/scsi_vhci/ssd@g60020f20000002253b220f99000d348c:c,raw
   Controller           /devices/pci@1f,4000/pci@4/SUNW,qlc@4/fp@0,0
    Device Address      50020f2300000225,0
c2t1d0 is now /dev/rdsk/c4t60020F20000002253B220F99000D348Cd0.
Alternatively, the stmsboot -L command output also shows the device name change mapping.

7. Reconfigure the volume manager under the multipathing software by re-creating /etc/lvm/md.tab with the new device names.

a. Re-create the metadevice state database replicas with the new device names.
# metadb -a -f -c 2 /dev/rdsk/c4t60020F20000002253B220F99000D348Cd0s2 /dev/rdsk/c4t60020F20000002253B220FC000086944d0s2
b. Similarly, identify the metadevices with new device names by using the metastat -p command.
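Rewriting /etc/lvm/md.tab by hand is error-prone; the two-column mapping that stmsboot -L prints (non-STMS name, then STMS name) can drive a sed script instead. This is a sketch only: the mapping and file names below are hypothetical samples, not output from a real system.

```shell
# Turn a saved two-column `stmsboot -L` mapping into sed substitutions and
# apply them to a copy of md.tab. Sample data is hypothetical.
cat > /tmp/stmsboot-L.out <<'EOF'
/dev/rdsk/c2t1d2 /dev/rdsk/c4t60020F20000002253B220FC000086944d0
/dev/rdsk/c2t1d3 /dev/rdsk/c4t60020F20000002253B220FD400071CD8d0
EOF

# Strip the /dev/rdsk/ prefix so bare names such as c2t1d2s1 also match.
awk '{ o = $1; n = $2
       sub("^/dev/rdsk/", "", o); sub("^/dev/rdsk/", "", n)
       printf "s|%s|%s|g\n", o, n }' /tmp/stmsboot-L.out > /tmp/rename.sed

echo 'd10 1 2 c2t1d2s1 c2t1d2s6 -i 32b' > /tmp/md.tab
sed -f /tmp/rename.sed /tmp/md.tab
```

Review the rewritten entries before installing them as /etc/lvm/md.tab.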
c. Recreate submirrors and mirror. The best method for re-creating mirrors and submirrors is to create a one-way mirror and then, after verifying that the data is good, attach the second mirror.
# metainit d10 # metainit d11
# metainit d14 -m d10
d. Verify that all the data is now available in the one-way mirror device.
d14 contains the same data as before multipathing was enabled on the system.
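Once the one-way mirror checks out, the second submirror is attached with the metattach command; a sketch (this step is implied above but not shown):

```shell
# metattach d14 d11
```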
CHAPTER 3
SAN Device Considerations on page 29 Adding SAN Devices on page 30 Fabric Device Node Configuration on page 31 Configuring Device Nodes Without Multipathing Enabled on page 33 Configuring Device Nodes With Multipathing Enabled on page 40
In previous releases of the Solaris Operating System, device paths had to be configured before use by using the cfgadm -c configure command. If a path was not configured, the storage could not be accessed and was not visible to disk utilities such as format. In Solaris 10, paths are configured automatically, and the separate cfgadm step is not necessary; all attached storage is visible to the host. Likewise, storage need not be unconfigured. If you wish to use a Solaris Volume Manager (SVM) metadevice on a fabric-attached disk, or a mirrored fabric boot device, you must use automatic path configuration. Manual configuration can be restored by adding the line manual_configuration_only=1 to /kernel/drv/fcp.conf. Configure ports and zones according to the vendor-specific documentation for storage and switches.
LUN masking enables specific LUNs to be seen by specific hosts. See your vendor-specific storage documentation that describes masking. Turn off power management on servers connected to the SAN to prevent unexpected results as one server attempts to power down a device while another attempts to gain access. See power.conf(1M) for details about power management. Connect arrays and other storage devices to the SAN with or without multipathing capability.
Note If you use the format command when multipathing is enabled, you will see only one instance of a device identifier for each LUN. Without multipathing, you will see one identifier for each path.
The cfgadm and cfgadm_fp commands are used most frequently to configure storage devices on a SAN. Refer to the appropriate man page for detailed instructions about how to use each command. Appendix A contains information about these commands.
1. Create the LUN or LUNs desired. 2. If necessary, apply LUN masking for HBA control on the SAN device. 3. Connect the storage device to the system. 4. If necessary, create port-based or WWN zones on the switch on the SAN device.
5. If necessary, configure all paths to the storage device by using the cfgadm -c configure command on all the host bus adapters (HBAs) that have a path to the storage device. The cfgadm -c configure command creates device nodes. This step is necessary if the storage device is accessed through a host port connected to a fabric port and manual configuration is enabled. 6. Run the fsck or newfs command on the device, if it is used for file systems. 7. Mount any existing file systems available on the storage device's LUNs or disk groups. You might need to run the fsck command to repair any errors in the LUNs listed in the /etc/vfstab file.
Note In the following examples, only failover path attachment point IDs (Ap_Ids) are listed. The Ap_Ids displayed on your system depend on your system configuration.

1. Become superuser.

2. Display the information about the attachment points on the system.
# cfgadm -l
Ap_Id
c0
c1
In this example, c0 represents a fabric-connected host port, and c1 represents a private, loop-connected host port. Use the cfgadm(1M) command to manage the device configuration on fabric-connected host ports. By default, the device configuration on private, loop-connected host ports is managed by a host using the Solaris Operating System. 3. Display information about the host ports and their attached devices.
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   unconfigured  unknown
c0::50020f2300006077  disk        connected   unconfigured  unknown
c0::50020f23000063a9  disk        connected   unconfigured  unknown
c0::50020f2300005f24  disk        connected   unconfigured  unknown
c0::50020f2300006107  disk        connected   unconfigured  unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c1::220203708b8d45f2  disk        connected   configured    unknown
c1::220203708b9b20b2  disk        connected   configured    unknown
Note The cfgadm -l command displays information about Fibre Channel host ports. Also use the cfgadm -al command to display information about Fibre Channel devices. The lines that include a port world wide name (WWN) in the Ap_Id field associated with c0 represent a fabric device. Use the cfgadm configure and unconfigure commands to manage those devices and make them available to hosts using the Solaris Operating System. The Ap_Id devices with port WWNs under c1 represent private-loop devices that are configured through the c1 host port.
1. Become superuser.
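Assuming the same Ap_Ids as the earlier listing, the fabric device is then configured with the cfgadm -c configure command described above; a sketch:

```shell
# cfgadm -c configure c0::50020f2300006077
```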
Chapter 3
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   configured    unknown
c0::50020f2300006077  disk        connected   configured    unknown
c0::50020f23000063a9  disk        connected   unconfigured  unknown
c0::50020f2300005f24  disk        connected   unconfigured  unknown
c0::50020f2300006107  disk        connected   unconfigured  unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c1::220203708b8d45f2  disk        connected   configured    unknown
c1::220203708b9b20b2  disk        connected   configured    unknown
Notice that the Occupant column for both c0 and c0::50020f2300006077 displays as configured, indicating that the c0 port has a configured occupant and that the c0::50020f2300006077 device is configured. Use the -o show_scsi_lun option to display FCP SCSI LUN information for multi-LUN SCSI devices.
CODE EXAMPLE 3-1 shows that the physical device connected through Ap_Id c0::50020f2300006077 has four LUNs configured. The device is now available on the host using the Solaris Operating System. The paths represent each SCSI LUN in the physical device represented by c0::50020f2300006077.

CODE EXAMPLE 3-1

# cfgadm -al -o show_scsi_lun c0
Ap_Id                   Type       Receptacle  Occupant    Condition
c0                      fc-fabric  connected   configured  unknown
c0::50020f2300006077,0  disk       connected   configured  unknown
c0::50020f2300006077,1  disk       connected   configured  unknown
c0::50020f2300006077,2  disk       connected   configured  unknown
c0::50020f2300006077,3  disk       connected   configured  unknown

CODE EXAMPLE 3-2 is an example of the luxadm(1M) output.
DEVICE PROPERTIES for disk: /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 35846.125 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d44002e60f0002d8ca:c,raw
   Controller                /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address           50020f23000007d4,7
    Host controller port WWN 200000017380a77e
    Class                    primary
    State                    ONLINE
   Controller                /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address           50020f23000007a4,7
    Host controller port WWN 210000e08b0a9675
    Class                    secondary
    State                    STANDBY
DEVICE PROPERTIES for disk: 50020f23000007a4
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 9217.688 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D43EE74C7B00047E6Cd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d43ee74c7b00047e6c:c,raw
   Controller                /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address           50020f23000007d4,2
    Host controller port WWN 200000017380a66e
    Class                    primary
    State                    ONLINE
   Controller                /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address           50020f23000007a4,2
    Host controller port WWN 210000e08b0a9675
    Class                    secondary
    State                    STANDBY

DEVICE PROPERTIES for disk: 50020f23000007a4
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 9217.688 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D43EE74C630001A25Fd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d43ee74c630001a25f:c,raw
   Controller                /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address           50020f23000007d4,1
    Host controller port WWN 200000017380a66e
    Class                    primary
    State                    ONLINE
   Controller                /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address           50020f23000007a4,1
    Host controller port WWN 210000e08b0a9675
    Class                    secondary
    State                    STANDBY
DEVICE PROPERTIES for disk: 50020f23000007a4
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 9217.688 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c14t60020F20000007D43EE74C93000612EBd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d43ee74c93000612eb:c,raw
   Controller                /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address           50020f23000007d4,3
    Host controller port WWN 200000017380a66e
    Class                    primary
    State                    ONLINE
   Controller                /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address           50020f23000007a4,3
    Host controller port WWN 210000e08b0a9675
    Class                    secondary
    State                    STANDBY
1. Become superuser.
2. Identify the fabric-connected host port whose devices are to be configured.

# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   unconfigured  unknown
c0::50020f2300006077  disk        connected   unconfigured  unknown
c0::50020f23000063a9  disk        connected   unconfigured  unknown
c0::50020f2300005f24  disk        connected   unconfigured  unknown
c0::50020f2300006107  disk        connected   unconfigured  unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c1::220203708b8d45f2  disk        connected   configured    unknown
c1::220203708b9b20b2  disk        connected   configured    unknown

3. Configure all of the devices on c0.

# cfgadm -c configure c0
Note This operation repeats the configure operation of an individual device for all the devices on c0. This can be time consuming if the number of devices on c0 is large.

4. Verify that all devices on c0 are configured.
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant    Condition
c0                    fc-fabric   connected   configured  unknown
c0::50020f2300006077  disk        connected   configured  unknown
c0::50020f23000063a9  disk        connected   configured  unknown
c0::50020f2300005f24  disk        connected   configured  unknown
c0::50020f2300006107  disk        connected   configured  unknown
c1                    fc-private  connected   configured  unknown
c1::220203708b69c32b  disk        connected   configured  unknown
c1::220203708ba7d832  disk        connected   configured  unknown
c1::220203708b8d45f2  disk        connected   configured  unknown
c1::220203708b9b20b2  disk        connected   configured  unknown
Use the -o show_scsi_lun option to display FCP SCSI LUN information for multi-LUN SCSI devices.
CODE EXAMPLE 3-3 shows that the physical devices connected through Ap_Ids c0::50020f2300006077 and c0::50020f2300006107 each have four LUNs configured. The physical devices represented by c0::50020f23000063a9 and c0::50020f2300005f24 each have two LUNs configured.
CODE EXAMPLE 3-3
# cfgadm -al -o show_scsi_lun c0
Ap_Id                   Type       Receptacle  Occupant    Condition
c0                      fc-fabric  connected   configured  unknown
c0::50020f2300006077,0  disk       connected   configured  unknown
c0::50020f2300006077,1  disk       connected   configured  unknown
c0::50020f2300006077,2  disk       connected   configured  unknown
c0::50020f2300006077,3  disk       connected   configured  unknown
c0::50020f23000063a9,0  disk       connected   configured  unknown
c0::50020f23000063a9,1  disk       connected   configured  unknown
c0::50020f2300005f24,0  disk       connected   configured  unknown
c0::50020f2300005f24,1  disk       connected   configured  unknown
c0::50020f2300006107,0  disk       connected   configured  unknown
c0::50020f2300006107,1  disk       connected   configured  unknown
c0::50020f2300006107,2  disk       connected   configured  unknown
c0::50020f2300006107,3  disk       connected   configured  unknown
Note The cfgadm -c configure command for fabric devices is the same whether or not multipathing is enabled, but the result is different. When multipathing is enabled, the host using the Solaris Operating System creates device node and path information that includes multipathing information.
1. Become superuser. 2. Identify the port WWN of the device to be configured as a multipath device. Look for devices on a fabric-connected host port, marked as fc-fabric. These are the devices you can configure with the cfgadm -c configure command.
CODE EXAMPLE 3-4
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   unconfigured  unknown
c0::50020f2300006077  disk        connected   unconfigured  unknown
c0::50020f23000063a9  disk        connected   unconfigured  unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c1::220203708b8d45f2  disk        connected   configured    unknown
c1::220203708b9b20b2  disk        connected   configured    unknown
c2                    fc-fabric   connected   unconfigured  unknown
c2::50020f2300005f24  disk        connected   unconfigured  unknown
c2::50020f2300006107  disk        connected   unconfigured  unknown
In CODE EXAMPLE 3-4, the c0::50020f2300006077 and c2::50020f2300006107 Ap_Ids represent the same storage device with different port WWNs for the storage device controllers. The c0 and c2 host ports are enabled for use by multipathing. 3. Configure the fabric device and make multipathing devices available to the host.
# cfgadm -c configure c0::50020f2300006077 c2::50020f2300006107
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant      Condition
c0                    fc-fabric   connected   configured    unknown
c0::50020f2300006077  disk        connected   configured    unknown
c0::50020f23000063a9  disk        connected   unconfigured  unknown
c1                    fc-private  connected   configured    unknown
c1::220203708b69c32b  disk        connected   configured    unknown
c1::220203708ba7d832  disk        connected   configured    unknown
c1::220203708b8d45f2  disk        connected   configured    unknown
c1::220203708b9b20b2  disk        connected   configured    unknown
c2                    fc-fabric   connected   configured    unknown
c2::50020f2300005f24  disk        connected   unconfigured  unknown
c2::50020f2300006107  disk        connected   configured    unknown
Notice that the Occupant column of c0 and c0::50020f2300006077 specifies configured, which indicates that the c0 port has at least one configured occupant and that the c0::50020f2300006077 device is configured. The same change has been made in c2 and c2::50020f2300006107. When the configure operation has been completed without an error, multipathing-enabled devices are created on the host using the Solaris Operating System. If the physical device represented by c0::50020f2300006077 and c2::50020f2300006107 has multiple SCSI LUNs configured, each LUN is configured as a multipathing device. CODE EXAMPLE 3-5 shows that two LUNs are configured through c0::50020f2300006077 and c2::50020f2300006107. Each Ap_Id is associated with a path to those multipath devices.
CODE EXAMPLE 3-5
# cfgadm -al -o show_scsi_lun c0::50020f2300006077 c2::50020f2300006107
Ap_Id                   Type  Receptacle  Occupant    Condition
c0::50020f2300006077,0  disk  connected   configured  unknown
c0::50020f2300006077,1  disk  connected   configured  unknown
c2::50020f2300006107,0  disk  connected   configured  unknown
c2::50020f2300006107,1  disk  connected   configured  unknown
Devices represented by Ap_Ids c0::50020f2300006077 and c2::50020f2300006107 are two paths to the same physical device, with c0::50020f2300006077 already configured. Configure the unconfigured devices on the selected port.
# cfgadm -c configure c2
Note This operation repeats the configure command of an individual device for all the devices on c2. This can be time-consuming if the number of devices on c2 is large.

3. Verify that all devices on c2 are configured.
# cfgadm -al
Ap_Id                 Type        Receptacle  Occupant    Condition
c0                    fc-fabric   connected   configured  unknown
c0::50020f2300006077  disk        connected   configured  unknown
c0::50020f23000063a9  disk        connected   configured  unknown
c1                    fc-private  connected   configured  unknown
c1::220203708b69c32b  disk        connected   configured  unknown
c1::220203708ba7d832  disk        connected   configured  unknown
c1::220203708b8d45f2  disk        connected   configured  unknown
c1::220203708b9b20b2  disk        connected   configured  unknown
c2                    fc-fabric   connected   configured  unknown
c2::50020f2300005f24  disk        connected   configured  unknown
c2::50020f2300006107  disk        connected   configured  unknown
Notice that the Occupant column of c2 and all of the devices under c2 is marked as configured. The -o show_scsi_lun option displays FCP SCSI LUN information for multiple-LUN SCSI devices.
CHAPTER 4
About the stmsboot Command on page 45 Disabling Multipathing on the Boot Controller Port on page 51
Note Do not add or remove devices from your configuration while enabling or disabling boot capability with the stmsboot command.
Option                  Description

-e                      Enable the enumeration of devices connected to the boot
                        controller port under scsi_vhci (the multipathing virtual
                        controller). Devices on the boot controller port will be
                        enumerated under the scsi_vhci virtual controller and will
                        be controlled by multipathing.

-d                      Disable the enumeration of devices connected to the boot
                        controller port under scsi_vhci. Devices on the boot
                        controller port will be enumerated directly under the
                        physical controller and will not be controlled by
                        multipathing.

-L                      Display the device name changes from non-multipathing
                        device names to multipathing device names that would occur
                        if multipathing were configured.

-l controller-number    Display the device name changes from non-multipathing
                        device names to multipathing device names that would occur
                        for the given controller if multipathing were configured
                        on this controller.
For additional information about the stmsboot command, see the stmsboot(1M) man page. A brief description of the syntax is presented here.
Caution After an stmsboot enable operation, do not remove /dev links of any device names that are not multipathing enabled devices for the boot device. This includes, but is not limited to, executing commands such as devfsadm -C. Even though these boot device links are stale, they are needed if you want to disable the stmsboot features later. Removing these links might cause your system to lose the necessary links to the devices that are not multipathing enabled, in which case the system will not boot. You then need to manually create the links by using the mknod command.
You must back up data on the root device prior to performing the stmsboot command. You must first enable the multipathing software to use any stmsboot command.
Plan to perform an immediate reconfiguration reboot of the host after enabling or disabling multipathing on a boot controller port. Do not perform other system tasks or activities until you have performed the reconfiguration reboot.

Once enabled, the stmsboot feature has a dependency on the boot path. Upon successful completion of the stmsboot -e command, a per-port mpxio-disable entry for the boot controller path is added automatically to the /kernel/drv/fp.conf file. Do not change the boot path manually. Use the stmsboot -d command to disable a configured boot path. This command automatically updates both the eeprom boot-device setting and the port-related mpxio-disable entry in the /kernel/drv/fp.conf file, and then allows the system to boot by using the selected path.

If your system fails during an enabling or disabling stmsboot operation, your original /etc/path_to_inst, /etc/vfstab, /etc/system, and /etc/dumpadm.conf files are saved with .sav file name extensions in the /var/tmp directory. The saved files can be useful in recovering from an unexpected system crash.

If any Sun StorEdge T3, 3900, or 6x20 arrays are connected to the boot controller port, modify the settings when prompted to do so by the stmsboot command.

After the stmsboot enable or disable operation, device names change. Disable any volume manager that is actively using devices connected to the boot controller port, and reconfigure these devices with the new multipathing device names. Some applications that use non-multipathing device names will need to be modified.
Chapter 4
Display the potential /dev device name changes for all controllers by using the stmsboot -L command.
# stmsboot -L Version : 1.8 non-STMS device name /dev/rdsk/c1t64d0 /dev/rdsk/c1t65d0 /dev/rdsk/c4t64d0 /dev/rdsk/c4t65d0
Display potential /dev links for one controller by using the stmsboot -l controller-number command. For example, to display the device name change for the boot controller port, obtain the device name (cxtydz) corresponding to the root (/) directory from the /etc/vfstab file, where x is the controller number. In this example, x is equal to 3.
# stmsboot -l 3 Version: 1.8 non-STMS device name /dev/rdsk/c3t64d0 /dev/rdsk/c3t65d0 /dev/rdsk/c3t66d0 /dev/rdsk/c3t67d0
1. Become superuser. 2. If multipathing is not enabled, perform the procedure described in Enabling Multipathing Globally on page 6. 3. If any applications are running on any device connected to the boot controller port, deactivate them. 4. Verify that no devlinks or disks commands are running. If these commands are running, wait until they are finished.
# ps -elf | grep devlinks
5. Enable multipathing on the boot controller port by using the stmsboot -e command.

# stmsboot -e
6. Type y to continue. The software then prompts you to perform a reconfiguration reboot.
Boot controller is c1
Listed below are the non-STMS device names and their corresponding new STMS device names.

/dev/rdsk/c1t4d0   /dev/rdsk/c9t60020F200000024A3CDAA4BC00051BE5d0
/dev/rdsk/c1t4d1   /dev/rdsk/c9t60020F200000024A3DCBC93E00095B12d0
/dev/rdsk/c1t4d2   /dev/rdsk/c9t60020F200000024A3DCBC95700049C40d0
WARNING!!! There is at least one Sun StorEdge T3/T3B, or 6x20, that is not in "mpxio" mode. For the system to function properly, please set mp_support to "mpxio" mode on the device(s) now and press any key to continue.

A reconfiguration reboot is mandatory for system sanity. While you may choose to complete other tasks prior to initiating the reboot, doing so is not recommended.

Reboot the system now ? [y/n] (default:y)
Note In case of an unexpected error, the stmsboot operation will fail, indicating an error condition, and then exit. Some useless device links might have been created, but the system files remain unmodified. Correct the error and repeat the operation. If Sun StorEdge T3, 3900, or 6x20 arrays are connected to the boot controller port, the software will prompt you to change the mp_support setting on the storage array appropriately. Refer to your array documentation for additional information.
1. Become superuser. 2. Verify that no devlinks or disks commands are running. If these commands are running, wait until they have finished.
# ps -elf | grep devlinks
# ps -elf | grep disks
3. Disable multipathing on the boot controller port by using the stmsboot -d command. The software prompts you to choose a device path.

# stmsboot -d
4. Type y to continue.
Please choose a device path [1 - 2]
WARNING: Devices connected to the selected device path will be de-enumerated from scsi_vhci
1) /devices/pci@1f,4000/pci@4/SUNW,qlc@5/fp@0,0
2) /devices/pci@1f,4000/pci@2/SUNW,qlc@4/fp@0,0
Choice [default: 1]:
Note In case of an unexpected error, the stmsboot operation will fail, indicating an error condition, and then exit. Correct the error and repeat the operation.

5. Type the device path and press Return. This example shows the results for choice 2. The software then prompts you to perform a reconfiguration reboot.
Note If Sun StorEdge T3, 3900, or 6x20 arrays are connected to the boot controller port, the software will prompt you to change the mp_support setting on the storage array appropriately. To ensure that failover works, select the ONLINE path.
Choice [default: 1]: 2
Updated eeprom boot-device to boot through the selected path
/pci@8,600000/SUNW,qlc@2/fp@0,0/disk@w21000004cf721119,0

WARNING!!! There is at least one Sun StorEdge T3/T3B or 6x20 in "mpxio" mode. For the sane operation of the system, please set mp_support to "rw" (or any appropriate mode other than "mpxio") on the device(s) now and press any key to continue.
A reconfiguration reboot is mandatory for system sanity. While you may choose to complete other tasks prior to initiating the reboot, doing so is not recommended. Reboot the system now ? [y/n] (default:y)
CHAPTER 5
Loading IPFC
Configuration of IPFC depends on the instance of fp, that is, the host bus adapter port. If multiple host bus adapters (HBAs) are present, plumb manually after identifying the fp instance on which IP should be plumbed. The following sections describe the tasks required to identify fp instances.
IPFC Considerations
IPFC devices are supported for use with Network File System (NFS) software, Network Attached Storage (NAS) devices, and Sun StorEdge Network Data Replicator (Sun SNDR) software or Sun StorEdge Availability Suite 3.1 remote mirror software.
Cascading                  Yes, with fabric zones only
Zone type                  Fabric zone (with the HBA configured as an F-port
                           point-to-point connection)
Maximum number of devices  253
IPFC is not supported on 1 Gbit Sun switches.

Promiscuous mode is not supported, so the snoop(1M) utility cannot be used.

Multicasting is supported through broadcasting only.

You must assign the IP address of the IPFC port to a subnet different from that of the Ethernet interfaces on the same system.

Network cards using IPFC cannot be used as routers. The /etc/notrouter file must be present on the host.

With IPFC, storage devices and hosts should be in separate zones. The storage device should have one path to one zone and another path to another zone for failover and redundancy. The host can have more than one path to a specified zone, and it should have at least one path to each zone so that it can see the respective storage.

Any standard network commands can be used after IPFC is attached. There are no usage differences when these commands (telnet, ping, or ftp) are used in an Ethernet setup.

Turn off power management on servers connected to the SAN to prevent unexpected results as one server attempts to power down a device while another attempts to gain access. See power.conf(1M) for details about power management.
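The separate-subnet requirement above can be checked mechanically; a minimal sketch (the helper functions are not part of Solaris, and the sample addresses are the illustrative ones used later in this chapter):

```shell
# Compare the subnet portion of two IPv4 addresses under a netmask.
# Helper names and sample addresses are illustrative only.
to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

same_subnet() {
  m=$(to_int "$3")
  [ $(( $(to_int "$1") & m )) -eq $(( $(to_int "$2") & m )) ]
}

# An IPFC address on 192.9.201.x and an Ethernet address on 192.9.200.x
# satisfy the separate-subnet rule for a 255.255.255.0 netmask.
same_subnet 192.9.201.10 192.9.200.70 255.255.255.0 && echo same || echo different
```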
3. Manually load IPFC on each desired fp instance. Use the ifconfig fcip0 plumb command, where 0 is the desired fp instance number. For example:
# ifconfig fcip0 plumb
When the command is successful, a message appears on both the console and in the messages file. For example:
Sep 13 15:52:30 bytownite ip: ip: joining multicasts failed (7) on fcip0 - will use link layer broadcasts for multicast
Chapter 5
a. Use an editor to search for the fp driver binding name in the /etc/path_to_inst file. Entries have fp on the line.
Note Determine the correct entry by finding the hardware path described in your server hardware manual or Sun System Handbook. The Sun System Handbook is available at http://sunsolve.sun.com/handbook_pub/.
b. Narrow the search by using the I/O board and slot information from Step 1.
Note The following method of deriving the Solaris device path of an HBA from its physical location in the server may not work for all Sun server hardware.

i. Multiply the PCI adapter slot number by the number of adapter ports. For example, if the HBA has two ports, multiply by 2. Using the array with an HBA in PCI adapter slot 5, multiply 5 by 2 to get 10.

ii. Add the PCI adapter I/O board slot number to the number derived in Step i. With an HBA in PCI adapter slot 5 and PCI slot 1 of the I/O board, add 1 to 10 for a sum of 11.

iii. Convert the number derived in Step ii to hexadecimal. The number 11 converts to b in hexadecimal.

iv. Search for the fp entry with pci@hex, where hex is the number you derived in Step iii.
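The arithmetic in Steps i through iv can be checked with shell arithmetic; a small sketch using the slot numbers from the example:

```shell
# PCI adapter slot 5, two-port HBA, I/O board slot 1:
# (5 * 2) + 1 = 11, which is b in hexadecimal.
ports_per_hba=2
pci_slot=5
io_board_slot=1
n=$(( pci_slot * ports_per_hba + io_board_slot ))
hex=$(printf '%x' "$n")
echo "pci@$hex"
```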
CODE EXAMPLE 5-1 shows a single Fibre Channel network adapter device path. TABLE 5-2 describes the elements of the device path.
CODE EXAMPLE 5-1
"/pci@b,2000/SUNW,qlc@2/fp@0,0" 7 "fp"
TABLE 5-2 PCI Single Fibre Channel Network Adapter /etc/path_to_inst Device Path Entry

Entry Value                      Entry Item
/pci@b,2000/SUNW,qlc@2/fp@0,0    Physical device path
7                                fp instance number
fp                               Driver binding name
3. Manually plumb each FP instance. Use the ifconfig <interface_number> plumb command. In this example, the value of <interface_number> is fcip7.
# ifconfig fcip7 plumb
When the command is successful, a message appears on both the console and in the messages file. For example:
Sep 13 15:52:30 bytownite ip: ip: joining multicasts failed (7) on fcip7 - will use link layer broadcasts for multicast
1. For each entry in /dev/fc, issue a luxadm -e dump_map command to view all the devices that are visible through that HBA port:
# luxadm -e dump_map /dev/fc/fp0
 Pos Port_ID Hard_Addr Port WWN         Node WWN         Type
 0   610100  0         210000e08b049f53 200000e08b049f53 0x1f (Unknown Type)
 1   620d02  0         210000e08b02c32a 200000e08b02c32a 0x1f (Unknown Type)
 2   620f00  0         210000e08b03eb4b 200000e08b03eb4b 0x1f (Unknown Type)
 3   620e00  0         210100e08b220713 200100e08b220713 0x1f (Unknown Type,Host Bus Adapter)
# luxadm -e dump_map /dev/fc/fp1
 No FC devices found. - /dev/fc/fp1
2. Based on the list of devices, determine which destination HBAs are visible to the remote host with which you want to establish IPFC communications. In the example for this procedure, the destination HBAs have port IDs 610100 and 620d02. The originating HBA's port ID is 620e00.
3. List the physical path of the originating HBA port from which you can see the destination HBA port, where originating-hba-link is a variable for the link determined in Step .
# ls -l /dev/fc/fp originating-hba-link
4. Search for the physical path identified in Step 3 in the /etc/path_to_inst file. You must remove the leading ../../devices from the path name output. For example:

# grep pci@8,600000/SUNW,qlc@1/fp@0,0 /etc/path_to_inst
"/pci@8,600000/SUNW,qlc@1/fp@0,0" 0 "fp"
5. Determine the fp instance for the originating HBA port from the output of the command in Step 4. The instance number precedes fp in the output. In the following example output, the instance number is 0.
"/pci@8,600000/SUNW,qlc@1/fp@0,0" 0 "fp"
6. Use the instance number from Step 5 to load IPFC and plumb the IPFC interface. In this example, the instance is 0.
# ifconfig fcip0 plumb
"To Start a Network Interface Manually" on page 59
"To Configure the Host for Automatic Plumbing Upon Reboot" on page 59
1. Use the ifconfig command with the appropriate interface. Ask your network administrator for an appropriate IP address and netmask information. For example, to enable an IPFC interface associated with fp instance 0 and an IP address of 192.9.201.10, type:
# touch /etc/notrouter # ifconfig fcip0 inet 192.9.201.10 netmask 255.255.255.0 up
The ifconfig command is described in more detail in the ifconfig(1M) man page.

2. Use the command ifconfig -a to verify the network is functioning. The output of ifconfig -a should look like this:
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
fcip0: flags=1001843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,IPv4> mtu 1500 index 2
        inet 192.9.201.10 netmask ffffff00 broadcast 192.9.201.255
        ether 0:e0:8b:1:3c:f7
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.9.200.70 netmask ffffff00 broadcast 192.9.200.255
        ether 8:0:20:fc:e9:49
1. Manually create a /etc/hostname.interface file with a text editor so it contains a single line that identifies the host name or interface IP address.

2. Use a text editor to make any additional entries to the /etc/inet/hosts file. The Solaris installation program creates the /etc/inet/hosts file with minimum entries. You must manually make additional entries with a text editor. (See the hosts(4) man page for additional information.)

The /etc/inet/hosts file contains the hosts database. This file contains the host names and the primary network interface IP addresses, as well as the IP addresses of other network interfaces attached to the system and of any other network interfaces that the machine must know about.
CODE EXAMPLE 5-2 shows an example of an /etc/inet/hosts file.
CODE EXAMPLE 5-2
127.0.0.1      localhost loghost
192.9.200.70   sun1    #This is the local host name
192.9.201.10   fcip0   #Interface to network 192.9.201.10
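The two files described above can be sketched as follows. The interface name fcip0 and the addresses are the example values from the text; ROOT points the sketch at a scratch tree so it can be run safely (on a live system these paths would be /etc/hostname.fcip0 and /etc/inet/hosts).

```shell
# Create the per-interface hostname file and the hosts entries in a
# scratch copy of the filesystem tree (ROOT). On a live system you
# would append to the existing /etc/inet/hosts rather than overwrite.
ROOT=${ROOT:-/tmp/ipfc-demo}
mkdir -p "$ROOT/etc/inet"
echo 'fcip0' > "$ROOT/etc/hostname.fcip0"
cat > "$ROOT/etc/inet/hosts" <<'EOF'
127.0.0.1      localhost loghost
192.9.200.70   sun1    #This is the local host name
192.9.201.10   fcip0   #Interface to network 192.9.201.10
EOF
cat "$ROOT/etc/hostname.fcip0"
```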
3. Edit the /etc/nsswitch.conf file so that all uncommented entries have the word files before any other name service. The /etc/nsswitch.conf file specifies which name service to use for a particular machine. CODE EXAMPLE 5-3 shows an example of an /etc/nsswitch.conf file.
CODE EXAMPLE 5-3
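The "files first" requirement in Step 3 can also be checked with a short awk script. As a sketch, it runs here against a two-line sample file; point FILE at /etc/nsswitch.conf on a live host.

```shell
# Print the first name service listed for each uncommented entry;
# every printed value should be "files".
FILE=${FILE:-/tmp/nsswitch.sample}
printf 'hosts:   files dns\nipnodes: files dns\n' > "$FILE"
awk -F: '!/^#/ && NF >= 2 { split($2, w, " "); print $1 ": " w[1] }' "$FILE"
```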
CHAPTER
1. Become superuser.
2. Identify the device to be unconfigured. Only devices on a fabric-connected host port can be unconfigured.
# cfgadm -al
Ap_Id                  Type        Occupant      Condition
c0                     fc-fabric   configured    unknown
c0::50020f2300006077   disk        configured    unknown
c0::50020f23000063a9   disk        configured    unknown
c1                     fc-private  configured    unknown
c1::220203708b69c32b   disk        configured    unknown
c1::220203708ba7d832   disk        configured    unknown

3. Unconfigure the device (for example, with cfgadm -c unconfigure c0::50020f2300006077). The listing then shows that device as unconfigured:

Ap_Id                  Type        Occupant      Condition
c0                     fc-fabric   configured    unknown
c0::50020f2300006077   disk        unconfigured  unknown
c0::50020f23000063a9   disk        configured    unknown
c1                     fc-private  configured    unknown
c1::220203708b69c32b   disk        configured    unknown
c1::220203708ba7d832   disk        configured    unknown
1. Become superuser.
2. Identify the fabric devices to be unconfigured. Only devices on a fabric-connected host port can be unconfigured.
# cfgadm -al
Ap_Id                  Type        Occupant      Condition
c0                     fc-fabric   configured    unknown
c0::50020f2300006077   disk        configured    unknown
c0::50020f23000063a9   disk        configured    unknown
c1                     fc-private  configured    unknown
c1::220203708b69c32b   disk        configured    unknown
c1::220203708ba7d832   disk        configured    unknown
3. Stop all activity to each fabric device on the selected port and unmount any file systems on each fabric device. If the device is under any volume manager's control, see the documentation for your volume manager before unconfiguring the device.
# cfgadm -c unconfigure c0
Ap_Id                  Type        Occupant      Condition
c0                     fc-fabric   unconfigured  unknown
c0::50020f2300006077   disk        unconfigured  unknown
c0::50020f23000063a9   disk        unconfigured  unknown
c1                     fc-private  configured    unknown
c1::220203708b69c32b   disk        configured    unknown
c1::220203708ba7d832   disk        configured    unknown
Notice that the Occupant column of c0 and all the fabric devices attached to it are displayed as unconfigured.
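The same check can be scripted: every attachment point under c0 should now report unconfigured. The sample rows below mimic the listing above; on a live host, pipe `cfgadm -al` output in instead of printf.

```shell
# Flag any c0 attachment point whose Occupant field (column 4) is not
# "unconfigured".
printf '%s\n' \
  'c0                    fc-fabric  connected unconfigured unknown' \
  'c0::50020f2300006077  disk       connected unconfigured unknown' \
  'c1                    fc-private connected configured   unknown' \
  | awk '$1 ~ /^c0/ && $4 != "unconfigured" { bad = 1 }
         END { print (bad ? "c0: still configured" : "c0: all unconfigured") }'
```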
Chapter 6
1. Become superuser.

2. Identify the port WWN of the fabric device to be unconfigured.
# cfgadm -al
Ap_Id                  Type        Receptacle   Occupant     Condition
c0                     fc-fabric   connected    configured   unknown
c0::50020f2300006077   disk        connected    configured   unknown
c0::50020f23000063a9   disk        connected    configured   unknown
c1                     fc-private  connected    configured   unknown
c1::220203708b69c32b   disk        connected    configured   unknown
c1::220203708ba7d832   disk        connected    configured   unknown
c2                     fc-fabric   connected    configured   unknown
c2::50020f2300005f24   disk        connected    configured   unknown
c2::50020f2300006107   disk        connected    configured   unknown
In this example, the c0::50020f2300006077 and c2::50020f2300006107 Ap_Ids represent different port WWNs for the same device associated with a multipathing device. The c0 and c2 host ports are enabled for use by the multipathing software.

3. Stop all device activity to each fabric device on the selected port and unmount any file systems on each fabric device. If the device is under any volume manager's control, see the documentation for your volume manager for maintaining the fabric device.

4. Unconfigure fabric devices associated with the multipathing device. Only devices on a fabric-connected host port can be unconfigured through the cfgadm -c unconfigure command.
# cfgadm -c unconfigure c0::50020f2300006077 c2::50020f2300006107
Note - You can remove a device from up to eight paths individually, as in the example command cfgadm -c unconfigure c0::1111, c1::2222, c3::3333, and so on. As an alternative, you can remove an entire set of paths from the host, as in the example cfgadm -c unconfigure c0.
# cfgadm -al
Ap_Id                  Type        Receptacle   Occupant      Condition
c0                     fc-fabric   connected    configured    unknown
c0::50020f2300006077   disk        connected    unconfigured  unknown
c0::50020f23000063a9   disk        connected    configured    unknown
c1                     fc-private  connected    configured    unknown
c1::220203708b69c32b   disk        connected    configured    unknown
c1::220203708ba7d832   disk        connected    configured    unknown
c2                     fc-fabric   connected    configured    unknown
c2::50020f2300005f24   disk        connected    configured    unknown
c2::50020f2300006107   disk        connected    unconfigured  unknown
Notice that the Ap_Ids c0::50020f2300006077 and c2::50020f2300006107 are unconfigured. The Occupant column of c0 and c2 still displays those ports as configured because they have other configured occupants. The multipath devices associated with the Ap_Ids c0::50020f2300006077 and c2::50020f2300006107 are no longer available to the host using the Solaris Operating System. The following two multipath devices are removed from the host:
/dev/rdsk/c6t60020F20000061073AC8B52D000B74A3d0s2 /dev/rdsk/c6t60020F20000061073AC8B4C50004ED3Ad0s2
1. Become superuser.
2. Identify the port WWN of the fabric device to be unconfigured.

# cfgadm -al
Ap_Id                  Type        Occupant     Condition
c0                     fc-fabric   configured   unknown
c0::50020f2300006077   disk        configured   unknown
c0::50020f23000063a9   disk        configured   unknown
c1                     fc-private  configured   unknown
c1::220203708b69c32b   disk        configured   unknown
c1::220203708ba7d832   disk        configured   unknown
c2                     fc-fabric   configured   unknown
c2::50020f2300005f24   disk        configured   unknown
c2::50020f2300006107   disk        configured   unknown
In this example, the c0::50020f2300006077 and c2::50020f2300006107 Ap_Ids represent different port WWNs for the same device.

3. Unconfigure the Ap_Id associated with the multipathing device.
Note - If the Ap_Id represents the last configured path to the device, stop all activity to the path and unmount any file systems on it. If the multipathing device is under any volume manager's control, see the documentation for your volume manager for maintaining the fabric device.

In the example that follows, the path represented as c2::50020f2300006107 is unconfigured, and c0::50020f2300006077 remains configured to show how you can unconfigure just one of multiple paths for a multipathing device.
# cfgadm -c unconfigure c2::50020f2300006107
Ap_Id                  Type        Occupant      Condition
c0                     fc-fabric   configured    unknown
c0::50020f2300006077   disk        configured    unknown
c0::50020f23000063a9   disk        configured    unknown
c1                     fc-private  configured    unknown
c1::220203708b69c32b   disk        configured    unknown
c1::220203708ba7d832   disk        configured    unknown
c2                     fc-fabric   configured    unknown
c2::50020f2300005f24   disk        configured    unknown
c2::50020f2300006107   disk        unconfigured  unknown
The multipathing devices associated with that Ap_Id are still available to a host using the Solaris Operating System through the other path, represented by c0::50020f2300006077. A device can be connected to multiple Ap_Ids and an Ap_Id can be connected to multiple devices.
4. Verify that all devices on c2 are unconfigured. Notice that the Occupant column lists c2 and all the devices attached to c2 as unconfigured.
# cfgadm -al
Ap_Id                  Type        Receptacle   Occupant      Condition
c0                     fc-fabric   connected    configured    unknown
c0::50020f2300006077   disk        connected    configured    unknown
c1                     fc-private  connected    configured    unknown
c1::220203708b69c32b   disk        connected    configured    unknown
c1::220203708ba7d832   disk        connected    configured    unknown
c2                     fc-fabric   connected    unconfigured  unknown
c2::50020f2300005f24   disk        connected    unconfigured  unknown
c2::50020f2300006107   disk        connected    unconfigured  unknown
APPENDIX
A system that does not use multipathing driver software
A system that uses multipathing driver software
"About Multipathing Configuration Samples" on page 69
"Configuration without Multipathing" on page 70
"Configuration with Multipathing" on page 71
Sun StorEdge T3 partner pair with 4 LUNs.
Sun StorEdge A5200 with both A and B loop connected.
Utility: format
The output for a configuration that does not use multipathing is as follows:
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
          /pci@1f,4000/scsi@3/sd@0,
       1. c2t3d0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/SUNW,qlc@2/fp@0,0/ssd@w50020f23000042d4,0
       2. c2t3d1 <SUN-T300-0116 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/SUNW,qlc@2/fp@0,0/ssd@w50020f23000042d4,1
       3. c2t3d2 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/SUNW,qlc@2/fp@0,0/ssd@w50020f23000042d4,2
       4. c2t3d3 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/SUNW,qlc@2/fp@0,0/ssd@w50020f23000042d4,3
       5. c3t4d0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300003fad,0
       6. c3t4d1 <SUN-T300-0116 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300003fad,1
       7. c3t4d2 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300003fad,2
       8. c3t4d3 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
          /pci@1f,4000/SUNW,qlc@4/fp@0,0/ssd@w50020f2300003fad,3
       9. c4t68d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@1f,2000/pci@1/SUNW,qlc@4/fp@0,0/ssd@w22000020371a1862,0
       .
       .
       .
Specify disk (enter its number): ^D
Utility: format
The output of format looks as follows:
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
          /pci@1f,4000/scsi@3/sd@0,0
       1. c6t60020F20000042D43ADCBC4E000C41E2d0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
          /scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2
       2. c6t60020F20000042D43B0E926A000AA3FCd0 <SUN-T300-0116 cyl 34145 alt 2 hd 24 sec 128>
          /scsi_vhci/ssd@g60020f20000042d43b0e926a000aa3fc
       3. c6t60020F20000042D43B2753510008C9DFd0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
          /scsi_vhci/ssd@g60020f20000042d43b2753510008c9df
       4. c6t60020F20000042D43B275377000877DDd0 <SUN-T300-0117 cyl 34145 alt 2 hd 24 sec 128>
          /scsi_vhci/ssd@g60020f20000042d43b275377000877dd
       5. c6t20000020371A1862d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /scsi_vhci/ssd@g20000020371a1862
Specify disk (enter its number): ^D
Devices enumerated under multipathing have a /scsi_vhci/ssd entry. The first four scsi_vhci entries are the four T3 LUNs, which now have a long name containing a Globally Unique Identifier (GUID). Also, only one entry per LUN is visible instead of two (one per path). The next scsi_vhci entry is a disk in the Sun StorEdge A5200 array. Only one controller number is assigned for all the devices encapsulated under multipathing.
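The relationship between the two naming schemes can be seen by stripping the controller/target wrapper from the long name: what remains is the GUID that appears after ssd@g in the /scsi_vhci path. The name below is disk 1 from the format output above; the cNt prefix (c6t here) and d0 suffix are what get removed.

```shell
# Recover the GUID embedded in a multipathing device name.
name='c6t60020F20000042D43ADCBC4E000C41E2d0'
guid=${name#c6t}     # drop the controller/target prefix
guid=${guid%d0}      # drop the disk-number suffix
echo "$guid" | tr 'A-F' 'a-f'
```

The result, 60020f20000042d43adcbc4e000c41e2, matches the /scsi_vhci/ssd@g... path shown by format.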
Appendix A
Use luxadm display to identify the mapping between device entries without multipathing and the device entries with multipathing.
The number of paths to the storage device.
The mapping of the paths prior to multipathing and after multipathing (under Path(s), controller and device address).
The state and type of each path.
ONLINE indicates the active path(s) through which I/O is going to the device. If there is more than one ONLINE path, multipathing uses load balancing (such as a round-robin scheme) or a single I/O path to the device, depending on the setting of the load-balance variable in the /kernel/drv/scsi_vhci.conf file.

STANDBY indicates the path is available if an ONLINE path fails or is switched to another state. There can be many STANDBY paths. If a STANDBY path is chosen to be an active path for routing I/O, its status changes to ONLINE.

OFFLINE indicates a path that previously existed but is not available now.

Class type for each path:

PRIMARY: This path is the preferred path for routing I/O.
If a Sun StorEdge T3 partner-pair configuration is used, two paths exist for each LUN: one ONLINE and one STANDBY. In this configuration, I/O to the LUN is active on its ONLINE path. If this path fails, the STANDBY path becomes the ONLINE path. However, if autofailback is disabled in the configuration and the first path later becomes available, it goes into STANDBY mode instead of ONLINE, thus avoiding an expensive failover operation. Use the luxadm failover command to bring the restored path back to ONLINE (see the luxadm man page). There can be more than one primary and secondary path to a Sun StorEdge T3, 39x0, or 6x20 device in a SAN environment. As the following two luxadm display outputs of T3 LUNs show, PRIMARY can be STANDBY and SECONDARY can be ONLINE for one LUN on the same physical paths, whereas SECONDARY is STANDBY and PRIMARY is ONLINE for another LUN.
# luxadm display 50020f23000007a4
DEVICE PROPERTIES for disk: 50020f23000007a4
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 9217.688 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c14t60020F20000007D43EE74C4B0000C910d0s2
  /devices/scsi_vhci/ssd@g60020f20000007d43ee74c4b0000c910:c,raw
   Controller           /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address              50020f23000007d4,0
    Host controller port WWN    200000017380a66e
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address              50020f23000007a4,0
    Host controller port WWN    210000e08b0a9675
    Class                       secondary
    State                       STANDBY
# luxadm display \
/dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  Vendor:               SUN
  Product ID:           T300
  Revision:             0301
  Serial Num:           Unsupported
  Unformatted capacity: 35846.125 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c14t60020F20000007D44002E60F0002D8CAd0s2
  /devices/scsi_vhci/ssd@g60020f20000007d44002e60f0002d8ca:c,raw
   Controller           /devices/pci@6,2000/SUNW,jfca@2,1/fp@0,0
    Device Address              50020f23000007d4,7
    Host controller port WWN    200000017380a66e
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@7,2000/SUNW,qlc@2/fp@0,0
    Device Address              50020f23000007a4,7
    Host controller port WWN    210000e08b0a9675
    Class                       secondary
    State                       STANDBY
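When reading long luxadm display listings, a quick tally of path states is often all you need. The sketch below counts State lines; the two sample lines stand in for real luxadm output, which you would pipe in on a live host.

```shell
# Tally the per-path State values from luxadm display output.
printf 'State   ONLINE\nState   STANDBY\n' \
  | awk '$1 == "State" { n[$2]++ }
         END { printf "ONLINE=%d STANDBY=%d\n", n["ONLINE"], n["STANDBY"] }'
```

For the partner-pair example above this prints ONLINE=1 STANDBY=1, one active and one standby path per LUN.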
With the Sun StorEdge A5200 in this configuration, there are two paths. Both are ONLINE, and I/O is load-balanced across them. If a path fails, the second path continues the I/O. If the failed path comes back, it returns to the ONLINE state and, if load balancing is enabled, starts participating in I/O transfer again.
# luxadm display /dev/rdsk/c6t20000020371A1862d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c6t20000020371A1862d0s2
  Status(Port A):       O.K.
  Status(Port B):       O.K.
  Vendor:               SEAGATE
  Product ID:           ST136403FSUN36G
  WWN(Node):            20000020371a1862
  WWN(Port A):          21000020371a1862
  WWN(Port B):          22000020371a1862
  Revision:             114A
  Serial Num:           LT0187150000
  Unformatted capacity: 34732.891 MBytes
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0xffff
  Location:             In the enclosure named: f
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c6t20000020371A1862d0s2
  /devices/scsi_vhci/ssd@g20000020371a1862:c,raw
   Controller           /devices/pci@1f,2000/pci@1/SUNW,qlc@4/fp@0,0
    Device Address              22000020371a1862,0
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@1f,2000/pci@1/SUNW,qlc@5/fp@0,0
    Device Address              21000020371a1862,0
    Class                       primary
    State                       ONLINE
After a failure of one path is corrected, the LUN does not fail back to the original configuration automatically if auto-failback is set to "disable" in /kernel/drv/scsi_vhci.conf. A luxadm failover subcommand must be issued to fail over to the original configuration. The failover process occurs as follows:

1. The original state of a Sun StorEdge T3 LUN is obtained with the luxadm display command.
# luxadm display /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  Status(Port A):       O.K.
  Status(Port B):       O.K.
  Vendor:               SUN
  Product ID:           T300
  WWN(Node):            50020f20000042d4
  WWN(Port A):          50020f23000042d4
  WWN(Port B):          50020f2300003fad
  Revision:             0117
  Serial Num:           Unsupported
  Unformatted capacity: 51220.500 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  /devices/scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2:c,raw
   Controller           /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0
    Device Address              50020f23000042d4,0
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0
    Device Address              50020f2300003fad,0
    Class                       secondary
    State                       STANDBY
Note The primary path is ONLINE and the secondary path is STANDBY.
2. The cable is pulled from T3 controller 50020f23000042d4. Failover is triggered: the primary path goes OFFLINE and the secondary path becomes ONLINE. The LUN status is now degraded.
# luxadm display /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  ..
  Path(s):

  /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  /devices/scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2:c,raw
   Controller           /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0
    Device Address              50020f23000042d4,0
    Class                       primary
    State                       OFFLINE
   Controller           /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0
    Device Address              50020f2300003fad,0
    Class                       secondary
    State                       ONLINE
3. The cable is reinserted into T3 controller 50020f23000042d4. The device state becomes optimal, but failover is not triggered. The primary path comes up as STANDBY, and the secondary path is still ONLINE.
# luxadm display /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  ..
  Path(s):

  /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  /devices/scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2:c,raw
   Controller           /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0
    Device Address              50020f23000042d4,0
    Class                       primary
    State                       STANDBY
   Controller           /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0
    Device Address              50020f2300003fad,0
    Class                       secondary
    State                       ONLINE
4. Issue the luxadm failover subcommand. This triggers the failback: the primary path becomes ONLINE and the secondary path becomes STANDBY, which is equivalent to the original state in Step 1.

5. To verify the failover operation, display the properties using the luxadm display command.
# luxadm display /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  ..
  Path(s):

  /dev/rdsk/c5t60020F20000042D43ADCBC4E000C41E2d0s2
  /devices/scsi_vhci/ssd@g60020f20000042d43adcbc4e000c41e2:c,raw
   Controller           /devices/pci@1f,4000/SUNW,qlc@2/fp@0,0
    Device Address              50020f23000042d4,0
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@1f,4000/SUNW,qlc@4/fp@0,0
    Device Address              50020f2300003fad,0
    Class                       secondary
    State                       STANDBY
For more details on the luxadm failover command, see the man pages.
APPENDIX
HBA API Function                        Supported
HBA_GetVersion                          Yes
HBA_LoadLibrary                         Yes
HBA_FreeLibrary                         Yes
HBA_GetNumberofAdapters                 Yes
HBA_GetAdapterName                      Yes
HBA_OpenAdapter                         Yes
HBA_CloseAdapter                        Yes
HBA_GetAdapterAttributes                Yes
HBA_GetAdapterPortAttributes            Yes
HBA_GetDiscoveredPortAttributes         Yes
HBA_GetPortAttributesbyWWN              Yes
HBA_SendCTPassThru                      Yes
HBA_SendCTPassThruV2                    Yes
HBA_RefreshInformation                  Yes
HBA_GetFcpTargetMapping                 Yes
HBA_SendScsiInquiry                     Yes
HBA_SendReportLuns                      Yes
HBA_SendReadCapacity                    Yes
HBA API Function                        Supported
HBA_GetPortStatistics                   Yes
HBA_ResetStatistics                     No
HBA_GetFcpPersistentBinding             No
HBA_GetEventBuffer                      No
HBA_SetRNIDMgmtInfo                     Yes
HBA_GetRNIDMgmtInfo                     Yes
HBA_SendRNID                            Yes
HBA_SendRNIDV2                          Yes
HBA_ScsiInquiryV2                       Yes
HBA_ScsiReportLUNsV2                    Yes
HBA_ScsiReadCapacityV2                  Yes
HBA_OpenAdapterByWWN                    Yes
HBA_RefreshAdapterConfiguration         Yes
HBA_GetVendorLibraryAttributes          Yes
HBA_GetFcpTargetMappingV2               Yes
HBA_SendRPL                             No
HBA_SendRPS                             No
HBA_SendSRL                             No
HBA_SendLIRR                            No
HBA_SendRLS                             Yes
HBA_RemoveCallback                      Yes
HBA_RegisterForAdapterEvents            Yes
HBA_RegisterForAdapterAddEvents         Yes
HBA_RegisterForAdapterPortEvents        Yes
HBA_RegisterForAdapterPortStatEvents    No
HBA_RegisterForTargetEvents             Yes
HBA_RegisterForAdapterLinkEvents        No
HBA_RegisterForAdapterTargetEvents      Yes
HBA_GetFC4Statistics                    No
HBA_GetFCPStatistics                    No
Appendix B
APPENDIX
Zone Types
Zoning is a function of the switch that allows segregation of devices by ports or world wide names (WWNs). You can create zones for a variety of reasons, such as security, simplicity, performance, or dedication of resources. The Solaris 10 OS software supports both industry-standard port-based and WWN-based zones. See your third-party vendor documentation for more information. There are two main types of zones:
Name Server Zones - NS zones use fabric protocols to communicate with Fibre Channel devices. Each NS zone defines which ports or devices receive NS information. The Sun StorEdge T3 arrays with firmware level 1.18.02 or higher and Sun StorEdge T3+ arrays with firmware level 2.1.04 or higher support loop (TL) port connections. The Sun StorEdge T3+ array with firmware level 2.1.04 or higher supports fabric connections. FL ports are supported only for Sun StorEdge L180/L700 tape libraries. Please refer to your switch documentation for more information.
Segmented Loop Zones - The Solaris 10 OS software does not support Segmented Loop (SL) zones or ports.
Port Types
TABLE C-1 lists port types for Sun switches.
TABLE C-1   Sun Switch Port Types

Port Type   Description
TL Ports    Storage devices connected to the Sun switch only.
FL Ports    Sun StorEdge L180/L700 tape libraries.
F Ports     Host bus adapters, storage devices.
E Ports     Cascaded switches acting as ISLs, which are configured initially in fabric port mode.
G Ports     General ports. Automatically configured to E or F ports to support switches or fabric devices. All switch ports should be set to G-port, except for tape libraries that do not support F-port; see GL ports below.
Gx Ports    Automatically configured to FL or G ports to support hosts or switches (Sun StorEdge Network 2 Gbit McDATA Sphereon 4300 and 4500 switches).
GL Ports    Automatically configured to FL, E, or F ports to support public loop, point-to-point, or switch devices. This port type is used only for setting L180/L700 tape libraries to FL.
When an array is configured, the host port is connected to an F port and the array is connected to an F or TL port on the switch. The TL (translation loop) port represents eight-bit addressing devices as 24-bit addressing devices and vice versa.
TABLE C-2   Port Type
Sun StorEdge T3 array Sun StorEdge T3+ array Sun StorEdge 39x0 array
fabric loop or fabric loop loop or fabric loop loop, public loop (FL port)
Sun StorEdge 69x0 array Sun StorEdge 6x20 array Sun StorEdge 99x0 array STK 9840b tape drive STK 9940b tape drive STK 9840 tape drive Sun StorEdge L180/L700 tape libraries
Although you can connect a Sun StorEdge T3 array through a TL port, the host bus adapter recognizes it as a fabric device. Sun StorEdge T3+ arrays and the Sun StorEdge 3510FC, 39x0, 69x0, and 99x0 storage arrays should be connected with F ports as 24-bit addressing devices for fabric connectivity. The STK 9840B tape drive requires F ports when connected to 2-Gbit switches. Sun StorEdge L5500/L6000 libraries are not connected to the SAN.
Appendix C
APPENDIX
Dynamic Reconfiguration
Dynamic reconfiguration (DR) works differently for non-fabric and fabric devices. With previously configured FC-AL devices, DR happens automatically upon addition or removal of devices to a host I/O port. With multipathing enabled, the Solaris Operating System host configures the devices as multipathing devices.
Unconfigure the fabric devices that were configured through host ports on the board.

If multipathing is not enabled, see "Unconfiguring Fabric Devices" on page 61. If multipathing is enabled, see "To Unconfigure a Fabric Device Associated With Multipathing Enabled Devices" on page 64.
1. Reconfigure the device through on-demand node creation.

2. Perform DR operations according to the instructions in the documentation for the host.
1. Add the system component and make it available to the host.

2. Reconfigure the device(s) through on-demand node creation.
If multipathing is not enabled, see "Configuring Device Nodes Without Multipathing Enabled" on page 33. If multipathing is enabled, see "Configuring Device Nodes With Multipathing Enabled" on page 40.
1. Unconfigure the fabric devices on fabric-connected host ports on the board to be detached.
2. Start the DR detach operations for the board. See the Sun Enterprise 10000 Dynamic Reconfiguration Configuration Guide.

3. Start the DR attach operations when the board is ready. See the Sun Enterprise 10000 Dynamic Reconfiguration Configuration Guide.

4. Configure any fabric devices on the attached boards. See the sections in Chapter 3 that explain how to recognize the storage devices on the host. On the newly attached board, the devices could be the same or completely new devices.
Appendix D
APPENDIX
Multipathing Troubleshooting
This appendix provides solutions to potential problems that may occur while running multipathing. This appendix contains the following sections:
"System Crashes During Boot Operations" on page 92
"System Crashes During or After a Boot Enable Operation" on page 92
"System Crashes During or After a Multipathing Boot Disable Operation" on page 94
"Sun StorEdge T3, 6x20, or 3900 Arrays Do Not Show" on page 96
"System Failed During Boot With scsi_vhci Attachment" on page 97
"Connected Sun StorEdge A5200 Arrays Appear Under Physical Paths in Format" on page 97
"System and Error Messages" on page 98
Then check the following:

1. If the boot device is a Sun StorEdge T3 RAID array LUN, did you change the mp_support setting to mpxio mode? If the change was not performed before the system reboot, make this change, then shut down and restart the system.

2. Are you specifying a different boot path manually? stmsboot has a dependency on the path through which the system boots. See Chapter 4, "Configuring Multipathing Support for Storage Boot Devices."

3. Did you have a per-port mpxio-disable entry corresponding to the boot controller port in the qlc.conf file? If so, remove the corresponding qlc.conf entry.

4. Boot the system from another disk or over the network, and mount the boot device on /mnt.
Verify that /mnt/etc/vfstab is the same as /var/tmp/vfstab.sav. Are all the boot controller device names shown as multipathing device names, compared with /var/tmp/vfstab.sav?

Verify that /mnt/etc/path_to_inst is the same as /var/tmp/path_to_inst.sav. This file should contain a unique ssd instance for each scsi_vhci/ssd@gGUID device.

The /mnt/etc/system file should have a rootdev entry to the boot device of scsi_vhci/ssd@gGUID. For example:
rootdev:/scsi_vhci/ssd@g20000004cf721119:a
The /mnt/kernel/drv/fp.conf file should have the boot controller port entry set to mpxio-disable="no".
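The file comparisons in this checklist can be scripted. As a sketch, ROOT points at the mounted boot disk (/mnt in Step 4), and the .sav copies are assumed to live under its var/tmp, matching the restore commands shown in Step 5; a scratch tree can be substituted to try the loop safely.

```shell
# Compare each live file on the mounted boot disk with its saved copy.
ROOT=${ROOT:-/mnt}
for f in vfstab path_to_inst; do
  if diff -q "$ROOT/etc/$f" "$ROOT/var/tmp/$f.sav" >/dev/null 2>&1; then
    echo "$f: matches saved copy"
  else
    echo "$f: differs or missing"
  fi
done
```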
5. To restore the system to the "stmsboot disabled" state, use the following commands:
# cp /mnt/var/tmp/vfstab.sav /mnt/etc/vfstab
# cp /mnt/var/tmp/system.sav /mnt/etc/system
# cp /mnt/var/tmp/dumpadm /mnt/etc/dumpadm.conf
Note that path_to_inst.sav is not restored because the system remembers the device names. Remove the per-port mpxio-disable entry corresponding to the boot controller port in the /mnt/kernel/drv/fp.conf file, and reboot your system.
# touch /reconfigure
# shutdown -g0 -y -i6
Appendix E
Then check the following:

1. If the boot device is a Sun StorEdge T3 RAID array LUN, did you change the mp_support setting to rw or none? If the change was not performed before the system reboot, make this change, then shut down and restart the system.

2. Are you specifying a different boot path manually? stmsboot has a dependency on the path through which the system boots. See Chapter 4, "Configuring Multipathing Support for Storage Boot Devices."

3. Did you have a per-port mpxio-disable entry corresponding to the boot controller port in the qlc.conf file? If so, remove the corresponding qlc.conf entry.

4. Boot the system from another disk or over the network, and mount the boot device on /mnt.
The /mnt/etc/vfstab file should indicate the device name that is not multipathing enabled instead of multipathing device names.

The /mnt/etc/system file should not have a rootdev entry.

The /mnt/kernel/drv/scsi_vhci.conf file should not have the boot controller port entry.
5. To restore the system to the stmsboot enabled state, use the following commands:
# cp /mnt/var/tmp/vfstab.sav /mnt/etc/vfstab
# cp /mnt/var/tmp/system.sav /mnt/etc/system
# cp /mnt/var/tmp/dumpadm /mnt/etc/dumpadm.conf
Note that path_to_inst.sav is not restored because the system remembers the device names. Add the per-port mpxio-disable entry corresponding to the boot controller port in the /mnt/kernel/drv/fp.conf file and reboot your system:
# touch /reconfigure
# shutdown -g0 -y -i6
If modinfo output does not show that the drivers are loaded, verify that the following binaries have been loaded in the proper directories:
If these binaries are not present in the specified directories, then multipathing software was not installed properly. Repeat the installation process, making sure you are logged in as superuser.
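A quick way to confirm the modules are loaded is to grep the modinfo listing for the driver names (fp, fctl, and scsi_vhci are assumed here as the usual Solaris multipathing stack; adjust the pattern to your configuration). The sample lines below stand in for real modinfo output:

```shell
# On a live host:  modinfo | grep -E 'fp|fctl|scsi_vhci'
# Demonstrated against sample modinfo-style lines; the count of
# matching lines should equal the number of expected modules.
printf '%s\n' \
  ' 45 13a5e30 19f10 189  1  fp (SunFC Port)' \
  ' 46 13c0000  8c10 190  1  fctl (SunFC FCTL)' \
  ' 50 13f0000  a2d0 191  1  scsi_vhci (SCSI vHCI Driver)' \
  | grep -c -E 'fp|fctl|scsi_vhci'
```

Note that a loose pattern like 'fp' can also match unrelated module names, so treat the count as a first check, not proof.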
While checking the array setup, confirm that the LPC version is also current. After the device is configured, it is advisable to perform a reconfiguration reboot or equivalent.
Connected Sun StorEdge A5200 Arrays Appear Under Physical Paths in Format
Consider the following:
The Sun StorEdge A5200 array is not fabric supported, but it can be connected using FC-AL.

The Solaris 10 OS software supports the Sun StorEdge A5200 and A3500FC arrays and FC tape devices.

SL zones contain SL ports only. SL ports are not supported in the 4.x release but were supported in earlier releases.

Check whether the Sun StorEdge A5200 class devices are connected to a multipathing-supported HBA.

Check whether multipathing is disabled in /kernel/drv/fp.conf.

Check whether the system is booting from the Sun StorEdge A5200 disks. When you boot from a multipathing device, all devices under the pHCI with the boot device are enumerated under scsi_vhci. Make sure the pHCI has mpxio-disable="no" set in the multipathing file /kernel/drv/qlc.conf or /kernel/drv/fp.conf.
When the automatic failback feature is enabled through the configuration file, you should receive the following message:
Auto-failback capability enabled through scsi_vhci.conf file.
Externally initiated failover of a Sun StorEdge T3 or T3+ array has been observed.
Waiting for externally initiated failover to complete
After an stmsboot enable (stmsboot -e) operation, the following message corresponding to the boot controller port (here, fp2) should not appear once the system has booted through the stmsboot-enabled device on the fp2 path:
mpxio: [ID 284422 kern.info] /pci@8,600000/SUNW,qlc@1/fp@0,0 (fp2) multipath capabilities disabled: controller for root filesystem cannot be multipathed.
Index

Numerics
24-bit addressing devices, 84
24-bit Fibre Channel addressing devices, 31

A
Ap_Id, 66
array port, 84

B
broadcasting, 54

C
cfgadm
  -al, 32
  -c configure, 31
  -fp, 40
  -l, 32
cfgadm -al, 39, 66
cfgadm -al -o show_FCP_dev, 40
cfgadm -c configure, 39
cfgadm(1M), 33
configuration
  examples, 70
  examples, luxadm command, 72, 75, 76
  fabric devices, 33
configuring
  HBAs, 13, 14
  third-party devices, 16

D
device
  activity, 64
  configuration, 33
  node creation, 31
  node discovery, 38
  removal, 64
  storage, 31
device node creation, 40
device node removal, 40
disabling ports, 10
drivers, 31

E
eight-bit addressing devices, 84
enabling ports, 10
error messages, 98

F
F_port, 84
fabric connectivity, 85
fabric-connected host ports, 32
fc-fabric, 41
Fibre Channel Protocol, 31

H
host port, 84

I
IPFC
  guidelines, 53
  SCSI LUN level information, 31

L
loading drivers, 31
LUN
  level information, 31
  recognition, 31
luxadm, 35

M
modinfo, 31
multicasting, 54

N
NAS, 53
NFS, 53

P
physical device representation, 42
port, 84
promiscuous mode, 54

S
show_scsi_lun, 40
SNDR, 53
snoop, 54
ssd driver, 31
st driver, 31
Sun StorEdge 39x0 series, 85
Sun StorEdge 69x0 series, 85
Sun StorEdge 99x0 series, 85
support contact, xii

T
TL port, 84
translation loop port, 84

U
unconfiguring
  a single path, 66
  devices, 61
  multipathed devices, 65
  multiple devices, 67
unconfiguring devices, 64

Z
zoning, 83