
Sun Cluster 3 Configuration Guide

The information contained in this document is considered Sun Proprietary/Confidential: Internal Use Only - Sun Employees and Authorized Resellers. Copyright laws must be respected and therefore this document must not be distributed to end user customers.

The information contained in this document may be used by the employees of Sun Authorized Resellers to create more informed positioning and proposals of Sun's products and strategies, and to present more convincing and forceful arguments when selling solutions on the Sun platform.

October 13, 2009

Contents

Preface ix

1. Sun Cluster 3 Introduction 1

2. Sun Cluster 3 Topologies 3


Clustered Pairs 4
N+1 (Star) 5
Pair + N 6
N*N (Scalable) 7
Diskless Cluster Configurations 8
Single-Node Cluster Configurations 9

3. Server Configuration 11
Boot Device for a Server 15
Heterogeneous Servers in Sun Cluster 15
Generic Server Configuration Rules 15
SPARC Servers 16
x64 Servers 25

4. Clusters with Heterogeneous Servers 35


Generic Rules 35



Mixing Different Types of Servers in a Cluster 36
Sharing Storage Among Different Types of Servers in a Cluster 36

5. Storage Overview 39
Local Storage (Single-Hosted Storage) 39
Heterogeneous Storage in Sun Cluster 39
Shared Storage (Multi-Hosted Storage) 40
Third-Party Storage 58

6. Fibre Channel Storage Support 59


SAN Configuration Support 59
Sun StorEdge A3500FC System 63
Sun StorEdge A5x00 Array 66
Sun StorEdge T3 Array (Single Brick) 74
Sun StorEdge T3 Array (Partner Pair) 78
Sun StorageTek 2540 RAID Array 81
Sun StorEdge 3510 RAID Array 83
Sun StorEdge 3511 RAID Array 88
Sun StorEdge 3910/3960 System 90
Sun StorEdge 6120 Array 92
Sun StorEdge 6130 Array 94
Sun StorageTek 6140 Array 97
Sun Storage 6180 Array 99
Sun StorEdge 6320 System 100
Sun StorageTek 6540 Array 103
Sun Storage 6580/6780 Arrays 105
Sun StorEdge 6910/6960 Arrays 107
Sun StorEdge 6920 System 109
Sun StorEdge 9910/9960 Arrays 111



Sun StorEdge 9970/9980 115
Sun StorageTek 9985/9990 119
Sun StorageTek 9985V/9990V 122

7. SCSI Storage Support 127


Netra st D130 Array 127
Netra st A1000 Array 128
Netra st D1000 Array 129
Sun StorEdge MultiPack 131
Sun StorEdge D2 Array 132
Sun StorEdge S1 Array 134
Sun StorEdge A1000 Array 137
Sun StorEdge D1000 Array 138
Sun StorEdge A3500 Array 140
Sun StorEdge 3120 JBOD Array 142
Sun StorEdge 3310 JBOD Array 148
Sun StorEdge 3310 RAID Array 153
Sun StorEdge 3320 JBOD Array 157
Sun StorEdge 3320 RAID Array 162

8. SAS Storage Support 167


Sun StorageTek 2530 RAID Array 167
Sun Storage J4200 and J4400 JBOD Arrays 169
Sun Storage J4400 JBOD Array 171

9. Ethernet Storage Support 173


Sun StorageTek 2510 RAID Array 173
Sun StorageTek 5000 NAS Appliance 175
Sun StorageTek 5210 NAS Appliance 177
Sun StorageTek 5220 NAS Appliance 177



Sun StorageTek 5310 NAS Appliance 178
Sun StorageTek 5320 NAS Appliance 178
Sun StorageTek 5320 NAS Cluster Appliance 178
Sun Storage 7000 Unified Storage System 179
Sun Storage 7110 Unified Storage System 181
Sun Storage 7210 Unified Storage System 181
Sun Storage 7310 Unified Storage System 181
Sun Storage 7410 Unified Storage System 181

10. Network Configuration 183


Cluster Interconnect 183
Public Network 202

11. Software Configuration 219


Solaris Releases 219
Application Services 222
Co-Existence Software 249
Restriction on Applications Running in Sun Cluster 250
Data Configuration 250
RAID in Sun Cluster 3 258
Support for Virtualized OS Environments 259

12. Managing Sun Cluster 3 263


Console Access 263
Cluster Administration and Monitoring 263

13. Sun Cluster 3 Ordering Information 265


Overview of Order Flow Chart 265
Order Flow Chart 266
Agents Edist Download Process 286



A. Campus Clusters 287
Number of Nodes 287

Campus Cluster Room Configurations 287


Applications 288

Guideline for Specs Based Campus Cluster Configurations 288

TrueCopy Support 291


SRDF Support 292

B. Sun Cluster Geographic Edition 295

C. Third-Party Agents 311

D. Revision History 313

Preface

This document is designed to be a high-level pre-sales guide. Given a set of customer requirements, the reader should be able to configure and order a Sun Cluster. This is a living document: as new applications are supported, new releases are qualified, and newer hardware is introduced, the document is updated. Please make sure you have the latest version. Unless otherwise noted, support for Sun Cluster 3 encompasses the Sun Cluster 3.0, Sun Cluster 3.1 and Sun Cluster 3.2 versions.


Related Documentation
TABLE P-1 Sun Cluster 3.0 User Documentation

Title Part Number

Sun Cluster 3.0 12/01 Software Installation Guide 816-2022
Sun Cluster 3.0 12/01 Hardware Guide 816-2023
Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide 816-2024
Sun Cluster 3.0 12/01 Data Services Developers Guide 816-2025
Sun Cluster 3.0 12/01 System Administration Guide 816-2026
Sun Cluster 3.0 12/01 Concepts 816-2027
Sun Cluster 3.0 Error Messages Manual 816-2028
Sun Cluster 3.0 12/01 Release Notes 816-2029
Sun Cluster 3.0 12/01 Release Notes Supplement 816-3753

TABLE P-2 Sun Cluster 3.1 User Documentation

Title Part Number

Sun Cluster 3.1 Software Installation Guide 817-6543
Sun Cluster 3.1 Hardware Administration Guide 817-0168
Sun Cluster 3.1 Data Services Planning and Administration Guide 817-6564
Sun Cluster 3.1 Data Services Developers Guide 817-6555
Sun Cluster 3.1 System Administration Guide 817-6546
Sun Cluster 3.1 Error Messages Guide 817-6558
Sun Cluster 3.1 Release Notes Supplement 816-3381

TABLE P-3 Sun Cluster 3.2 User Documentation

Title Part Number

Sun Cluster Software Installation Guide for Solaris OS 819-2970
Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS 819-2993
Sun Cluster Data Services Planning and Administration Guide for Solaris OS 819-2974
Sun Cluster Data Services Developer’s Guide for Solaris OS 819-2972
Sun Cluster System Administration Guide for Solaris OS 819-2971
Sun Cluster Concepts Guide for Solaris OS 819-2969
Sun Cluster Error Messages Guide for Solaris OS 819-2973
Sun Cluster 3.2 Release Notes for Solaris OS 819-6611

Notes
Sun Cluster 3 poses restrictions in addition to those imposed by the base hardware
and software components. Under no circumstances does Sun Cluster 3 relax the
restrictions imposed by the base hardware and software components. It is also
important to understand what we mean by REQUIRED and RECOMMENDED.

Configuration rules stated as REQUIRED must be followed to configure a valid Sun Cluster. It is REQUIRED that a configuration have no single point of failure that could bring the entire cluster down (for example, by using mirrored storage).

Configuration rules stated as RECOMMENDED need not necessarily be followed to configure a valid Sun Cluster. It is RECOMMENDED that a configuration have redundancy within the node, so that if a component fails, the backup component can be used within the node without initiating application failover to the backup node (for example, redundant network adapters in a NAFO group prevent application failover if the primary network adapter fails, with Sun Cluster 3.0).



CHAPTER 1

Sun Cluster 3 Introduction

Sun Cluster 3 extends Solaris with the cluster framework, enabling core Solaris services such as file systems, devices, and networks to be used seamlessly across a tightly coupled cluster while maintaining full Solaris compatibility for existing applications.

Key Benefits
■ Higher, near-continuous availability of existing applications based on Solaris services such as highly available file system and network services.
■ Integrates and extends the benefits of Solaris scalability to dotCOM application architectures by providing scalable and available file and network services for horizontal applications.
■ Ease of management of the cluster platform through a simple, unified management view of shared system resources.

A Typical Sun Cluster 3 Configuration


A typical Sun Cluster configuration has the following components.

Hardware Components
■ Servers with local storage (storage devices hosted by one node).
■ Shared storage (storage devices hosted by more than one node).
■ Cluster Interconnect for private communication among the cluster nodes.
■ Public Network Interfaces for connectivity to the outside world.
■ Administrative Workstation for managing the cluster.


In order to be a supported Sun Cluster configuration, the configuration of hardware components in a cluster must first be supported by the corresponding base product groups for each hardware component. For example, in order for a Sun Cluster configuration composed of two Sun Fire V880 servers connected to two StorEdge 3510 storage devices to be supported, the V880 and SE 3510 base product groups must support connecting a SE 3510 to a V880 in a standalone configuration.

Software Components
■ Solaris Operating Environment running on each cluster node.
■ Sun Cluster 3 software running on each cluster node.
■ Data Services - applications with agents and fault monitors - running on one or
more cluster nodes.
■ Cluster file system providing global access to the application data.
■ Sun Management Center running on the administrative workstation providing
ease of management.
FIGURE 1-1 A Typical Sun Cluster 3 Configuration (logical diagram showing cluster nodes, shared storage, cluster interconnect, public network, console access, and an administrative workstation with Sun Management Center; physical connections and number of units depend on the storage/interconnect used)

CHAPTER 2

Sun Cluster 3 Topologies

A topology is the connection scheme that connects the cluster nodes to the storage
platforms used in the cluster. Sun Cluster supports any topology that adheres to the
following guidelines:
■ Sun Cluster supports a maximum of sixteen nodes in a cluster, regardless of the
storage configurations that are implemented.
■ A shared storage device can connect to as many nodes as the storage device
supports.
■ There are common redundant interconnects between all nodes of the cluster.

Shared storage devices do not need to connect to all nodes of the cluster. However,
these storage devices must connect to at least two nodes.
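
These guidelines lend themselves to a quick sanity check. Below is a minimal Python sketch (not part of the original guide); the node names, storage names, and per-device host limits are illustrative assumptions, and real support limits must still be taken from the storage sections of this guide.

MAX_CLUSTER_NODES = 16  # overall Sun Cluster 3 limit, regardless of storage configuration

def check_topology(nodes, shared_storage):
    """nodes: list of node names.
    shared_storage: dict of device name -> (set of attached nodes, max hosts the device supports)."""
    problems = []
    if len(nodes) > MAX_CLUSTER_NODES:
        problems.append("more than 16 nodes in the cluster")
    for dev, (attached, dev_max) in shared_storage.items():
        if len(attached) < 2:
            problems.append(dev + ": shared storage must connect to at least two nodes")
        if len(attached) > dev_max:
            problems.append(dev + ": more nodes attached than the device supports")
        if not attached <= set(nodes):
            problems.append(dev + ": attached to a host that is not a cluster node")
    return problems

# Example: a two-node pair sharing one array that supports up to four hosts -> no problems.
print(check_topology(["node1", "node2"], {"array1": ({"node1", "node2"}, 4)}))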

While Sun Cluster does not require you to configure a cluster by using specific
topologies, the following topologies are described to provide the vocabulary to
discuss a cluster’s connection scheme. These topologies are typical connection
schemes:
■ “Clustered Pairs” on page 4
■ “N+1 (Star)” on page 5
■ “Pair + N” on page 6
■ “N*N (Scalable)” on page 7
■ “Diskless Cluster Configurations” on page 8
■ “Single-Node Cluster Configurations” on page 9

For more information on these topologies, see the definitions and diagrams that
follow.


Clustered Pairs
FIGURE 2-1 Clustered Pair Topology (logical diagram: physical connections and number of units depend on the storage/interconnect used)

Clustered Pair Features

■ Nodes are configured in pairs, i.e., possible configurations include two, four, six, or eight nodes.
■ Each pair has shared storage, connected to both nodes of the pair.
■ A maximum of 8 nodes is supported.

Clustered Pair Benefits

■ All nodes are part of the same cluster configuration, reducing cost and simplifying administration.
■ Since each pair has its own shared storage, no one node needs to be of significantly higher capacity than the others.
■ The cost of the cluster interconnect is spread across all pairs.


N+1 (Star)
FIGURE 2-2 N+1 Topology (logical diagram: physical connections and number of units depend on the storage/interconnect used)

N+1 Features
■ All shared storage is dual-hosted, and physically attached to exactly two cluster
nodes.
■ A single server is designated as backup for all other nodes. The other nodes are
called primary nodes.
■ A maximum of 8 nodes is supported.

N+1 Benefits
The cost of the backup node is spread over all primary nodes.


N+1 Limitations
The capacity of the backup node is the limiting factor in growing an N+1 cluster. For example, in a 4-node E6x00 cluster, growth is limited by the number of slots available for CPU/IO boards in the backup node. Hence, the backup node should be equal to or larger in capacity than the largest primary node.
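
The sizing rule above reduces to a simple comparison. The following Python sketch is illustrative only; the capacity values (for example, CPU counts or free board slots) are assumed numbers, not figures from this guide.

def backup_node_adequate(primary_capacities, backup_capacity):
    """True if the backup node is at least as large as the largest primary node."""
    return backup_capacity >= max(primary_capacities)

print(backup_node_adequate([8, 12, 16], backup_capacity=16))  # True
print(backup_node_adequate([8, 12, 16], backup_capacity=12))  # False: smaller than the largest primary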

Pair + N
FIGURE 2-3 Pair + N Topology (N = 2 here; logical diagram: physical connections and number of units depend on the storage/interconnect used)

Pair + N Features
■ All shared storage is dual hosted and physically attached to a single pair of
nodes.
■ A maximum of 16 SPARC nodes or 8 x64 nodes is supported.

Pair + N Benefits
Applications can access data from nodes which are not directly connected to the
storage device.


Pair + N Limitations
There may be heavy data traffic on the cluster interconnect.

N*N (Scalable)
FIGURE 2-4 N*N (Scalable) Topology (N = 4 here; logical diagram: physical connections and number of units depend on the storage/interconnect used)

N*N (Scalable) Features

■ Shared storage is connected to every node in the cluster.
■ All nodes have access to the same LUNs.
■ A maximum of 16 SPARC nodes or 8 x64 nodes is supported.
The maximum number of nodes sharing a LUN is specified by the shared storage
device. Refer to the respective shared storage device section for the maximum
number of nodes.


N*N (Scalable) Benefits

■ This topology enables support of up to 16-node Oracle Parallel Server/Real Application Cluster configurations. See “Oracle Real Application Cluster (OPS/RAC)” on page 245. OPS/RAC requires connectivity of shared storage to every node running an OPS/RAC instance.
■ Sun Cluster 3 allows failover of an HA/scalable application instance from any node to any other node in the cluster. Because this topology provides connectivity from every node to the shared storage device, after a failover the application can use the local path from the node to the storage device rather than going through the interconnect.

N*N (Scalable) Limitations


■ The maximum number of N*N nodes supported depends upon the shared storage
device. Some storage products support shared storage to only 2 nodes, others up
to 8 or more. Refer to the shared storage device sections for details.
■ The data service may have restrictions on the maximum nodes supported. See the
appropriate software sections for details.

Diskless Cluster Configurations


FIGURE 2-5 Diskless Cluster Configuration (N = 4 here; logical diagram: physical connections and number of units depend on the storage/interconnect used)

Diskless Cluster Features

■ Shared storage is not part of this configuration.
■ A maximum of 8 nodes is supported.


Diskless Cluster Benefits

This configuration allows clusters without shared storage to be supported. These configurations are ideal for deploying applications that require no shared storage.

Diskless Cluster Recommendations

For increased availability, the addition of a quorum device is recommended. The minimum number of nodes in a diskless cluster is two when a Quorum Server is used.

Single-Node Cluster Configurations

Single-Node Cluster Features


One node or domain comprises the entire cluster.

Single-Node Cluster Benefits

This configuration allows a single node to run as a functioning cluster deployment, offering users application management functionality, application restart functionality, and the ability to start a cluster with one node and grow the cluster over time. HA storage is not required with single-node clusters.

Single-Node Cluster Limitations


■ Requires Sun Cluster 3.1 version 10/03 or later
■ True failover is impossible due to the presence of only one node in the cluster

Single-Node Cluster Recommendations

Single-node clusters are ideal for users learning how to manage a cluster, observing cluster behavior (for agent development purposes), or beginning a cluster with the intention of adding nodes as time goes on.



CHAPTER 3

Server Configuration

Table 3-1 and Table 3-2 below list the servers supported with Sun Cluster 3. Other components, such as storage and network interfaces, may not be supported with all of these servers. Refer to the other chapters to ensure you have a supported Sun Cluster configuration.

TABLE 3-1 Supported SPARC Servers

Servers

Sun Blade T6300 Server Module


Sun Blade T6320 Server Module
Sun Blade T6340 Server Module
Sun Enterprise 10K
Sun Enterprise 220R
Sun Enterprise 250
Sun Enterprise 3x00
Sun Enterprise 420R
Sun Enterprise 450
Sun Enterprise 4x00
Sun Enterprise 5x00
Sun Enterprise 6x00
Sun Fire 12K
Sun Fire 15K
Sun Fire 280R
Sun Fire 3800
Sun Fire 4800


Sun Fire 4810


Sun Fire 6800
Sun Fire E20K
Sun Fire E25K
Sun Fire E2900
Sun Fire E4900
Sun Fire E6900
Sun Fire T1000
Sun Fire T2000
Sun Fire V120
Sun Fire V125
Sun Fire V1280
Sun Fire V210
Sun Fire V215
Sun Fire V240
Sun Fire V245
Sun Fire V250
Sun Fire V440
Sun Fire V445
Sun Fire V480
Sun Fire V490
Sun Fire V880
Sun Fire V890
Sun Netra 120
Sun Netra 1280
Sun Netra 1290
Sun Netra 20
Sun Netra 210
Sun Netra 240 AC/DC
Sun Netra 440 AC/DC


Sun Netra CT 900 CP3010


Sun Netra CT 900 CP3060
Sun Netra CT 900 CP3260
Sun Netra t 1120
Sun Netra t 1125
Sun Netra t 1400
Sun Netra t 1405
Sun Netra T1 AC200/DC200
Sun Netra T2000
Sun Netra T5220
Sun Netra T5440
Sun SPARC Enterprise M3000
Sun SPARC Enterprise M4000
Sun SPARC Enterprise M5000
Sun SPARC Enterprise M8000
Sun SPARC Enterprise M9000
Sun SPARC Enterprise T1000
Sun SPARC Enterprise T2000
Sun SPARC Enterprise T5120
Sun SPARC Enterprise T5140
Sun SPARC Enterprise T5220
Sun SPARC Enterprise T5240
Sun SPARC Enterprise T5440

TABLE 3-2 Supported x64 Servers

Servers

Sun Blade X6220


Sun Blade X6240
Sun Blade X6250


Sun Blade X6270 Server Module


Sun Blade X6440 Server Module
Sun Blade X6450 Server Module
Sun Blade X8400 Server Module
Sun Blade X8420 Server Module
Sun Blade X8440 Server Module
Sun Blade X8450 Server Module
Sun Fire V20z
Sun Fire V40z
Sun Fire X2100 M2
Sun Fire X2200 M2
Sun Fire X4100
Sun Fire X4100 M2
Sun Fire X4140
Sun Fire X4150
Sun Fire X4170
Sun Fire X4200
Sun Fire X4200 M2
Sun Fire X4240
Sun Fire X4250
Sun Fire X4270
Sun Fire X4275
Sun Fire X4440
Sun Fire X4450
Sun Fire X4540
Sun Fire X4600
Sun Fire X4600 M2
Sun Netra X4200 M2
Sun Netra X4250
Sun Netra X4450


Boot Device for a Server

Any local storage device that is supported by the base platform as a boot device can also be used as the boot device for a server in the cluster.
■ Boot-device LUN(s) must not be visible to other nodes in the cluster.
■ It is recommended to mirror the root disk.
■ Multipathed boot is supported with Sun Cluster when the drivers associated with SAN 4.3 (or later) are used in conjunction with an appropriate storage device (e.g., the local disks on a Sun Fire V880 or a SAN-connected Fibre Channel storage device).

Heterogeneous Servers in Sun Cluster


The rules that describe which servers can participate in the same cluster have changed. The server family definitions no longer apply. Instead, a new set of rules defines mixing at the level of the underlying networking and storage technologies. This change vastly increases the flexibility of configurations. Use the rules described in “Clusters with Heterogeneous Servers” on page 35 to find out which servers can be clustered together.

Generic Server Configuration Rules

These configuration rules apply to any type of server in a cluster:
■ The rule for the minimum number of CPUs per node has changed. It is no longer required to have a minimum of 2 CPUs per node; systems with only 1 CPU are now supported as cluster nodes.
■ Cluster node minimum-memory requirements (see the sketch after this list):
■ Releases prior to Sun Cluster 3.2 1/09: 512MB
■ Starting with Sun Cluster 3.2 1/09: 1GB
■ Alternate pathing (AP) is not supported with Sun Cluster 3.
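
As a quick illustration of the CPU and memory minimums above, here is a minimal Python sketch (not part of the original guide); the function names and the boolean release flag are assumptions made for the example.

def min_memory_mb(is_sc32_0109_or_later):
    """512MB before Sun Cluster 3.2 1/09; 1GB starting with Sun Cluster 3.2 1/09."""
    return 1024 if is_sc32_0109_or_later else 512

def node_meets_minimums(cpus, memory_mb, is_sc32_0109_or_later):
    # A single CPU is sufficient; the memory minimum depends on the Sun Cluster release.
    return cpus >= 1 and memory_mb >= min_memory_mb(is_sc32_0109_or_later)

print(node_meets_minimums(cpus=1, memory_mb=512, is_sc32_0109_or_later=False))  # True
print(node_meets_minimums(cpus=1, memory_mb=512, is_sc32_0109_or_later=True))   # False: 1GB needed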


SPARC Servers

Sun Blade 6000 and 6048


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Server Module in a Sun Blade 6000 or 6048 is used as a
node in the cluster:
■ The following Sun Blade 6000 and 6048 Server Modules are supported as cluster
nodes:
■ Sun Blade T6300 Server Module
■ Sun Blade T6320 Server Module
■ Sun Blade T6340 Server Module
■ Minimum Sun Cluster release: 3.1 8/05 (update 4)
■ Minimum Solaris release: See Blade Server Module product info

Sun Netra 20, t 1120/1125, t 1400/1405 and T1 AC200/DC200

These configuration rules apply in addition to the “Generic Server Configuration Rules” on page 15 when a Netra T1 AC200/DC200, t 1120/1125, t 1400/1405, or Netra 20 is used as a node in the cluster:
■ Netra servers allow the use of E1 PCI expander for provisioning extra PCI slots in
the system. While the use of the expander with these systems for any other
purpose is supported, its use for cluster connections (shared storage, cluster
interconnect, public network interfaces) is supported only with Netra T1
AC200/DC200.

Sun Netra 210


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Netra 210 is used as a node in the cluster:
■ Due to limited card support, only Diskless Cluster Configurations using the onboard Ethernet ports are currently supported. Additional card support is TBD.


Sun Netra CT 900 CP3010


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Netra CT 900 CP3010 is used as a node in the cluster:
■ Sun Cluster 3.2 is required.
■ Private interconnect auto discovery may not show all adapters. Private
interconnect information can be manually entered during Solaris Cluster install.
■ Oracle RAC is not supported as of August 2007.
■ Connection to storage should only be direct as storage switches are not supported
as of August 2007.

Sun Netra CT 900 CP3060


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Netra CT 900 CP3060 is used as a node in the cluster:
■ For all configurations, the built in network switches should be port VLANed or
tag VLANed to separate traffic on each of the cluster interconnects and for Sun
Cluster auto discovery to work properly during installation.
■ Restrictions associated with SANBlaze HBA:
■ Connection to storage should only be direct as storage switches are not
supported as of September 2007.
■ The default global_fencing setting in Sun Cluster 3.2 must not be changed from
its default value of “pathcount.” See Table 3-1 for additional storage
restrictions.
■ MPxIO is not supported as of September 2007 due to limitations of third party
HBAs and third party drivers.

The Sun HBAs do not have any specific restrictions.

Sun Netra CT 900 CP3260


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Netra CT 900 CP3260 is used as a node in the cluster:
■ For all configurations, the built in network switches should be port VLANed or
tag VLANed to separate traffic on each of the cluster interconnects and for Sun
Cluster auto discovery to work properly during installation.
■ Only the Sun Netra CP3200 ARTM-FC (XCP32X0-RTM-FC-Z) is supported for
shared storage connectivity. This also is a single point of failure.


Sun Netra T5220


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Netra T5220 is used as a node in the cluster:
■ As of January 2009, a limited set of NICs and storage devices is supported for Sun Cluster shared storage.

Sun Enterprise 3x00-6x00 and 10K


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Enterprise 3x00-6x00, 10K server/domain is used as a
node in the cluster:
■ Only SBus system boards are supported in Sun Enterprise 3x00, 4x00, 5x00, 6x00,
and 10K servers. As an exception, PCI I/O boards can be used for SCI-PCI
connectivity only.
■ For Sun Enterprise 3x00, 4x00, 5x00, and 6x00 servers, it is recommended to have
minimum 2 CPU/Memory boards and minimum 2 I/O boards in each server. For
Sun Enterprise 10K server, it is recommended to have minimum 2 System boards in
each domain.
■ For Sun Enterprise 3x00, 4x00, 5x00, 6x00, and 10K servers, it is recommended to
have the mirrored components of a storage set attach to different system boards
in a server/domain. This provides protection from the failure of a board.
■ For Sun Enterprise 3x00, 4x00, 5x00, 6x00, and 10K servers, when two network
interfaces are configured as part of a NAFO group, it is recommended to have each
interface attach to a separate system board in the server/domain.
■ Dynamic reconfiguration (DR) is now supported with Sun Enterprise 10K. This
support requires Sun Cluster 3.0 12/01, Solaris 8 10/01, and SSP 3.5.

Sun Fire T1000/T2000, Sun SPARC Enterprise T1000/T2000 and Netra T2000
These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Fire T2000 server is used as a node in the cluster:
■ Two-node Sun Fire T2000 and Sun Netra T2000 clusters installed with Solaris 10
11/06 (or later) and KU 118833-30 (or later) can configure e1000g cluster
interconnects using back-to-back cabling, otherwise Ethernet switches are
required. See Info Doc number 88928 for more info.
■ For the T1000 server, only SVM is supported. Support for VxVM is planned for a
future date.


■ Sun Cluster supports SCSI storage on the T2000 and requires two PCI-X slots for
HBAs. Some T2000 servers shipped with a disk controller that occupies one of the
PCI-X slots and some ship with a disk controller that is integrated onto the
motherboard. In order to have SCSI storage supported with Sun Cluster, it is
required to have two open PCI-X slots for SCSI HBAs. SCSI storage is not
supported with Sun Cluster and the T1000 because the T1000 has only one PCI-X
slot.
■ To configure internal disk mirroring in the T2000 servers, follow the special
instructions in the Sun Fire T2000 Server Product Notes. However, when the
procedure instructs you to install the Solaris OS, do not do so. Instead, return to
the cluster installation guide and follow those instructions for the Solaris OS
installation.

Please note that, in this config guide, the name “Sun Fire T1000” refers to the Sun
Fire T1000 or the Sun SPARC Enterprise T1000 server. Likewise, the name “Sun Fire
T2000” refers to the Sun Fire T2000 or the Sun SPARC Enterprise T2000 server.

Sun Fire V125


Operating System Requirements:
■ Solaris 8 beginning with HW 7/03 OS (with mandatory patch 109885-15)
■ Solaris 9 beginning with 9/04 OS
■ Solaris 10 OS

Solaris Cluster for this server may be configured differently for Sun Cluster 3.0, Sun Cluster 3.1 or Sun Cluster 3.2. Tagged VLANs are supported in SC3.1U4 and later releases. For servers with only 2 onboard Ethernet ports and no other Ethernet cards, tagged VLANs must be used.

For use of a single dual-port HBA, please follow the guidelines under “Shared Storage (Multi-Hosted Storage)” and the configuration requirements for its use.

Sun Fire V210 and V240


Sun Cluster 3.0/3.1 support for these servers may require a patch (depending on the
version of Solaris involved in the configuration).

For Sun Cluster 3.0 configurations:


■ No patch is required for Solaris 8 configurations.
■ For Solaris 9 support, patches 112563-10 and 114189-01 are required.

For Sun Cluster 3.1 configurations prior to 3.1 10/03 (update 1):


■ For Solaris 8 support, patch #113800-03 is required.


■ For Solaris 9 support, patch #113801-03 is required.

For Sun Cluster 3.1 10/03 (update 1) configurations or later:


■ No patch required.

Sun Fire V215 and V245


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Fire V215 or V245 is used as a node in the cluster:
■ To configure internal disk mirroring in the Sun Fire V215 and V245 servers, follow the special instructions in the Sun Fire V215/V245 Server Product Notes. However, when the procedure instructs you to install the Solaris OS, do not do so. Instead, return to the cluster installation guide and follow those instructions for the Solaris OS installation.
■ With Solaris 9, Sun Cluster support for the V215 and V245 requires KU patch 122300-10 and SAN 4.4.13 or later. Please note that Solaris 9 does not support PCIe adapters.

Sun Fire V440/Netra 440


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Fire V440 is used as a node in the cluster:
■ The hardware RAID 1 functionality of the Sun Fire V440 and Netra 440 requires
the following patches:
■ Solaris 8: No patch requirement
■ Solaris 9: Patch 113277-33 or later
■ Solaris 10: Patch 119374-02 or later

Sun Fire V445


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Fire V445 is used as a node in the cluster:
■ To configure internal disk mirroring in the Sun Fire V445 servers, follow the
special instructions in the Sun Fire V445 Server Product Notes. However, when
the procedure instructs you to install the Solaris OS, do not do so. Instead, return
to the cluster installation guide and follow those instructions for the Solaris OS
installation.


■ Both Solaris 9 and 10 are supported with Sun Cluster for the V445. Please note
that Solaris 9 supports only PCI-X (and not PCI-Express) cards.

Sun Fire V480/V880 and V490/V890


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Fire V480/V490 or a Sun Fire V880/V890 is used as a
node in the cluster:
■ Using MPxIO for multipathing to the local disks of a V480/V490 or V880/V890 is supported as long as the SAN 4.3 (or later) drivers are being used. All other multipathing solutions (such as DMP, DLM or SEDLM) to the local disks in a V480/V490 or a V880/V890 are NOT supported.
■ The “Hot-Plug” feature of the V880/V890 is supported. For more information on this feature, including a list of hot-pluggable cards, please see the Sun Fire V880 and the Sun Fire V890 Product Notes at http://www.sun.com/products-n-solutions/hardware/docs

Sun Fire V1280/Netra 1280, Netra 1290 and E2900


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Fire V1280/Netra 1280 is used as a node in the
cluster:
■ Dynamic reconfiguration of the SF V1280/Netra 1280’s CPU and memory boards
while the system remains online is supported with Sun Cluster 3. For more
information on this feature as well as its requirements, please consult the SF
V1280/Netra 1280 base product documentation.

Sun Fire 3800, 4800/4810 and 6800


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Fire 3800, 4800, 4810, or 6800 domain is used as a
node in the cluster:
■ For Sun Fire 3800, 4800, 4810, and 6800 servers, it is required to configure the Sun
Fireplane Interconnect System as two segments when the server is divided into 2
or more domains. For Sun Fire 6800 server, it is required that the segments be
implemented along the power boundary.
■ It is supported to have multiple domains from a server in the same cluster.
Clustering in a box - a cluster where all the nodes are domains from the same
server - is supported. However, there can be single points of failure for the whole
cluster in such configurations. For example, a 2-node cluster across two domains


of a Sun Fire 3800, or a cluster with primary and backup domain in same segment
of Sun Fire 6800 will have the common powerplane as the single point of failure.
A 2 node cluster on a single Sun Fire 6800, where each node is a domain in a
different segment implemented across the power boundary, is a good cluster-in-a-
box solution with appropriate fault isolation built-in.
■ It is recommended to have minimum 2 CPU/Memory board and minimum 2 I/O
assembly in each domain, whenever possible.
■ For the cluster interconnect, it is recommended that at least two independent
interconnects attach to different I/O assemblies in a domain. When all the
independent interconnects of a cluster interconnect attach to the same I/O
assembly, it is required that at least two independent interconnects attach to
different controllers in the I/O assembly.
■ It is recommended to have the mirrored components of a storage set attach to
different I/O assemblies in a domain. When the mirrored components of a storage
set attach to same the I/O assembly, it is recommended that they attach to different
controllers in the I/O assembly.
■ When two or more network interfaces are configured as part of a NAFO group, it
is recommended to have each interface attach to different I/O assemblies in a
domain. When the different interfaces of a NAFO group are attached to the same
I/O assembly, it is recommended that they attach to different controllers in the I/O
assembly.
■ Dynamic reconfiguration (DR) is now supported. This support requires Sun
Cluster 3.0 12/01 (or later). Jaguar or other multi-core CPUs require patch 111335-
26 (or later) or patch 117124-05 (or later).
■ XMITS PCI IO boards are supported.

Sun Fire E4900/E6900


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Fire Enterprise 4900 or 6900 domain is used as a node
in the cluster:
■ For Sun Fire Enterprise 4900 and 6900 servers, it is required to configure the Sun
Fireplane Interconnect System as two segments when the server is divided into 2
or more domains. For Sun Fire Enterprise 6900 server, it is required that the
segments be implemented along the power boundary.
■ It is supported to have multiple domains from a server in the same cluster.
Clustering in a box - a cluster where all the nodes are domains from the same
server - is supported. However, there can be single points of failure for the whole
cluster in such configurations. For example, a 2-node cluster across two domains
of a Sun Fire 3800, or a cluster with primary and backup domain in same segment
of Sun Fire Enterprise 6900 will have the common powerplane as the single point
of failure. A 2 node cluster on a single Sun Fire Enterprise 6900, where each node
is a domain in a different segment implemented across the power boundary, is a
good cluster-in-a-box solution with appropriate fault isolation built-in.


■ It is recommended to have a minimum of 2 CPU/Memory boards and a minimum of 2 I/O assemblies in each domain, whenever possible.
■ For the cluster interconnect, it is recommended that at least two independent
interconnects attach to different I/O assemblies in a domain. When all the
independent interconnects of a cluster interconnect attach to the same I/O
assembly, it is required that at least two independent interconnects attach to
different controllers in the I/O assembly.
■ It is recommended to have the mirrored components of a storage set attach to
different I/O assemblies in a domain. When the mirrored components of a storage
set attach to the same I/O assembly, it is recommended that they attach to different
controllers in the I/O assembly.
■ When two or more network interfaces are configured as part of a NAFO group, it
is recommended to have each interface attach to different I/O assemblies in a
domain. When the different interfaces of a NAFO group are attached to the same
I/O assembly, it is recommended that they attach to different controllers in the I/O
assembly.
■ Dynamic reconfiguration (DR) is now supported. This support requires Sun
Cluster 3.0 12/01 (or later). Jaguar or other multi-core CPUs require patch 111335-
26 (or later) or patch 117124-05 (or later).
■ XMITS PCI IO boards are supported.

Sun Fire 12K, 15K, E20K and E25K


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun Fire 12K/15K domain is used as a node in the cluster:
■ It is supported to have multiple domains from a server in the same cluster.
Clustering in a box - a cluster where all the nodes are domains from the same
server - is supported.
■ It is recommended to have minimum 2 CPU/Memory board and minimum 2 I/O
boards in each domain.
■ For the cluster interconnect, it is recommended that at least two independent
interconnects attach to different I/O boards in a domain.
■ It is recommended to have the mirrored components of a storage set attach to
different I/O boards in a domain.
■ When two or more network interfaces are configured as part of a NAFO group, it
is recommended to have each interface attach to different I/O boards in a domain.
■ Dynamic reconfiguration (DR) is supported. This support requires Sun Cluster 3.0 12/01 (or later). Jaguar or other multi-core CPUs require patch 111335-26 (or later) or patch 117124-05 (or later).
■ Slot 1 Dynamic Reconfiguration is supported. This allows SF 12k/15ks that are
clustered to be able to dynamically reconfigure the boards in Slot 1 while the
systems remain online. For Solaris 8 support, Solaris 8 2/02 and a SMS version 1.3
or higher is required. For Solaris 9 Support, Solaris 9 4/03 and patch #114271-02 is
required. For more information, please see Sun Product Intro #Q3FY2003-30I.
■ XMITS PCI IO boards are supported.


Sun SPARC Enterprise M4000, M5000, M8000 and M9000
These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun SPARC Enterprise M4000, M5000, M8000 or M9000 is
used as a node in the cluster:
■ It is supported to have multiple domains from a server in the same cluster.
Clustering in a box - a cluster where all the nodes are domains in the same server
- is supported. However, there can be a single point of failure for the whole
cluster in such configurations.
■ It is recommended to have a minimum of 2 CPU/Memory boards in a domain
whenever possible.
■ It is recommended to have separate IO Units per cluster node (domain) whenever
possible. It is possible to create cluster nodes that share the same IO Unit and this
is supported. However, there can be a single point of failure for the whole cluster
in such configurations.
■ It is recommended to have a minimum of 2 IO Units in a domain whenever
possible.
■ Dynamic reconfiguration is supported.

Sun SPARC Enterprise T5120 and T5220


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun SPARC Enterprise T5120 or T5220 is used as a node in
the cluster:
■ For LDom configurations, Sun Cluster is supported in the control domain only.
■ For nxge drivers, please refer to the base product documentation for the proper /etc/system parameters.

Sun SPARC Enterprise T5140 and T5240


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun SPARC Enterprise T5140 or T5240 is used as a node in
the cluster:
■ For LDom configurations, Sun Cluster is supported in the control domain only.
■ For nxge drivers, please refer to the base product documentation for the proper /etc/system parameters.
■ As of April 2008, InfiniBand HCAs/switches are not yet supported with the T5140 or T5240.
■ As of April 2008, SCSI storage is not yet enabled with the T5140 or T5240.


Sun SPARC Enterprise T5440


These configuration rules apply in addition to the “Generic Server Configuration
Rules” on page 15 when a Sun SPARC Enterprise T5440 is used as a node in the
cluster:
■ Please refer to the “Support for Virtualized OS Environments” section in the Software chapter for LDoms support.

x64 Servers
Please note that x64 servers require the following patches: 120501-04, 120490-01, and 120498-01.

Sun Blade 6000 and 6048

TABLE 3-3 Sun Blade 6000 and 6048 Support Matrix

Solaris: Starting with Solaris 10 11/06 (Sun Blade X6240, X6440: starting with Solaris 10 5/08; Sun Blade X6270: starting with Solaris 10 10/08)
Solaris Cluster: Starting with Solaris Cluster 3.1 8/05
Supported Server Modules: Sun Blade X6220, Sun Blade X6240, Sun Blade X6250, Sun Blade X6270, Sun Blade X6440 (excludes ‘Barcelona’ processors, e.g., model 8354), Sun Blade X6450
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-5, “Cluster Interconnects: PCI-E ExpressModule Network Interfaces for x64 Servers,” on page 198 and Table 10-15, “Public Network: PCI-E ExpressModule Network Interfaces for x64 Servers,” on page 213.


Sun Blade 8000

TABLE 3-4 Sun Blade 8000 Support Matrix

Solaris: Starting with Solaris 10 6/06 (Sun Blade X8450: starting with Solaris 10 8/07)
Supported Server Modules: Sun Blade X8400, Sun Blade X8420, Sun Blade X8440, Sun Blade X8450
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: X1028A-Z, X4731A, X5040A-Z, X7282A-Z, X7283A-Z, X7284A-Z, X7287A-Z, SG-XPCIE2FCGBE-Q-Z, SG-XPCIE2FCGBE-E-Z
Infiniband Interconnect: X1288A-Z

Sun Blade 8000 P

TABLE 3-5 Sun Blade 8000 P Support Matrix

Solaris: Starting with Solaris 10 6/06
Supported Server Modules: Sun Blade X8400, Sun Blade X8420, Sun Blade X8440, Sun Blade X8450 (starting with Solaris 10 8/07)
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: X5040A-Z


Sun Fire V20z

TABLE 3-6 Sun Fire V20z Support Matrix

Solaris: Starting with Solaris 9 9/04 (the onboard hardware RAID disk mirroring of the V20z requires Solaris 9 patch 119443-02 or later)
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64 Servers,” on page 194 and Table 10-12, “Public Network: PCI Network Interfaces for x64 Servers,” on page 209.
Special Notes: The Sun Fire V20z currently requires two X4422A cards. The earlier V20z revisions only support a single X4422A; these are the A55 marketing numbers (380-0979 chassis assembly/FRU) and A55*L marketing numbers (380-1168 chassis assembly/FRU). Later revisions, e.g., the 380-1194 chassis assembly/FRU using marketing number A55*M, are supported. For more information, see the Sun Fire V20z Server Just the Facts, SunWIN token #400844.

Sun Fire V40z

TABLE 3-7 Sun Fire V40z Support Matrix

Solaris: Starting with Solaris 9 4/04 (the onboard hardware RAID disk mirroring of the V40z requires Solaris 9 patch 119443-02 or later, or Solaris 10 patch 119375-02 or later)
Storage and HBAs: Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards: See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64 Servers,” on page 194 and Table 10-12, “Public Network: PCI Network Interfaces for x64 Servers,” on page 209.


Sun Fire X2100 M2 and X2200 M2

TABLE 3-8 Sun Fire X2100 M2 and X2200 M2 Servers

Solaris Starting with Solaris 10 11/06


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4100 and X4200

TABLE 3-9 Sun Fire X4100/X4200 Support Matrix

Solaris Starting with Solaris 10 HW1


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4100 M2

TABLE 3-10 Sun Fire X4100 M2 Support Matrix

Solaris Starting with Solaris 10 HW1


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.


Sun Fire X4140 and X4240

TABLE 3-11 Sun Fire X4140 and X4240 Support Matrix

Solaris Starting with Solaris 10 8/07


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4150

TABLE 3-12 Sun Fire X4150 Support Matrix

Solaris Starting with Solaris 10 8/07


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4170

TABLE 3-13 Sun Fire X4170 Support Matrix

Solaris Starting with Solaris 10 10/08


Sun Cluster Starting with Sun Cluster 3.1 8/05
Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.


Sun Fire X4200 M2

TABLE 3-14 Sun Fire X4200 M2 Support Matrix

Solaris Starting with Solaris 10 HW1


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4250

TABLE 3-15 Sun Fire X4250 Support Matrix

Solaris Starting with Solaris 10 8/07


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4270

TABLE 3-16 Sun Fire X4270 Support Matrix

Solaris Starting with Solaris 10 10/08


Sun Cluster Starting with Sun Cluster 3.1 8/05
Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.


Sun Fire X4275

TABLE 3-17 Sun Fire X4275 Support Matrix

Solaris Starting with Solaris 10 10/08


Sun Cluster Starting with Sun Cluster 3.1 8/05
Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4440

TABLE 3-18 Sun Fire X4440 Support Matrix

Solaris Starting with Solaris 10 8/07


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4450

TABLE 3-19 Sun Fire X4450 Support Matrix

Solaris Starting with Solaris 10 8/07


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.


Sun Fire X4540

TABLE 3-20 Sun Fire X4540 Support Matrix

Solaris Starting with Solaris 10 8/07


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4600

TABLE 3-21 Sun Fire X4600 Support Matrix

Solaris Starting with Solaris 10 1/06


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Fire X4600 M2

TABLE 3-22 Sun Fire X4600 M2 Support Matrix

Solaris Starting with Solaris 10 1/06


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.


Sun Netra X4200 M2

TABLE 3-23 Sun Netra X4200 M2 Support Matrix

Solaris Starting with Solaris 10 11/06


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Netra X4250

TABLE 3-24 Sun Netra X4250 Support Matrix

Solaris Starting with Solaris 10 8/07


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.

Sun Netra X4450

TABLE 3-25 Sun Netra X4450 Support Matrix

Solaris Starting with Solaris 10 8/07 + patches


Storage and HBAs Please see Chapter 5, Storage Overview for Sun storage products.
Network Cards See Table 10-2, “Cluster Interconnects: PCI Network Interfaces for x64
Servers,” on page 194 and Table 10-12, “Public Network: PCI Network
Interfaces for x64 Servers,” on page 209.



CHAPTER 4

Clusters with Heterogeneous Servers

Note – The rules that describe which servers can participate in the same cluster have changed. The server family definitions no longer apply. Instead, a new set of rules defines mixing at the level of the underlying networking and storage technologies. This change vastly increases the flexibility of configurations. Use the rules described below to find out which servers can be clustered together.

Generic Rules
These rules must be followed while configuring clusters with heterogeneous servers:
■ Cluster configurations must comply with the topology definitions specified in
“Sun Cluster 3 Topologies” on page 3.
■ Cluster configurations must comply with the support matrices listed in other
sections (for example, “Server Configuration” on page 11, “Storage Overview” on
page 39, and “Network Configuration” on page 183) of the configuration guide.
■ If there are any restrictions placed on server/storage connectivity or
server/network connectivity by the base platforms and the individual
networking/storage components, then these restrictions override the Sun Cluster
configuration rules.
■ SCSI storage can be connected to a maximum of two nodes simultaneously.
■ Fibre Channel storage can be connected to a maximum of four nodes simultaneously (with the exception of SE 99x0 storage, which can be connected to a maximum of 8 nodes simultaneously). A sketch of these limits follows this list.
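
The node-count limits in the last two rules can be captured in a small helper. This Python sketch is illustrative only; the storage-type labels and the is_se99x0 flag are assumptions made for the example.

def max_nodes_for_shared_storage(storage_type, is_se99x0=False):
    """Maximum simultaneous hosts per the generic rules above."""
    if storage_type == "scsi":
        return 2
    if storage_type == "fc":
        return 8 if is_se99x0 else 4      # SE 99x0 is the exception at 8 nodes
    raise ValueError("unknown storage type: " + storage_type)

print(max_nodes_for_shared_storage("scsi"))                  # 2
print(max_nodes_for_shared_storage("fc"))                    # 4
print(max_nodes_for_shared_storage("fc", is_se99x0=True))    # 8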


Mixing Different Types of Servers in a Cluster
All nodes in the cluster share the cluster interconnect. Hence, whether two or more
servers can participate in the same cluster is completely defined by the technology
used for the cluster interconnect. Note that these servers may or may not be able to
share a storage device (please check the storage configuration section of the
configuration guide for more information). Cluster interconnects between the
various nodes in a cluster must use the same interconnect technology (for example,
Fast Ethernet, Gigabit Ethernet, or SCI). For Ethernet, all interconnects must
operate at the same speed.

Sharing Storage Among Different Types of Servers in a Cluster
Whether a storage device can be shared among different types of servers in a cluster
is dictated by the underlying technology used by the storage device, any storage
networking infrastructure in between, and the I/O bus type in the servers.

Parallel SCSI Devices


The rules for sharing Parallel SCSI devices among different cluster nodes are:
■ Node I/O bus types (SBus, PCI, cPCI) cannot be mixed on the same SCSI bus.
■ Similar SCSI technology (SE SCSI, HVD, LVD, etc.) must be used on the same
SCSI bus. The groupings that define similar SCSI technology are listed in
Table 4-1, “SCSI Interface Groupings,” on page 37.


Table 4-1, “SCSI Interface Groupings,” on page 37 lists the SCSI interfaces
supported in different servers in Sun Cluster 3, grouped by the underlying SCSI
technology. Each grouping also defines the mixing scope of the servers using these
interfaces in Sun Cluster 3.

TABLE 4-1 SCSI Interface Groupings

Group                         SCSI Interfaces in the Group

40MB/s SE Ultra SCSI - PCI    Netra T1 AC200/DC200 onboard SCSI
                              Netra t 1120/1125 onboard SCSI
                              Netra t 1400/1405 onboard SCSI
                              Netra 20 onboard SCSI
                              1032A SunSwift PCI
                              6540A Dual-channel single-ended UltraSCSI [US2S]
HVD SCSI - SBus               1065A SBus-to-differential Ultra SCSI [UDWIS/S]
HVD SCSI - PCI                6541A Dual-channel differential UltraSCSI [UD2S]
320MB/s LVD SCSI - PCI        SG-(X)PCI1SCSI-LM320
                              SG-(X)PCI1SCSI-LM320-Z
                              SG-XPCI2SCSI-LM320 [Jasper 320]
                              SG-XPCI2SCSI-LM320-Z
                              Sun Fire V440 onboard SCSI
                              Netra 440 onboard SCSI
160MB/s LVD SCSI - PCI        6758A StorEdge PCI Dual Ultra3 SCSI [Jasper]
80MB/s LVD SCSI - PCI         2222A Dual FE + Dual SCSI Adapter [Cauldron]

■ Example 1: 40MB/s single-ended Ultra SCSI is supported only on PCI cluster
nodes (given the set of servers supported with Sun Cluster 3) and allows mixing of
any HBAs within this group.
■ Example 2: Cluster configurations do not support mixing HVD across node bus
types, for example an SBus 1065A HBA with a PCI 6541A HBA.
■ Example 3: Cluster configurations do not support mixing LVD SCSI speeds (for
example, an 80MB/s HBA with a 160MB/s HBA) or SCSI types (for example, a
40MB/s SE SCSI HBA with a 160MB/s LVD SCSI HBA).
■ Example 4: Four V480 nodes in a clustered-pair topology with four A1000 arrays.
This configuration was supported under the previous rules as well.
■ Example 5: Two V880s and one 6800 with two S1 arrays, using the Pair + N
topology with the 6800 as the + N node.
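
To make the grouping rule concrete, the sketch below expresses a subset of Table 4-1
as a small data structure and rejects a shared SCSI bus whose HBAs fall into
different groups (the groups already separate SBus from PCI, so bus-type mixing is
caught as well). This is an illustrative aid only, not a supported tool, and it lists
only a few of the interfaces from the table.

# Illustrative Python sketch: a subset of Table 4-1, keyed by SCSI technology group.
SCSI_GROUPS = {
    "40MB/s SE Ultra SCSI - PCI": {"1032A", "6540A", "Netra 20 onboard SCSI"},
    "HVD SCSI - SBus": {"1065A"},
    "HVD SCSI - PCI": {"6541A"},
    "320MB/s LVD SCSI - PCI": {"SG-XPCI2SCSI-LM320", "Sun Fire V440 onboard SCSI"},
    "160MB/s LVD SCSI - PCI": {"6758A"},
    "80MB/s LVD SCSI - PCI": {"2222A"},
}

def group_of(hba):
    for group, members in SCSI_GROUPS.items():
        if hba in members:
            return group
    raise ValueError("unknown interface: " + hba)

def bus_is_supported(hbas_on_bus):
    """All HBAs sharing one SCSI bus must come from the same group."""
    return len({group_of(hba) for hba in hbas_on_bus}) == 1

print(bus_is_supported(["1065A", "6541A"]))   # False: Example 2, HVD SBus vs. HVD PCI
print(bus_is_supported(["6758A", "6758A"]))   # True: both HBAs are 160MB/s LVD PCI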

Fibre-Channel Host-Connected Storage


This section describes the general rules for Fibre Channel connected storage.


■ Storage, HBA, server and other component requirements take precedence over
any Sun Cluster rules.
■ Both SAN and direct-connected FC storage are supported.
■ Node I/O bus type mixing is allowed, e.g., PCIe and PCI-X, or SBus and PCI.
■ FC speeds may be mixed.
■ Connectivity between the nodes of a cluster and shared data must use logically
separate paths; physically separate paths are recommended. “Paths” in this context
refer, for example, to the connections to the submirrors of an SVM mirrored volume
or to the MPxIO paths to a highly available RAID volume.

Please refer to Chapter 5, Storage Overview, for additional Sun Cluster details,
including any exceptions to the above rules.

Also refer to the specific storage and SAN product documentation for product
details.

Note on Multipathing
The use of multipathed versus non-multipathed connections must be consistent across
all nodes logically connected to a shared storage device. For example, if MPxIO is
used to connect the shared storage to one node, MPxIO must also be used to connect
that storage to the other cluster nodes. Similarly, if non-multipathed connections
are used, they must be used on all logically connected cluster nodes.
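
One informal way to catch inconsistent multipathing settings before installation is
to compare the relevant driver configuration across nodes. The sketch below is
illustrative only: it assumes the Solaris 10-era convention of an mpxio-disable
property in /kernel/drv/fp.conf, and it assumes each node's copy of that file has
already been gathered locally as fp.conf.<node>; the node names and file paths are
placeholders.

# Illustrative Python sketch: verify that the MPxIO setting matches on every node.
import re

NODES = ["node1", "node2"]          # placeholder node names

def mpxio_setting(path):
    """Return 'yes' or 'no' from the mpxio-disable property, or None if absent."""
    with open(path) as conf:
        for line in conf:
            match = re.match(r'\s*mpxio-disable\s*=\s*"(yes|no)"', line)
            if match:
                return match.group(1)
    return None

settings = {node: mpxio_setting("fp.conf." + node) for node in NODES}
if len(set(settings.values())) != 1:
    print("WARNING: inconsistent MPxIO settings across nodes:", settings)
else:
    print("MPxIO setting is consistent across nodes:", settings)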



CHAPTER 5

Storage Overview

Any storage device (a single disk, tape, or CD-ROM, or an array enclosure
consisting of several disks) connected to the nodes in a Sun Cluster is a global
device, accessible by all the cluster nodes through the global namespace.

Any storage inside a node, including internal disks and tape storage, is local storage
and cannot be shared.

Local Storage (Single-Hosted Storage)


Local storage consists of storage devices connected to only one node. Such storage
devices are not considered highly available. They can be used for:
■ Setting up root, usr, swap, and /globaldevices.
■ Hosting application binaries and configuration files.
■ Storing anything other than the application data.
Any storage device, along with its cable, junction, and host bus adapter, that is
supported by the base server can be used for local storage in Sun Cluster.

Heterogeneous Storage in Sun Cluster


All storage devices shown as supported for shared storage in “Fibre Channel Storage
Support” on page 59 and “SCSI Storage Support” on page 127 can be used in any
combination in Sun Cluster 3. No restrictions are imposed on combinations of shared
storage in Sun Cluster beyond those imposed by the interoperability of the
individual storage devices themselves.


Shared Storage (Multi-Hosted Storage)


Shared storage consists of storage devices connected to more than one node such
that one or more LUNs or volumes are accessible from each connected cluster node.
Such devices are considered highly available. They can be used for:
■ Hosting application data
■ Hosting application binaries, configuration files
■ Setting up quorum devices

Please consult each storage device’s section for maximum node connectivity and
other guidelines. The following are general guidelines:
■ Some parallel SCSI devices can be split into two functionally separate devices. See
each specific storage device for details.
■ Parallel SCSI devices can only share a LUN or volume between two nodes in the
same cluster.
■ Fibre Channel (FC) devices can share a LUN or volume between two or more
cluster nodes within the same cluster.
■ In some cases, FC devices may present different LUNs to different clusters or non-
clustered nodes.
■ FC devices may be directly connected to FC switches, to HBAs, or attached
directly to cluster nodes. See the specific storage device in question for
restrictions.
■ Sun Cluster highly recommends that each submirror of a mirrored volume, or each
path of a multipath I/O connection, use a separate host adapter card and
controller chip.
■ Sun Cluster now supports the use of a single dual-port HBA as the only adapter
connecting a node to shared storage devices, in supported configurations. Note
that using a single adapter decreases the availability and reliability of the
cluster; although two HBAs are not required, they are still strongly recommended.

Storage products are supported with a specific set of servers, as listed in the tables
later in this chapter. See Table 5-1, “FC Storage for SPARC Servers,” on page 42 and
Table 5-4, “SCSI Storage for SPARC Servers,” on page 50.

For a storage configuration (storage device, HBA, switch) to be considered for
support with Sun Cluster, it MUST be supported by Network Storage. If Network
Storage does not support a given configuration, then Sun Cluster cannot support the
configuration.


Configuration Requirements for Use of a Single Dual-Port HBA for Storage Connectivity
■ Solaris Volume Manager and Solstice Disk Suite only
■ Dual-String Mediators are not supported
■ Disksets must have a minimum of 2 disks
■ Storage products are supported with a specific set of servers, as listed in the tables
later in this chapter. See Table 5-1, “FC Storage for SPARC Servers,” on page 42
and Table 5-4, “SCSI Storage for SPARC Servers,” on page 50.


Quorum Devices in Sun Cluster


Sun StorEdge A3500 and A3500FC arrays cannot be used as quorum devices. Except
for these arrays, all supported shared storage devices can act as quorum devices.

If you use Sun StorEdge A3500 or A3500FC arrays for shared storage in your cluster,
you must use a different device if you need a quorum device.

Supported Fibre Channel (FC) Storage Devices


Table 5-1 lists the FC storage devices supported with Sun Cluster and the server
types that can share these storage devices in clusters. Once you have determined
whether your server and storage combination is supported, refer to the storage
details section to find other supported components. If you have mixed types of
servers in your cluster, refer to “Sharing Storage Among Different Types of Servers
in a Cluster” on page 36 for additional restrictions.

TABLE 5-1 FC Storage for SPARC Servers


Server Sun StorEdge Arrays
Sun StorageTek 2540 RAID Array

Sun StorEdge 3510 RAID Array

Sun StorEdge 3511 RAID Array

Sun StorEdge 3910/3960 System

Sun StorEdge 6120 Array

Sun StorEdge 6130 Array

Sun StorageTek 6140 Array

Sun Storage 6180 Array

Sun StorEdge 6320 System

Sun StorageTek 6540 Array

Sun Storage 6580/6780 Arrays

Sun StorEdge 6910/6960 Arrays

Sun StorEdge 6920 System

Sun StorEdge 9910/9960 Arrays

Sun StorEdge 9970/9980

Sun StorageTek 9985/9990

Sun StorageTek 9985V/9990V


Sun Blade T6300 • • • • • • • • • • • • • •

Sun Blade T6320 • • • • • • • • • • • •

Sun Blade T6340 • • • • • • • •

Sun Enterprise 10Ka • • • • • • • • • • • •

Sun Enterprise 220R • • • • • • • • • • • • • •

Sun Enterprise 250 • • • • • • • • • •

Sun Enterprise 3000a • • • • • • • • • •

Sun Enterprise 3500a • • • • • • • • • • • •

Sun Enterprise 4000a • • • • • • • • • •

Sun Enterprise 420R • • • • • • • • • • • • • •

Sun Enterprise 450 • • • • • • • • • • • • • •

Sun Enterprise 4500a • • • • • • • • • • • •

Sun Enterprise 5000a • • • • • • • • • •

Sun Enterprise 5500a • • • • • • • • • • • •

Sun Enterprise 6000a • • • • • • • • • •

Sun Enterprise 6500a • • • • • • • • • • • •

Sun Fire 12K • • • • • • • • • • • • • •

Sun Fire 15K • • • • • • • • • • • • • •

Sun Fire 280R • • • • • • • • • • • • • •

Sun Fire 3800 • • • • • • • • • • • •


TABLE 5-1 FC Storage for SPARC Servers (Continued)


Server Sun StorEdge Arrays

Sun StorageTek 2540 RAID Array

Sun StorEdge 3510 RAID Array

Sun StorEdge 3511 RAID Array

Sun StorEdge 3910/3960 System

Sun StorEdge 6120 Array

Sun StorEdge 6130 Array

Sun StorageTek 6140 Array

Sun Storage 6180 Array

Sun StorEdge 6320 System

Sun StorageTek 6540 Array

Sun Storage 6580/6780 Arrays

Sun StorEdge 6910/6960 Arrays

Sun StorEdge 6920 System

Sun StorEdge 9910/9960 Arrays

Sun StorEdge 9970/9980

Sun StorageTek 9985/9990

Sun StorageTek 9985V/9990V


Sun Fire 4800 • • • • • • • • • • • • • •

Sun Fire 4810 • • • • • • • • • • • • • •

Sun Fire 6800 • • • • • • • • • • • • • •

Sun Fire E20K • • • • • • • • • • • • •

Sun Fire E25K • • • • • • • • • • • • •

Sun Fire E2900 • • • • • • • • • • • • • • •

Sun Fire E4900 • • • • • • • • • • • • • • •

Sun Fire E6900 • • • • • • • • • • • • • • •

Sun Fire T1000 • • • • • • • • • • • • • • • •

Sun Fire T2000 • • • • • • • • • • • • • • • •

Sun Fire V120


Sun Fire V125 • • •

Sun Fire V1280 • • • • • • • • • • • • • • •

Sun Fire V210 • • • •

Sun Fire V215 • • • • • • • • • • • •

Sun Fire V240 • • • • • • • • • • • •

Sun Fire V245 • • • • • • • • • • • •

Sun Fire V250 • • • •

Sun Fire V440 • • • • • • • • • • • • •

Sun Fire V445 • • • • • • • • • • • • •

Sun Fire V480 • • • • • • • • • • • • • •

Sun Fire V490 • • • • • • • • • • • • • • •

Sun Fire V880 • • • • • • • • • • • • • •


TABLE 5-1 FC Storage for SPARC Servers (Continued)


Server Sun StorEdge Arrays

Sun StorageTek 2540 RAID Array

Sun StorEdge 3510 RAID Array

Sun StorEdge 3511 RAID Array

Sun StorEdge 3910/3960 System

Sun StorEdge 6120 Array

Sun StorEdge 6130 Array

Sun StorageTek 6140 Array

Sun Storage 6180 Array

Sun StorEdge 6320 System

Sun StorageTek 6540 Array

Sun Storage 6580/6780 Arrays

Sun StorEdge 6910/6960 Arrays

Sun StorEdge 6920 System

Sun StorEdge 9910/9960 Arrays

Sun StorEdge 9970/9980

Sun StorageTek 9985/9990

Sun StorageTek 9985V/9990V


Sun Fire V890 • • • • • • • • • • • • •

Sun Netra 120


Sun Netra 1280 • • • • • • • • • • •

Sun Netra 1290 • • • • • • • • • • •

Sun Netra 20 • • • • •

Sun Netra 240 • • • • •

Sun Netra 440 • • • • • • • • •

Sun Netra CT 900 CP3010 • •

Sun Netra CT 900 CP3060 • • • •

Sun Netra CT 900 CP3260 • •

Sun Netra t 1120/1125 • • • • •

Sun Netra t 1400/1405 • • • • • • •

Sun Netra T1 AC200/DC200
Sun Netra T2000 • • • • • • • • • •

Sun Netra T5220 • • • •

Sun Netra T5440 • • • •

Sun SPARC Enterprise • • • • • • • • • •


M3000
Sun SPARC Enterprise • • • • • • • • • • • • •
M4000
Sun SPARC Enterprise • • • • • • • • • • • •
M5000
Sun SPARC Enterprise • • • • • • • • • • • •
M8000


TABLE 5-1 FC Storage for SPARC Servers (Continued)


Server Sun StorEdge Arrays

Sun StorageTek 2540 RAID Array

Sun StorEdge 3510 RAID Array

Sun StorEdge 3511 RAID Array

Sun StorEdge 3910/3960 System

Sun StorEdge 6120 Array

Sun StorEdge 6130 Array

Sun StorageTek 6140 Array

Sun Storage 6180 Array

Sun StorEdge 6320 System

Sun StorageTek 6540 Array

Sun Storage 6580/6780 Arrays

Sun StorEdge 6910/6960 Arrays

Sun StorEdge 6920 System

Sun StorEdge 9910/9960 Arrays

Sun StorEdge 9970/9980

Sun StorageTek 9985/9990

Sun StorageTek 9985V/9990V


Sun SPARC Enterprise • • • • • • • • • • •
M9000
Sun SPARC Enterprise • • • • • • • • • • • • • •
T5120
Sun SPARC Enterprise • • • • • • • • • • • • • •
T5140
Sun SPARC Enterprise • • • • • • • • • • • • • •
T5220
Sun SPARC Enterprise • • • • • • • • • • • • • •
T5240
Sun SPARC Enterprise • • • • • • • • • • • • • •
T5440
External I/O Expansion • • • • • • • • • • •b •b •b •b
Unit for Sun SPARC
Enterprise M4000, M5000,
M8000, M9000
External I/O Expansion • • • • • • • • •b •b •b •b
Unit for Sun SPARC
Enterprise T5120, T5140,
T5220, T5240
USBRDT-5240 Uniboard • • • • • •
for Sun Fire 4800, E4900,
6800, E6900, 12K, 15K,
E20K, E25K

a Only these servers’ SBus I/O boards are supported for shared cluster storage
b The SE 9900 WWWW includes External I/O Expansion Unit support under the base server


TABLE 5-2 FC Storage for x64 Servers

Sun StorageTek 2540 RAID Array

Sun StorEdge 3510 RAID Array

Sun StorEdge 3511 RAID Array

Sun StorEdge 6120 Array

Sun StorEdge 6130 Array

Sun StorageTek 6140 Array

Sun Storage 6180 Array

Sun StorEdge 6320 System

Sun StorageTek 6540 Array

Sun Storage 6580/6780 Arrays

Sun StorEdge 6920 System

Sun StorEdge 9910/9960 Arrays

Sun StorEdge 9970/9980

Sun StorageTek 9985/9990

Sun StorageTek 9985V/9990V


Sun Blade X6220 • • • • • • • • • • •

Sun Blade X6240 • • • • • • • • • • • • •

Sun Blade X6250 • • • • • • • • • • • • •

Sun Blade X6270 • • • • • • • •

Sun Blade X6440 • • • • • • • • • • • • •

Sun Blade X6450 • • • • • • • • • • • • •

Sun Blade X8400 • • • • • • • • •

Sun Blade X8420 • • • • • • • • • • • • • • •

Sun Blade X8440 • • • • • • • • • • • • • • •

Sun Blade X8450 • • • • • • • • • • • • • • •

Sun Fire V40z • • • • • • • • • •

Sun Fire X2100 M2 • • • • • • • • • • • •

Sun Fire X2200 M2 • • • • • • • • • • • • • •

Sun Fire X4100 • • • • • • • • • • • •

Sun Fire X4100 M2 • • • • • • • • • • • • • • •

Sun Fire X4140 • • • • • • • • • • • • • •

Sun Fire X4150 • • • • • • • • • • • • • •

Sun Fire X4170 • • • •

Sun Fire X4200 • • • • • • • • • • • •

Sun Fire X4200 M2 • • • • • • • • • • • • • • •

Sun Fire X4240 • • • • • • • • • • • • • •

Sun Fire X4250 • • • • • •

Sun Fire X4270 • • • •


TABLE 5-2 FC Storage for x64 Servers (Continued)

(Continuation rows cover the following servers: Sun Fire X4275, Sun Fire X4440,
Sun Fire X4450, Sun Fire X4540, Sun Fire X4600, Sun Fire X4600 M2,
Sun Netra X4200 M2, Sun Netra X4250, and Sun Netra X4450.)

For other storage arrays and other x64 servers, please refer to the specific server
discussion in Chapter 3.

TABLE 5-3 Older FC Storage and Platform Compatibility Matrix


Sun StorEdge Arrays Sun StorEdge Arrays
Sun StorEdge A3500FC System

Sun StorEdge A5x00 Array

Sun StorEdge T3 Array (Single Brick)b

Sun StorEdge T3 Array (Partner Pair)b

Sun StorEdge A3500FC System

Sun StorEdge A5x00 Array

Sun StorEdge T3 Array (Single Brick)e

Sun StorEdge T3 Array (Partner Pair)b


Server

Netra t 1120/1125 Sun Fire T2000 • •

Netra t 1400/1405 Sun Fire V120


Netra T1 AC200/DC200 Sun Fire V210
Netra 20 • Sun Fire V215 • •

Netra 120 Sun Fire V240 • •

Netra 240 Sun Fire V245 • •

Netra 440 • • Sun Fire V250 • •

Netra 1280 •d • • Sun Fire 280R •c • •

Netra 1290 • • • Sun Fire V440 • •

Sun Enterprise 220R • • • Sun Fire V445 • •

Sun Enterprise 250 • • • Sun Fire V480 •c • •

Sun Enterprise 420R • • • Sun Fire V490 •c • •

Sun Enterprise 450 • • • Sun Fire V880 •d • •

Sun Enterprise 3000a • • • • Sun Fire V890 •d • •

Sun Enterprise 3500a • • • • Sun Fire V1280 • • • •

Sun Enterprise 4000a • • • • Sun Fire E2900 • •

Sun Enterprise 4500a • • • • Sun Fire 3800 •c • •

Sun Enterprise 5000a • • • • Sun Fire 4800/4810 •c • •

Sun Enterprise 5500a • • • • Sun Fire E4900 • •

Sun Enterprise 6000a • • • • Sun Fire 6800 •c • •


TABLE 5-3 Older FC Storage and Platform Compatibility Matrix (Continued)


Sun StorEdge Arrays Sun StorEdge Arrays

Sun StorEdge A3500FC System

Sun StorEdge A5x00 Array

Sun StorEdge T3 Array (Single Brick)b

Sun StorEdge T3 Array (Partner Pair)b

Sun StorEdge A3500FC System

Sun StorEdge A5x00 Array

Sun StorEdge T3 Array (Single Brick)e

Sun StorEdge T3 Array (Partner Pair)b


Server

Sun Enterprise 6500a • • • • Sun Fire E6900 • •

Sun Enterprise 10Ka • • • • Sun Fire 12K/15K •c • •

Sun Fire T1000 Sun Fire E20K/E25K • •

Sun SPARC Enterprise • •


T5440

a Only these servers’ SBus I/O boards are supported for cluster shared storage
b The T2000 is supported with the T3+ only
c Only Sun StorEdge A5200 supported
d Only Sun StorEdge A5100/A5200 supported
e The T2000 is supported with the T3+ only


Supported SCSI Storage Devices


Table 5-4 lists the SCSI storage devices supported with Sun Cluster and the server
types that can share these storage devices. Once you have determined whether your
server and storage combination is supported, refer to the storage details section to
find other supported components. If you have mixed types of servers in your cluster,
refer to “Sharing Storage Among Different Types of Servers in a Cluster” on page 36
for additional restrictions.

TABLE 5-4 SCSI Storage for SPARC Servers


Sun Netra Sun StorEdge Arrays
Netra st D130 Array

Netra st A1000 Array

Netra st D1000 Array

Sun StorEdge S1 Array

Sun StorEdge D2 Array

Sun StorEdge A3500 Array

Sun StorEdge 3120 JBOD Array

Sun StorEdge 3310 JBOD Array

Sun StorEdge 3310 RAID Array

Sun StorEdge 3320 JBOD Array

Sun StorEdge 3320 RAID Array


Server

Sun Enterprise 10K


Sun Enterprise 220R • • • • • • •

Sun Enterprise 250 • • • • • • •

Sun Enterprise 3x00 • • •

Sun Enterprise 420R • • • • • • •

Sun Enterprise 450 • • • • • • •

Sun Enterprise 4x00 • • •

Sun Enterprise 5x00 • • •

Sun Enterprise 6x00 • • •

Sun Fire 12K • • • • •

Sun Fire 15K • • • • •

Sun Fire 280R • • • • • • •

Sun Fire 3800


Sun Fire 4800 • • • • •

Sun Fire 4810 • • • • •

Sun Fire 6800 • • • • •

Sun Fire E20K • • • •

Sun Fire E25K • • • •

Sun Fire E2900 • • • • •

Sun Fire E4900 • • •


TABLE 5-4 SCSI Storage for SPARC Servers (Continued)


Sun Netra Sun StorEdge Arrays

Netra st D130 Array

Netra st A1000 Array

Netra st D1000 Array

Sun StorEdge S1 Array

Sun StorEdge D2 Array

Sun StorEdge A3500 Array

Sun StorEdge 3120 JBOD Array

Sun StorEdge 3310 JBOD Array

Sun StorEdge 3310 RAID Array

Sun StorEdge 3320 JBOD Array

Sun StorEdge 3320 RAID Array


Server

Sun Fire E6900 • • •

Sun Fire T1000 • • • • • • •

Sun Fire T2000a • • • • • • •

Sun Fire V120 •

Sun Fire V125 • • • • • •

Sun Fire V1280 • • • • • • •

Sun Fire V210 • • • • • • •

Sun Fire V215 • • • • • • •

Sun Fire V240 • • • • • • •

Sun Fire V245 • • • • • • •

Sun Fire V250 • • • • • • •

Sun Fire V440 • • • • • • •

Sun Fire V445 • • • • • • •

Sun Fire V480 • • • • • • •

Sun Fire V490 • • • • • • •

Sun Fire V880 • • • • • • •

Sun Fire V890 • • • • • • •

Sun Netra 120 •

Sun Netra 1280 • • • • • • •

Sun Netra 1290 • • • • • • •

Sun Netra 20 • • • • • • • • •

Sun Netra 240 • • • • • • • •

Sun Netra 440 • • • • • •


TABLE 5-4 SCSI Storage for SPARC Servers (Continued)


Sun Netra Sun StorEdge Arrays

Netra st D130 Array

Netra st A1000 Array

Netra st D1000 Array

Sun StorEdge S1 Array

Sun StorEdge D2 Array

Sun StorEdge A3500 Array

Sun StorEdge 3120 JBOD Array

Sun StorEdge 3310 JBOD Array

Sun StorEdge 3310 RAID Array

Sun StorEdge 3320 JBOD Array

Sun StorEdge 3320 RAID Array


Server

Sun Netra t 1120/1125 • • • • • • • •

Sun Netra t 1400/1405 • • • • • • • • • •

Sun Netra T1 • •
AC200/DC200
Sun Netra T2000 • • • •

Sun Netra T5220 • • • • •

Sun Netra T5440 • • •

Sun SPARC Enterprise • • • •


M3000
Sun SPARC Enterprise • • • • •
M4000
Sun SPARC Enterprise • • • • •
M5000
Sun SPARC Enterprise • • • • •
M8000
Sun SPARC Enterprise • • • • •
M9000
Sun SPARC Enterprise • • • • •
T5120
Sun SPARC Enterprise • • • • •
T5140
Sun SPARC Enterprise • • • • •
T5220


TABLE 5-4 SCSI Storage for SPARC Servers (Continued)


Sun Netra Sun StorEdge Arrays

Netra st D130 Array

Netra st A1000 Array

Netra st D1000 Array

Sun StorEdge S1 Array

Sun StorEdge D2 Array

Sun StorEdge A3500 Array

Sun StorEdge 3120 JBOD Array

Sun StorEdge 3310 JBOD Array

Sun StorEdge 3310 RAID Array

Sun StorEdge 3320 JBOD Array

Sun StorEdge 3320 RAID Array


Server

Sun SPARC Enterprise • • • • •


T5240
Sun SPARC Enterprise • • • • •
T5440
External I/O Expansion • • •
Unit for Sun SPARC
Enterprise M4000, M5000,
M8000, M9000 Servers

a Support for SCSI storage with the Sun Fire T2000 server requires two PCI-X slots for HBAs. T2000 servers
with a disk controller that occupies one of the PCI-X slots are not supported with Sun Cluster and SCSI
storage.

TABLE 5-5 SCSI Storage for x64 Servers


Sun StorEdge 3120 JBOD Array

Sun StorEdge 3310 JBOD Array

Sun StorEdge 3310 RAID Array

Sun StorEdge 3320 JBOD Array

Sun StorEdge 3320 RAID Array

Server

Sun Fire V20z •

Sun Fire V40z • • • • •

Sun Fire X2100 M2 • • • • •

Sun Fire X2200 M2 • • • • •


TABLE 5-5 SCSI Storage for x64 Servers (Continued)

Sun StorEdge 3120 JBOD Array

Sun StorEdge 3310 JBOD Array

Sun StorEdge 3310 RAID Array

Sun StorEdge 3320 JBOD Array

Sun StorEdge 3320 RAID Array


Server

Sun Fire X4100 • • • • •

Sun Fire X4100 M2 • • • • •

Sun Fire X4140 • • • • •

Sun Fire X4150


Sun Fire X4170
Sun Fire X4200 • • • • •

Sun Fire X4200 M2 • • • • •

Sun Fire X4240 • • • • •

Sun Fire X4250 • • •

Sun Fire X4270


Sun Fire X4275
Sun Fire X4440 • • • • •

Sun Fire X4450 • • • • •

Sun Fire X4540 • • •

Sun Fire X4600 • • • • •

Sun Fire X4600 M2 • • • • •

Sun Netra X4200 M2 • • • • •

Sun Netra X4250 • • • • •

Sun Netra X4450 • • • • •


TABLE 5-6 SAS Storage for SPARC Servers

Sun StorageTek 2530 RAID Array

Sun Storage J4400 JBOD Array

Sun Storage J4200 JBOD Array


Server

Sun Fire E2900 •

Sun Fire T1000 • • •

Sun Fire T2000 • • •

Sun Fire V125 •

Sun Fire V1280 •

Sun Fire V215 • • •

Sun Fire V245 • • •

Sun Fire V445 • • •

Sun Fire V480 •

Sun Fire V490 •

Sun Fire V880 •

Sun Fire V890 •

Sun Netra T2000 •

Sun Netra T5440 •

Sun SPARC Enterprise M4000 • • •

Sun SPARC Enterprise M5000 • • •

Sun SPARC Enterprise M8000 • • •

Sun SPARC Enterprise M9000 • • •

Sun SPARC Enterprise T1000 - See Sun Fire T1000
Sun SPARC Enterprise T2000 - See Sun Fire T2000


TABLE 5-6 SAS Storage for SPARC Servers (Continued)

Sun StorageTek 2530 RAID Array

Sun Storage J4400 JBOD Array

Sun Storage J4200 JBOD Array


Server

Sun SPARC Enterprise T5120 • • •

Sun SPARC Enterprise T5140 • • •

Sun SPARC Enterprise T5220 • • •

Sun SPARC Enterprise T5240 • • •

Sun SPARC Enterprise T5440 • •

External I/O Expansion Unit for Sun •


SPARC Enterprise T5120, T5140, T5220 and
T5240 Servers

TABLE 5-7 SAS Storage for x64 Servers


Sun StorageTek 2530 RAID Array

Sun Storage J4400 JBOD Array

Sun Storage J4200 JBOD Array

Server

Sun Fire X2100 M2 • • •

Sun Fire X2200 M2 • • •

Sun Fire X4100 •

Sun Fire X4100 M2 • • •

Sun Fire X4140 • • •


TABLE 5-7 SAS Storage for x64 Servers (Continued)

Sun StorageTek 2530 RAID Array

Sun Storage J4400 JBOD Array

Sun Storage J4200 JBOD Array


Server

Sun Fire X4150 • • •

Sun Fire X4170 • • •

Sun Fire X4200 •

Sun Fire X4200 M2 • • •

Sun Fire X4240 • • •

Sun Fire X4250 • • •

Sun Fire X4270 • • •

Sun Fire X4275 • • •

Sun Fire X4440 • • •

Sun Fire X4450 • • •

Sun Fire X4600 • • •

Sun Fire X4600 M2 • • •

Sun Netra X4200 M2 •

Sun Netra X4450 •

Supported Ethernet-Connected Storage Devices


Please see the indicated sections for the following products:
■ “Sun StorageTek 2510 RAID Array” on page 173
■ “Sun StorageTek 5000 NAS Appliance” on page 175
■ “Sun Storage 7000 Unified Storage System” on page 179


Third-Party Storage
Please see the following link for information on supported third-party storage:
http://www.sun.com/software/cluster/osp/index.html



CHAPTER 6

Fibre Channel Storage Support

This chapter discusses Fibre Channel storage support in Sun Cluster, in both
direct-attached and SAN configurations.

SAN Configuration Support


This section pertains to SAN-switch-connected shared storage support.

Server/Switch/Storage Support
Using supported storage switches, it is possible to connect supported Fibre Channel
storage devices and supported servers in a Storage Area Network (SAN)
configuration. These configurations are supported with Sun Cluster as long as they
are within the range of supported devices and limitations listed below. Supported
configurations consist of supported SAN HBAs, switches, and storage devices
(all listed below), combined according to the SAN support rules (also listed below).

SAN Support Rules


In order to create a supported SAN connected cluster configuration, the following
rules must be followed:
■ The HBA/SAN/Storage configuration must be listed in the HBA, Storage, and
Switch sections below.
■ Cascading up to two layers of switches is supported.


■ The configuration must be supported by Network Storage. Please see the NWS
“what works with what” matrices, particularly the latest SAN matrix or the SE
9900 series matrix (if you are using a SE 9900 series storage array). You can find
these matrices at http://mysales.central/public/storage/products/matrix.html
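
As an informal illustration of these rules, the sketch below checks one proposed
path from HBA through switches to storage: every component must appear in the
supported lists, and no path may pass through more than two cascaded switches. The
component names are examples drawn from this chapter; the authoritative lists are
the ones below and the NWS matrices.

# Illustrative Python sketch of the SAN support rules above; lists are examples only.
SUPPORTED_HBAS = {"SG-XPCI2FC-QF4", "SG-XPCIE2FC-QF8-Z"}
SUPPORTED_SWITCHES = {"Brocade 5100", "QLogic 5602", "Cisco MDS 9124"}
SUPPORTED_STORAGE = {"Sun StorEdge 3510 RAID", "Sun StorageTek 6140"}

def san_path_ok(hba, switches, storage):
    if hba not in SUPPORTED_HBAS or storage not in SUPPORTED_STORAGE:
        return False
    if any(switch not in SUPPORTED_SWITCHES for switch in switches):
        return False
    # Cascading up to two layers of switches is supported.
    return len(switches) <= 2

print(san_path_ok("SG-XPCI2FC-QF4", ["Brocade 5100", "QLogic 5602"],
                  "Sun StorageTek 6140"))                       # True
print(san_path_ok("SG-XPCI2FC-QF4",
                  ["Brocade 5100", "QLogic 5602", "Cisco MDS 9124"],
                  "Sun StorageTek 6140"))                       # False: three switch layers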

Supported SAN Software


SAN software is supported as follows, unless noted otherwise by storage array, FC
switch, HBA, server, or other documentation.
■ Solaris 10: With associated SAN related patches.
■ Solaris 9: Sun StorEdge SAN Foundation Software - release 4.4.15 is supported.
■ Solaris 8: Sun StorEdge SAN Foundation Software - release 4.4.12 is supported.

Supported SAN Storage


Please refer to the section on the storage device of interest for support details. In
order to put together a supported configuration, please match a supported
server/HBA combination with a supported SAN switch below and a supported
SAN storage device from Table 5-1, “FC Storage for SPARC Servers,” on page 42.
This configuration must adhere to the SAN support rules listed above. Once this
combination is complete, please check it against the Network Storage “what works
with what” matrices to ensure that both groups support the configuration. If they
do, the configuration is supported; if not, additional testing will need to be done
to enable this support.

Supported SAN Host Bus Adapters (HBAs)


To find if a given server and HBA combination can be supported in a SAN
environment, please see the specific storage device details in this chapter to check
for the appropriate storage/HBA configuration. If a given combination of storage,
server and HBA are supported, then you may proceed to choosing a SAN switch.
The HBAs supported in a Sun Cluster SAN are listed in the following sections. Refer
to individual storage device sections for exceptions.

1Gb HBAs
■ SBus: (X)6757A Sun StorEdge SBus Dual FC Network Adapter
■ PCI:


■ (X)6727A Sun StorEdge PCI Dual Fibre Channel Network Adapter


■ (X)6799A Sun StorEdge PCI Single Fibre Channel Network Adapter
■ cPCI: (X)6748A Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter

2Gb HBAs
■ SBus: none
■ PCI:
■ SG-(X)PCI1FC-QF2 ((X)6767A) Sun StorEdge 2G FC PCI Single Fibre Channel
HBA
■ SG-(X)PCI2FC-QF2 ((X)6768A) Sun StorEdge 2G FC PCI Dual Fibre Channel
HBA
■ SG-(X)PCI1FC-JF2 JNI 2Gb PCI Single Port Fibre Channel HBA
■ SG-(X)PCI2FC-JF2 JNI 2Gb PCI Dual Port Fibre Channel HBA
■ SG-(X)PCI1FC-EM2 Emulex 2Gb PCI
■ SG-(X)PCI2FC-EM2 Emulex 2Gb PCI
■ SG-(X)PCI1FC-QL2 Sun StorEdge 2G FC PCI Single Fibre Channel HBA
■ SG-(X)PCI2FC-QF2-Z Sun StorEdge 2G FC PCI Dual Fibre Channel HBA
■ cPCI: none

4Gb HBAs
■ SBus: none
■ PCI:
■ SG-(X)PCI1FC-QF4 Sun StorEdge 4G FC PCI Single Fibre Channel Network
Adapter
■ SG-(X)PCI2FC-QF4 Sun StorEdge 4G FC PCI Dual Fibre Channel Network
Adapter
■ SG-(X)PCI1FC-EM4 Emulex Single Port 4Gb Fibre Channel HBA
■ SG-(X)PCI2FC-EM4 Emulex Dual Port 4Gb Fibre Channel HBA
■ cPCI: none
■ PCI-E
■ SG-(X)PCIE1FC-QF4 Sun StorEdge 4G FC PCI-E Single Fibre Channel Network
Adapter
■ SG-(X)PCIE2FC-QF4 Sun StorEdge 4Gb PCI-E Dual Port Fibre Channel HBA
■ SG-(X)PCIE1FC-EM4 Emulex 4Gb Single Port PCI-E
■ SG-(X)PCIE2FC-EM4 Emulex 4Gb Dual Port PCI-E


■ PCI-E ExpressModules
■ SG-XPCIE2FC-QB4-Z
■ SG-XPCIE2FC-EB4-Z
■ SG-XPCIE2FCGBE-Q-Z
■ SG-XPCIE2FCGBE-E-Z
■ Sun Blade 8000/8000 P NEM
■ SG-XPCIE20FC-NEM-Z Sun StorageTek 4Gb FC NEM 20-Port HBA
■ Sun Netra CT 900
■ SG-XPCIE2FC-ATCA-Z Sun StorageTek 4Gb Fibre Channel ATCA HBA
■ XCP32X0-RTM-FC-Z Sun Netra CP3200 ARTM-FC

8Gb HBAs
■ PCI-E
■ SG-XPCIE1FC-EM8-Z
■ SG-XPCIE2FC-EM8-Z
■ SG-XPCIE1FC-QF8-Z
■ SG-XPCIE2FC-QF8-Z

Supported SAN Switches


The following switches are supported in a Sun Cluster SAN environment. In order to put
together a supported configuration, please match a supported server/HBA combination
with a switch from the following list and a supported SAN storage device (listed below).
This configuration must adhere to the SAN support rules listed above. Once this
combination is complete, please check it against the Network Storage “what works
with what” matrices to ensure that both groups support the configuration. If they
do, the configuration is supported; if not, additional testing will need to be done
to enable this support.
■ Sun 8 and 16 port 1Gb switches
■ Sun 8, 16 and 64 port 2Gb switches
■ Brocade 200E, 300, 3101 (4G, 8G bps), 2800, 3200, 3250, 3800, 3850, 3900, 4100, 4900,
5000, 6400, 12000, 24000, 48000, 5100, 5300, DCX (4G, 8G bps), DCX-4S (4G, 8G bps)
switches
■ McData 4300, 4400, 4500, 4700, 6064, 6140, Intrepid 10000 switches
■ QLogic 5200, 5202, 5600, 5602, 5802V (4G and 8G), 9100, 9200 switches

■ Cisco MDS 9020, 9120, 9124, 9134, 9140, 9216A, 9216i, 9222i (see note 2), 9506,
9509, 9513 switches

1. Does not support distance solutions, e.g., campus or metro cluster.
2. iSCSI/FCIP options not yet supported as of Dec ’07.

Please see SAN WWWW for any possible constraints or limitations.

Sun StorEdge A3500FC System

SE A3500FC Configuration Rules


Daisy-chaining of the controller modules is not supported.

Node Connectivity Limits


SE A3500FC systems can connect to 2 cluster nodes.

Hub Support
Hubs are required to connect hosts to an A3500FC in cluster configurations. An
A3500FC controller module is connected to two hosts via hubs: each StorEdge
A3500FC controller module is connected to two hubs, and both hosts are connected to
both hubs. The two hubs must be connected to different host bus adapters on each
node. Figure 6-1 on page 66 shows how to configure an A3500FC unit as shared
storage.

Up to four A3500FC controller modules can be connected to a hub. You can connect
controller modules in the same or separate cabinets.

RAID Requirements
An SE A3500FC controller module with redundant controllers provides the appropriate
hardware redundancy. An SE A3500FC controller also has hardware RAID capabilities
built in. Hence, software mirroring of data is not required.

However, a software volume manager can be used for managing the data. Also, a
cluster configuration with an SE A3500FC array with a single controller module is
supported; such a configuration requires volume management or software mirroring.



Multipathing
Only the Redundant Disk Array Controller (RDAC) driver from Sun StorEdge RAID
Manager 6.22 is supported.

Volume Manager Support


All volume manager releases supported by Sun Cluster 3 and the A3500FC.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.

Sharing SE A3500FC Systems


There are no Sun Cluster 3 specific requirements.

Quorum Devices
Sun StorEdge A3500 and A3500FC arrays cannot be used as quorum devices.

Campus Cluster
Campus clusters are not supported.


SE A3500FC Support Matrix


To select a supported configuration, first check Table 5-1, “FC Storage for SPARC
Servers,” on page 42 to see if your server and storage combination is supported. If it
is, select your host adapters from TABLE 6-1.

TABLE 6-1 SE A3500FC Support Matrix

Server                        Host Adapter               Part Number

Sun Enterprise 3x00, 4x00,    onboard FC-AL socket
5x00, 6x00                    FC-AL SBus Host Adapter    6730A
Sun Enterprise 10K            FC-AL SBus Host Adapter    6730A


SE A3500FC Other Components

TABLE 6-2 SE A3500FC Supported Components

Component Part Number

FC controller module 6538A


FC-AL seven-port Hub 6732A
FC-AL GBIC 6731A
2-meter, fiber-optic cable 973A
15-meter, fiber-optic cable 978A

SE A3500FC Sample Configuration Diagrams


FIGURE 6-1 Sun StorEdge A3500FC as Shared Storage
(Diagram: two nodes, each with two host adapters, connected through two hubs to
controllers A and B of the A3500FC controller module.)

Sun StorEdge A5x00 Array



This section covers Sun Cluster requirements when configured with the Sun
StorEdge A5000, A5100, or A5200.

SE A5x00 Configuration Rules


Daisy-chaining of A5x00s is not supported.

Both full- and split-loops are supported.

Node Connectivity Limits


SE A5x00 arrays can connect to 2 cluster nodes.

Switch and Hub Support


The Sun StorEdge FC network switches (6746A, SG-XSW16-32P) are supported in
python mode. The SE A5x00 arrays can also be directly attached without hubs or
switches.

RAID Requirements
In order to ensure data redundancy and hardware redundancy, software mirroring
across boxes is required. Mirroring of data between the two halves of the same
A5x00 unit is not supported.

Multipathing
Multipathing (for example, using DMP, MPxIO, etc.) is not supported with A5x00s.

Volume Manager Support


All volume manager releases supported by Sun Cluster 3 and the A5x00.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.


Sharing SE A5x00 Arrays


Box sharing is not supported.

SE A5x00 Support Matrix


To select a supported configuration, first check Table 5-1, “FC Storage for SPARC
Servers,” on page 42 to see if your server and storage combination is supported. If it
is, select your host adapters from TABLE 6-3, TABLE 6-4, or TABLE 6-5.

TABLE 6-3 Sun Cluster 3 and SE A5000 Support Matrix

Server Host Bus Adapter Connectivity A5x00 Configuration

Sun Enterprise 6729A Direct-Attached full-loop only, each host


220R, 250, 420R, 450 must be on a different
loop.
Hub (6732A) full-loop only
Sun Enterprise onboard FC Direct-Attached full-loop, split-loop
3x00-6x00 socket
Hub (6732A)
6730A
6757A
6757A Switch (6746A, SG-
XSW16-32P)
Sun Enterprise 10K 6730A Direct-Attached full-loop, split-loop
6757A Hub (6732A)
6757A Switch (6746A, SG-
XSW16-32P)
Sun Fire 280R, 4800, 6799A, 6727A Direct-Attached full-loop, split-loop
4810, 6800
Hub (6732A)
Switch (6746A, SG-
XSW16-32P)
Sun Fire 3800 6748A Direct-Attached full-loop, split-loop
Hub (6732A)
Switch (6746A, SG-
XSW16-32P)


TABLE 6-4 Sun Cluster 3 and SE A5100 Support Matrix

Server Host Bus Adapter Connectivity A5x00 Configuration

Netra 1280 6799A, 6727A Direct-Attached full-loop, split-loop


Sun Fire 280R, Hub (6732A)
V480/V490,
V880/V890, V1280, Switch (6746A, SG-
4800, 4810, 6800 XSW16-32P)

Sun Enterprise 6729A Direct-Attached full-loop only, each host


220R, 250, 420R, must be on a different
450 loop.
Hub (6732A) full-loop only
Sun Enterprise onboard FC socket Direct-Attached full-loop, split-loop
3x00-6x00 6730A Hub (6732A)
6757A
6757A Switch (6746A, SG-
XSW16-32P)
Sun Enterprise 10K 6730A Direct-Attached full-loop, split-loop
6757A Hub (6732A)
6757A Switch (6746A, SG-
XSW16-32P)
Sun Fire 3800 6748A Direct-Attached full-loop, split-loop
Hub (6732A)
Switch (6746A, SG-
XSW16-32P)

TABLE 6-5 Sun Cluster 3 and SE A5200 Support Matrix

Server Host Bus Adapter Connectivity A5x00 Configuration

Netra 1280 6799A, 6727A Direct-Attached full-loop, split-loop


Sun Fire 280R, Hub (6732A)
V480/V490,
V880/V890, V1280, Switch (6746A, SG-
4800, 4810, 6800, XSW16-32P)
12K/15K


TABLE 6-5 Sun Cluster 3 and SE A5200 Support Matrix (Continued)

Server Host Bus Adapter Connectivity A5x00 Configuration

Sun Enterprise 6729A Direct-Attached full-loop only, each host


220R, 250, 420R, must be on a different
450 loop.
Hub (6732A) full-loop only
Sun Enterprise onboard FC socket Direct-Attached full-loop, split-loop
3x00-6x00 6730A Hub (6732A)
6757A
6757A Switch (6746A, SG-
XSW16-32P)
Sun Enterprise 10K 6730A Direct-Attached full-loop, split-loop
6757A Hub (6732A)
6757A Switch (6746A, SG-
XSW16-32P)
Sun Fire 3800 6748A Direct-Attached full-loop, split-loop
Hub (6732A)
Switch (6746A, SG-
XSW16-32P)

SE A5x00 Other Components


The part numbers referenced in the support matrix tables are:

TABLE 6-6 SE A5x00 Part Number Descriptions

Part # Description

6729A FC-100 Host Adapter


6730A FC-AL SBus Host Adapter
6748A Sun StorEdge cPCI Dual FC Network Adapter
6799A Sun StorEdge PCI Single FC Network Adapter
6727A Sun StorEdge PCI Dual FC Network Adapter
6757A Sun StorEdge SBus Dual FC Network Adapter


TABLE 6-6 SE A5x00 Part Number Descriptions (Continued)

Part # Description

6732A FC-AL seven-port Hub


6746A Sun StorEdge Network FC Switch -8
SG-XSW16-32P Sun StorEdge Network FC Switch -16

Other components supported with SE A5x00 are listed below:

TABLE 6-7 SE A5x00 Supported Components

Component Part Number

FC-AL GBIC 6731A


Interface Board 6734A
2-meter, fiber-optic cable 973A
15-meter, fiber-optic cable 978A
5-meter, fiber-optic cable 9715A

SE A5x00 Sample Configuration Diagrams


Some sample configurations for connecting SE A5x00 as shared storage are:
■ Direct-attached, full-loop A5x00 configuration: Figure 6-2 on page 72.
■ Direct-attached, split-loop A5x00 configuration: Figure 6-3 on page 72.
■ Hub-attached, full-loop, single-loop A5x00 configuration: Figure 6-4 on page 73.
■ Hub-attached, full-loop, dual-loop A5x00 configuration: Figure 6-5 on page 74.


FIGURE 6-2 Direct-Attached, Full-Loop A5x00 Configuration
(Diagram: two nodes, each with two host adapters, directly attached to two A5x00
arrays; A5x00 #1 holds the data and A5x00 #2 holds the mirror.)

FIGURE 6-3 Direct-Attached, Split-Loop A5x00 Configuration
(Diagram: two nodes, each with two host adapters, directly attached to the split
loops of two A5x00 arrays; Data and Data’ reside on A5x00 #1, Mirror and Mirror’ on
A5x00 #2.)


FIGURE 6-4 Hub-Attached, Full-Loop, Single-Loop A5x00 Configuration
(Diagram: two nodes, each with two host adapters, connected through two hubs to two
full-loop A5x00 arrays; A5x00 #1 holds the data and A5x00 #2 holds the mirror.)


FIGURE 6-5 Hub-Attached, Full-Loop, Dual-Loop A5x00 Configuration
(Diagram: two nodes, each with two host adapters, connected through hubs A and B to
two A5x00 arrays; Data and Data’ are mirrored to Mirror and Mirror’ across the
arrays.)

Sun StorEdge T3 Array (Single Brick)

SE T3 Single Brick Configuration Rules

Node Connectivity Limits


T3A arrays can connect to two nodes. T3B arrays can connect to up to 4 nodes.


Hub and Switch Support


Hubs/Switches are required to connect a T3 brick to multiple nodes in the cluster.

If a T3 is connected to more than two nodes, switches are mandatory.

RAID Requirements
In order to ensure data redundancy and hardware redundancy, host-based mirroring
between two arrays is required.

Multipathing
Multipathing (for example using DMP, MPxIO, etc.) is not supported with T3 single
brick configurations.

Volume Manager Support


All volume manager releases supported by Sun Cluster 3 and the T3.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.

Sharing T3 Single Bricks with Multiple Clusters/Non-Clustered Nodes
Sun Cluster 3 requires exclusive access to LUNs that store its shared data. Sun
StorEdge T3 array supports LUN masking and LUN mapping with FW2.1. With this
feature a LUN can be assigned exclusively to a cluster of nodes. Using this feature a
Sun StorEdge T3 storage device can be shared among multiple clusters and non-
clustered hosts.

SE T3 Single Brick Support Matrix and Exceptions


To determine whether your configuration is supported:


1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-8 or Table 6-9 to determine if there is limited HBA support.

TABLE 6-8 T3A Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra 20, Netra 1290 6727A


Netra 440, Netra 1280 6799A, 6727A
Sun Fire V240, V250, 280R, V440, V480, V880,
V1280, E2900, E4900, E6900
Sun Enterprise 220R, 250, 420R, 450 6799A, 6727A
Sun Enterprise 3x00-6x00 onboard FCAL socketa
6730Aa
6757A
Sun Enterprise 10K 6730Aa, 6757A
a Supported in arbitrated loop configurations only (no SAN configurations).

TABLE 6-9 T3B Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra 20 6727A
Sun Enterprise 3x00-6x00 onboard FCAL socketa
6730Aa
Sun Enterprise 10K 6730Aa, 6757A
a Supported in arbitrated loop configurations only (no SAN configura-
tions).

4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed in the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html)


SE T3 Single Brick Other Components


The part numbers referenced in the support matrix tables above are:

TABLE 6-10 SE T3 Single Brick Part Number Descriptions

Part Number Description

6730A FC-AL SBus Host Adapter


6748A Sun StorEdge cPCI Dual FC Network Adapter
6799A Sun StorEdge PCI Single FC Network Adapter
6727A Sun StorEdge PCI Dual FC Network Adapter
6757A Sun StorEdge SBus Dual FC Network Adapter
6732A FC-AL seven-port Hub
6746A Sun StorEdge Network FC Switch -8 (1GB)
SG-XSW16-32P Sun StorEdge Network FC Switch -16 (1GB)
Brocade 2800 1GB Brocade 2800 Switch
SG-XSW8-2GB Sun StorEdge Network FC Switch -8 (2GB)
SG-XSW16-2GB Sun StorEdge Network FC Switch -16 (2GB)
Brocade 3800 2GB Brocade 3800 Switch

Other components supported with T3 are listed below:

TABLE 6-11 SE T3 Single Brick Supported Components

Component Part # of the Component

FC-AL GBIC 6731A


2-meter, fiber-optic cable 973A
15-meter, fiber-optic cable 978A
5-meter, fiber-optic cable 9715A

SE T3 Single Brick Sample Configuration Diagrams
The figure below shows how to configure a T3 in a single-brick configuration as
shared storage.


FIGURE 6-6 Sun StorEdge T3 in Single-Brick Configuration as Shared Storage
(Diagram: four nodes, each with two host adapters, connected through two switches
to two T3 bricks; one brick holds the data and the other holds the mirror.)

Sun StorEdge T3 Array (Partner Pair)

SE T3 Partner Pair Configuration Rules

Node Connectivity Limits


T3 (T3A) arrays can connect to two nodes. T3+ (T3B) arrays can connect to up to 4
nodes.

Hub and Switch Support


Hubs or switches are required when connecting to 2 nodes. FC switches are required
when connecting to more than 2 nodes.

RAID Requirements
A T3 partner pair has full hardware redundancy built in. Hence, hardware RAID 5 may
be used for data availability, which means that a cluster configuration with a
single T3 partner pair is supported.


Multipathing
Use of Sun StorEdge Traffic Manager (MPxIO) is required for dual paths from a
server to the T3 partner-pair arrays. No other multipathing solution (for example,
Veritas DMP) is supported.

Volume Manager Support


All volume manager releases supported by Sun Cluster 3 and the T3.

Software, Firmware, and Patches


T3 partner pair support requires Sun Cluster 3.0 7/01 (or later), and Solaris 8 7/01
(or later).

FW2.1 is required to use LUN masking and LUN mapping.

Sharing T3 Partner Pairs with Multiple Clusters/Non-Clustered Nodes
Sun Cluster 3 requires exclusive access to LUNs that store its shared data. Sun
StorEdge T3 array supports LUN masking and LUN mapping with FW2.1. With this
feature, a LUN can be assigned exclusively to a cluster of nodes. Using this feature,
a Sun StorEdge T3 storage device can be shared among multiple clusters and non-
clustered nodes.

SE T3 Partner Pair Support Matrix


To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your
chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Refer to the SAN WWWW


(http://mysales.central/public/storage/products/matrix.html) for additional
information and restrictions.


SE T3 Partner Pair Other Components


The following table lists part numbers of components that you might use in your
cluster configuration.

TABLE 6-12 SE T3 Partner-Pair Part Number Descriptions

Part Number Description

6730A FC-AL SBus Host Adapter


6748A Sun StorEdge cPCI Dual FC Network Adapter
6799A Sun StorEdge PCI Single FC Network Adapter
6727A Sun StorEdge PCI Dual FC Network Adapter
6757A Sun StorEdge SBus Dual FC Network Adapter
6732A FC-AL seven-port Hub
6767A 2GB Sun StorEdge PCI Dual FC Network Adapter
6768A 2GB Sun StorEdge PCI Single FC Network Adapter
6746A Sun StorEdge Network FC Switch -8
SG-XSW16-32P Sun StorEdge Network FC Switch -16
Brocade 2800 1GB Brocade 2800 Switch
SG-XSW8-2GB Sun StorEdge Network FC Switch -8 (2GB)
SG-XSW16-2GB Sun StorEdge Network FC Switch -16 (2GB)
Brocade 3800 2GB Brocade 3800 Switch

Other components supported with T3 are listed below:

TABLE 6-13 SE T3 Partner-Pair Supported Components

Component Part # of the Component

FC-AL GBIC 6731A


2-meter, fiber-optic cable 973A
15-meter, fiber-optic cable 978A
5-meter, fiber-optic cable 9715A


SE T3 Partner Pair Sample Configuration Diagrams
The following illustration shows how to configure 2 T3 partner pairs as shared
storage.

FIGURE 6-7 Sun StorEdge T3 Partner Pair as Shared Storage
(Diagram: two nodes, each with two host adapters, connected through two switches to
two T3 partner pairs, each holding RAID 5 data.)

Sun StorageTek 2540 RAID Array

ST 2540 Configuration Rules:


■ Sun Cluster supports both Simplex (ST2540 with 1x controller) and Duplex
(ST2540 with 2x controllers) configurations


Node Connectivity Limits


■ A maximum of 4 nodes can be connected to any one LUN using direct-attached (DAS)
cabling, or 8 nodes when connected through a SAN.

Hubs and Switches


■ FC switches are supported. The ST2540 can also be directly attached.

RAID Requirements
■ Simplex Configuration:
■ Two ST2540 arrays are required.
■ Data must be mirrored across the arrays using volume manager software
(host-based mirroring).
■ Duplex Configuration:
■ A single ST2540 array is supported with properly configured dual controllers,
multipathing, and hardware RAID.

Multipathing
■ Sun StorEdge Traffic Manager (MPxIO) is required in a Duplex Configuration
(ST2540 with 2x controllers).

ST 2540 Volume Manager Support


■ There are no Sun Cluster specific requirements. Please see the base product
documentation regarding Volume Manager support.

Software, Firmware, and Patches


■ Please see ST2540 release notes.

Sharing ST 2540 Arrays


■ LUN masking enables sharing across multiple platforms. See the product
documentation for further details.


ST 2540 Support Matrix and Exceptions:


To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-14 to determine if there is limited HBA support

TABLE 6-14 ST 2540 Array/Server combinations with Limited HBA Support

Server Host Adapter

Netra CT 900 CP3060 SG-XPCIE2FC-ATCA-Z


Netra CT 900 CP3260 XCP32X0-RTM-FC-Z
Netra T2000 SG-XPCI2FC-QF4
Netra T5220 SG-XPCI2FC-EM4-Z, SG-XPCI2FC-QF4,
SG-XPCIE2FC-EM4, SG-XPCIE2FC-QF4

4. If HBA support is not limited, you can use your server and storage combination
with host adapters in the “Server Search” under the “Searches” tab of the
Interop Tool, https://interop.central.sun.com/interop/interop

Sun StorEdge 3510 RAID Array


This section describes the configuration rules for using Sun StorEdge 3510 RAID.
Only SE 3510 RAID units can be used as shared storage devices with Sun Cluster 3.
SE 3510 JBOD units can be attached to SE 3510 RAID units for additional storage,
but cannot be used independently of the SE 3510 RAID units in a Sun Cluster 3
configuration.

SE 3510 RAID Configuration Rules


■ Both AC and DC power supplies are supported.
■ Up to 8 additional 3510 JBOD units can be connected to an existing clustered 3510
RAID device. SE 3510 JBOD units are NOT supported in a clustered configuration
unless they are connected to a SE 3510 RAID unit.


■ Logical Volumes are NOT supported.


■ Connecting up to eight initiators to one channel is supported when using the
SAN 4.3 (or later) drivers. The SE 3510 has a total of 8 host ports set up in pairs
(4 channels). Driver versions predating SAN 4.3 limit the maximum number of
initiators connected to a single SE 3510 channel to one.
■ A single SE 3510 can be used for shared storage as long as it is configured with
dual controllers.
■ A maximum of 8 target IDs can be configured per channel (256 LUNs). If running
SAN 4.3 and 3.27r controller firmware or later, this restriction is removed.

Node Connectivity Limits


The SE 3510 RAID array can connect to up to 8 nodes.

Hub and Switch Support


FC switches are supported. SE 3510 RAID arrays can also be directly attached.

RAID Requirements
■ SE 3510 RAID arrays can be used without a software volume manager if you have
correctly configured dual controllers, multipathing, and hardware RAID.
■ A single 3510 is supported with properly configured dual controllers,
multipathing, and hardware RAID.
■ Single controller SE 3510 RAID units are supported as long as they are mirrored
to another array.
■ Hardware RAID is supported with the SE 3510 RAID array, with or without
software mirroring.

Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with dual-controller SE 3510
configurations.


Volume Manager Support


All volume manager releases supported by Sun Cluster 3 and the SE3510 RAID
array.

Software, Firmware, and Patches


SE 3510 RAID support requires Sun StorEdge SAN Foundation 4.2 software and
firmware patch ID #113723-03 or later. The latest supported firmware is 4.21.
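
As a quick check before configuring the array, you can verify on each cluster node
that the required patch revision is installed; a minimal sketch using standard
Solaris commands (the patch ID comes from the paragraph above):

    # Solaris 8/9: list installed patches and look for 113723-03 or later
    showrev -p | grep 113723

    # Solaris 10: the installed-patch list is also available through patchadd
    patchadd -p | grep 113723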

Sharing an SE 3510 RAID Array


Using LUN masking/mapping, several clustered and non-clustered devices can
share an SE 3510.

SE 3510 RAID Support Matrix and Exceptions


To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-15 to determine if there is limited HBA support.

TABLE 6-15 SE 3510 Array/Server Combinations with Limited HBA Support

Server Host Adapter

Netra CT 900 CP3010 FC2312-PMC-FF (a SBS PCI mezzanine card)


Netra CT 900 CP3060 SB-AMC55 a (a SANBlaze advanced mezzanine card)
SG-XPCIE2FC-ATCA-Z
Netra T5220 SG-XPCI2FC-EM4-Z, SG-XPCI2FC-QF4, SG-
XPCIE2FC-EM4, SG-XPCIE2FC-QF4
Sun Enterprise 3500-6500, E10k 6757A
Sun Fire T1000 SG-(X)PCIE2FC-QF4
SG-(X)PCIE2FC-EM4


a Netra CT 900 ATCA Blade Server supports any ATCA card that complies with PICMG 3.x specifications. The
third-party HBA has been tested with the Sun Netra CT 900 using the CP3060 blade under Sun Cluster, but this
HBA is not a Sun product and thus is not supported by Sun. A Sun-branded HBA is scheduled to be qualified and
supported in the Q1CY08 time frame.

4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed at the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html). Additionally, use
the “Server Search” under the “Searches” tab of the Interop Tool,
https://interop.central.sun.com/interop/interop

SE 3510 RAID Sample Configuration Diagrams


FIGURE 6-8 Direct-Attached, 4-Node, Dual-Controller SE 3510 RAID Configuration

FIGURE 6-9 Switch-Attached, Dual-Controller SE 3510 RAID Configuration

In FIGURE 6-9, the same set of LUNs is mapped to channels 0 and 5; a different set of
LUNs is mapped to channels 1 and 4.


Sun StorEdge 3511 RAID Array


This section describes the configuration rules for using Sun StorEdge 3511 RAID.

Only SE 3511 RAID units can be used as shared storage devices with Sun Cluster 3.
SE 3511 JBOD units can be attached to SE 3511 RAID units for additional storage, but
cannot be used independently of the SE 3511 RAID units in a Sun Cluster 3
configuration. Please read the recommended uses and limitations of the SE 3511 in
the SE 3511 base product documentation.

SE 3511 RAID Configuration Rules


■ Both AC and DC power supplies are supported.
■ Up to 5 SE 3511 JBOD arrays can be connected to an SE 3511 RAID array in a Sun
Cluster configuration.
■ It is highly recommended that the 4.11 or later array firmware be used with the
SE 3511. In particular, SE 3511 firmware releases earlier than 4.11 are exposed to
CR5059398, which can lead to data corruption. In general, it is recommended that
the latest supported firmware level be installed to benefit from the available fixes.
■ Logical Volumes are NOT supported.

Node Connectivity Limits


The SE 3511 RAID array can connect to up to 8 nodes.

A maximum of 8 nodes can directly connect to a LUN on an SE 3511 RAID array. A
maximum of 8 nodes can be connected through a switch to a LUN on an SE 3511
RAID array.

Hub and Switch Support


FC switches are supported. The SE 3511 array can also be directly attached, without
switches.

RAID Requirements
■ SE 3511 arrays can be used without a software volume manager with properly
configured dual controllers, multipathing, and hardware RAID.
■ A single SE 3511 array is supported with properly configured dual controllers,
multipathing, and hardware RAID.


■ Single controller SE 3511 RAID arrays are supported as long as they are mirrored
to another array.

Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with dual-controller SE 3511
configurations.

Volume Manager Support


All volume manager releases supported by Sun Cluster 3 and the SE3511 RAID
array.

Software, Firmware, and Patches


SE 3511 RAID array support requires Sun StorEdge SAN Foundation 4.4 (or later)
software. The latest supported firmware is 4.21.

Sharing an SE 3511 RAID Array


Using LUN masking/mapping, several clustered and non-clustered devices can
share an SE 3511 RAID array.

SE 3511 RAID Support Matrix and Exceptions


To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.


3. Check Table 6-16 to determine if there is limited HBA support.

TABLE 6-16 SE 3511 Array/Server combinations with Limited HBA Support

Server Host Adapter

Netra CT 900 CP3010 FC2312-PMC-FF (a SBS PCI mezzanine card)


Netra CT 900 CP3060 SB-AMC55 a (a SANBlaze advanced mezzanine card)
SG-XPCIE2FC-ATCA-Z
Sun Enterprise 3500-6500, E10k 6757A
Sun Fire T1000 SG-(X)PCIE2FC-QF4
SG-(X)PCIE2FC-EM4
a Netra CT 900 ATCA Blade Server supports any ATCA card that complies with PICMG 3.x specifications. The
third-party HBA has been tested with the Sun Netra CT 900 using the CP3060 blade under Sun Cluster, but this
HBA is not a Sun product and thus is not supported by Sun. A Sun-branded HBA is scheduled to be qualified and
supported in the Q1CY08 time frame.

4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed at the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html). Additionally, use
the “Server Search” under the “Searches” tab of the Interop Tool,
https://interop.central.sun.com/interop/interop

Sun StorEdge 3910/3960 System


This section describes the configuration rules for using Sun StorEdge 3910/3960 as
shared storage.

SE 3910/3960 Configuration Rules

Node Connectivity Limits


SE 3910/3960 systems can connect to up to 4 nodes.


Hub and Switch Support


FC switches are supported. SE 3910/3960 systems can also be directly attached,
without switches.

RAID Requirements
■ SE 3910/3960 systems can be used without software volume management with
properly configured dual controllers, multipathing, and hardware RAID.
■ T3 single bricks require software mirroring.

Multipathing
SE 3910/3960 systems require Sun StorEdge Traffic Manager (MPxIO).

Volume Manager Support


All volume manager releases supported by Sun Cluster 3 and the SE3910/3960.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.

Sharing an SE 3910/3960 System


An SE 3910/3960 system can be shared among multiple clustered and non-clustered
nodes. If the 3900 series system uses T3 firmware older than 2.1, then each cluster
connection should be in its own zone. If the 3900 series system uses T3 firmware 2.1
(or later), then the LUN masking and LUN mapping capabilities can be used to
provide exclusive access.

SE 3910/3960 Support Matrix


To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your
chosen server/storage combination is supported with Sun Cluster.


2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Refer to the SAN WWWW


(http://mysales.central/public/storage/products/matrix.html) for additional
information and restrictions.

Sun StorEdge 6120 Array

SE 6120 Configuration Rules


There is a maximum limit of 64 LUNs for any 6120/30 cluster configuration. Support
for more than 16 LUNs requires SE 6120 firmware version 3.1 or higher.

Node Connectivity Limits


The SE 6120 can connect to up to 8 nodes.

A maximum of 4 nodes can be connected to any one LUN.

Hubs and Switches


FC switches are required.

RAID Requirements
■ SE 6120 arrays are supported without software volume management, if you have
properly configured 6120 partner pairs, multipathing, and hardware RAID.
■ A single 6120 partner pair is supported with properly configured multipathing
and hardware RAID.
■ 6120 single bricks require software mirroring.
■ RAID 5 is supported for use with SE 6120 partner pair configurations.


Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with SE 6120 partner pair
configurations.

Volume Manager Support


■ All volume manager releases supported by Sun Cluster 3 and the SE 6120.
■ SE 6120 arrays are supported without software volume management with
properly configured multipathing and hardware RAID.

Software, Firmware, and Patches


SE 6120 firmware version 3.1 or higher is required to support more than 16 LUNs.

Sharing SE 6120 Arrays


Using LUN masking, several clustered and non-clustered nodes can share an SE
6120.

SE 6120 Support Matrix and Exceptions


To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-17 to determine if there is limited HBA support.

TABLE 6-17 SE 6120/6130 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra 20 6799A, 6727A


Sun Fire T1000 SG-(X)PCIE2FC-QF4
SG-(X)PCIE2FC-EM4


4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed in the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html)

Sun StorEdge 6130 Array

SE 6130 Configuration Rules

Node Connectivity Limits


■ The maximum limit of 64 LUNs does not apply to SE 6130 cluster configurations.
■ The SE 6130 array can connect to up to 8 nodes. However, the SE 6130 is not
compatible with RAC or CVM in configurations of more than 4 nodes.
■ The SE 6130 supports up to 8 nodes per LUN.

Hubs and Switches


■ FC switches are required if more than two nodes are connected to the same SE
6130. The SE 6130 can be direct attached if it is connected to only two nodes.

RAID Requirements
■ SE 6130 arrays are supported without software volume management, if you have
a properly configured 6130, multipathing, and hardware RAID.
■ A single SE 6130 is supported with properly configured multipathing and
hardware RAID.

Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with the SE 6130.

SE 6130 Volume Manager Support


■ All volume manager releases supported by Sun Cluster 3 and the SE 6130.


■ SE 6130 arrays are supported without software volume management with


properly configured multipathing and hardware RAID.

Software, Firmware, and Patches


■ Please see SE 6130 release notes.
■ SE 6130 Updates 1, 2, and 3 are supported.

Sharing SE 6130 Arrays


LUN masking enables the array to be shared among clustered and non-clustered systems.

SE 6130 Support Matrix and Exceptions


To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-18 to determine if there is limited HBA support.

TABLE 6-18 SE 6120/6130 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra 20 6799A, 6727A


Sun Fire T1000 SG-(X)PCIE2FC-QF4
SG-(X)PCIE2FC-EM4

4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed in the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html)

SE 6130 Sample Configuration Diagrams


The figures that follow show how to configure an SE 6130 as shared storage.


FIGURE 6-10 Sun StorEdge 6130 as Direct-Attached Storage

FIGURE 6-11 Sun StorEdge SE 6130 as SAN Storage


Sun StorageTek 6140 Array

ST 6140 Configuration Rules

Node Connectivity Limits


■ The ST 6140 array can connect to up to 8 nodes including Oracle RAC
configurations.
■ The ST 6140 supports up to 8 nodes per LUN.

Hubs and Switches


■ FC switches are required if more than four nodes are connected to the same ST
6140. The ST 6140 can be direct-attached if it is connected to four nodes or fewer.

RAID Requirements
■ ST 6140 arrays are supported without software volume management, if you have
a properly configured ST 6140, multipathing, and hardware RAID.
■ A single ST 6140 is supported with properly configured multipathing and
hardware RAID.

Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with the ST 6140.

ST 6140 Volume Manager Support


■ There are no Sun Cluster specific requirements. Please note the base product
documentation regarding Volume Manager support.
■ ST 6140 arrays are supported without software volume management with
properly configured multipathing and hardware RAID.


Software, Firmware, and Patches


■ Please see ST 6140 release notes.

Sharing ST 6140 Arrays


LUN masking enables the array to be shared among clustered and non-clustered systems.

ST 6140 Support Matrix and Exceptions


To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42 and Table 5-2,
“FC Storage for x64 Servers,” on page 46 to determine whether your chosen
server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-19 to determine if there is limited HBA support.

TABLE 6-19 ST 6140 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra CT 900 CP3060 SG-XPCIE2FC-ATCA-Z
Netra CT 900 CP3260 XCP32X0-RTM-FC-Z
Netra T5220 SG-XPCI2FC-EM4-Z, SG-XPCI2FC-QF4,
SG-XPCIE2FC-EM4, SG-XPCIE2FC-QF4
Sun Fire T1000 SG-(X)PCIE2FC-QF4
SG-(X)PCIE2FC-EM4

4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed at the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html). Additionally, use
the “Server Search” under the “Searches” tab of the Interop Tool,
https://interop.central.sun.com/interop/interop


Sun Storage 6180 Array

SS 6180 Configuration Rules

Node Connectivity Limits


■ The SS 6180 array can connect to up to 8 nodes including Oracle RAC
configurations.
■ The SS 6180 supports up to 8 nodes per LUN.

Hubs and Switches


■ The SS 6180 may be direct-attached when connected to up to four nodes.
■ FC switches are required if more than four nodes are connected to the SS 6180.

RAID Requirements
■ SS 6180 arrays are supported without software volume management with
properly configured multipathing, and hardware RAID.
■ A single SS 6180 is supported with properly configured multipathing and
hardware RAID.

Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with the SS 6180.

SS 6180 Volume Manager Support


■ There are no Sun Cluster specific requirements. Please note the base product
documentation regarding Volume Manager support.
■ SS 6180 arrays are supported without software volume management with
properly configured multipathing and hardware RAID.


Software, Firmware, and Patches


■ Supported starting with Sun Cluster 3.1u4.
■ Solaris 9 and Solaris 10 are supported; see the SS 6180 release notes for details.

Sharing SS 6180 Arrays


LUN masking enables the array to be shared among clustered and non-clustered systems.

SS 6180 Support Matrix and Exceptions


To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42 and Table 5-2,
“FC Storage for x64 Servers,” on page 46 to determine whether your chosen
server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-20 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination
as listed in the “Server Search” under the “Searches” tab of the Interop Tool,
https://interop.central.sun.com/interop/interop

TABLE 6-20 SS 6180 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

No limited HBA support at this time

Sun StorEdge 6320 System


This section describes the configuration rules for using Sun StorEdge 6320 as shared
storage.


SE 6320 Configuration Rules


There is a maximum limit of 64 LUNS in any 6320 configuration. Greater than 16
LUN support requires SE 6320 firmware version 3.1 or higher.

Node Connectivity Limits


SE 6320 systems can connect to up to 8 nodes. However, they are not compatible
with RAC or CVM in configurations of more than 4 nodes. Multiple 8-node N*N
clusters may be connected to an SE 6320 array.

Currently, a maximum of 4 nodes can be connected to an SE 6320 LUN.

Hub and Switch Support


FC switches are required. These switches can be the optional front-end switches of
the 6320 or compatible external switches.

External switches are supported with the “switchless” version of the 6320 (SE 6320 SL).

RAID Requirements
■ SE 6320 systems can be used without software volume management with
properly configured multipathing and hardware RAID.
■ A single 6320 is supported with properly configured multipathing and RAID.
■ Otherwise, data must be mirrored to another array.

Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required.

Volume Manager Support


■ All volume manager releases supported by Sun Cluster 3 and the SE 6320.
■ SE 6320 arrays are supported without software volume management with
properly configured multipathing and hardware RAID.


Software, Firmware, and Patches


SE 6320 systems require firmware version 3.1 (or later) to support more than 16
LUNs.

Sharing SE 6320 Systems with Multiple Clusters and


Non-Clustered Hosts
Sun Cluster 3 requires exclusive access to LUNs that store its shared data. Using
LUN masking, an SE 6320 LUN can be assigned to multiple nodes. This facility can be
used to share an SE 6320 among multiple clustered and non-clustered nodes.
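
From the cluster side, a simple way to confirm that the masked LUNs are visible to
exactly the intended nodes is to inspect the DID mappings; a minimal sketch, with
d10 used as an example DID instance:

    # List every DID instance together with the node paths mapped to it;
    # a shared SE 6320 LUN should appear once for each clustered node
    scdidadm -L

    # Focus on one DID instance of interest
    scdidadm -L | grep -w d10

    # After changing LUN masking on the array, rediscover devices and
    # update the DID database
    scdidadm -r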

SE 6320 Support Matrix and Exceptions


To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-21 to determine if there is limited HBA support.

TABLE 6-21 SE 6320 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Netra 20, Netra 1120/1225, Netra t 1400/1405 6799A, 6727A


Sun Fire T1000 SG-(X)PCIE2FC-QF4
SG-(X)PCIE2FC-EM4

4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed in the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html)


SE 6320 Sample Configuration Diagrams


FIGURE 6-12 Sun StorEdge SE 6320 Connected Through Switches to Cluster Nodes

FIGURE 6-13 Sun StorEdge SE 6320 Directly Connected to Cluster Nodes

Sun StorageTek 6540 Array


ST 6540 Configuration Rules

Node Connectivity Limits


■ The ST 6540 array can connect to up to 8 nodes including Oracle RAC clusters.

Hubs and Switches


■ FC switches are required if more than four nodes are connected to the same ST
6540.

RAID Requirements
■ ST 6540 arrays are supported without software volume management, if you have
a properly configured ST 6540, multipathing, and hardware RAID.
■ A single ST6540 is supported with properly configured multipathing and
hardware RAID.

Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with the ST 6540.

ST 6540 Volume Manager Support


■ There are no Sun Cluster 3 specific requirements; please refer to the base product
documentation.
■ ST 6540 arrays are supported without software volume management with
properly configured multipathing and hardware RAID.

Software, Firmware, and Patches


■ Please see ST 6540 release notes.

Sharing ST 6540 Arrays


LUN masking enables the array to be shared among clustered and non-clustered systems.


ST 6540 Support Matrix and Exceptions


To determine whether your configuration is supported:

1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-22 to determine if there is limited HBA support.

4. If HBA support is not limited, you can use your server and storage combination
with host adapters as listed in the “Server Search” under the “Searches” tab of
the Interop Tool, https://interop.central.sun.com/interop/interop

TABLE 6-22 ST 6540 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Sun Fire T1000 SG-(X)PCIE2FC-QF4
SG-(X)PCIE2FC-EM4
Netra T5220 SG-XPCI2FC-EM4-Z, SG-XPCI2FC-QF4,
SG-XPCIE2FC-EM4, SG-XPCIE2FC-QF4

Sun Storage 6580/6780 Arrays


This section describes the configuration rules for using Sun Storage 6580/6780 as
shared storage.

SS 6580/6780 Configuration Rules

Node Connectivity Limits


These arrays can connect to up to 8 nodes.


Hub and Switch Support


FC switches are supported.

RAID Requirements
■ SS 6580/6780 systems can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single SS 6580/6780 system is supported with properly configured
multipathing and hardware RAID.

Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required.

Volume Manager Support


All volume manager releases supported by Sun Cluster 3 and the SS 6580/6780.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.

Sharing SS 6580/6780 Systems with Multiple Clusters


and Non-Clustered Hosts
Sun Cluster 3 requires exclusive access to LUNs that store its shared data. Using the
LUN masking capabilities in the SVE, an SS 6580/6780 LUN can be assigned to
multiple nodes. This facility can be used to share an SS 6580/6780 among multiple
clustered and non-clustered nodes.

SS 6580/6780 Support Matrix


To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42, or Table 5-2, “FC
Storage for x64 Servers,” on page 46, to see if your chosen server/storage
combination is supported with Sun Cluster.


2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-23 to determine if there is limited HBA support

TABLE 6-23 SS 6580/6780 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

No limited HBA support at this time

4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed by the “Server Search” under the “Searches” tab of
the Interop Tool, https://interop.central.sun.com/interop/interop

Sun StorEdge 6910/6960 Arrays


This section describes the configuration rules for using Sun StorEdge 6910/6960 as
shared storage.

SE 6910/6960 Configuration Rules

Node Connectivity Limits


These arrays can connect to up to 2 nodes.

Hub and Switch Support


FC switches are supported.

RAID Requirements
■ SE 6910/6960 systems can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single 6910/6960 system is supported with properly configured multipathing
and hardware RAID.


Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required.

Volume Manager Support


All volume manager releases supported by Sun Cluster 3 and the SE 6910/6960.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.

Sharing SE 6910/6960 Systems with Multiple Clusters and


Non-Clustered Hosts
Sun Cluster 3 requires exclusive access to LUNs that store its shared data. Using the
LUN masking capabilities in the SVE, an SE 69x0 LUN can be assigned to multiple
nodes. This facility can be used to share an SE 69x0 among multiple clustered and
non-clustered nodes.

SE 6910/6960 Support Matrix


To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your
chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Check Table 6-24 to determine if there is limited HBA support

TABLE 6-24 SE 6910/6960 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Sun Fire T1000 SG-(X)PCIE2FC-QF4


SG-(X)PCIE2FC-EM4


4. Refer to the SAN WWWW


(http://mysales.central/public/storage/products/matrix.html) for additional
information and restrictions

Sun StorEdge 6920 System


This section describes the configuration rules for using Sun StorEdge 6920 as shared
storage.

SE 6920 Configuration Rules

Node Connectivity Limits


These arrays can connect to up to 8 nodes.

Hub and Switch Support


FC switches are supported.

RAID Requirements
■ SE 6920 systems can be used without software volume management if you have
properly configured multipathing and hardware RAID.
■ A single 6920 system is supported with properly configured multipathing and
hardware RAID.

SE 6920 Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required.

Volume Manager Support


■ All volume manager releases supported by Sun Cluster 3 and the SE 6920.


■ SE 6920 arrays are supported without software volume management with


properly configured multipathing and hardware RAID.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.

Sharing SE 6920 with Multiple Clusters and Non-


Clustered Hosts
Sun Cluster 3 requires exclusive access to LUNs that store its shared data. Using the
LUN masking capabilities in the SVE, an SE 6920 LUN can be assigned to multiple
nodes. This facility can be used to share an SE 6920 among multiple clustered and
non-clustered nodes.

Sun StorEdge 6920 system V. 3.0.0 support with Sun


Cluster
The Remote Replication, Snapshot, and Local Mirroring features of system
V. 3.0.0 are supported with Sun Cluster 3 and the SE 6920. The SE 6920’s
virtualization feature is supported with the following storage arrays as back-end
non-VLV LUN storage: T3B and the SE 6020/6120. For information on third-party
storage, please consult http://www.sun.com/software/cluster/osp/

SE 6920 Support Matrix


To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your
chosen server/storage combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60.

3. Refer to the SAN WWWW


(http://mysales.central/public/storage/products/matrix.html) for additional
information and restrictions.


4. Check Table 6-25 to determine if there is limited HBA support

TABLE 6-25 SE 6920 Array/Server Combinations with Limited HBA Support

Server Host Bus Adapter

Sun Fire T1000 SG-(X)PCIE2FC-QF4


SG-(X)PCIE2FC-EM4

Sun StorEdge 9910/9960 Arrays


This section describes the configuration rules for using Sun StorEdge 9910/9960 as
shared storage.

Note – For a configuration to be supported in a Sun Cluster configuration, it must


also be supported by the SE 9900 team. Please check the SE 9900 series “what works
with what” matrix first to ensure a given configuration is supported by the SE 9900
team. Also note that new server support is typically not released by the SE 9900
team/Hitachi until after the server’s GA.

SE 9910/9960 Configuration Rules


Sun Cluster 3 requires HOST MODE=09 and System Option Mode 185=O. See HDS
MK-90RD017-7 “9900 Sun Solaris Configuration Guide” for more info.

Node Connectivity Limits


A maximum of 8 SPARC nodes, or 4 x64 nodes, in a given cluster can be connected
simultaneously to a SE 9910/9960 LUN.

Hub and Switch Support


FC switches are supported.


RAID Requirements
■ The SE 9910/9960 can be used without software volume management if you have
properly configured multipathing and hardware RAID.
■ A single 9910/9960 volume is supported with properly configured multipathing
and hardware RAID.
■ Without multipathing, data must be mirrored to another array or to another
volume within the array using an independent I/O path.

Multipathing
Sun Cluster 3 now supports use of Sun StorEdge Traffic Manager (MPxIO) and Sun
Dynamic Link Manager (SDLM, formerly HDLM) for having multiple paths from a
cluster node to the SE 99x0 array. MPxIO is the multipathing solution applicable to
Sun HBAs, SDLM is the multipathing solution applicable to both JNI HBAs and Sun
HBAs (Sun HBA support with SDLM limited to SDLM 5.0/5.1/5.4). SDLM supports
both Solaris 8 and Solaris 9 (Sol 9 support limited to SDLM 4.1, 5.0, 5.1 and 5.4).

No other storage multipathing solutions (for example Veritas DMP) are supported
with Sun Cluster.

By using multiple paths and either MPxIO or SDLM in conjunction with hardware
RAID, the requirement to host base mirror the data on a SE 9910/9960 is removed.

Please note that only SDLM versions 5.0, 5.1, and 5.4 support VxVM (versions 3.2
and 3.5).

SDLM/HDLM support is limited to SPARC and sharing of a LUN to only 2 cluster


nodes. There is no SDLM/HDLM for Solaris x86.
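
On Solaris 10 with MPxIO, the path state of a LUN can be confirmed before relying
on hardware RAID without host-based mirroring; a brief sketch (the logical-unit
device name is an example):

    # List the multipathed logical units discovered by MPxIO
    mpathadm list lu

    # Show path and target port state for one logical unit; both paths to
    # the SE 9910/9960 should report an operational state
    mpathadm show lu /dev/rdsk/c5t60060E8004F2920000002900000001d0s2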

Volume Manager Support


■ MPxIO: All volume manager releases supported by Sun Cluster 3 and the SE
9910/9960.
■ SDLM: Please refer to “Multipathing” on page 112.
■ SE 9910/9960 arrays are supported without software volume management with
properly configured multipathing and hardware RAID.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.


Sharing an SE 9910/9960 Among Several Clusters or


Non-Clustered Systems
A single SE 9910/9960 can be utilized by several separately clustered or non-
clustered devices. The main requirement for this functionality being that the ports of
the SE 9910/9960 must be assigned properly so that no two clusters can see each
other’s storage. This can be done either through physical cabling or by using
SANtinel.

SE 9910/9960 Special Features

TrueCopy
Sun StorEdge 9910/9960 TrueCopy is supported with Sun Cluster 3 with the
following configuration details:
■ Both synchronous and asynchronous modes of operations are supported.
■ When using an MPxIO LUN as the Command Control Interface (CCI) command
device, CCI 01-10-03/02 and microcode 01-18-09-00/00 or better must be used.
■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster
■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate
data within the same cluster as an alternative to host-based mirroring.
See “TrueCopy Support” on page 291 for more info.
■ TrueCopy pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device. (A quorum
configuration sketch follows this list.)
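
Because TrueCopy pair LUNs and command device LUNs are excluded, the quorum device
must be configured on an ordinary LUN. A minimal sketch using the standard Sun
Cluster commands, with d20 as an example DID instance:

    # Sun Cluster 3.0/3.1: add an ordinary DID instance as a quorum device
    scconf -a -q globaldev=d20

    # Sun Cluster 3.2: the equivalent object-oriented command
    clquorum add d20

    # Verify quorum devices and vote counts
    scstat -q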

SANtinel and LUSE


SANtinel and LUSE are both supported for usage within a Sun Cluster 3
environment. Please see the SE 9900 series documentation for more information on
SANtinel and LUSE.

ShadowImage
Sun StorEdge 9900 ShadowImage is now supported with Sun Cluster 3 with the
following configuration details:
■ When using an MPxIO LUN as the Command Control Interface (CCI) command
device, CCI 01-10-03/02 and microcode 01-18-09-00/00 or better must be used.


■ The Remote Console may be used


Caution: This note applies to configurations using host-based mirroring with
SE 9910/9960 arrays. If ShadowImage is used to restore data from a suspended
pair (PSUS), make sure that you perform the relevant volume-manager steps
prior to executing either a reverse-copy or a quick-restore. This will ensure that
you don’t corrupt your mirror.
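
As an illustration only, when the host-based mirror is built with Solaris Volume
Manager, one possible sequence is to detach the submirror that is not being restored
before the ShadowImage operation and reattach it afterwards so that it resynchronizes
from the restored copy; the metadevice names are placeholders, and the authoritative
steps are those in your volume manager documentation.

    # Before the reverse-copy or quick-restore: quiesce or unmount the data
    # and detach the submirror that is not being restored
    metadetach d10 d12

    # After the restore completes: reattach it; SVM resynchronizes d12
    # from the restored submirror d11 under mirror d10
    metattach d10 d12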

Graphtrack and LUN Manager


Graphtrack and LUN Manager are supported

SE 9910/9960 Support Matrix


To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC
Storage for x64 Servers,” on page 46 to see if your chosen server/storage
combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60

TABLE 6-26 SE 9910/9960 Array/Server Combinations with Additional HBA Support

Server Host Adaptera

Sun Enterprise 220R, 250, 420R, 450 XT8-FCE-6460-N


Sun Fire V880, V1280, 4800/4810, E4900, 6800, E6900, 12K/15K, E20K/E25K XT8-FCI-1063-N
Sun Enterprise 3x00-6x00 XT8-FC64-1063-N
XT8-FCE-1473-N
Sun Enterprise 10K XT8-FC64-1063-N
XT8-FCE-1473-N
XT8-FCE-6460-N
Sun Fire 280R, V480 XT8-FCE-6460-N
Sun Fire 3800 XT8-FCC-6460-N
a When selecting one of these “XT8-FC” HBAs, all HBAs sharing the LUN are required to be an “XT8-FC” HBA, although
not necessarily the same model.

3. And choose a supported FC switch from the list in “Supported SAN Switches” on
page 62


4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support
Matrix,” SunWIN Token Number 344150, for additional details.

Sun StorEdge 9970/9980


This section describes the configuration rules for using Sun StorEdge 9970/9980 as
shared storage.

Note – For a configuration to be supported in a Sun Cluster configuration, it must


also be supported by the SE 9900 team. Please check the SE 9900 series “what works
with what” matrix first to ensure a given configuration is supported by the SE 9900
team. Also note that new server support is typically not released by the SE 9900
team/Hitachi until after the server’s GA.

SE 9970/9980 Configuration Rules


For 9970/9980, Sun Cluster 3 requires HOST MODE=09 - see HDS MK-92RD123-5
“9900 Series Sun Solaris Configuration Guide” for more info.

Node Connectivity Limits


A maximum of 8 SPARC nodes, or 4 x64 nodes, in a given cluster can be connected
simultaneously to a SE 9970/9980 LUN.

Hub and Switch Support


FC switches are supported.

RAID Requirements
■ SE 9970/9980 arrays can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single SE 9970/9980 array is supported with properly configured multipathing
and hardware RAID.


■ Without multipathing, data must be mirrored to another array or to another


volume within the 9970/9980 array using an independent I/O path.

Multipathing
Sun Cluster 3 now supports use of Sun StorEdge Traffic Manager (MPxIO) and Sun
Dynamic Link Manager (SDLM- formerly HDLM) for having multiple paths from a
cluster node to the SE 99x0 array. MPxIO is the multipathing solution applicable to
Sun HBAs, SDLM is the multipathing solution applicable to both JNI HBAs and Sun
HBAs (Sun HBA support with SDLM limited to SDLM 5.0/5.1/5.4). SDLM supports
both Solaris 8 and Solaris 9 (Sol 9 support limited to SDLM 4.1, 5.0, 5.1, and 5.4).

No other storage multipathing solutions (for example Veritas DMP) are supported
with Sun Cluster.

By using multiple paths and either MPxIO or SDLM in conjunction with hardware
RAID, the requirement to host base mirror the data on a SE 9970/9980 is removed.

SDLM/HDLM support is limited to SPARC and sharing of a LUN to only 2 cluster


nodes. There is no SDLM/HDLM for Solaris x86.

Note – Only SDLM versions 5.0, 5.1, and 5.4 support VxVM (versions 3.2 and 3.5).

Volume Manager Support


■ MPxIO: All volume manager releases supported by Sun Cluster 3 and the SE
9970/9980.
■ SDLM: Please refer to “Multipathing” on page 116.
■ SE 9970/9980 arrays are supported without software volume management with
properly configured multipathing and hardware RAID.

Sharing an SE 9970/9980 Among Several Clusters or


Non-Clustered Systems
A single SE 9970/9980 can be utilized by several separately clustered or non-
clustered devices. The main requirement for this functionality being that the ports of
the SE 9970/9980 must be assigned properly so that no two clusters can see each
other’s storage. This can be done either through physical cabling or by using
SANtinel.


Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.

SE 9970/9980 Special Features

TrueCopy
Sun StorEdge 9970/9980 TrueCopy is supported with Sun Cluster 3 with the
following configuration details:
■ Both synchronous and asynchronous modes of operations are supported.
■ When using an MPxIO LUN as the Command Control Interface (CCI) command
device, CCI 01-10-03/02 and microcode 21-02-23-00/00 or better must be used.
■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster
■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate
data within the same cluster as an alternative to host-based mirroring.
See “TrueCopy Support” on page 291 for more info.
■ TrueCopy pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.

SANtinel and LUSE


SANtinel and LUSE are both supported for usage within a Sun Cluster 3
environment. Please see the SE 9970/9980 series documentation for more
information on SANtinel and LUSE.

ShadowImage
Sun StorEdge 9970/9980 ShadowImage is now supported with Sun Cluster 3 with
the following configuration details:
■ When using an MPxIO LUN as the Command Control Interface (CCI) command
device, CCI 01-10-03/02 and microcode 21-02-23-00/00 or better must be used.
■ The Remote Console may be used


Caution – This note applies to configurations using host-based mirroring with SE


9970/9980 arrays. If ShadowImage is used to restore data from a suspended pair
(PSUS), make sure that you perform the relevant volume-manager steps prior to
executing either a reverse-copy or a quick-restore. This will ensure that you don’t
corrupt your mirror.

Graphtrack and LUN Manager


Graphtrack and LUN Manager are supported

SE 9970/9980 Support Matrix


To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC
Storage for x64 Servers,” on page 46 to see if your chosen server/storage
combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60

TABLE 6-27 SE 9970/9980 Array/Server Combinations with Additional HBA Support

Server Host Adaptera

Sun Enterprise 3x00-6x00 XT8-FC64-1063-N


XT8-FCE-1473-N
Sun Enterprise 10K XT8-FCE-1473-N
Sun Fire 3800 XT8-FCC-6460-N
Sun Fire 12K/15K, E20K/E25K XT8-FCE-6460-N
XT8-FCI-1063-N
a When selecting one of these “XT8-FC” HBAs, all HBAs sharing the LUN are re-
quired to be an “XT8-FC” HBA, although not necessarily the same model.

3. And choose a supported FC switch from the list in “Supported SAN Switches” on
page 62

4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support
Matrix,” SunWIN Token Number 344150, for additional details.


Sun StorageTek 9985/9990


This section describes the configuration rules for using Sun StorageTek 9985/9990 as
shared storage.

Note – For a configuration to be supported in a Sun Cluster configuration, it must


also be supported by the ST 9900 team. Please check the ST 9900 series “what works
with what” matrix first to ensure a given configuration is supported by the ST 9900
team. Also note that new server support is typically not released by the ST 9900
team/Hitachi until after the server’s GA.

ST 9985/9990 Configuration Rules

Node Connectivity Limits


A maximum of 8 SPARC nodes, or 4 x64 nodes, in a given cluster can be connected
simultaneously to a ST 9985/9990 LUN.

Hub and Switch Support


FC switches are supported.

RAID Requirements
■ ST 9985/9990 arrays can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single ST 9985/9990 array is supported with properly configured multipathing
and hardware RAID.
■ Without multipathing, data must be mirrored to another array or to another
volume within the ST 9985/9990 array using an independent I/O path.


Multipathing
Sun Cluster 3 now supports use of Sun StorEdge Traffic Manager (MPxIO) and Sun
Dynamic Link Manager (SDLM- formerly HDLM) for having multiple paths from a
cluster node to the ST 9985/9990 array. MPxIO is the multipathing solution
applicable to Sun HBAs, SDLM is the multipathing solution applicable to both JNI
HBAs and Sun HBAs (Sun HBA support with SDLM limited to SDLM 5.0/5.1/5.4).
SDLM supports both Solaris 8 and Solaris 9 (Sol 9 support limited to SDLM 4.1, 5.0,
5.1 and 5.4).

No other storage multipathing solutions (for example Veritas DMP) are supported
with Sun Cluster.

By using multiple paths and either MPxIO or SDLM in conjunction with hardware
RAID, the requirement to host base mirror the data on a ST 9985/9990 is removed.

SDLM/HDLM support is limited to SPARC and sharing of a LUN to only 2 cluster


nodes. There is no SDLM/HDLM for Solaris x86.

Note – Only SDLM versions 5.0, 5.1, and 5.4 support VxVM (versions 3.2 and 3.5).

Volume Manager Support


■ MPxIO: All volume manager releases supported by Sun Cluster 3 and the ST
9985/9990.
■ SDLM: Please refer to “Multipathing” on page 120.
■ ST 9985/9990 arrays are supported without software volume management with
properly configured multipathing and hardware RAID.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.

Sharing an ST 9985/9990 Among Several Clusters or


Non-Clustered Systems
A single ST 9985/9990 can be utilized by several separately clustered or non-
clustered devices. The main requirement for this functionality being that the ports of
the ST 9985/9990 must be assigned properly so that no two clusters can see each
other’s storage. This can be done either through physical cabling or by using
SANtinel.


ST 9985/9990 Special Features

TrueCopy
Sun StorageTek 9985/9990 TrueCopy is supported with Sun Cluster 3 with the
following configuration details:
■ Both synchronous and asynchronous modes of operations are supported.
■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster
■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate
data within the same cluster as an alternative to host-based mirroring.
See “TrueCopy Support” on page 291 for more info.
■ TrueCopy pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.

Universal Replicator
Universal Replicator is supported with Sun Cluster 3 as follows:
■ Universal Replicator can be used with Sun Cluster to replicate data outside of the
cluster.
■ Using Universal Replicator to replicate data within a cluster is not supported.
■ Universal Replicator pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.

SANtinel and LUSE


SANtinel and LUSE are both supported for usage within a Sun Cluster 3
environment. Please see the ST 9985/9990 series documentation for more
information on SANtinel and LUSE.

ShadowImage
Sun StorageTek 9985/9990 ShadowImage is now supported with Sun Cluster 3 with
the following configuration details:
■ Microcode versions TBD
■ The Remote Console may be used


Caution – This note applies to configurations using host-based mirroring with ST


9985/9990 arrays. If ShadowImage is used to restore data from a suspended pair
(PSUS), make sure that you perform the relevant volume-manager steps prior to
executing either a reverse-copy or a quick-restore. This will ensure that you don’t
corrupt your mirror.

Graphtrack and LUN Manager


Graphtrack and LUN Manager are supported

ST 9985/9990 Support Matrix


To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC
Storage for x64 Servers,” on page 46 to see if your chosen server/storage
combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60

3. And choose a supported FC switch from the list in “Supported SAN Switches” on
page 62

4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support
Matrix,” SunWIN Token Number 344150, for additional details.

Sun StorageTek 9985V/9990V


This section describes the configuration rules for using Sun StorageTek 9985V/9990V
as shared storage.

Note – For a configuration to be supported in a Sun Cluster configuration, it must


also be supported by the ST 9900 team. Please check the ST 9900 series “what works
with what” matrix first to ensure a given configuration is supported by the ST 9900
team. Also note that new server support is typically not released by the ST 9900
team/Hitachi until after the server’s GA.


ST 9985V/9990V Configuration Rules

Node Connectivity Limits


■ A maximum of 8 SPARC nodes, or 4 x64 nodes, in a given cluster can be
connected simultaneously to a ST 9985V/9990V LUN.
■ ST 9985V/9990V also support up to 16 SPARC nodes to a LUN when used with
Oracle RAC. See Table 11-13, “Oracle RAC Support with Sun Cluster 3.2 for
SPARC,” on page 247 for more info.

Hub and Switch Support


FC switches are supported.

RAID Requirements
■ ST 9985V/9990V arrays can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single ST 9985V/9990V array is supported with properly configured
multipathing and hardware RAID.
■ Without multipathing, data must be mirrored to another array or to another
volume within the ST 9985V/9990V array using an independent I/O path.

Multipathing
Sun Cluster 3 now supports use of Sun StorEdge Traffic Manager (MPxIO) and Sun
Dynamic Link Manager (SDLM- formerly HDLM) for having multiple paths from a
cluster node to the ST 9985V/9990V array. MPxIO is the multipathing solution
applicable to Sun HBAs, SDLM is the multipathing solution applicable to both JNI
HBAs and Sun HBAs (Sun HBA support with SDLM limited to SDLM 5.0/5.1/5.4).
SDLM supports both Solaris 8 and Solaris 9 (Sol 9 support limited to SDLM 4.1, 5.0,
5.1 and 5.4).

No other storage multipathing solutions (for example Veritas DMP) are supported
with Sun Cluster.

By using multiple paths and either MPxIO or SDLM in conjunction with hardware
RAID, the requirement to host base mirror the data on a ST 9985V/9990V is
removed.


SDLM/HDLM support is limited to SPARC and sharing of a LUN to only 2 cluster


nodes. There is no SDLM/HDLM for Solaris x86.

Note – Only SDLM versions 5.0, 5.1 and 5.4 support VxVM (versions 3.2 and 3.5).

Volume Manager Support


■ MPxIO: All volume manager releases supported by Sun Cluster 3 and the ST
9985V/9990V.
■ SDLM: Please refer to “Multipathing” on page 120.
■ ST 9985V/9990V arrays are supported without software volume management
with properly configured multipathing and hardware RAID.

Software, Firmware, and Patches


There are no Sun Cluster 3 specific requirements.

Sharing an ST 9985V/9990V Among Several Clusters or


Non-Clustered Systems
A single ST 9985V/9990V can be utilized by several separately clustered or non-
clustered devices. The main requirement for this functionality being that the ports of
the ST 9985V/9990V must be assigned properly so that no two clusters can see each
other’s storage. This can be done either through physical cabling or by using
SANtinel.

ST 9985V/9990V Special Features

TrueCopy
Sun StorageTek 9985V/9990V TrueCopy is supported with Sun Cluster 3 with the
following configuration details:
■ Both synchronous and asynchronous modes of operations are supported.
■ CCI package version 01-19-03/04 and later can be used on the host side.
■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster


■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate
data within the same cluster as an alternative to host-based mirroring.
See “TrueCopy Support” on page 291 for more info.
■ TrueCopy pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.

Universal Replicator
Universal Replicator is supported with Sun Cluster 3 as follows:
■ Universal Replicator can be used with Sun Cluster to replicate data outside of the
cluster.
■ Using Universal Replicator to replicate data within a cluster is not supported.
■ Universal Replicator pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.

SANtinel and LUSE


SANtinel and LUSE are both supported for usage within a Sun Cluster 3
environment. Please see the ST 9985V/9990V series documentation for more
information on SANtinel and LUSE.

ShadowImage
Sun StorageTek 9985V/9990V ShadowImage is now supported with Sun Cluster 3
with the following configuration details:
■ Microcode versions TBD
■ The Remote Console may be used

Caution – This note applies to configurations using host-based mirroring with ST


9985V/9990V arrays. If ShadowImage is used to restore data from a suspended pair
(PSUS), make sure that you perform the relevant volume-manager steps prior to
executing either a reverse-copy or a quick-restore. This will ensure that you don’t
corrupt your mirror.

Graphtrack and LUN Manager


Graphtrack and LUN Manager are supported


ST 9985V/9990V Support Matrix


To determine whether your configuration is supported:

1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC
Storage for x64 Servers,” on page 46 to see if your chosen server/storage
combination is supported with Sun Cluster.

2. If your combination is supported, choose a supported HBA from the list in


“Supported SAN Host Bus Adapters (HBAs)” on page 60. Note: Only Sun-
branded Emulex and QLogic HBAs are supported for N*N configurations larger
than 8 nodes.

3. And choose a supported FC switch from the list in “Supported SAN Switches” on
page 62

4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support
Matrix,” SunWIN Token Number 344150, for additional details.



CHAPTER 7

SCSI Storage Support

This chapter covers the SCSI storage devices supported with Sun Cluster.

Netra st D130 Array

Netra st D130 RAID Requirements


In order to ensure data redundancy and hardware redundancy, software mirroring
across boxes is required.

The other configuration rules for using Netra st D130 as shared storage are listed
below.
■ Daisy Chaining of Netra st D130 is not supported.
■ Host Adapters supported with Netra st D130 are listed below:

TABLE 7-1 Sun Cluster and Netra st D130 Support Matrix for SPARC

Host  Host Adapter  Part # for Host Adapter  Maximum Node Connectivity

Netra T1 AC200/DC200  onboard UltraSCSI port a  2
  SunSwift Adapter, PCI  1032A
Netra t 1400/1405  SunSwift Adapter, PCI  1032A
a Onboard SCSI port must be used for one storage connection due to the limited number of PCI slots on the server.


■ Cables supported with Netra st D130 are listed below:

TABLE 7-2 Netra st D130 Supported Cables

Cable Part # of Cable

2-meter Ultra SCSI-3/SCSI-3 cable 1139A


2-meter SCSI-3/VHDCI cable with right angle connector 959A
0.36-meter SCSI-3 cable with right-angled connector 6917A
0.8-meter Ultra SCSI-3/SCSI-3 cable 1134A

Figure 7-1 below shows how to configure Netra st D130 as a shared storage.

FIGURE 7-1 Netra st D130 as Shared Storage

[Figure: two Netra T1 200 nodes, each connected through its onboard SCSI port and an HBA to two Netra st D130 arrays; one array holds the data and the other holds the mirror.]

Netra st A1000 Array

Netra st A1000 RAID Requirements


In order to ensure data redundancy and hardware redundancy, software mirroring
across boxes is required.

The other configuration rules for using the Netra st A1000 as shared storage
are listed below.


■ Daisy-chaining of Netra st A1000 arrays is not supported.


■ The support matrix for Netra st A1000 with Sun Cluster 3 is:

TABLE 7-3 Netra st A1000 and Sun Cluster 3 Support Matrix for SPARC

Servers: Netra t 1120/1125, Netra 1400/1405, Netra 20
Host Bus Adapters: UltraSCSI adapter (6541A)
Connectivity: Direct Attached
Maximum Node Connectivity: 2

■ Cables supported with Netra st A1000 are listed below:

TABLE 7-4 Netra st A1000 Supported Cables

Cable Part # of the Cable

0.16-meter, SCSI-3 cable, SCSI-3/SCSI-3 with right angled connector 991A


2-meter, SCSI-3 cable, SCSI-3/VHDCI with right angled connector 992A
4-meter, SCSI-3 cable, SCSI-3/VHDCI with right angled connector 993A
4.0-meter, 68-pin to VHDC differential SCSI cable 3830A
10.0-meter, 68-pin to VHDC differential SCSI cable 3831A

Netra st D1000 Array

Netra st D1000 RAID Requirements


In order to ensure data redundancy and hardware redundancy, software mirroring
across boxes is required. Hence, a cluster configuration with a single st D1000 in a
split-bus configuration, with data mirrored across the two halves of st D1000, is not
supported.

The other configuration rules for using Netra st D1000 as shared storage are listed
below.
■ Daisy chaining of Netra st D1000s is not supported.
■ Single Netra st D1000, in split-bus configuration, is not supported.


■ Host adapter supported with Netra st D1000 is listed below:

TABLE 7-5 Sun Cluster 3 and Netra st D1000 Support Matrix for SPARC

Part # of the Host Maximum Node


Server Host Adapter Adapter Connectivity

Netra 1120/1125, Netra t PCI-to-differential 6541A 2


1400/1405, Netra 20, Netra 240 UltraSCSI Host
Adapter (UD2S)

■ Cables supported with Netra st D1000 are listed below:

TABLE 7-6 Netra st D1000 Supported Cables

Cable Part # of the Cable

0.16-meter, SCSI-3 cable, SCSI-3/SCSI-3 with right angled connector 991A


2-meter, SCSI-3 cable, SCSI-3/VHDCI with right angled connector 992A
4-meter, SCSI-3 cable, SCSI-3/VHDCI with right angled connector 993A
4.0-meter, 68-pin to VHDC differential SCSI cable 3830A
10.0-meter, 68-pin to VHDC differential SCSI cable 3831A

The figure below shows how to configure the Netra st D1000 as shared storage.


FIGURE 7-2 Two Netra st D1000s, in Single-Bus Configuration, as Shared Storage.

[Figure: Node 1 and Node 2, each with host adapters HA1 and HA2, connected to Netra st D1000 #1 (data) and Netra st D1000 #2 (mirror).]

Sun StorEdge MultiPack

SE Multipack RAID Requirements


In order to ensure data redundancy and hardware redundancy, software mirroring
across boxes is required.

The other configuration rules for using MultiPack as a shared storage are listed
below.
■ Daisy Chaining of MultiPacks is not supported.
■ Host adapters supported with MultiPack are listed below (a short worked example of the loop-length limits follows the table):

TABLE 7-7 Sun Cluster 3 and SE Multipack Support Matrix for SPARC

Maximum node connectivity: 2

Host: Sun Enterprise 220R, 250, 420R, 450
Host Adapters (Part #):
Dual-channel single-ended UltraSCSI host adapter, PCI (US2S) (6540A) - SCSI loop length must not exceed 3m (1.5 meters if using 3-6 disks)
SunSwift Adapter, PCI (1032A) - SCSI loop length must not exceed 6m
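The loop-length limits in Table 7-7 can be expressed as a small helper. This Python sketch is illustrative only; the part numbers and limits come from the table above, while the function name and interface are assumptions and not part of any Sun tool.

```python
# Illustrative lookup of the SE MultiPack SCSI loop-length limits listed in Table 7-7.
def multipack_max_loop_m(adapter, disks_in_multipack):
    """Return the maximum SCSI loop length in meters for the given host adapter."""
    if adapter == "6540A":  # Dual-channel single-ended UltraSCSI host adapter, PCI (US2S)
        return 1.5 if 3 <= disks_in_multipack <= 6 else 3.0
    if adapter == "1032A":  # SunSwift Adapter, PCI
        return 6.0
    raise ValueError(f"adapter {adapter!r} is not listed in Table 7-7")

print(multipack_max_loop_m("6540A", disks_in_multipack=4))  # 1.5
print(multipack_max_loop_m("1032A", disks_in_multipack=2))  # 6.0
```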


■ Cables supported with MultiPack are listed below:

TABLE 7-8 SE Multipack Supported Cables

Cable Part # of Cable

68-pin, 0.8 meter, external SCSI cable 901A


68-to-68 pin, 2.0 meter, external SCSI cable 902A

Figure 7-3 below shows how to configure MultiPack as a shared storage:

FIGURE 7-3 Sun StorEdge MultiPack as Shared Storage

[Figure: Node 1 and Node 2, each with host adapters HA1 and HA2, connected to MultiPack #1 (data) and MultiPack #2 (mirror).]

Sun StorEdge D2 Array

SE D2 RAID Requirements
Since D2 doesn’t have RAID capabilities built-in, host-based mirroring using
VxVM/SDS is required.

This host based mirroring requirement ensures the physical path redundancy. With
dual ESM modules, there are no single points of failure in a D2 array. Hence, a
cluster configuration with a single D2 in a split-bus configuration, with data
mirrored across the two halves of the D2, is supported.


SE D2 Support Matrix
The support matrix for D2 with Sun Cluster 3 is:

TABLE 7-9 Sun StorEdge D2 and Sun Cluster 3 Support Matrix for SPARC

Max. SCSI Bus Maximum Node


Host Host Adapter Cable Lengthe Connectivity

Netra t 1120/1125, Netra Sun StorEdge PCI dual 0.8m (1136A) 25m 2
1400/1405, Netra 20, Netra 240 Ultra 3 SCSI (6758A) 1.2m (1137A)
AC/DC, Netra 1280 SG-XPCI2SCSI-LM320 2m (1138A)
Sun Enterprise 220R, 250, 420R 4m (3830B)
Sun Fire 280R, V480/V490, 10m (3831B)
V880/V890, V1280
Netra 440a Onboard SCSI Port
6757A
Sun Fire V210, V240b, V250, V440c Onboard SCSI port 0.8m (1132A) 25m
Sun 6758 2m (3832A)
SG-XPCI2SCSI-LM320 4m (3830A)
10m (3831A)
Sun Fire V215/V245, V445 SGXPCI1SCSI-LM320-Z 0.8m (1136A) 25m
SGXPCI2SCSI-LM320-Z 1.2m (1137A)
SGXPCIE2SCSIU320Zd 2m (1138A)
(x)4422A-2 4m (3830B)
10m (3831B)
Sun Fire T1000 SG-(X)PCIE2SCSIU320Z 0.8m (1132A) 12m
Sun Fire T2000 SG-XPCIE2SCSIU320Z 2m (3832A)
4m (3830A)
10m (3831A)
a In order to use the Netra 440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
b The onboard SCSI port must be used for one shared storage connection due to the server only having one PCI slot.
c In order to use the SF V440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
d This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
e From each host to D2, including the internal bus lengths


Sun StorEdge S1 Array

SE S1 Array RAID Requirements


In order to ensure data redundancy and hardware redundancy, software mirroring
across boxes is required.

The other configuration rules for using Sun StorEdge S1 as shared storage are listed
below.
■ Daisy Chaining of Sun StorEdge S1 is not supported.
■ Sun StorEdge S1 is supported in direct attached configurations.


■ Host adapters supported with Sun StorEdge S1 are listed below:


TABLE 7-10 Sun StorEdge S1 and Sun Cluster 3 Support Matrix for SPARC

Maximum
Max. SCSI Node
Host Host Adapter Cable Bus Lengthg Connectivity

Netra T1 AC200/DC200a onboard UltraSCSI port 0.8m (1134A) 3m 2


Netra t 1400/1405, Netra 20 SunSwift Adapter, PCI 2m (1139A)
(1032A)
Netra t 1400/1405, Netra 20 Sun StorEdge PCI dual 0.8m (1132A) 12m
Sun Enterprise 220R, 250, 420R, 450 Ultra 3 SCSI (6758A) 2m (3832A)
Sun StorEdge Dual Fast 4m (3830A)
Ethernet + SCSI Adapter 10m (3831A)
(2222A)
Sun 4422,
SG XPCI2SCSI-LM320
Netra 120b/Sun Fire V120 Sun StorEdge Dual Fast
Ethernet + SCSI Adapter
(2222A)
4422
Onboard SCSI port
Netra 240 AC/DC, Netra 1280, Onboard SCSI port
Netra 1290 Sun 2222A
Sun Fire V210c, V240, V250, 280R, Sun 4422
V440d, V480/V490, V880/V890, Sun 6758
V1280 SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 440 e Onboard SCSI port
Sun 6758
X4422A
SG-PCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire V125 Onboard SCSI
X4422A-2
SGXPCI2SCSILM320-Z
SGXPCI1SCSILM320-Z
Sun Fire V215/V245, V445 SGXPCI1SCSI-LM320-Z
SGXPCI2SCSI-LM320-Z
SGXPCIE2SCSIU320Zf
(x)4422A-2
Sun Fire T1000 SG-(X)PCIE2SCSIU320Z
Sun Fire T2000 SG-XPCIE2SCSIU320Z


a The on-board SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
b The on-board SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
c The on-board SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
d In order to use the SF V440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
e In order to use the Netra 440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
f This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
g This includes connectivity to both the hosts.

The figure below shows how to configure Sun StorEdge S1 as a shared storage in a
Netra T1 200 cluster:

FIGURE 7-4 Sun StorEdge S1 as Shared Storage

[Figure: two Netra T1 200 nodes, each connected through its onboard SCSI port and an HBA to two Sun StorEdge S1 arrays; one array holds the data and the other holds the mirror.]

Sun StorEdge A1000 Array

SE A1000 RAID Requirements


In order to ensure data redundancy and hardware redundancy, software mirroring
across boxes is required.

The configuration rules for using the Sun StorEdge A1000 as shared storage
are listed below.
■ Daisy-chaining of A1000 arrays is supported.


■ The support matrix for A1000 with Sun Cluster 3 is:

TABLE 7-11 SE A1000 and Sun Cluster Support Matrix for SPARC

Max Node
Servers Host Bus Adapters Connectivity

Netra 440 PCI-to-differential UltraSCSI Direct


Sun Enterprise 220R, 250, 420R, 450 adapter (6541A) Attached
Sun Fire 280R, V440, V480/V490, 2 Nodes
V880/V890, V1280
Sun Enterprise 3x00-6x00 (SBus only) SBus-to-differential UltraSCSI Direct
adapter, UDWIS/S (1065A) Attached
2 Nodes

■ Cables supported with Sun StorEdge A1000 are listed below:

TABLE 7-12 SE A1000 Supported Cables

Cable Part # of the Cable

0.16-meter, SCSI-3 cable, SCSI-3/SCSI-3 with right angled connector 991A


2-meter, SCSI-3 cable, SCSI-3/VHDCI with right angled connector 992A
4-meter, SCSI-3 cable, SCSI-3/VHDCI with right angled connector 993A
4.0-meter, 68-pin to VHDC differential SCSI cable 3830A
10.0-meter, 68-pin to VHDC differential SCSI cable 3831A

Sun StorEdge D1000 Array

SE D1000 RAID Requirements


In order to ensure data redundancy and hardware redundancy, software mirroring
across boxes is required. Hence, a cluster configuration with a single D1000 in a
split-bus configuration, with data mirrored across the two halves of D1000, is not
supported.

The other configuration rules for using D1000 as shared storage are listed below.
■ Daisy chaining of D1000s is not supported.


■ Host adapters supported with D1000 are listed below:

TABLE 7-13 Sun Cluster and SE D1000 Support Matrix for SPARC

Maximum
Part # of the Node
Server Host Adapter Host Adapter Connectivity

Netra 440, Netra 1280 PCI-to-differential 6541A 2


Sun Enterprise 220R, 250, 420R, 450 UltraSCSI Host
Sun Fire 280R, V440, V480/V490, Adapter (UD2S)
V880/V890, V1280
Sun Enterprise 3x00, 4x00, 5x00, SBus-to-differential 1065A
6x00 UltraSCSI Host
Adapter (UDWIS/S)

■ Cables supported with D1000 are listed below:

TABLE 7-14 SE D1000 Supported Cables

Cable Part # of the Cable

0.8-meter, UltraSCSI differential jumper cable. Shipped with D1000 array


2.0-meter, 68-pin to VHDC differential SCSI cable 3832A
4.0-meter, 68-pin to VHDC differential SCSI cable 3830A
10.0-meter, 68-pin to VHDC differential SCSI cable 3831A
12-meter, external differential UltraSCSI cable 979A

The figure below shows how to configure D1000 as shared storage.


FIGURE 7-5 Two Sun StorEdge D1000s, in Single-Bus Configuration, as Shared Storage.

[Figure: Node 1 and Node 2, each with host adapters HA1 and HA2, connected to D1000 #1 (data) and D1000 #2 (mirror).]

Sun StorEdge A3500 Array

SE A3500 RAID Requirements


An A3500 controller module with redundant controllers provides appropriate
hardware redundancy. An A3500 controller also has hardware RAID capabilities
built in, so software mirroring of data is not required. However, a software
volume manager can be used for managing the data. Also, a cluster configuration
with an A3500 array with a single controller module is supported.

The other configuration rules for using Sun StorEdge A3500 as shared storage are
listed below:
■ Daisy-chaining of the controller modules is not supported.
■ Sun StorEdge A3500 and A3500FC arrays cannot be used as quorum devices.
■ A3500 Light is supported.
■ The two SCSI ports of a controller module must be connected to different host
adapters on a node.


■ Host Adapter supported with A3500 is listed below:

TABLE 7-15 Sun Cluster 3 and SE A3500 Support Matrix for SPARC

Part # for Host Maximum Node


Servers Host Adapter Adapter Connectivity

Sun Enterprise 3x00, SBus-to-differential UltraSCSI 1065A 2


4x00, 5x00, 6x00 Host Adapter (UDWIS/S)

■ Cables supported with A3500 are listed in the table below.

TABLE 7-16 A3500 Supported Cables

Cable Part # for Cable

4m, 68-pin to VHDC differential SCSI cable 3830A


10m, 68-pin to VHDC differential SCSI cable 3831A
12m, external differential UltraSCSI cable 979A

Figure 7-6 on page 142 shows how to configure A3500 as a shared storage.


FIGURE 7-6 Single A3500 Configuration

[Figure: two nodes, each with three host adapters, connected to Controller A and Controller B of an A3500 controller module; a separate quorum device is indicated.]

Sun StorEdge 3120 JBOD Array

SE 3120 JBOD Array Configuration Details


■ Daisy Chaining is not supported.
■ The maximum SCSI bus length is 12 meters.
■ Data may be mirrored between the halves of a single dual-bus SE3120 JBOD array.
This enables Sun Cluster configurations with a single dual-bus SE 3120 JBOD
array.
■ Data in single-bus SE 3120 JBOD arrays must be mirrored against another storage
array.
■ NOTE: The single bus SE 3120 array in figure 7-7 must have its data mirrored to
another array.


The support matrix for the SE 3120 JBOD with Sun Cluster 3 is listed below:

TABLE 7-17 Sun Cluster 3 and SE3120 JBOD Support Matrix for SPARC

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Sun Enterprise 220R x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Enterprise 250 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Enterprise 420R x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Enterprise 450 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire 12K/15K x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire 280R x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire 4800 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire 6800 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire E2900 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire E6900 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire T1000 SG-(X)PCIE2SCSIU320Z


Sun Fire T2000 SG-(X)PCI1SCSI-LM320


SG-(X)PCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Sun Fire V125 Onboard SCSI, X4422A-2
SGXPCI2SCSILM320-Z
SGXPCI1SCSILM320-Z
Sun Fire V1280 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire V210 onboard SCSI port
x2222, 4422, 6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire V215 SGXPCI1SCSILM320-Z
SGXPCI2SCSILM320-Z
SGXPCIE2SCSIU320Za
(x)4422A-2
Sun Fire V240 onboard SCSI port
x2222, 4422, 6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire V245 SGXPCI1SCSILM320-Z
SGXPCI2SCSILM320-Z
SGXPCIE2SCSIU320Zb
(x)4422A-2
Sun Fire V250 onboard SCSI port
x2222, 4422, 6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire V440 onboard SCSI port
x2222, 4422, 6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z


Sun Fire V445 SGXPCI1SCSILM320-Z


SGXPCI2SCSILM320-Z
SGXPCIE2SCSIU320Zc
(x)4422A-2
Sun Fire V480 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire V490 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire V880 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire V890 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Netra 1120/1125 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Netra 1280/1290 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Netra 1400/1405 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Netra 20 x2222, 4422, 6758
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Netra 240 AC/DC onboard SCSI port
x2222, 4422, 6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z


Sun Netra 440 onboard SCSI port


6758A, X4422A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Netra T5220 SG-XPCIE2SCSIU320Z,
SGXPCI2SCSILM320-Z
Sun Netra T5440 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M4000 SG-XPCI2SCSI-LM320-Z
SG-XPCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M5000 SG-XPCI2SCSI-LM320-Z
SG-XPCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M8000 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M9000 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5120 SG-(X)PCIE2SCSIU320Z
Sun SPARC Enterprise T5220 SG-(X)PCIE2SCSIU320Z
Sun SPARC Enterprise T5140 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5240 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5440 SG-XPCIE2SCSIU320Z
External I/O Expansion Unit for SG-(X)PCI2SCSILM320-Z
Sun SPARC Enterprise M4000,
M5000, M8000 and M9000 Servers
a This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
b This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
c This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.


TABLE 7-18 Sun Cluster 3 and SE3120 JBOD Support Matrix for x64

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Sun Fire V40z SG-XPCI1SCSI-LM320
SG-XPCI1SCSI-LM320-Z
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2 SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2 SG-XPCIE2SCSIU320Z
Sun Fire X4100 SG-XPCI1SCSI-LM320
Sun Fire X4100 M2 SG-XPCIE2SCSI-LM320
Sun Fire X4140 SG-XPCIE2SCSIU320Z
Sun Fire X4200 SG-XPCI1SCSI-LM320
Sun Fire X4200 M2 SG-XPCIE2SCSI-LM320
Sun Fire X4240 SG-XPCIE2SCSIU320Z
Sun Fire X4250 SG-XPCIE2SCSIU320Z
Sun Fire X4440 SG-XPCIE2SCSIU320Z
Sun Fire X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Fire X4540 SG-XPCIE2SCSIU320Z
Sun Fire X4600 SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2 SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2 SG-XPCI2SCSI-LM320-Z
Sun Netra X4250 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Netra X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z


FIGURE 7-7 SE 3120 Single-Bus Configuration

FIGURE 7-8 SE 3120 Dual-Bus Configuration

Sun StorEdge 3310 JBOD Array


This section describes the configuration rules for using Sun StorEdge 3310 JBOD (a
SE 3310 without RAID controllers) as a shared storage.


SE 3310 JBOD Configuration Details


■ Both AC and DC power supplies are supported.
■ It IS supported to have a single dual-bus 3310 JBOD cluster configuration that is
split into two separate halves which are then mirrored against each other. This
configuration makes a single SE 3310 JBOD act like two separate storage devices;
it requires the “-02” revision of the I/O boards to be installed. Please contact the
SE 3310 Product Manager for more information.
■ Connecting expansion 3310 JBOD units to an existing 3310 JBOD in a cluster
configuration is NOT supported.
■ There is a SCSI loop-length limitation of 12m on a single SCSI loop with the
SE 3310 JBOD (length of cables to both hosts + 0.5m internal 3310 bus length
+ 0.3m jumper cable if using a single-bus configuration). A short worked example
follows this list.
■ For additional configuration information, please see the “SE 3310 Release Notes”
as doc# 816-7290 at http://docs.sun.com
■ For questions concerning support of specific configurations of the SF 2900 please
contact product marketing directly.
■ SE 3310 JBOD with the V440/Netra 440’s shared on-board SCSI is supported. That
is, the V440’s on-board SCSI can be used for connecting the SE 3310 JBOD as
cluster shared storage.
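The 12 m loop-length budget above can be checked with simple arithmetic. The following Python sketch is illustrative only: the constants restate the bullet above, while the function name and the example cable lengths are assumptions.

```python
# Illustrative check of the SE 3310 JBOD 12 m SCSI loop-length budget.
# The constants come from the rule above; the cable lengths below are example values.
INTERNAL_BUS_M = 0.5   # internal SE 3310 bus length
JUMPER_CABLE_M = 0.3   # jumper cable, single-bus configurations only
LOOP_LIMIT_M = 12.0    # maximum total loop length

def loop_length(host1_cable_m, host2_cable_m, single_bus=False):
    """Total SCSI loop length for a dual-hosted SE 3310 JBOD."""
    total = host1_cable_m + host2_cable_m + INTERNAL_BUS_M
    if single_bus:
        total += JUMPER_CABLE_M
    return total

if __name__ == "__main__":
    # Example: two 4 m cables (3830B) in a single-bus configuration.
    total = loop_length(4.0, 4.0, single_bus=True)
    print(f"Loop length: {total:.1f} m "
          f"({'OK' if total <= LOOP_LIMIT_M else 'exceeds 12 m limit'})")
```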


The support matrix for the SE 3310 JBOD with Sun Cluster 3 is listed below:

TABLE 7-19 Sun Cluster 3 and SE3310 JBOD Support Matrix for SPARC

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Netra 1120/1125, Netra 1400/1405, x2222
Sun Enterprise 220R, 250, 420R, 450 4422/4422A-2
6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 20, Netra 1280, Netra 1290 x2222
Sun Fire 280R, V440, V480/V490, 4422A/4422A-2
V880/V890, V1280, E2900, 4800, 6758A
6800, 12K/15K, E20K/E25K SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 240 AC/DC onboard SCSI port
Sun Fire V210a, V240, V250,V440 x2222
4422A/4422A-2
6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 440 Onboard SCSI port
6758A
X4422A/4422A-2
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra T5220 SG-XPCIE2SCSIU320Z,
SGXPCI2SCSILM320-Z
Sun Fire V125 Onboard SCSI
X4422A-2
SGXPCI2SCSILM320-Z
SGXPCI1SCSILM320-Z
Sun Fire V215/V245, V445 SGXPCI1SCSI-LM320-Z
SGXPCI2SCSI-LM320-Z
SGXPCIE2SCSIU320Zb
(x)4422A-2
Sun Fire T1000 SG-(X)PCIE2SCSIU320Z


Sun Fire T2000 SG-(X)PCI1SCSI-LM320


SG-(X)PCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Netra T2000 SGXPCI2SCSILM320-Z

Sun SPARC Enterprise M3000 SG-XPCIE2SCSIU320Z

Sun SPARC Enterprise SG-XPCI2SCSI-LM320-Z


M4000/M5000 SG-XPCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise SG-XPCIE2SCSIU320Z
M8000/M9000
Sun SPARC Enterprise SG-XPCIE2SCSIU320Z
T5120/T5220
Sun SPARC Enterprise SG-XPCIE2SCSIU320Z
T5140/T5240
Sun SPARC Enterprise T5440 SG-XPCIE2SCSIU320Z
a The onboard SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
b This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.


TABLE 7-20 Sun Cluster 3 and SE3310 JBOD Support Matrix for x64

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Sun Fire V40z SG-XPCI1SCSI-LM320
SG-XPCI1SCSI-LM320-Z
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2 SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2 SG-XPCIE2SCSIU320Z
Sun Fire X4100 SG-XPCI1SCSI-LM320
Sun Fire X4100 M2 SG-XPCIE2SCSI-LM320
Sun Fire X4140 SG-XPCIE2SCSIU320Z
Sun Fire X4200 SG-XPCI1SCSI-LM320
Sun Fire X4200 M2 SG-XPCIE2SCSI-LM320
Sun Fire X4240 SG-XPCIE2SCSIU320Z
Sun Fire X4440 SG-XPCIE2SCSIU320Z
Sun Fire X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Fire X4600 SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2 SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2 SG-XPCI2SCSI-LM320-Z
Sun Netra X4250 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Netra X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z


FIGURE 7-9 Direct-Attached SE 3310 JBOD Configuration

[Figure: Node 1 and Node 2, each with two host adapters, attached to SE 3310 #1 (data) and SE 3310 #2 (mirror).]

Sun StorEdge 3310 RAID Array


This section describes the configuration rules for using Sun StorEdge 3310 RAID (a
SE 3310 with either one or two RAID controllers) as a shared storage.

SE 3310 RAID Configuration Details


■ Both AC and DC power supplies are supported.
■ The SE 3310 RAID version (a 3310 with either a single or dual RAID controllers)
must be mirrored against another storage array in Sun Cluster configurations.
■ Connecting a maximum of 1 additional expansion 3310 JBOD unit to an existing
3310 RAID device in a cluster configuration IS supported. This brings the
expansion JBOD under the control of the RAID controller, enabling the cluster to
see both the 3310 RAID device and the expansion JBOD as one device.
■ There is a SCSI cable length (length of cables to both hosts) limitation of 25 m per
SCSI loop with the SE 3310 RAID.
■ For additional configuration information, please see the “SE 3310 Release Notes”
as doc# 816-7292 at http://docs.sun.com
■ The SE 3310 RAID with the V440/Netra 440’s shared on-board SCSI is supported
and requires minimum patch release 113722-06.
■ Logical Volumes are NOT supported. For more information, please see bug ID
4881785.


■ The latest supported firmware is 4.21.

TABLE 7-21 Sun Cluster 3 and SE3310 RAID Support Matrix for SPARC

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Netra 1120/1125, Netra 1400/1405 6758A
Sun Enterprise 220R, 250, 420R, 450 x2222
4422
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 20, Netra 1280, Netra 1290 6758A
Sun Fire 280R, V440, V480/V490, x2222
V880/V890, V1280, E2900, 4800, 4422
6800, 12K/15K, E20K/E25 SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 240 onboard SCSI port
Sun Fire V210a, V240, V250 x2222
4422
6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 440 6758A
X4422A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra T5220 SG-XPCIE2SCSIU320Z,
SGXPCI2SCSILM320-Z
Sun Fire V125 Onboard SCSI
X4422A-2
SGXPCI2SCSILM320-Z
SGXPCI1SCSILM320-Z
Sun Fire V215/V245, V445 SGXPCI1SCSI-LM320-Z
SGXPCI2SCSI-LM320-Z
SGXPCIE2SCSIU320Zb
(x)4422A-2
Sun Fire T1000 SG-(X)PCIE2SCSIU320Z
Sun Fire T2000 SG-(X)PCI1SCSI-LM320
SG-(X)PCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z


Netra T2000 SGXPCI2SCSILM320-Z


Sun SPARC Enterprise M3000 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise SG-XPCI2SCSI-LM320-Z
M4000/M5000 SG-XPCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise SG-XPCIE2SCSIU320Z
M8000/M9000
Sun SPARC Enterprise T5120/T5220 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5140/T5240 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5440 SG-XPCIE2SCSIU320Z
a The onboard SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
b This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.

TABLE 7-22 Sun Cluster 3 and SE3310 RAID Support Matrix for x64

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Sun Fire V20z X4422Aa
Sun Fire V40z SG-XPCI1SCSI-LM320
SG-XPCI1SCSI-LM320-Z
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2 SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2 SG-XPCIE2SCSIU320Z
Sun Fire X4100 SG-XPCI1SCSI-LM320
Sun Fire X4100 M2 SG-XPCIE2SCSI-LM320
Sun Fire X4140 SG-XPCIE2SCSIU320Z
Sun Fire X4200 SG-XPCI1SCSI-LM320
Sun Fire X4200 M2 SG-XPCIE2SCSI-LM320


Sun Fire X4240 SG-XPCIE2SCSIU320Z


Sun Fire X4440 SG-XPCIE2SCSIU320Z
Sun Fire X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Fire X4600 SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2 SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2 SG-XPCI2SCSI-LM320-Z
Sun Netra X4250 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Netra X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
a Requires “GLM Device Driver for the Sun Dual Gigabit Ethernet and Dual SCSI/P Adapter 1.0” for Solaris 9 x86 at
http://www.sun.com/software/download/products/4134849d.html or Solaris 10 x86 at
http://www.sun.com/software/download/products/441b0a3e.html. The glm driver for Solaris 10 x86 is bundled
starting with Update 2, making this SDLC download unnecessary.


FIGURE 7-10 Direct-Attached SE 3310 RAID Configuration

[Figure: Node 1 and Node 2, each with two host adapters, attached to SE 3310 #1 (data) and SE 3310 #2 (mirror).]

Sun StorEdge 3320 JBOD Array


This section describes the configuration rules for using Sun StorEdge 3320 JBOD (a
SE 3320 without RAID controllers) as a shared storage.

SE 3320 JBOD Configuration Details


■ Both AC and DC power supplies are supported.
■ It IS supported to have a single dual-bus 3320 JBOD cluster configuration which
is split into two separate halves that are then mirrored against each other. This
configuration would make a single SE 3320 JBOD act like two separate storage
devices.
■ Connecting expansion 3320 JBOD units to an existing 3320 JBOD in a cluster
configuration is NOT supported.
■ Effective July 14, 2008, new dual-hosted single-bus SE3320 JBOD configurations
are not supported. Cabling the array into Split-Bus mode using 2 meter cables is
the current supported method for all new installations. See Field Action Bulletins
(FAB) 239464 (Dual hosted Sun StorageTek 3320 JBOD in Single-Bus
configurations may experience parity errors) for details.
■ For additional configuration information, please see the “SE 3320 Release Notes”
as doc# 816-7290 at http://docs.sun.com


■ For questions concerning support of specific configurations of the SF 2900 please


contact product marketing directly.
■ SE 3320 JBOD with the V440/Netra 440’s shared on-board SCSI is supported. That
is, the V440’s on-board SCSI can be used for connecting the SE 3320 JBOD as
cluster shared storage.


The support matrix for the SE 3320 JBOD with Sun Cluster 3 is listed below:

TABLE 7-23 Sun Cluster 3 and SE3320 JBOD Support Matrix for SPARC

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m(c) (3830B), 10m(c) (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Netra 1120/1125, Netra 1400/1405 x2222
Sun Enterprise 220R, 250, 420R, 450 4422A/4422A-2
6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 20, Netra 1280, Netra 1290 x2222
Sun Fire 280R, V440, V480/V490, 4422A/4422A-2
V880/V890, V1280, E2900, 4800, 6758A
6800, 12K/15K, E20K/E25K SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 240 AC/DC onboard SCSI port
Sun Fire V210a, V240, V250,V440 x2222
4422A/4422A-2
6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 440 Onboard SCSI port
6758A
X4422A/4422A-2
SG-PCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra T2000 SGXPCI2SCSILM320-Z
Netra T5220 SG-XPCIE2SCSIU320Z,
SGXPCI2SCSILM320-Z
Netra T5440 SG-XPCIE2SCSIU320Z
Sun Fire V125 Onboard SCSI
X4422A-2
SGXPCI2SCSILM320-Z
SGXPCI1SCSILM320-Z
Sun Fire V215/V245, V445 SGXPCI1SCSI-LM320-Z
SGXPCI2SCSI-LM320-Z
SGXPCIE2SCSIU320Zb
(x)4422A-2


Sun Fire T1000 SG-(X)PCIE2SCSIU320Z


Sun Fire T2000 SG-(X)PCI1SCSI-LM320
SG-(X)PCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M3000 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise SG-XPCI2SCSI-LM320-Z
M4000/M5000 SG-XPCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise SG-XPCIE2SCSIU320Z
M8000/M9000
Sun SPARC Enterprise SG-XPCIE2SCSIU320Z
T5120/T5220
Sun SPARC Enterprise SG-XPCIE2SCSIU320Z
T5140/T5240
Sun SPARC Enterprise T5440 SG-XPCIE2SCSIU320Z
External I/O Expansion Unit for SG-(X)PCI2SCSILM320-Z
Sun SPARC Enterprise M4000,
M5000, M8000 and M9000 Servers
a The onboard SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
b This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
c Effective July 14, 2008, no longer supported for SE3320 JBOD SC configs. See FAB 239646 discussion above.


TABLE 7-24 Sun Cluster 3 and SE3320 JBOD Support Matrix for x64

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Sun Fire V40z SG-XPCI1SCSI-LM320
SG-XPCI1SCSI-LM320-Z
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2 SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2 SG-XPCIE2SCSIU320Z
Sun Fire X4100 SG-XPCI1SCSI-LM320
Sun Fire X4100 M2 SG-XPCIE2SCSI-LM320
Sun Fire X4140 SG-XPCIE2SCSIU320Z
Sun Fire X4200 SG-XPCI1SCSI-LM320
Sun Fire X4200 M2 SG-XPCIE2SCSI-LM320
Sun Fire X4240 SG-XPCIE2SCSIU320Z
Sun Fire X4250 SG-XPCIE2SCSIU320Z
Sun Fire X4440 SG-XPCIE2SCSIU320Z
Sun Fire X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Fire X4540 SG-XPCIE2SCSIU320Z
Sun Fire X4600 SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2 SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2 SG-XPCI2SCSI-LM320-Z
Sun Netra X4250 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Netra X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z


Sun StorEdge 3320 RAID Array


This section describes the configuration rules for using Sun StorEdge 3320 RAID (a
SE 3320 with either one or two RAID controllers) as a shared storage.

SE 3320 RAID Configuration Details


■ Both AC and DC power supplies are supported.
■ The SE 3320 RAID version (a 3320 with either a single or dual RAID controllers)
must be mirrored against another storage array in Sun Cluster configurations.
■ Connecting a maximum of 1 additional expansion 3320 JBOD unit to an existing
3320 RAID device in a cluster configuration IS supported. This brings the
expansion JBOD under the control of the RAID controller, enabling the cluster to
see both the 3320 RAID device and the expansion JBOD as one device.
■ There is a SCSI cable length (length of cables to both hosts) limitation of 25 m per
SCSI loop with the SE 3320 RAID.
■ For additional configuration information, please see the “SE 3320 Release Notes”
as doc# 816-7292 at http://docs.sun.com
■ The SE 3320 RAID with the V440/Netra 440’s shared on-board SCSI is supported
and requires minimum patch release 113722-06.
■ Logical Volumes are NOT supported. For more information, please see bug ID
4881785.


■ The latest supported firmware is 4.21.

TABLE 7-25 Sun Cluster 3 and SE3320 RAID Support Matrix for SPARC

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Netra 1120/1125, Netra 1400/1405, 6758A
Netra 20, Netra 1280, Netra 1290 x2222
Sun Enterprise 220R, 250, 420R, 450 4422
Sun Fire 280R, V440, V480/V490, SG-XPCI2SCSI-LM320
V880/V890, V1280, E2900, 4800, SG-XPCI2SCSI-LM320-Z
6800, 12K/15K, E20K/E25K
Netra 240 onboard SCSI port
Sun Fire V210a, V240, V250 x2222
4422
6758A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra 440 6758A
X4422A
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Netra T2000 SGXPCI2SCSILM320-Z
Netra T5220 SG-XPCIE2SCSIU320Z,
SGXPCI2SCSILM320-Z
Netra T5440 SG-XPCIE2SCSIU320Z
Sun Fire V125 Onboard SCSI
X4422A-2
SGXPCI2SCSILM320-Z
SGXPCI1SCSILM320-Z
Sun Fire V215/V245, V445 SGXPCI1SCSI-LM320-Z
SGXPCI2SCSI-LM320-Z
SGXPCIE2SCSIU320Zb
(x)4422A-2
Sun Fire T1000 SG-(X)PCIE2SCSIU320Z
Sun Fire T2000 SG-(X)PCI1SCSI-LM320
SG-(X)PCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise M3000 SG-XPCIE2SCSIU320Z


Sun SPARC Enterprise SG-XPCI2SCSI-LM320-Z


M4000/M5000 SG-XPCI1SCSI-LM320-Z
SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise SG-XPCIE2SCSIU320Z
M8000/M9000
Sun SPARC Enterprise T5120/T5220 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5140/T5240 SG-XPCIE2SCSIU320Z
Sun SPARC Enterprise T5440 SG-XPCIE2SCSIU320Z
External I/O Expansion Unit for Sun SG-(X)PCI2SCSILM320-Z
SPARC Enterprise M4000, M5000,
M8000 and M9000 Servers
a The onboard SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
b This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.


TABLE 7-26 Sun Cluster 3 and SE3320 RAID Support Matrix for x64

Cable options (all rows): 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B)
Maximum node connectivity (all rows): 2

Server Host Adapter

Sun Fire V40z SG-XPCI1SCSI-LM320
SG-XPCI1SCSI-LM320-Z
SG-XPCI2SCSI-LM320
SG-XPCI2SCSI-LM320-Z
Sun Fire X2100 M2 SG-XPCIE2SCSIU320Z
Sun Fire X2200 M2 SG-XPCIE2SCSIU320Z
Sun Fire X4100 SG-XPCI1SCSI-LM320
Sun Fire X4100 M2 SG-XPCIE2SCSI-LM320
Sun Fire X4140 SG-XPCIE2SCSIU320Z
Sun Fire X4200 SG-XPCI1SCSI-LM320
Sun Fire X4200 M2 SG-XPCIE2SCSI-LM320
Sun Fire X4240 SG-XPCIE2SCSIU320Z
Sun Fire X4250 SG-XPCIE2SCSIU320Z
Sun Fire X4440 SG-XPCIE2SCSIU320Z
Sun Fire X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Fire X4540 SG-XPCIE2SCSIU320Z
Sun Fire X4600 SG-XPCIE2SCSIU320Z
Sun Fire X4600 M2 SG-XPCIE2SCSIU320Z
Sun Netra X4200 M2 SG-XPCI2SCSI-LM320-Z
Sun Netra X4250 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z
Sun Netra X4450 SG-XPCIE2SCSIU320Z
SGXPCI2SCSILM320-Z


FIGURE 7-11 Direct-Attached SE 3320 RAID Configuration

[Figure: Node 1 and Node 2, each with two host adapters, attached to SE 3320 #1 (data) and SE 3320 #2 (mirror).]

FIGURE 7-12 Direct-Attached SE 3320 RAID with Attached JBODs (for additional storage)

[Figure: Node 1 and Node 2, each with two host adapters, attached to two SE 3320 RAID arrays, each of which has an expansion JBOD attached; data on one RAID/JBOD pair is mirrored to the other.]

Note: The B1 port on the 3320s represents the single-bus connection port.


CHAPTER 8

SAS Storage Support

This chapter covers Sun Cluster supported SAS storage devices.

Sun StorageTek 2530 RAID Array

ST 2530 Configuration Rules:


■ Sun Cluster supports both Simplex (with one controller) and Duplex (with two
controllers) configurations.
■ For a Simplex configuration, the ST2530 array requires volume manager software
such as SVM or VxVM to mirror data across two arrays.
■ For a Duplex configuration, the ST2530 array can be supported with properly
configured dual controllers, multipathing software, and hardware RAID, without
volume manager software. (A short sketch of this rule follows this list.)
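As a rough illustration of the Simplex/Duplex mirroring rule above, the sketch below restates it as a predicate. The function name and parameters are hypothetical; only the rule itself comes from this guide.

```python
# Illustrative restatement of the ST 2530 mirroring rules described above.
def st2530_needs_host_mirroring(controllers, multipathing, hardware_raid):
    """Return True if data must be mirrored across two arrays with SVM/VxVM."""
    if controllers == 1:
        # Simplex: volume-manager mirroring across two arrays is required.
        return True
    if controllers == 2 and multipathing and hardware_raid:
        # Duplex with multipathing and hardware RAID: no volume manager required.
        return False
    # Anything else falls back to host-based mirroring for redundancy.
    return True

print(st2530_needs_host_mirroring(1, multipathing=False, hardware_raid=True))  # True
print(st2530_needs_host_mirroring(2, multipathing=True, hardware_raid=True))   # False
```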

Node Connectivity Limits


■ A maximum of 3 nodes can be connected to any one LUN.

Hubs and Switches


■ SAS Expanders are not supported as of May’08.


RAID Requirements
■ ST 2530 arrays are supported without software mirroring when properly
configured with dual controllers, multipathing, and hardware RAID providing in-
array data redundancy.
■ A single 2530 array is supported when properly configured with dual controllers,
multipathing, and hardware RAID providing in-array data redundancy.

Multipathing
■ Sun StorEdge Traffic Manager (MPXIO) is required in a Duplex configuration
(ST2530 with 2x controllers). Solaris MPT patch 125081-14 or later is required to
configure Sun Cluster.

ST 2530 Volume Manager Support


■ There are no Sun Cluster specific requirements. Please refer to the ST 2530
product documentation regarding Volume Manager support.

Software, Firmware, and Patches


■ CAM Build 6.0.1 Build 10 is the minimum requirement for Sun Cluster
■ x64: Starting with Solaris 10 8/07
■ Please see the ST 2530 release notes for other requirements.

Sharing ST 2530 Arrays


■ LUN masking will enable sharing across multiple platforms. Please refer to the
base product documentation for further details.

ST 2530 Support Matrix and Exceptions:


To determine whether your configuration is supported:

1. First check Table 5-6, SAS Storage for SPARC Servers, on page 55, or Table 5-7,
SAS Storage for x64 Servers, on page 56 to determine whether your chosen
server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list below:
■ SG-XPCI8SAS-E-Z


■ SG-XPCIE8SAS-E-Z

3. Check Table 8-1 to determine if there is limited HBA support

TABLE 8-1 ST 2530 Array/Server combinations with Limited HBA Support

Server Host Adapter

Sun Netra T2000 SG-XPCI8SAS-E-Z


Sun Netra X4200 M2 SG-XPCIE8SAS-E-Za
a Not NEBS tested

4. If HBA support is not limited, you can use your server and storage combination
with host adapters as indicated by the “Server Search” under the Interop Tool
“Searches” tab, https://interop.central.sun.com/interop/interop

Sun Storage J4200 and J4400 JBOD Arrays

J4200/J4400 Configuration Rules:


■ Sun Cluster supports both SAS and SATA HDDs.
■ The J4200 and J4400 products require installed HDDs to be all SAS or all SATA;
mixing is not permitted.
■ When configuring SATA HDDs in J4200/J4400 shared storage:
■ Each SAS I/O Module (SIM) is required to have only a single host connection.
Thus, all J4200 or J4400 shared storage configurations using SATA HDDs must
have dual SIMs.
■ SCSI-reservation-based fencing and quorum-device support must be disabled.
See Software Quorum in the Sun Cluster 3.2 1/09 documentation for more information.
■ A single J4200 or J4400 with SATA HDDs is supported; however, due to the single-
host-connection-per-SIM requirement, it exhibits single points of failure.
■ A single J4200 or J4400 with SAS HDDs is supported when configured with dual
SIMs, MPxIO, and proper data redundancy; however, it provides less availability.
■ Also see the J4200/J4400 Release Notes and SAS Multipathing Guide for
additional information. (A short sketch of these rules follows this list.)
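The sketch below restates the drive-type rules above as a simple validation helper. It is illustrative only; the parameter names are hypothetical and the checks merely mirror the bullets above.

```python
# Illustrative validation of the J4200/J4400 shared-storage rules listed above.
def check_j4x00_config(drive_type, sim_count, hosts_per_sim, fencing_enabled):
    """Return a list of rule violations (empty list means no violations found)."""
    errors = []
    if drive_type not in ("SAS", "SATA"):
        errors.append("drives must be all SAS or all SATA (no mixing)")
    if drive_type == "SATA":
        if sim_count != 2:
            errors.append("SATA configurations require dual SIMs")
        if hosts_per_sim != 1:
            errors.append("each SIM may have only a single host connection")
        if fencing_enabled:
            errors.append("SCSI-reservation fencing must be disabled (use Software Quorum)")
    return errors

print(check_j4x00_config("SATA", sim_count=2, hosts_per_sim=1, fencing_enabled=False))  # []
print(check_j4x00_config("SATA", sim_count=1, hosts_per_sim=2, fencing_enabled=True))
```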


Node Connectivity Limits


■ A LUN can only be shared by two cluster nodes.

RAID Requirements
■ It is recommended to mirror shared data in a J4200 or J4400 with another array.
■ When configured with dual SIMs and MPxIO, shared data can be mirrored within
a single J4200 or J4400 with SAS HDDs, but with less availability.
■ When a J4200 or J4400 array is configured with a single SIM, shared data must be
mirrored to another array.

Multipathing
■ Sun Cluster support with SAS multipathing is enabled and qualified when using
SAS HDDs.

Volume Manager Support


■ There are no Sun Cluster specific requirements. Please refer to the base product
documentation regarding volume manager support.

Software, Firmware, and Patches


■ SAS HDDs: Sun Cluster support starts with Solaris 10 5/08 (update 5).
(J4200/J4400 product support starts with Solaris 10 8/07 (update 4))
■ SATA HDDs: Sun Cluster support starts with Solaris 10 10/08 (update 6) and Sun
Cluster 3.2 1/09 (update 2), plus patches. The Software Quorum feature of Sun
Cluster 3.2 1/09 is required. Refer to the Sun Cluster 3.2 1/09 documentation for
details.

Sharing J4200/J4400 JBOD Arrays


■ A J4200 or J4400 cannot be shared with another cluster or with non-cluster nodes.

J4200/J4400 Support Matrix and Exceptions:


To determine whether your configuration is supported:


1. First check Table 5-6, SAS Storage for SPARC Servers, on page 55, or Table 5-7,
SAS Storage for x64 Servers, on page 56 to determine whether your chosen
server and storage combination is supported.

2. If your combination is supported, choose a supported HBA from the list below:
■ SG-XPCI8SAS-E-Z
■ SG-XPCIE8SAS-E-Z

3. Check Table 8-2 to determine if there is limited HBA support

TABLE 8-2 SS J4200/4400 Array/Server combinations with Limited HBA Support

Server Host Adapter

None at this time

If HBA support is not limited, you can use your server and storage combination
with host adapters as indicated by the “Server Search” under the Interop Tool
“Searches” tab, https://interop.central.sun.com/interop/interop

Sun Storage J4400 JBOD Array


See “Sun Storage J4200 and J4400 JBOD Arrays” on page 169.



CHAPTER 9

Ethernet Storage Support

This chapter covers Sun Cluster supported Ethernet-connected shared storage
devices.

Sun StorageTek 2510 RAID Array

ST 2510 Configuration Rules:


■ Sun Cluster supports Duplex (ST 2510 with 2x controllers) configuration.
■ For Duplex configuration, a single ST 2510 array can be supported with properly
configured dual controllers, multipathing, hardware RAID and without volume
manager software.
■ The ST 2510 can only be configured on the same subnet as that of the cluster
nodes due to bug 6614299.

Node Connectivity Limits


■ A maximum of 4 nodes can be connected to any one LUN.

Hubs and Switches


■ See the subnet restriction in the Configuration Rules section above.


RAID Requirements
■ ST 2510 arrays are supported without software mirroring when properly
configured with dual controllers, multipathing, and hardware RAID providing in-
array data redundancy.
■ A single 2510 array is supported when properly configured with dual controllers,
multipathing, and hardware RAID providing in-array data redundancy.

Multipathing
■ For Duplex configuration, the option to use Sun StorEdge Traffic Manager
(MPXIO) is available. If MPXIO is not used, data must be mirrored to another
array or to another volume within the ST 2510.

ST 2510 Volume Manager Support


■ There are no Sun Cluster specific requirements.
■ Please see the ST 2510 product documentation regarding Volume Manager
support.

Software, Firmware, and Patches


■ SPARC server requirements:
■ SC 3.1 8/05 (update 4) + patches, SC 3.2 + patches and later
■ Solaris 10 5/09 (update 7) + patches and later
■ CAM 6.2.0 (FW 6.70.54.11) and later
■ x64 server requirements:
■ SC 3.1 8/05 (update 4) + patches, SC 3.2 + patches and later
■ Solaris 10 8/07 (update 4) + patches, and later
■ CAM 6.0.1 and later
■ Please see the ST2510 Release Notes for ST2510 requirements.

Sharing ST2510 Arrays


■ LUN masking will enable sharing across multiple platforms. See product
documentation for details.


ST2510 Support Matrix and Exceptions:


The StorageTek 2510 product team does not maintain a list of qualified servers. Per
the StorageTek 2500 Just The Facts, SunWIN Token# 500199: “The Sun StorageTek
2510 iSCSI Array is supported with any ethernet enabled device running in a
supported O/S environment.”

Following that model, Sun Cluster 3 supports any Sun server qualified as a cluster
node, with any Ethernet interface supported by that server, provided the
requirements for Solaris release, patches, etc. are met.

Sun StorageTek 5000 NAS Appliance

ST 5000 NAS Configuration Rules:


■ This information covers the following models:
■ Sun StorageTek 5210 NAS Appliance
■ Sun StorageTek 5220 NAS Appliance
■ Sun StorageTek 5310 NAS Appliance
■ Sun StorageTek 5320 NAS Appliance
■ Sun StorageTek 5320 NAS Cluster Appliance
■ Directories created on these Network Attached Storage (NAS) devices can be
exported to cluster nodes, mounted on all cluster nodes, and be available for
general use by highly available cluster applications.
■ Support includes fencing of failed cluster nodes from NAS directories and the
release of NFS locks during failover.
■ There is no fencing support for NFS-exported file systems when used in a non-
global zone, including nodes of a zone cluster.
■ Fencing support of NAS devices is provided in global zones.
■ Device configuration is fairly straightforward, with the creation and exporting of
NAS directories being done as in a non-clustered set-up, with some special
considerations for setting up the directory access list:
■ Do not enable general access for cluster nodes or use host groups to grant
access to directories for the entire cluster as these two actions will hinder the
fencing of failed cluster nodes.
■ Do specify access for each directory and each cluster node explicitly instead.


■ When adding trusted admin access for the cluster, make sure the trusted admin
access entry comes before any general admin access entries.
■ It is also a good practice to set the NAS fencing module to load automatically
when the NAS device boots. If the NAS device is rebooted, and the fencing
module is not set to automatically load, failed cluster nodes will not be able to be
fenced. Please see the Sun Cluster System Administration Guide for details on
setting the NAS fencing module to load at boot time.
■ iSCSI LUNs may only be used as quorum devices.
■ An iSCSI LUN quorum device must be on the same subnet as that of the cluster
nodes due to bug 6614299.

Node Connectivity Limits


■ ST 5000 NAS iSCSI LUN quorum device only supports 2-node clusters.

Hubs and Switches


■ See the subnet restriction in the Configuration Rules section above.

RAID Requirements
■ N/A

Multipathing
■ N/A

ST 5000 NAS Volume Manager Support


■ N/A

Software, Firmware, and Patches


■ Starts with SC 3.2 2/08 (update 1).
■ ST 5000 NAS iSCSI LUN quorum device support starts with Solaris 10 6/06
(update 2).
■ Starts with NAS firmware version 4.21.
■ Please see the ST 5000 NAS Release Notes for other ST 5000 NAS requirements.


Sharing ST 5000 NAS Arrays


■ N/A

ST 5000 NAS Support Matrix and Exceptions:


The ST 5000 NAS product team does not maintain a list of qualified servers. Per the
ST 5320 NAS WWWW, SunWIN Token# 472566: “A client is any computer on the
network that requests file services from the StorageTek 5000 NAS Appliance. The list
of clients above represent client environments that have been tested. The list is not
all inclusive and additional client OS are scheduled for testing. In general, if a client
implementation follows the NFS version 2 or 3 protocol or the CIFS specifications, it
is supported with the StorageTek 5000 NAS Appliance.”

Following that model, Sun Cluster 3 supports any Sun server qualified as a cluster
node, provided the requirements for Solaris release, patches, etc. are met.

Sun StorageTek 5210 NAS Appliance

Sun StorageTek 5210 NAS Appliance Configuration Rules:
■ Please see Section “Sun StorageTek 5000 NAS Appliance” on page 9-175.

Sun StorageTek 5220 NAS Appliance

Sun StorageTek 5220 NAS Appliance Configuration Rules:
■ Please see Section “Sun StorageTek 5000 NAS Appliance” on page 9-175.


Sun StorageTek 5310 NAS Appliance

Sun StorageTek 5310 NAS Appliance Configuration Rules:
■ Please see Section “Sun StorageTek 5000 NAS Appliance” on page 9-175.

Sun StorageTek 5320 NAS Appliance

Sun StorageTek 5320 NAS Appliance Configuration Rules:
■ Please see Section “Sun StorageTek 5000 NAS Appliance” on page 9-175.

Sun StorageTek 5320 NAS Cluster Appliance

Sun StorageTek 5320 NAS Cluster Appliance Configuration Rules:
■ Please see Section “Sun StorageTek 5000 NAS Appliance” on page 9-175.


Sun Storage 7000 Unified Storage System

SS 7000 Configuration Rules:


■ This information covers the following models:
■ Sun Storage 7110
■ Sun Storage 7210
■ Sun Storage 7310 single- and dual-controller configurations
■ Sun Storage 7410 single- and dual-controller configurations
■ The SS 7000 can only be used as an iSCSI block device. File-level protocols, e.g.,
NFS, are not supported except with Oracle RAC.
■ Oracle RAC is supported with the SS 7000 over NFS. See Section “Oracle Real
Application Cluster (OPS/RAC)” on page 11-245 for details.
■ SS 7000 iSCSI LUNs can only be configured on the same subnet as that of the
cluster nodes due to bug 6614299.
■ Starting with Sun Storage 7000 Software Update 2009.Q3:
■ SS 7000 iSCSI LUNs can be configured as “scsi2” or “scsi3” quorum devices.
■ SS 7000 iSCSI LUNs can be configured with fencing enabled.
■ SS 7000 with releases prior to Software Update 2009.Q3:
■ SS 7000 iSCSI LUNs must be configured to use Software Quorum; “scsi2” or
“scsi3” quorum devices are not supported.
■ SS 7000 iSCSI LUNs must be configured with fencing disabled.
(A short sketch of these two modes follows this list.)
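The release-dependent quorum and fencing behavior above can be summarized as a small helper. This is an illustrative sketch, not a Sun Cluster or SS 7000 interface; the release-string parsing is a simplifying assumption.

```python
# Illustrative summary of the SS 7000 iSCSI LUN quorum/fencing rules above.
def ss7000_quorum_options(software_update):
    """software_update is a string such as '2009.Q2' or '2009.Q3'."""
    year, quarter = software_update.split(".")
    at_least_2009_q3 = (int(year), int(quarter[1])) >= (2009, 3)
    if at_least_2009_q3:
        # 2009.Q3 and later: scsi2/scsi3 quorum devices and fencing are allowed.
        return {"quorum_types": ["scsi2", "scsi3"], "fencing": "may be enabled"}
    # Earlier releases: Software Quorum only, fencing must be disabled.
    return {"quorum_types": ["software quorum"], "fencing": "must be disabled"}

print(ss7000_quorum_options("2009.Q3"))
print(ss7000_quorum_options("2009.Q1"))
```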

Node Connectivity Limits


■ A maximum of 8 nodes can be connected to any one LUN.

Hubs and Switches


■ See the subnet restriction in the Configuration Rules section above.

RAID Requirements
■ There are no Sun Cluster specific requirements.


Multipathing
■ There are no Sun Cluster specific requirements.

SS 7000 Volume Manager Support


■ There are no Sun Cluster specific requirements.
■ Please see the SS 7000 product documentation regarding Volume Manager
support.

Software, Firmware, and Patches


■ SS 7000 support starts with Solaris 10 10/08 (update 5).
■ iSCSI LUNs used as “scsi2” or “scsi3” quorum devices and fencing enabled starts
with Sun Cluster 3.1 8/05 (update 4).
■ iSCSI LUNs used with Software Quorum and fencing disabled starts with Sun
Cluster 3.2 1/09 (update 2).
■ See Section “Oracle Real Application Cluster (OPS/RAC)” on page 11-245 when
using RAC with NFS.
■ Please see the SS 7000 documents for SS 7000 requirements.

Sharing SS 7000 Arrays


■ TBD

SS 7000 Support Matrix and Exceptions:


The Sun Storage 7000 Unified Storage System product team does not maintain a list
of qualified servers. The “Sun Storage 7000 Family What Works With What,”
SunWIN Token# 555895, 1/27/09 revision, states in the Client/Operating System
Support section: “A client is any computer on the network that requests file- or
block-level services from the Storage 7000 Unified Storage System. ... In general, if a
client implementation follows the protocol specifications, it is supported with the
Storage 7000 System.”

Following that model, Sun Cluster 3 supports any Sun server qualified as a cluster
node, with any Ethernet interface supported by that server, provided the
requirements for Solaris release, patches, etc. are met.

180 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


ETHERNET STORAGE SUPPORT

Sun Storage 7110 Unified Storage System

SS 7110 Configuration Rules:


■ Please see “Sun Storage 7000 Unified Storage System” on page 179.

Sun Storage 7210 Unified Storage System

SS 7210 Configuration Rules:


■ Please see “Sun Storage 7000 Unified Storage System” on page 179.

Sun Storage 7310 Unified Storage System

SS 7310 Configuration Rules:


■ Includes both single- and dual-controller configurations.
■ Please see “Sun Storage 7000 Unified Storage System” on page 179.

Sun Storage 7410 Unified Storage System

SS 7410 Configuration Rules:


■ Includes both single- and dual-controller configurations.
■ Please see “Sun Storage 7000 Unified Storage System” on page 179.

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 181


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

182 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


CHAPTER 10

Network Configuration

Cluster Interconnect
The cluster interconnect is the network fabric, private to the cluster, for
communication between the cluster nodes. This fabric is used for cluster-private
communication as well as cluster file system data transfer among the nodes. The
fabric consists of transport paths between all nodes of the cluster.

The following are general cluster-interconnect configuration guidelines. Please refer
to the technology discussions in this chapter for technology-specific guidelines.
■ Each transport path must connect all the nodes in the cluster.
■ All private transport paths in the cluster interconnect network fabric must use the
same technology and operate at the same speed. Technology in this discussion,
for example, is Ethernet (both fiber and UTP) vs InfiniBand.
■ There can be a maximum of six transport paths.
■ It is recommended that at least two transport paths terminate on separate network
adapters on each node in the cluster.
■ A single transport path is supported, although it is a single point of failure.
■ A single multi-port NIC may support more than one transport path, although this
could be a single point of failure. Please note that the NICs on both sides of a
transport path are not required to have the same number of ports.
■ It is recommended that the anticipated data communication traffic between the
nodes be taken into consideration while sizing the capacity of the cluster
interconnect.
■ Interconnects can be of the following two types: point-to-point and junction-
based.
■ Public and private networks can share a single NIC with multiple ports.

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 183


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

Point-to-Point Interconnect
For 2 node clusters, a point-to-point connection between the nodes forms a complete
interconnect.

FIGURE 10-1 Two Point-to-Point Interconnects in a Two-Node cluster



Junction-Based Interconnect
For clusters with more than two nodes, a switch is necessary to form an
interconnect. Note that this option can be used for a two-node cluster as well. Using
VLANs for private interconnect traffic is supported.
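
As an illustration only, the sketch below shows how the two interconnect types
might be cabled with the Sun Cluster 3.2 clinterconnect command; the node, adapter,
and switch names are hypothetical and must match the actual hardware.

  # Point-to-point transport path between the two nodes
  clinterconnect add node1:e1000g1,node2:e1000g1

  # Junction-based transport path through a transport switch
  clinterconnect add node1:e1000g2
  clinterconnect add node2:e1000g2
  clinterconnect add switch1
  clinterconnect add node1:e1000g2,switch1
  clinterconnect add node2:e1000g2,switch1

  # Verify the configured transport paths
  clinterconnect status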

184 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

FIGURE 10-2 Two Junction-Based Interconnects in an N-Node Cluster (N <=8)


Private Interconnect Technology Support


Private interconnects must operate at the same speed (e.g., a cluster with one
interconnect path at gigabit speed and the other path at Fast Ethernet speed is not
supported). The transport paths must all use the same technology; e.g., a cluster
with one Ethernet transport path and one IB transport path is not supported. The
following types of private interconnects are supported with Sun Cluster 3:

Ethernet
■ There can be a maximum of 6 independent Ethernet interconnects within a
cluster.
■ All Ethernet ports within an interconnect path must operate at the same speed.
■ VLAN Support
■ Sun Cluster supports the use of private interconnect networks over switch-
based virtual local area networks (VLAN). In a switch-based VLAN
environment, Sun Cluster enables multiple clusters and non-clustered systems
to share Ethernet switches in two different configurations.
■ The implementation of switch-based VLAN environments is vendor-specific.
Since each switch manufacturer implements VLAN differently, the following
guidelines address Sun Cluster requirements regarding how VLANs should be
configured for use with cluster interconnects.

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 185


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

■ You must understand your capacity needs before you set up a VLAN
configuration. To do this, you must know the minimum bandwidth necessary
for your interconnect and application traffic.
■ Interconnect traffic must be placed in the highest priority queue.
■ All ports must be equally serviced, similar to a round robin or first in first out
model.
■ You must verify that you have properly configured your VLANs to prevent
path timeouts.
■ Linking of VLAN switches together is supported. For minimum quality of
service requirements for your Sun Cluster configuration, please see the Sun
Cluster 3 Release Notes Supplement.
■ VLAN configurations are supported in campus cluster configurations with the
same restrictions as “normal” Sun Cluster configurations.
■ Transport paths may share a switch by using VLANs.
■ Jumbo Frames Support
■ Sun Cluster 3.1 and all updates prior to 3.1 9/04 (update 3) are supported and
require the following patches:
117950-07 (or later): SC3.1: Core Patch for Solaris 8.
117949-07 (or later): SC3.1: Core Patch for Solaris 9.
■ Sun Cluster 3.1 9/04 (update 3) and later are supported.
■ Agents support:
- Solaris 8 on Sun Cluster supports only Oracle RAC.
- Solaris 9 and later on Sun Cluster supports all Sun Cluster agents.
- When using Scalable Services and jumbo frames on your public network, it
is required that the Maximum Transfer Unit (MTU) of the private network is
the same size or larger than the MTU of your public network.
■ Solaris support:
- Solaris 8 requires patch 111883-23 (or later): SunOS 5.8: Sun GigaSwift
Ethernet 1.0 driver patch.
- Solaris 9 requires patch 112817-16 (or later): SunOS 5.9: Sun GigaSwift
Ethernet 1.0 driver patch.
- Solaris 10 does not have specific patch requirements for this feature.

PCI/SCI
SCI is supported with maximum 4-node clusters.
■ An SCI interconnect consists of a pair of cable connections.

186 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

■ A 2-node cluster may deploy point-to-point SCI transport paths.


■ Each point-to-point SCI path uses two SCI cables for a total of 4 cables.
■ Each junction-based SCI path requires two SCI cables to connect to its respective
SCI switch, e.g., a 2-node cluster using SCI switches will have a total of 8 cables.
■ A maximum of 4 SCI cards per node is supported. Note that DR is supported on
4-SCI-card configurations but requires patches 117124-05 (Solaris 9) and
111335-26 (Solaris 8).
■ Configuring more than 2 SCI transport paths requires Sun Cluster 3.1 U1 or later.

Sun Fire Link


■ Supports up to 4 node Sun Cluster configurations.
■ Only DLPI mode is currently supported.

InfiniBand
The Sun Dual Port 4X IB Host Channel Adapter is supported with maximum 4-node
clusters.
■ Sun Cluster 3.1 update 4 (or later).
■ Solaris 10 update 1 (or later).
■ Solaris Patch Requirements:
■ 118852-07 (or later) SunOS 5.10: patch kernel/misc/sparcv9/ibcm
■ All cluster configurations require one Sun IB Switch 9P per transport path. IB
does not support a point-to-point interconnect.
■ Each IB transport path requires one IB cable from an HCA port to the switch, e.g.
a two-node cluster using IB will use a total of 4 cables.
■ A maximum of 2 IB transport paths per node is supported. Using two IB HCA
cards is recommended for best availability, however using both ports of a single
HCA is supported but may reduce availability. Note that some servers only
support a single IB HCA card.

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 187


The following Cluster Interconnect tables only indicate that at least one card is
supported per server or domain, as applicable. Please ensure the targeted
configuration meets your customer’s requirements.

The PCI network interfaces that can be used to set up the cluster interconnect are
listed in Table 10-1 (SPARC servers) and Table 10-2 (x64 servers). Each table is a
support matrix showing, per server, which of the listed network interface cards may
be used for the cluster interconnect.

TABLE 10-1 Cluster Interconnects: PCI Network Interfaces for SPARC Servers

Network Interface Cards:
Onboard Ethernet/Gigabit Ports; X1027 PCI-E Dual 10 GigE Fiber Low Profile (e);
X1032 SunSwift PCI; X1033 Fast-Ethernet PCI; X1034 Quad-fast Ethernet PCI (f);
X1074 SCI PCI (g); X1141 Gigabit Ethernet PCI; X1150/X3150 Gigabit Ethernet PCI;
X1151/X3151 Gigabit Ethernet PCI; X1233A/X1233A-Z InfiniBand HCA PCI;
X1236A-Z InfiniBand HCA PCI-E; X2222A Combo Dual FastEthernet-Dual SCSI PCI;
X4150A/X4151A Gigabit Ethernet PCI; X4150A-2/X4151A-2 Gigabit Ethernet PCI;
X4422A/X4422A-2 Combo Dual Gigabit Ethernet-Dual SCSI PCI;
X4444A Quad-gigabit Ethernet card (i); X4445A Quad-gigabit Ethernet card (i);
X4447A-Z x8 PCI-E Quad Gigabit Ethernet (e, j);
X5544A/X5544A-4 10 Gigabit Ethernet PCI; X7280A-2 Gigabit Ethernet UTP PCI-E (d, j);
X7281A-2 Gigabit Ethernet MMF PCI-E (j); X7285 Sun PCI-X Dual GigE UTP Low Profile;
X7286 Sun PCI-X Single GigE MMF Low Profile; Sun Fire Cluster Link (Wildcat) (k)

Servers:
Sun Netra T1 AC 200/DC 200; Sun Netra t 1120/1125; Sun Netra t 1400/1405;
Sun Netra 20; Sun Netra 120; Sun Netra 210; Sun Netra 240; Sun Netra 440;
Sun Netra 1280; Sun Netra 1290; Sun Netra CP3010; Sun Netra CP3060;
Sun Netra CP3260; Sun Netra T2000; Sun Netra T5220; Sun Netra T5440;
Sun Enterprise 220R; Sun Enterprise 250; Sun Enterprise 420R; Sun Enterprise 450;
Sun Enterprise 3x00; Sun Enterprise 4x00; Sun Enterprise 5x00; Sun Enterprise 6x00;
Sun Enterprise 10K; Sun Fire V120; Sun Fire V125; Sun Fire V210 (a); Sun Fire V215;
Sun Fire V240; Sun Fire V245; Sun Fire V250; Sun Fire V440; Sun Fire V445;
Sun Fire V480; Sun Fire V490; Sun Fire V880; Sun Fire V890; Sun Fire V1280;
Sun Fire E2900; Sun Fire 280R; Sun Fire T1000; Sun Fire T2000; Sun Fire 3800;
Sun Fire 4810; Sun Fire 4800/6800; Sun Fire E4900/E6900; Sun Fire 12K/15K (b);
Sun Fire E20K/E25K (b); Sun SPARC Enterprise M3000;
Sun SPARC Enterprise M4000/M5000; Sun SPARC Enterprise M8000/M9000;
Sun SPARC Enterprise T5120/T5220; Sun SPARC Enterprise T5140/T5240;
Sun SPARC Enterprise T5440; External I/O Expansion Unit for Sun SPARC Enterprise
M4000, M5000, M8000, M9000, T5120, T5140, T5220 & T5240

a SF V210 onboard gigabit port support requires patch #110648-28
b Do not install PCI SCI cards into hs PCI+ PCI slot 1. For more information see bug 6178223.
c Base and Extended Fabrics, and Sun Netra CP3200 ARTM-FC-Z (XCP32X0-RTM-FC-Z)
d Two-node clusters installed with Solaris 10 11/06 (or later) and KU 118833-30 (or later) can configure e1000g cluster interconnects using back-
to-back cabling, otherwise Ethernet switches are required. See Info Doc number 88928 for more info.
e Refer to Info Doc ID: 89736 for details
f Includes support for new LW8-QFE card on SF 1280, Netra 1280 and E2900
g This support requires patch #110900-08 for Solaris 8, patches #112838-06 and 114272-02 for Solaris 9. Max nodes supported is 4 with X1074A
h Support in SC3.2U1 or later as CR 6599044 (P2/S2) was tested and integrated in SC3.2U1
i Note that the 1280/2900 series boxes do not support the X4444A cards due to a short PCI slot. However, the X4445A is supported in the
1280/2900.
j Note that the network interface is not supported with Solaris 9 as Solaris 9 does not support PCIe
k Sun Fire Cluster Link only supported on SF 6800, 12K/15K. Only DLPI mode is supported

TABLE 10-2 Cluster Interconnects: PCI Network Interfaces for x64 Servers

Network Interface Cards:
Onboard Ethernet/GigE Ports; X1027 PCI-E Dual 10 GigE Fiber Low Profile (a);
X1233A/X1233A-Z InfiniBand HCA PCI; X1235A Sun Dual Port 4x IB HCA PCI-X;
X1236A-Z Sun Dual Port 4x IB HCA PCI-E; X1333A-4 Sun Dual Port 4x IB HCA PCI-X;
X2222A Combo Dual FastEthernet-Dual SCSI PCI;
X4150A/X4150A-2 Sun GigaSwift UTP PCI; X4151A/X4151A-2 Sun GigaSwift MMF PCI;
X4422A/X4422A-2 Sun StorEdge Dual GigE/Dual SCSI PCI (b, c);
X4444A Sun Quad GigaSwift PCI UTP; X4445A Sun Quad GigaSwift PCI-X UTP;
X4446A-Z Sun x4 PCI-E Quad GigE UTP; X4447A-Z Sun x8 PCI-E Quad GigE UTP;
X5544A/X5544A-4 Sun 10 GigE PCI/PCI-X (d); X7280A-2 Sun PCI-E Dual GigE UTP;
X7281A-2 Sun PCI-E Dual GigE MMF; X7285A Sun PCI-X Dual GigE UTP Low Profile;
X7286A Sun PCI-X Single GigE MMF Low Profile; X9271A Intel Single GigE (e);
X9272A Intel Dual GigE (a); X9273A Intel Quad GigE (a)

Servers:
Sun Fire V20z; Sun Fire V40z; Sun Fire X2100 M2; Sun Fire X2200 M2; Sun Fire X4100;
Sun Fire X4100 M2; Sun Fire X4140; Sun Fire X4150; Sun Fire X4170; Sun Fire X4200;
Sun Fire X4200 M2; Sun Fire X4240; Sun Fire X4250; Sun Fire X4270; Sun Fire X4275;
Sun Fire X4440; Sun Fire X4450; Sun Fire X4540; Sun Fire X4600; Sun Fire X4600 M2;
Sun Netra X4200 M2; Sun Netra X4250; Sun Netra X4450

a Refer to Info Doc ID: 89736 for details
b Requires the Sun GigaSwift Ethernet driver for x86 Solaris 9 1.0, available at http://www.sun.com/software/download/products/40f7115e.html
c Do not install X4422A in both V40z PCI slots 2 and 3 (See CR 6196936)
d Support starting with Solaris 10 6/06
e Support starting with Solaris 10 3/05 HW1

The SBus and cPCI network interfaces that can be used to set up the cluster
interconnect are listed in Table 10-3

TABLE 10-3 Cluster Interconnects: SBus and cPCI Network Interfaces for SPARC Servers

Network Interface Cards


Onboard Ethernet/Gigabit Ports

X1018 SunSwift SBus

X1049 Quad-fast Ethernet SBus

X1059 Fast-Ethernet SBus

X1140 Gigabit Ethernet SBus

X1232 Sun Swift cPCI

X1234 Quad-fast ethernet cPCI

X1261 Gigabit Ethernet cPCI



Servers

Sun Enterprise 3x00 • • • • •


Sun Enterprise 4x00 • • • • •
Sun Enterprise 5x00 • • • • •
Sun Enterprise 6x00 • • • • •
Sun Enterprise 10K • • • •
Sun Fire 3800 • • •
Sun Fire 4800, 4810, 6800 • • •

196 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

TABLE 10-4 Cluster Interconnects: PCI-E ExpressModule Network Interfaces for SPARC
Servers

Network Interface
ExpressModules

SG-XPCIE2FCGBE-E-Z Dual 4Gb FC Dual GbE ExpressModule

SG-XPCIE2FCGBE-Q-Z Dual 4Gb FC Dual GbE ExpressModule

X1028A-Z Dual 10 GbE XFP PCIe ExpressModulea

X1288A-Z 4x Dual 10Gb/s IB HCA PCIe ExpressModule

X7282A-Z PCI-Express Dual GbE ExpressModule UTP

X7283A-Z PCI-Express Dual GbE ExpressModule MMF

X7284A-Z x4 PCIe Quad GbE ExpressModule

X7287A-Z Quad GbE UTP x8 PCIe ExpressModulea

Servers

Sun Blade T6300 • • • • • • •


Sun Blade T6320 • • • • • • •
Sun Blade T6340 • • • • • • •
USBRDT-5240 Uniboard for • • • • •
E4800, E4900, E6800, E6900,
E12K, E15K, E20K and E25K
a Requires patch 125670-02 or later. Refer to InfoDoc 89736 for details

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 197


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 10-5 Cluster Interconnects: PCI-E ExpressModule Network Interfaces for x64
Servers

Network Interface
ExpressModules
SG-XPCIE2FCGBE-E-Z Dual 4Gb FC Dual GbE ExpressModule

SG-XPCIE2FCGBE-Q-Z Dual 4Gb FC Dual GbE ExpressModule

X1028A-Z Dual 10 GbE XFP ExpressModulea

X1288A-Z 4x Dual 10Gb/s IB HCA PCIe ExpressModule

X7282A-Z PCI-Express Dual GbE ExpressModule UTP

X7283A-Z PCI-Express Dual GbE ExpressModule MMF

X7284A-Z x4 PCIe Quad GbE ExpressModule

X7287A-Z Quad GbE UTP x8 ExpressModule

Servers

Sun Blade X6220 • • • • • • • •


Sun Blade X6240 • • • • • • • •
Sun Blade X6250 • • • • • • • •
Sun Blade X6270 • • • • • • •
Sun Blade X6440 • • • • • • • •
Sun Blade X6450 • • • • • • • •
a Requires patch 125671-02 or later. Refer to InfoDoc 89736 for details

198 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

TABLE 10-6 Cluster Interconnect: Network Express Module (NEM) Network Interfaces
for SPARC Servers

Network Interface
NEMs

X4212A SB 6000 14-Port Multi-Fabric NEM

X4236A SB 6000 24-Port Multi-Fabric NEM

X4250A SB 6000 10-Port GbE NEM

X4731A SB 6048 12-Port GbE NEM

Servers

Sun Blade T6300 • • •


Sun Blade T6320 •a • •
Sun Blade T6340 • •b • •
a Requires X4822A XAUI Pass-Through Fabric Expansion
Module for 10GbE operation
b Requires X1029A Dual 10GbE Fabric Expansion Module for
10GbE operation

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 199


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 10-7 Cluster Interconnect: Network Express Module (NEM) Network Interfaces for
x64 Servers

Network Interface
NEMs
X4212A SB 6000 14-Port Multi-Fabric NEM

X4236A SB 6000 24-Port Multi-Fabric NEM

X4250A SB 6000 10-Port GbE NEM

X4731A SB 6048 12-Port GbE NEM

Servers

Sun Blade X6220 • • •


Sun Blade X6240 • •
Sun Blade X6250 •a • •
Sun Blade X6270 • •
Sun Blade X6440 • •
Sun Blade X6450 •a • •
a Requires X1029A Dual 10GbE Fabric Expansion Module for
10GbE operation

TABLE 10-8 Cluster Interconnect: XAUI Network Interfaces for SPARC Servers

Network
Interface
Cards
SESX7XA1Z

Servers

Sun Netra T5220 •


Sun Netra T5440 •
Sun SPARC Enterprise T5120 •

200 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

TABLE 10-8 Cluster Interconnect: XAUI Network Interfaces for SPARC Servers (Continued)

Network
Interface
Cards

SESX7XA1Z
Servers

Sun SPARC Enterprise T5140 •


Sun SPARC Enterprise T5220 •
Sun SPARC Enterprise T5240 •
Sun SPARC Enterprise T5440 •

The cables supported with each type of cluster interconnect are listed below:

TABLE 10-9 Cables for Cluster Interconnect

Network Interface Cable Part # for cable

Fast Ethernet Null Ethernet Cable (for point-to-point only) 3837A


Customer-supplied (for junction based or
point to point)
Gigabit Ethernet Customer-supplied (for junction based or
(Copper) point to point)
Gigabit Ethernet 2m Fiber Optic Cable 973A
(Fiber)
5m Fiber Optic Cable 9715A
15m Fiber Optic Cable 978A
Customer Supplied (for junction based or
point to point)
10 Gigabit Ethernet 2m Fibre Optic Cable X9732A
(Fiber) 5m Fibre Optic Cable X9733A
15m Fibre Optic Cable X9734A
25m Fibre Optic Cable X9736A
Customer Supplied (for junction based or
point to point)

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 201


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 10-9 Cables for Cluster Interconnect (Continued)

Network Interface Cable Part # for cable

PCI SCI 2m PCI SCI Cable 3901A


5m PCI SCI Cable 3902A
7.5m PCI SCI Cable 3903A
InfiniBand 2m IB Cable 9280A
5m IB Cable 9281A

The switches supported with each type of cluster interconnect are listed below:

TABLE 10-10 Switches for Cluster Interconnect

Network Interface Switch Part # of Switch

Fast Ethernet Customer supplied N/A


Gigabit Ethernet Customer supplied N/A
10 Gigabit Ethernet Customer Supplied N/A
PCI SCI 4 port SCI Switch 3895A
Sun Fire Link Sun Fire Link Switch
InfiniBand Sun IB Switch 9P 3152A
Voltaire ISR 9024 with Gridvision 5.1 By Solaris Ready Partner

Public Network
Clients connect to the cluster nodes through public network interfaces. It is required
that all nodes in the cluster be independently connected on the same IP subnets.

Sun Cluster 3.0 uses NAFO to manage public network interfaces, while later Sun
Cluster 3 releases use IPMP.

Note – The Sun X1018 and X1059 cards do not support IPMP, thus, they are not
supported as a public network interface with Sun Cluster 3 releases after 3.0.

For ATM networks only LANE mode is supported.

202 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

Public network PCI interfaces supported with Sun Cluster 3 for SPARC servers are
listed in Table 10-11. The table is a support matrix showing, per server, which of
the listed network interface cards are supported on the public network.

TABLE 10-11 Public Network: PCI Network Interfaces for SPARC Servers

Network Interface Cards:
Onboard Ethernet/Gigabit Ports; X1027 PCI-E Dual 10 GigE Fiber Low Profile (b);
X1032 SunSwift PCI; X1033 Fast-Ethernet PCI; X1034 Quad-Fast Ethernet PCI (c);
X1141 Gigabit Ethernet PCI; X1150/X3150 Gigabit Ethernet PCI;
X1151/X3151 Gigabit Ethernet PCI; X1157 Sun ATM 155/MMF 5.0 PCI;
X1159 Sun ATM 622/MMF 5.0 PCI; X2222A Combo Dual FastEthernet-Dual SCSI PCI;
X4150A/X4151A Gigabit Ethernet PCI; X4150A-2/X4151A-2 Gigabit Ethernet PCI;
X4422A/X4422A-2 Combo Dual Gigabit Ethernet-Dual SCSI PCI;
X4444A Quad-Gigabit Ethernet PCI (c); X4445A Quad-Gigabit Ethernet PCI (c);
X4447A-Z x8 PCI-E Quad Gigabit Ethernet (b, d);
X5544A/X5544A-4 10 Gigabit Ethernet PCI; X7280A-2 Gigabit Ethernet UTP PCI-E (d);
X7281A-2 Gigabit Ethernet MMF PCI-E (d); X7285 Sun PCI-X Dual GigE UTP Low Profile;
X7286 Sun PCI-X Single GigE MMF Low Profile

Servers:
Sun Netra T1 AC 200/DC 200; Sun Netra t 1120/1125; Sun Netra t 1400/1405;
Sun Netra 20; Sun Netra 120; Sun Netra 210; Sun Netra 240; Sun Netra 440;
Sun Netra 1280; Sun Netra 1290; Sun Netra CP3010; Sun Netra CP3060;
Sun Netra CP3260; Sun Netra T2000; Sun Netra T5220; Sun Netra T5440;
Sun Enterprise 220R; Sun Enterprise 250; Sun Enterprise 420R; Sun Enterprise 450;
Sun Enterprise 3x00; Sun Enterprise 4x00; Sun Enterprise 5x00; Sun Enterprise 6x00;
Sun Enterprise 10K; Sun Fire V120; Sun Fire V125; Sun Fire V210; Sun Fire V215;
Sun Fire V240; Sun Fire V245; Sun Fire V250; Sun Fire V440; Sun Fire V445;
Sun Fire V480; Sun Fire V490; Sun Fire V880; Sun Fire V890; Sun Fire V1280;
Sun Fire E2900; Sun Fire 280R; Sun Fire T1000; Sun Fire T2000; Sun Fire 3800;
Sun Fire 4810; Sun Fire 4800/6800; Sun Fire E4900/E6900; Sun Fire 12K/15K;
Sun Fire E20K/E25K; Sun SPARC Enterprise M3000;
Sun SPARC Enterprise M4000/M5000; Sun SPARC Enterprise M8000/M9000;
Sun SPARC Enterprise T5120/T5220; Sun SPARC Enterprise T5140/T5240;
Sun SPARC Enterprise T5440; External I/O Expansion Unit for Sun SPARC Enterprise
M4000, M5000, M8000, M9000, T5120, T5140, T5220 & T5240

a Base and Extended Fabrics, and Sun Netra CP3200 ARTM-FC (XCP32X0-RTM-FC-Z)
b Refer to Info Doc ID: 89736 for details
c Includes support for the Sun LW8-QFE card on the SF1280, Netra 1280 and E2900
d Note that the network interface is not supported with Solaris 9 as Solaris 9 does not support PCIe
NETWORK CONFIGURATION

Public network PCI interfaces supported with Sun Cluster 3 for x64 servers are listed
in Table 10-12.

TABLE 10-12 Public Network: PCI Network Interfaces for x64 Servers

Network Interface Cards

Onboard Ethernet/GigE Ports

X1027 PCI-E Dual 10 GigE Fiber Low Profilea

X2222A Combo Dual FastEthernet-Dual SCSI PCI

X4150A/X4150A-2 GigaSwift UTP PCI

X4151A/X4151A-2 GigaSwift MMF PCI

X4422A/X4422A-2 StorEdge Dual GigE/Dual SCSI PCIb

X4444A Quad GigaSwift PCI UTP

X4445A Quad GigaSwift PCI-X UTP

X4446A-Z x4 PCI-E Quad GigE UTP

X4447A-Z x8 PCI-E Quad GigE UTP

X5544A/X5544A-4 10 GigE PCI/PCI-Xd

X7280A-2 PCI-E Dual GigE UTP

X7281A-2 PCI-E Dual GigE MMF

X7285A PCI-X Dual GigE UTP Low Profile

X7286A PCI-X Single GigE MMF Low Profile

X9271A Intel Single GigEe

X9272A Intel Dual GigEa

X9273A Intel Quad GigEa


Servers

Sun Fire V20z • •


Sun Fire V40z • • • •c • • • • • •
Sun Fire X2100 M2 • • •
Sun Fire X2200 M2 • • •
Sun Fire X4100 • • •
Sun Fire X4100 M2 • • • • •
Sun Fire X4140 • • • • •
Sun Fire X4150 • • • • •
Sun Fire X4170 • • • •
Sun Fire X4200 • • • •
Sun Fire X4200 M2 • • • •
Sun Fire X4240 • • • • •
Sun Fire X4250 • • • • • •
Sun Fire X4270 • • • •
Sun Fire X4275 • • • •
Sun Fire X4440 • • • • •
Sun Fire X4450 • • • • •

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 209


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 10-12 Public Network: PCI Network Interfaces for x64 Servers (Continued)

Network Interface Cards

Onboard Ethernet/GigE Ports

X1027 PCI-E Dual 10 GigE Fiber Low Profilea

X2222A Combo Dual FastEthernet-Dual SCSI PCI

X4150A/X4150A-2 GigaSwift UTP PCI

X4151A/X4151A-2 GigaSwift MMF PCI

X4422A/X4422A-2 StorEdge Dual GigE/Dual SCSI PCIb

X4444A Quad GigaSwift PCI UTP

X4445A Quad GigaSwift PCI-X UTP

X4446A-Z x4 PCI-E Quad GigE UTP

X4447A-Z x8 PCI-E Quad GigE UTP

X5544A/X5544A-4 10 GigE PCI/PCI-Xd

X7280A-2 PCI-E Dual GigE UTP

X7281A-2 PCI-E Dual GigE MMF

X7285A PCI-X Dual GigE UTP Low Profile

X7286A PCI-X Single GigE MMF Low Profile

X9271A Intel Single GigEe

X9272A Intel Dual GigEa

X9273A Intel Quad GigEa


Servers

Sun Fire X4540 • • • • •


Sun Fire X4600 • • • • • • •
Sun Fire X4600 M2 • • • • • • •
Sun Netra X4200 M2 • • • • • • •
Sun Netra X4250 • • • • •
Sun Netra X4450 • • • • •
a Refer to Info Doc ID: 89736 for details
b Requires the Sun GigaSwift Ethernet driver for x86 Solaris 9 1.0, available at
http://www.sun.com/software/download/products/40f7115e.html
c Do not install X4422A in both V40z PCI slots 2 and 3 (See CR 6196936)
d Support starting with Solaris 10 6/06
e Support starting with Solaris 10 3/05 HW1


210 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

Public network SBus and cPCI interfaces supported with Sun Cluster 3 are listed in
Table 10-13.

TABLE 10-13 Public Network: SBus and cPCI Network Interfaces for SPARC Servers

Network Interface Cards

Onboard Ethernet/Gigabit Ports

X1018 SunSwift SBusa

X1049 Quad-fast ethernet SBus

X1059 Fast-Ethernet SBus

X1140 Gigabit Ethernet SBus

X1147 Sun ATM 155/MMF 5.0 SBus

X1149 Sun ATM 622/MMF 5.0 SBus

X1232 Sun Swift cPCI

X1234 Quad-fast ethernet cPCI

X1261 Gigabit Ethernet cPCI


Servers

Sun Enterprise 3x00 • • • • • • •


Sun Enterprise 4x00 • • • • • • •
Sun Enterprise 5x00 • • • • • • •
Sun Enterprise 6x00 • • • • • • •
Sun Enterprise 10K • • • • • •
Sun Fire 3800 • • •
Sun Fire 4800, 4810, 6800 • • •
a Sun Cluster 3.0 support only

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 211


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 10-14 Public Network: PCI-E ExpressModule Network Interfaces for SPARC
Servers

Network Interface ExpressModules


SG-XPCIE2FCGBE-E-Z Dual 4Gb FC Dual GbE ExpressModule

SG-XPCIE2FCGBE-Q-Z Dual 4Gb FC Dual GbE ExpressModule

X1028A-Z Dual 10 GbE XFP ExpressModulea

X7282A-Z PCI-Express Dual GbE ExpressModule UTP

X7283A-Z PCI-Express Dual GbE ExpressModule MMF

X7284A-Z x4 PCIe Quad GbE ExpressModule

X7287A-Z Quad GbE UTP x8 PCIe ExpressModulea

Servers

Sun Blade T6300 • • • • • • •


Sun Blade T6320 • • • • • • •
Sun Blade T6340 • • • • • • •
USBRDT-5240 Uniboard for • • • • •
E4800, E4900, E6800, E6900,
E12K, E15K, E20K and E25K
a Requires patch 125670-02 or later. Refer to InfoDoc 89736 for details.

212 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

TABLE 10-15 Public Network: PCI-E ExpressModule Network Interfaces for x64 Servers

Network Interface ExpressModules

SG-XPCIE2FCGBE-E-Z Dual 4Gb FC Dual GbE ExpressModule

SG-XPCIE2FCGBE-Q-Z Dual 4Gb FC Dual GbE ExpressModule

X1028A-Z Dual 10 GbE XFP ExpressModulea

X7282A-Z PCI-Express Dual GbE ExpressModule UTP

X7283A-Z PCI-Express Dual GbE ExpressModule MMF

X7284A-Z x4 PCIe Quad GbE ExpressModule

X7287A-Z Quad GbE UTP x8 ExpressModulea

Servers

Sun Blade X6220 • • • • • • •


Sun Blade X6240 • • • • • • •
Sun Blade X6250 • • • • • • •
Sun Blade X6270 • • • • • •
Sun Blade X6440 • • • • • • •
Sun Blade X6450 • • • • • • •
a Requires patch 125671-02 or later. Refer to InfoDoc 89736 for details.

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 213


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 10-16 Public Network: Network Express Module (NEM) Network Interfaces for
SPARC Servers

Network Interface
NEMs
X4212A SB 6000 14-Port Multi-Fabric NEM

X4236A SB 6000 24-Port Multi-Fabric NEM

X4250A SB 6000 10-Port GbE NEM

X4731A SB 6048 12-Port GbE NEM

Servers

Sun Blade T6300 • • •


Sun Blade T6320 •a • •
Sun Blade T6340 • •b • •
a Requires X4822A XAUI Pass-Through Fabric Expansion
Module for 10GbE operation
b Requires X1029A Dual 10GbE Fabric Expansion Module for
10GbE operation

214 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

TABLE 10-17 Public Network: Network Express Module (NEM) Network Interfaces for x64
Servers

Network Interface
NEMs

X4212A SB 6000 14-Port Multi-Fabric NEM

X4236A SB 6000 24-Port Multi-Fabric NEM

X4250A SB 6000 10-Port GbE NEM

X4731A SB 6048 12-Port GbE NEM

Servers

Sun Blade X6220 • • •


Sun Blade X6240 • •
Sun Blade X6250 •a • •
Sun Blade X6270 • •
Sun Blade X6440 • •
Sun Blade X6450 •a • •
a Requires X1029A Dual 10GbE Fabric Expansion Module for
10GbE operation

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 215


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 10-18 Public Network: XAUI Network Interfaces for SPARC Servers

Network
Interface
Cards

SESX7XA1Z

Servers

Sun Netra T5220 •


Sun Netra T5440 •
Sun SPARC Enterprise •
T5120/T5220
Sun SPARC Enterprise •
T5140/T5240
Sun SPARC Enterprise T5440 •

Network Adapter Failover (NAFO)


Sun Cluster 3.0 provides a feature called Network Adapter Fail Over (NAFO) for
high availability of public network interfaces. Please note that NAFO is ONLY
supported on Sun Cluster 3.0. IPMP takes the place of NAFO in Sun Cluster 3.1.
NAFO detects the failure of a network adapter, and automatically starts using a
spare unused network adapter on the same server (if one exists and is configured for
this purpose). Configuration rules for NAFO are listed below:
■ It is required to set up NAFO for each public network interface.
■ It is recommended to configure redundant network adapters for every public
network interface.
■ Network adapters of different speeds are now supported in the same NAFO group.
For example, a Quad Gigabit Ethernet controller and a Sun Fast Ethernet
controller can now be part of the same NAFO group.

216 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


NETWORK CONFIGURATION

IPMP Support
IPMP, Sun's Network Multipathing implementation for the Solaris Operating
System, is easy to use, and enables a server to have multiple network ports
connected to the same subnet. Solaris IPMP software provides resilience from
network adapter failure by detecting the failure or repair of a network adapter and
switching the network address to and from the alternative adapter. Moreover, when
more than one network adapter is functional, Solaris IPMP increases data
throughput by spreading outbound packets across adapters.

Solaris IPMP provides a solution for most failover scenarios, while requiring
minimal system administrator intervention. With Solaris IPMP, there is no
degradation in system or network performance when IPMP functions are not
invoked, and failover functions are accomplished in a short time frame. The Public
Network Management (PNM) / Network Adapter Fail Over (NAFO) facility supported
in Sun Cluster 3.0 is officially end of life. Starting with Sun Cluster 3.1, Solaris
IPMP is the replacement technology for ensuring public network availability on
SunPlex systems. An example IPMP configuration is sketched after the notes below.
■ It is recommended to configure redundant network adapters for every public
network interface.
■ The Sun X1018 and X1059 cards do not support IPMP, thus, they are not
supported with Sun Cluster 3.1 as a public network interface.
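
Shown below is a minimal, probe-based IPMP sketch for one node. The adapter names
(e1000g0, e1000g1), hostnames, test addresses, and group name sc_ipmp0 are
hypothetical and must already be resolvable in /etc/hosts.

  # /etc/hostname.e1000g0 -- data address plus a non-failover test address
  node1 netmask + broadcast + group sc_ipmp0 up addif node1-test0 deprecated -failover netmask + broadcast + up

  # /etc/hostname.e1000g1 -- standby adapter in the same IPMP group
  node1-test1 netmask + broadcast + deprecated group sc_ipmp0 -failover standby up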

Public Network Link Aggregation


Link aggregation is supported by Sun Cluster for public networking. Link
aggregations must be put into IPMP groups for use by Sun Cluster.

There are two options for implementing link aggregation with Sun Cluster:
■ Sun Trunking 1.3.
■ The link aggregation software included with Solaris 10 1/06 (update 1) and later.
See dladm(1M).

The Ethernet NIC and Solaris release dictate which option can be used.

Sun Cluster supports Sun Trunking 1.3 with Solaris 8, 9 and 10.

Solaris link aggregation is supported with Solaris 10 1/06 (the first Solaris release
providing this feature) and later. A configuration sketch follows the references
below.

The Ethernet NIC must be supported by the server. Refer to the Public Network
support tables earlier in this chapter to determine Sun Cluster support.

Then consult the Solaris link aggregation and Sun Trunking 1.3 hardware support
information for configuration requirements:

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 217


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

■ Solaris link aggregation: “Compatibility/Patches” section on http://systems-


tsc/twiki/bin/view/Netprod/Dladm
■ Sun Trunking 1.3: “Sun Trunking Platform Support Matrix” link on
http://www.sun.com/products/networking/ethernet/suntrunking/
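
As a hedged sketch of the Solaris 10 1/06 (and later) option, the commands below
create a link aggregation and place it into an IPMP group as required above. The
adapter names, aggregation key 1, group name, and the node1-agg hostname are all
assumptions.

  # Create link aggregation 1 from two physical ports
  dladm create-aggr -d e1000g2 -d e1000g3 1
  dladm show-aggr

  # Plumb the resulting aggr1 interface and place it in an IPMP group
  # (node1-agg is an assumed hostname with an /etc/hosts entry)
  ifconfig aggr1 plumb
  ifconfig aggr1 node1-agg netmask + broadcast + group sc_ipmp1 up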

Public Network VLAN Tagging


■ IEEE 802.1Q VLAN Tagging is supported with Sun Cluster (see the naming sketch
below).
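
For reference, Solaris tagged VLAN interfaces follow the (VLAN ID x 1000 + device
instance) naming convention. A brief sketch, assuming hypothetical VLAN ID 123 on
adapter instance e1000g2 and an assumed node1-vlan123 hosts entry:

  # VLAN 123 tagged on e1000g2 is addressed as logical interface e1000g123002
  ifconfig e1000g123002 plumb
  ifconfig e1000g123002 node1-vlan123 netmask + broadcast + group sc_ipmp2 up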

Global Networking
Sun Cluster 3 provides global networking between the clients and the cluster nodes
through the use of following features:
■ Global Interface (GIF): A global interface is a single network interface for
incoming request from all the clients. The responses are sent out directly by the
individual nodes processing the requests. In case the node hosting the global
interface fails, the interface is failed over to a backup node.
■ Cluster Interconnect: The cluster interconnect is used for request/data transfer
between the cluster nodes, thus providing global connectivity to all the cluster
nodes from any one node.
■ It is strongly recommended to configure redundant network adapters in the GIF’s
NAFO/IPMP group. A shared-address command sketch follows this list.
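
As a hedged sketch of how a global interface is typically realized, the commands
below create a SUNW.SharedAddress resource and a scalable service that uses it
(Sun Cluster 3.2 command set). The resource group and resource names, the
www-shared hostname, and the Apache Bin_dir path are assumptions.

  # Failover resource group that hosts the shared (global interface) address
  clresourcegroup create sa-rg
  clressharedaddress create -g sa-rg -h www-shared sa-rs

  # Scalable resource group running the service on up to four nodes
  clresourcegroup create -p Maximum_primaries=4 -p Desired_primaries=4 \
      -p RG_dependencies=sa-rg web-rg
  clresourcetype register SUNW.apache
  clresource create -g web-rg -t SUNW.apache -p Scalable=true \
      -p Port_list=80/tcp -p Network_resources_used=sa-rs \
      -p Bin_dir=/usr/apache2/bin apache-rs

  # Bring both groups online under cluster management
  clresourcegroup online -M sa-rg web-rg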

Jumbo Frames Support


■ Jumbo frames are supported with Sun Cluster. Refer to the jumbo frames
discussion in the Cluster Interconnect section, page 186, for requirements.

218 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


CHAPTER 11

Software Configuration

Typically, each node in a Sun Cluster will have the Solaris Operating Environment,
Sun Cluster 3, volume management software, and applications along with their
agents and fault monitors running on it.

Solaris Releases
All nodes in the cluster are required to run the same version (including the update
release) of the operating system.

The Solaris releases supported with Sun Cluster 3 are listed below.

TABLE 11-1 Solaris Releases for Sun Cluster 3.1 SPARC


Sun Cluster 3.1 5/03 (FCS)

Sun Cluster 3.1 10/03 (update 1)

Sun Cluster 3.1 4/04 (update 2)

Sun Cluster 3.1 9/04 (update 3)

Sun Cluster 3.1 8/05 (update 4)

Supported Solaris Releases

Solaris 8 2/02 (update 7) • • • • •


Solaris 8 HW 12/02 (PSR 1) • • • • •
Solaris 8 HW 5/03 (PSR 2) • • • • •
Solaris 8 HW 7/03 (PSR 3) • • • • •

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 219


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 11-1 Solaris Releases for Sun Cluster 3.1 SPARC (Continued)

Sun Cluster 3.1 5/03 (FCS)

Sun Cluster 3.1 10/03 (update 1)

Sun Cluster 3.1 4/04 (update 2)

Sun Cluster 3.1 9/04 (update 3)

Sun Cluster 3.1 8/05 (update 4)


Supported Solaris Releases

Solaris 8 HW 2/04 (PSR 4) • • • • •

Solaris 9 (FCS) • • • • •

Solaris 9 9/02 (update 1) • • • • •

Solaris 9 12/02 (update 2) • • • • •

Solaris 9 4/03 (update 3) • • • • •

Solaris 9 8/03 (update 4) • • • • •

Solaris 9 12/03 (update 5) • • • • •

Solaris 9 4/04 (update 6) • • • • •

Solaris 9 9/04 (update 7) • • • • •

Solaris 9 9/05 (update 8) • • • • •

Solaris 9 9/05 HW (update 9) •

Solaris 10 (FCS) •

Solaris 10 3/05 HW1 •

Solaris 10 3/05 HW2 •

Solaris 10 1/06 (update 1) •

Solaris 10 6/06 (update 2) •

Solaris 10 11/06 (update 3) •

Solaris 10 8/07 (update 4) •

Solaris 10 5/08 (update 5) •

Solaris 10 10/08 (update 6) •

Solaris 10 5/09 (update 7) •

220 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


SOFTWARE CONFIGURATION

TABLE 11-2 Solaris Releases for Sun Cluster 3.2 SPARC

Sun Cluster 3.2 (FCS)

Sun Cluster 3.2 2/08 (update 1)

Sun Cluster 3.2 1/09 (update 2)


Supported Solaris Releases

Solaris 9 9/05 (update 8) • • •

Solaris 9 9/05 HW (update 9) • • •

Solaris 10 11/06 (update 3) • •

Solaris 10 8/07 (update 4) • •

Solaris 10 5/08 (update 5) • • •

Solaris 10 10/08 (update 6) • • •

Solaris 10 5/09 (update 7) •

TABLE 11-3 Solaris Releases for Sun Cluster 3.2 x64


Sun Cluster 3.2 (FCS)

Sun Cluster 3.2 2/08 (update 1)

Sun Cluster 3.2 1/09 (update 2)

Supported Solaris Releases

Solaris 10 11/06 (update 3) • •

Solaris 10 8/07 (update 4) • •

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 221


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 11-3 Solaris Releases for Sun Cluster 3.2 x64 (Continued)

Sun Cluster 3.2 (FCS)

Sun Cluster 3.2 2/08 (update 1)

Sun Cluster 3.2 1/09 (update 2)


Supported Solaris Releases

Solaris 10 5/08 (update 5) • • •

Solaris 10 10/08 (update 6) • • •

Solaris 10 5/09 (update 7) •

Application Services
An application service is an application along with an agent which makes the
application highly available and/or scalable in Sun Cluster. Application services
can be of two types: failover and scalable. Sun Microsystems has developed agents
and fault monitors for a core set of applications. These application services are
discussed in the following sections. Sun Microsystems has also made available an
application service development toolkit for developing custom agents and fault
monitors for other applications. Unless otherwise noted, all application services are
supported with all hardware components (servers, storage, network interfaces, etc.)
stated as supported in previous chapters. Unless otherwise noted, all services are
32-bit application services. For more information on application services, please see
the Sun Cluster Data Services Planning and Administration Guide at
http://docs.sun.com/

All the Sun Cluster 3.1 agents are supported in the Sun Cluster 3.2 release. If you
upgrade Sun Cluster 3.1 software to Sun Cluster 3.2 software, we recommend that
you upgrade all agents to Sun Cluster 3.2 to utilize any new features and bug fixes
in the agent software. If you upgrade the application software you must apply the
latest agent patches to make the new version of the application highly available on
Sun Cluster. Please check the application support matrix to make sure the
application version is supported with Sun Cluster.

222 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


SOFTWARE CONFIGURATION

All Sun Cluster 3.2 u1 agents are supported on the SC 3.2 core. After installing the
SC 3.2 core platform, please download the latest agent packages (e.g. SC 3.2 u1) or
apply the latest agent patches. Agents are continuously enhanced to support the
latest application versions. The latest agent updates and agent patches contain fixes
to support the newer application versions.

Failover Services
A failover service has only one instance of the application running in the cluster at a
time. In case of application failure, an attempt is made to restart the application on
the same node. If unsuccessful, the application is restarted on one of the surviving
nodes, depending on the service configuration. This process is called failover.
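
To make this concrete, the outline below registers a failover HA-NFS service with
the Sun Cluster 3.2 command set. It is a sketch only: the resource, group, and
hostname names are hypothetical, and the storage resource (for example
SUNW.HAStoragePlus) and the dfstab.nfs-rs share file under PathPrefix are assumed
to be prepared separately.

  # Register the data service resource type and create a failover group
  clresourcetype register SUNW.nfs
  clresourcegroup create -p PathPrefix=/global/nfs/admin nfs-rg

  # Logical hostname that clients mount from; it moves with the group
  clreslogicalhostname create -g nfs-rg -h nfs-lh nfs-lh-rs

  # The NFS application resource itself
  clresource create -g nfs-rg -t SUNW.nfs \
      -p Resource_dependencies=nfs-lh-rs nfs-rs

  # Bring the service online; the fault monitor restarts or fails it over
  clresourcegroup online -M nfs-rg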

The table below lists the failover services supported with Sun Cluster 3.1.

TABLE 11-4 Failover Services for Sun Cluster 3.1 SPARC

Application Application Version SC Version Solaris Comments

Agfa IMPAX 4.5-5.x 3.1 9 • Requires patch 118983-01 or later


Apache Proxy Server All versions 3.1 8, 9, 10
shipped with
Solaris
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.1 8, 9, 10 • Supported in failover zones (using the
6.0 container agent)
Apache Web Server All versions 3.1 8, 9, 10
shipped with
Solaris
BEA WebLogic Server 7.0, 8.1, 9.0 3.1 8, 9, 10
DHCP N/A 3.1 8, 9, 10 • Requires patch 116389-09 or later
DNS 3.1 8, 9, 10
HADB (JES) All versions 3.1 8, 9, 10
supported by JES
Application Server
EE are supported
(4.4, 4.5)
IBM WebSphere MQ 5.3, 6.0 3.1 8, 9, 10 • Agent supported in a failover zone
(using the container agent) in S10.
Requires patch 116392-11 or 116428-10.
Please refer to Info Doc 83129 for which
patch to use.

Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only 223


SUN CLUSTER 3 CONFIGURATION GUIDE • OCTOBER 13, 2009

TABLE 11-4 Failover Services for Sun Cluster 3.1 SPARC (Continued)

Application Application Version SC Version Solaris Comments

JES Application Server All versions till JES 3.1 8, 9, 10


previously known as 5 are supported
SunOne Application (8.1EE)
Server
JES Directory Server This agent is 3.1 • Please contact the Directory Server
owned and product group for details
supported by the
Directory Server
product group
JES Messaging Server This agent is 3.1 • Please contact the Messaging Server
previously known as owned and product group for details
iPlanet Messaging supported by the
Server (ims) Messaging Server
product group
JES Web Proxy Server All versions till JES 3.1 8, 9, 10
previously known as 5 are supported
SunOne Proxy Server
JES Web Server All versions till JES 3.1 8, 9, 10
previously known as 5 are supported
SunOne Web Server (up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release)
MySQL 3.23.54a-4.0.23 3.1 8, 9, 10 • Supported in failover zones (using the
4.1.6-4.1.22 container agent)
5.0.15-5.0.45, 5.0.85
N1 Grid Engine 5.3 3.1 8, 9 • Requires patch 118689-02 or later
6.0, 6.1 8, 9, 10
N1 Grid Service 4.1, 5.0, 5.0u1, 5.1, 3.1 8, 9, 10 • Supported in failover zones (using the
Provisioning System 5.2, 5.21-5.2.4 container agent)
Netbackup This agent is 3.1 • Please contact Veritas/Symantec for
owned and details
supported by
Veritas/Symantec
NFS V3 3.1 8, 9, 10 • Not supported in a Container

224 Sun Proprietary/Confidential: Sun Employees and Authorized Partners Only


SOFTWARE CONFIGURATION

TABLE 11-4 Failover Services for Sun Cluster 3.1 SPARC (Continued)

Application Application Version SC Version Solaris Comments

Oracle Application 9.0.2 - 9.0.3(10g) 3.1 8, 9 • Requires patch 118328-03 or later


Server • Solaris 10 requires patch 118328-04 or
9.0.4 - 10.1.2 8, 9, 10
later
• Note 1: 9.0.2-9.0.3 = 9iAS
• Note 2: 9.0.4 = 10g AS
Oracle E-Business 11.5.8, 11.5.9, 3.1 8, 9, 10 • Requires patch 116427-05 or later
Suite 11.5.10-11.5.10cu2
12.0
Oracle Server 8.1.6 32 & 64 bit 3.1 8, 9 • Note that Oracle 8.1.x have been
8.1.7 32 & 64 bit desupported by Oracle. However, when
9i 32 & 64 bit the customer has continuing support for
Oracle 8.1.x from Oracle, Sun will
continue supporting the Sun Cluster HA
Oracle agent with it.
9i R2 32 & 64 bit 8, 9, 10 • Both Standard and Enterprise Editions
10G R1 & R2 64 bit are supported

11g 9, 10 • Both Standard and Enterprise Editions


are supported
PostgreSQL 7.3.x, 8.0.x, 8.1.x, 3.1 8, 9, 10 • Supported in failover zones (using the
8.2.x, 8.3.x container agent)
Samba 2.2.2 3.1 8, 9
2.2.7
2.2.8
2.2.8a
2.2.2 (w/o 9, 10 • Requires 116390-06 for SUNWscsmb
Winbind) v3.1.0 ARCH=SPARC
2.2.7a with patch • Requires 116727-04 for SUNWscsmb
114684-01 (w/o v3.1.1 ARCH=SPARC
Winbind) • Requires 116726-03 for SUNWscsmb
2.2.8a with patch v3.1.0 ARCH=ALL
114684-02 (w/o
Winbind)
up to 3.0.14a
3.0.23d- 3.0.27 8, 9, 10
SAP 4.0, 4.5, 4.6, 6.10, 3.1 8, 9, 10 • The intermediate releases of SAP
6.20, 6.30, 6.40, 7.0, application, for example 4.6C, 4.6D, etc.,
NW 2004 (SR1, are all supported
SR2, SR3) • The Sun Cluster Resource Types (RTs) for
making the traditional SAP components
(Central Instance and App Server
Instances) Highly Available are:
- SUNW.sap_ci_v2
- SUNW.sap_as_v2
• The agent part number for making the
traditional SAP components (CI and AS)
Highly Available is CLAIS-XXG-9999
• The RTs for making WebAS, SCS, Enq
and Replica Highly Available are:
- SUNW.sapwebas,
- SUNW.sapscs
- SUNW.sapenq
- SUNW.saprepl
• The Agent part number for making
WebAS, SCS, Enq and Replica Highly
Available is CLAIS-XAI-9999
• The RTs for making SAP J2EE Highly
Available are:
- SUNW.sapscs
- SUNW.sapenq
- SUNW.saprepl
- SUNW.sap_j2ee
• The Agent part numbers for making SAP
J2EE Highly Available are:
- CLAIS-XAI-9999
- CLAIS-XAE-9999
• SAP J2EE agent not supported on S10
• In Sun Cluster 3.2 the SAP J2EE
functionality is available in the
SUNW.sapwebas RT. There is no separate
GDS resource needed to make SAP J2EE
Highly Available. One single part
number CLAIS-XAI-9999 will make
ABAP, J2EE or ABAP+J2EE Highly
Available. Refer to SC 3.2 section of this
config guide for details.
SAP (Continued) • RTs SUNW.sapwebas, SUNW.sap_j2ee,


SUNW.sap_as_v2, can be configured in
Multi Master configuration. Refer to the
admin guides for details
• NetWeaver 2004s is based on SAP kernel 7.0
• NetWeaver 2004 is based on SAP kernel 6.40
• Refer to the following document for
details on SAP agents:
http://galileo.sfbay/agent_support_mat
rix/SAP-Config-Guide/
SAP LiveCache 7.4, 7.5, 7.6 3.1 8, 9, 10 • RTs for making Livecache and Xserver
Highly Available are:
- SUNW.sap_livecache
- SUNW.sap_xserver.
• Part number: CLAIS-XXL-9999
SAP MaxDB 7.4, 7.5 7.6, 7.7 3.1 8, 9, 10 • RTs for making MaxDB Highly Available
are:
- SUNW.sapdb
- SUNW.sap_xserver
• Part number: CLAIS-XAA-9999
Siebel 7.0, 7.5, 7.7 3.1 8 • Apply the latest Sun Cluster 3.1 Siebel
agent patch
7.7, 7.8 9
7.8.2 9, 10
Solaris Containers Brand type: Native 3.1 10 • Requires SC 3.1 08/05
(a.k.a. zones) and 1X • Solaris 8 zones support added with patch
120590-06
Sun Java Server All versions till JES 3.1 8, 9, 10
Message Queue 5 are supported
previously known as (3.5, 3.6, 4.0, 4.1)
JES MQ Server and
SunOne MQ Server
Sun StorEdge 3.1, 3.1 8, 9


Availability Suite
3.2.1 9 • Sun Cluster 3.1u4 requires at least Solaris
9u9 and patches 116466-09, 116467-09
and 116468-13
• HA-ZFS not supported with AVS
4.0 10 • Sun Cluster 3.1u4 requires at least Solaris
10u3 and patch 123246-02
• HA-ZFS not supported with AVS
SWIFTAlliance Access 5.0 3.1 8, 9 • Requires patch 118050-03 or later
5.5 9 • Requires patch 118050-03 or later
5.9 9 • Requires patch 118050-05 or later
6.0 10 • Requires S10 01/06 or 11/06 and patch
118050-05 or later
SWIFTAlliance 5.0 3.1 9 • Requires patch 118984-04 or later
Gateway
6.0 10 • Requires S10 01/06 or 11/06 and patch
118984-04 or later
6.1 • Supported on all S10 versions supported
by Swift and Sun Cluster
Sybase ASE 12.0-12.5.1, 12.5.2 3.1 8, 9 • Supported in HA mode only - both
and 12.5.3 asymmetric and symmetric. The
Companion Server feature is not
12.5.2, 12.5.3, 15.0, 10
supported.
15.0.1 and 15.0.2
• NOTE: Latest sybase agent patches
required for running the supported
configurations
Note - There are two Sybase agents: One
sold by Sun, another sold by Sybase. This
table refers to the agent sold by Sun.
WebSphere Message 5.0, 6.0 3.1 • Requires patch 116728-04 or later
Broker

TABLE 11-5 Failover Services for Sun Cluster 3.1 x64

Application Application Version SC Version Solaris Comments

Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.1 9, 10 • Supported in failover zones (using the
6.0 container agent)
BEA WebLogic Server 7.0, 8.1 3.1 9, 10
DHCP N/A 3.1 9, 10 • Requires patch 117639-03 or later
DNS 3.1 9, 10
HADB (JES) All versions 3.1 9, 10
supported by JES
Application Server
EE are supported
JES Application Server All versions till JES 3.1 9, 10
previously known as 5 are supported
SunOne Application (up to 8.1EE)
Server
JES Web Proxy Server All versions till JES 3.1 9, 10
previously known as 5 are supported
SunOne Proxy Server
JES Web Server All versions till JES 3.1 9, 10
previously known as 5 are supported
SunOne Web Server (up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release)
MySQL 3.23.54a-4.0.23, 3.1 9, 10 • Supported in failover zones (using the
4.1.6-4.1.22, 5.0.15- container agent)
5.0.45, 5.0.85
N1 Grid Engine 5.3 3.1 9, 10 • Requires patch 118689-02 or later
6.0, 6.1
N1 Grid Service Provi- 4.1, 5.0, 5.0u1, 5.1, 3.1 9, 10 • Supported in failover zones (using the
sioning System 5.2, 5.2.1 - 5.2.4 container agent)
NFS V3 3.1 9, 10
Oracle Server 10G R1 32 bit 3.1 10 • Both Standard and Enterprise Editions
10G R2 32 & 64 bit are supported with Sun Cluster 3.1U4

PostgreSQL 7.3.x, 8.0.x, 8.1.x, 3.1 9, 10 • Supported in failover zones (using the
8.2.x, 8.3.x container agent)
Samba 2.2.2 to 3.0.27 3.1 9, 10 • Requires patch 116726-05 or later
Solaris Containers Brand type: Native 3.1 10 • Requires SC 3.1 08/05


(a.k.a. zones) and lx • lx zones support added with patch
120590-05
Sun Java Server All versions till JES 3.1 9, 10 • Always apply the latest agent patch
Message Queue 5 are supported
previously known as (3.5, 3.6, 4.0, 4.1)
JES MQ Server and
SunOne MQ Server
Sun StorEdge 4.0 3.1 10 • Sun Cluster 3.1u4 requires at least Solaris
Availability Suite 10u3 and patch 123247-02
• HA-ZFS not supported with AVS

The tables below list the failover services supported with Sun Cluster 3.2:

TABLE 11-6 Failover Services for Sun Cluster 3.2 SPARC

Application Application Version SC Version Solaris Comments

Agfa IMPAX 4.5 - 5.x, 6.3 3.2 9, 10 • Agent not supported in non-global zones
• Solaris 10 version support is for Agfa
IMPAX 6.3 only
Apache Proxy Server All 2.2.x versions 3.2 9, 10 • Agent supported in global zones and
and all versions of zone nodes (SC 3.2 support of zones)
Apache shipped • Agent not supported in failover zones
with Solaris. • Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.2 9, 10 • Agent supported in global zones, failover
6.0 zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Apache Web Server All 2.2.x versions 3.2 9, 10 • Agent supported in global zones, zone
and all versions of nodes (SC 3.2 support of zones), and
Apache shipped Zone Clusters (a.k.a. cluster brand zones)
with Solaris. • Agent not supported in failover zones
• Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported
BEA WebLogic Server 7.0, 8.1, 9.0, 9.2, 3.2 9, 10 • Agent supported in global zones and
10.0, 10.2 zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Please see the Release Notes that
documents an issue discovered during
the qualification of WLS in non-global
zones
• Apply the latest patch or upgrade the
agent to SC 3.2 u1 or later
DHCP N/A 3.2 9, 10 • Agent not supported in non-global zones
DNS 3.2 9, 10 • Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
HADB (JES) All versions 3.2 9, 10 • Agent not supported in non-global zones
supported by JES
Application Server
EE are supported
(4.4, 4.5)
IBM WebSphere MQ 5.3, 6.0, 7.0 3.2 9, 10 • Supported in global zones, failover zones
(using the container agent) and zone
nodes (SC 3.2 support of zones)
Informix V9.4, 10, 11 and 3.2 10 • Agent available for download at
11.5 http://www.sun.com/download under
Systems Administration category and
Clustering sub-category
JES Application Server All versions till JES 3.2 9, 10 • Agent supported in global zones and
previously known as 5 U1, 9.1, 9.1 UR2, zone nodes (SC 3.2 support of zones)
SunOne Application GlassFish V2 UR2 • Agent not supported in failover zones
Server
JES Directory Server 5.2.x. This agent is 3.2 • Please contact the Directory Server
owned and product group: Ludovic Poitou, Regis
supported by the Marco
Directory Server • For more info:
product group http://blogs.sfbay.sun.com/Ludo/date/
20061106
JES Messaging Server 6.3. This agent is 3.2 • Please contact the Messaging Server
previously known as owned and product group: Durga Tirunagari
iPlanet Messaging supported by the • For more info, mail to
Server (ims) Messaging Server messaging@sun.com
product group
JES Web Proxy Server All versions till JES 3.2 9, 10 • Agent supported in global zones and
previously known as 5 are supported zone nodes (SC 3.2 support of zones)
SunOne Proxy Server (up to 4.0) • Agent not supported in failover zones
JES Web Server All versions up to 3.2 9, 10 • Agent supported in global zones and
previously known as and including JES zone nodes (SC 3.2 support of zones)
SunOne Web Server 5 U1 are • Agent not supported in failover zones
supported. All
releases up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release
Kerberos Version shipped 3.2 10 • Agent supported in global zones and
with Solaris zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
MySQL 3.23.54a-4.0.23 3.2 9, 10 • Agent supported in global zones, failover
4.1.6-4.1.22 zones (using the container agent), zone
5.0.15-5.0.85 nodes (SC 3.2 support of zones) and
Zone Clusters (a.k.a. cluster brand zones)
5.1.x
• MySQL versions 5.0.x and 5.1.x require
patches 126031-04 (S9), 126032-04 (S10)
N1 Grid Engine 6.0, 6.1 3.2 9, 10 • Agent not supported in non-global zones
N1 Grid Service 4.1, 5.0, 5.0u1, 5.1, 3.2 9, 10 • Agent supported in global zones, failover
Provisioning System 5.2, 5.2.1 - 5.2.4 zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Netbackup This agent is 3.2 • Please contact Veritas/Symantec for
owned and details
supported by
Veritas/Symantec
NFS V3 3.2 9, 10 • Agent not supported in non-global zones
V4 10
Oracle Application 9.0.2 - 9.0.3 (10g) 3.2 9 • Note 1: 9.0.2 - 9.0.3 = 9iAS
Server • Note 2: 9.0.4 = 10g AS
9.0.4 - 10.1.3.1 9, 10
• Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Apply the latest agent patch
Oracle E-Business 11.5.8, 11.5.9, 3.2 9, 10 • Agent supported in global zones and
Suite 11.5.10 -11.5.10cu2 zone nodes (SC 3.2 support of zones)
12.0 • Agent not supported in failover zones
• Apply the latest agent patch
Oracle Server 8.1.6 32 & 64 bit 3.2 9 • Note that Oracle 8.1.x have been
8.1.7 32 & 64 bit desupported by Oracle. However, when
9i 32 & 64 bit a customer has continuing support for
Oracle 8.1.x from Oracle, Sun will
continue supporting the Sun Cluster HA
Oracle agent with it.
9i R2 32 & 64 bit 9, 10 • Both Standard and Enterprise Editions
10G R1 & R2 64 bit are supported
11g • Supported in non-global zones

10.2.0.4 10 • HA Oracle agent is supported in Solaris


Container (a.k.a. Zone) Clusters
• Support starts with Solaris 10 5/09 and
SC 3.2 1/09
• UFS and standalone QFS 5.0 may be used
with or without SVM
• ASM is not supported with HA Oracle
• NAS is not supported with Zone Clusters
• Other features that are currently
supported with HA Oracle are supported
PostgreSQL 7.3.x, 8.0.x, 8.1.x, 3.2 9, 10 • Agent supported in global zones, failover
8.2.x, 8.3.x zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
• PostgreSQL agent in SC 3.2 u2 supports
Write Ahead Log (WAL) shipping
functionality. Get this functionality in
one of the following ways:
- Install the SC 3.2 u2 agent, or
- Upgrade to the SC 3.2 u2 agent, or
- Apply the latest agent patch
• Feature info: This project enhances the
PostgreSQL agent to provide the ability
to support log shipping functionality as a
replacement for shared storage thus
eliminating the need for shared storage
in a cluster when using PostgreSQL
Databases. This feature provides support
for PostgreSQL database replication
between two different clusters or
between two different PostgreSQL
failover resources within one cluster.
Samba 2.2.2 to 3.0.27 3.2 9, 10 • Agent supported in global zones, failover
zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
SAP 4.0, 4.5, 4.6 3.2 9, 10 • The intermediate releases of SAP


6.10, 6.20, 6.30, 6.40 application, for example 4.6C, 4.6D, etc.,
7.0, 7.1 are all supported
NW 2004 (SR1, • The Sun Cluster Resource Types for
SR2, SR3) making the traditional SAP components
(Central Instance and Application Server
NW 2004s (SR1,
Instances) Highly Available are:
SR2, SR3).
SUNW.sap_ci_v2, SUNW.sap_as_v2
• The agent part number to make the
traditional SAP components (CI and AS)
Highly Available is CLAIS-XXG-9999
• The RTs for making WebAS, SCS, Enq,
Replica Highly Available are:
- SUNW.sapwebas
- SUNW.sapscs
- SUNW.sapenq
- SUNW.saprepl
• Agent part number for making WebAS,
SCS, Enq, Replica Highly Available is
CLAIS-XAI-9999.
Refer to the admin guides for details on
configuring ABAP, ABAP+J2EE and J2EE.
• RTs SUNW.sapwebas, SUNW.sap_as_v2
can be configured in Multi Master
configuration. Refer to the admin guides
for details.
• Agent supported in global zones and
zone nodes (SC 3.2 support of zones).
• Agent not supported in failover zones.
• NetWeaver 2004s is based on SAP kernel
7.00
• NetWeaver 2004 is based on SAP kernel
6.40
• Refer to the following document for
details on SAP agents:
http://galileo.sfbay/agent_support_mat
rix/SAP-Config-Guide
SAP (Continued) • SAP Exchange Server (XI, another name


PI) is an ABAP+Java application based
on SAP NetWeaver. SAP Enterprise
Portal is a Java-only application based on
SAP NetWeaver. These components of
SAP can be made highly available using
the “SC Agent for SAP Enqueue Server”
(CLAIS-XAI-9999), which includes
agents for web application server,
message server, enqueue server and
enqueue replication server.
• Apply patch 126062-06 to make SAP 7.1
Highly Available on SC 3.2 GA or use the
SAP WebAS agent (SUNW.sapenq,
SUNW.saprepl, SUNW.sapscs,
SUNW.sapwebas) from the SC 3.2 1/09
(u2) release
• Please refer to the Release Notes before
configuring the SAP resources
SAP LiveCache 7.4, 7.5, 7.6 3.2 9, 10 • RTs for making Livecache and Xserver
Highly Available are:
- SUNW.sap_livecache
- SUNW.sap_xserver
• Part number: CLAIS-XXL-9999
• Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones.
• Livecache version 7.6.03.09 required for
S10 SPARC
SAP MaxDB 7.4, 7.5 7.6, 7.7 3.2 9, 10 • RTs for making MaxDB Highly Available
are:
- SUNW.sapdb
- SUNW.sap_xserver
• Part number: CLAIS-XAA-9999
• Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• MaxDB version 7.6.03.09 required for S10
SPARC
Siebel 7.0, 7.5, 7.7 3.2 9 • Agent not supported in non-global zones
7.7, 7.8 • Agent for Siebel 8.0 requires SC 3.2 u1 or
patches to the SC 3.2 Siebel agent:
7.8.2 9, 10 - 126064-02 (Solaris 9)
8.0 - 126065-02 (Solaris 10)

Solaris Containers Brand type: native, 3.2 10 • This agent now supports lx, solaris8 and
(a.k.a. Zones) lx, solaris8 and solaris9 brand containers in addition to
solaris9 supporting native Solaris 10 containers
• Container agent requires at least patch
126020-01 or a SC 3.2 u1 agent to support
lx and solaris8 brand containers
• Container agent requires patch 126020-03
to support solaris9 brand container
Sun Java Server All versions till JES 3.2 9, 10 • Agent supported in global zones and
Message Queue 5 are supported whole root zones (SC support for non-
previously known as (3.5, 3.6, 4.0, 4.1, global zones)
JES MQ Server and 4.2, 4.3) • Agent not supported in sparse root zones
SunOne MQ Server • Agent not supported in failover zones
Sun StorEdge 3.2.1 3.2 9 • Requires Solaris 9u9 and patches 116466-
Availability Suite 09, 116467-09 and 116468-13
• HA-ZFS not supported with AVS
4.0 10 • Requires Solaris 10u3 and patch 123246-
02
• HA-ZFS not supported with AVS.
SWIFTAlliance Access 5.9, 6.0, 6.2 3.2 9, 10 • SC 3.2 SWIFTAlliance Access agent
patch 126085-01 or later required for
Solaris 9
• Solaris 10 agents are available for
download from
http://www.sun.com/download
• SWIFT Alliance Access 6.0 is supported
on all S10 versions supported by Swift
and by Sun Cluster. 6.0 is not supported
on Solaris 9.
• SWIFT Alliance Access 6.2 is supported
on Solaris 10 8/07 or later on SPARC
platform with patch 126086-01
SWIFTAlliance 5.0, 6.0, 6.1 3.2 9, 10 • S10 agents are available for download
Gateway from http://www.sun.com/download
• SWIFT Alliance Gateway 6.0 and 6.1 are
supported on all S10 versions supported
by Swift and Sun Cluster. 6.0 and 6.1 are
not supported on Solaris 9
Sybase ASE 12.0 - 12.5.1, 12.5.2, 3.2 9 • Supported in HA mode only - both
12.5.3 asymmetric and symmetric. The
Companion Server feature is not
12.5.2, 12.5.3, 15.0, 10
supported.
15.0.1, 15.0.2
Note - There are two Sybase agents. One
sold by Sun, another sold by Sybase. This
table refers to the agent sold by Sun.
• Agent supported in global zones and
zone nodes (SC support of zones)
• Agent not supported in failover zones
WebSphere Message 5.0, 6.0 3.2 9, 10 • Agent supported in global zones and
Broker zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones

TABLE 11-7 Failover Services for Sun Cluster 3.2 x64

Application Application Version SC Version Solaris Comments

Apache Proxy Server All 2.2.x versions 3.2 10 • Agent supported in global zones and
and all versions of zone nodes (SC 3.2 support of zones)
Apache shipped • Agent not supported in failover zones
with Solaris. • Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.2 10 • Agent supported in global zones, failover
6.0 zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Apache Web Server All 2.2.x versions 3.2 10 • Agent supported in global zones, zone
and all versions of nodes (SC 3.2 support of zones), and
Apache shipped Zone Clusters (a.k.a. cluster brand zones)
with Solaris. • Agent not supported in failover zones
• Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
BEA WebLogic Server 7.0, 8.1, 9.0, 9.2, 3.2 10 • Agent supported in global zones and
10.0, 10.2 zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Please see the Release Notes that
documents an issue discovered during
the qualification of WLS in non-global
zones
• Apply the latest agent patch or upgrade
the agent to SC 3.2 u1
DHCP N/A 3.2 10 • Agent not supported in non-global zones
DNS 3.2 10 • Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
HADB (JES) All versions 3.2 10 • Agent not supported in zones
supported by JES
Application Server
EE are supported
(4.4, 4.5)
IBM WebSphere MQ 6.0, 7.0 3.2 10 • Agent supported in global zones, failover
zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Informix V9.4, 10, 11, 11.5 3.2 10 • Agent available for download from
http://www.sun.com/download under
Systems Administration category and
Clustering sub-category
JES Application Server All versions till JES 3.2 10 • Agent supported in global zones and
previously known as 5 U1, 9.1, 9.1 UR2, zone nodes (SC 3.2 support of zones)
SunOne Application GlassFish V2 UR2 • Agent not supported in failover zones
Server
JES Web Proxy Server All versions till JES 3.2 10 • Agent not supported in non-global zones
previously known as 5 are supported
SunOne Proxy Server (up to 4.0)
JES Web Server All versions up to 3.2 10 • Agent supported in global zones and
previously known as and including JES zone nodes (SC 3.2 support of zones)
SunOne Web Server 5 U1 are • Agent not supported in failover zones
supported. All
releases up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release.
Kerberos Version shipped 3.2 10 • Agent supported in global zones and


with Solaris zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
MySQL 3.23.54a - 4.0.23 3.2 10 • Agent supported in global zones, failover
4.1.6 - 4.1.22 zones (using the container agent), zone
5.0.15 - 5.0.85 nodes (SC 3.2 support of zones) and
Zone Clusters (a.k.a. cluster brand zones)
5.1.x
• MySQL versions 5.0.x and 5.1.x require
patch 126033-05
N1 Grid Engine 6.0, 6.1 3.2 10 • Agent not supported in non-global zones
N1 Grid Service Provi- 4.1, 5.0, 5.0u1, 5.1, 3.2 10 • Agent supported in global zones, failover
sioning System 5.2, 5.2.1 - 5.2.4 zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
NFS V3 3.2 10 • Agent not supported in zones
V4
Oracle Application V10.1.2, 10.1.3.1 3.2 10 • Agent supported in global zones and
Server zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
Oracle Server 10G R1 32 bit 3.2 10 • Both Standard and Enterprise Editions
10G R2 32 & 64 bit are supported with Solaris10u3
• Agent supported in non-global zones
10.2.0.4 10 • HA Oracle agent is supported in Solaris
Container (a.k.a. Zone) Clusters
• Support starts with Solaris 10 5/09 and
SC 3.2 1/09
• UFS and standalone QFS 5.0 may be used
with or without SVM
• ASM is not supported with HA Oracle
• NAS is not supported with Zone Clusters
• Other features that are currently
supported with HA Oracle are supported
PostgreSQL 7.3.x, 8.0.x, 8.1.x, 3.2 10 • Agent supported in global zones, failover
8.2.x, 8.3.x zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
• PostgreSQL agent in SC 3.2 u2 supports
Write Ahead Log (WAL) shipping
functionality. Get this functionality in
one of the following ways:
- Install the SC 3.2 u2 agent or
- Upgrade to the SC 3.2 u2 agent or
- Apply the latest agent patch
• Feature info: This project enhances the
PostgreSQL agent to provide the ability
to support log shipping functionality as a
replacement for shared storage thus
eliminating the need for shared storage
in a cluster when using PostgreSQL
Databases. This feature provides support
for PostgreSQL database replication
between two different clusters or
between two different PostgreSQL
failover resources within one cluster.
SAP NetWeaver 2004s 3.2 10 • Agent supported in global zones and
(SR1, SR2, SR3), zone nodes (SC 3.2 support of zones)
Web Application • Agent not supported in failover zones
Server 7.0, SAP 7.1 • Apply the latest agent patch
• NetWeaver 2004s is based on SAP Kernel
7.00
• Refer to the following document for
details on SAP agents:
http://galileo.sfbay/agent_support_mat
rix/SAP-Config-Guide/
• See SPARC Table 11-6 for details
• Apply patch 126063-07 to make SAP 7.1
Highly Available on SC 3.2 or use the
SAP WebAS agent (SUNW.sapenq,
SUNW.saprepl, SUNW.sapscs,
SUNW.sapwebas) from SC 3.2 u2
SAP LiveCache 7.6 3.2 10 • Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Requires SAP Livecache version 7.6.01.09
for S10 x86
SAP MaxDB 7.6, 7.7 3.2 10 • Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Requires SAP MaxDB version 7.6.01.09
for S10 x86
Samba 2.2.2 to 3.0.27 3.2 10 • Agent supported in global zones, failover
zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Solaris Containers Brand type: native, 3.2 10 • This agent now supports lx, solaris8 and
(a.k.a. Zones) lx, solaris8 and solaris9 brand containers in addition to
solaris9 supporting native Solaris 10 containers
• Container agent requires at least patch
126021-01 or the SC 3.2 u1 agent to
support lx and solaris8 brand containers
• Container agent requires at least patch
126021-03 to support solaris9 brand
containers
Sun Java Server All versions till JES 3.2 10 • Agent supported in global zones, whole
Message Queue 5 are supported root non-global zone nodes (SC 3.2
previously known as (3.5, 3.6, 4.0, 4.1, support of zones)
JES MQ Server and 4.2, 4.3) • Agent not supported in sparse root non-
SunOne MQ Server global zones
• Agent not supported in Failover Zones
Sun StorEdge 4.0 3.2 10 • Requires at least Solaris 10u3 and patch
Availability Suite 123247-02
• HA-ZFS not supported with AVS
Sybase ASE 15.0, 15.0.1 and 3.2 10 • Agent supported in global zones and
15.0.2 zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Agent available for download from
http://www.sun.com/download
WebSphere Message 6.0 3.2 10 • Agent supported in global zones and
Broker zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Apply the latest agent patch

Scalable Services
A scalable service has one or more instances of an application running in the cluster simultaneously. A global interface provides the view of a single logical service to the outside world, and application requests are distributed to the running instances based on the load-balancing policy. If an application instance fails, an attempt is made to restart it on the same node. If that is unsuccessful, or if the node itself fails, the application is restarted on a surviving node or the load is redistributed among the surviving nodes, depending on the service configuration. If the node hosting the global interface (GIF) fails, the global interface is failed over to a surviving node, depending on the service configuration.
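
The exact commands depend on the data service; as an illustration only, a scalable Apache web service under Sun Cluster 3.2 might be configured roughly as follows. The resource group, resource, and hostname values (sa-rg, apache-rg, www-lh, and so on) are placeholders, and Sun Cluster 3.1 uses the older scrgadm/scswitch command set instead:

# Register the Apache resource type shipped with Sun Cluster
clresourcetype register SUNW.apache

# Failover group holding the shared address used as the global interface
clresourcegroup create sa-rg
clressharedaddress create -g sa-rg -h www-lh sa-rs

# Scalable group running one Apache instance on each of two nodes
clresourcegroup create -p Maximum_primaries=2 -p Desired_primaries=2 \
  -p RG_dependencies=sa-rg apache-rg
clresource create -g apache-rg -t SUNW.apache -p Bin_dir=/usr/apache2/bin \
  -p Scalable=true -p Port_list=80/tcp -p Resource_dependencies=sa-rs apache-rs

clresourcegroup online sa-rg apache-rg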

This section does not include information about Oracle Real Application Cluster
(RAC). Please refer to “Oracle Real Application Cluster (OPS/RAC)” on page 245.

The following tables contain the scalable services supported with Sun Cluster 3.1:

TABLE 11-8 Supported Scalable Services with Sun Cluster 3.1 SPARC

Application Application Version SC Version Solaris Comment

Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.1 8, 9, 10 • Supported in failover zones (using the
6.0 container agent)
Apache Web Server All versions 3.1 8, 9, 10
shipped with
Solaris
JES Web Server All versions till JES 3.1 8, 9, 10
previously known as 5 are supported
SunOne Web Server (up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release)

TABLE 11-9 Supported Scalable Services with Sun Cluster 3.1 x64

Application Application Version SC Version Solaris Comment

Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.1 9, 10 • Supported in failover zones (using the
6.0 container agent)
Apache Web Server All versions 3.1 9, 10
shipped with
Solaris
JES Web Server All versions till JES 3.1 9, 10
previously known as 5 are supported
SunOne Web Server (up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release)

The following tables contain the scalable services supported with Sun Cluster 3.2:

TABLE 11-10 Supported Scalable Services with Sun Cluster 3.2 SPARC

Application Application Version SC Version Solaris Comment

Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.2 9, 10 • Agent supported in global zones, failover
6.0 zones (using the container agent), and
zone nodes (SC 3.2 support of zones)
Apache Web Server All 2.2.x versions 3.2 9, 10 • Agent supported in global zones, zone
and all versions of nodes (SC 3.2 support of zones), and
Apache shipped Zone Clusters (a.k.a. cluster brand zones)
with Solaris. • Agent not supported in failover zones
• Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
JES Web Server All versions up to 3.2 9, 10 • Agent supported in global zones and
previously known as and including JES zone nodes (SC 3.2 support of zones)
SunOne Web Server 5 U1 are • Agent not supported in failover zones
supported. All
releases up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release.

TABLE 11-11 Supported Scalable Services with Sun Cluster 3.2 x64

Application Application Version SC Version Solaris Comment

Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.2 10 • Agent supported in global zones, failover
6.0 zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Apache Web Server All 2.2.x versions 3.2 10 • Agent supported in global zones, zone
and all versions of nodes (SC 3.2 support of zones), and
Apache shipped Zone Clusters (a.k.a. cluster brand zones)
with Solaris. • Agent not supported in failover zones.
• Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
JES Web Server All versions up to 3.2 10 • Agent supported only in zone nodes (SC
previously known as and including JES 3.2 support of zones)
SunOne Web Server 5 U1 are
supported.

All releases up to
and including 7.0,
7.0 U1, 7.0 U2 and
all future updates
of 7.0 release

Oracle Real Application Cluster (OPS/RAC)


Oracle Real Application Cluster is supported with Sun Cluster 3. The configuration
rules around OPS/RAC support are the following.

Oracle Real Application Cluster Topology Support


N*N (Scalable) topology is no longer required for support of OPS/RAC with Sun
Cluster 3. In order for a configuration to be supported, the nodes in a cluster
running OPS/RAC must be connected to the same shared storage arrays. This
allows a subset of the total nodes in a cluster to be running OPS/RAC as long as the
OPS/RAC nodes are connected to the same shared storage devices.

RSM is supported with RAC and Sun Cluster 3. This functionality requires Sun Cluster 3.0 5/02, Oracle 9i RAC 9.2.0.3, and Solaris 8 or 9. This support is limited to SCI-PCI cards and switches and applies to all servers that support SCI-PCI.

TABLE 11-12 Oracle RAC Support with Sun Cluster 3.1 for SPARC

Columns, left to right: Version, Maximum Nodes (b), Solaris, H/W RAID, Veritas CVM (e), Sun Cluster GFS (f) (not supported with CVM), Shared QFS (g), SVM for Sun Cluster (Oban) (g, i), NAS, Fast Ethernet, Gigabit Ethernet, 10Gigabit Ethernet, SCI-PCI with RSM (l), Infiniband (m). A bullet (•) in a row indicates support for that column; superscript letters refer to the footnotes following the table.

8.1.7 4 8, 9 • • • •k • • •
32bit/
64bit/
OPFS
32bita
9i 4 8, 9 • • • •k • • •
RAC/
RACG
R1
32/64
bit
9i 8c 8, 9, • • • •h •j • • • • RAC •
RAC/ 10d 9.2.0.3
RACG and
R2 32/ above
64 bit
10gR1 8 8, 9, • • • • • • • • • • •
RAC 10
10.1.0.
3 and
above
10gR2 8 8, 9, • •e • • • • • • • •
RAC 10
11g 8 9, 10 • • • • • • • • • •
RAC
a Supported in active-passive mode only
b Please refer to the respective storage section for the number of nodes supported
c Requires Oracle 9.2.0.3 and above plus patch 2854962. Please refer to the respective storage section for
the number of nodes supported
d Requires Sun Cluster 3.1 8/05
e Requires Veritas CVM 3.2 or later
f Supported with Binary and log files only


g Using shared QFS and SVM for Sun Cluster (OBAN) together is only supported on Solaris 10
h Requires RAC 9.2.0.5 and Oracle patch 3566420
i SVM for Sun Cluster (OBAN) on Solaris 10 requires the following patches: 120809-01, 120807-01, 118822-
21, 120537-04
j Requires RAC 9.2.0.5, Oracle patch 3366258
k SE 5210/5310 and ST 5220/5320 support only up to two nodes
l SCI-PCI support maximum 4 nodes
m InfiniBand support starts with Solaris 10

TABLE 11-13 Oracle RAC Support with Sun Cluster 3.2 for SPARC

Columns, left to right: Version, Maximum Nodes (c), Solaris, H/W RAID, Veritas CVM (g), Sun Cluster GFS (h) (not supported with CVM), Shared QFS (i), SVM for Sun Cluster (Oban) (k), NAS, Fast Ethernet, Gigabit Ethernet, 10GB Ethernet, SCI-PCI with RSM (o), Infiniband (p). A bullet (•) in a row indicates support for that column; superscript letters refer to the footnotes following the table.

9i RAC/ 4 9 • • • • • • •
m
RACG
R1
32/64
bit
9i RAC/ 8d 9, 10 • • • •j •l • • • RAC •
RACG 9.2.0.3
R2 32/ and
64 bit above

10gR1 8 9, 10 • • • • • • • • • •
RAC
10.1.0.3
and
above

10gR2 8 9, 10 • • • • • • • • •
RAC
10g 16e 10f 4.6.2 • • • •
RAC and
10.2.0.3a above
10g R2 4 10f • • • •
RAC n

10.2.0.4b
8 10f • • • • • • • • •

16e 10f 4.6.2 • • • •


and
above
11g RAC 4 10f • • • •

8 9, 10 • • • • • • • • •

11gR1 4 10f • • • •
RACb
8 10f • • • • • • • • •

16e 10f • • 4.6.2 • • • •


and
above

11g RAC 4 10f •n • • •


11.1.0.7b
a Requires SC 3.2 2/08 (u1) and above
b Requires SC 3.2 1/09 (u2) and above
c Please refer to the respective storage section for the number of nodes supported
d Requires Oracle 9.2.0.3 and above plus patch 2854962. Please refer to the respective storage section for the number
of nodes supported
e ASM is supported
f Requires Solaris 10 10/08 (u6) and above
g Requires CVM 4.0 and above
h Supported with Binary and log files only
i Using shared QFS and SVM for Sun Cluster (OBAN) together is only supported on Solaris 10
j Requires RAC 9.2.0.5 and Oracle patch 3566420
k SVM for Sun Cluster (OBAN) on Solaris 10 requires the following patches: 120809-01, 120807-01, 118822-21, 120537-
04
l Requires RAC 9.2.0.5, Oracle patch 3366258
m SE 5210/5310 and ST 5220/5320 support only up to two nodes
n Adds support for the Sun Storage 7000 series: 1) When RAC is installed in a global zone, you can also use NFS for
Clusterware OCR and Voting disks; 2) When RAC installed in a zone cluster, you must use iSCSI LUNs as OCR
and Voting devices; 3) If you use iSCSI LUNs for Clusterware OCR and Voting disks, either in the global zone or
in a zone cluster, configure the corresponding DID devices with fencing disabled.
o Maximum of 4 nodes with PCI-SCI
p InfiniBand support starts with Solaris 10

TABLE 11-14 Oracle RAC Support with Sun Cluster 3.1 and Sun Cluster 3.2 for x64

Columns, left to right: Version, Maximum Nodes (a), Solaris, H/W RAID, Veritas CVM (c), Sun Cluster GFS (d), Shared QFS, SVM for Sun Cluster (Oban) (e), NAS, Fast Ethernet, Gigabit Ethernet, 10GB Ethernet, Infiniband. A bullet (•) in a row indicates support for that column; superscript letters refer to the footnotes following the table.

10gR2 8b 10 • • • • • • • • •
RAC 64
bit
(10.2.0.1
and
above)
a Please refer to the respective storage section for the number of nodes supported
b Greater than 4 nodes requires SC 3.2 2/08 (u1) and above
c Veritas CVM not supported for x64
d Supported with Binary and log files only
e Up to four nodes are supported with SVM - larger numbers of nodes require hardware RAID

Co-Existence Software
Solaris Resource Manager 1.2 and 1.3 are certified for co-existence with Sun Cluster 3.0 7/01 (or later) software.

Restriction on Applications Running in Sun Cluster
Sun Cluster supports running multiple data services in the same cluster. There is no limit on the number of applications per node or on the kinds of applications that run on a node. However, other factors, such as application performance and adverse interactions between different applications, may constrain the configuration of multiple applications on the same node.

Data Configuration
The application data can be configured on the shared storage in Sun Cluster in one
of the following structures:
■ “Raw Devices” on page 250
■ “Raw Volumes / Meta Devices” on page 250
■ “File System” on page 253

Raw Devices
Since every shared storage disk is a global device, all of its disk partitions, and any raw data laid out on them, are globally accessible. No software other than the Solaris Operating Environment and Sun Cluster 3 is required to configure data on raw devices.
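
For example, the DID instances behind the shared disks can be listed and an application then references the corresponding global raw path directly; the DID number and slice below are illustrative:

# List DID devices and the local paths they map to (SC 3.2 syntax; SC 3.1 uses scdidadm -L)
cldevice list -v

# A database or application configured for raw access then opens the globally
# visible path, for example /dev/global/rdsk/d4s0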

Raw Volumes / Meta Devices


If raw volumes or metadevices are used for data storage, a volume manager needs to run on each node of the cluster.

Veritas Cluster Volume Manager (CVM) and Solaris Volume Manager for Sun Cluster (Oban) are supported only with Oracle RAC/OPS clusters.
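
As a minimal illustration with SVM, shared storage is typically placed in a diskset and mirrored metadevices are built on the DID devices; the diskset name, node names, and DID numbers below are placeholders:

# Create a diskset that either node can take ownership of
metaset -s webds -a -h node1 node2
metaset -s webds -a /dev/did/rdsk/d4 /dev/did/rdsk/d5

# Build a mirrored metadevice from the two DID devices
metainit -s webds d11 1 1 /dev/did/rdsk/d4s0
metainit -s webds d12 1 1 /dev/did/rdsk/d5s0
metainit -s webds d10 -m d11
metattach -s webds d10 d12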

Sun Cluster 3 supports the use of volume managers as listed below:

TABLE 11-15 Sun Cluster 3.1 Supported Volume Managers

Volume Manager | Version

Solstice DiskSuite | 4.2.1 (Solaris 8 Only)

Solaris Volume Manager (SVM) | Solaris 9; Solaris 10 (c). Please see Table 11-1, “Solaris Releases for Sun Cluster 3.1 SPARC,” on page 219 for details.

Solaris Volume Manager for SC (Oban) (a) | Solaris 9 9/04 or later with patch 116669-03. Please see Table 11-1, “Solaris Releases for Sun Cluster 3.1 SPARC,” on page 219 for details.

Veritas Volume Manager (VxVM) (b), including support for the cluster functionality formerly known as CVM | 3.2 (Solaris 8 Only); 3.5 (Solaris 8, 9); 4.0 MP2 on Sun Cluster 3.1 u2 and earlier (Solaris 8 requires Sun Cluster patch 117949, Solaris 9 requires Sun Cluster patch 117950); 4.1 (Solaris 8, 9 and 10), requires VxVM 4.1 patch 117080-02 (d)
a For SVM Sun Cluster functionality you will need to order Sun Cluster Advanced Edition
for Oracle RAC.
b FMR feature of VxVM is supported only with Sun Cluster 3.1 08/05 with Solaris 9 and 10
with Veritas Storage Foundation Suite 4.1
c SVM (Oban) on Sun Cluster 3.1 with Solaris 10 requires the following minimum level of
Solaris patches: 120809-01, 120807-01, 118822-21, 120537-04
d Veritas Volume Manager delivered as part of Veritas Storage Foundation 4.0 and 4.1 is
also supported.

Either VxVM volume manager or Solstice DiskSuite (SDS) can be used for shared
storage within a cluster configuration. Using VxVM for shared storage and SDS for
mirroring the root disk is also a supported configuration.

TABLE 11-16 Sun Cluster 3.2 Supported Volume Managers

Volume Manager Platform/Version Solaris Notes

Solaris Volume Manager SPARC SVM support tracks Solaris Please see the respective SC
(SVM) and x64 support. Please see Table 11-2, Release Notes for patch and other
“Solaris Releases for Sun requirements.
Cluster 3.2 SPARC,” on
page 221 and Table 11-3,
“Solaris Releases for Sun
Cluster 3.2 x64,” on page 221
for details.
Solaris Volume Manager SPARC SVM for SC support tracks Please see the respective SC
for SC (Oban) and x64 Solaris support. Please see Release Notes for patch and other
Table 11-2, “Solaris Releases requirements.
for Sun Cluster 3.2 SPARC,”
on page 221 and Table 11-3,
“Solaris Releases for Sun
Cluster 3.2 x64,” on page 221
for details.
Veritas Volume Manager SPARC: 4.1 - S9u8 plus required patches 4.1_mp2 patch 117080-07
(VxVM) including CVM (SC 3.2) as listed with SunSolve
support - S9u9
SPARC: 5.0 5.0_mp1 patch 122058-09 and
(SC 3.2) - S10u3 plus required patches 124361-05,
as listed with SunSolve
- S10u4 plus required patches
as listed with SunSolve
SPARC: 5.0
MP3 RP1
(SC 3.2u2)
Veritas Volume Manager x64: 4.1 - S10u3 plus required patches 4.1_mp1 patch 120586-04
(VxVM) only (SC 3.2) as listed with SunSolve
x64: 5.0 - S10u4 plus required patches patch 128060-02
(SC 3.2u1) as listed with SunSolve

x64: 5.0
MP3 RP1
(SC 3.2u2)

File System
If the application data is laid out on a file system, the cluster file system enables the file system data to be available to all the nodes in the cluster. Sun Cluster 3 supports a cluster file system on top of UFS or VxFS laid out on a Veritas volume or an SDS metadevice. File system logging is required in Sun Cluster 3.
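
For reference, a cluster file system is mounted with the global mount option (plus logging for UFS) through an identical /etc/vfstab entry on every node; the metadevice and mount point below are placeholders:

# /etc/vfstab (same entry on all cluster nodes)
/dev/md/webds/dsk/d10  /dev/md/webds/rdsk/d10  /global/web  ufs  2  yes  global,logging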

TABLE 11-17 Veritas File System Support Matrix with Sun Cluster 3.1

Volume Manager Version Solaris Version Notes

Veritas file system (VxFS) 3.4 8 Not supported with Sun Cluster 3.1u3
3.5 8, 9
4.0 MP2a,b 8, 9 4.0 MP2 on Sun Cluster 3.1 u2
and earlier on (Solaris 8)
Requires Sun Cluster patch
117949 on (Solaris 9) Requires
Sun Cluster patch 117950
4.1b 8, 9, 10 Requires VxFS 4.1 patch 119300-
01(Solaris 8), 119301-01(Solaris
9),
119302-01(Solaris 10) (fix for
bug 6227073)
a Requires patch 120107-01
b VxFS 4.0 and 4.1 delivered as part of Veritas Storage Foundation Suite is supported.

TABLE 11-18 Veritas File System Support Matrix with Sun Cluster 3.2

Platform Version Solaris Notes

SPARC 4.1 - S9u8 plus required patches Requires 119301-04 (S9) and
as listed with SunSolve 119302-04 (S10) patches
5.0 - S9u9 Requires 123201-02 (S9) and
- S10u3 plus required patches 123202-02 (S10) patches
as listed with SunSolve
- S10u4 plus required patches
as listed with SunSolve
x64 5.0 - S10 plus required patches as - Starting with SC3.2u1
listed with SunSolve - Requires 125847-01 patch

TABLE 11-19 Sun StorEdge QFS (SPARC) Support Matrix with Sun Cluster 3.1

Columns, left to right: QFS Version, Solaris Version, Sun Cluster Version, Volume Manager Support, HA-SAM Support, Tape Library Support, FFS with HAStoragePlus Only

4.1 (HA) QFS 8 update 5 3.1 u1 SVM and Veritas VxVM N/A Yes
Standalone 9 update 3 a, b, c 3.5 and above

4.2 (HA) QFS 8 update 7 3.1 u2 SVM and Veritas VxVM N/A Yes
Standalone 9 update 3 and later a, b, c 3.5 and above

4.2 (Shared) 8 update 7 3.1 u2 No VM Support N/A N/A


QFS 9 update 3 and later c, d

4.3 (HA) QFS 8 update 7 3.1 u3 SVM and Veritas VxVM N/A Yes
Standalone 9 update 3 and later a, b, c 4.0 and above
Solaris 10
4.3 (Shared) 8 update 7 3.1 u3 No VM Support N/A N/A
QFS 9 update 3 and later c, d

Solaris 10
4.4 (HA) QFS 9 update 3 and later 3.1 u3 SVM and Veritas VxVM N/A Yes
Standalone Solaris 10 a, b, e 4.0 and above

4.4 (Shared) 9 update 3 and later 3.1 u3 VM/Oban (with Solaris N/A N/A
QFS Solaris 10 d, e, f 10 only, No S9 support)

4.5 (HA) QFS 9 update 3 and later 3.1 u4 SVM and Veritas VxVM N/A Yes
Standalone Solaris 10 u1 a,b,g,h 4.0 and above

4.5 (Shared) 9 update 3 and later 3.1 u4 VM/Oban (with Solaris N/A N/A
QFS d,f,g,h 10 only, No S9 support)
Solaris 10 u1
4.6 (HA) QFS 9 update 3 and later 3.1 u4 SVM and Veritas VxVM N/A Yes
Standalone Solaris 10 u3 a,b,g 4.1 and above

4.6 (Shared) 9 update 3 and later 3.1 u4 VM/Oban (with Solaris L700 k.a Refer to j
SAM-QFS Solaris 10 u3 d,f,g,i,j 10 only, NO S9 support) SL500 FCk.a
a Supports with use of HA-NFS Agent
b Supports with use of HA-Oracle Agent
c Supports Oracle 9i only
d Supports with use of RAC Agent(s)
e Supports Oracle 9i, 10gR1 only
f Support with SVM Cluster Functionality (Oban).
g Supports Oracle 9i, 10gR1, and 10gR2
h Supports for SC 3.2 w/QFS 4.5 + QFS 05 patch (Build 4.5.42)
i Supports for COTC (Clients-outside-the-cluster, no mixed architecture)


a. Qualified by SAM-QFS QA Only
b. NO QFS ms type filesystems, ma type filesystem only supported
c. No Software Volume Managers supported
d. There are storage prerequisites required for this configuration. See the QFS documentation for details on prerequisites and configuration examples
e. This configuration has been qualified for use with 16 nodes configuration (2 cluster nodes/14 client nodes)
f. See above matrix for OS support
g. See above matrix for Sun Cluster Support
h. Requires SUNWqfsr & SUNWqfsu packages
See QFS Documentation http://docs.sun.com/source/819-7935-10/chapter6.html#94364
j Supports for HA-SAM
a.Qualified by SAM-QFS QA Only
b. No software Volume Managers
c. Active-Passive Only supported
d. Oracle RMAN not supported
e. NO other data service supported with this configuration
f. Requires SUNWsamfsr & SUNWsamfsu packages
See HA-SAM Documentation http://docs.sun.com/source/819-7931-10/chap08.html#19295 for configuration prerequisites,
configuration examples, and more information
k Requires ACSLS Server running the ACSLS 7.x Software
a. Qualified by SAM-QFS QA Only

COTC: COTC (Clients Outside The Cluster) is currently at release 1.0. It is used when user applications require access to data stored on cluster file system(s); cluster device fencing is lowered so that clients outside the cluster can access the data on attached storage that is managed by the cluster. In this configuration, user applications must run outside the cluster, and no other data service may be used inside the cluster for application access from outside the cluster. This configuration requires that a logical hostname be used for Shared QFS metadata traffic between the Shared QFS Metadata Server and the Metadata Clients that exist outside the cluster; this requires extra set-up in the Sun Cluster resource group (see the QFS-related documentation for configuration examples, and the sketch below). It is highly recommended that a dedicated network be used for communications between the cluster nodes and the nodes outside the cluster. The storage topology that must be used is direct FC-attached storage and can be any hardware RAID supported in this configuration guide. This is Shared QFS with no SAM functionality. The cluster nodes provide automated failover of the MDS. The currently qualified node configuration is 2-4 nodes inside the cluster and up to 16 nodes outside the cluster. If your requirement differs from the above, a Get-To-Yes must be filed for supportability. See the QFS documentation:
http://docs.sun.com/source/819-7935-10/chapter6.html#94364
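
As a sketch only of the extra resource-group set-up mentioned above, the logical hostname used for metadata traffic and the SUNW.qfs resource for the metadata server might be created as follows. All names and the mount point are placeholders, and the QFS documentation referenced above remains the authoritative procedure:

clresourcegroup create qfs-mds-rg
clreslogicalhostname create -g qfs-mds-rg -h qfs-mds-lh qfs-mds-lh-rs
clresourcetype register SUNW.qfs
clresource create -g qfs-mds-rg -t SUNW.qfs \
  -p QFSFileSystem=/global/sqfs1 qfs-mds-rs
clresourcegroup online qfs-mds-rg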

HA-SAM: HA-SAM is currently at release 1.0. HA-SAM provides the SAM (Storage Archive Management) features of archiving, staging, releasing, and recycling. Each of these must run on the current Metadata Server. HA-SAM automated failover is done with the SUNW.qfs agent; the Metadata Server in an HA-SAM configuration has been qualified only with the SUNW.qfs and SUNW.hasam data services. This configuration is supported with a maximum of 2 cluster nodes and requires Shared QFS file system(s). As a requirement for this configuration, one PxFS file system must be used for the SAM catalog. Currently this configuration has been qualified to run only in an active-passive configuration. No other data service is supported in conjunction with this configuration. If your requirement differs from the above, a Get-To-Yes must be filed for supportability. See the HA-SAM documentation:
http://docs.sun.com/source/819-7931-10/chap08.html#19295

SAM-QFS Packages Notes:

a) Filesystem Manager - SUNWfsmgrr SUNWfsmgru

b) Filesystem Configurations without HA-SAM - SUNWqfsr SUNWqfsu

c) HA-SAM Configurations - SUNWsamfsr SUNWsamfsu

TABLE 11-20 Sun StorEdge QFS (SPARC) Support Matrix with Sun Cluster 3.2

Columns, left to right: QFS Version, Solaris Version, Sun Cluster Version, Volume Manager Support, HA-SAM Support, Tape Library Support, FFS with HAStoragePlus Only

4.5 (HA) QFS 9 update 3 and later 3.2 SVM and Veritas VxVM N/A Yes
Standalone Solaris 10 u1 a,b,c,d 4.0 and above

4.5 (Shared) 9 update 3 and later 3.2 VM/Oban (with Solaris N/A N/A
QFS Solaris 10 u1 c,d,e,f 10 only, No S9 support)

4.6 (HA) QFS 9 update 3 and later 3.2 SVM and Veritas VxVM N/A Yes
Standalone Solaris 10 u3 a, b, c 4.1 and above

4.6 (Shared) 9 update 3 and later 3.2 VM/Oban (with Solaris L700 i, 9a Refer to h
SAM-QFS Solaris 10 u3 c,d,e,g,h 10 only, NO S9 support) SL500 FC 9a
a Supports with use of HA-NFS Agent
b Supports with use of HA-Oracle Agent
c Supports Oracle 9i, 10gR1, and 10gR2
d Supports for SC 3.2 w/QFS 4.5 + QFS 05 patch (Build 4.5.42)
e Supports with use of RAC Agent(s)
f Support with SVM Cluster Functionality (Oban).
g Supports for COTC (Clients-outside-the-cluster, no mixed architecture)


7a. Qualified by SAM-QFS QA Only
7b. NO QFS ms type filesystems, ma type filesystem only supported
7c. No Software Volume Managers supported
7d. There are storage prerequisites required for this configuration. See the QFS documentation for details on prerequisites and configuration examples
7e. This configuration has been qualified for use with 16 nodes configuration (2 cluster nodes/14 client nodes)
7f. See above matrix for OS support
7g. See above matrix for Sun Cluster Support
7h. Requires SUNWqfsr & SUNWqfsu packages
See QFS Documentation http://docs.sun.com/source/819-7935-10/chapter6.html#94364
h Supports for HA-SAM
8a.Qualified by SAM-QFS QA Only
8b. No software Volume Managers
8c. Active-Passive Only supported
8d. Oracle RMAN not supported
8e. NO other data service supported with this configuration
8f. Requires SUNWsamfsr & SUNWsamfsu packages
See HA-SAM Documentation http://docs.sun.com/source/819-7931-10/chap08.html#19295 for configuration prerequisites,
configuration examples, and more information
i Requires ACSLS Server running the ACSLS 7.x Software
9a. Qualified by SAM-QFS QA Only

TABLE 11-21 Sun StorEdge QFS (x64) Support Matrix with both Sun Cluster 3.1 and 3.2

Columns, left to right: QFS Version, Solaris Version, Sun Cluster Version, Volume Manager Support, HA-SAM Support, Tape Library Support, FFS with HAStoragePlus Only

4.5 (HA) QFS 9 update 3 and later 3.1 u4/3.2 SVM/VxVm 4.0 and N/A Yes
Standalone Solaris 10 FCS - u1 a, b, c above

4.5 (Shared) 9 update 3 and later 3.1 u4/3.2 VM/Oban (with Solaris N/A N/A
QFS Solaris 10 FCS - u1 c, d, e, f 10 only, No S9 support)

4.6 (HA) 9 update 3 and later 3.1 u4/3.2 SVM/VxVm 4.1 and N/A Yes
Standalone QFS Solaris 10 FCS - u3 a, b, c above

4.6 (Shared) 9 update 3 and later 3.1 u4/3.2 VM/Oban (with Solaris L700 i, ia Refer to h
SAM-QFS Solaris 10 FCS - u3 c, d, e, g, h 10 only, NO S9 support) SL500 FCia
a Supports with use of HA-NFS Agent
b Supports with use of HA-Oracle Agent
c Supports Oracle 10gR2
d Supports with use of RAC Agent(s)
e Support with SVM Cluster Functionality (Oban).
f Supports for SC 3.2 w/QFS 4.5 + QFS 05 patch (Build 4.5.42)
g Supports for COTC (Clients-outside-the-cluster, no mixed architecture)


a. Qualified by SAM-QFS QA Only
b. NO QFS ms type filesystems, ma type filesystem only Supported
c. No Software Volume Managers supported
d. There are storage prerequisites required for this configuration. See the QFS documentation for details on prerequisites and configuration examples
e. This configuration has been qualified for use with 16 nodes configuration (2 cluster nodes/14 client nodes)
f. See above matrix for OS support
g. See above matrix for Sun Cluster Support
h. Requires SUNWqfsr & SUNWqfsu packages
See QFS Documentation http://docs.sun.com/source/819-7935-10/chapter6.html#94364
h Supports for HA-SAM
a.Qualified by SAM-QFS QA Only
b. No software Volume Managers
c. Active-Passive Only supported
d. Oracle RMAN not supported
e. NO other data service supported with this configuration
f. Requires SUNWsamfsr & SUNWsamfsu packages
See HA-SAM Documentation http://docs.sun.com/source/819-7931-10/chap08.html#19295 for configuration prerequisites,
configuration examples, and more information
i Requires ACSLS Server running the ACSLS 7.x Software
a. Qualified by SAM-QFS QA Only

RAID in Sun Cluster 3


All RAID features provided by the volume managers, or by storage devices with
hardware RAID capabilities, are supported with Sun Cluster 3 with one
exception: the RAID5 functionality of SDS/SVM is not supported. The
configuration rules for RAID in Sun Cluster 3 are the following:
■ Sun Cluster requires that access to data be highly available on each node sharing
storage, for example through software mirroring with two independent paths to the
mirrored data, or through a supported multi-pathing storage configuration with
multiple paths to highly available hardware RAID5 data.
■ Mirroring or RAID5 is required to ensure high availability of data.
■ There is no architectural limit imposed on the number of mirrors in a Sun Cluster.
■ Sun Cluster recommends that mirroring be done across the same type of storage
device (see the sketch following this list).
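
A minimal sketch of such a mirrored layout, using Solaris Volume Manager in a
shared diskset and mirroring across two DID devices that reside in two different
storage arrays, is shown below. The diskset, node, and DID names (oraset, node1,
node2, d4, d7) are assumptions for illustration only, not values from this guide:

    # Create a shared diskset owned by both nodes attached to the storage
    metaset -s oraset -a -h node1 node2
    metaset -s oraset -a /dev/did/rdsk/d4 /dev/did/rdsk/d7

    # Build one submirror on each array, then combine them into one mirror
    metainit -s oraset d11 1 1 /dev/did/rdsk/d4s0
    metainit -s oraset d12 1 1 /dev/did/rdsk/d7s0
    metainit -s oraset d10 -m d11
    metattach -s oraset d10 d12

Because d4 and d7 live in separate arrays of the same type, the loss of either array
leaves one complete copy of the data accessible from every node sharing the storage.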


Support for Virtualized OS Environments

This section of the document discusses the virtualized OS environments that are
supported with Solaris Cluster.

Support for Logical Domains (LDoms) I/O Domains

Logical Domains are available on most recent SPARC based servers from Sun. As of
the LDoms 1.0.1 release (October 2007), Solaris Cluster is supported only in
LDoms I/O domains. LDoms I/O domains have direct access to the hardware and
are not dependent upon other domains for access to the physical resources they
need. Refer to the LDoms documentation for how to configure such I/O domains.

The following considerations apply to deploying Solaris Cluster in LDoms I/O
domains:
■ The minimum required LDoms version is 1.0.1, the minimum Solaris version is
Solaris 10 11/06 (u3) with required patches, and the minimum Solaris Cluster
version is SC 3.2.
■ Additional guest domains can be created on a system where Solaris Cluster is
running in the I/O domain. Such guest domains can use resources such as virtual
disks and virtual networks exported by the I/O domains, and this usage of LDoms
I/O domains is supported with Solaris Cluster (a minimal export sketch follows
this list).
Note that use of the I/O domain to provide device services to other domains
can introduce additional load on the I/O domain. Capacity planning of the
I/O domain must take such usage into account.
■ All applications which are certified with Solaris Cluster are supported in LDoms
I/O domains by Solaris Cluster. Please check with your ISV for any restrictions on
specific applications.
■ Unless explicitly noted, if an LDoms capable server is qualified with Solaris
Cluster, then it is qualified to run Solaris Cluster in the I/O domain.
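
As a minimal sketch of this usage, the I/O (service) domain can export a virtual disk
and a virtual network to a non-clustered guest domain with the standard ldm
commands. The service, switch, volume, device, and domain names below
(primary-vds0, primary-vsw0, vol1, /dev/dsk/c1t1d0s2, ldg1) are illustrative
assumptions only:

    # In the I/O domain: create a virtual disk service and a virtual switch
    ldm add-vds primary-vds0 primary
    ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

    # Export a local device as a volume and present it to guest domain ldg1
    ldm add-vdsdev /dev/dsk/c1t1d0s2 vol1@primary-vds0
    ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1

    # Give the guest a virtual network interface on the exported switch
    ldm add-vnet vnet1 primary-vsw0 ldg1

Remember that every such exported service adds load on the I/O domain, which the
capacity planning mentioned above must take into account.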

Servers with LDoms (I/O) support:


■ Sun Blade T6320
■ Sun SPARC Enterprise T5120 or T5220
■ Sun SPARC Enterprise T5140 or T5240 (LDoms version is 1.0.2)


FIGURE 11-1 Solaris Cluster in I/O domains with non-clustered guest domains

Support for Logical Domains (LDoms) 1.0.3 Guest Domains as Virtual Nodes
(Sun Cluster 3.2 2/08 / Solaris 10 5/08 and above)

In addition to LDoms I/O domains, LDoms 1.0.3 guest domains can also be
configured as Sun Cluster nodes as of July '08. A guest domain is treated no
differently than a physical node. All Sun Cluster topologies are supported using
LDoms 1.0.3 guest domains.

Please note that using LDoms 1.0.3 guest domains as Sun Cluster nodes in
conjunction with LDoms I/O domains that provide device services to other domains
can introduce additional load on the I/O domains. As such, performance and
capacity planning should be considered for the I/O domains.

Sun Cluster data services which are currently certified are also supported with
LDoms 1.0.3 guest domain clusters, with the following exception:
■ Oracle RAC configurations.

The following are some rules and guidelines for using LDoms 1.0.3 guest domains
with Sun Cluster (a minimal sketch follows the list):


■ Use the mode=sc option for all virtual switch devices that connect the virtual
network devices used as the cluster interconnect.
■ Map only the full SCSI disks into the guest domains for shared storage.
■ The nodes of a cluster can consist of any combination of physical machines,
LDoms I/O domains, and LDoms guest domains.
■ If a physical machine is configured with LDoms, install Sun Cluster software only
in I/O domains or guest domains on that machine.
■ Network isolation - Guest domains that are located on the same physical machine
but are configured in different clusters must be network-isolated from each other
using one of the following methods:
■ Configure the clusters to use different network interfaces in the I/O domain
for the private network.
■ Use different network addresses for each of the clusters.
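
The following sketch illustrates the first two rules above (mode=sc on the virtual
switches that back the cluster interconnect, and mapping a full SCSI disk into the
guest for shared storage). The interface, switch, volume, and domain names, as well
as the device path, are assumptions for illustration only:

    # Private-interconnect virtual switch created with the Sun Cluster mode
    ldm add-vsw mode=sc net-dev=e1000g1 priv-vsw0 primary
    ldm add-vnet priv-vnet0 priv-vsw0 guest1

    # Map the whole shared LUN (slice 2 of the full disk), not a file or a slice
    ldm add-vdsdev /dev/dsk/c3t0d0s2 qlun0@primary-vds0
    ldm add-vdisk qdisk0 qlun0@primary-vds0 guest1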

For the complete and detailed list of rules and guidelines please refer to

http://wikis.sun.com/display/SunCluster/Sun+Cluster+3.2+2-
08+Release+Notes#SunCluster3.22-08ReleaseNotes-ldomsguidelines

For the list of supported Sun Cluster patches please refer to

http://wikis.sun.com/display/SunCluster/Sun+Cluster+3.2+2-
08+Release+Notes#SunCluster3.22-08ReleaseNotes-ldomssw

Servers with LDoms (Guest) support:


■ Netra CP3060 Blade
■ Netra T2000 Server
■ Netra T5220 Server
■ Netra T5440 Server
■ Sun Blade T6300 Server Module
■ Sun Blade T6320 Server Module
■ Sun Blade T6340 Server Module
■ Sun Fire or SPARC Enterprise T1000 Server
■ Sun Fire or SPARC Enterprise T2000 Server
■ Sun SPARC Enterprise T5120 and T5220 Servers
■ Sun SPARC Enterprise T5140 and T5240 Servers
■ Sun SPARC Enterprise T5440 Server
■ USBRDT-5240 Uniboard


Please note that the cards listed at the following location are not supported as of July '08:

http://docs.sun.com/source/820-4895-10/chapter1.html#d0e995



CHAPTER 12

Managing Sun Cluster 3

Console Access
Console access to each cluster node is required for some maintenance and
service procedures and for monitoring console messages. Sun Cluster 3 does not
require any specific type of console access mechanism. Some of the available
options are:
■ Sun serial port A - this may be used with the Sun Cluster Terminal Concentrator
(X1312A), a customer-supplied terminal concentrator, an alphanumeric terminal,
or serial terminal connection software from another computer, such as tip(1).
■ E10K System Service Processor (SSP) and similar console devices.
■ Sun keyboards and monitors may be used on cluster nodes when supported by
the base server platform. However, they may not be used as console devices. The
console must be redirected to a serial port or to the SSP/RSC, as applicable to the
server, using the appropriate OBP settings (a sketch follows this list).
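
For example, on many SPARC platforms the console can be redirected to serial port A
from the OpenBoot PROM. Treat the following as a generic sketch, since the exact
variables and device aliases depend on the server model:

    ok setenv input-device ttya
    ok setenv output-device ttya
    ok reset-all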

Cluster Administration and Monitoring

An administrative console located on a public network from which all cluster nodes
are accessible is required for administering Sun Cluster 3. Several tools and options
are available for monitoring and administering Sun Cluster 3; use of the Sun
Management Center, SunPlex Manager, and Cluster Control Panel GUI tools is
optional.
■ Command line interface (CLI) - all SunPlex system management and monitoring
may be performed from the system console, or through telnet or rlogin sessions.


■ Sun Management Center (SunMC) - This is the de facto system management tool
for all Sun platforms in the Enterprise. SunMC enables administrators to carry out
in-depth monitoring of the SunPlex system. Sun Cluster 3 requires that the
SunMC console layer be run on a Solaris SPARC system. The versions of SunMC
supported with the Sun Cluster 3 product are listed below:
■ SunMC 2.1.1
■ SunMC 3.0
■ SunMC 3.5
■ SunMC 3.6
■ SunMC 3.6.1
■ SunMC 4.0
■ SunPlex Manager - This is an easy-to-use system management tool that enables
one to carry out basic SunPlex system management and monitoring, with a focus
on installation and configuration. It requires a suitable workstation or PC with
a Web browser, as listed below:

TABLE 12-1 SunPlex Manager Supported Web Browsers

Operating System Browser

Solaris              Mozilla 1.4 and above
                     Netscape 6.2 and above
                     Firefox 1.0 and above
Windows              Internet Explorer 5.5 and above
                     Mozilla 1.4 and above
                     Netscape 6.2 and above
                     Firefox 1.0 and above

■ Cluster Control Panel (CCP) - provides a launch pad for the cconsole, crlogin,
and ctelnet GUI tools which start multiple window connections to a set of
specified nodes. The multiple window connections consist of a host window for
each of the specified nodes and a common window. The common window’s input
is directed to each host window for running the same command on each node
simultaneously. This requires a Solaris SPARC system with a graphics console
running Solaris 8 (or later) and requires about 250KB in /opt. Note that cconsole
is designed to work with the Sun Cluster Terminal Concentrator, Enterprise 10K
System Service Processor, Sun Fire 3800 - 6800 System Controller, and Sun Fire
12K/15K System Controller. Cluster Control Panel is supported with Solaris 9 x86
and Solaris 10 x86.
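
For illustration, cconsole is normally driven by entries in the /etc/clusters and
/etc/serialports files on the administrative console. The cluster, node, and terminal
concentrator names and port numbers below are assumptions for the example only:

    # /etc/clusters: cluster name followed by its node names
    planets mars venus

    # /etc/serialports: node name, terminal concentrator name, TC port
    mars  tc-planets 5002
    venus tc-planets 5003

    # Launch the common window plus one console window per node
    cconsole planets &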



CHAPTER 13

Sun Cluster 3 Ordering Information

Follow the steps given below for ordering a Sun Cluster 3 Configuration:

1. Generating Configuration and Quote: The responsibility for generating a valid


configuration rests with the Sales Team. No formal approval of your
configuration is required. Use either of the following two mechanisms for
generating a valid cluster configuration:
■ Webdesk: Webdesk is the new online ordering and quoting tool. Not all
servers/storage supported in Sun Cluster 3 can currently be validated through
webdesk. In such cases, use the Sun Cluster 3 configuration guide for validating
your cluster.
■ Configuration Guide: See “Overview of Order Flow Chart” on page 265

2. Sun Cluster Order Approval: Sun Cluster orders NO LONGER need to go


through a separate order approval process before they can be completed.
■ GETS Process: Effective April 1st, 2005 Sun Cluster orders will no longer need to
go through the “Global Enterprise Tracking System” (GETS) step or the “Sun
Customer Order Process Evaluation” (SCOPE) step before being “booked” in the
order entry system and released for shipment to the customer or partner
respectively. In addition, Partners/Resellers are no longer required to follow the
RSCOPE-Tool process or use the RSCOPE-Tool for Sun Cluster sales orders as of
April 1, 2005.
■ The “B-Hold” on Sun Cluster software marketing parts was removed as of April 1,
2005.
■ The GETS and SCOPE information has been removed from the Configuration
Guide as of December 13, 2005.

Overview of Order Flow Chart


1. (Required) Order cluster nodes.


2. (Required) Order shared storage.

3. (Required) Order cluster interconnect.

4. (Required) Order public network adapter.

5. (Optional) Order administrative framework.

6. (Required) Order Solaris media and documentation.

7. (Required) Order Sun Cluster 3 software and license.

8. (Optional) Order Sun Cluster 3 Agents software and license.

9. (Required) Order Enterprise Services and training packages from the Sun
Cluster section of the Enterprise Services price list.

Order Flow Chart


The configuration rules laid out in the flow chart below are in addition to the
configuration rules for the individual components. Under no circumstances does Sun
Cluster 3 relax the restrictions imposed by the base components.

1. (Required) Order cluster nodes. Refer to “Generic Server Configuration Rules”
on page 15 for rules for configuring a cluster node. The table below lists the
minimum number of server components (for example, CPUs) that need to be
ordered for one server unit to be used as a cluster node. Some of these
components may be bundled with other components (for example, a power
supply with the server base). Please calculate the actual number of additional
components to be ordered appropriately. The table below guides you through
ordering a single server unit. Use the same table for ordering as many servers
as needed.

                                                             Required        Recommended
Server                             Component                 Quantity        Quantity

Sun Fire T2000/T1000,              Server Base Package       1               1
Netra T2000                        CPU Module                1               1
                                   Internal Memory           4               4
                                   Internal Disk              2               2
                                   Power Supplies            1               2

Netra T1 AC200/DC200               Server Base Package       1               1
                                   CPU Module                1               1
                                   Internal Memory           2               2
                                   Internal Disk              1               2

Sun Enterprise 220R, 250,          Server Base Package       1               1
420R, 450                          CPU Module                2               2
Sun Fire 280R, V480, V880          Internal Memory           2               2
Netra t 1120/1125, t 1400/1405,    Internal Disk              1               2
Netra 20                           Power Supply              as required     N+1

Sun Enterprise 3x00, 4x00,         Server Base Package       1               1
5x00, 6x00                         CPU modules               2               2
                                   Memory                    2               2
                                   CPU/Memory board          1               2
                                   SBus I/O board            1               2
                                   Power/Cooling Module      as required     N+1

Sun Enterprise 10K                 Base cabinet              1               1
(* quantities mentioned are        CPU modules*              2               2
per domain)                        System Board*             1               2
                                   Memory*                   2               2

Sun Fire 3800                      Base Package              1               1
(* quantities mentioned are        CPU/Memory Board Bundle*  1               2
per domain)                        cPCI I/O Assembly*        1               2

Sun Fire 4800, 4810, 6800          Base Package              1               1
(* quantities mentioned are        CPU/Memory Board Bundle*  1               2
per domain)                        PCI I/O Assembly*         1               2

Sun Fire 12K/15K/E20K/E25K         Base Package              1               1
(* quantities mentioned are        CPU/Memory Board*         1               2
per domain)                        PCI I/O Board*            1               2


2. (Required) Order Shared Storage. The tables below give the number of
components (for example, cable, GBIC) required to connect each storage unit to
a pair of nodes. Some of these components may be bundled with other
components (for example, cable with storage array). Please calculate the actual
number of additional components to be ordered appropriately. Also, the tables
give the number of “Host I/O ports” required with a shared storage unit. Some
servers have onboard host adapters and some host adapter cards have multiple
ports on them. Calculate the actual number of Host Adapter Cards to be
ordered appropriately.

a. Ordering Netra st D130. Refer to “SCSI Storage Support” on page 127 for the
configuration rules and the part numbers of the supported components.
Order each component in the quantity mentioned in the table below to
configure a Netra st D130 unit as a shared storage.

Component Quantity

Netra st D130 unit 1


Additional Disk As required
Host I/O Port 2
Cable 2

b. Ordering Sun StorEdge S1. Refer to “Sun StorEdge S1 Array” on page 134
for the configuration rules and the part numbers of the supported
components. Order each component in the quantity mentioned in the table
below to configure a Sun StorEdge S1 unit as a shared storage.

Component Quantity

Sun StorEdge S1 unit 1


Additional Disk As required
Host I/O Port 2
Cable 2


c. Ordering Sun StorEdge MultiPack. Refer to “Sun StorEdge MultiPack” on
page 131 for the configuration rules and the part numbers of the supported
components. Order each component in the quantity mentioned in the table
below to configure a MultiPack unit as shared storage.

Component Quantity

Sun StorEdge MultiPack unit 1


Additional Disk As required
Host I/O Port 2
Cable 2

d. Ordering Sun StorEdge D1000. Refer to “Sun StorEdge D1000 Array” on page 138
for the configuration rules and the part numbers of the supported
components. Order each component in the quantity mentioned in the table
below to configure one D1000 unit as shared storage. To configure a Single
Bus D1000, order components in the first row of the table. To configure a Split
Bus D1000, order components in the second row of the table.

D1000 Configuration Components Quantity

Single Bus D1000 Sun StorEdge D1000 unit 1


Additional disk as required
Host I/O Port (total for two nodes) 2
Cable (excluding Jumper cable) 2
Split Bus D1000 Sun StorEdge D1000 unit 1
Additional disk as required
Host Bus Ports (total for two nodes) 4
Cable 4

e. Ordering Netra st D1000. Refer to “Netra st D1000 Array” on page 129 for the
configuration rules and the part numbers of the supported components.
Order each component in the quantity mentioned in the table below to
configure one Netra st D1000 unit as shared storage. To configure a Single
Bus Netra st D1000, order components in the first row of the table. To
configure a Split Bus Netra st D1000, order components in the second row of
the table.

Netra st D1000 Configuration Components Quantity

Single Bus Netra st D1000 Netra st D1000 unit 1


Additional disk as required
Host I/O Port (total for two nodes) 2
Cable (excluding Jumper cable) 2
Split Bus Netra st D1000 Netra st D1000 unit 1
Additional disk as required
Host Bus Ports (total for two nodes) 4
Cable 4

f. Ordering Sun StorEdge A3500. Refer to “Sun StorEdge A3500 Array” on


page 140 for the configuration rules and the part numbers of the supported
components. Order each component in the quantities mentioned in the first
row of the table below to configure one A3500 controller module as a shared
storage.

Component Quantity

Sun StorEdge Base Configuration as required


Additional Disk as required
Controller Module 1
Host I/O Port (total for two nodes) 4
Cable 4

g. Ordering Sun StorEdge A3500FC. Refer to “Sun StorEdge A3500FC System”


on page 63 for the configuration rules and the part numbers of the supported
components. Order all the components in the first row of the table below for
connecting two hubs to a pair of nodes. Order all the components in the second
row of the table below to configure an A3500FC controller module attached to
both hubs.

Connectivity Component Quantity

Hub - to - node connectivity Seven slot FC-AL Hub 2


Host I/O Port (total for 2 nodes) 4
Cable 4
GBIC 8
A3500FC unit Sun StorEdge Base Configuration as required
Additional Disk as required
Controller Module 1
Cable 2
GBIC 3

h. Ordering Sun StorEdge A5x00. Refer to “Sun StorEdge A5x00 Array” on


page 66 for the configuration rules and the part numbers of the supported
components. Order each component in the quantity mentioned in the tables
below to configure A5x00 as shared storage.

i. Ordering a direct-attached, full-loop A5x00. Order components in the


quantities mentioned in the table below.

Component Quantity

Sun StorEdge A5x00 unit 1


Additional disks as required
Interface Board 1
Host I/O Port 2
Cable 2
GBIC 4


ii. Ordering a direct-attached, split-loop A5x00. Order components in the


quantities mentioned in the table below:

Component Quantity

Sun StorEdge A5x00 unit 1


Additional disk as required
Interface Board 2
Host I/O Port 4
Cable 4
GBIC 8

iii. Ordering Hub-attached full loop, single loop A5x00. Order all the
components in the first row of the table below for connecting a hub to a
pair of nodes. Order all the components in the second row of the table
below to attach as many A5x00 units to the hub as required. Note that a
maximum of 4 A5000, or 4 A5100, or 3 A5200 units can be attached to a hub.

Connectivity Component Quantity

Hub - to - node connectivity Seven slot FC-AL Hub 1


Host I/O Port 2
Cable 2
GBIC 4
A5x00 unit Sun StorEdge A5x00 unit 1
Interface Boards 1
Cable 1
GBIC 2

iv. Ordering Hub-attached full loop, dual loop A5x00. Order all the
components in the first row of the table below for connecting two hubs to a
pair of nodes. Order all the components in the second row of the table
below to attach as many A5x00 units to both hubs as required. Note that a
maximum of 4 A5000, or 4 A5100, or 3 A5200 units can be attached to the hub
pair in this fashion.

Connectivity Component Quantity

Hub - to - node connectivity Seven slot FC-AL Hub 2


Host I/O Port 2
Cable 2
GBIC 4
A5x00 unit Sun StorEdge A5x00 unit 1
Interface Boards 2
Cable 2
GBIC 4

i. Ordering Sun StorEdge T3. Refer to “Sun StorEdge T3 Array (Single Brick)”
on page 74 for the configuration rules and the part numbers of the supported
components. Both T3 for the Workgroup and T3 for the Enterprise models
are supported with Sun Cluster 3.

i. Ordering Hub-attached T3 Array. Order all the components in the first


row of the table below for connecting two hubs to a pair of nodes. Order
all the components in the second row of the table below to attach a T3
brick to a hub.

Connectivity Component Quantity

Hub - to - node connectivity Seven slot FC-AL Hub 2


Host I/O Port 4
Cable 4
GBIC 8
T3 brick T3 brick 2
Cable 2
GBIC 2


ii. Ordering Switch-attached T3 Array. Order all the components in the first
row of the table below for connecting two switches to a pair of nodes.
Order all the components in the second row of the table below to attach a
T3 brick to a switch.

Connectivity Component Quantity

Switch - to - node connectivity FC Switch 2


Host I/O Port 4
Cable 4
GBIC 8
T3 brick T3 brick 2
Cable 2
GBIC 2

j. Ordering Sun StorEdge 3910/3960. Refer to “Sun StorEdge 3910/3960 System”
on page 90 for the configuration rules and the part numbers of the supported
components. Order all the components in the table below to attach an SE39x0
system to the cluster.

Component Quantity

Sun StorEdge 39x0 1


Additional Components As required
Host I/O Port 2
Cable 2

3. (Required) Order the cluster interconnect. Refer to “Cluster Interconnect” on


page 183 for configuration rules and the part numbers of the supported
components. Order the components from the table below:

Interconnect Topology               Component                      Min. Quantity   Max. Quantity

Point-to-point (N = 2 nodes)        Host Network Port              4               12
                                    Cable                          2               6

Junction-based (N = 2 - 8 nodes)    Host Network Port              2xN             6xN
                                    Cable (customer supplied       2xN             6xN
                                    for Fast Ethernet)
                                    Switch                         2               6

4. (Required) Order public network interfaces. Refer to “Public Network” on
page 202 for configuration rules and the part numbers of the supported network
adapters. Order as many network adapters as required.

5. (Optional) Order the administrative framework.

a. Order administrative workstation - a Sun Ultra 5 or better, as per the table


below.

Description                                          Part #                                     Quantity

Administrative workstation (Sun Ultra 5 or better)   See Workstation section of CS Price Book   1

b. Order the Terminal Concentrator bundle: terminal concentrator, rack-
mounting bracket (if required), and serial cables to connect to the cluster
nodes, as per the table below. Note: In the table below it is assumed that N
is the total number of nodes in the cluster.

Description                                                              Part #                  Quantity

Terminal Concentrator Kit: terminal concentrator, 3 - 5 meter serial     X1312A                  1
cables (for 2 cluster nodes and the administrative workstation)
Rack mounting bracket (for Enterprise 5x00 and 6x00 only)                X1311A                  N
5-meter serial cable                                                     X3836A                  N-2
Power cord for Terminal Concentrator                                     X311L, or equivalent    1

6. (Required) Order the Solaris media. Solaris licenses are included with a new
Sun server.

7. (Required) Order Sun Cluster 3 software and license.


a. Order Sun Cluster 3 base software. Starting with the 7/01 release, there is
a generic part number available for Sun Cluster 3. This part number
will always point to the latest update release. Order the Sun Cluster 3 license:

Description Part#

Sun Cluster 3.1 Base CD - latest CLUZS-999-99M9


Sun Cluster 3.1 Agents CD - latest CLA9S-999-99M9

Description Part#

Sun Cluster 3.2 Base CD - latest CLUZS-999-99M9 or SOLZ9-10GC9A7M


b. Order Sun Cluster 3 server license:

One server entitlement is required per physical system in the cluster. If a system has
multiple domains, with some or all domains participating in the same or different
clusters, only one server entitlement is needed for that system.
Purchase of a support contract requires the purchase of the license. Support
contracts are available through the normal channels.

TABLE 13-1 Sun Cluster 3.1 base software, License Only

Description Part#

Sun Cluster server license for Netra t 1120/1125 CLNIS-310-B929


Sun Cluster server license for Netra t 1400/1405 CLNIS-31X-A929
Sun Cluster server license for Netra 20 CLNIS-310-D929
Sun Cluster server license for Netra 210 CLNIS-310-A929
Sun Cluster server license for Netra 240 CLNIS-310-I929
Sun Cluster server license for Netra 440 CLNIS-310-H929
Sun Cluster server license for Netra t 1280 CLNIS-310-E929
Sun Cluster server license for ATCA CP3010 SPARC Blade CLUIS-310-AA29
Sun Cluster server license for Netra CP3060 CLNIS-310-J929
Sun Cluster server license for Netra CP3260 CLNIS-310-O929
Sun Cluster server license for Netra T2000 CLNIS-310-F929
Sun Cluster server license for Netra T5440 CLNIS-310-L929
Sun Cluster server license for Netra T5220 CLNIS-310-K929
Sun Cluster server license for Netra X4200 CLNIS-310-G929
Sun Cluster server license for Netra X4250 CLNIS-310-M929
Sun Cluster server license for Netra X4450 CLNIS-310-N929
Sun Cluster server license for Sun Blade X6220 CLUII-310-M929
Sun Cluster server license for Sun Blade X6240 CLUII-310-R929
Sun Cluster server license for Sun Blade X6250 CLUII-310-N929
Sun Cluster server license for Sun Blade X6270 CLUII-310-W929
Sun Cluster server license for Sun Blade X6440 and X6450 CLUII-310-S929
Sun Cluster server license for Sun Blade 84xx Server Module CLUII-310-E929
Sun Cluster server license for Sun Blade T6300 CLUII-310-J929
Sun Cluster server license for Sun Blade T6320 CLUIS-310-AC29

Sun Cluster server license for Sun Blade T6340 CLUIS-310-AF29


Sun Cluster server license for Sun Fire V20z CLUII-310-G929
Sun Cluster server license for Sun Fire V40z CLUII-310-F929
Sun Cluster server license for Sun Fire X2100 M2 CLUII-310-I929
Sun Cluster server license for Sun Fire X2200 M2 CLUII-310-H929
Sun Cluster server license for Sun Fire X4100 and X4200 CLUII-310-C929
Sun Cluster server license for Sun Fire X4140 CLUII-310-P929
Sun Cluster server license for Sun Fire X4150 CLUII-310-K929
Sun Cluster server license for Sun Fire X4170 CLUII-310-X929
Sun Cluster server license for Sun Fire X4270 and X4275 CLUII-310-Y929
Sun Cluster server license for Sun Fire X4440 CLUII-310-Q929
Sun Cluster server license for Sun Fire X4450 CLUII-310-L929
Sun Cluster server license for Sun Fire X4540 CLUII-310-U929
Sun Cluster server license for Sun Fire X4600 CLUII-310-D929
Sun Cluster server license for E220R CLUIS-31X-B929
Sun Cluster server license for E250 CLUIS-31X-C929
Sun Cluster server license for E420R CLUIS-310-A929
Sun Cluster server license for E450 CLUIS-310-B929
Sun Cluster server license for E3500 CLUIS-31X-D929
Sun Cluster server license for E4500 or E5500 CLUIS-31X-E929
Sun Cluster server license for E6500 CLUIS-31X-F929
Sun Cluster server license for E10000 CLUIS-31X-A929
Sun Cluster server license for Sun Fire T1000 CLUII-310-A929
Sun Cluster server license for Sun Fire T2000 CLUII-310-B929
Sun Cluster server license for Sun Fire V120 CLUIS-310-H929
Sun Cluster server license for Sun Fire V210 CLUIS-310-J929
Sun Cluster server license for Sun Fire V215/V245 CLUIS-310-T929
Sun Cluster server license for Sun Fire V240 CLUIS-310-K929
Sun Cluster server license for Sun Fire V250 CLUIS-310-O929
Sun Cluster server license for Sun Fire 280R CLUIS-310-Q929

Sun Cluster server license for Sun Fire V440 CLUIS-310-P929


Sun Cluster server license for Sun Fire V445 CLEIS-310-S929
Sun Cluster server license for Sun Fire V480 / V490 CLUIS-310-L929
Sun Cluster server license for Sun Fire V880 / V890 CLUIS-310-N929
Sun Cluster server license for Sun Fire V1280 CLUIS-310-I929
Sun Cluster server license for Sun Fire E2900 CLUIS-310-E929
Sun Cluster server license for Sun Fire 3800 CLUIS-31X-G929
Sun Cluster server license for Sun Fire 4800/4810 CLEIS-310-R929
Sun Cluster server license for Sun Fire E4900 CLUIS-310-F929
Sun Cluster server license for Sun Fire 6800 CLUIS-310-M929
Sun Cluster server license for Sun Fire E6900 CLUIS-310-G929
Sun Cluster server license for Sun Fire E12K/E20K CLUIS-310-C929
Sun Cluster server license for Sun Fire E15K/E25K CLUIS-310-D929
Sun Cluster server license for Sun Enterprise M3000 CLUIS-310-AE29
Sun Cluster server license for Sun Enterprise M4000 CLUIS-310-U929
Sun Cluster server license for Sun Enterprise M5000 CLUIS-310-V929
Sun Cluster server license for Sun Enterprise M8000 CLUIS-310-W929
Sun Cluster server license for Sun Enterprise M9000-32 CLUIS-310-X929
Sun Cluster server license for Sun Enterprise M9000-64 CLUIS-310-Y929
Sun Cluster server license for Sun SPARC Enterprise T5120/T5220 CLUIS-310-AB29
Sun Cluster server license for Sun SPARC Enterprise T5140/T5240 CLUIS-310-AD29
Sun Cluster server license for Sun SPARC Enterprise T5440 CLUIS-310-AG29

TABLE 13-2 Sun Cluster 3.2 base software, License Only

Description Part#

Sun Cluster server license for Netra t 1120/1125 CLNIS-320-B929


Sun Cluster server license for Netra 20 CLNIS-320-D929
Sun Cluster server license for Netra 210 CLNIS-320-A929
Sun Cluster server license for Netra 240 CLNIS-320-I929

Sun Cluster server license for Netra 440 CLNIS-320-H929


Sun Cluster server license for Netra t 1280 CLNIS-320-E929
Sun Cluster server license for Netra 1290 CLNIS-320-C929
Sun Cluster server license for ATCA CP3010 SPARC Blade CLUIS-320-AA29
Sun Cluster server license for Netra CP3060 CLNIS-320-J929
Sun Cluster server license for Netra CP3260 CLNIS-320-O929
Sun Cluster server license for Netra T2000 CLNIS-320-F929
Sun Cluster server license for Netra T5220 CLNIS-320-K929
Sun Cluster server license for Netra T5440 CLNIS-320-L929
Sun Cluster server license for Netra X4200 CLNIS-320-G929
Sun Cluster server license for Netra X4250 CLNIS-320-M929
Sun Cluster server license for Netra X4450 CLNIS-320-N929
Sun Cluster server license for Sun Blade X6220 CLUII-320-M929
Sun Cluster server license for Sun Blade X6240 CLUII-320-R929
Sun Cluster server license for Sun Blade X6250 CLUII-320-N929
Sun Cluster server license for Sun Blade X6270 CLUII-320-W929
Sun Cluster server license for Sun Blade X6440 and X6450 CLUII-320-S929
Sun Cluster server license for Sun Blade 84xx Server Module CLUII-320-E929
Sun Cluster server license for Sun Blade T6300 CLUII-320-J929
Sun Cluster server license for Sun Blade T6320 CLUIS-320-AC29
Sun Cluster server license for Sun Blade T6340 CLUIS-320-AF29
Sun Cluster server license for Sun Fire V20z CLUII-320-G929
Sun Cluster server license for Sun Fire V40z CLUII-320-F929
Sun Cluster server license for Sun Fire X2100 M2 CLUII-320-I929
Sun Cluster server license for Sun Fire X2200 M2 CLUII-320-H929
Sun Cluster server license for Sun Fire X4100 and X4200/X4200 M2 CLUII-320-C929
Sun Cluster server license for Sun Fire X4140 CLUII-320-P929
Sun Cluster server license for Sun Fire X4150 CLUII-320-K929
Sun Cluster server license for Sun Fire X4170 CLUII-320-X929
Sun Cluster server license for Sun Fire X4270 and X4275 CLUII-320-Y929

Sun Cluster server license for Sun Fire X4440 CLUII-320-Q929


Sun Cluster server license for Sun Fire X4450 CLUII-320-L929
Sun Cluster server license for Sun Fire X4540 CLUII-320-U929
Sun Cluster server license for Sun Fire X4600 CLUII-320-D929
Sun Cluster server license for E420R CLUIS-320-A929
Sun Cluster server license for E450 CLUIS-320-B929
Sun Cluster server license for Sun Fire T1000 CLUII-320-A929
Sun Cluster server license for Sun Fire T2000 CLUII-320-B929
Sun Cluster server license for Sun Fire V120 CLUIS-320-H929
Sun Cluster server license for Sun Fire V210 CLUIS-320-J929
Sun Cluster server license for Sun Fire V215/V245 CLUIS-320-T929
Sun Cluster server license for Sun Fire V240 CLUIS-320-K929
Sun Cluster server license for Sun Fire V250 CLUIS-320-O929
Sun Cluster server license for Sun Fire 280R CLUIS-320-Q929
Sun Cluster server license for Sun Fire V440 CLUIS-320-P929
Sun Cluster server license for Sun Fire V445 CLUIS-320-S929
Sun Cluster server license for Sun Fire V480 / V490 CLUIS-320-L929
Sun Cluster server license for Sun Fire V880 / V890 CLUIS-320-N929
Sun Cluster server license for Sun Fire V1280 CLUIS-320-I929
Sun Cluster server license for Sun Fire E2900 CLUIS-320-E929
Sun Cluster server license for Sun Fire 4800/4810 CLUIS-320-R929
Sun Cluster server license for Sun Fire E4900 CLUIS-320-F929
Sun Cluster server license for Sun Fire 6800 CLUIS-320-M929
Sun Cluster server license for Sun Fire E6900 CLUIS-320-G929
Sun Cluster server license for Sun Fire E12K/E20K CLUIS-320-C929
Sun Cluster server license for Sun Fire E15K/E25K CLUIS-320-D929
Sun Cluster server license for Sun Enterprise M3000 CLUIS-320-AE29
Sun Cluster server license for Sun Enterprise M4000 CLUIS-320-U929
Sun Cluster server license for Sun Enterprise M5000 CLUIS-320-V929
Sun Cluster server license for Sun Enterprise M8000 CLUIS-320-W929

Sun Cluster server license for Sun Enterprise M9000-32 CLUIS-320-X929


Sun Cluster server license for Sun Enterprise M9000-64 CLUIS-320-Y929
Sun Cluster server license for Sun SPARC Enterprise T5120/T5220 CLUIS-320-AB29
Sun Cluster server license for Sun SPARC Enterprise T5140/T5240 CLUIS-320-AD29
Sun Cluster server license for Sun SPARC Enterprise T5440 CLUIS-320-AG29

c. Upgrade licenses for the cluster software. Order one per server. Please refer
to http://www.sun.com/software/solaris/cluster/faq.jsp#g31 for more
details on various tiers:

TABLE 13-3 Sun Cluster 3.1 and 3.2 Base Software, Upgrade from Previous Revisions Only

Description Part# Quantity

SunPlex upgrade license to upgrade from Tier 1 to Tier 2 CLSIS-LCO-A9U9 1 per server
SunPlex upgrade license to upgrade from Tier 2 to Tier 3 CLSIS-LCO-B9U9
SunPlex upgrade license to upgrade from Tier 3 to Tier 4 CLSIS-LCO-C9U9
SunPlex upgrade license to upgrade from Tier 4 to Tier 5 CLSIS-LCO-D9U9
SunPlex upgrade license to upgrade from Tier 5 to Tier 6 CLSIS-LCO-E9U9
SunPlex upgrade license to upgrade from Tier 6 to Tier 7 CLSIS-LCO-F9U9
SunPlex upgrade license to upgrade from Tier 7 to Tier 8 CLSIS-LCO-G9U9
SunPlex upgrade license to upgrade from Tier 8 to Tier 9 CLSIS-LCO-H9U9
SunPlex upgrade license to upgrade from Tier 9 to Tier 10 CLSIS-LCO-I9U9
SunPlex upgrade license to upgrade from Tier 10 to Tier 11 CLSIS-LCO-J9U9
SunPlex upgrade license to upgrade to same or lower Tier CLSIS-LCO-K9U9

8. (Optional) Order Sun Cluster 3 Agent software and license.

a. Order Sun Cluster 3 Agent software. For Sun Cluster 3.1, order the Sun
Cluster 3 Agents CD. A softcopy of the documentation for the agents is included
on the CD. For Sun Cluster 3.2, the agents are included on the same DVD as
the base software. Documentation can also be found at docs.sun.com.

Description Part# Quantity

Sun Cluster 3.1 Agents CD - latest CLA9S-999-99M9 1 Per Cluster


b. Order Sun Cluster 3.1 and 3.2 Agent license. Order one license for every
agent installed in the cluster.

TABLE 13-4 Agents License

Description Part#

HA Agfa IMPAX CLAIS-XAH-9999


HA Apache Web/Proxy Server CLAIS-XXA-9999
HA Apache Tomcat CLAIS-XXX-9999
HA BEA Weblogic CLAIS-XXK-9999
HA DHCP CLAIS-XXH-9999
HA DNS CLAIS-XDN-9999
HA NFS CLAIS-XXF-9999
HA IBM WebSphere MQ CLAIS-XXQ-9999
HA IBM WebSphere MQ Integrator CLAIS-XXI-9999
HA Kerberos CLAI9-XXA-9999
HA MySQL CLAIS-XXO-9999
HA Oracle CLAIS-XXR-9999
HA Oracle Application Server CLAIS-XAD-9999
HA PostgreSQL CLAIS-XAM-9999
HA Samba CLAIS-XXM-9999
HA SAP Enqueue server CLAIS-XAI-9999
HA SAP J2EE Engine CLAIS-XAE-9999
HA SAP LiveCache CLAIS-XXL-9999
HA SAP/MaxDB Database CLAIS-XAA-9999
HA Siebel CLAIS-XXS-9999
HA Solaris Container CLAIS-XXZ-9999
HA Sun Java System Application Server CLAIS-XXJ-9999
HA Sun Java System Application Server EE CLAIS-XAB-9999
HA Sun Java System Directory Server CLAIS-XXD-9999
HA Sun Java System Message Queue CLAIS-XXT-9999
HA Sun Java System Web Server CLAIS-XXN-9999
HA Sun N1 Grid Engine CLAIS-XAC-9999
HA Sun N1 Service Provisioning System CLAIS-XAF-9999

HA SWIFT Alliance Gateway CLAIS-XAG-9999


HA Sybase CLAIS-XXY-9999
Oracle E-Business Suite CLAIS-XXE-9999
Oracle Parallel Server and Real Application Cluster CLAIS-XXP-9999
Scalable Apache Web/Proxy Server CLAIS-XXC-9999
Scalable Java System Web Server CLAIS-XXW-9999
Scalable SAP CLAIS-XXG-9999
SWIFTAlliance Access CLAIS-XXV-9999

c. Order the VxVM cluster license from the table below. This license needs to be
ordered when OPS/RAC is used with VxVM. Note that the VxVM
software package includes the cluster functionality. Separate license
keys are needed to enable the VxVM base product and the VxVM cluster
functionality. The VxVM software package and the license key for the VxVM
base product need to be acquired separately.

Description Part# Quantity

Veritas VxVM 5.0 Cluster Functionality License CLUI9-500-9999 One per OPS/RAC node

Note that CVM 5.0 uses the same license part number as VxVM 5.0.

9. Sun Cluster Advanced Edition for Oracle RAC


Order one license per node. This includes a license for the following:
■ Oracle RAC Agent
■ Shared QFS Metadata server
■ Shared QFS client
■ SC agent for QFS metadata server
■ Clustered Solaris Volume Manager
■ SC-QFS-SVM
The following is NOT included with Sun Cluster Advanced Edition for Oracle
RAC:
■ Sun Cluster Server licenses (have to be purchased separately)
■ Usage of QFS without Sun Cluster


■ Usage of QFS for non Oracle RAC applications

TABLE 13-5 Sun Cluster Advanced Edition for Oracle RAC

Description Part#

Sun Cluster Advanced Edition for Oracle RAC License for Tier 1 Servers CLAI9-LCA-1999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 2 Servers CLAI9-LCA-2999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 3 Servers CLAI9-LCA-3999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 4 Servers CLAI9-LCA-4999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 5 Servers CLAI9-LCA-5999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 6 Servers CLAI9-LCA-6999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 7 Servers CLAI9-LCA-7999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 8 Servers CLAI9-LCA-8999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 9 Servers CLAI9-LCA-9999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 10 Servers CLAI9-LCA-1099
Sun Cluster Advanced Edition for Oracle RAC License for Tier 11 Servers CLAI9-LCA-1199

10. Sun Cluster Geographic Edition:

One server entitlement is required per physical system in the cluster. If a system has
multiple domains, with some or all domains participating in the same or different
clusters, only one server entitlement is needed for that system.
Purchase of a support contract requires the purchase of the license. Support
contracts are available through the normal channels.

TABLE 13-6 Sun Cluster Geographic Edition 3.1

Description Part#

Sun Cluster Geographic Edition 3.1 License for Tier 1 Servers CLGI9-001-9999
Sun Cluster Geographic Edition 3.1 License for Tier 2 Servers CLGI9-002-9999
Sun Cluster Geographic Edition 3.1 License for Tier 3 Servers CLGI9-003-9999
Sun Cluster Geographic Edition 3.1 License for Tier 4 Servers CLGI9-004-9999
Sun Cluster Geographic Edition 3.1 License for Tier 5 Servers CLGI9-005-9999
Sun Cluster Geographic Edition 3.1 License for Tier 6 Servers CLGI9-006-9999
Sun Cluster Geographic Edition 3.1 License for Tier 7 Servers CLGI9-007-9999
Sun Cluster Geographic Edition 3.1 License for Tier 8 Servers CLGI9-008-9999

Sun Cluster Geographic Edition 3.1 License for Tier 9 Servers CLGI9-009-9999
Sun Cluster Geographic Edition 3.1 License for Tier 10 Servers CLGI9-010-9999
Sun Cluster Geographic Edition 3.1 License for Tier 11 Servers CLGI9-011-9999

TABLE 13-7 Sun Cluster Geographic Edition 3.2

Description Part#

Sun Cluster Geographic Edition 3.2 Tier 1 CLGI9-320-1999


Sun Cluster Geographic Edition 3.2 Tier 2 CLGI9-320-2999
Sun Cluster Geographic Edition 3.2 Tier 3 CLGI9-320-3999
Sun Cluster Geographic Edition 3.2 Tier 4 CLGI9-320-4999
Sun Cluster Geographic Edition 3.2 Tier 5 CLGI9-320-5999
Sun Cluster Geographic Edition 3.2 Tier 6 CLGI9-320-6999
Sun Cluster Geographic Edition 3.2 Tier 7 CLGI9-320-7999
Sun Cluster Geographic Edition 3.2 Tier 8 CLGI9-320-8999
Sun Cluster Geographic Edition 3.2 Tier 9 CLGI9-320-9999
Sun Cluster Geographic Edition 3.2 Tier 10 CLGI9-320-1099
Sun Cluster Geographic Edition 3.2 Tier 11 CLGI9-320-1199

11. (Required) Order Enterprise Services and training packages from the Sun
Cluster section of the Enterprise Services price list.

Agents Edist Download Process


The Agents Edist Download Process is a mechanism for the field to procure agents
that are announced asynchronously from Sun Cluster 3 update releases. You will
need to go through the usual sales process - scope, MCSO, etc. However, you will
need to download the agent binaries and documentation from http://edist.central
and deliver them to the customer site for installation, because such an agent is not
available on the Agents CD.



APPENDIX A

Campus Clusters

Campus clusters are a common means of achieving disaster recovery. Unlike
traditional clusters, the nodes of a campus cluster can be several kilometers apart.
This enables application services to remain highly available in the event of a disaster
such as fire, earthquake, or site destruction due to a terrorist attack. Sun now
supports 8-node campus cluster configurations.

This appendix documents all the support related information for campus clusters
using Sun Cluster 3. For a detailed description of campus cluster concepts and
configurations refer to the Sun Cluster Hardware Administration Guide. In general,
the support information listed for traditional clusters in the rest of the configuration
guide applies to campus cluster configurations as well. This section gives details that
are specific to campus cluster configurations with appropriate pointers to other
sections in the config guide.

Number of Nodes
8-node campus cluster configurations are supported with Sun Cluster 3.

Campus Cluster Room Configurations


Configurations of two or more rooms are supported.


Applications
All of the application services mentioned in “Software Configuration” on page 219,
including Oracle Parallel Server (OPS) and Real Application Clusters (RAC),
are applicable to campus clusters as well.

Guidelines for Specs Based Campus Cluster Configurations

The goal of this section is to provide an overview of what a generic Specs Based
Campus Cluster configuration consists of. It also summarizes the characteristics that
a given distance configuration, proposed and submitted by the field, must comply
with in order to be a valid candidate for support.

Overview of a Specs Based Campus Cluster

Basically, a Specs Based Campus Cluster can be considered a distance
configuration in which IP and SAN extension solutions are deployed to provide
separation between the cluster nodes and/or the shared storage devices, including
the Quorum Device when applicable.

This can be depicted as a distance infrastructure (a “cloud”) interposed between the
rooms that host the cluster nodes and the shared storage devices.


An example of such a configuration is one where a DWDM network is deployed to
extend the SANs required to connect the cluster nodes to the distant shared storage
devices, and also to support a distant cluster transport between these nodes.

Note that the solutions deployed for distance in the transport subsystem and for the
distance in the I/O paths can be either distinct or shared, as depicted in the previous
example with DWDMs. This design choice has to be made by the implementers,
within the constraints of the requirements described in the other sections of this
document, and may depend on the topology of the Specs Based Campus Cluster.

Independently of the level of complexity of the distance implementation(s)


(technologies, equipment types,...), the base cluster components - nodes, SAN
switches, storage devices - must be supported according to the existing SC3.x
Configuration Guide. Also the existing maintenance and service procedures, as
documented in the SC3.x HW Administration Guide must continue to be applicable.

Technical requirements

This section lists the technical features that a Specs Based Configuration
must comply with:

Latency:
■ Transport Latency
■ The measured latency of each transport, between any pair of nodes in the
cluster, must be less than 15 ms one-way.
■ Note that this document doesn’t address the means used to measure the
latency. It assumes that this information is obtained by the field, possibly but
not exclusively, under the terms of some Service Level Agreement (SLA).
■ Data path Latency


■ The measured latency of each path, between nodes and storage devices
attached through redundant SANs, must be less than 15 ms.
■ Note that the “path” that is referred to in that previous rule is defined as
whatever resides between a SAN switch the cluster nodes are directly
connected to, and the corresponding SAN switch the shared storage devices
are directly connected to.
■ The same remark as above applies here concerning the actual measurement of
that latency.
■ General rules and guidelines:
■ The measured network latency should be identical for each redundant private
interconnect between two nodes
■ In case of failures in the distance infrastructure (“cloud”), the latency of the
remaining transport(s) or data path(s) must remain below the max. values (15
ms one-way)

Bit Error Rate (BER):


■ The quality of the distance infrastructure for the data paths must be such that the
BER shouldn’t be worse than 10^-10.

Topology:

The basic requirements and recommendations are common with standard cluster
configurations. Below are a few additional considerations
■ HDS array is supported as Quorum Device with Sun Cluster 3.2 using patch
release 2 (Solaris 9 SPARC/126105-01, Solaris 10 SPARC/126106-01, Solaris 10
x86/126107-01) and Sun Cluster 3.1U4 using patches (Solaris 8 SPARC/117950-31,
Solaris 9 SPARC/117949-30, Solaris 9 x86/117909-31, Solaris 10 SPARC/120500-15,
Solaris 10 x86/120501-15)
■ Transport:
■ Transport redundancy must be implemented and ensured between the cluster
nodes. The distance transport must be implemented in such a way that the
cluster nodes logically and functionally perceive distinct paths. For example,
adding/removing as well as enabling/disabling a transport path shouldn’t
affect the other one(s). In other words, from a functional point of view, the
distance implementation must be totally transparent, delayed responses apart,
to all applicable SC3.x commands related to transports.
■ The same principle must apply during the re-establishment of a previously
failed path.
■ I/O:
■ I/O path redundancy must be implemented and ensured between the nodes
and the SAN attached shared storage devices.


■ Independently of the technology employed to implement the SAN extension
and its inner workings, the incoming traffic - seen from the SAN
switches' perspective - must be standard Fibre Channel (i.e., no vendor-specific
alteration).
■ The number of cascaded ISLs must stay within the limits defined by the
current Sun SAN configuration rules. In the case where more than one level of
cascading is present, the sum of the latencies associated with each level must
not exceed the max Data path latency (15 ms). Note: it may be necessary to
take into consideration the latencies of the intermediate SAN switches when
calculating the sum.
■ Although there is no universal rule, implementers must verify that the
provision of Buffer Credits in the SAN switches is adequate for the proposed
extension solution, to prevent unexpected disruption of data traffic (Link Reset).
■ The use of host based mirroring is advised even in case where the storage
devices already provide hardware RAID protection.

TrueCopy Support
TrueCopy is now supported for shared storage data replication between two sites
within a cluster. This offers a configuration alternative for campus clusters in which
distance concerns make host-side mirroring impractical. Automatic failover in the
case of primary node failure is included, as well as support for SVM, VxVM and raw
disk device groups. Careful consideration must be taken when deciding on
TrueCopy configuration parameters, such as fence level, since these have a direct
impact on cluster availability and data integrity guarantees.

Some things to consider when investigating a potential TrueCopy cluster
configuration:

All TrueCopy fence levels are supported; however, there are specific trade-offs with
respect to cluster availability, performance, and data integrity which should be
considered when deciding upon a setting. The DATA fence level offers the best
guarantees of data integrity by providing fully synchronous data updates, but can
leave the primary site vulnerable to storage problems at the secondary site. A fence
level of NEVER avoids being vulnerable to secondary storage failures,
but opens up the possibility of allowing the primary and secondary data copies to
get out of sync. Using a fence level of ASYNC can offer increased I/O performance
through the use of asynchronous data updates, but introduces a potential
for data loss should the primary site fail while it is still caching unwritten data.


Two-node clusters still require the use of a quorum device, and even though the
replicated TrueCopy devices are made to look like a single DID device, they are not
truly shared devices, so they do not meet the needs of a quorum device. A quorum
server is generally a viable option.

Nodes at each site must only have direct access to one of the devices in a replica
pair, otherwise volume management software can become confused about the disks
which make up replicated device groups. Multiple local nodes at each site can share
access to local replicas (providing local failover), but direct access to a single replica
must not be shared between sites.

Careful planning of device usage is important as replica groups must be configured


to match a corresponding global device group (including naming) so that the
switching of the replication primary can coincide with the importing of the proper
device groups.

SRDF Support
SRDF is now supported for shared storage data replication between two sites within
a cluster. This offers a configuration alternative for campus clusters in which
distance concerns make host-side mirroring impractical. Automatic failover in the
case of primary node failure is included, as well as support for SVM, VxVM and raw
disk device groups. Careful consideration must be taken when deciding on SRDF
configuration parameters since these have a direct impact on cluster availability and
data integrity guarantees.

When investigating a potential SRDF campus cluster configuration, please consider
the following:
■ Synchronous, asynchronous, and adaptive copy modes are all supported;
however, there are specific trade-offs with respect to cluster availability,
performance, and data integrity which should be considered when deciding upon
a setting. Synchronous mode offers the best guarantees of data integrity by
providing fully synchronous data updates, but the primary site could be vulnerable
to storage problems at the secondary site. Asynchronous mode can offer increased
I/O performance through the use of asynchronous data updates, but introduces a
potential for data loss should the primary site fail while it is still caching
unwritten data.
■ While the use of SRDF static devices is supported, they should be avoided at all
costs. SRDF operations required during switchover and failover take several
minutes to complete for static devices. Use dynamic devices whenever possible.


■ Two-node clusters still require the use of a quorum device, and even though the
replicated SRDF devices are made to look like a single DID device, they are not
truly shared devices, so they do not meet the needs of a quorum device. A quorum
server is generally a viable option.
■ Nodes at each site must only have direct access to one of the devices in a replica
pair, otherwise volume management software can become confused about the
disks which make up replicated device groups. Multiple local nodes at each site
can share access to local replicas (providing local failover), but direct access to a
single replica must not be shared between sites.
■ Careful planning of device usage is important as replica groups must be
configured to match a corresponding global device group (including naming) so
that the switching of the replication primary can coincide with the importing of
the proper device groups.
■ Take care to ensure that the correct DID devices are being merged into a single
replicated DID device. If the wrong pair of devices are combined, use the
“scdidadm -b” command to unmerge them.
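
For instance, the pairing can be sanity-checked from either site before and after the
devices are combined. The listing below is only a sketch of the workflow; the exact
operand expected by "scdidadm -b" should be taken from the scdidadm(1M) man page:

    # List DID instances and the underlying device paths visible at this site
    scdidadm -L

    # If the wrong pair of devices was combined into one replicated DID device,
    # unmerge it with "scdidadm -b" and then recombine the correct pair.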



APPENDIX B

Sun Cluster Geographic Edition

Introduction
This chapter provides a description of the supported Sun Cluster Geographic
Edition (GE) product hardware configurations and infrastructure. The Sun Cluster
Configuration Guide / Support Matrix provides the technical specification for
individual clusters in Sun Cluster GE configurations. The networking infrastructure
required for inter-cluster connections will depend on customer-specific
requirements.

Elements of Sun Cluster GE Hardware Configuration
Although Sun Cluster GE can be installed and configured on a single stand-alone
Sun Cluster, the product only has utility when it is installed in configurations
consisting of several clusters. It is important to distinguish between Sun Cluster GE
configurations, which provide automated failover between geographically-separated
distinct clusters, and Campus Cluster configurations, which provide automatic
failover within a geographically-spread single cluster.

As a rule of thumb, campus cluster configurations offer protection against localized
incidents (for example, a fire within a single room or building) and allow storage to
be placed near the point of use, but they require synchronous data replication to
ensure correct and reliable automatic failover. This imposes stringent limits on
distance and link characteristics, usually in the 10 - 100 km range.

Sun Cluster GE is more appropriate for long-distance configurations (hundreds to
thousands of km) where protection against a major (city-wide) disaster is required.
It permits the use of asynchronous data replication over standard Internet
connections, as part of a company-wide Business Continuity plan.


Campus and GE configurations can be combined in a single Disaster Recovery
configuration to give the best of both worlds; see the Three-site topologies section
later in this appendix.

The elements of Sun Cluster GE product configuration are:
■ Sun Cluster installations, with attached data storage. Sun Cluster GE places no
additional restrictions on supported cluster configurations, beyond those already
imposed by the base Sun Cluster configuration guidelines.
■ Internet connections for inter-cluster management communication and default
heartbeat between the Sun Cluster installations.
■ Connections for data replication (either host-based or storage-based). This may be
the same connection as that used for the heartbeat.
■ Optional connections for custom heartbeats if required.

Inter-Cluster Topologies
Inter-cluster relationships in Sun Cluster GE consist of entities called partnerships,
which are relationships between two clusters. All Sun Cluster GE inter-cluster
communications happen between partner clusters.

A partnership requires an IP connection between the public network interfaces of
the partner clusters for inter-cluster management communication and default
inter-cluster heartbeats. A single cluster may participate in more than one partnership and
requires IP connections with each of its partners. These connections can be
established via dedicated corporate network connections, or across the public
Internet.

Within a partnership, entities known as protection groups may be configured. A
protection group links a Sun Cluster Resource Group with the data-replication
resources that it requires, and establishes the data-replication relationship between
partner clusters. One partnership may have several protection groups configured,
each protection group establishing a different data-replication relationship between
the partner clusters.
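
To make these concepts concrete, the following is a hypothetical command sketch of
creating a partnership and a protection group. The cluster, partnership, and
protection-group names are invented, and the geops/geopg options shown are
assumptions; consult the Sun Cluster Geographic Edition documentation for the
exact syntax.

    # Hypothetical sketch, run from cluster-newyork unless noted otherwise.
    geoadm status                                      # verify the GE framework is running
    geops add-trust -c cluster-london                  # exchange trust (run on both clusters)
    geops create -c cluster-london ny-london           # create the partnership
    geops join-partnership cluster-newyork ny-london   # run on cluster-london to join
    geopg create -s ny-london -o primary -d avs sales-pg   # protection group using AVS
    geopg add-resource-group sales-rg sales-pg         # place the application RG under GE control
    geopg start -e global sales-pg                     # start replication on both partners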


FIGURE B-1 Example Sun Cluster GE topologies that demonstrate Sun Cluster GE inter-
cluster relationships.

The Geneva-Paris-Rome-Berlin topology is an example of a configuration with a
centralized DR site. It assumes a central Geneva cluster that forms three separate
partnerships with the Paris, Rome and Berlin clusters. The partnerships require two-
way internet connections between cluster pairs Paris-Geneva, Rome-Geneva and
Berlin-Geneva. A protection group is configured on each partnership so that in
normal operation, the Paris, Rome, Berlin primaries replicate data to Geneva as a
secondary. Each protection group requires the infrastructure to support a data-
replication link between the normal primary cluster and Geneva. Should any of the
outlying sites be lost, Geneva can take over as a substitute.

The New York-London topology has two clusters that form a partnership with two
protection groups. In normal operation, each cluster is the primary for one of the
protection groups and the secondary for the other; this is a symmetrical
configuration. The partnership requires a two-way IP connection between the two
clusters for inter-cluster management and heartbeats. Data-replication link
infrastructure is required between the clusters to support data-replication for two
protection groups.

Three-site topologies
It is possible to use a campus cluster for the primary cluster, thus creating a three-
site configuration of Primary, Backup and DR sites. This is currently supported using
volume manager mirroring within the campus cluster, and AVS replication to the DR
site. Other combinations will be supported in the future. It is not possible to create a
daisy-chain of Sun Cluster GE pairs (for example, London -> Paris -> Rome).


Sun Cluster Hardware and Storage Configurations
The configuration of an individual cluster within a Sun Cluster GE partnership is
subject to the standard configuration rules for the related Sun Cluster release, as
described elsewhere in this Configuration Guide. Sun Cluster GE imposes no
additional restrictions on the cluster configuration. Clusters can have any supported
size and configuration, including single-node clusters. It is generally not advisable to
use a single-node cluster at the primary site, since any local failure will require a
switchover; however, this is a supported configuration.

Both sites must have the same platform architecture, SPARC or x64. This is not a
requirement of Sun Cluster GE, but rather of most applications. Filesystems and data
files (for example, from an Oracle database) are generally not endian-neutral. Heterogeneous
combinations have therefore not been tested.

For use of Sun Cluster GE with third-party storage-based data-replication
mechanisms, the cluster hardware configurations required are those supporting the
related storage hardware. Partner clusters must be compatibly configured to support
data replication between the clusters.

For specific supported software versions, please see the matrices at the end of this
section.

Storage configurations
Within one cluster, Sun Cluster GE data-replication places some software
configuration requirements on the accessibility of device groups and the
configuration of data volumes. The software configuration requirements may have
implications for the preferred configuration of storage on the cluster.

The clusters in a partnership need not be identical, although cluster software
versions and replicated disk configurations must be the same on each side. During
an upgrade it is permissible to run with one version of skew between the sites (i.e.
Vn at one site, and Vn+1 at the other). There is no requirement to run the same
Solaris version at both sites provided this does not impose other constraints (e.g. on
AVS versions).

For all supported products, replication can be configured as Synchronous or
Asynchronous. The choice will be determined by the customer’s performance
requirements and by the acceptable transaction loss and recovery time for
disaster recovery.


The use of Synchronous replication will guarantee that both clusters in a partnership
always have identical copies of the data; however, the need to ensure that data has
been written to both partners before a write is considered complete means that data
write throughput is effectively limited to that of the inter-cluster link. This will
typically be orders of magnitude slower than the physical disk connection.

The use of Asynchronous replication will avoid this performance penalty, but can
mean that the data stored on the secondary partner may not always be an up-to-date
copy of the primary data. A failure of the primary cluster under such circumstances
can result in some data updates not being completed at the remote site.

Sun AVS configuration


An example partnership of simple two-node clusters is shown.

Using Sun Cluster GE with AVS requires nothing in the way of specialized
hardware. AVS, being a software-based replication system, is largely hardware-
agnostic. See the AVS documentation for information on which Sun storage systems
are supported.

In terms of network connectivity, AVS, being host-based, depends on the
connectivity available between the host systems that make up the Sun Cluster GE
partner clusters. It will share the same IP link that is used by the heartbeat.

Since AVS replication software runs on a single host in each cluster, certain scalable
and parallel applications cannot be supported with AVS. A specific example is
Oracle RAC, which cannot work with AVS. HA-Oracle is fully supported.
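
As an illustration of what host-based AVS replication involves, the sketch below
enables a single Remote Mirror (SNDR) volume set in asynchronous mode. The
hostnames, data devices, and bitmap devices are placeholders, and the operand
order shown is an assumption to be checked against the AVS documentation.

    # Illustrative only: enable one AVS volume set in asynchronous mode.
    sndradm -e cluster-ny-node1 /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 \
            cluster-ldn-node1 /dev/rdsk/c2t1d0s0 /dev/rdsk/c2t1d0s1 \
            ip async
    sndradm -P        # print the configured sets and their replication state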


Supported versions
AVS 3.2.1 is supported only on Solaris 8 and Solaris 9, SPARC only. AVS 4.0 is
supported only on Solaris 10, SPARC and x86.

StorEdge TrueCopy configuration


Use of Sun Cluster GE with TrueCopy data-replication requires Sun Cluster
configurations with Sun StorEdge 9970/9910 Array or Hitachi Lightning 9900 Series
storage that support the TrueCopy command interfaces. Sun Cluster GE places no
specific limitations on the connectivity to be used; any TrueCopy configuration
that is supported by Sun Cluster can be used.
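
For orientation, the sketch below shows the kind of RAID Manager (CCI) interaction
that the TrueCopy integration relies on. The HORCM instance number and device
group name are placeholders; the commands are illustrative and should be checked
against the Hitachi RAID Manager documentation.

    # Illustrative only: start the local HORCM instance and check pair status.
    horcmstart.sh 0             # start HORCM instance 0 on this node
    pairdisplay -g scge-dg      # show the replica-pair state for group "scge-dg"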

Hitachi offers TrueCopy planning and installation services (see
http://www.hds.com/services/professional-services/plan-design.html), and these
are likely to be the best source of configuration planning information for a
TrueCopy-based Sun Cluster GE installation.

Support for Hitachi Universal Replicator will be provided in a forthcoming release.

Supported versions
TrueCopy Raid Manager versions 01-18-03/03 or later (SPARC) are supported.

EMC SRDF
Use of Sun Cluster GE with EMC Symmetrix Remote Data Facility (SRDF) data-
replication requires Sun Cluster configurations with EMC Symmetrix hardware that
supports the SRDF Solutions Enabler command interface.

For Oracle considerations, the following guidelines may be useful:

http://www.emc.com/techlib/pdf/H1143.1_SRDFS_A_Oracle9i_10g_ldv.pdf

Supported versions
EMC Solutions Enabler (SymCLI) version 6.0.1 or later is supported on Solaris SPARC
and x86. Enginuity firmware Version 5671 or later is required.


Inter-Cluster Network Connections

Inter-Cluster Management and Default Heartbeats


IP access is required between Sun Cluster GE partner clusters. The communication
between partner clusters for Sun Cluster GE inter-cluster management operations is
through a logical hostname IP address. The hostname used corresponds to the name
of the cluster, a configuration issue which must be considered at the planning stage.
The default inter-cluster heartbeat module also communicates through this address.

Custom Heartbeats
Sun Cluster GE provides interfaces for optional customer-added plug-ins for inter-
cluster heartbeats. The communication channel for a custom heartbeat plug-in is
defined by its implementation. A custom heartbeat plug-in would allow the use of
a communication channel that is different from the default heartbeat connection. In a
telecoms environment, for example, there may be other, non-IP, connection paths
available.

Data Replication Network


There is no explicit limitation on the distance between Sun Cluster GE partner
clusters. Sun Cluster GE partner-cluster configurations require the infrastructure for
long-distance data-replication connections to support the protection-groups hosted
by the partnership. The requirements on the data-replication connection are
determined by:
■ The distance between the partner clusters.
■ The amount of data to be replicated, and the pattern of data access.
■ The cost of the network connection.
■ Data-replication configuration parameters.

The type of inter-cluster links used for the data replication will depend on the
product chosen. Sun Cluster GE does not place additional limitations on this beyond
those required by the data replication product.

It is difficult to fully define the characteristics of the data-replication infrastructure
for common reference configurations, since it is unlikely that a “typical”
configuration exists. Nevertheless, some information on customer requirements is
available from the field, as is information on network requirements for various data
replication configurations.


Note, however, that while network throughput (in Mbit/s) is important when dealing
with large quantities of data, network latency is of much greater importance as far as
write performance is concerned.
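
As a rough, back-of-the-envelope illustration: light in fiber travels at roughly
5 microseconds per kilometer, and each synchronously replicated write waits for at
least one inter-site round trip. The distance used below is a placeholder.

    # For a 1000 km separation: 2 x 1000 km x 5 us/km = 10,000 us (10 ms) per
    # round trip, so a single stream of synchronous writes is limited to about
    # 100 writes per second regardless of the link bandwidth.
    echo "1000000 / (2 * 1000 * 5)" | bc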

Data Replication configuration guidelines


It should be noted that the choice of replication method and parameters is not an
“all or nothing” issue. Multiple protection groups can be configured within a
partnership, each using a different replication strategy appropriate to their needs.

By way of an example, consider a large internet sales company. It will have a large
database of products, which is updated regularly but probably not continuously.
Staff will, from time to time, add new products and remove old ones. Such a
database could safely be replicated asynchronously, since even if some updates were
lost following a failure, the situation could be recovered relatively easily. Staff could
re-enter the changes at a later date.

On the other hand, the filesystem which keeps records of customers’ purchases
cannot tolerate any data loss, since this could not be recovered by company staff.
This would not only result in financial loss from the lost order data, but could also
lead to a loss of customer confidence. The relatively small quantity of data stored
would, however, probably permit this filesystem to be replicated synchronously to
avoid any risk of data loss following a failure.

Unsupported features
Support for some new features in Solaris requires further testing and/or additional
development. Please note the following specific restrictions.

Shared QFS
Shared QFS filesystems embed the names of the host systems in the filesystem
metadata. In order to transfer a shared QFS filesystem to a new cluster, this metadata
must be rewritten to contain the names of the hosts in the new cluster. SCGE does
not perform this rewrite, and so shared QFS filesystems cannot be supported with
SCGE. This restriction will be lifted in a forthcoming release.

Oracle ASM
Testing on ASM is ongoing and support is very limited at this time. Please contact
the cluster team for the latest status.


ZFS
There are two issues which prevent SCGE from supporting ZFS:
1. Prior to bringing a zpool online on a new cluster, the zpool on the replicated
LUNs must be imported. This is analogous to the import operation carried out by
traditional volume managers such as SVM and VxVM. SCGE does not yet issue a
zpool import command (see the sketch at the end of this section). This prevents
the use of ZFS with storage-based replication mechanisms, where the LUNs are
inaccessible while configured as secondaries.
2. More seriously, there is a potential interaction between ZFS and block-based
replication systems in general. The ZFS copy-on-write model of file update
presumes that the on-disk structure of the filesystem is always internally
consistent. For a local filesystem this will be the case, but when a filesystem is
replicated to a remote site this consistency can only be guaranteed if the order in
which disk blocks are written is the same at the secondary site as at the primary.

All of the supported replication technologies will guarantee this during normal
active replication, but if the communications link between the primary and
secondary sites is lost, or the secondary site is otherwise unavailable, a backlog of
modified blocks will build up at the primary. This backlog is transmitted once the
secondary site is again available; however, most replication products do not
maintain write ordering during this catch-up phase (AVS, TrueCopy, and SRDF do
not maintain write ordering under such circumstances; Universal Replicator does).
If a failure occurs during this catch-up resynchronization, the destination zpool
could be left in an unusable state.

This issue is not specific to Solaris Cluster or Geographic Edition; nevertheless, it
must be satisfactorily addressed before SCGE can safely claim support for ZFS in a
DR environment.
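
To illustrate the first issue above, the export and import steps that a traditional
volume manager performs as part of device-group switchover would currently have
to be carried out by hand for a zpool; the pool name below is a placeholder.

    # Illustrative only: the manual step that SCGE does not yet automate for ZFS.
    zpool export salespool    # on the cluster giving up the pool (when still accessible)
    zpool import salespool    # on the cluster taking over, once the replicated LUNs
                              # have been promoted to read/write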

Solaris Containers (zones)


There are some limitations when using zones in conjunction with AVS. Solaris
Cluster supports zones in two ways:
1. By treating a zone as a black box with the HA-Container agent. This model is
fully supported by SCGE with all replication mechanisms.
2. By treating a zone as a node, and managing applications inside a zone. In this
case the application resource group nodelist will contain entries of the form
“<nodename>:<zonename>”, sometimes referred to as “zone-nodes” (see the
sketch below).
SCGE always treats replication resources as global to a site, i.e. the nodelist for
such resource groups (RGs) contains only physical hostnames (not zone names).
With AVS replication, it is essential that the AVS resource group be online on the
same physical node as the application, so that I/O can be intercepted. In order to
correctly manage failovers within a local cluster in this case, SCGE must create
affinities between the replication RGs and the application RGs. Solaris Cluster
will not permit affinities or dependencies to be created between RGs if one RG
has a nodelist of physical nodenames and the other has a nodelist of
“zone-nodes”. This is highlighted in CR 6443496.
Until this issue is addressed, SCGE is unable to support the use of zone-nodes
with AVS replication. The use of zone-nodes with TrueCopy and SRDF is,
however, fully supported.
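
For reference, the following minimal sketch (with hypothetical resource-group and
zone names) shows the zone-node nodelist syntax and the kind of affinity that is at
issue.

    # Hypothetical sketch: an application RG whose nodelist uses zone-nodes, and
    # a replication RG on the physical nodes. The strong positive affinity on the
    # last line is the relationship that Solaris Cluster currently refuses to
    # create between the two nodelist types (CR 6443496).
    clresourcegroup create -n phys-node1:appzone,phys-node2:appzone app-rg
    clresourcegroup create -n phys-node1,phys-node2 avs-rep-rg
    clresourcegroup set -p RG_affinities=++avs-rep-rg app-rg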

TABLE B-1 Test/support matrix for SC Geographic Edition with various types of data
replication and volume managers

SRDF on EMC Symmetrix


Data Replication AVS on all storage TrueCopy on StorEdge 99xx arrays supported by Sun
type: supported by Sun Cluster† series arrays Cluster

Volume HW HW
Manager: Raid SVM†† VxVM HW Raid SVM†† VxVM Raid SVM†† VxVM

Odyssey S8u7 SPAR Yes Yes Yes Yes‡‡ No††† Yes‡‡ No‡‡‡ No‡‡‡ No‡‡‡
R1 SCGE or C (V4.1)
3.1 8/05 later
x64 No‡ No‡ No‡ No‡ No‡ No‡ No‡‡‡ No‡‡‡ No‡‡‡
with SC
3.1u4 (3.1 S9u7 SPAR Yes Yes Yes Yes No††† Yes No‡‡‡ No‡‡‡ No‡‡‡
8/05) * or C (V4.1) (V4.1)
later
x64 No‡ No‡ No‡ No‡ No‡ No‡ No‡‡‡ No‡‡‡ No‡‡‡

S10 SPAR No§ No§ No§ Yes No††† Yes No‡‡‡ No‡‡‡ No‡‡‡
C
x64 No§ No§ No§ No§§ No††† No§§ No‡‡‡ No‡‡‡ No‡‡‡


TABLE B-1 Test/support matrix for SC Geographic Edition with various types of data
replication and volume managers

SRDF on EMC Symmetrix


Data Replication AVS on all storage TrueCopy on StorEdge 99xx arrays supported by Sun
type: supported by Sun Cluster† series arrays Cluster

Odyssey S8u7 SPAR Yes Yes Yes Yes*** No††† Yes*** No‡‡‡ No‡‡‡ No‡‡‡
R2 or C (V4.1)
(“Nestor”) later
x64 No‡ No‡ No‡ No‡ No‡ No‡ No No No
SCGE 3.1
‡,‡‡‡, ‡,‡‡‡,§ ‡,‡‡‡,§§
2006Q4,
§§§ §§ §
with SC
3.1u4 (3.1
8/05)
S9u7 SPAR Yes Yes Yes Yes No††† Yes Yes No††† Yes
or C (V4.1) (V4.1)
later
x64 No‡ No‡ No‡ No‡ No‡ No‡ No‡ No No‡
‡,§§§

S10U SPAR Yes Yes Yes Yes No††† Yes Yes No††† Yes
2 or C (V4.1) (V4.1) (V4.1)
later
x64 Yes Yes Yes No§§ No††† No§§ No§§§ No No§§§
(V4.1) †††,§§
§

Odyssey S8 SPAR No**


R2.1 C
(“Athena”)
x64 No**
SCGE 3.2
with SC 3.2 S9u8 SPAR Yes*** Yes*** Yes*** Yes*** No††† Yes*** Yes No††† Yes
or C (V5.0)
later ‡
x64 No
S10u SPAR Yes Yes Yes Yes No††† Yes Yes No††† Yes
3 or C (V5.0) (V4.1) (V4.1)
later
x64 Yes Yes Yes Yes No††† Yes Yes No Yes
(V4.1) (V4.1) ††† (V4.1)


TABLE B-1 Test/support matrix for SC Geographic Edition with various types of data
replication and volume managers

SRDF on EMC Symmetrix


Data Replication AVS on all storage TrueCopy on StorEdge 99xx arrays supported by Sun
type: supported by Sun Cluster† series arrays Cluster

Odyssey S8 SPAR No**


R2.2 C
(“Helen”)
x64 No**
SCGE 3.2
2/08 S9u8 SPAR Yes*** Yes*** Yes*** Yes*** No††† Yes*** Yes No††† Yes
(3.2u1) or C (V5.0)
with SC 3.2 later ‡
x64 No
2/08
(3.2u1) S10u SPAR Yes Yes Yes Yes No††† Yes Yes No††† Yes
3 or C (V5.0) (V5.0) (V5.0)
later
x64 Yes Yes Yes Yes No††† Yes Yes No††† Yes
(V5.0) (V5.0) (V5.0)

This matrix shows the supported combinations for each release of Sun Cluster Geographic Edition. Superscript
numbers refer to explanatory notes below. It is assumed that each Solaris release also has the latest patch
releases required by the underlying Sun Cluster installation, unless notes are given to the contrary. The full
details of testing can be found at the (internal) URLs in the Test documents section in the following paragraph.

This is a current matrix, including qualifications carried out after a given version was released. The support
status of components not specifically referred to here (e.g. UFS, VxFS) should be determined by reference to
standard Sun Cluster.
Note that references to volume managers below are to single-owner versions (i.e. not CVM or Oban). Multi-
owner volume manager support is addressed in the Oracle configuration matrix.

Test documents:
http://haweb.sfbay/dsqa/projects/odyssey/r1/
http://galileo.sfbay/scq/odyssey/athena/
http://galileo.sfbay/scq/odyssey/post_scgeo32_quals/
* When using SCGE 3.1 8/05 with Cacao 1.1 (as shipped in Java ES 4) patch 122783-03 or later must be installed.
† AVS 3.2.1 required for Solaris 8 and 9, AVS 4.0 or later required for Solaris 10
‡ SCGE x64 support is only available with Solaris 10.
§ AVS was not available for Solaris 10 at this time.
** Solaris 8 is not supported with Sun Cluster 3.2, nor with SCGE 3.2.
††On Solaris 8 references to SVM should be taken as referring to Solstice Disk Suite (SDS)
‡‡Tested on Solaris 9, extrapolated to S8.
§§Not tested.
***Not tested, extrapolated from testing on previous release.
†††CRs 6216278 (SVM) and 5070680 (SCGE) must be addressed first. Work is in progress.
‡‡‡SRDF support was added for SCGE 3.1 2006Q4, for S9 and S10 only.
§§§SRDF software was not available for Solaris on x86 or x64 platforms for this release.


TABLE B-2 Test and support matrix for SCGE and Oracle RAC, showing tested and
supported configurations per release.

Oracle version and configuration, per volume manager*

9i RAC 10g RAC

HW Raid SVM/Oba VxVM/C HW Raid SVM/Ob VxVM/C


n VM an VM
S8 AVS 3.2.1 SPARC No †
R1 SCGE 3.1u4 8/05

True Copy SPARC Yes‡ No‡‡ No§§ No§§ No§§§ No§§


S9 AVS 3.2.1 SPARC No †

True Copy SPARC Yes‡ No‡‡ No§§ No§§ No§§§ No§§


S10 True Copy SPARC Yes No‡‡ No§§ No§§ No§§§ No§§

S8 AVS 3.2.1 SPARC No §

True Copy SPARC Yes‡ No‡‡ Yes‡ No‡‡


SRDF SPARC No§§
S9 AVS 3.2.1 SPARC No†
True Copy SPARC Yes‡ No‡‡ Yes No§§ No‡‡ No§§
SRDF SPARC No§§ No‡‡ No§§ No§§ No‡‡ No§§
S10 AVS 4.0 SPARC No†
R2 SCGE 3.1u4 2006 Q4

x64 No† No† No† No† No† No†, ****

True Copy SPARC Yes No‡‡ Yes No§§ No‡‡ No§§


x64 No** No**, ‡‡ No** No§§ No‡‡ No****
SRDF SPARC No§§ No‡‡ No§§ No§§ No‡‡ No§§
x64 No**, †† No**, ‡‡ No**, †† No†† No††,‡‡ No††


TABLE B-2 Test and support matrix for SCGE and Oracle RAC, showing tested and
supported configurations per release.

Oracle version and configuration, per volume manager*

9i RAC 10g/11g RAC‡‡‡

HW Raid SVM/Oba VxVM/C HW Raid SVM/Ob VxVM/C


n VM an VM
S9 AVS 3.2.1 SPARC No †

True Copy SPARC Yes‡ No‡‡ Yes‡, ***, Yes‡ No‡‡ Yes***,†††
†††

SRDF SPARC No§§ No‡‡ Yes††† Yes‡ No‡‡ Yes†††,


††††

S10 AVS 4.0 SPARC No †

x64 No†,** No†,** No†,** No † No † No †

True Copy SPARC Yes‡ No‡‡ Yes‡,***,†† Yes‡ No‡‡ Yes‡,***,†††


x64 No** No**, ‡‡ No** Yes‡ No‡‡ Yes§


SRDF SPARC No§§ No‡‡ Yes‡,††† Yes‡ No‡‡ Yes†††,
SCGE 3.2

††††

x64 No**, †† No**, ††, ‡‡ No**, †† No††,§§ No††,‡‡ No§,††


S9 AVS 3.2.1 SPARC No †

True Copy SPARC Yes‡ No‡‡ Yes Yes No‡‡ Yes


SRDF SPARC Yes‡ No‡‡ Yes Yes No‡‡ Yes
S10 AVS 4.0 SPARC No †

x64 No†,** No†,** No†,** No † No † No †

True Copy SPARC Yes No‡‡ Yes Yes No‡‡ Yes


SCGE 3.2U1

x64 Yes No**,‡‡ No §, ** Yes No‡‡ No §

SRDF SPARC Yes No‡‡ Yes Yes No‡‡ Yes


x64 Yes No**,‡‡ No §, ** Yes No‡‡ Yes §


TABLE B-2 Test and support matrix for SCGE and Oracle RAC, showing tested and
supported configurations per release.

Oracle version and configuration, per volume manager*

This matrix shows the supported combinations for Oracle RAC and various types of data replication
technology, for each release of Solaris Cluster Geographic Edition (SCGE). Superscript numbers refer to
explanatory notes below. It is assumed that each Solaris release also has the latest patch releases required by
the underlying Sun Cluster installation, unless notes are given to the contrary. The full details of testing can
be found at the (internal) URLs in the Test documents section in the following paragraph.

“HW Raid” means that no volume manager was used. “SVM/Oban” means the Sun Cluster Volume Manager,
and “VxVM/CVM” means the Veritas Cluster Volume Manager.

This is a current, evolving, matrix, including qualifications carried out after a given version was released.

HA Oracle. Note that this table no longer calls out HA-Oracle as a separate entity. SCGE support for HA-
Oracle is the same as that provided by the underlying Solaris Cluster release.
* ASM support is limited at present, for technical reasons.
† The use of AVS Replication with Oracle RAC is not technically possible.
‡ Extrapolated from tests on a compatible release.
§ CVM is not yet supported on Solaris x86
** Oracle 9i was not released for Solaris x86.
††SRDF software was not available with SCGE for Solaris on x86 or x64 platforms for this release.
‡‡CRs 6216268 (SVM), 6325951 (Oban) and 5032363 (SCGE) must be addressed first.
§§Not yet tested, by project decision.
***Requires SCGE TrueCopy patch 126613-01 or later.
†††Limited support, requires special configuration. Obtain prior review/approval of configuration by SCGE team before making com-
mitment.
‡‡‡11g support is the same as 10g, presuming corresponding support by underlying core Sun Cluster
§§§CRs 6216268 (SVM) and 5070680 (SCGE) must be addressed first. Work is in progress.
****VxVM on x64 is not supported by SC3.1u4
††††Requires SCGE SRDF patch 126746-01 or later.



APPENDIX C

Third-Party Agents

All the agents mentioned in “Application Services” on page 222 are developed, sold,
and supported by the Sun Cluster Business Unit. A variety of agents have been, or
are being, developed by third-party organizations: other business units within Sun,
and ISVs. These agents are sold and supported by the respective third-party
organizations. The table below lists the agents that Sun Cluster product marketing
is aware of.

The application versions listed in this table may not be up to date. Please contact
the person listed in the Contact column of the table for the latest information on
these agents:

TABLE C-1 Third Party Agents

Application: iPlanet Mail/Messaging Server 5.1
Contact: Email: portia.shao@sun.com; Phone: x15213/+1 408 276 5213

Application: IBM DB2 7.2 (EE, EEE)
Contact: Email: DB2Sun@us.ibm.com

Application: IBM IDS/IIF 9.21, 9.3 (HA Informix)
Contact: Tom Bauch; Email: bauch@us.ibm.com; Phone: 972-561-7954

Application: HA SBU 6.1 (agent is bundled with the SBU product)
Contact: Dennis Henderson; Phone: 1-510-936-2260/x12260; Email: dennis.henderson@sun.com

Application: HA-iCS 5.1
Contact: Cheryl Alderese; Phone: x34240/+1 408 276 4240; Email: cheryl.alderese@sun.com

Application: Sybase ASE 12.5 (active-passive)
Contact: Rick Linden; Email: rick.linden@sybase.com
NOTE: There are two Sybase agents: one sold by Sun, the other sold by Sybase. This
table refers to the agent sold by Sybase.



APPENDIX D

Revision History

■ E3500-E6500/E10000 + A3500FC (using


Hubs)
■ VxVM 3.1 (including CVM functionality)
■ HA Oracle 8.1.7 32bit
■ VxVM/SDS with SDS root mirror

04/17/2001
■ Added support for Serengeti-12/12i/24 with
T3 single brick configs

05/07/2001
■ HA Oracle 8.1.6 64bit
■ Solaris 8 U4
■ SunMC 3.0 support
■ changed the verbiage for Sun Cluster 3.0
11/21/00 server licensing
■ First draft created. ■ Sample configs for Serengeti12/12i/24
cluster
12/22/00
06/12/01
■ HA LDAP 4.12 + Solaris 8
■ T3 single brick + 220/420/250/450
02/13/01 ■ Switch + 250/450/220r/420r/4800/4810/
6800
■ Support for E420R
■ CVM 7/10/2001
03/30/01 ■ VxVM 3.1.1
■ Oracle 9iRAC (OPS, 32bit) + VxVM 3.0.4
■ T3 single brick + E3x00-E6x00,E10K ■ Oracle Parallel Server 8.1.7 32bit + VxVM
■ A3500FC + E3x00-E6x00, E10K 3.1.1
■ Solaris 8 Update 2
■ Solaris 8 Update 3


■ OPS/RAC support on Sun Fire 4800/4810/ ■ Netra t 1400/1405 + Netra st D130 + VxVM
6800 servers 3.1.1
■ Gigabit Ethernet as Public Network
Interface. 9/26/01
■ Sun Fire 4800/4810/6800 8 node, mixed
■ Clarify Statement around E1 expander
cluster, and SVM support
support
■ Add II/SNDR 3.0 support
07/23/01 ■ Netra 1400/1405 + S1
■ SunPlex Manager ■ Netra AC200/DC200 + S1
■ Solaris Resource Manager 1.2 coexistence ■ F15K + Purple2
■ HA Sybase Agent
■ HA SAP Agent. 10/01/01
■ Sun Fire(TM) 280R server support.
■ clarify statement around 2 node OPS/RAC
■ Sun Fire 3800 server support.
support
■ Netra t1 200
■ HA Oracle 8.1.7 64 bit
■ Netra t 1400/1405
■ HA Oracle 9i 32 bit
■ Netra t 1120/1125
■ weaken the swap requirements to
recommendation
08/01/01 ■ removed the two node limit for E250/450/
■ Fix the VxVM license in sample configs 220R/420R + T3 single bricks
■ Solaris 8 7/01 ■ added a table for maximum cluster nodes
■ HA Informix v9.21
■ T3PP + E220R/E420R/E250/E450 10/16/01
■ Solaris 8 Update6 support
08/21/01 ■ Netra 20 + D1000
■ SE 99x0 + E450/E3500-6500 ■ Netra 20 + S1
■ HA Netbackup 3.4, 3.4.1
08/29/01
■ Changed SVM to SDS
10/29/01
■ Oracle 9iRAC (OPS) 32 bit + VxVM 3.1.1 ■ Sun Fire V880 + D1000/A5200/T3
(using cluster functionality) ■ Scalable Broadvision
■ HA SAP 4.6D 64 bit
■ HA SAP 4.5B 32 bit 11/13/01
■ HA SAP 4.0 32 bit
■ HA Informix v9.21 to be sold and supported
■ LDAP 4.13
by Informix. Contact: Hans Juergen Krueger,
■ Sun StorEdge 4800/4810/6800 + T3PP
hans-juergen.kreuger@informix.com, 1-650-
926-1061
9/11/01 ■ Oracle 9i RAC 64bit
■ Purple2 support ■ Update information about webdesk
■ 280R + Purple1 partner pair ■ Update information about SCOPE
■ 3800 + Purple1 partner pair ■ cleaned up the placement of some of the
■ >2 node 280R configs storage information.
■ add crystal+ support


12/04/01 02/12/02
■ Sun Cluster 3.0 U2 ■ Added a section on campus cluster
• PCI-SCI + E3500-6500 configurations
■ E3500-6500, 10K (SBus only) + A5x00/T3A/
01/08/02 TB (single brick and partner pair) + 6757A
■ onboard GBE port for public interface and
■ Indy DAS
cluster interconnect for V880
■ OPFS 8i 32bit
■ Scalable SAP 4.6D 32 bit (same agent as HA-
■ Made MPxIO support information more
SAP)
explicit
■ HA-iDS 5.1
■ PCI/SCI + E250/450
■ HA-iCS 5.1- The HA-iCS agent will be sold
■ added Sun Cluster 3.0 12/01
and supported by the iCS group. Contact
Cheryl Alderese, cheryl.alderese@sun.com
01/29/02 for details.
■ Revision history added ■ Updated the part numbers for sun cluster
■ >2 node support for V880 user documentation
■ >2 node support for SF3800 ■ Updated the contact address for Informix
■ F15K and 1034A public network interface agent
■ Netra T1 + Netra st D1000 ■ added 5-meter fiber optic cable support to
■ 250/220R/420R + FCI 1063 + SE 99x0 direct T3, A5x00 section
attached ■ Clarified statement around use of PCI I/O
■ F4800-6800 + 6799/6727 + SE 99x0 direct board for SCI-PCI in E3500-6500
attached
■ E10K + FC641063 + SE 99x0 direct attached 02/28/02
■ F15K + 6799/6727 + SE 99x0 direct attached
■ TrueCopy support
■ V880 + 1063/6799/6727 + SE 99x0 direct
■ Solaris 8 02/02 support
attached
■ Build F15K and F6800 in the same family
■ V880 + 1063 + Brocade 2800(F) + SE 99x0
■ A1000 support with E250/450/220R/420R/
■ F3800 + 6748 + SE 99x0 direct attached
280R/V880/3500-6500
■ E250/450/220R/420R + FCI 1063 + Brocade
■ Netra 1400/1405, 1120/1125, 20 + Netra st
2800 (F) + SE 99x0
A1000
■ E3500-6500 + FC641063 + Brocade 2800 (F) +
■ Campus clusters support for 220R/420R/
SE 99x0
250/450/280R/V880/3800 + T3A/T3B
■ E10K + FC641063 + Brocade 2800 (F) + SE
(single brick and partner pair)
99x0
■ F4800-6800 + 6727/6799 + Brocade 2800 (QL
only) + SE 99x0
03/15/02
■ F4800-6800 + FCI 1063 + Brocade 2800 (F) + ■ Dynamic reconfiguration (DR) support for
SE 99x0 Sun Fire 3800-6800
■ F15K + 6727/6799 + Brocade 2800 (QL only) ■ 1034A as private interconnect with Sun Fire
+ SE 99x0 15K
■ F15K + 1063 + Brocade 2800 (F) + SE 99x0 ■ SDS 4.2.1 supported with SE 99x0 arrays
■ Quorum support on T3PP/SE 99x0/SE39x0 ■ Soft Partitioning now supported with SDS
■ F15K + F4800-6800 + SE 99x0 - mixed family 4.2.1
config ■ SE39x0 + V880, F15K, E3500-6500, E10K
■ Sun Fire 15K + T3A


■ iDS 4.16 ■ 280R and Netra 20 server family


consolidation
03/21/02 ■ Netra 1120/1125, 1400/1405, Sun Enterprise
220R/420R/250/450 server family
■ SE6910/6960 + 250/450/220R/420R/3500-
consolidation
6500/10K/3800/4800-6800/15K
■ A5100 support with V880
■ 280R + 6799/6727A direct attached
■ Added support for Fabric mode with SE
■ 280R + 6799/6727A + Brocade 2802
99x0 and Brocade Switches and Sun 1Gb
■ 3800 + 6748A + Brocade 2802
HBAs.
■ MPxIO support with SE 99x0 arrays for all
the combinations where 6799/6727/6748 is
used. VxVM 3.2 is required for MPxIO
05/21/02
support. ■ SE 9960/9910 + SVM + MPxIO
■ Campus cluster with SE 9960/9910 with
04/09/02 Brocade switches
■ Netra 20 + T3A/B WG + Crystal+ + hubs
■ Sun Fire 12K support
■ single cpu clusters
■ DR support with Sun Fire 12K/15K
■ Sun Cluster 3.0 5/02
■ Added Solaris 8 U1 support
■ HA Oracle 9i 64bit
■ Relaxed Sun Cluster and Solaris updates
■ Oracle 9i RAC Guard 64 bit
support-matrix
■ Solaris 9 support for DNS, NFS, Apache
■ Sybase 12.5 (active-active) - sold &
1.3.9, iDS 5.1
supported by Sybase
■ Oracle 9i RAC Guard 32 bit
06/04/02
04/23/02 ■ 280R and V880 support in the same family
■ 2222A + 12K/15K for public and private
■ OPS/RAC support for campus clusters
network
■ 4 node OPS/RAC with T3WG
■ Jasper + S1 support
■ 8 node support for Sun Fire 12K/15K
■ Jasper + D2 support
■ HA-SAP 6.10
■ 4 node OPS with T3PP w/o CVM
■ F15K + A5200
■ Indy 1.0+ support
■ SAN 4.0 support
04/25/02
■ Oracle 9i (R1) RAC Guard 64 bit
■ Correct statement of support for Sybase ASE ■ HA-Oracle 9iR2 32/64 bit
12.5 ■ Oracle 9iR2 RAC 32 bit
■ HA-Apache 2.0
05/07/02 ■ Scalable iPlanet Webserver 6.0
■ Sun StorEdge 9970/9980 support
■ T3FW 2.1 support 06/18/02
■ 4 node OPS with SE 9960/9910 ■ Clarify HA/Scalable app support with N*N
■ Ivory + SE 9960/9910 topology
■ E3000 - 6000 support ■ HA Sybase 12.0 64bit
■ SE 9900 ShadowImage, graphtrack, LUN ■ Oracle 9iR2 RACG 32bit
manager ■ V480 support with Sun Cluster 3.0
■ E3x00-6x00, 10K server family consolidation ■ PCI SCI support with 220R, 420R


■ E220R/E420R/E250/E450 + 6799/6727A + 10/01/02


SE 9910/9960 + Brocade 2800 switch
■ NWS SAN 4.0 support reflected in storage
■ Support of Sun 1Gb 8/16 port switches with
configuration section
9960/9910
■ Supported Private and Public Interconnects
■ 2222A support on Sun Fire 4800, 4810, 6800
revised (expanded x1150, 1151, 2222 and
for cluster interconnect and public network
additional card support)
interface.
■ Additional campus cluster features included
■ 4 node OPS/RAC support with T3PP with
in campus cluster appendix
VxVM 3.2 cluster functionality.
■ Network Support section revised to reflect
■ 4 node OPS/RAC support with SE3900 with
supported configurations in a easier to use
and w/o VxVM 3.2 cluster functionality.
matrix fashion
■ Indy 1.5 support for SE3900 series systems.
■ Brocade 12000, 3800, and McDATA 6064
support with SE 9960/9910
10/15/02
■ Added SE 3310 Support
07/23/02 ■ Added Diskless Cluster Configuration
Support
■ ATM as public network interface
■ Added support for SAP Livecache 4.6D and
■ Support of E10K and F15K in the same
Apache 1.3.19
family for SE 9900 series storage systems

08/06/02 10/29/02
■ Revised the topology support section to
■ PCI-SCI with Sun Fire 4800, 6800
reflect the relaxed topology restrictions.
■ Heterogeneous node configurations
■ Added the WDM based campus cluster
■ 2222A + S1 on remaining platforms
configurations section.
■ Availability Suite 3.1 with Sun Cluster 3.0 5/
■ Added the “hot-plug” functionality section
02 (or later) + Solaris 8
to the Campus Cluster section.
■ 8 node N+1 configurations

08/20/02 11/12/02
■ Added Sun Fire V120 Support
■ Cassini 1261a, 1150a, 1151a support
■ Added Enterprise 10k PCI SCI Support
■ Oracle 9iR2 RAC 64 bit
(1074a)
■ Oracle 9iR2 RACG 64 bit
■ Added SANtinel and LUSE to the SE 9900
■ HA-Livecache 7.4
series software support sections.
■ HA-Siebel 7.0
■ Updated Agents and Third-Party Agents
section
09/10/02
■ Fixed several typographical errors within
■ 4 Node OPS/RAC supported with SE 9970/ several sections
9980
■ Netra server line VxVM support 12/03/02
standardized (identical to all other
■ Added McData 6064 1GB switch support for
supported servers with Sun Cluster 3.0)
9910/9960
■ SE A5200 support for V480
■ Added SunOne Proxy Server 3.6 support
■ Support 2GB HBA (6767A, 6768A) and
Brocade 3800 switch with SE T3 ES, SE 39x0


1/14/03 2/25/03
■ PCI SCI (1074a) support for SF 280R, V480, ■ Added Sun Netra 1280 Support
V880 ■ Added Brocade 6400 Switch Support
■ Added McData 6064 2GB switch support for ■ Added SE 69x0 Campus Cluster Support
the 9910/9960/9970/9980
■ Added Netra 120 support 3/11/03
■ Added VLAN support
■ Added Brocade 12000 switch support
■ Added A1000 daisy chaining support
■ Added SF V480 McData 6064 (1&2 Gb)
■ Added SunOne Web Server 6.1 agent
support with SE 9970/9980
support
■ Revised Storage Support, Interconnects and
Data Configuration sections
1/28/03
■ Added V1280 support 4/1/03
■ Added SDLM support
■ Added SE 6120 support
■ Added non-support statement for
■ Added 4 nodes Sun Fire Link Support
multipathing to the local disks of a SF v480/
■ Added E450 S1 storage support
v880
■ Single dual-controller, split-bus SE 3310
■ Added Sun Fire Link support for 6800
JBOD configuration support removed
■ Added WDM support for V280, 480, 880
■ Revised Storage Support and Interconnects
■ Added WDM support for OPS/RAC
sections
(removed the RAC/OPS restriction)
■ Added SAP 6.20 support
■ Added 6768 HBA support for SF 6800/SE
■ Added support for RAC on GFS
9980
■ Added HA-Siebel 7.5 Sun Cluster 3.0 U3
4/15/03
support
■ Revised SE 3310 sections ■ Added SE 2GB FC 64 Port Switch Support
■ Expanded Brocade 3800 support to SBUS
2/11/03 systems with T3s/39x0
■ Expanded SE 9970/9980 support for E 420
■ VLAN phase 2 (switch trunking) enabled
■ Revised several sections
■ Slot 1 DR support added
■ Added 6757 McData 6064 support with
5/6/03
9980/ E10k
■ Added HA IBM WebSphere MQ agent ■ Added SE 6320 support
support ■ Added Sol 8 12k/15k SCI support
■ Added HA IBM WebSphere MQ Integrator ■ Added 12k/15k Sol 9 DR Slot 1 support
agent support ■ Added RSM support with RAC
■ Added HA Samba Agent support ■ Revised interconnect and storage sections
■ Added HA DHCP support
■ Added HA NetBackUp 3.4 agent support for 5/20/03
Solaris 9 ■ Added Sun Cluster 3.1. All sections were
■ Revised A5x00 and SE 3310 Storage sections “generified” to Sun Cluster 3 (unless
■ Revised agents, server support, interconnect otherwise specified)
support sections ■ Added SF V210/V240 support
■ Added additional SE 6320 support
■ Added Sol 9 12k/15k SCI support


■ Revised topologies, interconnects, storage, ■ Revised the storage node limitations to


data services, ordering and all other sections clearly define SCSI/FIBER/99x0 node
affected by Sun Cluster 3.1 updates connectivity limits
■ Revised agents section
6/03/03
■ Added support for SE 3510 RAID
8/19/03
■ Added Support for McData 4500 switch ■ Added expanded campus cluster support
■ Added Support for Brocade 3200 switch phase 1- additional campus cluster switch
■ Added Campus Cluster support for VLANs support
■ Expanded SDLM (HDLM) support for ■ Added SF 3800/Brocade 3900/SE 6x20
Solaris 9 on Sun Cluster 3.0 only support
■ Added Sol 9 support for AVS and Sun
6/17/03 Cluster 3.1
■ Revised storage, agents and software
■ Added support for Sun Cluster 3.1 and V240
sections
(Sol 8 and Sol 9)
■ Added support for Sun Cluster 3.0 and V240
with Sol 9
9/2/03
■ Added support for SE 3510 with V240 ■ Added SF v240/SE 6x20 support
■ Revised 3510 RAID switch supported (added ■ Added Sol 9 8/03 support for Sun Cluster
Sun 64 port 2gb switch) 3.0 and Sun Cluster 3.1
■ Added support for SF/Netra 1280 memory/ ■ Added NBU 4.5 support for Sun Cluster 3.0
CPU DR support 5/02
■ Added additional Sun Cluster 3.0/3.1 Samba ■ Added “maximum node” columns to all
support storage arrays
■ Revised Sun Cluster 3.1 Solaris support table ■ Revised VxVM CVM license numbers for
and Volume Manager support table Sun Cluster 3.0
■ Revised storage support section
9/16/03
7/15/03 ■ Added SF V440 support
■ Added support for Brocade 3900, McData ■ Added SF V250 support
6140 switches ■ Added second source HBA support (jni)
■ Revised storage/switch support ■ Added new 6767/6768 HBA part numbers
■ Logical volume unsupported on SE 3510 ■ Added McData 4300 switch support
■ Sun Fire Link (wildcat) supported in DLPI ■ Revised storage section
mode for SF 12k/15k
■ WebSphere MQ and MQ integrator 9/30/03
supported in Sol 9 Sun Cluster 3.x versions
■ Added SAN Support section to storage
support configuration
7/29/03 ■ Added SF V240, McData 4500 support for SE
■ Expanded support for the SE 3310 RAID/ 99x0
JBOD with SF 4800-6800, 12k/15k support ■ Revised Storage support section
■ Added 8 node RAC 9.2.0.3 with SE 99x0 ■ Shadowimage CCI device support clarified
support on SE 99x0
■ Added 8 node N*N support for SE 99x0
storage


10/14/03 ■ Added SCI Promo info


■ Changed Informix contact
■ Added D1000/A1000 to SF V440
■ Cleaned up several tables
■ Truecopy CCI device support clarified on SE
■ Fixed psr info
99x0
2/24/04
10/28/03
■ X4422 (cauldron s) support added
■ Sun Cluster 3.1 10/03 announced/added
■ X4444 (quad-gigabit) support added
■ Added SCI 1074a card support to SF V440
■ Mixed speed nafo configs supported
■ TrueCopy CCI device support clarified on SE
■ Netra 240/SE 99x0/JNI support added
99x0
■ Modified SE 3510 section
■ Added agents for Tomcat, MySQL, Oracle
■ Revised Solaris support section
Ebusiness suite, SWIFTAlliance, Sun Cluster
3.1 NBU 4.5 support
3/9/04
11/11/03 ■ Expanded JNI/second source support for
6120/6320
■ Added MPxIO boot support
■ Documented Hardware RAID 1 support
■ Removed >1 initiator per channel restriction
restriction for SF v440 internal disks
on SE 3510
■ Revised storage section HBA/Storage
■ Expanded leadville-based HBA support
support for 6120, 6320, 99x0, 69x0
■ Added BEA 8.1 agent support
■ Revised supported SAN switch listing-
Brocade 3200
12/2/03
■ Added Netra 240 AC/DC 4/6/04
■ Added 64 LUN 6120/6320
■ Sun Fire Enterprise 2900 Support
■ Added EBS 7.1 support
■ SE 3510 with Sun branded JNI supported
■ SE 3120 JBOD Supported
1/13/03 ■ SE 3510 RAID Restriction Removed
■ Expanded Support for campus cluster ■ SE 3310 JBOD With V440 SCSI supported
storage devices (SE 3510, SE 6x20) ■ Mirroring Between different types of Storage
■ Updated support for RAC on a FS Arrays Supported
■ Expanded SF 440 storage support
■ Solaris 9 12/03 supported 4/27/04
■ NBU 5.0 supported with Sun Cluster 3.0 U3
■ Sun Fire Enterprise 20/25 Support
■ Solaris 9 4/04 support for Sun Cluster 3.0
2/10/04 and 3.1
■ Sun Fire Enterprise 4900/6900 added ■ Inclusion of simplified hardware procedures
■ TrueCopy campus cluster manual support in documentation
added ■ Revised support for Sun Fire 240 and Netra
■ onboard port campus cluster support added 240 to include Netra ST D1000
■ SE 3310 JBOD split bus re-enabled ■ Revised support for Sun Fire 240 to include
■ SF V440/SE 39x0 support added x4422A as cluster interconnect
■ SE 3310/x2222 HBA support added
■ Netra 240 AC support added
■ NBU 5.0 support added


5/11/04 8/31/04
■ Sun Netra 440 DC ■ Support for Netra 440 X6799 and X6541
■ hsPCI+ for 12K/15K
■ Single SE 3120 JBOD Split Bus 9/14/04
■ SE 3510 8 array expansion
■ Support for Sun StorEdge 9990
■ Sun Cluster Open Storage
■ Support for Sun Fire V490/890
■ HA-Oracle Agent for Oracle 10G on Sun
■ Support for X4444A card with Sun Fire 20/
Cluster 3.0
25K
6/1/04 10/05/04
■ Expanded Campus Cluster Support
■ Support for Sun LW8-QFE card
including McData 4500
■ HA-Oracle Agent for Oracle 10G on Sun
10/19/04
Cluster 3.1
■ SAP DB agent Support (SPARC) ■ Support for SE 6130
■ App Server J2EE Support (SPARC) ■ Support for 4 card SCI without DR
■ 8 Node support for SE 6120/6130
■ x86 Support matrix addendum 11/02/04
■ Support for 0racle 10G RAC on Solaris
6/15/04 SPARC
■ Support for SE 6920 ■ XMITS PCI IO boats for Serengeti class
systems with Sun Cluster
7/13/04
11/16/04
■ Support for SE 3511 RAID
■ Support for SE 320 ■ Sun Cluster 3.1 9/04
■ Support for Brocade 3250, 3850 and 24000
switches 12/07/04
■ Support for SE 3310 with V440/Netra 440 on ■
board SCSI
■ EMC Symetrix DMX, 8000, EMC Clariion 1/11/05
CX300,CX400,CX500,CX600 and CX700
■ Jumbo Frames Support
8/03/04
2/01/05
■ Support for 3510 and 3511 RAID arrays with
eight nodes connected to a LUN ■ 10G RAC with SVM Cluster Functionality
■ Support for Netra 440 with the X4422A
(cauldron S), SG-XPCI1FC-QF2, SG- 3/08/05
XPCI2FC-QF2 and X4444A cards ■ Support for Netra 440 and Jasper 320
■ Support for QLogic 5200 Switch
8/17/04
■ Support for Netra 440 AC with X3151A card 4/05/05
■ Support for Sun Fire V40z with SE 3310
■ Support for Public Network VLAN Tagging
RAID and X4422A (cauldron S) HBA.
■ Support for Brocade 4100 FC SPARC


■ Support for HA Siebel 7.7 ■ Support for Sun Fire V40z dual core
■ Support for Sun 4150A51A Cards processors

4/19/05 8/23/05
■ Support for HA Sybase 12.5 agent ■ Support for SE 9985 with Sun Cluster

■ Updates including AC/DC support ■ Sun Cluster 3.1 8/05 update


clarification in 3510/35100 and 3310/3311
9/13/05
5/03/05 ■ Support for Jasper 320 with 3310 RAID and
■ Support for SE 6920 V 3.0.0 (Unity 3.0) V40z

■ Support for Oracle 10G with Shared QFS ■ Panther processor support
■ HA-Oracle 10G on Solaris 9 x86
5/17/05 ■ Miscellaneous updates
■ Support for NEC iStorage
■ Miscellaneous Updates
9/27/05
■ Support for Brocade 200E and 48000
6/7/05 ■ Support for 3310 RAID and V40z with SG-
■ Support for 3310/3120 JBOD XPCI1SCSI-LM320
■ Support for X4444A ■ Panther processor support for E2900,4900
and 6900
■ Support for SG-XPCI2SCSI-LM320 (Jasper
320) ■ Support for Sybase 12.5.2 and 12.5.3
■ Support for Sybase ASE 12.5.1 (SPARC)
10/11/05
7/12/05 ■ Support for AVS 3.2.1
■ Support for Sun 5544A Card (SPARC) ■ Support for SE 3320
■ Support for Sun Emulex Cards (Rainbow) ■ Panther processor support for E20 and 25K
SG-XPCI21C-EM2 and SG-XPCI2FC -EM2 ■ Misc. updates and corrections
(SPARC)
■ Support for Sun Fire V440 On Board HW 11/11/05
RAID ■ Galaxy Servers
■ Support for SE 9990 with HDLM 5.4\ ■ Fibre Channel storage for x64

7/26/05 ■ Support for x4445A NIC

■ Support for Sun 4150/4151A card on Solaris ■ Support for 3320 on x64
x86 ■ Support for Infiniband on x64
■ Support for Shadow Image and TrueCopy ■ Support for HA Oracle 10gR1 on x64
with SE 9990 ■ Corrections on agents
■ Support for Sun Fire V40z On Board HW ■ Misc. updates and corrections
RAID


12/10/05 4/18/06
■ 8 Node Oracle RAC support with V40z
■ T2000 support for SCSI storage
1/10/06 ■ Support for RoHS NICs
■ Support for T2000 Server ■ Updated storage support for Netra 240
■ Added License part numbers for Sun Cluster
Geo Edition
1/24/06 ■ Added License part numbers for Sun Cluster
■ Support for 6920 with x64 Clusters Advanced Edition for Oracle RAC
■ Updated Version Support for MySQL and
WebSphere MQ agents
■ Support for single dual-port HBA as path to 7/11/06
shared storage ■ StorageTek 6540 Array

2/7/06 ■ StorageTek 6140 Array


■ updates to MySQL agent
■ Updated Oracle E-Business Suite Agent
Support ■ Sun Blade 8000 Modular System
■ Edited Volume Manager Support ■ Sun Blade X8400 Server Module
Information ■ Solaris 10 6/06 (Update 3)
■ Support for the SE3320 with X4200
10/17/06
2/21/06 ■ Support for the Sun Fire X4100 M2, X4200
■ Support for T1000 server M2, and X4600 M2 servers
■ Support for 3511 in campus clusters
11/21/06
■ Added support for Solaris 10 zone failover
for MySQL and Apache Tomcat agents ■ Support for the Sun Fire V215, v245, and
V445 servers
■ Support for SE 6130 with x64 servers
■ Support for mixed 2Gb/s and 4Gb/s FC
■ Support for four node connectivity with the cards in SAN attached storage
SE 6920 with x64 servers
■ Support for Cisco FC switches

1/09/07
4/4/06 ■ Support for the Sun Fire X2100 M2 and
■ Oracle RAC 10gR2 for x64 X2200 M2 servers
■ 4422A support for Solaris 10x64
2/06/07
■ Support for McData 4500 and 4700 switches
■ Update MySQL agent section
■ Support for 99x0 with T2000
■ Update Samba agent section


■ Support for the Sun Blade x8420 (A4F) ■ Add new support of ST2540 (FC)
Server Module
■ Support of Netra 210 for Diskless Cluster 5/08/07
Config ■ Add Sun Cluster Geographic Edition section
■ Change of config guide ownership from ■ Update Spec-Based Campus Cluster section
Matt Hamilton to Hamilton Nguyen
■ Consolidate various Campus Cluster entries
3/06/07 ■ Update Siebel 7.8.2, SwiftAlliance Access
and SwiftAlliance Gateway support for Sun
■ Update the entire config guide with Sun Cluster3.1 (SPARC) table
Cluster 3.2 data
■ Add Cisco 9124, Brocade 5000, Qlogic 9100
■ Update V210/V240 Server Configuration and 9200 to list of FC switches supported
section
■ Update 5544A/5544A-4 support with
■ Update SE3511 RAID Configuration Rules additional servers
section
■ Add Sun NAS 53XX note
■ Update Private Interconnect Technology
Support section ■ Add Minnow firmware note

■ Add STK6140 and two additional HBAs to ■ Update QFS and Oracle RAC tables (x64 and
Sun Blade 8000 support matrix SPARC)

■ Add new Netra x4200 M2 support matrix ■ Add new Sun Blade 8000 P support matrix

■ Add Spec-Based Campus Cluster section ■ Add Sun SPARC Enterprise M4000, M5000,
M8000 and M9000 supports
■ Add SAN4.4.12 note
6/05/07
4/03/07
■ Add Sun Blade T6300 support
■ Add SE 9970/9980 and SE 9985/9990
supports to x4600 Matrix ■ Add StorageTek 6540 support with x64
servers
■ Add note related to Info Doc#88928 to T2000
section ■ Add External I/O Expansion Unit for Sun
SPARC Enterprise Mx000 Servers
■ Add Oracle Application Server support to
Failover Services for Sun Cluster 3.2 (x64) ■ Add Apache Tomcat 6.0 support
table ■ Add/update AVS support including AVS 4.0
■ Add HA Oracle support to Failover Services ■ Add SAP support to Failover Services for
for Sun Cluster 3.2 (SPARC and x64) and Sun Cluster 3.2 (x64)
Failover Services for Sun Cluster 3.1 (x64) ■ Update SAP with agent support in zones to
tables Failover Services for Sun Cluster 3.2
■ Add JES Directory Server/JES Messaging (SPARC)
Server/Netbackup notes to Failover Services ■ Update Swift Alliance Access and Gateway
for Sun Cluster 3.1 (SPARC) and Failover sections with Solaris 10 11/06 support
Services for Sun Cluster 3.2 (SPARC) tables
■ Add new support of V125
■ Add IB notes/Update IB support


7/10/07 9/14/07
■ Add 802.3ad Native Link Aggregation ■ Add CP3060 SPARC Blade for Netra CT900
support with Public Network ATCA Server support
■ Add new support of SE 9990V ■ Update CP3010 SPARC Blade for Netra
■ Update Oracle RAC table (Sun Cluster 3.2 CT900 ATCA Server with SE3510 support
SPARC) with additional storage support ■ Add Solaris 10 Update 4 support with Sun
■ Update Sun Cluster Geographic Edition and Cluster 3.2
Oracle table with additional config support ■ Update Guideline for Spec Based Campus
■ Update MySQL with incrementally Cluster Configurations section with support
supported versions of HDS as quorum device

■ Update SAP support (Sun Cluster 3.2 x64) ■ Update Cluster Interconnect section of
Network Configuration chapter
■ Add note to Diskless Cluster section as
related to inclusion of Quorum Server ■ Update link aggregation info in IPMP
Support sub-section under Public Network
■ Update Andromeda tables with additional section
hardware support
■ Add configuration rule to SE 99xx sections
■ Update V215, V245, V445 and V490 on mixing FC HBAs that are and are not
platforms with additional SE 99xx support MPxIO supported
■ Update Netra 440 platform with additional ■ Update JES Messaging Server with version
storages support 6.3 and JES Directory Server with version
■ Update T1000 platform with SCSI-based 5.2.x in Failover Services for Sun Cluster 3.2
storage support (SPARC) table
■ Update Cluster Interconnect and Public ■ Update both SwiftAlliance Access and
Network tables with additional NICs support SwiftAlliance Gateway with version 6.0 in
Failover Services for Sun Cluster 3.2
8/07/07 (SPARC) table
■ Add CP3010 SPARC Blade for Netra CT900 ■ Update N1 Grid Engine 6.1 in Failover
ATCA Server support Services for Sun Cluster 3.1 (SPARC and x64)
and Sun Cluster 3.2 (SPARC and x64) tables
■ Add Solaris 9 support to V215 and V245
platforms ■ Add Sybase ASE support to Failover
Services for Sun Cluster 3.2 (x64) table
■ Update Campus Clusters chapter
■ Update Sybase ASE entry in Failover
■ Update True Copy Support section
Services for Sun Cluster 3.2 (SPARC) table
■ Add additional PCI-E ExpressModule with non-global zones support
Network Interfaces to Cluster Interconnect and
Public Network tables 10/09/07
■ Update Supported SAN Software section ■ Add Sun SPARC Enterprise T5120 and T5220
with release SAN 4.4.13 note platforms support
■ Update Siebel 7.8.2 entry in Failover Services ■ Add new support of SE 9985V
for Sun Cluster 3.1 (SPARC) table with
Solaris 10 support ■ Update Sun Blade T6300 platform with
additional HBA support


■ Update X2100 M2 and X2200 M2 Servers with SE3120, SE3310 and SE3320 supports
■ Update SE3310, SE3320, SE3510 and SE3511 with Minnow 4.21 firmware
■ Update Netra 1290 with ST6140 and ST6540 supports
■ Update/add SAP Livecache 7.6 and SAP MaxDB 7.6 entries in Failover Services for Sun Cluster 3.2 (SPARC & x64) tables
■ Update MySQL version in Failover Services for Sun Cluster 3.1 (SPARC & x64) and Failover Services for Sun Cluster 3.2 (SPARC & x64) tables

11/06/07
■ Add Sun Blade x6220 and x6250 Server Modules support
■ Add Sun Blade T6320 Server Module support
■ Add new section to introduce Support for Virtualized OS Environment (LDOM)
■ Update Solaris Container agent for Sun Cluster 3.1 with native and 1x brand support
■ Update Guideline for Spec Based Campus Cluster Configurations with support of HDS as quorum device for Sun Cluster 3.1u4
■ Update SE3120 JBOD Support Matrix with E6900 support
■ Update Sun Blade 8000 Support Matrix with x7287A-Z support
■ Update x4100 M2, x4200 M2, Netra x4200 M2, x4600 and x4600 M2 with x4446A-Z support

12/04/07
■ Add Sun Blade x8440 Server Modules support
■ Add Sun Fire X4150 and X4450 Servers support
■ Update ST2540 with M4000, M8000, M9000 and Sun Blade X84xx support
■ Update SE 99xx with Mx000 support
■ Update SE3320 RAID Support Matrix with Netra 1290 support
■ Update Sun Blade T6300 platform with LDOM support
■ Update Mx000 with DR support
■ Add Cisco 9134 and 9222i to list of FC switches supported
■ Update QFS tables with SAM-QFS (Shared) 4.6 support
■ Update Samba with incrementally supported versions for both Solaris Cluster 3.1 and 3.2

01/08/08
■ Add ST2530 (SAS) and SAS HBAs supports
■ Add Sun Blade 6048 chassis support
■ Update Sun Blade 60xx Support Matrix with Infiniband interconnect (x1288A-Z) and ST 99xx storage support
■ Update Sun SPARC Enterprise T5120 and T5220 platforms with SCSI storage support
■ Add Sybase version 15.0.1 and 15.0.2 support in Failover Services for Sun Cluster 3.1 (SPARC) and Sun Cluster 3.2 (SPARC and x64) tables
■ Add Brocade DCX to list of SAN switches supported
■ Update Mx000 with additional ST 99xx support
■ Update External I/O Expansion Unit for Sun SPARC Enterprise Mx000 Servers with additional NICs

02/05/08
■ Update ST2540 with additional servers support
■ Update ST2530 (SAS) with additional servers and HBAs support
■ Update Sun SPARC Enterprise T5120/T5220 with additional ST6540 Array support

■ Update Volume Manager tables with additional S10U4 support
■ Update Netra X4200 M2 with additional ST2540 RAID Array support
■ Update Sun Blade 6000/6048/8000 Support Matrix with additional NIC support

03/04/08
■ Add Sun Blade x8450 Server Module support
■ Add Universal Replicator support with SE 9985V/SE 9990V
■ Add ST2530 support with T5120/T5220
■ Update supported SAN software for Sun Cluster on Solaris 9
■ Update SE 9985V/9990V with x64 support
■ Update Siebel agent with additional version 8.0 support in Failover Services for Sun Cluster 3.2 (SPARC)
■ Update Sun Blade x6220 and x6250 Server Modules with SE 9985V/9990V support
■ Update Sun SPARC Enterprise T5120 and T5220 with SE 99xx support

04/01/08
■ Add ST2510 (iSCSI) support
■ Add Sun SPARC Enterprise T5140 and T5240 support
■ Add Sun Fire X4140 and X4240 Servers support
■ Add Sun Fire X4440 Server support
■ Add Sun StorageTek NAS support for any data services with more than 2 nodes
■ Add support of SRDF in a campus cluster configuration
■ Update Sun Cluster Geographic Edition appendix to reflect SCGE3.2U1 release
■ Update VxVM (on x64 and SPARC) tables to reflect SC3.2U1 release
■ Update Oracle Server with version 11g support in Sun Cluster 3.1 (SPARC) and Sun Cluster 3.2 (SPARC) tables
■ Update Oracle RAC with version 11g support in Sun Cluster 3.1 (SPARC) and Sun Cluster 3.2 (SPARC) tables
■ Update Oracle Application Server with version 10.1.3.1 support in Sun Cluster 3.2 (SPARC and x64) tables
■ Update Oracle E-Business Suite with version 12.0 support in Sun Cluster 3.2 (SPARC) table
■ Add HA Container (1x and Solaris8 branded) support to Sun Cluster 3.2 (SPARC and x64) tables
■ Update BEA Web Logic Server with version 9.2 support in Sun Cluster 3.2 (SPARC and x64) tables
■ Update JES Application Server with version 9.1EE support in Sun Cluster 3.2 (SPARC and x64) tables
■ Update CP3060 SPARC Blade for Netra CT900 ATCA Server with additional HBA support
■ Update Sun Blade 8000 and 8000P with additional SE 99xx support
■ Update Sun Fire X4100 M2/X4200 M2, X4450, X4600, X4600 M2 with additional SE 99xx support
■ Update Netra 440, Netra 1280, SF V440, SF V445, SF V480, SF V490 with additional NIC support
■ Update Sun SPARC Enterprise M5000 with ST2540 support
■ Update the maximum number of Cluster nodes (x64) from 4x to 8x

05/13/08
■ Add S10U5 support with SC3.2
■ Add Brocade 300, 5100 and 5300 switches
■ Add x7285A and x7286A NICs support

■ Add limited Netra T5220 support
■ Update ST2530 support
■ Update Sun Blade T6320 platform with additional SE 99xx support
■ Update T5120/T5220 with SE3120 support
■ Update Sun Blade 6000/6048 and 8000 Support Matrix with x7284A-Z support
■ Update Sun Blade T6300 and T6320 with x7284A-Z support
■ Update PostgreSQL in Failover Services for Sun Cluster 3.1 and 3.2 (SPARC and x64)
■ Update Solaris Container in Failover Services for Sun Cluster 3.2 (SPARC and x64)
■ Update Oracle E-Business Suite in Failover Services for Sun Cluster 3.1 (SPARC)
■ Update JES Web Server in both Failover and Scalable Services for Sun Cluster 3.1 and 3.2 (SPARC and x64)
■ Update SAP in Failover Services for both Sun Cluster 3.1 and 3.2 (SPARC)
■ Update web browsers supported with SunPlex Manager

06/10/08
■ Add Sun Blade x6450 Server Module support
■ Update Sun SPARC Enterprise T5140 and T5240 with SCSI and SE 99xx storage support
■ Update ST2510 (iSCSI) with number of nodes support from 2 to 4
■ Update Sun Fire X4150 with SE 9970/9980 storage support
■ Update SwiftAlliance Gateway with additional version 6.1 support in Failover Services for both Sun Cluster 3.1 and 3.2 (SPARC)
■ Update SAP in Failover Services for both Sun Cluster 3.1 and 3.2 (SPARC)
■ Add SG-XPCIE2FCGBE-Q-Z HBA/NIC support for Sun Blade 6000, 6048 and 8000 chassis
■ Add SG-XPCIE2FCGBE-Q-Z HBA/NIC support with Sun Blade T63xx Server Modules

07/08/08
■ Add support of LDOM with Guest Domain
■ Add SG-XPCIE2FCGBE-E-Z HBA/NIC support with Sun Blade 6000, 6048 and 8000 chassis
■ Add SG-XPCIE2FCGBE-E-Z HBA/NIC support with Sun Blade T63xx Server Modules
■ Update Sun Blade x6450 Server Module with additional SE 99xx supports
■ Update Sun Fire X4100/X4200 with SE 99xx supports
■ Update Sun Fire X4140/X4240/X4440 with SE 99xx supports
■ Update Sun Fire X4150/X4450 with additional SE 99xx supports
■ Update Netra T2000 with ST2530 and ST2540 supports

08/05/08
■ Update Supported SAN Software section with release SAN 4.4.15 note
■ Update Sun Fire T1000 Server with additional SE 99xx supports
■ Add new section to provide more details on Sun NAS
■ Update MySQL in Failover Services for Sun Cluster 3.2 (SPARC and x64) with additional versions
■ Update Solaris Container in Failover Services for Sun Cluster 3.2 (SPARC and x64) with additional versions

09/02/08
■ Add Netra x4250 support
■ Add Netra x4450 support

■ Update Sun Blade T6320 with X4236A support
■ Update MySQL in Failover Services for Sun Cluster 3.2 (SPARC and x64) with additional version
■ Update External I/O Expansion Unit for Sun SPARC Enterprise Mx000 Servers with additional NIC support

10/14/08
■ Add Sun SPARC Enterprise T5440 Server support
■ Add Sun Blade T6340 Server Module support
■ Add Sun Fire X4540 Server support
■ Update Sun Blade X6220 and X6250 Server Module with x4236A NEM10G support
■ Add 4x 8GB FC PCIe HBAs support
■ Update SE3320 JBOD on discontinuation of dual-hosted single-bus support by base product group
■ Update Sun Cluster 3.1 with Solaris 10U5 support
■ Add Informix Dynamic Server V11 support for Sun Cluster 3.2 (SPARC and x64) Failover Services
■ Update JES MQ Server in Failover Services for Sun Cluster 3.1 and 3.2 (SPARC and x64) with version 4.1
■ Update Agfa IMPAX in Failover Services for Sun Cluster 3.2 (SPARC) with version 6.3

11/11/08
■ Add Sun SPARC Enterprise M3000 Server support
■ Add Netra T5440 support
■ Add USBRDT-5240 support
■ Update Netra x4200M2 support
■ Update Netra T5220 support
■ Update Sun Cluster 3.2 and Sun Cluster 3.2U1 with Solaris 10U6 support
■ Update SCGE/Oracle RAC table with VxVM support involving TrueCopy/S10 x86/SCGE3.2 and SRDF/S10 x86/SCGE3.2U1
■ Update SWIFT Alliance Access in Failover Service for Sun Cluster 3.2 (SPARC) with version 6.2
■ Update x2100M2/x2200M2, x4100M2/x4200M2, x4140, x4150, x4240, x4440, x4450, x4600/x4600M2 with additional HBAs support
■ Update External I/O Expansion Unit with T5120, T5140, T5220 and T5240 support
■ Add Brocade 310 switch support

12/09/08
■ Add new J4200 storage support
■ Add new J4400 storage support
■ Add discussion on MTU relationship between public network and cluster interconnect when using scalable services
■ Add Qlogic 5802V switch support
■ Update SAP MaxDB in Failover Services for Sun Cluster 3.1 (SPARC) and 3.2 (SPARC and x64) with version 7.7

01/13/09 (not published)
■ Add Sun Fire X4250
■ Update Sun SPARC Enterprise M3000 with additional NICs
■ Update Netra T5220 with storage and additional NICs
■ Update Netra T5440 with additional NICs
■ Update StorageTek 2510, 2530, 2540 info
■ Update Informix Dynamic Server in Failover Services for Sun Cluster 3.2 (SPARC and x64) with additional version support
■ Update SAP WAS in Failover Services for Sun Cluster 3.2 (x64) with additional version support

02/10/09
■ Add Solaris Cluster 3.2 1/09 with associated Solaris and agents updates
■ Add Brocade DCX-4S switch
■ Add SG-XPCIE20FC-NEM-Z
■ Transition of Config Guide production from Hamilton Nguyen to Ray Jang

03/10/09
■ Add Sun Storage 6580/6780
■ Add Sun Netra CP3260
■ Update Sun SPARC Enterprise T5140, T5240, T5440 with X1236A-Z

04/14/09
■ Add Sun Blade X6240
■ Add Sun Blade X6440
■ Update Sun StorageTek 9985V/9990V with M3000, T5440, T6340, and X4200
■ Update Sun StorageTek 9985/9990 with Universal Replicator support
■ Update general Sun StorEdge 9900 TrueCopy and Universal Replicator info
■ Update Sun Storage 6580/6780 server support
■ Update WebLogic Server agent with zone nodes support
■ Update Solaris Cluster 3.2 HA for SAP Web Application Server with SAP 7.1 on S10 SPARC
■ Update Apache agent to support all Apache.org 2.2.x versions

05/12/09
■ Add Solaris 10 5/09 for SC 3.1 and SC 3.2, Solaris 10 10/08 for SC 3.1
■ Add Sun Fire X4170, X4270, X4275
■ Add Sun Blade X6270
■ Update Sun Blade X6240 with Barcelona support
■ Update Sun Storage J4200/J4400 with SATA HDD, T5440, X4240, X4250
■ Update Sun StorEdge 9970/9980 with M3000, T5440, T6340, X6240, X6440
■ Update Sun StorEdge 9985/9990, Sun StorageTek 9985V/9990V with X6240, X6440

06/09/09
■ Update Sun Storage 6580/6780 with M4000, M5000, M8000, M9000 External I/O Expansion Unit support
■ Update Sun StorEdge 9910/9960 to sync up with SE 9900 WWWW
■ Update Sun StorEdge 9970/9980, Sun StorageTek 9985V/9990V with X2200 M2
■ Update Sun StorEdge 9985/9990 with T6340, M3000, T5440, X2200 M2

07/21/09
■ Update Sun StorageTek 2510 support with SPARC servers, and consolidated x86 server info
■ Update SVM support to track the bundled Solaris release support
■ Update Sun StorageTek 2530 with Netra X4200 M2 support using the non-NEBS-qualified SG-XPCIE8SAS-E-Z

08/04/09
■ Add new Ethernet Storage Support chapter and relocated ST 2510 section
■ Add Sun Storage 7110, 7210, 7310 and 7410 Systems
■ Update Interconnect and Public Net support of X4447A-Z QGE to include Netra X4200 M2

09/01/09
■ Add Sun Blade 6048 for SPARC blades
■ Update Network Configuration chapter,
separating ExpressModules and Network
Express Modules into separate tables
■ Add Dhole X4822A FEM
■ Update Sun StorEdge 9970/9980 with M4000

10/13/09
■ Add Sun StorageTek 9985V/9990V 16-node
N*N RAC support
■ Add Sun Storage 7000 support for RAC over
NFS
■ Add Sun Storage 6180
■ Re-add Netra X4450 info (lost since 10/14/08?)
■ Update Apache Web Server agent with Zone
Cluster support
■ Update HA Oracle with Zone Cluster
support
■ Update Java MQ agent with 4.3 support
■ Update MySQL agent with 5.0.85 and Zone
Cluster support
■ Update SS 7000 iSCSI LUN fencing and
scsi2/scsi3 quorum device support with SW
2009.Q3
■ Update ST 3320 JBOD that new single-bus
configs not supported per FAB 239464
■ Update External I/O Expansion Unit
support for the SE9900 line
■ Relocate/integrate Sun StorageTek 5000 NAS
info to the Ethernet Storage Support chapter



Index

NUMERICS
2510 RAID array 173
2530 RAID array 167
2540 RAID array 81
3120 JBOD array 142
3310 JBOD array 148
3310 RAID array 153
3320 JBOD array 157
3320 RAID array 162
3510 RAID array 83
3511 RAID array 88
3910 system 90
3960 system 90
3rd-party agents 311
3rd-party storage devices 58
5000 NAS Appliance 175
5210 NAS Appliance 177
5220 NAS Appliance 177
5310 NAS Appliance 178
5320 NAS Appliance 178
5320 NAS Cluster Appliance 178
6120 array 92
6130 array 81, 94, 97, 99, 103, 167, 173, 175
6140 array 97
6180 array 99
6320 system 100
6540 array 103
6580 array 105
6780 array 105
6910 system 107
6920 system 109
6960 system 107
7000 Unified Storage System 179
7110 Unified Storage System 181
7210 Unified Storage System 181
7310 Unified Storage System 181
7410 Unified Storage System 181
9910 system 111
9960 system 111
9970 system 115
9980 system 115
9985 system 119
9985V system 122
9990 system 119
9990V system 122

A
A1000 array 137
A3500 array 140
A3500FC system 63
A5x00 array 66
administrative consoles 263
Agfa IMPAX
  Sun Cluster 3.1 223
  Sun Cluster 3.2 230
Apache Proxy Server
  Sun Cluster 3.1 223
  Sun Cluster 3.2 230, 238
Apache Tomcat
  Sun Cluster 3.1 223, 228, 243, 244
  Sun Cluster 3.2 230, 238, 244, 245
Apache Web Server
  Sun Cluster 3.1 223, 238, 243, 244
  Sun Cluster 3.2 230, 238, 244, 245
application services 222

B
backup node capacity 6
BEA WebLogic Application Server
  Sun Cluster 3.1 223, 228
  Sun Cluster 3.2 231, 239
benefits of clustering 1
boot devices 15

C
campus clusters 287
  configurations 287
  maximum nodes 287
  SAN configurations 288
  TrueCopy 291
Cluster Control Panel (CCP) 264
cluster topologies 3
clustered pair topology 4
clusters using different servers 35
command-line tools 263
consoles 263
CPUs, minimum 15

D
D1000 array 138
D2 array 132
data configuration 250
  file system 253
  meta devices 250
  raw devices 250
  raw volumes 250
DB2 311
DHCP
  Sun Cluster 3.1 223, 228
  Sun Cluster 3.2 231, 239
diskless clusters 8
DNS
  Sun Cluster 3.1 223, 229
  Sun Cluster 3.2 231, 239
documentation, Sun Cluster x

E
enterprise continuity 295
Ethernet 185

F
failover services
  Agfa IMPAX
    Sun Cluster 3.1 223
    Sun Cluster 3.2 230
  Apache Proxy Server
    Sun Cluster 3.1 223
    Sun Cluster 3.2 230, 238

  Apache Tomcat
    Sun Cluster 3.1 223, 228
    Sun Cluster 3.2 230, 238
  Apache Web Server
    Sun Cluster 3.1 223, 238
    Sun Cluster 3.2 230, 238
  BEA WebLogic Application Server
    Sun Cluster 3.1 223, 228
    Sun Cluster 3.2 231, 239
  defined 223
  DHCP
    Sun Cluster 3.1 223, 228
    Sun Cluster 3.2 231, 239
  DNS
    Sun Cluster 3.1 223, 229
    Sun Cluster 3.2 231, 239
  HADB
    Sun Cluster 3.1 223, 229
    Sun Cluster 3.2 231, 239
  IBM WebSphere MQ
    Sun Cluster 3.1 223
    Sun Cluster 3.2 231, 239
  Informix
    Sun Cluster 3.2 231, 239
  JES Application Server
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 231, 239
  JES Directory Server
    Sun Cluster 3.1 224
    Sun Cluster 3.2 231
  JES Messaging Server
    Sun Cluster 3.1 224
    Sun Cluster 3.2 231
  JES MQ Server
    Sun Cluster 3.1 227, 230
    Sun Cluster 3.2 237, 242
  JES Web Proxy Server
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 239
  JES Web Server
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 239
  Kerberos
    Sun Cluster 3.2 232, 240
  MySQL
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 240
  N1 Grid Engine
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 240
  N1 Grid Service Provisioning System
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 240
  Netbackup
    Sun Cluster 3.1 224
    Sun Cluster 3.2 232
  NFS
    Sun Cluster 3.1 224, 229
    Sun Cluster 3.2 232, 240
  Oracle Application Server
    Sun Cluster 3.1 225
    Sun Cluster 3.2 232, 240
  Oracle E-Business Suite
    Sun Cluster 3.1 225, 328
    Sun Cluster 3.2 233
  Oracle Server
    Sun Cluster 3.1 225, 229
    Sun Cluster 3.2 233, 240
  PostgreSQL
    Sun Cluster 3.1 225, 229
    Sun Cluster 3.2 234, 241
  Samba
    Sun Cluster 3.1 225, 229
    Sun Cluster 3.2 234, 242
  SAP
    Sun Cluster 3.1 226
    Sun Cluster 3.2 235, 241
  SAP LiveCache
    Sun Cluster 3.1 227
    Sun Cluster 3.2 236, 241
  SAP MaxDB
    Sun Cluster 3.1 227
    Sun Cluster 3.2 236, 242
  Siebel
    Sun Cluster 3.1 227
    Sun Cluster 3.2 237
  Solaris Containers
    Sun Cluster 3.1 227, 230
    Sun Cluster 3.2 237, 242
  Sun Java Server Message Queue
    Sun Cluster 3.1 227, 230
    Sun Cluster 3.2 237, 242
  Sun One Proxy Server
    Sun Cluster 3.1 229
    Sun Cluster 3.2 231, 232, 239
  Sun StorEdge Availability Suite
    Sun Cluster 3.1 228, 230
    Sun Cluster 3.2 237, 242

  SWIFTAlliance Access
    Sun Cluster 3.1 228
    Sun Cluster 3.2 237
  SWIFTAlliance Gateway
    Sun Cluster 3.1 228
    Sun Cluster 3.2 238
  Sybase ASE
    Sun Cluster 3.1 228
    Sun Cluster 3.2 238, 242
  WebSphere Message Broker
    Sun Cluster 3.1 228
    Sun Cluster 3.2 238, 242
  Sun Cluster 3.1 list 223, 228, 230, 238
FC storage
  SAN support 59
  SE 6130 array 81, 94, 97, 99, 103, 167, 173, 175
  Sun Storage 6180 array 99
  Sun Storage 6580 array 105
  Sun Storage 6780 array 105
  Sun StorageTek 2510 RAID array 173
  Sun StorageTek 2540 RAID array 81
  Sun StorageTek 6140 array 97
  Sun StorageTek 6540 array 103
  Sun StorageTek 9985 system 119
  Sun StorageTek 9985V system 122
  Sun StorageTek 9990 system 119
  Sun StorageTek 9990V system 122
  Sun StorEdge 3510 RAID array 83
  Sun StorEdge 3511 RAID array 88
  Sun StorEdge 3910 system 90
  Sun StorEdge 3960 system 90
  Sun StorEdge 6120 array 92
  Sun StorEdge 6130 array 94
  Sun StorEdge 6320 system 100
  Sun StorEdge 6910 system 107
  Sun StorEdge 6920 system 109
  Sun StorEdge 6960 system 107
  Sun StorEdge 9910 system 111
  Sun StorEdge 9960 system 111
  Sun StorEdge 9970 system 115
  Sun StorEdge 9980 system 115
  Sun StorEdge A3500FC system 63
  Sun StorEdge A5x00 array 66
  Sun StorEdge T3 array (partner pair) 78
  Sun StorEdge T3 array (single brick) 74
  supported devices 41
fibre channel storage. See FC storage
file system 253

G
global interface (GIF) 218
global networking 218
Graphtrack 114, 118, 122, 125

H
HA SBU 311
HADB
  Sun Cluster 3.1 223, 229
  Sun Cluster 3.2 231, 239
HA-iCS 311
hardware components, typical 1
heterogeneous servers 35
  generic rules for using 35
  sharing storage 36
heterogeneous storage 39

I
IBM DB2 311
IBM IDS/IIF 311
IBM WebSphere MQ
  Sun Cluster 3.1 223
  Sun Cluster 3.2 231, 239
Informix
  Sun Cluster 3.2 231, 239

interconnect 183
  Ethernet 185
  junction-based 184
  PCI/SCI 186
  point-to-point 184
  Sun Fire Link 187
  technologies supported 185
  VLAN support 185
iPlanet Mail/Messaging Server 311
IPMP 217

J
J4200 JBOD array 169
J4400 JBOD array 169
JES Application Server
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 231, 239
JES Directory Server
  Sun Cluster 3.1 224
  Sun Cluster 3.2 231
JES Messaging Server
  Sun Cluster 3.1 224
  Sun Cluster 3.2 231
JES MQ Server
  Sun Cluster 3.1 227, 230
  Sun Cluster 3.2 237, 242
JES Web Proxy Server
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 232, 239
JES Web Server
  Sun Cluster 3.1 224, 229, 243, 244
  Sun Cluster 3.2 232, 239, 244, 245

K
Kerberos
  Sun Cluster 3.2 232, 240

L
local storage 39
LUN Manager 114, 118, 122, 125
LUSE 113, 117, 121, 125

M
managing clusters 263
meta devices 250
minimum CPUs 15
Multipack 131
multipathing 217
MySQL
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 232, 240

N
N*N topology 7
N+1 topology 5
N1 Grid Engine
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 232, 240
N1 Grid Service Provisioning System
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 232, 240
NAFO 216
NAS storage
  Sun Storage 7000 Unified Storage System 179
  Sun Storage 7110 Unified Storage System 181
  Sun Storage 7210 Unified Storage System 181
  Sun Storage 7310 Unified Storage System 181
  Sun Storage 7410 Unified Storage System 181
  Sun StorageTek 5000 NAS Appliance 175
  Sun StorageTek 5210 NAS Appliance 177
  Sun StorageTek 5220 NAS Appliance 177
  Sun StorageTek 5310 NAS Appliance 178
  Sun StorageTek 5320 NAS Appliance 178

  Sun StorageTek 5320 NAS Cluster Appliance 178
Netbackup
  Sun Cluster 3.1 224
  Sun Cluster 3.2 232
Netra servers
  1280 servers 21
  20 servers 16
  440 servers 20
  t 1120 servers 16
  t 1125 servers 16
  t 1400 servers 16
  t 1405 servers 16
  T1 AC200 servers 16
  T1 DC200 servers 16
Netra storage
  st A1000 array 128
  st D1000 array 129
  st D130 array 127
network adapter failover 216
network configuration 183
  interconnect 183
  public network 202
network interfaces
  interconnect 188, 194, 196, 197, 198, 199, 200, 203, 209, 213, 215
  public network 211, 212, 214, 216, 325
network multipathing 217
NFS
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 232, 240
nodes, maximum in cluster 3

O
Oban 251, 252
operating systems 219
  Sun Cluster versions and Solaris versions 219
Oracle Application Server
  Sun Cluster 3.1 225
  Sun Cluster 3.2 232, 240
Oracle E-Business Suite
  Sun Cluster 3.1 225, 328
  Sun Cluster 3.2 233
Oracle Parallel Server, See Oracle RAC
Oracle RAC 245
  Sun Cluster 3.1 246, 247, 249
  topologies 245
Oracle Real Application Cluster, See Oracle RAC
Oracle Server
  Sun Cluster 3.1 225, 229
  Sun Cluster 3.2 233, 240
overview of clustering 1

P
pair+N topology 6
PCI/SCI 186
PostgreSQL
  Sun Cluster 3.1 225, 229
  Sun Cluster 3.2 234, 241
private interconnect, see interconnect 183
public network 202

Q
quorum devices 41

R
RAID 258
raw devices 250
raw volumes 250
recommended rules, definition xi
required rules, definition xi

S
S1 array 134
Samba
  Sun Cluster 3.1 225, 229
  Sun Cluster 3.2 234, 242
SAN support 59
SANtinel 113, 117, 121, 125
SAP
  Sun Cluster 3.1 226
  Sun Cluster 3.2 235, 241
SAP LiveCache
  Sun Cluster 3.1 227
  Sun Cluster 3.2 236, 241
SAP MaxDB
  Sun Cluster 3.1 227
  Sun Cluster 3.2 236, 242
SAS storage
  Sun Storage J4200 JBOD array 169
  Sun Storage J4400 JBOD array 169
  Sun StorageTek 2530 RAID array 167
scalable services
  Apache Tomcat
    Sun Cluster 3.1 243, 244
    Sun Cluster 3.2 244, 245
  Apache Web Server
    Sun Cluster 3.1 243, 244
    Sun Cluster 3.2 244, 245
  defined 243
  JES Web Server
    Sun Cluster 3.1 243, 244
    Sun Cluster 3.2 244, 245
  Sun Cluster 3.1 243, 244, 245
scalable topology 7
SCSI storage 127, 167, 173
  Sun Netra st A1000 array 128
  Sun Netra st D1000 array 129
  Sun Netra st D130 array 127
  Sun StorEdge 3120 JBOD array 142
  Sun StorEdge 3310 JBOD array 148
  Sun StorEdge 3310 RAID array 153
  Sun StorEdge 3320 JBOD array 157
  Sun StorEdge 3320 RAID array 162
  Sun StorEdge A1000 array 137
  Sun StorEdge A3500 array 140
  Sun StorEdge D1000 array 138
  Sun StorEdge D2 array 132
  Sun StorEdge Multipack 131
  Sun StorEdge S1 array 134
  supported devices 49
SDS 251
servers 11
  boot devices 15
  generic configuration 15
ShadowImage 113, 117, 121, 125
shared storage 40
  quorum devices 41
  supported FC devices 41
  supported SCSI devices 49
  third-party devices 58
Siebel
  Sun Cluster 3.1 227
  Sun Cluster 3.2 237
single-node clusters 9
software components, typical 2
Solaris Containers
  Sun Cluster 3.1 227, 230
  Sun Cluster 3.2 237, 242
Solaris Resource Manager 249
Solaris Volume Manager 251, 252
Solaris Volume Manager for Sun Cluster 251, 252
Solstice DiskSuite
  Sun Cluster 3.1 251
star topology 5
storage 39
  FC storage 59
  heterogeneous storage 39
  local storage 39

  SCSI storage 127, 167, 173
  shared storage 40
storage-attached networks 59
Sun Cluster documentation x
Sun Enterprise servers
  10000 servers 18
  3x00-6x00 servers 18
Sun Fire Enterprise servers
  4900 servers 22
  6900 servers 22
Sun Fire Link 187
Sun Fire servers
  12K servers 23, 24, 25
  15K servers 23, 24, 25
  20K servers 23, 24, 25
  25K servers 23, 24, 25
  3800 servers 21
  4800 servers 21
  4810 servers 21
  6800 servers 21
  V1280 servers 21
  V210 servers 16, 17, 18, 19
  V240 servers 16, 17, 18, 19
  V400 servers 21
  V440 servers 20
  V480 servers 21
  V880 servers 21
  V890 servers 21
Sun Java Server Message Queue
  Sun Cluster 3.1 227, 230
  Sun Cluster 3.2 237, 242
Sun Management Center (SunMC) 264
Sun One Proxy Server
  Sun Cluster 3.1 229
  Sun Cluster 3.2 231, 232, 239
Sun Storage storage
  6180 array 99
  6580 array 105
  6780 array 105
  7000 Unified Storage System 179
  7110 Unified Storage System 181
  7210 Unified Storage System 181
  7310 Unified Storage System 181
  7410 Unified Storage System 181
  J4200 JBOD array 169
  J4400 JBOD array 169
Sun StorageTek storage
  2510 RAID array 173
  2530 RAID array 167
  2540 RAID array 81
  5000 NAS Appliance 175
  5210 NAS Appliance 177
  5220 NAS Appliance 177
  5310 NAS Appliance 178
  5320 NAS Appliance 178
  5320 NAS Cluster Appliance 178
  6140 array 97
  6540 array 103
  9985 system 119
  9985V system 122
  9990 system 119
  9990V system 122
Sun StorEdge Availability Suite
  Sun Cluster 3.1 228, 230
  Sun Cluster 3.2 237, 242
Sun StorEdge storage
  3120 JBOD array 142
  3310 JBOD array 148
  3310 RAID array 153
  3320 JBOD array 157
  3320 RAID array 162
  3510 RAID array 83
  3511 RAID array 88
  3910 system 90
  3960 system 90
  6120 array 92
  6130 array 94

  6320 system 100
  6910 system 107
  6920 system 109
  6960 system 107
  9910 system 111
  9960 system 111
  9970 system 115
  9980 system 115
  A1000 array 137
  A3500 array 140
  A3500FC system 63
  A5x00 array 66
  D1000 array 138
  D2 array 132
  Multipack 131
  S1 array 134
  SE 6130 array 81, 94, 97, 99, 103, 167, 173, 175
  T3 array (partner pair) 78
  T3 array (single brick) 74
SunPlex Manager 264
Support for Virtualized OS Environments 259
SVM 251, 252
SWIFTAlliance Access
  Sun Cluster 3.1 228
  Sun Cluster 3.2 237
SWIFTAlliance Gateway
  Sun Cluster 3.1 228
  Sun Cluster 3.2 238
Sybase ASE 311
  Sun Cluster 3.1 228
  Sun Cluster 3.2 238, 242
System 1 109

T
T3 array (partner pair) 78
T3 array (single brick) 74
terminal concentrators 263
third-party agents 311
third-party storage devices 58
topologies 3
  clustered pair topology 4
  defined 3
  diskless clusters 8
  N*N topology 7
  N+1 topology 5
  pair+N topology 6
  scalable topology 7
  single-node clusters 9
  star topology 5
TrueCopy 113, 117, 291

U
user documentation, Sun Cluster x

V
Veritas file system
  Sun Cluster 3.1 253
Veritas Volume Manager
  Sun Cluster 3.1 251, 252
Virtualized OS environments 259
VLANs 185
volume managers
  Sun Cluster 3.1 251, 252

W
wave division multiplexors 295
WebLogic Application Server
  Sun Cluster 3.1 223, 228
  Sun Cluster 3.2 231, 239
WebSphere Message Broker
  Sun Cluster 3.1 228

  Sun Cluster 3.2 238, 242

