Preface ix
3. Server Configuration 11
Boot Device for a Server 15
Heterogeneous Servers in Sun Cluster 15
Generic Server Configuration Rules 15
SPARC Servers 16
x64 Servers 25
5. Storage Overview 39
Local Storage (Single-Hosted Storage) 39
Heterogeneous Storage in Sun Cluster 39
Shared Storage (Multi-Hosted Storage) 40
Third-Party Storage 58
Related Documentation
TABLE P-1 Sun Cluster 3.1 User Documentation
Title                                                              Part Number
Sun Cluster 3.1 Data Services Planning and Administration Guide   817-6564
Sun Cluster 3.1 Data Services Developers Guide                    817-6555
Sun Cluster 3.1 System Administration Guide                       817-6546
Sun Cluster 3.1 Error Messages Guide                               817-6558
Sun Cluster 3.1 Release Notes Supplement                           816-3381
Notes
Sun Cluster 3 imposes restrictions in addition to those imposed by the base hardware
and software components. Under no circumstances does Sun Cluster 3 relax the
restrictions imposed by those base components. It is also important to understand
what this guide means by REQUIRED and RECOMMENDED.
Sun Cluster 3 extends Solaris with the cluster framework, enabling core Solaris
services such as file systems, devices, and networks to be used seamlessly across a
tightly coupled cluster while maintaining full Solaris compatibility for existing
applications.
Key Benefits
■ Higher, near-continuous availability of existing applications based on Solaris
services such as highly available file system and network services.
■ Extends the benefits of Solaris scalability to dot-com application architectures by
providing scalable and available file and network services for horizontal
applications.
■ Ease of management of the cluster platform through a simple, unified
management view of shared system resources.
Hardware Components
■ Servers with local storage (storage devices hosted by one node).
■ Shared storage (storage devices hosted by more than one node).
■ Cluster Interconnect for private communication among the cluster nodes.
■ Public Network Interfaces for connectivity to the outside world.
■ Administrative Workstation for managing the cluster.
Software Components
■ Solaris Operating Environment running on each cluster node.
■ Sun Cluster 3 software running on each cluster node.
■ Data Services - applications with agents and fault monitors - running on one or
more cluster nodes.
■ Cluster file system providing global access to the application data.
■ Sun Management Center running on the administrative workstation providing
ease of management.
FIGURE 1-1 A Typical Sun Cluster 3 Configuration (logical diagram showing console access, the cluster interconnect, and the public network; physical connections and the number of units depend on the storage and interconnect used)
A topology is the connection scheme that connects the cluster nodes to the storage
platforms used in the cluster. Sun Cluster supports any topology that adheres to the
following guidelines:
■ Sun Cluster supports a maximum of sixteen nodes in a cluster, regardless of the
storage configurations that are implemented.
■ A shared storage device can connect to as many nodes as the storage device
supports.
■ There are common redundant interconnects between all nodes of the cluster.
Shared storage devices do not need to connect to all nodes of the cluster. However,
these storage devices must connect to at least two nodes.
While Sun Cluster does not require you to configure a cluster by using specific
topologies, the following topologies are described to provide the vocabulary to
discuss a cluster’s connection scheme. These topologies are typical connection
schemes:
■ “Clustered Pairs” on page 4
■ “N+1 (Star)” on page 5
■ “Pair + N” on page 6
■ “N*N (Scalable)” on page 7
■ “Diskless Cluster Configurations” on page 8
■ “Single-Node Cluster Configurations” on page 9
For more information on these topologies, see the definitions and diagrams that
follow.
Clustered Pairs
FIGURE 2-1 Clustered Pair Topology
N+1 (Star)
FIGURE 2-2 N+1 Topology
N+1 Features
■ All shared storage is dual-hosted, and physically attached to exactly two cluster
nodes.
■ A single server is designated as backup for all other nodes. The other nodes are
called primary nodes.
■ A maximum of 8 nodes are supported.
N+1 Benefits
The cost of the backup node is spread over all primary nodes.
N+1 Limitations
The capacity of the backup node is the limiting factor when growing an N+1 cluster.
For example, in a 4-node E6x00 cluster, growth is limited by the number of slots
available in the backup node for additional CPU and I/O boards. Hence, the backup
node should be equal to or larger in capacity than the largest primary node.
Pair + N
FIGURE 2-3 Pair + N topology (N = 2 here)
Pair + N Features
■ All shared storage is dual hosted and physically attached to a single pair of
nodes.
■ A maximum of 16 SPARC nodes or 8 x64 nodes are supported.
Pair + N Benefits
Applications can access data from nodes which are not directly connected to the
storage device.
Pair + N Limitations
There may be heavy data traffic on the cluster interconnect.
N*N (Scalable)
FIGURE 2-4 N*N (Scalable) topology (N = 4 here)
Server Configuration
Table 3-1 and Table 3-2 below list the servers supported with Sun Cluster 3. Other
components, such as storage and network interfaces, may not be supported with
every server. Refer to the other chapters to ensure you have a supported Sun
Cluster configuration.
SPARC Servers
■ Sun Cluster supports SCSI storage on the T2000, which requires two open PCI-X
slots for SCSI HBAs. Some T2000 servers shipped with a disk controller that
occupies one of the PCI-X slots, and some ship with a disk controller that is
integrated onto the motherboard. SCSI storage is not supported with Sun Cluster
and the T1000 because the T1000 has only one PCI-X slot.
■ To configure internal disk mirroring in the T2000 servers, follow the special
instructions in the Sun Fire T2000 Server Product Notes. However, when the
procedure instructs you to install the Solaris OS, do not do so. Instead, return to
the cluster installation guide and follow those instructions for the Solaris OS
installation.
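On servers with this onboard RAID controller, the mirror itself is typically created with the Solaris raidctl(1M) utility. The following is only a minimal sketch with example disk names; the Sun Fire T2000 Server Product Notes remain the authoritative procedure.

    # Create a hardware RAID 1 (mirror) volume from the two internal disks.
    # c0t0d0 and c0t1d0 are example device names; confirm yours first, because
    # creating the volume destroys the data on the secondary disk.
    raidctl -c c0t0d0 c0t1d0
    # Display the RAID volumes and their status
    raidctl

As noted above, perform the Solaris OS installation afterwards by returning to the cluster installation guide rather than following the installation step in the Product Notes.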
Please note that, in this config guide, the name “Sun Fire T1000” refers to the Sun
Fire T1000 or the Sun SPARC Enterprise T1000 server. Likewise, the name “Sun Fire
T2000” refers to the Sun Fire T2000 or the Sun SPARC Enterprise T2000 server.
Sun Cluster for this server may be configured differently for Sun Cluster 3.0, Sun
Cluster 3.1, or Sun Cluster 3.2. Tagged VLANs are supported in SC 3.1U4 and later
releases. For servers with only two onboard Ethernet ports and no other Ethernet
cards, tagged VLANs must be used, as sketched below.
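For reference (this is standard Solaris behavior, not a rule specific to this guide), a tagged-VLAN interface is named by combining the VLAN ID and the physical instance number as (VLAN ID x 1000) + instance. The adapter name and VLAN ID below are purely illustrative:

    # VLAN 3 tagged over physical port bge1 is addressed as interface bge3001
    ifconfig bge3001 plumb
    ifconfig bge3001

Tagging in this way lets each physical port carry public-network and private-interconnect traffic on separate VLANs, which is why a server with only two Ethernet ports can still satisfy the interconnect requirements.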
To use a single dual-port HBA, please follow the guidelines and configuration
requirements under "Shared Storage (Multi-Hosted Storage)".
For Sun Cluster 3.1 configurations prior to 3.1 10/03 (update 1):
■ Both Solaris 9 and 10 are supported with Sun Cluster for the V445. Please note
that Solaris 9 supports only PCI-X (and not PCI-Express) cards.
A cluster contained entirely within a Sun Fire 3800, or a cluster with its primary and
backup domains in the same segment of a Sun Fire 6800, will have the common
power plane as a single point of failure. A 2-node cluster on a single Sun Fire 6800,
where each node is a domain in a different segment implemented across the power
boundary, is a good cluster-in-a-box solution with appropriate fault isolation built in.
■ It is recommended to have a minimum of two CPU/Memory boards and a minimum
of two I/O assemblies in each domain, whenever possible.
■ For the cluster interconnect, it is recommended that at least two independent
interconnects attach to different I/O assemblies in a domain. When all the
independent interconnects of a cluster interconnect attach to the same I/O
assembly, it is required that at least two independent interconnects attach to
different controllers in the I/O assembly.
■ It is recommended to have the mirrored components of a storage set attach to
different I/O assemblies in a domain. When the mirrored components of a storage
set attach to the same I/O assembly, it is recommended that they attach to different
controllers in the I/O assembly.
■ When two or more network interfaces are configured as part of a NAFO group, it
is recommended to have each interface attach to different I/O assemblies in a
domain. When the different interfaces of a NAFO group are attached to the same
I/O assembly, it is recommended that they attach to different controllers in the I/O
assembly.
■ Dynamic reconfiguration (DR) is now supported. This support requires Sun
Cluster 3.0 12/01 (or later). Jaguar and other multi-core CPUs require patch
111335-26 (or later) or patch 117124-05 (or later).
■ XMITS PCI I/O boats are supported.
x64 Servers
Please note that x64 requires the following patches: 120501-04, 120490-01, 120498-01.
For more information, see the Sun Fire V20z Server Just the Facts,
SunWIN token #400844.
a The onboard hardware RAID disk mirroring of the V20z requires Solaris 9 patch 119443-02 or later.
Note – The rules that describe which servers can participate in the same cluster
have changed. The server family definitions are no longer used. Instead, a new set of
rules defines mixing at the level of the underlying networking and storage
technologies. This change greatly increases configuration flexibility. Use the new
rules described below to determine which servers can be clustered together.
Generic Rules
These rules must be followed while configuring clusters with heterogeneous servers:
■ Cluster configurations must comply with the topology definitions specified in
“Sun Cluster 3 Topologies” on page 3.
■ Cluster configurations must comply with the support matrices listed in other
sections (for example, “Server Configuration” on page 11, “Storage Overview” on
page 39, and “Network Configuration” on page 183) of the configuration guide.
■ If there are any restrictions placed on server/storage connectivity or
server/network connectivity by the base platforms and the individual
networking/storage components, then these restrictions override the Sun Cluster
configuration rules.
■ SCSI storage can be connected to a maximum of two nodes simultaneously.
■ Fibre Channel storage can be connected to a maximum of four nodes simultaneously
(with the exception of SE 99x0 storage, which can be connected to a maximum of 8
nodes simultaneously).
Table 4-1, “SCSI Interface Groupings,” on page 37 gives the SCSI interfaces,
supported in different servers in Sun Cluster 3, grouped by the underlying SCSI
technology. Each grouping also defines the mixing scope of the servers using these
interfaces in Sun Cluster 3.
■ Storage, HBA, server and other component requirements take precedence over
any Sun Cluster rules.
■ Both SAN and direct-connected FC storage are supported.
■ Node I/O bus type mixing is allowed, e.g., PCIe and PCI-X, or SBus and PCI.
■ FC speeds may be mixed.
■ Connectivity between the nodes of a cluster and shared data must use logically
separate paths. It is recommended to use physically separate paths. “Paths” in
this context refers to connections to the submirrors of an SVM mirrored volume or
the MPxIO paths to a highly available RAID volume, for example.
Please refer to chapter 5, Storage Overview, for additional Sun Cluster details,
including any exceptions to the above rules.
Also refer to the specific storage and SAN product documentation for product
details.
Note on Multipathing
Multipathed and non-multipathed connections are assumed to be consistent across all
nodes logically connected to a shared storage device. For example, if MPxIO is used
for the connection to one node, MPxIO must also be used for the connections between
this shared storage and the other cluster nodes. Similarly, for non-multipathed
connections, all such shared connections must be non-multipathed on all logically
connected cluster nodes.
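Where MPxIO is the chosen multipathing layer, it is enabled per node with standard Solaris mechanisms. The commands below are a hedged sketch of that (not a procedure from this guide) and should be applied identically on every node sharing the device:

    # Solaris 10: enable MPxIO on the supported FC HBA ports (a reboot is required)
    stmsboot -e
    # After the reboot, list the mappings between original and multipathed device names
    stmsboot -L

On Solaris 8/9 with the SAN foundation software, the usual equivalent is setting mpxio-disable="no" in /kernel/drv/scsi_vhci.conf and rebooting.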
Storage Overview
Any storage inside a node, including internal disks and tape storage, is local storage
and cannot be shared.
Please consult each storage device’s section for maximum node connectivity and
other guidelines. The following are general guidelines:
■ Some parallel SCSI devices can be split into two functionally separate devices. See
each specific storage device for details.
■ Parallel SCSI devices can only share a LUN or volume between two nodes in the
same cluster.
■ Fibre Channel (FC) devices can share a LUN, or volume, between two or more
cluster nodes within the same cluster.
■ In some cases, FC devices may present different LUNs to different clusters or non-
clustered nodes.
■ FC devices may be directly connected to FC switches, to HBAs, or attached
directly to cluster nodes. See the specific storage device in question for
restrictions.
■ Sun Cluster highly recommends that each sub-mirror of a mirrored volume, or
each path of a multipath I/O connection, use separate host adapter cards and
controller chips.
■ Sun Cluster now supports the use of a single dual-port HBA, in supported
configurations, as the only adapter connecting a node to shared storage devices.
Note that using a single adapter decreases the availability and reliability of the
cluster; although two HBAs are not required, they are still strongly recommended.
Storage products are supported with a specific set of servers, as listed in the tables
later in this chapter. See Table 5-1, “FC Storage for SPARC Servers,” on page 42 and
Table 5-4, “SCSI Storage for SPARC Servers,” on page 50.
If you use Sun StorEdge A3500 or A3500FC arrays for shared storage in your cluster,
you must use a different device if you need a quorum device.
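As a minimal sketch only (the DID device name d12 and the two-node assumption are examples, not values from this guide), the replacement quorum device would be a shared disk on a different array, configured with the standard Sun Cluster commands:

    # List the DID devices and pick a shared disk that is not on the A3500/A3500FC
    scdidadm -L
    # Sun Cluster 3.0/3.1: add that disk as a quorum device
    scconf -a -q globaldev=d12
    # Sun Cluster 3.2 equivalent
    clquorum add d12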
Refer to the specific storage details section to find other supported components. If
you have mixed types of servers in your cluster, refer to "Sharing Storage Among
Different Types of Servers in a Cluster" on page 36 for additional restrictions.
Sun Netra 20 • • • • •
Sun Netra T1
AC200/DC200
Sun Netra T2000 • • • • • • • • • •
a Only these servers’ SBus I/O boards are supported for shared cluster storage
b The SE 9900 WWWW includes External I/O Expansion Unit support under the base server
TABLE 5-2 FC Storage for x64 Servers
Sun StorageTek 9985V/9990V
Sun StorEdge 6320 System
Sun Storage 6180 Array
Sun StorageTek 6140 Array
Sun StorEdge 6130 Array
Sun StorEdge 6120 Array
Sun StorEdge 3511 RAID Array
Sun StorEdge 3510 RAID Array
Sun StorageTek 2540 RAID Array
a Only these servers' SBus I/O boards are supported for cluster shared storage
b The T2000 is supported with the T3+ only
c Only Sun StorEdge A5200 supported
d Only Sun StorEdge A5100/A5200 supported
e The T2000 is supported with the T3+ only
For other storage arrays and other x64 servers, please refer to the specific server
discussion in Chapter 3. Refer to the specific storage details section to find other
supported components. If you have mixed types of servers in your cluster, refer to
"Sharing Storage Among Different Types of Servers in a Cluster" on page 36 for
additional restrictions.
Sun Netra 20 • • • • • • • • •
Sun Netra T1 • •
AC200/DC200
Sun Netra T2000 • • • •
a Support for SCSI storage with the Sun Fire T2000 server requires two PCI-X slots for HBAs. T2000 servers
with a disk controller that occupies one of the PCI-X slots are not supported with Sun Cluster and SCSI
storage.
Third-Party Storage
Please see the following link for information on supported third-party storage:
http://www.sun.com/software/cluster/osp/index.html
This chapter discusses Fibre Channel storage support in Sun Cluster, both as direct-
attach and SAN configurations.
Server/Switch/Storage Support
Using supported storage switches, it is possible to connect supported Fibre Channel
storage devices and supported servers in a Storage Area Network (SAN)
configuration. These configurations are supported with Sun Cluster as long as they
stay within the range of supported devices and limitations listed below. Supported
configurations comprise supported SAN HBAs, switches, and storage devices
(all listed below) and follow the SAN support rules (also listed below).
■ The configuration must be supported by Network Storage. Please see the NWS
“what works with what” matrices, particularly the latest SAN matrix or the SE
9900 series matrix (if you are using a SE 9900 series storage array). You can find
these matrices at http://mysales.central/public/storage/products/matrix.html
1Gb HBAs
■ SBus: (X)6757A Sun StorEdge SBus Dual FC Network Adapter
■ PCI:
2Gb HBAs
■ SBus: none
■ PCI:
■ SG-(X)PCI1FC-QF2 ((X)6767A) Sun StorEdge 2G FC PCI Single Fibre Channel
HBA
■ SG-(X)PCI2FC-QF2 ((X)6768A) Sun StorEdge 2G FC PCI Dual Fibre Channel
HBA
■ SG-(X)PCI1FC-JF2 JNI 2Gb PCI Single Port Fibre Channel HBA
■ SG-(X)PCI2FC-JF2 JNI 2Gb PCI Dual Port Fibre Channel HBA
■ SG-(X)PCI1FC-EM2 Emulex 2Gb PCI
■ SG-(X)PCI2FC-EM2 Emulex 2Gb PCI
■ SG-(X)PCI1FC-QL2 Sun StorEdge 2G FC PCI Single Fibre Channel HBA
■ SG-(X)PCI2FC-QF2-Z Sun StorEdge 2G FC PCI Dual Fibre Channel HBA
■ cPCI: none
4Gb HBAs
■ SBus: none
■ PCI:
■ SG-(X)PCI1FC-QF4 Sun StorEdge 4G FC PCI Single Fibre Channel Network
Adapter
■ SG-(X)PCI2FC-QF4 Sun StorEdge 4G FC PCI Dual Fibre Channel Network
Adapter
■ SG-(X)PCI1FC-EM4 Emulex Single Port 4Gb Fiber Channel HBA
■ SG-(X)PCI2FC-EM4 Emulex Dual Port 4Gb Fiber Channel HBA
■ cPCI: none
■ PCI-E
■ SG-(X)PCIE1FC-QF4 Sun StorEdge 4G FC PCI-E Single Fibre Channel Network
Adapter
■ SG-(X)PCIE2FC-QF4 Sun StorEdge 4Gb PCI-E Dual Port Fibre Channel HBA
■ SG-(X)PCIE1FC-EM4 Emulex 4Gb Single Port PCI-E
■ SG-(X)PCIE2FC-EM4 Emulex 4Gb Dual Port PCI-E
■ PCI-E ExpressModules
■ SG-XPCIE2FC-QB4-Z
■ SG-XPCIE2FC-EB4-Z
■ SG-XPCIE2FCGBE-Q-Z
■ SG-XPCIE2FCGBE-E-Z
■ Sun Blade 8000/8000 P NEM
■ SG-XPCIE20FC-NEM-Z Sun StorageTek 4Gb FC NEM 20-Port HBA
■ Sun Netra CT 900
■ SG-XPCIE2FC-ATCA-Z Sun StorageTek 4Gb Fibre Channel ATCA HBA
■ XCP32X0-RTM-FC-Z Sun Netra CP3200 ARTM-FC
8Gb HBAs
■ PCI-E
■ SG-XPCIE1FC-EM8-Z
■ SG-XPCIE2FC-EM8-Z
■ SG-XPCIE1FC-QF8-Z
■ SG-XPCIE2FC-QF8-Z
Supported SAN Switches
■ Cisco MDS 9020, 9120, 9124, 9134, 9140, 9216A, 9216i, 9222i, 9506, 9509, 9513 switches
Hub Support
Hubs are required to connect hosts to A3500FC in cluster configurations. An
A3500FC controller module is connected to two hosts via hubs. Each StorEdge
A3500FC controller module is connected to two hubs. Both hosts are connected to
both the hubs. It is required that both hubs be connected to different host bus
adapters on a node. Figure 6-1 on page 66 shows how to configure an A3500FC unit
as shared storage.
Up to four A3500FC controller modules can be connected to a hub. You can connect
controller modules in the same or separate cabinets.
RAID Requirements
An SE A3500FC controller module with redundant controllers provides appropriate
hardware redundancy. An SE A3500FC controller also has hardware RAID
capabilities built in, so software mirroring of data is not required. However, a
software volume manager can still be used for managing the data. A cluster
configuration with an SE A3500FC array that has a single controller module is also
supported, but it requires volume management or software mirroring.
Multipathing
Only the Redundant Disk Array Controller (RDAC) driver from Sun StorEdge RAID
Manager 6.22 is supported.
Quorum Devices
Sun StorEdge A3500 and A3500FC arrays cannot be used as quorum devices.
Campus Cluster
Campus clusters are not supported.
(Figure 6-1: two nodes, each with two host adapters, connect through two hubs to the ports of the A3500FC controller module.)
This section covers Sun Cluster requirements when configured with the Sun
StorEdge A5000, A5100, or A5200.
RAID Requirements
In order to ensure data redundancy and hardware redundancy, software mirroring
across boxes is required. Mirroring of data between the two halves of the same
A5x00 unit is not supported.
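A minimal Solaris Volume Manager sketch of that cross-box mirroring follows; the disk names are placeholders for one disk in each A5x00, and in a cluster these metadevices would normally be created inside a shared diskset rather than locally:

    # Submirror on a disk in A5x00 #1
    metainit d11 1 1 c1t0d0s0
    # Submirror on a disk in A5x00 #2
    metainit d12 1 1 c2t0d0s0
    # Create the mirror with the first submirror, then attach the second
    metainit d10 -m d11
    metattach d10 d12

VxVM host-based mirroring across the two enclosures achieves the same layout.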
Multipathing
Multipathing (for example, using DMP, MPxIO, etc.) is not supported with A5x00s.
(Figures: Node 1 and Node 2, each with two host adapters, connect to A5x00 #1 and A5x00 #2; the Data component on one array is mirrored to the other. Variants show direct-attached, split-bus, and hub-attached configurations.)
RAID Requirements
In order to ensure data redundancy and hardware redundancy, host-based mirroring
between two arrays is required.
Multipathing
Multipathing (for example using DMP, MPxIO, etc.) is not supported with T3 single
brick configurations.
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.
3. Check Table 6-8 or Table 6-9 to determine if there is limited HBA support.
Server                       Host Adapter
Netra 20                     6727A
Sun Enterprise 3x00-6x00     onboard FC-AL socket a, 6730A a
Sun Enterprise 10K           6730A a, 6757A
a Supported in arbitrated loop configurations only (no SAN configurations).
4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed in the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html)
(Figure: each node's two host adapters connect through two switches to two T3 bricks; data on one brick is mirrored to the other.)
RAID Requirements
A T3 partner pair has full hardware redundancy built in, so hardware RAID 5 can be
used for data availability. This also means that a cluster configuration with a single
T3 partner pair is supported.
Multipathing
Use of Sun StorEdge Traffic Manager (MPxIO) is required to provide dual paths
from the server to the T3 partner-pair arrays. No other multipathing solution (for
example, Veritas DMP) is supported.
1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your
chosen server/storage combination is supported with Sun Cluster.
(Figure: Node 1 and Node 2, each with two host adapters, connect through two switches to RAID 5 data on the partner pair.)
RAID Requirements
■ Simplex Configuration:
■ Two ST2540 arrays are required.
■ Data must be mirrored across the arrays using volume manager software (host-
based mirroring).
■ Duplex Configuration:
■ A single ST2540 array is supported with properly configured dual controllers,
multipathing, and hardware RAID.
Multipathing
■ Sun StorEdge Traffic Manager (MPxIO) is required in the duplex configuration
(ST2540 with two controllers).
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.
4. If HBA support is not limited, you can use your server and storage combination
with host adapters in the “Server Search” under the “Searches” tab of the
Interop Tool, https://interop.central.sun.com/interop/interop
RAID Requirements
■ SE 3510 RAID arrays can be used without a software volume manager if you have
correctly configured dual controllers, multipathing, and hardware RAID.
■ A single 3510 is supported with properly configured dual controllers,
multipathing, and hardware RAID.
■ Single controller SE 3510 RAID units are supported as long as they are mirrored
to another array.
■ Hardware RAID is supported with the SE 3510 RAID array, with or without
software mirroring.
Multipathing
Sun StorEdge Traffic manager (MPxIO) is required with dual-controller SE 3510
configurations.
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.
a The Netra CT 900 ATCA Blade Server supports any ATCA card that complies with the PICMG 3.x specifications.
The third-party HBA has been tested with the Sun Netra CT 900 using the CP3060 blade under Sun Cluster, but
this HBA is not a Sun product and thus is not supported by Sun. A Sun-branded HBA is scheduled to be qualified
and supported in the Q1CY08 time frame.
4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed at the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html). Additionally, use
the “Server Search” under the “Searches” tab of the Interop Tool,
https://interop.central.sun.com/interop/interop
In FIGURE 6-9, the same set of LUNs is mapped to channels 0 and 5; a different set of
LUNs is mapped to channels 1 and 4.
Only SE 3511 RAID units can be used as shared storage devices with Sun Cluster 3.
SE 3511 JBOD units can be attached to SE 3511 RAID units for additional storage, but
cannot be used independently of the SE 3511 RAID units in a Sun Cluster 3
configuration. Please read the recommended uses and limitations of the SE 3511 in
the SE 3511 base product documentation.
RAID Requirements
■ SE 3511 arrays can be used without a software volume manager with properly
configured dual controllers, multipathing, and hardware RAID.
■ A single SE 3511 array is supported with properly configured dual controllers,
multipathing, and hardware RAID.
■ Single controller SE 3511 RAID arrays are supported as long as they are mirrored
to another array.
Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with dual-controller SE 3511
configurations.
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.
4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed at the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html). Additionally, use
the “Server Search” under the “Searches” tab of the Interop Tool,
https://interop.central.sun.com/interop/interop
RAID Requirements
■ SE 3910/3960 systems can be used without software volume management with
properly configured dual controllers, multipathing, and hardware RAID.
■ T3 single bricks require software mirroring.
Multipathing
SE 3910/3960 systems require Sun StorEdge Traffic Manager (MPxIO).
1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your
chosen server/storage combination is supported with Sun Cluster.
RAID Requirements
■ SE 6120 arrays are supported without software volume management, if you have
properly configured 6120 partner pairs, multipathing, and hardware RAID.
■ A single 6120 partner pair is supported with properly configured multipathing
and hardware RAID.
■ 6120 single bricks require software mirroring.
■ RAID 5 is supported for use with SE 6120 partner pair configurations.
Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with SE 6120 partner pair
configurations.
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.
4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed in the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html)
RAID Requirements
■ SE 6130 arrays are supported without software volume management, if you have
a properly configured 6130, multipathing, and hardware RAID.
■ A single SE 6130 is supported with properly configured multipathing and
hardware RAID.
Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with the SE 6130.
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.
4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed in the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html)
RAID Requirements
■ ST 6140 arrays are supported without software volume management, if you have
a properly configured ST 6140, multipathing, and hardware RAID.
■ A single ST 6140 is supported with properly configured multipathing and
hardware RAID.
Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with the ST 6140.
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42 and Table 5-2,
“FC Storage for x64 Servers,” on page 46 to determine whether your chosen
server and storage combination is supported.
4. If HBA support is not limited, you can use your server and storage combination
RAID Requirements
■ SS 6180 arrays are supported without software volume management with
properly configured multipathing, and hardware RAID.
■ A single SS 6180 is supported with properly configured multipathing and
hardware RAID.
Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with the SS 6180.
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42 and Table 5-2,
“FC Storage for x64 Servers,” on page 46 to determine whether your chosen
server and storage combination is supported.
4. If HBA support is not limited, you can use your server and storage combination
as listed in the “Server Search” under the “Searches” tab of the Interop Tool,
https://interop.central.sun.com/interop/interop
Switches are supported with the “switchless” version of the 6320 (SE 6320 SL).
RAID Requirements
■ SE 6320 systems can be used without software volume management with
properly configured multipathing and hardware RAID.
■ A single 6320 is supported with properly configured multipathing and RAID.
■ Otherwise, data must be mirrored to another array.
Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required.
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.
4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed in the SAN WWWW
(http://mysales.central/public/storage/products/matrix.html)
(Figure: Node 1 and Node 2 connect through two switches to SE 6320 systems.)
RAID Requirements
■ ST 6540 arrays are supported without software volume management, if you have
a properly configured ST 6540, multipathing, and hardware RAID.
■ A single ST6540 is supported with properly configured multipathing and
hardware RAID.
Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required with the ST 6540.
1. First check Table 5-1, “FC Storage for SPARC Servers,” on page 42, to determine
whether your chosen server and storage combination is supported.
4. If HBA support is not limited, you can use your server and storage combination
with host adapters as listed in the “Server Search” under the “Searches” tab of
the Interop Tool, https://interop.central.sun.com/interop/interop
RAID Requirements
■ SS 6580/6780 systems can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single SS 6580/6780 system is supported with properly configured
multipathing and hardware RAID.
Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required.
1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42, or Table 5-2, “FC
Storage for x64 Servers,” on page 46, to see if your chosen server/storage
combination is supported with Sun Cluster.
4. If HBA support is not limited, you can use your server and storage combination
with host adapters listed by the “Server Search” under the “Searches” tab of
the Interop Tool, https://interop.central.sun.com/interop/interop
RAID Requirements
■ SE 6910/6960 systems can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single 6910/6960 system is supported with properly configured multipathing
and hardware RAID.
Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required.
1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your
chosen server/storage combination is supported with Sun Cluster.
RAID Requirements
■ SE 6920 systems can be used without software volume management if you have
properly configured multipathing and hardware RAID.
■ A single 6920 system is supported with properly configured multipathing and
hardware RAID.
SE 6920 Multipathing
Sun StorEdge Traffic Manager (MPxIO) is required.
Version 3.0.0 is supported with Sun Cluster 3 and the SE 6920. The SE 6920's
virtualization feature is supported using the following storage arrays as back-end,
non-VLV LUN storage: T3B and the SE 6020/6120. For information on third-party
storage, please consult http://www.sun.com/software/cluster/osp/
1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 to see if your
chosen server/storage combination is supported with Sun Cluster.
RAID Requirements
■ The SE 9910/9960 can be used without software volume management if you have
properly configured multipathing and hardware RAID.
■ A single 9910/9960 volume is supported with properly configured multipathing
and hardware RAID.
■ Without multipathing, data must be mirrored to another array or to another
volume within the array using an independent I/O path.
Multipathing
Sun Cluster 3 now supports use of Sun StorEdge Traffic Manager (MPxIO) and Sun
Dynamic Link Manager (SDLM- formerly HDLM) for having multiple paths from a
cluster node to the ST 99x0 array. MPxIO is the multipathing solution applicable to
Sun HBAs, SDLM is the multipathing solution applicable to both JNI HBAs and Sun
HBAs (Sun HBA support with SDLM limited to SDLM 5.0/5.1/5.4). SDLM supports
both Solaris 8 and Solaris 9 (Sol 9 support limited to SDLM 4.1, 5.0, 5.1 and 5.4).
No other storage multipathing solutions (for example Veritas DMP) are supported
with Sun Cluster.
By using multiple paths and either MPxIO or SDLM in conjunction with hardware
RAID, the requirement to host base mirror the data on a SE 9910/9960 is removed.
Please note that only SDLM versions 5.0,5.1 and 5.4 support VxVM (versions 3.2 and
3.5)
TrueCopy
Sun StorEdge 9910/9960 TrueCopy is supported with Sun Cluster 3 with the
following configuration details:
■ Both synchronous and asynchronous modes of operations are supported.
■ When using an MPxIO LUN as the Command Control Interface (CCI) command
device, CCI 01-10-03/02 and microcode 01-18-09-00/00 or better must be used.
■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster.
■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate
data within the same cluster as an alternative to host-based mirroring.
See "TrueCopy Support" on page 291 for more information.
■ TrueCopy pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.
ShadowImage
Sun StorEdge 9900 ShadowImage is now supported with Sun Cluster 3 with the
following configuration details:
■ When using an MPxIO LUN as the Command Control Interface (CCI) command
device, CCI 01-10-03/02 and microcode 01-18-09-00/00 or better must be used.
1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC
Storage for x64 Servers,” on page 46 to see if your chosen server/storage
combination is supported with Sun Cluster.
3. Choose a supported FC switch from the list in "Supported SAN Switches" on
page 62.
4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support
Matrix,” SunWIN Token Number 344150, for additional details.
RAID Requirements
■ SE 9970/9980 arrays can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single SE 9970/9980 array is supported with properly configured multipathing
and hardware RAID.
Multipathing
Sun Cluster 3 now supports the use of Sun StorEdge Traffic Manager (MPxIO) and
Sun Dynamic Link Manager (SDLM, formerly HDLM) for providing multiple paths
from a cluster node to the SE 99x0 array. MPxIO is the multipathing solution
applicable to Sun HBAs; SDLM is the multipathing solution applicable to both JNI
HBAs and Sun HBAs (Sun HBA support with SDLM is limited to SDLM 5.0/5.1/5.4).
SDLM supports both Solaris 8 and Solaris 9 (Solaris 9 support is limited to SDLM 4.1,
5.0, 5.1, and 5.4).
No other storage multipathing solutions (for example, Veritas DMP) are supported
with Sun Cluster.
By using multiple paths and either MPxIO or SDLM in conjunction with hardware
RAID, the requirement to host-based mirror the data on an SE 9970/9980 is removed.
Note – Only SDLM versions 5.0, 5.1, and 5.4 support VxVM (versions 3.2 and 3.5).
TrueCopy
Sun StorEdge 9970/9980 TrueCopy is supported with Sun Cluster 3 with the
following configuration details:
■ Both synchronous and asynchronous modes of operations are supported.
■ When using an MPxIO LUN as the Command Control Interface (CCI) command
device, CCI 01-10-03/02 and microcode 21-02-23-00/00 or better must be used.
■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster.
■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate
data within the same cluster as an alternative to host-based mirroring.
See "TrueCopy Support" on page 291 for more information.
■ TrueCopy pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.
ShadowImage
Sun StorEdge 9970/9980 ShadowImage is now supported with Sun Cluster 3 with
the following configuration details:
■ When using an MPxIO LUN as the Command Control Interface (CCI) command
device, CCI 01-10-03/02 and microcode 21-02-23-00/00 or better must be used.
■ The Remote Console may be used
1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC
Storage for x64 Servers,” on page 46 to see if your chosen server/storage
combination is supported with Sun Cluster.
3. Choose a supported FC switch from the list in "Supported SAN Switches" on
page 62.
4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support
Matrix,” SunWIN Token Number 344150, for additional details.
RAID Requirements
■ ST 9985/9990 arrays can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single ST 9985/9990 array is supported with properly configured multipathing
and hardware RAID.
■ Without multipathing, data must be mirrored to another array or to another
volume within the ST 9985/9990 array using an independent I/O path.
Multipathing
Sun Cluster 3 now supports the use of Sun StorEdge Traffic Manager (MPxIO) and
Sun Dynamic Link Manager (SDLM, formerly HDLM) for providing multiple paths
from a cluster node to the ST 9985/9990 array. MPxIO is the multipathing solution
applicable to Sun HBAs; SDLM is the multipathing solution applicable to both JNI
HBAs and Sun HBAs (Sun HBA support with SDLM is limited to SDLM 5.0/5.1/5.4).
SDLM supports both Solaris 8 and Solaris 9 (Solaris 9 support is limited to SDLM 4.1,
5.0, 5.1, and 5.4).
No other storage multipathing solutions (for example, Veritas DMP) are supported
with Sun Cluster.
By using multiple paths and either MPxIO or SDLM in conjunction with hardware
RAID, the requirement to host-based mirror the data on an ST 9985/9990 is removed.
Note – Only SDLM versions 5.0, 5.1, and 5.4 support VxVM (versions 3.2 and 3.5).
TrueCopy
Sun StorageTek 9985/9990 TrueCopy is supported with Sun Cluster 3 with the
following configuration details:
■ Both synchronous and asynchronous modes of operations are supported.
■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster.
■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate
data within the same cluster as an alternative to host-based mirroring.
See "TrueCopy Support" on page 291 for more information.
■ TrueCopy pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.
Universal Replicator
Universal Replicator is supported with Sun Cluster 3 as follows:
■ Universal Replicator can be used with Sun Cluster to replicate data outside of the
cluster.
■ Using Universal Replicator to replicate data within a cluster is not supported.
■ Universal Replicator pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.
ShadowImage
Sun StorageTek 9985/9990 ShadowImage is now supported with Sun Cluster 3 with
the following configuration details:
■ Microcode versions TBD
■ The Remote Console may be used
1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC
Storage for x64 Servers,” on page 46 to see if your chosen server/storage
combination is supported with Sun Cluster.
3. Choose a supported FC switch from the list in "Supported SAN Switches" on
page 62.
4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support
Matrix,” SunWIN Token Number 344150, for additional details.
RAID Requirements
■ ST 9985V/9990V arrays can be used without software volume management if you
have properly configured multipathing and hardware RAID.
■ A single ST 9985V/9990V array is supported with properly configured
multipathing and hardware RAID.
■ Without multipathing, data must be mirrored to another array or to another
volume within the ST 9985V/9990V array using an independent I/O path.
Multipathing
Sun Cluster 3 now supports the use of Sun StorEdge Traffic Manager (MPxIO) and
Sun Dynamic Link Manager (SDLM, formerly HDLM) for providing multiple paths
from a cluster node to the ST 9985V/9990V array. MPxIO is the multipathing
solution applicable to Sun HBAs; SDLM is the multipathing solution applicable to
both JNI HBAs and Sun HBAs (Sun HBA support with SDLM is limited to SDLM
5.0/5.1/5.4). SDLM supports both Solaris 8 and Solaris 9 (Solaris 9 support is limited
to SDLM 4.1, 5.0, 5.1, and 5.4).
No other storage multipathing solutions (for example, Veritas DMP) are supported
with Sun Cluster.
By using multiple paths and either MPxIO or SDLM in conjunction with hardware
RAID, the requirement to host-based mirror the data on an ST 9985V/9990V is
removed.
Note – Only SDLM versions 5.0, 5.1 and 5.4 support VxVM (versions 3.2 and 3.5).
TrueCopy
Sun StorageTek 9985V/9990V TrueCopy is supported with Sun Cluster 3 with the
following configuration details:
■ Both synchronous and asynchronous modes of operations are supported.
■ CCI package version 01-19-03/04 and later can be used on the host side.
■ TrueCopy can be used with Sun Cluster to replicate data outside of the cluster.
■ Starting with Sun Cluster 3.2, a Sun Cluster feature enables TrueCopy to replicate
data within the same cluster as an alternative to host-based mirroring.
See "TrueCopy Support" on page 291 for more information.
■ TrueCopy pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.
Universal Replicator
Universal Replicator is supported with Sun Cluster 3 as follows:
■ Universal Replicator can be used with Sun Cluster to replicate data outside of the
cluster.
■ Using Universal Replicator to replicate data within a cluster is not supported.
■ Universal Replicator pair LUNs cannot be used as a quorum device.
■ Command Device LUNs cannot be used as a quorum device.
ShadowImage
Sun StorageTek 9985V/9990V ShadowImage is now supported with Sun Cluster 3
with the following configuration details:
■ Microcode versions TBD
■ The Remote Console may be used
1. Check Table 5-1, “FC Storage for SPARC Servers,” on page 42 or Table 5-2, “FC
Storage for x64 Servers,” on page 46 to see if your chosen server/storage
combination is supported with Sun Cluster.
3. Choose a supported FC switch from the list in "Supported SAN Switches" on
page 62.
4. Refer to the “Sun StorEdge 9900 Systems: What Works With What Support
Matrix,” SunWIN Token Number 344150, for additional details.
The other configuration rules for using Netra st D130 as shared storage are listed
below.
■ Daisy Chaining of Netra st D130 is not supported.
■ Host Adapters supported with Netra st D130 are listed below:
TABLE 7-1 Sun Cluster and Netra st D130 Support Matrix for SPARC
Host    Host Adapter    Part # for Host Adapter    Maximum Node Connectivity
Figure 7-1 below shows how to configure Netra st D130 as a shared storage.
(Figure 7-1: two Netra st D130 enclosures, with data on one mirrored to the other.)
The other configuration rules for using the Netra st A1000 as shared storage are
listed below.
TABLE 7-3 Netra st A1000 and Sun Cluster 3 Support Matrix for SPARC
Servers    Host Bus Adapters    Maximum Node Connectivity
The other configuration rules for using Netra st D1000 as shared storage are listed
below.
■ Daisy chaining of Netra st D1000s is not supported.
■ Single Netra st D1000, in split-bus configuration, is not supported.
TABLE 7-5 Sun Cluster 3 and Netra st D1000 Support Matrix for SPARC
The figure below shows how to configure the Netra st D1000 as shared storage.
The other configuration rules for using MultiPack as a shared storage are listed
below.
■ Daisy Chaining of MultiPacks is not supported.
■ Host adapters supported with MultiPack are listed below:
TABLE 7-7 Sun Cluster 3 and SE Multipack Support Matrix for SPARC
Host    Host Adapter    Part # for Host Adapter    Maximum Node Connectivity
(Figure: two MultiPack enclosures, with data on MultiPack #1 mirrored to MultiPack #2.)
SE D2 RAID Requirements
Since D2 doesn’t have RAID capabilities built-in, host-based mirroring using
VxVM/SDS is required.
This host based mirroring requirement ensures the physical path redundancy. With
dual ESM modules, there are no single points of failure in a D2 array. Hence, a
cluster configuration with a single D2 in a split-bus configuration, with data
mirrored across the two halves of the D2, is supported.
SE D2 Support Matrix
The support matrix for D2 with Sun Cluster 3 is:
TABLE 7-9 Sun StorEdge D2 and Sun Cluster 3 Support Matrix for SPARC
Host: Netra t 1120/1125, Netra 1400/1405, Netra 20, Netra 240 AC/DC, Netra 1280; Sun Enterprise 220R, 250, 420R; Sun Fire 280R, V480/V490, V880/V890, V1280
Host Adapter: Sun StorEdge PCI Dual Ultra3 SCSI (6758A), SG-XPCI2SCSI-LM320
Cable: 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B); Max. SCSI Bus Length e: 25m; Node Connectivity: 2

Host: Netra 440 a
Host Adapter: Onboard SCSI Port, 6757A

Host: Sun Fire V210, V240 b, V250, V440 c
Host Adapter: Onboard SCSI port, Sun 6758, SG-XPCI2SCSI-LM320
Cable: 0.8m (1132A), 2m (3832A), 4m (3830A), 10m (3831A); Max. SCSI Bus Length e: 25m

Host: Sun Fire V215/V245, V445
Host Adapter: SGXPCI1SCSI-LM320-Z, SGXPCI2SCSI-LM320-Z, SGXPCIE2SCSIU320Z d, (x)4422A-2
Cable: 0.8m (1136A), 1.2m (1137A), 2m (1138A), 4m (3830B), 10m (3831B); Max. SCSI Bus Length e: 25m

Host: Sun Fire T1000, Sun Fire T2000
Host Adapter: SG-(X)PCIE2SCSIU320Z, SG-XPCIE2SCSIU320Z
Cable: 0.8m (1132A), 2m (3832A), 4m (3830A), 10m (3831A); Max. SCSI Bus Length e: 12m

a In order to use the Netra 440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
b The onboard SCSI port must be used for one shared storage connection due to the server only having one PCI slot.
c In order to use the SF V440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
d This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
e From each host to the D2, including the internal bus lengths
The other configuration rules for using Sun StorEdge S1 as shared storage are listed
below.
■ Daisy Chaining of Sun StorEdge S1 is not supported.
■ Sun StorEdge S1 is supported in direct attached configurations.
TABLE 7-10 Sun StorEdge S1 and Sun Cluster 3 Support Matrix for SPARC
Host    Host Adapter    Cable    Max. SCSI Bus Length g    Maximum Node Connectivity
a The on-board SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
b The on-board SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
c The on-board SCSI port must be used for one shared storage connection due to the server having only 1 PCI slot.
d In order to use the SF V440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
e In order to use the Netra 440's onboard SCSI port to connect to shared storage, patch #115275-02 must be installed.
f This HBA is not supported with Solaris 9 as Solaris 9 does not support PCIe.
g This includes connectivity to both the hosts.
The figure below shows how to configure Sun StorEdge S1 as a shared storage in a
Netra T1 200 cluster:
(Figure: two Sun StorEdge S1 enclosures, with data on one mirrored to the other.)
The configuration rules for using the Sun StorEdge A1000 as shared storage are
listed below.
■ Daisy-chaining of A1000 arrays is supported.
TABLE 7-11 SE A1000 and Sun Cluster Support Matrix for SPARC
Servers    Host Bus Adapters    Max Node Connectivity
The other configuration rules for using D1000 as shared storage are listed below.
■ Daisy chaining of D1000s is not supported.
TABLE 7-13 Sun Cluster and SE D1000 Support Matrix for SPARC
Server    Host Adapter    Part # of the Host Adapter    Maximum Node Connectivity
FIGURE 7-5 Two Sun StorEdge D1000s, in Single-Bus Configuration, as Shared Storage.
The other configuration rules for using Sun StorEdge A3500 as shared storage are
listed below:
■ Daisy-chaining of the controller modules is not supported.
■ Sun StorEdge A3500 and A3500FC arrays cannot be used as quorum devices.
■ A3500 Light is supported.
■ It is required to connect the two SCSI ports of a controller module to different Host
Adapters on a node.
TABLE 7-15 Sun Cluster 3 and SE A3500 Support Matrix for SPARC
Figure 7-6 on page 142 shows how to configure A3500 as a shared storage.
(Figure 7-6: each node's host adapters connect to controllers A and B of the A3500 controller module; a separate device serves as the quorum device.)
The support matrix for the SE 3120 JBOD with Sun Cluster 3 is listed below:
TABLE 7-17 Sun Cluster 3 and SE3120 JBOD Support Matrix for SPARC
Server    Host Adapter    Cable    Maximum Node Connectivity
TABLE 7-18 Sun Cluster 3 and SE3120 JBOD Support Matrix for x64
Server    Host Adapter    Cable    Maximum Node Connectivity
The support matrix for the SE 3310 JBOD with Sun Cluster 3 is listed below:
TABLE 7-19 Sun Cluster 3 and SE3310 JBOD Support Matrix for SPARC
Server    Host Adapter    Cable    Maximum Node Connectivity
TABLE 7-20 Sun Cluster 3 and SE3310 JBOD Support Matrix for x64
Server    Host Adapter    Cable    Maximum Node Connectivity
(Figure: Node 1 and Node 2 connect to SE 3310 #1 and SE 3310 #2 as shared storage.)
TABLE 7-21 Sun Cluster 3 and SE3310 RAID Support Matrix for SPARC
Server    Host Adapter    Cable    Maximum Node Connectivity
TABLE 7-22 Sun Cluster 3 and SE3310 RAID Support Matrix for x64
Server    Host Adapter    Cable    Maximum Node Connectivity
(Figure: data on SE 3310 #1 is mirrored to SE 3310 #2.)
The support matrix for the SE 3320 JBOD with Sun Cluster 3 is listed below:
TABLE 7-23 Sun Cluster 3 and SE3320 JBOD Support Matrix for SPARC
Server    Host Adapter    Cable    Maximum Node Connectivity
TABLE 7-24 Sun Cluster 3 and SE3320 JBOD Support Matrix for x64
Server    Host Adapter    Cable    Maximum Node Connectivity
TABLE 7-25 Sun Cluster 3 and SE3320 RAID Support Matrix for SPARC
Server    Host Adapter    Cable    Maximum Node Connectivity
TABLE 7-26 Sun Cluster 3 and SE3320 RAID Support Matrix for x64
Server    Host Adapter    Cable    Maximum Node Connectivity
(Figure: data on SE 3320 #1 is mirrored to SE 3320 #2.)
FIGURE 7-12 Direct-Attached SE 3320 RAID with Attached JBODs (for additional storage)
RAID Requirements
■ ST 2530 arrays are supported without software mirroring when properly
configured with dual controllers, multipathing, and hardware RAID providing in-
array data redundancy.
■ A single 2530 array is supported when properly configured with dual controllers,
multipathing, and hardware RAID providing in-array data redundancy.
Multipathing
■ Sun StorEdge Traffic Manager (MPxIO) is required in the duplex configuration
(ST2530 with two controllers). Solaris MPT patch 125081-14 or later is required to
configure Sun Cluster.
1. First check Table 5-6, SAS Storage for SPARC Servers, on page 55, or Table 5-7,
SAS Storage for x64 Servers, on page 56 to determine whether your chosen
server and storage combination is supported.
2. If your combination is supported, choose a supported HBA from the list below:
■ SG-XPCI8SAS-E-Z
■ SG-XPCIE8SAS-E-Z
4. If HBA support is not limited, you can use your server and storage combination
with host adapters as indicated by the “Server Search” under the Interop Tool
“Searches” tab, https://interop.central.sun.com/interop/interop
RAID Requirements
■ It is recommended to mirror shared data in a J4200 or J4400 to another array.
■ When configured with dual SIMs and MPxIO, shared data can be mirrored within
a single J4200 or J4400 with SAS HDDs, but with less availability.
■ When a J4200 or J4400 array is configured with a single SIM, shared data must be
mirrored to another array.
Multipathing
■ Sun Cluster support with SAS multipathing is enabled and qualified when using
SAS HDDs.
1. First check Table 5-6, SAS Storage for SPARC Servers, on page 55, or Table 5-7,
SAS Storage for x64 Servers, on page 56 to determine whether your chosen
server and storage combination is supported.
2. If your combination is supported, choose a supported HBA from the list below:
■ SG-XPCI8SAS-E-Z
■ SG-XPCIE8SAS-E-Z
4. If HBA support is not limited, you can use your server and storage combination
with host adapters as indicated by the "Server Search" under the Interop Tool
"Searches" tab, https://interop.central.sun.com/interop/interop
RAID Requirements
■ ST 2510 arrays are supported without software mirroring when properly
configured with dual controllers, multipathing, and hardware RAID providing in-
array data redundancy.
■ A single 2510 array is supported when properly configured with dual controllers,
multipathing, and hardware RAID providing in-array data redundancy.
Multipathing
■ For the duplex configuration, the option to use Sun StorEdge Traffic Manager
(MPxIO) is available. If MPxIO is not used, data must be mirrored to another
array or to another volume within the ST 2510.
Following that model, Sun Cluster 3 supports any Sun server qualified as a cluster
node, with any Ethernet interface supported by that server, provided the
requirements for Solaris release, patches, etc. are met.
■ When adding trusted admin access for the cluster, make sure the trusted admin
access entry comes before any general admin access entries.
■ It is also a good practice to set the NAS fencing module to load automatically
when the NAS device boots. If the NAS device is rebooted, and the fencing
module is not set to automatically load, failed cluster nodes will not be able to be
fenced. Please see the Sun Cluster System Administration Guide for details on
setting the NAS fencing module to load at boot time.
■ iSCSI LUNs may be used only as quorum devices.
■ An iSCSI LUN quorum device must be on the same subnet as that of the cluster
nodes due to bug 6614299.
RAID Requirements
■ N/A
Multipathing
■ N/A
Following that model, Sun Cluster 3 supports any Sun server qualified as a cluster
node, provided the requirements for Solaris release, patches, etc. are met.
RAID Requirements
■ There are no Sun Cluster specific requirements.
Multipathing
■ There are no Sun Cluster specific requirements.
Following that model, Sun Cluster 3 supports any Sun server qualified as a cluster
node, with any Ethernet interface supported by that server, provided the
requirements for Solaris release, patches, etc. are met.
Network Configuration
Cluster Interconnect
The cluster interconnect is the network fabric, private to the cluster, for
communication between the cluster nodes. This fabric is used for cluster-private
communication as well as cluster file system data transfer among the nodes. The
fabric consists of transport paths between all nodes of the cluster.
Point-to-Point Interconnect
For 2 node clusters, a point-to-point connection between the nodes forms a complete
interconnect.
Junction-Based Interconnect
For clusters with more than two nodes, a switch is necessary to form an
interconnect. Note that this option can be used for a two node cluster as well. Using
VLANs for private interconnect traffic is supported.
Ethernet
■ There can be a maximum of 6 independent Ethernet interconnects within a
cluster.
■ All Ethernet ports within an interconnect path must operate at the same speed.
■ VLAN Support
■ Sun Cluster supports the use of private interconnect networks over switch-
based virtual local area networks (VLAN). In a switch-based VLAN
environment, Sun Cluster enables multiple clusters and non-clustered systems
to share Ethernet switches in two different configurations.
■ The implementation of switch-based VLAN environments is vendor-specific.
Since each switch manufacturer implements VLAN differently, the following
guidelines address Sun Cluster requirements regarding how VLANs should be
configured for use with cluster interconnects.
■ You must understand your capacity needs before you set up a VLAN
configuration. To do this, you must know the minimum bandwidth necessary
for your interconnect and application traffic.
■ Interconnect traffic must be placed in the highest priority queue.
■ All ports must be equally serviced, similar to a round robin or first in first out
model.
■ You must verify that you have properly configured your VLANs to prevent
path timeouts.
■ Linking of VLAN switches together is supported. For minimum quality of
service requirements for your Sun Cluster configuration, please see the Sun
Cluster 3 Release Notes Supplement.
■ VLAN configurations are supported in campus cluster configurations with the
same restrictions as “normal” Sun Cluster configurations.
■ Transport paths may share a switch by using VLANs.
■ Jumbo Frames Support
■ Sun Cluster 3.1 and all updates prior to 3.1 9/04 (update 3) are supported and
require the following patches:
117950-07 (or later): SC3.1: Core Patch for Solaris 8.
117949-07 (or later): SC3.1: Core Patch for Solaris 9.
■ Sun Cluster 3.1 9/04 (update 3) and later are supported.
■ Agent support:
- Solaris 8 on Sun Cluster supports only Oracle RAC.
- Solaris 9 and later on Sun Cluster support all Sun Cluster agents.
- When using scalable services and jumbo frames on your public network, the
Maximum Transmission Unit (MTU) of the private network must be the same
size as or larger than the MTU of your public network.
■ Solaris support:
- Solaris 8 requires patch 111883-23 (or later): SunOS 5.8: Sun GigaSwift
Ethernet 1.0 driver patch.
- Solaris 9 requires patch 112817-16 (or later): SunOS 5.9: Sun GigaSwift
Ethernet 1.0 driver patch.
- Solaris 10 does not have specific patch requirements for this feature.
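As a quick check of the MTU requirement above, the MTU of a private interconnect interface can be compared with that of a public interface from each node. The sketch below is illustrative only; the interface names are hypothetical and depend on the NICs in use.
# Sketch only; ce0/ce1 are example interface names
ifconfig ce0 | grep mtu     # public network interface MTU
ifconfig ce1 | grep mtu     # private interconnect interface MTU
# the private interconnect MTU must be equal to or larger than the public MTU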
PCI/SCI
SCI is supported in clusters with a maximum of 4 nodes.
■ An SCI interconnect consists of a pair of cable connections.
InfiniBand
The Sun Dual Port 4X IB Host Channel Adapter is supported in clusters with a
maximum of 4 nodes.
■ Sun Cluster 3.1 update 4 (or later).
■ Solaris 10 update 1 (or later).
■ Solaris Patch Requirements:
■ 118852-07 (or later) SunOS 5.10: patch kernel/misc/sparcv9/ibcm
■ All cluster configurations require one Sun IB Switch 9P per transport path. IB
does not support a point-to-point interconnect.
■ Each IB transport path requires one IB cable from an HCA port to the switch, e.g.
a two-node cluster using IB will use a total of 4 cables.
■ A maximum of 2 IB transport paths per node is supported. Using two IB HCA
cards is recommended for best availability, however using both ports of a single
HCA is supported but may reduce availability. Note that some servers only
support a single IB HCA card.
The PCI network interfaces that can be used to set up the cluster interconnect on SPARC servers are listed in Table 10-1.
TABLE 10-1 Cluster Interconnects: PCI Network Interfaces for SPARC Servers
Table 10-1 lists, for each supported SPARC cluster node (Sun Netra, Sun Enterprise, Sun Fire, and Sun SPARC Enterprise models, as well as the External I/O Expansion Unit for the Sun SPARC Enterprise M4000, M5000, M8000, M9000, T5120, T5140, T5220 and T5240), the interconnect network interfaces supported on that server, drawn from the following set:
■ Onboard Ethernet/Gigabit ports
■ X1027 PCI-E Dual 10 GigE Fiber Low Profile (e)
■ X1032 SunSwift PCI
■ X1033 Fast-Ethernet PCI
■ X1074 SCI PCI (g)
■ X1141 Gigabit Ethernet PCI
■ X1150/X3150 Gigabit Ethernet PCI
■ X1151/X3151 Gigabit Ethernet PCI
■ X1233A/X1233A-Z InfiniBand HCA PCI
■ X1236A-Z InfiniBand HCA PCI-E
■ X2222A Combo Dual FastEthernet-Dual SCSI PCI
■ X4150A/X4151A Gigabit Ethernet PCI
■ X4150A-2/X4151A-2 Gigabit Ethernet PCI
■ X4422A/X4422A-2 Combo Dual Gigabit Ethernet-Dual SCSI PCI
■ X4444A Quad-Gigabit Ethernet card (i)
■ X4447A-Z x8 PCI-E Quad Gigabit Ethernet (e, j)
■ X5544A/X5544A-4 10 Gigabit Ethernet PCI
■ X7280A-2 Gigabit Ethernet UTP PCI-E (d, j)
■ X7281A-2 Gigabit Ethernet MMF PCI-E (j)
■ X7285 Sun PCI-X Dual GigE UTP Low Profile
■ X7286 Sun PCI-X Single GigE MMF Low Profile
■ Sun Fire Cluster Link (Wildcat) (k)
Notes for TABLE 10-1:
a SF V210 onboard gigabit port support requires patch #110648-28
b Do not install PCI SCI cards into hs PCI+ PCI slot 1. For more information see bug 6178223.
c Base and Extended Fabrics, and Sun Netra CP3200 ARTM-FC-Z (XCP32X0-RTM-FC-Z)
d Two-node clusters installed with Solaris 10 11/06 (or later) and KU 118833-30 (or later) can configure e1000g cluster interconnects using back-to-back cabling; otherwise Ethernet switches are required. See Info Doc number 88928 for more info.
e Refer to Info Doc ID: 89736 for details
f Includes support for the new LW8-QFE card on the SF 1280, Netra 1280 and E2900
g This support requires patch #110900-08 for Solaris 8, and patches #112838-06 and 114272-02 for Solaris 9. A maximum of 4 nodes is supported with the X1074A.
h Support in SC 3.2U1 or later, as CR 6599044 (P2/S2) was tested and integrated in SC 3.2U1
j The network interface is not supported with Solaris 9, as Solaris 9 does not support PCIe
k Sun Fire Cluster Link is only supported on SF 6800 and 12K/15K. Only DLPI mode is supported.
The PCI network interfaces that can be used to set up the cluster interconnect on x64 servers are listed in Table 10-2.
TABLE 10-2 Cluster Interconnects: PCI Network Interfaces for x64 Servers
Table 10-2 lists, for each supported x64 cluster node (Sun Fire and Sun Netra x64 models, including the Sun Fire X2100 M2 and the Sun Netra X4250), the interconnect network interfaces supported on that server, drawn from the following set:
■ Onboard Ethernet/GigE ports
■ X1027 PCI-E Dual 10 GigE Fiber Low Profile (a)
■ X1233A/X1233A-Z InfiniBand HCA PCI
■ X1235A Sun Dual Port 4x IB HCA PCI-X
■ X1236A-Z Sun Dual Port 4x IB HCA PCI-E
■ X1333A-4 Sun Dual Port 4x IB HCA PCI-X
■ X2222A Combo Dual FastEthernet-Dual SCSI PCI
■ X4150A/X4150A-2 Sun GigaSwift UTP PCI
■ X4151A/X4151A-2 Sun GigaSwift MMF PCI
■ X4422A/X4422A-2 Sun StorEdge Dual GigE/Dual SCSI PCI (b, c)
■ X4444A Sun Quad GigaSwift PCI UTP
■ X4445A Sun Quad GigaSwift PCI-X UTP
■ X4446A-Z Sun x4 PCI-E Quad GigE UTP
■ X4447A-Z Sun x8 PCI-E Quad GigE UTP
■ X5544A/X5544A-4 Sun 10 GigE PCI/PCI-X (d)
■ X7280A-2 Sun PCI-E Dual GigE UTP
■ X7281A-2 Sun PCI-E Dual GigE MMF
■ X7285A Sun PCI-X Dual GigE UTP Low Profile
■ X7286A Sun PCI-X Single GigE MMF Low Profile
■ X9271A Intel Single GigE (e)
Notes for TABLE 10-2:
b Requires the Sun GigaSwift Ethernet driver for x86 Solaris 9 1.0, available at http://www.sun.com/software/download/products/40f7115e.html
c Do not install X4422A in both V40z PCI slots 2 and 3 (see CR 6196936)
The SBus and cPCI network interfaces that can be used to set up the cluster
interconnect are listed in Table 10-3.
TABLE 10-3 Cluster Interconnects: SBus and cPCI Network Interfaces for SPARC Servers (network interface: X1059 Fast-Ethernet)
TABLE 10-4 Cluster Interconnects: PCI-E ExpressModule Network Interfaces for SPARC Servers
TABLE 10-5 Cluster Interconnects: PCI-E ExpressModule Network Interfaces for x64 Servers (network interface ExpressModules include the SG-XPCIE2FCGBE-E-Z Dual 4Gb FC Dual GbE ExpressModule)
TABLE 10-6 Cluster Interconnect: Network Express Module (NEM) Network Interfaces for SPARC Servers
TABLE 10-7 Cluster Interconnect: Network Express Module (NEM) Network Interfaces for x64 Servers (network interface NEMs include the X4212A SB 6000 14-Port Multi-Fabric NEM)
TABLE 10-8 Cluster Interconnect: XAUI Network Interfaces for SPARC Servers (network interface card: SESX7XA1Z)
The cables and switches supported with each type of cluster interconnect are listed
below.
Public Network
Clients connect to the cluster nodes through public network interfaces. It is required
that all nodes in the cluster be independently connected on the same IP subnets.
Sun Cluster 3.0 uses NAFO to manage public network interfaces, while later Sun
Cluster 3 releases use IPMP.
Note – The Sun X1018 and X1059 cards do not support IPMP, thus, they are not
supported as a public network interface with Sun Cluster 3 releases after 3.0.
Public network PCI interfaces supported with Sun Cluster 3 for SPARC servers are
listed in Table 10-11.
TABLE 10-11 Public Network: PCI Network Interfaces for SPARC Servers
Table 10-11 lists, for each supported SPARC cluster node (Sun Netra, Sun Enterprise, Sun Fire, and Sun SPARC Enterprise models, as well as the External I/O Expansion Unit), the public network interfaces supported on that server, drawn from the following set:
■ Onboard Ethernet/Gigabit ports
■ X1027 PCI-E Dual 10 GigE Fiber Low Profile (b)
■ X1033 Fast-Ethernet PCI
■ X1141 Gigabit Ethernet PCI
■ X1150/X3150 Gigabit Ethernet PCI
■ X1151/X3151 Gigabit Ethernet PCI
■ X1157 Sun ATM 155/MMF 5.0 PCI
■ X1159 Sun ATM 622/MMF 5.0 PCI
■ X2222A Combo Dual FastEthernet-Dual SCSI PCI
■ X4150A/X4151A Gigabit Ethernet PCI
■ X4150A-2/X4151A-2 Gigabit Ethernet PCI
■ X4422A/X4422A-2 Combo Dual Gigabit Ethernet-Dual SCSI PCI
■ X4444A Quad-Gigabit Ethernet PCI (c)
■ X4445A Quad-Gigabit Ethernet PCI (c)
■ X5544A/X5544A-4 10 Gigabit Ethernet PCI
■ X7280A-2 Gigabit Ethernet UTP PCI-E (d)
■ X7281A-2 Gigabit Ethernet MMF PCI-E (d)
■ X7286 Sun PCI-X Single GigE MMF Low Profile
Notes for TABLE 10-11:
a Base and Extended Fabrics, and Sun Netra CP3200 ARTM-FC (XCP32X0-RTM-FC-Z)
c Includes support for the Sun LW8-QFE card on the SF1280, Netra 1280 and E2900
d The network interface is not supported with Solaris 9, as Solaris 9 does not support PCIe
Public network PCI interfaces supported with Sun Cluster 3 for x64 servers are listed
in Table 10-12.
TABLE 10-12 Public Network: PCI Network Interfaces for x64 Servers
Public network SBus and cPCI interfaces supported with Sun Cluster 3 are listed in
Table 10-13.
TABLE 10-13 Public Network: SBus and cPCI Network Interfaces for SPARC Servers
TABLE 10-14 Public Network: PCI-E ExpressModule Network Interfaces for SPARC Servers
TABLE 10-15 Public Network: PCI-E ExpressModule Network Interfaces for x64 Servers
TABLE 10-16 Public Network: Network Express Module (NEM) Network Interfaces for SPARC Servers (network interface NEMs include the X4212A SB 6000 14-Port Multi-Fabric NEM)
TABLE 10-17 Public Network: Network Express Module (NEM) Network Interfaces for x64 Servers
TABLE 10-18 Public Network: XAUI Network Interfaces for SPARC Servers (network interface card: SESX7XA1Z)
IPMP Support
IPMP, Sun's Network Multipathing implementation for the Solaris Operating
System, is easy to use, and enables a server to have multiple network ports
connected to the same subnet. Solaris IPMP software provides resilience from
network adapter failure by detecting the failure or repair of a network adapter and
switching the network address to and from the alternative adapter. Moreover, when
more than one network adapter is functional, Solaris IPMP increases data
throughput by spreading outbound packets across adapters.
Solaris IPMP provides a solution for most failover scenarios, while requiring
minimal system administrator intervention. With Solaris IPMP, there is no
degradation in system or network performance when IPMP functions are not
invoked, and failover functions are accomplished in a short time frame. Public
Network Management (PNM) with Network Adapter Fail Over (NAFO), supported
in Sun Cluster 3.0, is officially end of life. Starting with Sun Cluster 3.1, Solaris IPMP
is the replacement technology for ensuring public network availability on SunPlex
systems.
■ It is recommended that redundant network adapters be configured for every
public network interface.
■ The Sun X1018 and X1059 cards do not support IPMP and thus are not supported
as public network interfaces with Sun Cluster 3.1.
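As an illustration of the recommended redundant-adapter setup, a probe-based IPMP group can be defined through the /etc/hostname.<interface> files on each node. This is a minimal sketch only; the interface names (bge0/bge1), hostnames, and group name are hypothetical and must match your own addressing plan.
# /etc/hostname.bge0 (first adapter: data address plus a test address)
node1 netmask + broadcast + group sc_ipmp0 up \
addif node1-test1 deprecated -failover netmask + broadcast + up
# /etc/hostname.bge1 (second adapter: standby with its own test address)
node1-test2 netmask + broadcast + group sc_ipmp0 deprecated -failover standby up
After a reboot (or the equivalent ifconfig commands), in.mpathd monitors both adapters and moves the data address between them on failure and repair.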
There are two options for implementing link aggregation with Sun Cluster:
■ Sun Trunking 1.3.
■ The link aggregation software included with Solaris 10 1/06 (update 1) and later.
See dladm(1M).
The Ethernet NIC and the Solaris release dictate which option can be used.
Sun Cluster supports Sun Trunking 1.3 with Solaris 8, 9 and 10.
Solaris link aggregation is supported with Solaris 10 1/06 and later; Solaris 10 1/06
is the first Solaris release that provides this feature.
The Ethernet NIC must be supported by the server. Refer to the Public Network
support tables earlier in this chapter to determine Sun Cluster support.
Then consult the Solaris link aggregation and Sun Trunking 1.3 hardware support
information for configuration requirements.
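As a minimal sketch of the Solaris 10 1/06 link aggregation option (the device names, aggregation key, and IP address below are example values only):
# Create aggregation key 1 over two GLDv3 Ethernet devices, then plumb it
dladm create-aggr -d bge0 -d bge1 1
ifconfig aggr1 plumb
ifconfig aggr1 192.168.10.21 netmask 255.255.255.0 up
dladm show-aggr      # verify the aggregation and its member ports
With Sun Trunking 1.3, the equivalent configuration is done with the Sun Trunking utilities instead.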
Global Networking
Sun Cluster 3 provides global networking between the clients and the cluster nodes
through the use of following features:
■ Global Interface (GIF): A global interface is a single network interface that
receives incoming requests from all the clients. The responses are sent out directly
by the individual nodes processing the requests. If the node hosting the global
interface fails, the interface is failed over to a backup node.
■ Cluster Interconnect: The cluster interconnect is used for request/data transfer
between the cluster nodes, thereby providing global connectivity to all the cluster
nodes from any one node.
■ It is strongly recommended that redundant network adapters be configured in
the GIF's NAFO/IPMP group.
Software Configuration
Typically, each node in a Sun Cluster will have the Solaris Operating Environment,
Sun Cluster 3, volume management software, and applications along with their
agents and fault monitors running on it.
Solaris Releases
All nodes in the cluster are required to run the same version (including the update
release) of the operating system.
The Solaris releases supported with Sun Cluster 3 are listed below.
TABLE 11-1 Solaris Releases for Sun Cluster 3.1 SPARC (releases include Solaris 9 (FCS) and Solaris 10 (FCS))
TABLE 11-2 Solaris Releases for Sun Cluster 3.2 SPARC
TABLE 11-3 Solaris Releases for Sun Cluster 3.2 x64
Application Services
An application service is an application along with an agent which makes the
application highly available and / or scalable in Sun Cluster. Application services
can be of two types - failover and scalable. Sun Microsystems has developed agents
and fault monitors for a core set of applications. These application services are
discussed in the following sections. Sun Microsystems has also made available an
application service development toolkit for developing custom agents and fault
monitors for other applications. Unless otherwise noted, all application services are
supported with all hardware components (servers, storage, network interfaces, etc.)
stated as supported in previous chapters. Unless otherwise noted, all services are
32-bit application services. For more information on application services, please see
the Sun Cluster Data Services Planning and Administration Guide at
http://docs.sun.com/
All the Sun Cluster 3.1 agents are supported in the Sun Cluster 3.2 release. If you
upgrade Sun Cluster 3.1 software to Sun Cluster 3.2 software, we recommend that
you upgrade all agents to Sun Cluster 3.2 to utilize any new features and bug fixes
in the agent software. If you upgrade the application software you must apply the
latest agent patches to make the new version of the application highly available on
Sun Cluster. Please check the application support matrix to make sure the
application version is supported with Sun Cluster.
All Sun Cluster 3.2 u1 agents are supported on the SC 3.2 core. After installing the
SC 3.2 core platform, please download the latest agent packages (e.g. SC 3.2 u1) or
apply the latest agent patches. Agents are continuously enhanced to support the
latest application versions; the latest agent updates or agent patches contain fixes to
support the newer application versions.
Failover Services
A failover service has only one instance of the application running in the cluster at a
time. In case of application failure, an attempt is made to restart the application on
the same node. If unsuccessful, the application is restarted on one of the surviving
nodes, depending on the service configuration. This process is called failover.
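For orientation, a failover service is normally placed in a failover resource group together with a logical hostname and its storage. The sketch below uses the Sun Cluster 3.2 object-oriented CLI; all group, resource, hostname, and mount-point names are hypothetical examples, and Sun Cluster 3.1 uses the equivalent scrgadm/scswitch commands.
# Sketch only; names and the mount point are example values
clresourcetype register SUNW.HAStoragePlus
clresourcegroup create app-rg
clreslogicalhostname create -g app-rg app-lh        # floating IP address for clients
clresource create -g app-rg -t SUNW.HAStoragePlus \
  -p FilesystemMountPoints=/global/appdata app-hasp-rs
clresourcegroup online -M app-rg                    # bring the group online in managed state
The application resource itself (for example an HA-NFS or HA-Oracle resource from the tables below) is then created in the same group, typically with a dependency on the storage resource.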
The table below lists the failover services supported with Sun Cluster 3.1
TABLE 11-4 Failover Services for Sun Cluster 3.1 SPARC
SAP 4.0, 4.5, 4.6, 6.10, 3.1 8, 9, 10 • The intermediate releases of SAP
6.20, 6.30, 6.40, 7.0, application, for example 4.6C, 4.6D, etc.,
NW 2004 (SR1, are all supported
SR2, SR3) • The Sun Cluster Resource Types (RTs) for
making the traditional SAP components
(Central Instance and App Server
Instances) Highly Available are:
- SUNW.sap_ci_v2
- SUNW.sap_as_v2
• The agent part number for making the
traditional SAP components (CI and AS)
Highly Available is CLAIS-XXG-9999
• The RTs for making WebAS, SCS, Enq
and Replica Highly Available are:
- SUNW.sapwebas,
- SUNW.sapscs
- SUNW.sapenq
- SUNW.saprepl
• The Agent part number for making
WebAS, SCS, Enq and Replica Highly
Available is CLAIS-XAI-9999
• The RTs for making SAP J2EE Highly
Available are:
- SUNW.sapscs
- SUNW.sapenq
- SUNW.saprepl
- SUNW.sap_j2ee
• The Agent part numbers for making SAP
J2EE Highly Available are:
- CLAIS-XAI-9999
- CLAIS-XAE-9999
• SAP J2EE agent not supported on S10
• In Sun Cluster 3.2 the SAP J2EE
functionality is available in the
SUNW.sapwebas RT. There is no separate
GDS resource needed to make SAP J2EE
Highly Available. One single part
number CLAIS-XAI-9999 will make
ABAP, J2EE or ABAP+J2EE Highly
Available. Refer to SC 3.2 section of this
config guide for details.
TABLE 11-4 Failover Services for Sun Cluster 3.1 SPARC (Continued)
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.1 9, 10 • Supported in failover zones (using the
6.0 container agent)
BEA WebLogic Server 7.0, 8.1 3.1 9, 10
DHCP N/A 3.1 9, 10 • Requires patch 117639-03 or later
TABLE 11-5 Failover Services for Sun Cluster 3.1 x64 (Continued)
DNS 3.1 9, 10
HADB (JES) All versions 3.1 9, 10
supported by JES
Application Server
EE are supported
JES Application Server All versions till JES 3.1 9, 10
previously known as 5 are supported
SunOne Application (up to 8.1EE)
Server
JES Web Proxy Server All versions till JES 3.1 9, 10
previously known as 5 are supported
SunOne Proxy Server
JES Web Server All versions till JES 3.1 9, 10
previously known as 5 are supported
SunOne Web Server (up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release)
MySQL 3.23.54a-4.0.23, 3.1 9, 10 • Supported in failover zones (using the
4.1.6-4.1.22, 5.0.15- container agent)
5.0.45, 5.0.85
N1 Grid Engine 5.3 3.1 9, 10 • Requires patch 118689-02 or later
6.0, 6.1
N1 Grid Service Provi- 4.1, 5.0, 5.0u1, 5.1, 3.1 9, 10 • Supported in failover zones (using the
sioning System 5.2, 5.2.1 - 5.2.4 container agent)
NFS V3 3.1 9, 10
Oracle Server 10G R1 32 bit 3.1 10 • Both Standard and Enterprise Editions
10G R2 32 & 64 bit are supported with Sun Cluster 3.1U4
PostgreSQL 7.3.x, 8.0.x, 8.1.x, 3.1 9, 10 • Supported in failover zones (using the
8.2.x, 8.3.x container agent)
Samba 2.2.2 to 3.0.27 3.1 9, 10 • Requires patch 116726-05 or later
TABLE 11-5 Failover Services for Sun Cluster 3.1 x64 (Continued)
The tables below list the failover services supported with Sun Cluster 3.2:
Agfa IMPAX 4.5 - 5.x, 6.3 3.2 9, 10 • Agent not supported in non-global zones
• Solaris 10 version support is for Agfa
IMPAX 6.3 only
Apache Proxy Server All 2.2.x versions 3.2 9, 10 • Agent supported in global zones and
and all versions of zone nodes (SC 3.2 support of zones)
Apache shipped • Agent not supported in failover zones
with Solaris. • Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.2 9, 10 • Agent supported in global zones, failover
6.0 zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Apache Web Server All 2.2.x versions 3.2 9, 10 • Agent supported in global zones, zone
and all versions of nodes (SC 3.2 support of zones), and
Apache shipped Zone Clusters (a.k.a. cluster brand zones)
with Solaris. • Agent not supported in failover zones
• Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported
TABLE 11-6 Failover Services for Sun Cluster 3.2 SPARC (Continued)
BEA WebLogic Server 7.0, 8.1, 9.0, 9.2, 3.2 9, 10 • Agent supported in global zones and
10.0, 10.2 zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Please see the Release Notes that
documents an issue discovered during
the qualification of WLS in non-global
zones
• Apply the latest patch or upgrade the
agent to SC 3.2 u1 or later
DHCP N/A 3.2 9, 10 • Agent not supported in non-global zones
DNS 3.2 9, 10 • Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
HADB (JES) All versions 3.2 9, 10 • Agent not supported in non-global zones
supported by JES
Application Server
EE are supported
(4.4, 4.5)
IBM WebSphere MQ 5.3, 6.0, 7.0 3.2 9, 10 • Supported in global zones, failover zones
(using the container agent) and zone
nodes (SC 3.2 support of zones)
Informix V9.4, 10, 11 and 3.2 10 • Agent available for download at
11.5 http://www.sun.com/download under
Systems Administration category and
Clustering sub-category
JES Application Server All versions till JES 3.2 9, 10 • Agent supported in global zones and
previously known as 5 U1, 9.1, 9.1 UR2, zone nodes (SC 3.2 support of zones)
SunOne Application GlassFish V2 UR2 • Agent not supported in failover zones
Server
JES Directory Server 5.2.x. This agent is 3.2 • Please contact the Directory Server
owned and product group: Ludovic Poitou, Regis
supported by the Marco
Directory Server • For more info:
product group http://blogs.sfbay.sun.com/Ludo/date/
20061106
JES Messaging Server 6.3. This agent is 3.2 • Please contact the Messaging Server
previously known as owned and product group: Durga Tirunagari
iPlanet Messaging supported by the • For more info, mail to
Server (ims) Messaging Server messaging@sun.com
product group
TABLE 11-6 Failover Services for Sun Cluster 3.2 SPARC (Continued)
JES Web Proxy Server All versions till JES 3.2 9, 10 • Agent supported in global zones and
previously known as 5 are supported zone nodes (SC 3.2 support of zones)
SunOne Proxy Server (up to 4.0) • Agent not supported in failover zones
JES Web Server All versions up to 3.2 9, 10 • Agent supported in global zones and
previously known as and including JES zone nodes (SC 3.2 support of zones)
SunOne Web Server 5 U1 are • Agent not supported in failover zones
supported. All
releases up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release
Kerberos Version shipped 3.2 10 • Agent supported in global zones and
with Solaris zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
MySQL 3.23.54a-4.0.23 3.2 9, 10 • Agent supported in global zones, failover
4.1.6-4.1.22 zones (using the container agent), zone
5.0.15-5.0.85 nodes (SC 3.2 support of zones) and
Zone Clusters (a.k.a. cluster brand zones)
5.1.x
• MySQL versions 5.0.x and 5.1.x require
patches 126031-04 (S9), 126032-04 (S10)
N1 Grid Engine 6.0, 6.1 3.2 9, 10 • Agent not supported in non-global zones
N1 Grid Service 4.1, 5.0, 5.0u1, 5.1, 3.2 9, 10 • Agent supported in global zones, failover
Provisioning System 5.2, 5.2.1 - 5.2.4 zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Netbackup This agent is 3.2 • Please contact Veritas/Symantec for
owned and details
supported by
Veritas/Symantec
NFS V3 3.2 9, 10 • Agent not supported in non-global zones
V4 10
Oracle Application 9.0.2 - 9.0.3 (10g) 3.2 9 • Note 1: 9.0.2 - 9.0.3 = 9iAS
Server • Note 2: 9.0.4 = 10g AS
9.0.4 - 10.1.3.1 9, 10
• Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Apply the latest agent patch
TABLE 11-6 Failover Services for Sun Cluster 3.2 SPARC (Continued)
Oracle E-Business 11.5.8, 11.5.9, 3.2 9, 10 • Agent supported in global zones and
Suite 11.5.10 -11.5.10cu2 zone nodes (SC 3.2 support of zones)
12.0 • Agent not supported in failover zones
• Apply the latest agent patch
Oracle Server 8.1.6 32 & 64 bit 3.2 9 • Note that Oracle 8.1.x have been
8.1.7 32 & 64 bit desupported by Oracle. However, when
9i 32 & 64 bit a customer has continuing support for
Oracle 8.1.x from Oracle, Sun will
continue supporting the Sun Cluster HA
Oracle agent with it.
9i R2 32 & 64 bit 9, 10 • Both Standard and Enterprise Editions
10G R1 & R2 64 bit are supported
11g • Supported in non-global zones
TABLE 11-6 Failover Services for Sun Cluster 3.2 SPARC (Continued)
PostgreSQL 7.3.x, 8.0.x, 8.1.x, 3.2 9, 10 • Agent supported in global zones, failover
8.2.x, 8.3.x zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
• PostgreSQL agent in SC 3.2 u2 supports
Write Ahead Log (WAL) shipping
functionality. Get this functionality in
one of the following ways:
- Install the SC 3.2 u2 agent, or
- Upgrade to the SC 3.2 u2 agent, or
- Apply the latest agent patch
• Feature info: This project enhances the
PostgreSQL agent to provide the ability
to support log shipping functionality as a
replacement for shared storage thus
eliminating the need for shared storage
in a cluster when using PostgreSQL
Databases. This feature provides support
for PostgreSQL database replication
between two different clusters or
between two different PostgreSQL
failover resources within one cluster.
Samba 2.2.2 to 3.0.27 3.2 9, 10 • Agent supported in global zones, failover
zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
TABLE 11-6 Failover Services for Sun Cluster 3.2 SPARC (Continued)
Siebel 7.0, 7.5, 7.7 3.2 9 • Agent not supported in non-global zones
7.7, 7.8 • Agent for Siebel 8.0 requires SC 3.2 u1 or
patches to the SC 3.2 Siebel agent:
7.8.2 9, 10 - 126064-02 (Solaris 9)
8.0 - 126065-02 (Solaris 10)
Solaris Containers Brand type: native, 3.2 10 • This agent now supports lx, solaris8 and
(a.k.a. Zones) lx, solaris8 and solaris9 brand containers in addition to
solaris9 supporting native Solaris 10 containers
• Container agent requires at least patch
126020-01 or a SC 3.2 u1 agent to support
lx and solaris8 brand containers
• Container agent requires patch 126020-03
to support solaris9 brand container
Sun Java Server All versions till JES 3.2 9, 10 • Agent supported in global zones and
Message Queue 5 are supported whole root zones (SC support for non-
previously known as (3.5, 3.6, 4.0, 4.1, global zones)
JES MQ Server and 4.2, 4.3) • Agent not supported in sparse root zones
SunOne MQ Server • Agent not supported in failover zones
Sun StorEdge 3.2.1 3.2 9 • Requires Solaris 9u9 and patches 116466-
Availability Suite 09, 116467-09 and 116468-13
• HA-ZFS not supported with AVS
4.0 10 • Requires Solaris 10u3 and patch 123246-
02
• HA-ZFS not supported with AVS.
SWIFTAlliance Access 5.9, 6.0, 6.2 3.2 9, 10 • SC 3.2 SWIFTAlliance Access agent
patch 126085-01 or later required for
Solaris 9
• Solaris 10 agents are available for
download from
http://www.sun.com/download
• SWIFT Alliance Access 6.0 is supported
on all S10 versions supported by Swift
and by Sun Cluster. 6.0 is not supported
on Solaris 9.
• SWIFT Alliance Access 6.2 is supported
on Solaris 10 8/07 or later on SPARC
platform with patch 126086-01
TABLE 11-6 Failover Services for Sun Cluster 3.2 SPARC (Continued)
SWIFTAlliance 5.0, 6.0, 6.1 3.2 9, 10 • S10 agents are available for download
Gateway from http://www.sun.com/download
• SWIFT Alliance Gateway 6.0 and 6.1 are
supported on all S10 versions supported
by Swift and Sun Cluster. 6.0 and 6.1 are
not supported on Solaris 9
Sybase ASE 12.0 - 12.5.1, 12.5.2, 3.2 9 • Supported in HA mode only - both
12.5.3 asymmetric and symmetric. The
Companion Server feature is not
12.5.2, 12.5.3, 15.0, 10
supported.
15.0.1, 15.0.2
Note - There are two Sybase agents. One
sold by Sun, another sold by Sybase. This
table refers to the agent sold by Sun.
• Agent supported in global zones and
zone nodes (SC support of zones)
• Agent not supported in failover zones
WebSphere Message 5.0, 6.0 3.2 9, 10 • Agent supported in global zones and
Broker zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
Apache Proxy Server All 2.2.x versions 3.2 10 • Agent supported in global zones and
and all versions of zone nodes (SC 3.2 support of zones)
Apache shipped • Agent not supported in failover zones
with Solaris. • Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.2 10 • Agent supported in global zones, failover
6.0 zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Apache Web Server All 2.2.x versions 3.2 10 • Agent supported in global zones, zone
and all versions of nodes (SC 3.2 support of zones), and
Apache shipped Zone Clusters (a.k.a. cluster brand zones)
with Solaris. • Agent not supported in failover zones
• Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
TABLE 11-7 Failover Services for Sun Cluster 3.2 x64 (Continued)
BEA WebLogic Server 7.0, 8.1, 9.0, 9.2, 3.2 10 • Agent supported in global zones and
10.0, 10.2 zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Please see the Release Notes that
documents an issue discovered during
the qualification of WLS in non-global
zones
• Apply the latest agent patch or upgrade
the agent to SC 3.2 u1
DHCP N/A 3.2 10 • Agent not supported in non-global zones
DNS 3.2 10 • Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
HADB (JES) All versions 3.2 10 • Agent not supported in zones
supported by JES
Application Server
EE are supported
(4.4, 4.5)
IBM WebSphere MQ 6.0, 7.0 3.2 10 • Agent supported in global zones, failover
zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Informix V9.4, 10, 11, 11.5 3.2 10 • Agent available for download from
http://www.sun.com/download under
Systems Administration category and
Clustering sub-category
JES Application Server All versions till JES 3.2 10 • Agent supported in global zones and
previously known as 5 U1, 9.1, 9.1 UR2, zone nodes (SC 3.2 support of zones)
SunOne Application GlassFish V2 UR2 • Agent not supported in failover zones
Server
JES Web Proxy Server All versions till JES 3.2 10 • Agent not supported in non-global zones
previously known as 5 are supported
SunOne Proxy Server (up to 4.0)
JES Web Server All versions up to 3.2 10 • Agent supported in global zones and
previously known as and including JES zone nodes (SC 3.2 support of zones)
SunOne Web Server 5 U1 are • Agent not supported in failover zones
supported. All
releases up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release.
TABLE 11-7 Failover Services for Sun Cluster 3.2 x64 (Continued)
PostgreSQL 7.3.x, 8.0.x, 8.1.x, 3.2 10 • Agent supported in global zones, failover
8.2.x, 8.3.x zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
• PostgreSQL agent in SC 3.2 u2 supports
Write Ahead Log (WAL) shipping
functionality. Get this functionality in
one of the following ways:
- Install the SC 3.2 u2 agent or
- Upgrade to the SC 3.2 u2 agent or
- Apply the latest agent patch
• Feature info: This project enhances the
PostgreSQL agent to provide the ability
to support log shipping functionality as a
replacement for shared storage thus
eliminating the need for shared storage
in a cluster when using PostgreSQL
Databases. This feature provides support
for PostgreSQL database replication
between two different clusters or
between two different PostgreSQL
failover resources within one cluster.
SAP NetWeaver 2004s 3.2 10 • Agent supported in global zones and
(SR1, SR2, SR3), zone nodes (SC 3.2 support of zones)
Web Application • Agent not supported in failover zones
Server 7.0, SAP 7.1 • Apply the latest agent patch
• NetWeaver 2004s is based on SAP Kernel
7.00
• Refer to the following document for
details on SAP agents:
http://galileo.sfbay/agent_support_mat
rix/SAP-Config-Guide/
• See SPARC Table 11-6 for details
• Apply patch 126063-07 to make SAP 7.1
Highly Available on SC 3.2 or use the
SAP WebAS agent (SUNW.sapenq,
SUNW.saprepl, SUNW.sapscs,
SUNW.sapwebas) from SC 3.2 u2
SAP LiveCache 7.6 3.2 10 • Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Requires SAP Livecache version 7.6.01.09
for S10 x86
TABLE 11-7 Failover Services for Sun Cluster 3.2 x64 (Continued)
SAP MaxDB 7.6, 7.7 3.2 10 • Agent supported in global zones and
zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Requires SAP MaxDB version 7.6.01.09
for S10 x86
Samba 2.2.2 to 3.0.27 3.2 10 • Agent supported in global zones, failover
zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Solaris Containers Brand type: native, 3.2 10 • This agent now supports lx, solaris8 and
(a.k.a. Zones) lx, solaris8 and solaris9 brand containers in addition to
solaris9 supporting native Solaris 10 containers
• Container agent requires at least patch
126021-01 or the SC 3.2 u1 agent to
support lx and solaris8 brand containers
• Container agent requires at least patch
126021-03 to support solaris9 brand
containers
Sun Java Server All versions till JES 3.2 10 • Agent supported in global zones, whole
Message Queue 5 are supported root non-global zone nodes (SC 3.2
previously known as (3.5, 3.6, 4.0, 4.1, support of zones)
JES MQ Server and 4.2, 4.3) • Agent not supported in sparse root non-
SunOne MQ Server global zones
• Agent not supported in Failover Zones
Sun StorEdge 4.0 3.2 10 • Requires at least Solaris 10u3 and patch
Availability Suite 123247-02
• HA-ZFS not supported with AVS
Sybase ASE 15.0, 15.0.1 and 3.2 10 • Agent supported in global zones and
15.0.2 zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Agent available for download from
http://www.sun.com/download
WebSphere Message 6.0 3.2 10 • Agent supported in global zones and
Broker zone nodes (SC 3.2 support of zones)
• Agent not supported in failover zones
• Apply the latest agent patch
Scalable Services
A scalable service has one or more instances of applications running in the cluster
simultaneously. A global interface provides the view of a single logical service to the
outside world. The application requests are distributed to various running instances,
based on the load-balancing policy. In case a node on which an application instance
is running fails, an attempt is made to restart the application on the same node. If
unsuccessful, the application is restarted on a surviving node or the load is
redistributed among the surviving nodes, depending on the service configuration. In
case the node hosting the global interface (GIF) fails, the global interface is failed
over to a surviving node, depending on the service configuration.
This section does not include information about Oracle Real Application Cluster
(RAC). Please refer to “Oracle Real Application Cluster (OPS/RAC)” on page 245.
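To ground the terminology above, a scalable service uses two resource groups: a failover group that holds the shared address (the GIF) and a scalable group that holds the application instances. The sketch below uses the Sun Cluster 3.2 CLI with Apache as the example; all names, the port, and Bin_dir are hypothetical values.
# Sketch only; names, port, and paths are example values
clresourcetype register SUNW.apache
clresourcegroup create sa-rg                         # failover group for the shared address
clressharedaddress create -g sa-rg web-gif           # the global interface (GIF)
clresourcegroup create -p Maximum_primaries=4 \
  -p Desired_primaries=4 web-rg                      # scalable group
clresource create -g web-rg -t SUNW.apache -p Scalable=TRUE \
  -p Port_list=80/tcp -p Bin_dir=/usr/apache2/bin \
  -p Network_resources_used=web-gif apache-rs
clresourcegroup online -M sa-rg web-rg
Client requests arrive at the shared address and are distributed to the running instances according to the configured load-balancing policy.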
The following tables contain the scalable services supported with Sun Cluster 3.1
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.1 8, 9, 10 • Supported in failover zones (using the
6.0 container agent)
Apache Web Server All versions 3.1 8, 9, 10
shipped with
Solaris
JES Web Server All versions till JES 3.1 8, 9, 10
previously known as 5 are supported
SunOne Web Server (up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release)
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.1 9, 10 • Supported in failover zones (using the
6.0 container agent)
Apache Web Server All versions 3.1 9, 10
shipped with
Solaris
JES Web Server All versions till JES 3.1 9, 10
previously known as 5 are supported
SunOne Web Server (up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release)
The following tables contain the scalable services supported with Sun Cluster 3.2
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.2 9, 10 • Agent supported in global zones, failover
6.0 zones (using the container agent), and
zone nodes (SC 3.2 support of zones)
Apache Web Server All 2.2.x versions 3.2 9, 10 • Agent supported in global zones, zone
and all versions of nodes (SC 3.2 support of zones), and
Apache shipped Zone Clusters (a.k.a. cluster brand zones)
with Solaris. • Agent not supported in failover zones
• Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
JES Web Server All versions up to 3.2 9, 10 • Agent supported in global zones and
previously known as and including JES zone nodes (SC 3.2 support of zones)
SunOne Web Server 5 U1 are • Agent not supported in failover zones
supported. All
releases up to and
including 7.0, 7.0
U1, 7.0 U2 and all
future updates of
7.0 release.
Apache Tomcat 3.3, 4.0, 4.1, 5.0, 5.5, 3.2 10 • Agent supported in global zones, failover
6.0 zones (using the container agent) and
zone nodes (SC 3.2 support of zones)
Apache Web Server All 2.2.x versions 3.2 10 • Agent supported in global zones, zone
and all versions of nodes (SC 3.2 support of zones), and
Apache shipped Zone Clusters (a.k.a. cluster brand zones)
with Solaris. • Agent not supported in failover zones.
• Important note: For Apache versions
2.2.x, the agent supports only standard
HTTP server. Apache-SSL and mod_ssl
are not supported.
JES Web Server All versions up to 3.2 10 • Agent supported only in zone nodes (SC
previously known as and including JES 3.2 support of zones)
SunOne Web Server 5 U1 are
supported.
All releases up to
and including 7.0,
7.0 U1, 7.0 U2 and
all future updates
of 7.0 release
RSM is supported with RAC and Sun Cluster 3. This functionality requires Sun
Cluster 3.0 5/02, Oracle 9i RAC 9.2.0.3, and Solaris 8 or 9. This support is limited to
SCI-PCI cards and switches. This support applies to all servers that support SCI-PCI.
TABLE 11-12 Oracle RAC Support with Sun Cluster 3.1 for SPARC
(Columns: Version, Maximum Nodes (b), Solaris, H/W RAID, Veritas CVM (e), NAS, Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, InfiniBand (m))
8.1.7 4 8, 9 • • • •k • • •
32bit/
64bit/
OPFS
32bita
9i 4 8, 9 • • • •k • • •
RAC/
RACG
R1
32/64
bit
9i 8c 8, 9, • • • •h •j • • • • RAC •
RAC/ 10d 9.2.0.3
RACG and
R2 32/ above
64 bit
10gR1 8 8, 9, • • • • • • • • • • •
RAC 10
10.1.0.
3 and
above
10gR2 8 8, 9, • •e • • • • • • • •
RAC 10
11g 8 9, 10 • • • • • • • • • •
RAC
a Supported in active-passive mode only
b Please refer to the respective storage section for the number of nodes supported
c Requires Oracle 9.2.0.3 and above plus patch 2854962. Please refer to the respective storage section for
the number of nodes supported
d Requires Sun Cluster 3.1 8/05
e Requires Veritas CVM 3.2 or later
TABLE 11-13 Oracle RAC Support with Sun Cluster 3.2 for SPARC
(Columns: Version, Maximum Nodes (c), Solaris, H/W RAID, Veritas CVM (g), NAS, Fast Ethernet, Gigabit Ethernet, 10GB Ethernet, InfiniBand (p))
9i RAC/ 4 9 • • • • • • •
m
RACG
R1
32/64
bit
9i RAC/ 8d 9, 10 • • • •j •l • • • RAC •
RACG 9.2.0.3
R2 32/ and
64 bit above
10gR1 8 9, 10 • • • • • • • • • •
RAC
10.1.0.3
and
above
10gR2 8 9, 10 • • • • • • • • •
RAC
10g 16e 10f 4.6.2 • • • •
RAC and
10.2.0.3a above
TABLE 11-13 Oracle RAC Support with Sun Cluster 3.2 for SPARC (Continued)
10g R2 4 10f • • • •
RAC n
10.2.0.4b
8 10f • • • • • • • • •
8 9, 10 • • • • • • • • •
11gR1 4 10f • • • •
RACb
8 10f • • • • • • • • •
n Adds support for the Sun Storage 7000 series: 1) When RAC is installed in a global zone, you can also use NFS for
Clusterware OCR and Voting disks; 2) When RAC installed in a zone cluster, you must use iSCSI LUNs as OCR
and Voting devices; 3) If you use iSCSI LUNs for Clusterware OCR and Voting disks, either in the global zone or
in a zone cluster, configure the corresponding DID devices with fencing disabled.
o Maximum of 4 nodes with PCI-SCI
p InfiniBand support starts with Solaris 10
TABLE 11-14 Oracle RAC Support with Sun Cluster 3.1 and Sun Cluster 3.2 for x64
(Columns: Version, Maximum Nodes (a), Solaris, H/W RAID, Veritas CVM (c), Shared QFS, NAS, Fast Ethernet, Gigabit Ethernet, 10GB Ethernet, InfiniBand)
10gR2 8b 10 • • • • • • • • •
RAC 64
bit
(10.2.0.1
and
above)
a Please refer to the respective storage section for the number of nodes supported
b Greater than 4 nodes requires SC 3.2 2/08 (u1) and above
c Veritas CVM not supported for x64
d Supported with Binary and log files only
e Up to four nodes are supported with SVM - larger numbers of nodes requires hardware RAID
Co-Existence Software
Solaris Resource Manager 1.2 and 1.3 is certified for co-existence with Sun Cluster
3.0 7/01 (or later) software.
Data Configuration
The application data can be configured on the shared storage in Sun Cluster in one
of the following structures:
■ “Raw Devices” on page 250
■ “Raw Volumes / Meta Devices” on page 250
■ “File System” on page 253
Raw Devices
Since every shared storage disk is a global device, all the disk partitions, and any
raw data laid out on them, are globally accessible. No other software apart from
Solaris Operating Environment and Sun Cluster 3 is required to configure data on
raw devices.
Raw Volumes / Meta Devices
Veritas Cluster Volume Manager (CVM) and Solaris Volume Manager for Sun
Cluster (Oban) are supported only with Oracle RAC/OPS clusters.
Either the VxVM volume manager or Solstice DiskSuite (SDS) can be used for shared
storage within a cluster configuration. Using VxVM for shared storage and SDS for
mirroring the root disk is also a supported configuration.
Volume Manager          Platform/Version     Solaris                          Notes
Solaris Volume Manager SPARC SVM support tracks Solaris Please see the respective SC
(SVM) and x64 support. Please see Table 11-2, Release Notes for patch and other
“Solaris Releases for Sun requirements.
Cluster 3.2 SPARC,” on
page 221 and Table 11-3,
“Solaris Releases for Sun
Cluster 3.2 x64,” on page 221
for details.
Solaris Volume Manager SPARC SVM for SC support tracks Please see the respective SC
for SC (Oban) and x64 Solaris support. Please see Release Notes for patch and other
Table 11-2, “Solaris Releases requirements.
for Sun Cluster 3.2 SPARC,”
on page 221 and Table 11-3,
“Solaris Releases for Sun
Cluster 3.2 x64,” on page 221
for details.
Veritas Volume Manager SPARC: 4.1 - S9u8 plus required patches 4.1_mp2 patch 117080-07
(VxVM) including CVM (SC 3.2) as listed with SunSolve
support - S9u9
SPARC: 5.0 5.0_mp1 patch 122058-09 and
(SC 3.2) - S10u3 plus required patches 124361-05,
as listed with SunSolve
- S10u4 plus required patches
as listed with SunSolve
SPARC: 5.0
MP3 RP1
(SC 3.2u2)
Veritas Volume Manager x64: 4.1 - S10u3 plus required patches 4.1_mp1 patch 120586-04
(VxVM) only (SC 3.2) as listed with SunSolve
x64: 5.0 - S10u4 plus required patches patch 128060-02
(SC 3.2u1) as listed with SunSolve
x64: 5.0
MP3 RP1
(SC 3.2u2)
File System
If the application data is laid out on a file system, the cluster file system makes the
file system data available to all the nodes in the cluster. Sun Cluster 3 supports a
cluster file system on top of UFS or VxFS laid out on a Veritas volume or an SDS
metadevice. File system logging is required in Sun Cluster 3.
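For illustration, a cluster file system is mounted globally through an identical /etc/vfstab entry on every node; the disk set, metadevice, and mount point below are hypothetical, and the logging option satisfies the file system logging requirement.
# /etc/vfstab entry (example devices and mount point), identical on all nodes
/dev/md/appds/dsk/d100  /dev/md/appds/rdsk/d100  /global/appdata  ufs  2  yes  global,logging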
TABLE 11-17 Veritas File System Support Matrix with Sun Cluster 3.1
TABLE 11-18 Veritas File System Support Matrix with Sun Cluster 3.2
SPARC 4.1 - S9u8 plus required patches Requires 119301-04 (S9) and
as listed with SunSolve 119302-04 (S10) patches
5.0 - S9u9 Requires 123201-02 (S9) and
- S10u3 plus required patches 123202-02 (S10) patches
as listed with SunSolve
- S10u4 plus required patches
as listed with SunSolve
x64 5.0 - S10 plus required patches as - Starting with SC3.2u1
listed with SunSolve - Requires 125847-01 patch
TABLE 11-19 Sun StorEdge QFS (SPARC) Support Matrix with Sun Cluster 3.1
4.1 (HA) QFS 8 update 5 3.1 u1 SVM and Veritas VxVM N/A Yes
Standalone 9 update 3 a, b, c 3.5 and above
4.2 (HA) QFS 8 update 7 3.1 u2 SVM and Veritas VxVM N/A Yes
Standalone 9 update 3 and later a, b, c 3.5 and above
4.3 (HA) QFS 8 update 7 3.1 u3 SVM and Veritas VxVM N/A Yes
Standalone 9 update 3 and later a, b, c 4.0 and above
Solaris 10
4.3 (Shared) 8 update 7 3.1 u3 No VM Support N/A N/A
QFS 9 update 3 and later c, d
Solaris 10
4.4 (HA) QFS 9 update 3 and later 3.1 u3 SVM and Veritas VxVM N/A Yes
Standalone Solaris 10 a, b, e 4.0 and above
4.4 (Shared) 9 update 3 and later 3.1 u3 VM/Oban (with Solaris N/A N/A
QFS Solaris 10 d, e, f 10 only, No S9 support)
4.5 (HA) QFS 9 update 3 and later 3.1 u4 SVM and Veritas VxVM N/A Yes
Standalone Solaris 10 u1 a,b,g,h 4.0 and above
4.5 (Shared) 9 update 3 and later 3.1 u4 VM/Oban (with Solaris N/A N/A
QFS d,f,g,h 10 only, No S9 support)
Solaris 10 u1
4.6 (HA) QFS 9 update 3 and later 3.1 u4 SVM and Veritas VxVM N/A Yes
Standalone Solaris 10 u3 a,b,g 4.1 and above
4.6 (Shared) 9 update 3 and later 3.1 u4 VM/Oban (with Solaris L700 k.a Refer to j
SAM-QFS Solaris 10 u3 d,f,g,i,j 10 only, NO S9 support) SL500 FCk.a
a Supports with use of HA-NFS Agent
b Supports with use of HA-Oracle Agent
c Supports Oracle 9i only
d Supports with use of RAC Agent(s)
e Supports Oracle 9i, 10gR1 only
f Support with SVM Cluster Functionality (Oban).
g Supports Oracle 9i, 10gR1, and 10gR2
h Supports for SC 3.2 w/QFS 4.5 + QFS 05 patch (Build 4.5.42)
COTC: COTC (clients outside the cluster) is currently at release 1.0. Clients outside the
cluster are used when user applications require access to data stored on cluster file
system(s). Cluster device fencing is lowered so that COTC clients can access the data
stored on attached storage that is being managed by the cluster. In this configuration
the user applications must run outside the cluster, and no other data service may be
used inside the cluster for applications accessed from outside the cluster. The
configuration requires that a logical hostname be used for Shared QFS metadata
traffic between the Shared QFS metadata server and the metadata clients that exist
outside the cluster; this requires extra setup in the Sun Cluster resource group (see
the QFS documentation for configuration examples). It is highly recommended that a
dedicated network be used for communication between the cluster nodes and the
nodes that exist outside the cluster. The storage topology must be direct FC-attached
storage and can use any HW RAID supported in this configuration guide. This is
Shared QFS with no SAM functionality. The cluster nodes provide automated
failover of the MDS. The currently qualified node configuration is 2-4 nodes inside
the cluster and up to 16 nodes outside the cluster. If your requirement differs from
the above, a Get-To-Yes must be filed for supportability. See the QFS documentation:
http://docs.sun.com/source/819-7935-10/chapter6.html#94364
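For illustration only, the following is a minimal sketch of the metadata server resource
group using the Sun Cluster 3.2 command set. The resource group name, logical
hostname, resource names, and mount point are assumptions; the complete procedure,
including the additional setup required for metadata clients outside the cluster, is in
the QFS documentation referenced above.

  # Resource group that hosts the shared QFS metadata server (MDS)
  clresourcegroup create qfs-mds-rg
  # Logical hostname used for metadata traffic to clients outside the cluster
  clreslogicalhostname create -g qfs-mds-rg -h qfs-mds-lh qfs-mds-lh-rs
  # Register the QFS resource type and create the MDS resource
  clresourcetype register SUNW.qfs
  clresource create -g qfs-mds-rg -t SUNW.qfs \
      -p QFSFileSystem=/global/sharedqfs1 qfs-mds-rs
  # Bring the group under cluster control and online
  clresourcegroup manage qfs-mds-rg
  clresourcegroup online qfs-mds-rg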
HA-SAM: HA-SAM is currently at release 1.0. HA-SAM provides the SAM (Storage
Archive Management) features "Archiving, Staging, Releaser, and Recycler"; each of
these must run on the current metadata server. HA-SAM automated failover is
performed with the SUNW.qfs agent. The metadata server in an HA-SAM
configuration has been qualified only with the SUNW.qfs and SUNW.hasam data
services. This configuration is supported with a maximum of 2 cluster nodes and
requires Shared QFS file system(s); in addition, one PxFS file system must be used for
the SAM catalog. Currently this configuration has only been qualified to run in an
active-passive configuration. No other data service is supported in conjunction with
this configuration. If your requirement differs from the above, a Get-To-Yes must be
filed for supportability. See the HA-SAM documentation:
http://docs.sun.com/source/819-7931-10/chap08.html#19295
TABLE 11-20 Sun StorEdge QFS (SPARC) Support Matrix with Sun Cluster 3.2
4.5 (HA) QFS Standalone | Solaris 9 update 3 and later, Solaris 10 u1 | SC 3.2 | SVM and Veritas VxVM 4.0 and above | N/A | Yes (notes a, b, c, d)
4.5 (Shared) QFS | Solaris 9 update 3 and later, Solaris 10 u1 | SC 3.2 | VM/Oban (with Solaris 10 only, no S9 support) | N/A | N/A (notes c, d, e, f)
4.6 (HA) QFS Standalone | Solaris 9 update 3 and later, Solaris 10 u3 | SC 3.2 | SVM and Veritas VxVM 4.1 and above | N/A | Yes (notes a, b, c)
4.6 (Shared) SAM-QFS | Solaris 9 update 3 and later, Solaris 10 u3 | SC 3.2 | VM/Oban (with Solaris 10 only, no S9 support) | L700 i, 9a, SL500 FC 9a | Refer to note h (notes c, d, e, g, h)
a. Supported with use of the HA-NFS agent
b. Supported with use of the HA-Oracle agent
c. Supports Oracle 9i, 10gR1, and 10gR2
d. Supported for SC 3.2 with QFS 4.5 plus the QFS 05 patch (Build 4.5.42)
e. Supported with use of the RAC agent(s)
f. Supported with SVM cluster functionality (Oban)
TABLE 11-21 Sun StorEdge QFS (x64) Support Matrix with both Sun Cluster 3.1 and 3.2
4.5 (HA) QFS Standalone | Solaris 9 update 3 and later, Solaris 10 FCS - u1 | SC 3.1u4/3.2 | SVM/VxVM 4.0 and above | N/A | Yes (notes a, b, c)
4.5 (Shared) QFS | Solaris 9 update 3 and later, Solaris 10 FCS - u1 | SC 3.1u4/3.2 | VM/Oban (with Solaris 10 only, no S9 support) | N/A | N/A (notes c, d, e, f)
4.6 (HA) QFS Standalone | Solaris 9 update 3 and later, Solaris 10 FCS - u3 | SC 3.1u4/3.2 | SVM/VxVM 4.1 and above | N/A | Yes (notes a, b, c)
4.6 (Shared) SAM-QFS | Solaris 9 update 3 and later, Solaris 10 FCS - u3 | SC 3.1u4/3.2 | VM/Oban (with Solaris 10 only, no S9 support) | L700 i, ia, SL500 FC ia | Refer to note h (notes c, d, e, g, h)
a. Supported with use of the HA-NFS agent
b. Supported with use of the HA-Oracle agent
c. Supports Oracle 10gR2
d. Supported with use of the RAC agent(s)
e. Supported with SVM cluster functionality (Oban)
f. Supported for SC 3.2 with QFS 4.5 plus the QFS 05 patch (Build 4.5.42)
FIGURE 11-1 Solaris Cluster in I/O domains with non-clustered guest domains
Please note that using LDoms 1.0.3 guest domains as Sun Cluster nodes, in
conjunction with LDoms I/O domains that provide device services to other domains,
can introduce additional load on the I/O domains. As such, performance and
capacity planning should be considered for the I/O domains.
Sun Cluster data services that are currently certified are also supported with
LDoms 1.0.3 guest domain clusters, with the following exceptions:
■ Oracle RAC configurations.
Following are some rules and guidelines for using LDoms 1.0.3 guest domains with
Sun Cluster:
■ Use the mode=sc option for all virtual switch devices that connect the virtual
network devices used as the cluster interconnect.
■ Map only the full SCSI disks into the guest domains for shared storage.
■ The nodes of a cluster can consist of any combination of physical machines,
LDoms I/O domains, and LDoms guest domains.
■ If a physical machine is configured with LDoms, install Sun Cluster software only
in I/O domains or guest domains on that machine.
■ Network isolation - Guest domains that are located on the same physical machine
but are configured in different clusters must be network-isolated from each other
using one of the following methods:
■ Configure the clusters to use different network interfaces in the I/O domain
for the private network.
■ Use different network addresses for each of the clusters.
For the complete and detailed list of rules and guidelines please refer to
http://wikis.sun.com/display/SunCluster/Sun+Cluster+3.2+2-
08+Release+Notes#SunCluster3.22-08ReleaseNotes-ldomsguidelines
http://wikis.sun.com/display/SunCluster/Sun+Cluster+3.2+2-
08+Release+Notes#SunCluster3.22-08ReleaseNotes-ldomssw
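As a sketch of the first rule above, the virtual switch that carries the cluster
interconnect is created with the mode=sc option in the I/O domain, and the guest
domain is then given a virtual network device on that switch. The domain names,
backing network device, and virtual switch/network names below are assumptions.

  # In the control/I/O domain ("primary"): virtual switch for the private
  # interconnect, with the Sun Cluster mode and a hypothetical backing NIC
  ldm add-vsw mode=sc net-dev=e1000g1 priv-vsw1 primary
  # Give the guest domain ("guest1") a virtual network device on that switch
  ldm add-vnet priv-vnet1 priv-vsw1 guest1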
Please note that the following cards are not supported as of July '08:
http://docs.sun.com/source/820-4895-10/chapter1.html#d0e995
Console Access
It is required to have console access to each cluster node for some maintenance and
service procedures, and for monitoring the console messages. Sun Cluster 3 does not
require any specific type of console access mechanism. Some options that are
available are:
■ Sun serial port A - this may be used with the Sun Cluster Terminal Concentrator
(X1312A), a customer supplied terminal concentrator, an alphanumeric terminal,
or serial terminal connection software from another computer such as tip(1).
■ E10K System Service Processor (SSP) and similar console devices.
■ Sun keyboards and monitors may be used on cluster nodes when supported by
the base server platform. However, they may not be used as console devices. The
console must be redirected to a serial port or SSP/RSC as applicable to the server
using the appropriate OBP settings.
■ Sun Management Center (SunMC) - This is the de facto system management tool
for all Sun platforms in the Enterprise. SunMC enables administrators to carry out
in-depth monitoring of the SunPlex system. Sun Cluster 3 requires that the
SunMC console layer be run on a Solaris SPARC system. The versions of SunMC
supported with the Sun Cluster 3 product are listed below:
■ SunMC 2.1.1
■ SunMC 3.0
■ SunMC 3.5
■ SunMC 3.6
■ SunMC 3.6.1
■ SunMC 4.0
■ SunPlex Manager - This is an easy-to-use system management tool that enables
one to carry out basic SunPlex system management and monitoring, with a focus
on installation and configuration. This requires a suitable workstation or PC with
a Web browser as listed below:
■ Cluster Control Panel (CCP) - provides a launch pad for the cconsole, crlogin,
and ctelnet GUI tools which start multiple window connections to a set of
specified nodes. The multiple window connections consist of a host window for
each of the specified nodes and a common window. The common window’s input
is directed to each host window for running the same command on each node
simultaneously. This requires a Solaris SPARC system with a graphics console
running Solaris 8 (or later) and requires about 250KB in /opt. Note that cconsole
is designed to work with the Sun Cluster Terminal Concentrator, Enterprise 10K
System Service Processor, Sun Fire 3800 - 6800 System Controller, and Sun Fire
12K/15K System Controller. Cluster Control Panel is supported with Solaris 9 x86
and Solaris 10 x86.
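A minimal sketch of the console access options described above, assuming a cluster
named "planets" defined in /etc/clusters on the administrative workstation; verify the
OBP variable values for your server platform.

  # On each cluster node: redirect the console to serial port A
  eeprom input-device=ttya
  eeprom output-device=ttya

  # On the administrative workstation (SUNWccon package):
  ccp planets &          # launch the Cluster Control Panel
  cconsole planets &     # or start the multi-window console tool directly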
Follow the steps given below for ordering a Sun Cluster 3 Configuration:
9. (Required) Order Enterprise Services and training packages from the Sun
Cluster section of the Enterprise Services price list.
Server Component | Required Quantity | Recommended Quantity
2. (Required) Order Shared Storage. The tables below give the number of
components (for example, cable, GBIC) required to connect each storage unit to
a pair of nodes. Some of these components may be bundled with other
components (for example, cable with storage array). Please calculate the actual
number of additional components to be ordered appropriately. Also, the tables
give the number of “Host I/O ports” required with a shared storage unit. Some
servers have onboard host adapters and some host adapter cards have multiple
ports on them. Calculate the actual number of Host Adapter Cards to be
ordered appropriately.
a. Ordering Netra st D130. Refer to “SCSI Storage Support” on page 127 for the
configuration rules and the part numbers of the supported components.
Order each component in the quantity mentioned in the table below to
configure a Netra st D130 unit as shared storage.
Component Quantity
b. Ordering Sun StorEdge S1. Refer to “Sun StorEdge S1 Array” on page 134
for the configuration rules and the part numbers of the supported
components. Order each component in the quantity mentioned in the table
below to configure a Sun StorEdge S1 unit as shared storage.
Component Quantity
Component Quantity
d. Ordering Sun StorEdge D1000. Refer to “Netra st A1000 Array” on page 128
for the configuration rules and the part numbers of the supported
components. Order each component in the quantity mentioned in the table
below to configure one D1000 unit as shared storage. To configure a Single
Bus D1000, order components in the first row of the table. To configure a Split
Bus D1000, order components in the second row of the table.
e. Ordering Netra st D1000. Refer to “Netra st A1000 Array” on page 128 for the
configuration rules and the part numbers of the supported components.
Order each component in the quantity mentioned in the table below to
configure one Netra st D1000 unit as shared storage. To configure a Single
Bus Netra st D1000, order components in the first row of the table. To
configure a Split Bus Netra st D1000, order components in the second row of
the table.
Component Quantity
connecting two hubs to a pair of nodes. Order all the components in the second
row of the table below to configure an A3500FC controller module attached to
both hubs.
Component Quantity
Component Quantity
iii. Ordering Hub-attached full loop, single loop A5x00. Order all the
components in the first row of the table below for connecting a hub to a
pair of nodes. Order all the components in the second row of the table
below to attach as many A5x00 units to the hub as required. Note that a
maximum of 4 A5000, 4 A5100, or 3 A5200 units can be attached to a hub.
Ordering Hub-attached full loop, dual loop A5x00. Order all the
components in the first row of the table below for connecting two hubs to a
pair of nodes. Order all the components in the second row of the table
below to attach as many A5x00 units to both hubs as required. Note that a
maximum of 4 A5000, 4 A5100, or 3 A5200 units can be attached to the hub
pair in this fashion.
i. Ordering Sun StorEdge T3. Refer to “Sun StorEdge T3 Array (Single Brick)”
on page 74 for the configuration rules and the part numbers of the supported
components. Both T3 for the Workgroup and T3 for the Enterprise models
are supported with Sun Cluster 3.
ii. Ordering Switch-attached T3 Array. Order all the components in the first
row of the table below for connecting two switches to a pair of nodes.
Order all the components in the second row of the table below to attach a
T3 brick to a switch.
Component Quantity
Interconnect topology | Component | Min. Quantity | Max. Quantity
6. (Required) Order the Solaris media. Solaris licenses are included with a new
Sun server.
a. Order Sun Cluster 3 base software. Starting with the 7/01 release, we now
have a generic part number available for Sun Cluster 3. This part number
will always point to the latest update release. Order the Sun Cluster 3 license:
Description Part#
TABLE 13-1 Sun Cluster 3.1 base software, License Only
Description Part#
TABLE 13-2 Sun Cluster 3.2 base software, License Only
Description Part#
c. Upgrade licenses for the cluster software. Order one per server. Please refer
to http://www.sun.com/software/solaris/cluster/faq.jsp#g31 for more
details on various tiers:
TABLE 13-3 Sun Cluster 3.1 and 3.2 Base Software, Upgrade from Previous Revisions Only
SunPlex upgrade license to upgrade from Tier 1 to Tier 2 CLSIS-LCO-A9U9 1 per server
SunPlex upgrade license to upgrade from Tier 2 to Tier 3 CLSIS-LCO-B9U9
SunPlex upgrade license to upgrade from Tier 3 to Tier 4 CLSIS-LCO-C9U9
SunPlex upgrade license to upgrade from Tier 4 to Tier 5 CLSIS-LCO-D9U9
SunPlex upgrade license to upgrade from Tier 5 to Tier 6 CLSIS-LCO-E9U9
SunPlex upgrade license to upgrade from Tier 6 to Tier 7 CLSIS-LCO-F9U9
SunPlex upgrade license to upgrade from Tier 7 to Tier 8 CLSIS-LCO-G9U9
SunPlex upgrade license to upgrade from Tier 8 to Tier 9 CLSIS-LCO-H9U9
SunPlex upgrade license to upgrade from Tier 9 to Tier 10 CLSIS-LCO-I9U9
SunPlex upgrade license to upgrade from Tier 10 to Tier 11 CLSIS-LCO-J9U9
SunPlex upgrade license to upgrade to same or lower Tier CLSIS-LCO-K9U9
a. Order Sun Cluster 3 Agent software. For Sun Cluster 3.1, order the Sun
Cluster 3 Agents CD. A softcopy of the documentation for the agents is included
on the CD. For Sun Cluster 3.2, the agents are included on the same DVD as
the base software. Documentation can also be found at docs.sun.com.
b. Order Sun Cluster 3.1 and 3.2 Agent license. Order one license for every
agent installed in the cluster.
Description Part#
Description Part#
c. Order the VxVM cluster license from the table below. This license needs to be
ordered when OPS/RAC is used with VxVM. Note that the VxVM
software package includes the cluster functionality. Separate license
keys are needed to enable the VxVM base product and the VxVM cluster
functionality. The VxVM software package and the license key for the VxVM
base product need to be acquired separately.
Veritas VxVM 5.0 Cluster Functionality License CLUI9-500-9999 One per OPS/RAC node
Note that CVM 5.0 uses the same license PN as that of VxVM 5.0
Description Part#
Sun Cluster Advanced Edition for Oracle RAC License for Tier 1 Servers CLAI9-LCA-1999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 2 Servers CLAI9-LCA-2999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 3 Servers CLAI9-LCA-3999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 4 Servers CLAI9-LCA-4999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 5 Servers CLAI9-LCA-5999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 6 Servers CLAI9-LCA-6999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 7 Servers CLAI9-LCA-7999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 8 Servers CLAI9-LCA-8999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 9 Servers CLAI9-LCA-9999
Sun Cluster Advanced Edition for Oracle RAC License for Tier 10 Servers CLAI9-LCA-1099
Sun Cluster Advanced Edition for Oracle RAC License for Tier 11 Servers CLAI9-LCA-1199
Description Part#
Sun Cluster Geographic Edition 3.1 License for Tier 1 Servers CLGI9-001-9999
Sun Cluster Geographic Edition 3.1 License for Tier 2 Servers CLGI9-002-9999
Sun Cluster Geographic Edition 3.1 License for Tier 3 Servers CLGI9-003-9999
Sun Cluster Geographic Edition 3.1 License for Tier 4 Servers CLGI9-004-9999
Sun Cluster Geographic Edition 3.1 License for Tier 5 Servers CLGI9-005-9999
Sun Cluster Geographic Edition 3.1 License for Tier 6 Servers CLGI9-006-9999
Sun Cluster Geographic Edition 3.1 License for Tier 7 Servers CLGI9-007-9999
Sun Cluster Geographic Edition 3.1 License for Tier 8 Servers CLGI9-008-9999
Sun Cluster Geographic Edition 3.1 License for Tier 9 Servers CLGI9-009-9999
Sun Cluster Geographic Edition 3.1 License for Tier 10 Servers CLGI9-010-9999
Sun Cluster Geographic Edition 3.1 License for Tier 11 Servers CLGI9-011-9999
Description Part#
11. (Required) Order Enterprise Services and training packages from the Sun
Cluster section of the Enterprise Services price list.
Campus Clusters
This appendix documents all the support-related information for campus clusters
using Sun Cluster 3. For a detailed description of campus cluster concepts and
configurations, refer to the Sun Cluster Hardware Administration Guide. In general,
the support information listed for traditional clusters in the rest of this configuration
guide applies to campus cluster configurations as well. This section gives details that
are specific to campus cluster configurations, with appropriate pointers to other
sections of the configuration guide.
Number of Nodes
8-node campus cluster configurations are supported with Sun Cluster 3.
Applications
All of the application services, including Oracle Parallel Server (OPS) and Real
Application Clusters (RAC), mentioned in “Software Configuration” on page 219
are applicable to campus clusters as well.
Note that the solutions deployed for distance in the transport subsystem and for the
distance in the I/O paths can be either distinct or shared, as depicted in the previous
example with DWDMs. This design choice has to be made by the implementers,
within the constraints of the requirements described in the other sections of this
document, and may depend on the topology of the Specs Based Campus Cluster.
Technical requirements
This section lists the technical features that a Specs Based Configuration
must comply with:
Latency:
■ Transport Latency
■ The measured latency of each transport, between any pair of nodes in the
cluster, must be less than 15 ms one-way.
■ Note that this document doesn’t address the means used to measure the
latency. It assumes that this information is obtained by the field, possibly but
not exclusively, under the terms of some Service Level Agreement (SLA).
■ Data path Latency
■ The measured latency of each path, between nodes and storage devices
attached through redundant SANs, must be less than 15 ms.
■ Note that the “path” that is referred to in that previous rule is defined as
whatever resides between a SAN switch the cluster nodes are directly
connected to, and the corresponding SAN switch the shared storage devices
are directly connected to.
■ The same remark as above applies here concerning the actual measurement of
that latency.
■ General rules and guidelines:
■ The measured network latency should be identical for each redundant private
interconnect between two nodes
■ In case of failures in the distance infrastructure (“cloud”), the latency of the
remaining transport(s) or data path(s) must remain below the max. values (15
ms one-way)
Topology:
The basic requirements and recommendations are common with standard cluster
configurations. Below are a few additional considerations
■ HDS array is supported as Quorum Device with Sun Cluster 3.2 using patch
release 2 (Solaris 9 SPARC/126105-01, Solaris 10 SPARC/126106-01, Solaris 10
x86/126107-01) and Sun Cluster 3.1U4 using patches (Solaris 8 SPARC/117950-31,
Solaris 9 SPARC/117949-30, Solaris 9 x86/117909-31, Solaris 10 SPARC/120500-15,
Solaris 10 x86/120501-15)
■ Transport:
■ Transport redundancy must be implemented and ensured between the cluster
nodes. The distance transport must be implemented in such a way that the
cluster nodes logically and functionally perceive distinct paths. For example,
adding/removing as well as enabling/disabling a transport path should not
affect the other one(s). In other words, from a functional point of view, the
distance implementation must be totally transparent (apart from delayed
responses) to all applicable SC3.x commands related to transports.
■ The same principle must apply during the re-establishment of a previously
failed path.
■ I/O:
■ I/O path redundancy must be implemented and ensured between the nodes
and the SAN attached shared storage devices.
TrueCopy Support
TrueCopy is now supported for shared storage data replication between two sites
within a cluster. This offers a configuration alternative for campus clusters in which
distance concerns make host-side mirroring impractical. Automatic failover in the
case of primary node failure is included, as well as support for SVM, VxVM and raw
disk device groups. Careful consideration must be given when deciding on
TrueCopy configuration parameters, such as the fence level, since these have a direct
impact on cluster availability and data integrity guarantees.
All TrueCopy fence levels are supported; however, there are specific trade-offs with
respect to cluster availability, performance, and data integrity that should be
considered when deciding upon a setting. The DATA fence level offers the best
guarantees of data integrity by offering fully synchronous data updates, but can
leave the primary site vulnerable to storage problems at the secondary site. A fence
level of NEVER avoids the issues of being vulnerable to secondary storage failures,
but opens up the possibility of allowing the primary and secondary data copies to
get out of sync. Using a fence level of ASYNC can offer increased I/O performance
through the use of asynchronous data updates, but of course introduces a potential
for data loss should the primary site fail while it is still caching unwritten data.
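As a hypothetical illustration (the device group name and options are assumptions;
consult the TrueCopy/Raid Manager documentation for your configuration), the fence
level is selected when the replica pair is created with the Raid Manager (CCI)
paircreate command:

  # Create the replica pair for device group VG01 with the primary volume
  # local (-vl) and the DATA fence level; never or async could be chosen
  # instead, per the trade-offs described above
  paircreate -g VG01 -vl -f data
  pairdisplay -g VG01    # verify the pair status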
Two node clusters still require the use of a quorum device and even though the
replicated Truecopy devices are made to look like a single DID device, they are not
truly shared devices, so do not meet the needs of a quorum device. Quorum server
is generally a viable option.
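As a hedged sketch (host address, port, and device name are assumptions), a quorum
server that is already configured and running on a machine outside the cluster could
be added as the quorum device with the Sun Cluster 3.2 command set:

  # On one cluster node: add the external quorum server as the quorum device
  clquorum add -t quorum_server -p qshost=10.11.112.7,port=9000 qserver1
  clquorum status        # confirm the quorum configuration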
Nodes at each site must have direct access to only one of the devices in a replica
pair; otherwise, volume management software can become confused about the disks
that make up replicated device groups. Multiple local nodes at each site can share
access to local replicas (providing local failover), but direct access to a single replica
must not be shared between sites.
SRDF Support
SRDF is now supported for shared storage data replication between two sites within
a cluster. This offers a configuration alternative for campus clusters in which
distance concerns make host-side mirroring impractical. Automatic failover in the
case of primary node failure is included, as well as support for SVM, VxVM and raw
disk device groups. Careful consideration must be given when deciding on SRDF
configuration parameters, since these have a direct impact on cluster availability and
data integrity guarantees.
■ Two node clusters still require the use of a quorum device and even though the
replicated SRDF devices are made to look like a single DID device, they are not
truly shared devices, so do not meet the needs of a quorum device. Quorum
server is generally a viable option.
■ Nodes at each site must have direct access to only one of the devices in a replica
pair; otherwise, volume management software can become confused about the
disks that make up replicated device groups. Multiple local nodes at each site
can share access to local replicas (providing local failover), but direct access to a
single replica must not be shared between sites.
■ Careful planning of device usage is important as replica groups must be
configured to match a corresponding global device group (including naming) so
that the switching of the replication primary can coincide with the importing of
the proper device groups.
■ Take care to ensure that the correct DID devices are being merged into a single
replicated DID device. If the wrong pair of devices are combined, use the
“scdidadm -b” command to unmerge them.
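For illustration only (the device group name is an assumption), the state of an SRDF
device group can be inspected, and a site failover performed, with the EMC Solutions
Enabler SYMCLI; in normal operation these steps are driven by the cluster software
rather than run by hand:

  # Query the R1/R2 pair states for a hypothetical device group "clusterdg"
  symrdf -g clusterdg query
  # After a primary-site failure, make the R2 devices writable at the
  # surviving site
  symrdf -g clusterdg failover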
Introduction
This chapter provides a description of the supported Sun Cluster Geographic
Edition (GE) product hardware configurations and infrastructure. The Sun Cluster
Configuration Guide / Support Matrix provides the technical specification for
individual clusters in Sun Cluster GE configurations. The networking infrastructure
required for inter-cluster connections will depend on customer-specific
requirements.
Inter-Cluster Topologies
Inter-cluster relationships in Sun Cluster GE consist of entities called partnerships,
which are relationships between two clusters. All Sun Cluster GE inter-cluster
communications happen between partner clusters.
FIGURE B-1 Example Sun Cluster GE topologies that demonstrate Sun Cluster GE inter-
cluster relationships.
The New York-London topology has two clusters that form a partnership with two
protection groups. In normal operation, each cluster is the primary for one of the
protection groups and the secondary for the other; this is a symmetrical
configuration. The partnership requires a two-way IP connection between the two
clusters for inter-cluster management and heartbeats. Data-replication link
infrastructure is required between the clusters to support data-replication for two
protection groups.
Three-site topologies
It is possible to use a campus cluster for the primary cluster, thus creating a three-
site configuration of Primary, Backup and DR sites. This is currently supported using
volume manager mirroring within the campus cluster, and AVS replication to the DR
site. Other combinations will be supported in the future. It is not possible to create a
daisy-chain of Sun Cluster GE pairs (for example, London -> Paris -> Rome).
Both sites must have the same platform architecture, SPARC or x64. This is not a
requirement of Sun Cluster GE, but rather of most applications. Filesystems and data
files (e.g., from an Oracle database) are generally not endian-neutral. Heterogeneous
combinations have therefore not been tested.
For specific supported software versions, please see the matrices at the end of this
section.
Storage configurations
Within one cluster, Sun Cluster GE data-replication places some software
configuration requirements on the accessibility of device groups and the
configuration of data volumes. The software configuration requirements may have
implications for the preferred configuration of storage on the cluster.
The use of synchronous replication guarantees that both clusters in a partnership
always have identical copies of data; however, the need to ensure that data has been
written to both partners before a write is considered complete means that the data
write throughput is effectively limited to that of the inter-cluster link. This will be
orders of magnitude slower than the physical disk connection.
The use of Asynchronous replication will avoid this performance penalty, but can
mean that the data stored on the secondary partner may not always be an up-to-date
copy of the primary data. A failure of the primary cluster under such circumstances
can result in some data updates not being completed at the remote site.
Using Sun Cluster GE with AVS requires nothing in the way of specialized
hardware. AVS, being a software-based replication system, is largely hardware-
agnostic. See the AVS documentation for information on which Sun storage systems
are supported.
Since AVS replication software runs on a single host in each cluster, certain scalable
and parallel applications cannot be supported with AVS. A specific example is
Oracle RAC, which cannot work with AVS. HA-Oracle is fully supported.
Supported versions
AVS 3.2.1 is supported only on Solaris 8 and Solaris 9, SPARC only. AVS 4.0 is
supported only on Solaris 10, SPARC and x86.
Supported versions
TrueCopy Raid Manager versions 01-18-03/03 or later (SPARC) are supported.
EMC SRDF
Use of Sun Cluster GE with EMC Symmetrix Remote Data Facility (SRDF) data-
replication requires Sun Cluster configurations with EMC Symmetrix hardware that
supports the SRDF Solutions Enabler command interface.
http://www.emc.com/techlib/pdf/H1143.1_SRDFS_A_Oracle9i_10g_ldv.pdf
Supported versions
EMC Solutions Enabler (SYMCLI) version 6.0.1 or later is supported on Solaris SPARC
and x86. Enginuity firmware version 5671 or later is required.
Custom Heartbeats
Sun Cluster GE provides interfaces for optional customer-added plug-ins for inter-
cluster heartbeats. The communication channel for a custom heartbeat plug-in is
defined by its implementation. A custom heartbeat plug-in would allow the use of
a communication channel that is different from the default heartbeat connection. In a
telecoms environment, for example, there may be other, non-IP, connection paths
available.
The type of inter-cluster links used for the data replication will depend on the
product chosen. Sun Cluster GE does not place additional limitations on this beyond
those required by the data replication product.
Note, however, that while network throughput (in Mbit/s) is important when dealing
with large quantities of data, network latency is of much greater importance as far as
write performance is concerned.
By way of an example, consider a large internet sales company. It will have a large
database of products, which is updated regularly but probably not continuously.
Staff will, from time to time, add new products and remove old ones. Such a
database could safely be replicated asynchronously, since even if some updates were
lost following a failure, the situation could be recovered relatively easily. Staff could
re-enter the changes at a later date.
On the other hand, the filesystem which keeps records of customers’ purchases
cannot tolerate any data loss, since this could not be recovered by company staff.
This would not only result in financial loss from the lost order data, but could also
lead to a loss of customer confidence. The relatively small quantity of data stored
would, however, probably permit this filesystem to be replicated synchronously to
avoid any risk of data loss following a failure.
Unsupported features
Support for some new features in Solaris requires further testing and/or additional
development. Please note the following specific restrictions.
Shared QFS
Shared QFS filesystems embed the names of the host systems in the filesystem
metadata. In order to transfer a shared QFS filesystem to a new cluster, this metadata
must be rewritten to contain the names of the hosts in the new cluster. SCGE does
not perform this rewrite, and so shared QFS filesystems cannot be supported with
SCGE. This restriction will be lifted in a forthcoming release.
Oracle ASM
Testing on ASM is ongoing and support is very limited at this time. Please contact
the cluster team for the latest status.
ZFS
There are two issues which prevent SCGE from supporting ZFS:
1. Prior to bringing a zpool online on a new cluster, the LUNs used by the zpool
must be imported. This is analogous to the import operation carried out by
traditional volume managers such as SVM and VxVM. SCGE does not yet issue a
zpool import command (a manual sketch of this operation is shown after this
list). This prevents the use of ZFS with storage-based replication mechanisms,
where the LUNs are inaccessible while configured as secondaries.
2. More seriously, there is a potential interaction between ZFS and block-based
replication systems in general. The ZFS copy-on-write model of file update
presumes that the on-disk structure of the filesystem is always internally
consistent. For a local filesystem this will be the case, but when a filesystem is
replicated to a remote site this consistency can only be guaranteed if the order in
which disk blocks are written is the same at the secondary site as at the primary.
All of the supported replication technologies will guarantee this during normal
active replication, but if the communications link between primary and secondary
sites is lost, or the secondary site is otherwise unavailable, a backlog of modified
blocks will occur at the primary. This backlog will be transmitted once the secondary
site is again available; however, most replication products do not maintain write-
ordering during this catch-up phase (AVS, TrueCopy, and SRDF do not maintain
write-ordering under such circumstances; Universal Replicator does). If a failure should
occur during this catch-up resynchronization, the destination zpool could be left in
an unusable state.
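As referenced in item 1 above, the following is a hedged sketch of the missing import
step performed manually; the pool name is an assumption.

  # Once the replicated LUNs at the new primary site are read/write,
  # force-import the pool that was last active on the other cluster
  zpool import -f salespool
  zpool status salespool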
affinities between the replication RGs and the application RGs. Solaris Cluster
will not permit affinities or dependencies to be created between RGs if one RG
has a nodelist of physical nodenames, and the other has a nodelist of “zone-
nodes”. This is highlighted in CR 6443496.
Until this issue is addressed, SCGE will be unable to support the use of zone-
nodes with AVS replication. The use of zone-nodes with TrueCopy and SRDF is,
however, fully supported.
TABLE B-1 Test/support matrix for SC Geographic Edition with various types of data
replication and volume managers
Volume HW HW
Manager: Raid SVM†† VxVM HW Raid SVM†† VxVM Raid SVM†† VxVM
Odyssey S8u7 SPAR Yes Yes Yes Yes‡‡ No††† Yes‡‡ No‡‡‡ No‡‡‡ No‡‡‡
R1 SCGE or C (V4.1)
3.1 8/05 later
x64 No‡ No‡ No‡ No‡ No‡ No‡ No‡‡‡ No‡‡‡ No‡‡‡
with SC
3.1u4 (3.1 S9u7 SPAR Yes Yes Yes Yes No††† Yes No‡‡‡ No‡‡‡ No‡‡‡
8/05) * or C (V4.1) (V4.1)
later
x64 No‡ No‡ No‡ No‡ No‡ No‡ No‡‡‡ No‡‡‡ No‡‡‡
S10 SPAR No§ No§ No§ Yes No††† Yes No‡‡‡ No‡‡‡ No‡‡‡
C
x64 No§ No§ No§ No§§ No††† No§§ No‡‡‡ No‡‡‡ No‡‡‡
Odyssey S8u7 SPAR Yes Yes Yes Yes*** No††† Yes*** No‡‡‡ No‡‡‡ No‡‡‡
R2 or C (V4.1)
(“Nestor”) later
x64 No‡ No‡ No‡ No‡ No‡ No‡ No No No
SCGE 3.1
‡,‡‡‡, ‡,‡‡‡,§ ‡,‡‡‡,§§
2006Q4,
§§§ §§ §
with SC
3.1u4 (3.1
8/05)
S9u7 SPAR Yes Yes Yes Yes No††† Yes Yes No††† Yes
or C (V4.1) (V4.1)
later
x64 No‡ No‡ No‡ No‡ No‡ No‡ No‡ No No‡
‡,§§§
S10U SPAR Yes Yes Yes Yes No††† Yes Yes No††† Yes
2 or C (V4.1) (V4.1) (V4.1)
later
x64 Yes Yes Yes No§§ No††† No§§ No§§§ No No§§§
(V4.1) †††,§§
§
This matrix shows the supported combinations for each release of Sun Cluster Geographic Edition. Superscript
numbers refer to explanatory notes below. It is assumed that each Solaris release also has the latest patch
releases required by the underlying Sun Cluster installation, unless notes are given to the contrary. The full
details of testing can be found at the (internal) URLs in the Test documents section in the following paragraph.
This is a current matrix, including qualifications carried out after a given version was released. The support
status of components not specifically referred to here (e.g. UFS, VxFS) should be determined by reference to
standard Sun Cluster.
Note that references to volume managers below are to single-owner versions (i.e. not CVM or Oban). Multi-
owner volume manager support is addressed in the Oracle configuration matrix.
Test documents:
http://haweb.sfbay/dsqa/projects/odyssey/r1/
http://galileo.sfbay/scq/odyssey/athena/
http://galileo.sfbay/scq/odyssey/post_scgeo32_quals/
* When using SCGE 3.1 8/05 with Cacao 1.1 (as shipped in Java ES 4) patch 122783-03 or later must be installed.
† AVS 3.2.1 required for Solaris 8 and 9, AVS 4.0 or later required for Solaris 10
‡ SCGE x64 support is only available with Solaris 10.
§ AVS was not available for Solaris 10 at this time.
** Solaris 8 is not supported with Sun Cluster 3.2, nor with SCGE 3.2.
††On Solaris 8 references to SVM should be taken as referring to Solstice Disk Suite (SDS)
‡‡Tested on Solaris 9, extrapolated to S8.
§§Not tested.
***Not tested, extrapolated from testing on previous release.
†††CRs 6216278 (SVM) and 5070680 (SCGE) must be addressed first. Work is in progress.
‡‡‡SRDF support was added for SCGE 3.1 2006Q4, for S9 and S10 only.
§§§SRDF software was not available for Solaris on x86 or x64 platforms for this release.
TABLE B-2 Test and support matrix for SCGE and Oracle RAC, showing tested and
supported configurations per release.
True Copy SPARC Yes‡ No‡‡ Yes‡, ***, Yes‡ No‡‡ Yes***,†††
†††
††††
This matrix shows the supported combinations for Oracle RAC and various types of data replication
technology, for each release of Solaris Cluster Geographic Edition (SCGE). Superscript numbers refer to
explanatory notes below. It is assumed that each Solaris release also has the latest patch releases required by
the underlying Sun Cluster installation, unless notes are given to the contrary. The full details of testing can
be found at the (internal) URLs in the Test documents section in the following paragraph.
“HW Raid” means that no volume manager was used. “SVM/Oban” means the Sun Cluster Volume Manager,
and “VxVM/CVM” means the Veritas Cluster Volume Manager.
This is a current, evolving, matrix, including qualifications carried out after a given version was released.
HA Oracle. Note that this table no longer calls out HA-Oracle as a separate entity. SCGE support for HA-
Oracle is the same as that provided by the underlying Solaris Cluster release.
* ASM support is limited at present, for technical reasons.
† The use of AVS Replication with Oracle RAC is not technically possible.
‡ Extrapolated from tests on a compatible release.
§ CVM is not yet supported on Solaris x86
** Oracle 9i was not released for Solaris x86.
††SRDF software was not available with SCGE for Solaris on x86 or x64 platforms for this release.
‡‡CRs 6216268 (SVM), 6325951 (Oban) and 5032363 (SCGE) must be addressed first.
§§Not yet tested, by project decision.
***Requires SCGE TrueCopy patch 126613-01 or later.
†††Limited support, requires special configuration. Obtain prior review/approval of the configuration by the SCGE team before making a
commitment.
‡‡‡11g support is the same as 10g, presuming corresponding support by underlying core Sun Cluster
§§§CRs 6216268 (SVM) and 5070680 (SCGE) must be addressed first. Work is in progress.
****VxVM on x64 is not supported by SC3.1u4
††††Requires SCGE SRDF patch 126746-01 or later.
Third-Party Agents
All the agents mentioned in “Application Services” on page 222 are developed, sold,
and supported by the Sun Cluster Business Unit. A variety of agents have been or are
being developed by third-party organizations - other business units in Sun, and
ISVs. These agents are sold and supported by the respective third-party
organizations. The table below lists the agents that Sun Cluster product marketing
is aware of.
The application versions supported in this table may not be up to date.
Please contact the person referred to in the contact column of the table for the
latest information on these agents:
Application Contact
04/17/2001
■ Added support for Serengeti-12/12i/24 with
Revision T3 single brick configs
History 05/07/2001
■ HA Oracle 8.1.6 64bit
■ Solaris 8 U4
■ SunMC 3.0 support
■ changed the verbiage for Sun Cluster 3.0
11/21/00 server licensing
■ First draft created. ■ Sample configs for Serengeti12/12i/24
cluster
12/22/00
06/12/01
■ HA LDAP 4.12 + Solaris 8
■ T3 single brick + 220/420/250/450
02/13/01 ■ Switch + 250/450/220r/420r/4800/4810/
6800
■ Support for E420R
■ CVM 7/10/2001
03/30/01 ■ VxVM 3.1.1
■ Oracle 9iRAC (OPS, 32bit) + VxVM 3.0.4
■ T3 single brick + E3x00-E6x00,E10K ■ Oracle Parallel Server 8.1.7 32bit + VxVM
■ A3500FC + E3x00-E6x00, E10K 3.1.1
■ Solaris 8 Update 2
■ Solaris 8 Update 3
■ OPS/RAC support on Sun Fire 4800/4810/ ■ Netra t 1400/1405 + Netra st D130 + VxVM
6800 servers 3.1.1
■ Gigabit Ethernet as Public Network
Interface. 9/26/01
■ Sun Fire 4800/4810/6800 8 node, mixed
■ Clarify Statement around E1 expander
cluster, and SVM support
support
■ Add II/SNDR 3.0 support
07/23/01 ■ Netra 1400/1405 + S1
■ SunPlex Manager ■ Netra AC200/DC200 + S1
■ Solaris Resource Manager 1.2 coexistence ■ F15K + Purple2
■ HA Sybase Agent
■ HA SAP Agent. 10/01/01
■ Sun Fire(TM) 280R server support.
■ clarify statement around 2 node OPS/RAC
■ Sun Fire 3800 server support.
support
■ Netra t1 200
■ HA Oracle 8.1.7 64 bit
■ Netra t 1400/1405
■ HA Oracle 9i 32 bit
■ Netra t 1120/1125
■ weaken the swap requirements to
recommendation
08/01/01 ■ removed the two node limit for E250/450/
■ Fix the VxVM license in sample configs 220R/420R + T3 single bricks
■ Solaris 8 7/01 ■ added a table for maximum cluster nodes
■ HA Informix v9.21
■ T3PP + E220R/E420R/E250/E450 10/16/01
■ Solaris 8 Update6 support
08/21/01 ■ Netra 20 + D1000
■ SE 99x0 + E450/E3500-6500 ■ Netra 20 + S1
■ HA Netbackup 3.4, 3.4.1
08/29/01
■ Changed SVM to SDS
10/29/01
■ Oracle 9iRAC (OPS) 32 bit + VxVM 3.1.1 ■ Sun Fire V880 + D1000/A5200/T3
(using cluster functionality) ■ Scalable Broadvision
■ HA SAP 4.6D 64 bit
■ HA SAP 4.5B 32 bit 11/13/01
■ HA SAP 4.0 32 bit
■ HA Informix v9.21 to be sold and supported
■ LDAP 4.13
by Informix. Contact: Hans Juergen Krueger,
■ Sun StorEdge 4800/4810/6800 + T3PP
hans-juergen.kreuger@informix.com, 1-650-
926-1061
9/11/01 ■ Oracle 9i RAC 64bit
■ Purple2 support ■ Update information about webdesk
■ 280R + Purple1 partner pair ■ Update information about SCOPE
■ 3800 + Purple1 partner pair ■ cleaned up the placement of some of the
■ >2 node 280R configs storage information.
■ add crystal+ support
12/04/01 02/12/02
■ Sun Cluster 3.0 U2 ■ Added a section on campus cluster
• PCI-SCI + E3500-6500 configurations
■ E3500-6500, 10K (SBus only) + A5x00/T3A/
01/08/02 TB (single brick and partner pair) + 6757A
■ onboard GBE port for public interface and
■ Indy DAS
cluster interconnect for V880
■ OPFS 8i 32bit
■ Scalable SAP 4.6D 32 bit (same agent as HA-
■ Made MPxIO support information more
SAP)
explicit
■ HA-iDS 5.1
■ PCI/SCI + E250/450
■ HA-iCS 5.1- The HA-iCS agent will be sold
■ added Sun Cluster 3.0 12/01
and supported by the iCS group. Contact
Cheryl Alderese, cheryl.alderese@sun.com
01/29/02 for details.
■ Revision history added ■ Updated the part numbers for sun cluster
■ >2 node support for V880 user documentation
■ >2 node support for SF3800 ■ Updated the contact address for Informix
■ F15K and 1034A public network interface agent
■ Netra T1 + Netra st D1000 ■ added 5-meter fiber optic cable support to
■ 250/220R/420R + FCI 1063 + SE 99x0 direct T3, A5x00 section
attached ■ Clarified statement around use of PCI I/O
■ F4800-6800 + 6799/6727 + SE 99x0 direct board for SCI-PCI in E3500-6500
attached
■ E10K + FC641063 + SE 99x0 direct attached 02/28/02
■ F15K + 6799/6727 + SE 99x0 direct attached
■ TrueCopy support
■ V880 + 1063/6799/6727 + SE 99x0 direct
■ Solaris 8 02/02 support
attached
■ Build F15K and F6800 in the same family
■ V880 + 1063 + Brocade 2800(F) + SE 99x0
■ A1000 support with E250/450/220R/420R/
■ F3800 + 6748 + SE 99x0 direct attached
280R/V880/3500-6500
■ E250/450/220R/420R + FCI 1063 + Brocade
■ Netra 1400/1405, 1120/1125, 20 + Netra st
2800 (F) + SE 99x0
A1000
■ E3500-6500 + FC641063 + Brocade 2800 (F) +
■ Campus clusters support for 220R/420R/
SE 99x0
250/450/280R/V880/3800 + T3A/T3B
■ E10K + FC641063 + Brocade 2800 (F) + SE
(single brick and partner pair)
99x0
■ F4800-6800 + 6727/6799 + Brocade 2800 (QL
only) + SE 99x0
03/15/02
■ F4800-6800 + FCI 1063 + Brocade 2800 (F) + ■ Dynamic reconfiguration (DR) support for
SE 99x0 Sun Fire 3800-6800
■ F15K + 6727/6799 + Brocade 2800 (QL only) ■ 1034A as private interconnect with Sun Fire
+ SE 99x0 15K
■ F15K + 1063 + Brocade 2800 (F) + SE 99x0 ■ SDS 4.2.1 supported with SE 99x0 arrays
■ Quorum support on T3PP/SE 99x0/SE39x0 ■ Soft Partitioning now supported with SDS
■ F15K + F4800-6800 + SE 99x0 - mixed family 4.2.1
config ■ SE39x0 + V880, F15K, E3500-6500, E10K
■ Sun Fire 15K + T3A
08/06/02 10/29/02
■ Revised the topology support section to
■ PCI-SCI with Sun Fire 4800, 6800
reflect the relaxed topology restrictions.
■ Heterogeneous node configurations
■ Added the WDM based campus cluster
■ 2222A + S1 on remaining platforms
configurations section.
■ Availability Suite 3.1 with Sun Cluster 3.0 5/
■ Added the “hot-plug” functionality section
02 (or later) + Solaris 8
to the Campus Cluster section.
■ 8 node N+1 configurations
08/20/02 11/12/02
■ Added Sun Fire V120 Support
■ Cassini 1261a, 1150a, 1151a support
■ Added Enterprise 10k PCI SCI Support
■ Oracle 9iR2 RAC 64 bit
(1074a)
■ Oracle 9iR2 RACG 64 bit
■ Added SANtinel and LUSE to the SE 9900
■ HA-Livecache 7.4
series software support sections.
■ HA-Siebel 7.0
■ Updated Agents and Third-Party Agents
section
09/10/02
■ Fixed several typographical errors within
■ 4 Node OPS/RAC supported with SE 9970/ several sections
9980
■ Netra server line VxVM support 12/03/02
standardized (identical to all other
■ Added McData 6064 1GB switch support for
supported servers with Sun Cluster 3.0)
9910/9960
■ SE A5200 support for V480
■ Added SunOne Proxy Server 3.6 support
■ Support 2GB HBA (6767A, 6768A) and
Brocade 3800 switch with SE T3 ES, SE 39x0
1/14/03 2/25/03
■ PCI SCI (1074a) support for SF 280R, V480, ■ Added Sun Netra 1280 Support
V880 ■ Added Brocade 6400 Switch Support
■ Added McData 6064 2GB switch support for ■ Added SE 69x0 Campus Cluster Support
the 9910/9960/9970/9980
■ Added Netra 120 support 3/11/03
■ Added VLAN support
■ Added Brocade 12000 switch support
■ Added A1000 daisy chaining support
■ Added SF V480 McData 6064 (1&2 Gb)
■ Added SunOne Web Server 6.1 agent
support with SE 9970/9980
support
■ Revised Storage Support, Interconnects and
Data Configuration sections
1/28/03
■ Added V1280 support 4/1/03
■ Added SDLM support
■ Added SE 6120 support
■ Added non-support statement for
■ Added 4 nodes Sun Fire Link Support
multipathing to the local disks of a SF v480/
■ Added E450 S1 storage support
v880
■ Single dual-controller, split-bus SE 3310
■ Added Sun Fire Link support for 6800
JBOD configuration support removed
■ Added WDM support for V280, 480, 880
■ Revised Storage Support and Interconnects
■ Added WDM support for OPS/RAC
sections
(removed the RAC/OPS restriction)
■ Added SAP 6.20 support
■ Added 6768 HBA support for SF 6800/SE
■ Added support for RAC on GFS
9980
■ Added HA-Siebel 7.5 Sun Cluster 3.0 U3
4/15/03
support
■ Revised SE 3310 sections ■ Added SE 2GB FC 64 Port Switch Support
■ Expanded Brocade 3800 support to SBUS
2/11/03 systems with T3s/39x0
■ Expanded SE 9970/9980 support for E 420
■ VLAN phase 2 (switch trunking) enabled
■ Revised several sections
■ Slot 1 DR support added
■ Added 6757 McData 6064 support with
5/6/03
9980/ E10k
■ Added HA IBM WebSphere MQ agent ■ Added SE 6320 support
support ■ Added Sol 8 12k/15k SCI support
■ Added HA IBM WebSphere MQ Integrator ■ Added 12k/15k Sol 9 DR Slot 1 support
agent support ■ Added RSM support with RAC
■ Added HA Samba Agent support ■ Revised interconnect and storage sections
■ Added HA DHCP support
■ Added HA NetBackUp 3.4 agent support for 5/20/03
Solaris 9 ■ Added Sun Cluster 3.1. All sections were
■ Revised A5x00 and SE 3310 Storage sections “generified” to Sun Cluster 3 (unless
■ Revised agents, server support, interconnect otherwise specified)
support sections ■ Added SF V210/V240 support
■ Added additional SE 6320 support
■ Added Sol 9 12k/15k SCI support
5/11/04 8/31/04
■ Sun Netra 440 DC ■ Support for Netra 440 X6799 and X6541
■ hsPCI+ for 12K/15K
■ Single SE 3120 JBOD Split Bus 9/14/04
■ SE 3510 8 array expansion
■ Support for Sun StorEdge 9990
■ Sun Cluster Open Storage
■ Support for Sun Fire V490/890
■ HA-Oracle Agent for Oracle 10G on Sun
■ Support for X4444A card with Sun Fire 20/
Cluster 3.0
25K
6/1/04 10/05/04
■ Expanded Campus Cluster Support
■ Support for Sun LW8-QFE card
including McData 4500
■ HA-Oracle Agent for Oracle 10G on Sun
10/19/04
Cluster 3.1
■ SAP DB agent Support (SPARC) ■ Support for SE 6130
■ App Server J2EE Support (SPARC) ■ Support for 4 card SCI without DR
■ 8 Node support for SE 6120/6130
■ x86 Support matrix addendum 11/02/04
■ Support for Oracle 10G RAC on Solaris
6/15/04 SPARC
■ Support for SE 6920 ■ XMITS PCI IO boards for Serengeti class
systems with Sun Cluster
7/13/04
11/16/04
■ Support for SE 3511 RAID
■ Support for SE 320 ■ Sun Cluster 3.1 9/04
■ Support for Brocade 3250, 3850 and 24000
switches 12/07/04
■ Support for SE 3310 with V440/Netra 440 on ■
board SCSI
■ EMC Symetrix DMX, 8000, EMC Clariion 1/11/05
CX300,CX400,CX500,CX600 and CX700
■ Jumbo Frames Support
8/03/04
2/01/05
■ Support for 3510 and 3511 RAID arrays with
eight nodes connected to a LUN ■ 10G RAC with SVM Cluster Functionality
■ Support for Netra 440 with the X4422A
(cauldron S), SG-XPCI1FC-QF2, SG- 3/08/05
XPCI2FC-QF2 and X4444A cards ■ Support for Netra 440 and Jasper 320
■ Support for QLogic 5200 Switch
8/17/04
■ Support for Netra 440 AC with X3151A card 4/05/05
■ Support for Sun Fire V40z with SE 3310
■ Support for Public Network VLAN Tagging
RAID and X4422A (cauldron S) HBA.
■ Support for Brocade 4100 FC SPARC
■ Support for HA Siebel 7.7 ■ Support for Sun Fire V40z dual core
■ Support for Sun 4150A51A Cards processors
4/19/05 8/23/05
■ Support for HA Sybase 12.5 agent ■ Support for SE 9985 with Sun Cluster
■ Support for Oracle 10G with Shared QFS ■ Panther processor support
■ HA-Oracle 10G on Solaris 9 x86
5/17/05 ■ Miscellaneous updates
■ Support for NEC iStorage
■ Miscellaneous Updates
9/27/05
■ Support for Brocade 200E and 48000
6/7/05 ■ Support for 3310 RAID and V40z with SG-
■ Support for 3310/3120 JBOD XPCI1SCSI-LM320
■ Support for X4444A ■ Panther processor support for E2900,4900
and 6900
■ Support for SG-XPCI2SCSI-LM320 (Jasper
320) ■ Support for Sybase 12.5.2 and 12.5.3
■ Support for Sybase ASE 12.5.1 (SPARC)
10/11/05
7/12/05 ■ Support for AVS 3.2.1
■ Support for Sun 5544A Card (SPARC) ■ Support for SE 3320
■ Support for Sun Emulex Cards (Rainbow) ■ Panther processor support for E20 and 25K
SG-XPCI21C-EM2 and SG-XPCI2FC -EM2 ■ Misc. updates and corrections
(SPARC)
■ Support for Sun Fire V440 On Board HW 11/11/05
RAID ■ Galaxy Servers
■ Support for SE 9990 with HDLM 5.4 ■ Fibre Channel storage for x64
■ Support for Sun 4150/4151A card on Solaris ■ Support for 3320 on x64
x86 ■ Support for Infiniband on x64
■ Support for Shadow Image and TrueCopy ■ Support for HA Oracle 10gR1 on x64
with SE 9990 ■ Corrections on agents
■ Support for Sun Fire V40z On Board HW ■ Misc. updates and corrections
RAID
12/10/05 4/18/06
■ 8 Node Oracle RAC support with V40z
■ T2000 support for SCSI storage
1/10/06 ■ Support for RoHS NICs
■ Support for T2000 Server ■ Updated storage support for Netra 240
■ Added License part numbers for Sun Cluster
Geo Edition
1/24/06 ■ Added License part numbers for Sun Cluster
■ Support for 6920 with x64 Clusters Advanced Edition for Oracle RAC
■ Updated Version Support for MySQL and
WebSphere MQ agents
■ Support for single dual-port HBA as path to 7/11/06
shared storage ■ StorageTek 6540 Array
1/09/07
4/4/06 ■ Support for the Sun Fire X2100 M2 and
■ Oracle RAC 10gR2 for x64 X2200 M2 servers
■ 4422A support for Solaris 10x64
2/06/07
■ Support for McData 4500 and 4700 switches
■ Update MySQL agent section
■ Support for 99x0 with T2000
■ Update Samba agent section
■ Support for the Sun Blade x8420 (A4F) ■ Add new support of ST2540 (FC)
Server Module
■ Support of Netra 210 for Diskless Cluster 5/08/07
Config ■ Add Sun Cluster Geographic Edition section
■ Change of config guide ownership from ■ Update Spec-Based Campus Cluster section
Matt Hamilton to Hamilton Nguyen
■ Consolidate various Campus Cluster entries
3/06/07 ■ Update Siebel 7.8.2, SwiftAlliance Access
and SwiftAlliance Gateway support for Sun
■ Update the entire config guide with Sun Cluster3.1 (SPARC) table
Cluster 3.2 data
■ Add Cisco 9124, Brocade 5000, Qlogic 9100
■ Update V210/V240 Server Configuration and 9200 to list of FC switches supported
section
■ Update 5544A/5544A-4 support with
■ Update SE3511 RAID Configuration Rules additional servers
section
■ Add Sun NAS 53XX note
■ Update Private Interconnect Technology
Support section ■ Add Minnow firmware note
■ Add STK6140 and two additional HBAs to ■ Update QFS and Oracle RAC tables (x64 and
Sun Blade 8000 support matrix SPARC)
■ Add new Netra x4200 M2 support matrix ■ Add new Sun Blade 8000 P support matrix
■ Add Spec-Based Campus Cluster section ■ Add Sun SPARC Enterprise M4000, M5000,
M8000 and M9000 supports
■ Add SAN4.4.12 note
6/05/07
4/03/07
■ Add Sun Blade T6300 support
■ Add SE 9970/9980 and SE 9985/9990
supports to x4600 Matrix ■ Add StorageTek 6540 support with x64
servers
■ Add note related to Info Doc#88928 to T2000
section ■ Add External I/O Expansion Unit for Sun
SPARC Enterprise Mx000 Servers
■ Add Oracle Application Server support to
Failover Services for Sun Cluster 3.2 (x64) ■ Add Apache Tomcat 6.0 support
table ■ Add/update AVS support including AVS 4.0
■ Add HA Oracle support to Failover Services ■ Add SAP support to Failover Services for
for Sun Cluster 3.2 (SPARC and x64) and Sun Cluster 3.2 (x64)
Failover Services for Sun Cluster 3.1 (x64) ■ Update SAP with agent support in zones to
tables Failover Services for Sun Cluster 3.2
■ Add JES Directory Server/JES Messaging (SPARC)
Server/Netbackup notes to Failover Services ■ Update Swift Alliance Access and Gateway
for Sun Cluster 3.1 (SPARC) and Failover sections with Solaris 10 11/06 support
Services for Sun Cluster 3.2 (SPARC) tables
■ Add new support of V125
■ Add IB notes/Update IB support
7/10/07 9/14/07
■ Add 802.3ad Native Link Aggregation ■ Add CP3060 SPARC Blade for Netra CT900
support with Public Network ATCA Server support
■ Add new support of SE 9990V ■ Update CP3010 SPARC Blade for Netra
■ Update Oracle RAC table (Sun Cluster 3.2 CT900 ATCA Server with SE3510 support
SPARC) with additional storage support ■ Add Solaris 10 Update 4 support with Sun
■ Update Sun Cluster Geographic Edition and Cluster 3.2
Oracle table with additional config support ■ Update Guideline for Spec Based Campus
■ Update MySQL with incrementally Cluster Configurations section with support
supported versions of HDS as quorum device
■ Update SAP support (Sun Cluster 3.2 x64) ■ Update Cluster Interconnect section of
Network Configuration chapter
■ Add note to Diskless Cluster section as
related to inclusion of Quorum Server ■ Update link aggregation info in IPMP
Support sub-section under Public Network
■ Update Andromeda tables with additional section
hardware support
■ Add configuration rule to SE 99xx sections
■ Update V215, V245, V445 and V490 on mixing FC HBAs that are and are not
platforms with additional SE 99xx support MPxIO supported
■ Update Netra 440 platform with additional ■ Update JES Messaging Server with version
storages support 6.3 and JES Directory Server with version
■ Update T1000 platform with SCSI-based 5.2.x in Failover Services for Sun Cluster 3.2
storage support (SPARC) table
■ Update Cluster Interconnect and Public ■ Update both SwiftAlliance Access and
Network tables with additional NICs support SwiftAlliance Gateway with version 6.0 in
Failover Services for Sun Cluster 3.2
8/07/07 (SPARC) table
■ Add CP3010 SPARC Blade for Netra CT900 ■ Update N1 Grid Engine 6.1 in Failover
ATCA Server support Services for Sun Cluster 3.1 (SPARC and x64)
and Sun Cluster 3.2 (SPARC and x64) tables
■ Add Solaris 9 support to V215 and V245
platforms ■ Add Sybase ASE support to Failover
Services for Sun Cluster 3.2 (x64) table
■ Update Campus Clusters chapter
■ Update Sybase ASE entry in Failover
■ Update True Copy Support section
Services for Sun Cluster 3.2 (SPARC) table
■ Add additional PCI-E ExpressModule with non-global zones support
Network Interfaces to Cluster Interconnect and
Public Network tables 10/09/07
■ Update Supported SAN Software section ■ Add Sun SPARC Enterprise T5120 and T5220
with release SAN 4.4.13 note platforms support
■ Update Siebel 7.8.2 entry in Failover Services ■ Add new support of SE 9985V
for Sun Cluster 3.1 (SPARC) table with
Solaris 10 support ■ Update Sun Blade T6300 platform with
additional HBA support
■ Update X2100 M2 and X2200 M2 Servers with ■ Update SE 99xx with Mx000 support
SE3120, SE3310 and SE3320 supports ■ Update SE3320 RAID Support Matrix with
■ Update SE3310, SE3320, SE3510 and SE3511 Netra 1290 support
with Minnow 4.21 firmware ■ Update Sun Blade T6300 platform with
■ Update Netra1290 with ST6140 and ST6540 LDOM support
supports ■ Update Mx000 with DR support
■ Update/add SAP Livecache 7.6 and SAP ■ Add Cisco 9134 and 9222i to list of FC
MaxDB 7.6 entries in Failover Services for switches supported
Sun Cluster 3.2 (SPARC & x64) tables
■ Update QFS tables with SAM-QFS (Shared)
■ Update MySQL version in Failover Services 4.6 support
for Sun Cluster 3.1 (SPARC & x64) and
Failover Services for Sun Cluster 3.2 (SPARC ■ Update Samba with incrementally supported
& x64) tables versions for both Solaris Cluster 3.1 and 3.2
11/06/07
■ Add Sun Blade x6220 and x6250 Server Modules support
■ Add Sun Blade T6320 Server Module support
■ Add new section to introduce Support for Virtualized OS Environment (LDOM)
■ Update Solaris Container agent for Sun Cluster 3.1 with native and 1x brand support
■ Update Guideline for Spec Based Campus Cluster Configurations with support of HDS as quorum device for Sun Cluster 3.1u4
■ Update SE3120 JBOD Support Matrix with E6900 support
■ Update Sun Blade 8000 Support Matrix with x7287A-Z support
■ Update x4100 M2, x4200 M2, Netra x4200 M2, x4600 and x4600 M2 with x4446A-Z support

12/04/07
■ Add Sun Blade x8440 Server Modules support
■ Add Sun Fire X4150 and X4450 Servers support
■ Update ST2540 with M4000, M8000, M9000 and Sun Blade X84xx support
■ Update Volume Manager tables with additional S10U4 support
■ Update Netra X4200 M2 with additional ST2540 RAID Array support
■ Update Sun Blade 6000/6048/8000 Support Matrix with additional NIC support

01/08/08
■ Add ST2530 (SAS) and SAS HBAs support
■ Add Sun Blade 6048 chassis support
■ Update Sun Blade 60xx Support Matrix with Infiniband interconnect (x1288A-Z) and ST 99xx storage support
■ Update Sun SPARC Enterprise T5120 and T5220 platforms with SCSI storage support
■ Add Sybase version 15.0.1 and 15.0.2 support in Failover Services for Sun Cluster 3.1 (SPARC) and Sun Cluster 3.2 (SPARC and x64) tables
■ Add Brocade DCX to list of SAN switches supported
■ Update Mx000 with additional ST 99xx support
■ Update External I/O Expansion Unit for Sun SPARC Enterprise Mx000 Servers with additional NICs

02/05/08
■ Update ST2540 with additional servers support
■ Update ST2530 (SAS) with additional servers and HBAs support
■ Update Sun SPARC Enterprise T5120/T5220 with additional ST6540 Array support
■ Update Oracle Server with version 11g support in Sun Cluster 3.1 (SPARC) and Sun Cluster 3.2 (SPARC) tables
■ Update Oracle RAC with version 11g support in Sun Cluster 3.1 (SPARC) and Sun Cluster 3.2 (SPARC) tables
■ Update Oracle Application Server with version 10.1.3.1 support in Sun Cluster 3.2 (SPARC and x64) tables
■ Update Oracle Business Suite with version 12.0 support in Sun Cluster 3.2 (SPARC) table
■ Add HA Container (1x and Solaris8 branded) support to Sun Cluster 3.2 (SPARC and x64) tables
■ Update BEA Web Logic Server with version 9.2 support in Sun Cluster 3.2 (SPARC and x64) tables
■ Update JES Application Server with version 9.1EE support in Sun Cluster 3.2 (SPARC and x64) tables
■ Update CP3060 SPARC Blade for Netra CT900 ATCA Server with additional HBA support
■ Update Sun Blade 8000 and 8000P with additional SE 99xx support
■ Update Sun Fire X4100 M2/X4200 M2, X4450, X4600, X4600 M2 with additional SE 99xx support
■ Update Netra 440, Netra 1280, SF V440, SF V445, SF V480, SF V490 with additional NIC support
■ Update Sun SPARC Enterprise M5000 with ST2540 support
■ Update the maximum number of Cluster nodes (x64) from 4x to 8x

03/04/08
■ Add Sun Blade x8450 Server Module support
■ Add Universal Replicator support with SE 9985V/SE 9990V
■ Add ST2530 support with T5120/T5220
■ Update supported SAN software for Sun Cluster on Solaris 9
■ Update SE 9985V/9990V with x64 support
■ Update Siebel agent with additional version 8.0 support in Failover Services for Sun Cluster 3.2 (SPARC)
■ Update Sun Blade x6220 and x6250 Server Modules with SE 9985V/9990V support
■ Update Sun SPARC Enterprise T5120 and T5220 with SE 99xx support

04/01/08
■ Add ST2510 (iSCSI) support
■ Add Sun SPARC Enterprise T5140 and T5240 support
■ Add Sun Fire X4140 and X4240 Servers support
■ Add Sun Fire X4440 Server support
■ Add Sun StorageTek NAS support for any data services with more than 2-node
■ Add support of SRDF in a campus cluster configuration
■ Update Sun Cluster Geographic Edition appendix to reflect SCGE3.2U1 release
■ Update VxVM (on x64 and SPARC) tables to reflect SC3.2U1 release

05/13/08
■ Add S10U5 support with SC3.2
■ Add Brocade 300, 5100 and 5300 switches
■ Add x7285A and x7286A NICs support
■ Update Sun Blade T6320 with X4236A support
■ Update MySQL in Failover Services for Sun Cluster 3.2 (SPARC and x64) with additional version
■ Update External I/O Expansion Unit for Sun SPARC Enterprise Mx000 Servers with additional NIC support

10/14/08
■ Add Sun SPARC Enterprise T5440 Server support
■ Add Sun Blade T6340 Server Module support
■ Add Sun Fire X4540 Server support
■ Update Sun Blade X6220 and X6250 Server Module with x4236A NEM10G support
■ Update Sun Cluster 3.2U1 with Solaris 10U6 support
■ Update SCGE/Oracle RAC table with VxVM support involving TrueCopy/S10 x86/SCGE3.2 and SRDF/S10 x86/SCGE3.2U1
■ Update SWIFT Alliance Access in Failover Service for Sun Cluster 3.2 (SPARC) with version 6.2
■ Update x2100M2/x2200M2, x4100M2/x4200M2, x4140, x4150, x4240, x4440, x4450, x4600/x4600M2 with additional HBAs support
■ Update External I/O Expansion Unit with T5120, T5140, T5220 and T5240 support
■ Add Brocade 310 switch support
12/09/08
■ Add 4x 8GB FC PCIe HBAs support
■ Update Sun StorageTek 9985V/9990V with M3000, T5440, T6340, and X4200
■ Add new J4200 storage support
■ Update Sun StorEdge 9985/9990 with T6340, M3000, T5440, X2200 M2

09/01/09
■ Add Sun Blade 6048 for SPARC blades
■ Update Network Configuration chapter, separating ExpressModules and Network Express Modules into separate tables
■ Add Dhole X4822A FEM
■ Update Sun StorEdge 9970/9980 with M4000

10/13/09
■ Add Sun StorageTek 9985V/9990V 16-node N*N RAC support
■ Add Sun Storage 7000 support for RAC over NFS
■ Add Sun Storage 6180
■ Re-add Netra X4450 info (lost since 10/14/08?)
■ Update Apache Web Server agent with Zone Cluster support
■ Update HA Oracle with Zone Cluster support
■ Update Java MQ agent with 4.3 support
■ Update MySQL agent with 5.0.85 and Zone Cluster support
■ Update SS 7000 iSCSI LUN fencing and scsi2/scsi3 quorum device support with SW 2009.Q3
■ Update ST 3320 JBOD that new single-bus configs not supported per FAB 239464
■ Update External I/O Expansion Unit support for the SE9900 line
■ Relocate/integrate Sun StorageTek 5000 NAS info to the Ethernet Storage Support chapter
Index

C
campus clusters 287
  configurations 287
  maximum nodes 287
  SAN configurations 288
  TrueCopy 291
Cluster Control Panel (CCP) 264
cluster topologies 3

F
failover services
  Agfa IMPAX
    Sun Cluster 3.1 223
    Sun Cluster 3.2 230
  Apache Proxy Server
    Sun Cluster 3.1 223
    Sun Cluster 3.2 230, 238

I
interconnect 183
  Ethernet 185
  junction-based 184
  PCI/SCI 186
  point-to-point 184
  Sun Fire Link 187
  technologies supported 185
  VLAN support 185
iPlanet Mail/Messaging Server 311
IPMP 217

J
J4200 JBOD array 169
J4400 JBOD array 169
JES Application Server
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 231, 239
JES Directory Server
  Sun Cluster 3.1 224
  Sun Cluster 3.2 231
JES Messaging Server
  Sun Cluster 3.1 224
  Sun Cluster 3.2 231
JES MQ Server
  Sun Cluster 3.1 227, 230
  Sun Cluster 3.2 237, 242
JES Web Proxy Server
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 232, 239
JES Web Server
  Sun Cluster 3.1 224, 229, 243, 244
  Sun Cluster 3.2 232, 239, 244, 245

K
Kerberos
  Sun Cluster 3.2 232, 240

L
local storage 39
LUN Manager 114, 118, 122, 125
LUSE 113, 117, 121, 125

M
managing clusters 263
meta devices 250
minimum CPUs 15
Multipack 131
multipathing 217
MySQL
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 232, 240

N
N*N topology 7
N+1 topology 5
N1 Grid Engine
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 232, 240
N1 Grid Service Provisioning System
  Sun Cluster 3.1 224, 229
  Sun Cluster 3.2 232, 240
NAFO 216
NAS storage
  Sun Storage 7000 Unified Storage System 179
  Sun Storage 7110 Unified Storage System 181
  Sun Storage 7210 Unified Storage System 181
  Sun Storage 7310 Unified Storage System 181
  Sun Storage 7410 Unified Storage System 181
  Sun StorageTek 5000 NAS Appliance 175
  Sun StorageTek 5210 NAS Appliance 177
  Sun StorageTek 5220 NAS Appliance 177
  Sun StorageTek 5310 NAS Appliance 178
  Sun StorageTek 5320 NAS Appliance 178