
RELEASE NOTES

EMC SRDF/Cluster Enabler Plug-in


Version 4.2.1

Release Notes
REV 01

March 16, 2015

These release notes contain supplemental information about EMC SRDF/Cluster Enabler for Microsoft Failover Clusters Version 4.2.1. The notes cover the following topics:

Product description
New features and changes
Environment and system requirements
Known problems and limitations
Technical notes
Documentation
Software media, organization, and files
Installation
Troubleshooting and getting help

Product description
SRDF/Cluster Enabler (SRDF/CE) Version 4.2.1 is a software plug-in module to EMC
Cluster Enabler for Microsoft Failover Clusters software. Cluster Enabler (CE) plug-in
architecture consists of a CE Base Component and separately available plug-in
modules, which support your chosen storage replication technology.
The SRDF/CE plug-in module provides a software extension of failover clusters
functionality that allows Windows Server 2008 (including R2) Enterprise and
Datacenter editions, and Windows Server 2012 (including R2) Standard and
Datacenter editions running Microsoft Failover Clusters to operate across multiple
connected VMAX arrays in geographically distributed clusters.
Each cluster node is connected through a storage network to the supported VMAX
array. Once configured using the EMC Cluster Enabler Manager graphical user interface
(GUI), Microsoft Failover Clusters are referred to as CE clusters. SRDF/CE software can
support up to 64 shared quorum disk clusters per VMAX array pair. There is no limit on
the number of Majority Node Set (MNS) clusters per VMAX array pair.
Using an SRDF link, SRDF/CE expands the range of cluster storage and management
capabilities while ensuring full business continuance protection. A Fibre Channel
connection from each cluster node is made to its own VMAX array. Two VMAX arrays
are connected through SRDF to provide automatic failover of SRDF-mirrored volumes
during a Microsoft failover cluster node failover. This connection effectively extends
the distance between cluster nodes to form a geographically distributed cluster with
disaster-tolerant capabilities.
SRDF/CE protects data from storage, system, and site failures, 24 hours a day, 7 days
a week, and 365 days per year.

Note: The EMC Solutions Enabler SRDF Family 8.0.2 CLI User Guide provides more detailed information about SRDF operations. The EMC Networked Storage Topology Guide provides additional information regarding distance restrictions for your specific configuration.

EMC SRDF/Cluster Enabler Plug-in Version 4.2.1 Release Notes

New features and changes


This section details functional changes, new features, and software enhancements
provided in EMC SRDF/Cluster Enabler Version 4.2.1. These enhancements extend the
existing functionality, providing the industry-leading technology that you expect from
EMC.
For information on previously released features, or any updates to this document,
refer to the corresponding release notes located on EMC Online Support at:
https://support.EMC.com

Solutions Enabler and HYPERMAX OS support


Cluster Enabler version 4.2.1 supports Solutions Enabler Version 8.0.2 and the
HYPERMAX OS 5977 operating environment for VMAX arrays.

VMAX3 support of cascaded and concurrent SRDF


VMAX3 arrays can be part of cascaded and concurrent SRDF configurations. The
arrays need to run HYPERMAX OS 5977 Q1 2015 SR.

Environment and system requirements


This section lists the hardware, software, EMC Enginuity, and HYPERMAX OS
requirements for an SRDF/CE cluster.

Hardware requirements
You need the following hardware to create a geographically distributed failover cluster
using SRDF/CE software.

VMAX and VMAX3 arrays

Any of the following storage arrays:


A VMAX3 storage array running HYPERMAX OS 5977 Q1 2015 SR or later
A VMAX 40K storage array running Enginuity version 5876 or later
A VMAX 20K/VMAX storage array running Enginuity version 5874 or later
A VMAX 10K/VMAXe storage array running Enginuity version 5875.231.172 or
later as documented in the E-Lab Interoperability Navigator at:
http://elabnavigator.EMC.com
A currently supported DMX storage array running Enginuity version 5671 or
later

Two or more DMX or VMAX arrays with Remote Link Director (RLD) cards installed
and RDF links established in synchronous or asynchronous mode with auto
recovery enabled. The VMAX arrays must have RDF-type disks for the device
groups.
Note: Each array pair supports up to 64 failover clusters using SRDF/CE software.

SRDF must be configured for bidirectional SRDF/Synchronous or SRDF/Asynchronous operation. Dual unidirectional RDF links will work; however, SRDF/CE does not report on down links with a dual unidirectional configuration.

A minimum of one HBA is required for connectivity to the VMAX array. If PowerPath software is used, a minimum of two HBAs are required. For HBA driver recommendations, refer to the EMC SRDF/Cluster Enabler Plug-in Product Guide. For clustering reliability, it is recommended to have at least two paths per device.

Each VMAX array must be configured with six or more gatekeeper devices per
node, per side for SRDF/CE exclusive use. Gatekeepers should be mirrored with
signatures. This prevents Microsoft failover clusters from hiding them. In addition,
for performance reasons, an additional gatekeeper should be associated with
each SRDF/CE device group.
Note: It is recommended that you allocate one additional gatekeeper on each host
for each SRDF/CE group created for optimum performance. The gatekeepers
should be associated on a per group basis and should be mirrored.

All disks need to be Read Write visible to the intended hosts at least once to allow
Windows to fully populate its internal data stores.
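As a rough sizing sketch, the gatekeeper guidance above (six gatekeepers per node per side for SRDF/CE exclusive use, plus one additional gatekeeper per SRDF/CE group) can be expressed as a small calculation. The function name and exact formula are illustrative, not from EMC documentation:

```python
def gatekeepers_per_array(nodes_on_side: int, ce_groups: int, base_per_node: int = 6) -> int:
    """Estimate gatekeeper devices to configure on one VMAX array.

    Follows the guidance above: six gatekeepers per node on that side,
    plus one extra gatekeeper associated with each SRDF/CE group.
    Illustrative helper only; verify counts against the product guide.
    """
    return base_per_node * nodes_on_side + ce_groups

# Example: a two-node side hosting three SRDF/CE groups.
print(gatekeepers_per_array(2, 3))  # 6*2 + 3 = 15
```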

Servers
Two or more x64 servers with separate external Fibre Channel buses. All servers in a
cluster must be of the same processor architecture.

Disks
SRDF/CE requires that all disks have signatures. You can assign disk signatures using Windows device management initialization or other third-party tools.

Network
Microsoft failover cluster compatible network configuration, as recommended by
Microsoft.

Mandates
EMC mandates that all hardware configurations (servers, adapter cards, cluster
system, and so forth) be supported in the SRDF/CE cluster environment, as
documented in the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com

CSV configuration
EMC recommends having two or more nodes per site when managing cluster shared
volumes. However, high availability may not be guaranteed in the event of storage or
site failures.
Note: SRDF/CE does not support the Optimized CSV Placement Policies feature of Microsoft Windows Server 2012 R2. Ensure that you switch off the feature by setting the value of the CSVBalancer property to 0.
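As a hedged sketch, the property change in the note above could be applied from a cluster node via PowerShell. The property name is taken from the note; the Get-Cluster assignment syntax is an assumption, not from EMC documentation:

```python
import subprocess

def csv_balancer_off_command() -> str:
    # Compose the PowerShell statement that sets the cluster's
    # CSVBalancer property to 0 (property name per the note above).
    return "(Get-Cluster).CSVBalancer = 0"

def disable_csv_balancer() -> None:
    # Run on any cluster node that has the FailoverClusters PowerShell
    # module available; requires administrative rights (assumption).
    subprocess.run(["powershell", "-Command", csv_balancer_off_command()], check=True)

print(csv_balancer_off_command())
```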

Software requirements
To install and configure SRDF/CE, the following Microsoft software and EMC software
must be installed on all nodes that make up the SRDF/CE cluster.

Microsoft software
The software requirements for Microsoft products are:

Installation on Windows Server 2008 and 2012 requires that Microsoft Windows
Installer Version 4.5 first be installed.

Windows Server 2008 Enterprise or Datacenter editions, Windows Server 2008 R2 Core, Enterprise, or Datacenter editions, Windows Server 2012 Standard or Datacenter editions, and Windows Server 2012 R2 Standard or Datacenter editions require that all nodes first be installed with the Failover Cluster feature.

For Failover Cluster on Windows Server 2008 and 2012, Microsoft Cluster Validate must pass all tests except storage. For more information, see support.microsoft.com/kb/943984.

Microsoft Internet Explorer 6.0 or later (required only for online help). The latest release is currently 11.

In addition, be aware of the following points:

A wmiprvse.exe process may leak memory when a WMI notification query is used
heavily on a Windows Server 2008 system.
Refer to Microsoft Knowledge Base article 958124 for a hotfix download:
http://support.microsoft.com/default.aspx?scid=kb;EN-US;958124
In addition, refer to EMC Knowledge Base article 91194 Best Practices for SRDF
Cluster Enabler (SRDF/CE) that includes suggested settings for WMI:
http://support.emc.com/kb/91194

SRDF/CE supports Cluster Shared Volumes (CSV). Once converted using the CE
Configuration wizard, CSV disks display under Cluster Shared Volumes in the left
pane navigation tree of the CE Manager. The EMC SRDF/Cluster Enabler Plug-in
Version 4.2.1 Product Guide and the EMC Cluster Enabler Base Component
Version 4.2.1 Release Notes provide additional information.


EMC software
The software requirements for EMC software are as follows:

Solutions Enabler Version 8.0.2 or later must be installed prior to installing SRDF/CE Version 4.2.1.

Group Name Services (GNS) for Symmetrix arrays is not supported on hosts
configured with SRDF/CE.

PowerPath is optional, and only required if more than one HBA is attached per
host and you want to load balance between the HBAs.

VMware software
SRDF/Cluster Enabler Version 4.0 and later supports the configuration of a four-node Windows Server 2008 (including R2) or Windows Server 2012 (including R2) cluster in VMware ESX Server environments.

HBA driver firmware and software


SRDF/CE supports the STORport driver for both Emulex and QLogic.

SRDF/CE installation and operating requirements


A domain-based user ID with effective local administrator rights on each node of the
cluster can install, configure and manage SRDF/CE. SRDF/CE requires a minimum of
64 MB of disk space, not including runtime logging storage.

Cascaded SRDF
SRDF/CE supports cascaded SRDF configurations for SRDF synchronous and
asynchronous modes as follows:

Synchronous for Site A to Site B (R1 → R21)

Asynchronous for Site B to Site C (R21 → R2)

The EMC SRDF/Cluster Enabler Plug-in Version 4.2.1 Product Guide provides details
on both concurrent and cascaded SRDF/CE supported environments.


RDF N-X support


RDF N-X allows users to replicate data between a VMAX array running Enginuity 5876
and a VMAX3 array running HYPERMAX OS 5977 using Solutions Enabler V8.0.2. The
replication can occur in either direction.
With the addition of HYPERMAX OS 5977, the following combinations are allowed when creating RDF groups between HYPERMAX OS 5977 and Enginuity versions:
HYPERMAX OS 5977 ↔ HYPERMAX OS 5977
HYPERMAX OS 5977 ↔ Enginuity 5876 with fix number 67492

Known problems and limitations


The following are performance or functionality restrictions and limitations that may apply to your specific storage environment or host platform. Unless explicitly stated, these problems and limitations apply to both VMAX arrays running Enginuity 5876 or later and VMAX3 arrays running HYPERMAX OS 5977 or later.

Continued support
Support for SRDF/Cluster Enabler for MSCS V4.1.2 ended in November 2013. It is recommended that customers who are still running V4.1 or earlier upgrade to the latest version as soon as possible.

Recovery from SRDF link failure


After restoring from an SRDF link failure, you must run the Storage Discover Wizard in
the Cluster Enabler Manager to discover all nodes and refresh the CE cluster. Select
the Cluster icon from the Navigation Console Tree and select Action and Storage
Discover from the menu bar. This opens the first page of the Storage Discover Wizard.

SRDF partitioned link state and maximum failures


For Windows Server 2008, if the SRDF link state is partitioned and the group policy is
set to restrict group movement, the CE group may not failback properly when moved a
second time. This error is due to a default setting of 2 in the MSCS Group Properties
for the Maximum failures in a specified period. Refer to your Windows Server 2008
documentation for instructions on changing the default setting.

Cluster support for R2 side larger than R1 side


SRDF/CE does not support clusters where the R2 side is larger than the R1 side. The
reason for this is that when the system fails over to the R2 side, it can never fail back
since the R2 cannot resynchronize all of its data back to the R1 side.

Dynamic swaps limitation with Enginuity version 5670


Dynamic swaps of SRDF/A devices are not supported in Enginuity version 5670.


Modifying SRDF/CE groups using SYMCLI not supported


Modifying SRDF/CE groups directly using SYMCLI commands is not supported.
SRDF/CE does not detect changes that occur outside the SRDF/CE GUI. Changes made to SRDF/CE groups outside of the SRDF/CE GUI result in an unusable cluster.

Improving SRDF/CE performance


To improve the overall performance of SRDF/CE, you can edit the SYMAPI options file.
The SYMAPI options file is typically located in the following directory:
C:\Program Files\EMC\SYMAPI\config\options

To improve the performance of SRDF/CE, edit the options file and add the following
line to the end of the file:
SYMAPI_AUTO_COMMIT_DB_RELOAD = ENABLE

This enables faster performance when accessing the SYMAPI database.

SYMAPI database and devices


If a CE cluster is destroyed without performing a deconfigure action, CE groups from
that cluster are not deleted from the SYMAPI database and devices in those existing
groups cannot be used to create new groups. To correct the problem, edit the SYMAPI
options file. The SYMAPI options file is typically located in the following directory:
C:\Program Files\EMC\SYMAPI\config\options

Edit the options file and add the following line to the end of the file:
SYMAPI_ALLOW_DEV_IN_MULT_GRPS = ENABLE

This enables the same device to be added to multiple groups in the SYMAPI database.
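Both procedures above append a single option line to the SYMAPI options file. A small helper (illustrative only, not an EMC-supplied tool; the options file path varies by install) can make the edit idempotent so repeated runs do not duplicate lines:

```python
import os
import tempfile

def set_symapi_option(options_path: str, line: str) -> bool:
    """Append a SYMAPI option line if it is not already present.

    Returns True if the file was modified, False if the line already exists.
    """
    existing = []
    if os.path.exists(options_path):
        with open(options_path, "r") as f:
            existing = [l.strip() for l in f]
    if line in existing:
        return False
    with open(options_path, "a") as f:
        f.write(line + "\n")
    return True

# Demo against a scratch file; on a real host, point options_path at the
# SYMAPI options file, e.g. r"C:\Program Files\EMC\SYMAPI\config\options".
demo = os.path.join(tempfile.mkdtemp(), "options")
print(set_symapi_option(demo, "SYMAPI_AUTO_COMMIT_DB_RELOAD = ENABLE"))  # True (added)
print(set_symapi_option(demo, "SYMAPI_AUTO_COMMIT_DB_RELOAD = ENABLE"))  # False (already present)
```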

Witness disk quorum model


When configuring a cluster or changing the cluster model to a witness disk quorum model, ensure that the witness disk selected for the cluster is replicated synchronously.

SRDF composite groups


SRDF/CE does not support SRDF composite groups that span across multiple arrays.


RDF devices for Windows Server 2008


All SRDF devices must be configured to have the SCSI3_persist_reserv device
attribute set to enabled. Otherwise, the devices fail to come back online. The EMC
Solutions Enabler Symmetrix Array Controls CLI Product Guide provides instructions
for setting Symmetrix device attributes.

Moving devices between device and composite groups


If you have moved, migrated, or swapped RDF devices in a CE group to a different RA
Group, running the Update Mirror Pairs action on the CE group may show invalid
configuration information for the group. To correct the problem, move the CE group to
another node, or take the CE group offline and bring it online again.
For example, if the group was originally online on node A, bring the group offline and
online again. If the group was originally online on node B, move the group to node A.

Cascaded SRDF
In cascaded SRDF/CE configurations, the cascaded disks cannot be used as the quorum disk. If the cluster loses quorum, incomplete failover occurs and the cluster goes down.
Also, cascaded SRDF disks are not discovered in CE if the R21 and R2 Symmetrix arrays are mapped to the same R2 host.

Cluster Shared Volumes


If any node in a failover cluster lacks read/write access to the Cluster Shared Volume (CSV) disk, the disk status is reported as Online [Redirected Access]. Because this applies to both R1 and R2 devices, the CSV disk is always reported as Online [Redirected Access], even if the Hyper-V node has read/write access to the CSV disk.

Nodes per site


EMC recommends having two or more nodes per site when managing cluster shared
volumes. However, high availability may not be guaranteed in the event of storage or
site failures.

Adaptive Copy Disk mode


Adaptive Copy Disk mode is not supported and Cluster Enabler prevents failover
operations on these groups.


Group Name Services


Group Name Services (GNS) for Symmetrix arrays is not supported on hosts
configured with SRDF/CE.

Remote cluster management


When a CE cluster is managed remotely from a server (management station) that is not part of that cluster, the Cluster Enabler version installed on all cluster nodes must be the same as on the management station.

Disks appear as Write Disabled or Not Ready when a split-brain situation occurs due to a communications failure

In a geographically distributed cluster that uses the File Share Witness model, it is possible for a communications failure between two sites to cause a cluster split-brain situation. In addition, it is possible that the Cluster Enabler resources on the production side appear as Write Disabled (WD) or Not Ready (NR). The SRDF link between the two VMAX arrays remains intact at all times.
In this case, it is possible for the CE resources (the disks in the disk array) to fail over to the disaster recovery side of the cluster before cluster arbitration has completed. If the disaster recovery side does not gain control of the cluster, its arbitration attempt fails. However, this leaves the disks in their failover state and thus unavailable on the production side of the cluster.
The workaround for this issue is to reduce the value of the Microsoft Cluster property that regulates the timeout period for quorum arbitration: change the value of the quorumarbitrationtimeoutmax property from 90 (its default value) to 50. This ensures that disks do not fail over until the disaster recovery side has successfully completed quorum arbitration.
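As a hedged sketch, the property change described above could be issued through the legacy cluster.exe command line. The property name is taken from the text above, and the cluster name (CECLUS1) is a hypothetical placeholder; verify the exact syntax against your Windows Server documentation:

```python
def quorum_timeout_command(cluster_name: str, seconds: int = 50) -> str:
    """Compose the cluster.exe command that lowers the quorum arbitration
    timeout, as described above. Illustrative only; property name per the
    text, default reduced from 90 to 50."""
    return f"cluster.exe {cluster_name} /prop quorumarbitrationtimeoutmax={seconds}"

print(quorum_timeout_command("CECLUS1"))
# cluster.exe CECLUS1 /prop quorumarbitrationtimeoutmax=50
```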

Technical notes
There are no technical notes presented with this release of SRDF/CE for Microsoft
Failover Clusters, Version 4.2.1.


Documentation
These release notes provide the latest information not included in the EMC SRDF/Cluster Enabler Plug-in Version 4.2.1 Product Guide.


The EMC SRDF/Cluster Enabler Plug-in Version 4.2.1 Product Guide is available on the
EMC online support website at:
https://support.EMC.com

Required documentation
The EMC Cluster Enabler Base Component Version 4.2.1 Release Notes is part of the
EMC Cluster Enabler for Microsoft Failover Clusters documentation set, and is required
for SRDF/Cluster Enabler.

Software media, organization, and files


The EMC SRDF/Cluster Enabler Plug-in for Microsoft Failover Clusters Version 4.2.1
software consists of the following:

EMC SRDF/Cluster Enabler Plug-in for Microsoft Failover Clusters Module Version
4.2.1 software

EMC SRDF/Cluster Enabler Plug-in Version 4.2.1 Product Guide

EMC SRDF/Cluster Enabler Plug-in Version 4.2.1 Release Notes

Note: Software and documentation are downloadable from EMC Online Support at
https://support.EMC.com.

Installation
Complete installation and upgrade instructions for the supported Microsoft Windows Server 2008 or 2012 systems are provided in the EMC SRDF/Cluster Enabler Plug-in Version 4.2.1 Product Guide. Simplified instructions are provided in this section for a clean install (where no previous version of CE exists) or for an upgrade from version 4.0 and above only.
It is recommended that you contact EMC Customer Support for assistance if any of the
following issues are applicable:

You have applications already layered with dependencies.

You need other devices online.

You are not confident about installing and configuring new software within the
context of Windows Server 2008 or 2012, Microsoft Failover Clusters, and
Symmetrix arrays with SRDF.

Supported upgrade versions


The supported versions of SRDF/CE that may be upgraded to Cluster Enabler Version
4.2.1 using the InstallShield wizard include only SRDF/Cluster Enabler for Microsoft
Failover Clusters Versions 4.1.x and later. The EMC SRDF/Cluster Enabler Plug-in
Version 4.2.1 Product Guide provides upgrade instructions.
If you have an older unsupported version that you would like to upgrade, contact your
EMC representative for assistance or proceed with a clean install after uninstalling the
older version.
Note: For a clean install, all existing clusters will have to be re-configured and any
unique settings in SRDF/CE will be lost.

Before you begin


Before installing SRDF/Cluster Enabler, read through all of the installation procedures
and related information found in the product guide to get an overall understanding of
the installation process. The configuration guidelines provide references for properly
installing hardware and software in an SRDF/CE cluster environment.


Installation requirements and considerations


Consider the following criteria before installation:

The Cluster Enabler Base Component is a prerequisite for the Cluster Enabler
plug-ins, and therefore must be installed prior to or with the plug-ins. For
instructions on installing the Base Component, refer to the EMC SRDF/Cluster
Enabler Plug-in Product Guide and the EMC Cluster Enabler Base Component
Release Notes.

The supported versions of CE that may be upgraded to Cluster Enabler Version 4.2.1 using the InstallShield wizard include only Cluster Enabler for MSCS versions 4.1.x and later.
Note: For a clean install, all existing clusters will have to be reconfigured and any
unique settings in CE will be lost.

Only the x64 (AMD64 and Intel EM64T) Windows processor architectures are
supported.

SRDF/CE does not require a license key.

For Enginuity versions earlier than 5875, SRDF/CE requires that the appropriate
license keys for Solutions Enabler SRDF/Synchronous and SRDF/Asynchronous be
entered to create SRDF pairs. Refer to the EMC Solutions Enabler Installation
Guide for information on the appropriate license keys.

Installation on Windows Server 2008 systems requires that all nodes first be
installed with the Failover Cluster feature.

On Windows Vista SP1, Windows 7, and Windows Server 2008 (including R2), select Start > Run and type mstsc /admin /v:<host name> to use Remote Desktop in Console Mode.

For Failover Cluster on Windows Server 2008 and 2012, Microsoft Cluster Validate
must pass all tests except storage.

Cluster Enabler Version 4.2.1 requires that Solutions Enabler 8.0.2 or later first be installed.

Upgrade scenarios where the storage is being replaced are not supported.

Configurations where the cluster node is zoned to both local and remote storage
arrays are not supported.

For upgrade scenarios, the cluster quorum type can only be changed before or
after the upgrade.

Installing the Base Component with the SRDF/CE plug-in (clean install)
For a clean install of the Base Component and SRDF/CE plug-in, download the files into the same folder in a local directory and run the installation with the required administrative privileges. Do not install the files from network shares. The Base Component files are named EMC_CE_Base.msi and Setup.exe. The SRDF/CE plug-in files are named EMC_CE_SRDF_Plugin.msi and Setup_SRDF_Plugin.exe.
To install the Base Component together with the SRDF/CE plug-in:
1. Create a temporary directory and download (save) the Base Component file (EMC_CE_BASE_4.2.1.zip) into it from EMC Online Support.
2. Unpack the .zip file in that directory.
3. Download the SRDF/CE plug-in file (SRDFCEMFC_Plugin_4.2.1.zip) to the same
temporary directory you just created, being sure not to rename it.
4. In the temporary directory, run the EMC_CE_Base.msi file to launch the
InstallShield wizard.
5. Complete the steps in the InstallShield wizard.
Note: When prompted, select the SRDF/CE Plug-in. The plug-in is automatically
launched by the installer before reboot.
6. When prompted to restart your system, click Yes to restart the system, or No to
restart it at a later time.


Upgrading the Base Component with the SRDF/CE plug-in


To upgrade the Base Component together with the SRDF/CE plug-in from a previous
release of Version 4.1.x to 4.2.1:
1. Move all cluster groups to node A.
2. Perform the following actions on all other cluster nodes:
a. Copy the setup.exe, EMC_CE_Base.msi, and .msi files for the plug-ins to
the same local folder on your host.
b. Click setup.exe to launch the installation.
c. A Plug-in Selection dialog box displays. The SRDF/CE plug-in is automatically
selected. You can select or de-select the listed plug-ins.
d. Complete the steps in the InstallShield wizard, being sure to select the
upgrade path.
e. When prompted to restart your system, click Yes.
f. After the node has finished rebooting, log on to the node. Using the Cluster Manager, verify that the cluster service is up.
3. After all other nodes are up, move all groups from node A to one of the other
nodes. If using a shared quorum cluster model, verify that the quorum group
comes online on the other node before continuing.
4. Repeat step 2 on node A.


Upgrading only the SRDF/CE plug-in module


To upgrade only the SRDF/CE plug-in module from a previous release of Version 4.1.x
to 4.2.1:
1. Copy the EMC_CE_SRDF_Plugin.msi, and Setup_SRDF_Plugin.exe files to the
same local folder on your host.
2. Click Setup_SRDF_Plugin.exe to launch the upgrade.
3. If the node is not rebooted after the plug-in module has been successfully installed, perform the following two steps to make the upgrade effective:
a. Restart the WmiPrvSe.exe process (running as the SYSTEM user) manually. This can be done by ending the WmiPrvSe.exe process using the Windows Task Manager.
b. Delete the CE_WMI_Symm.dll.1 file (located under the directory C:\Program Files\EMC\Cluster-Enabler\Plugin\Symm\). This is a temporary file; if it is not deleted, subsequent plug-in-only upgrades will fail.
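Step b above can be sketched as a small cleanup helper. This is illustrative only, not an EMC-supplied script; the function name is an assumption, and on a real node the directory would be C:\Program Files\EMC\Cluster-Enabler\Plugin\Symm:

```python
import os
import tempfile

def remove_stale_plugin_temp(plugin_dir: str) -> bool:
    """Delete the leftover CE_WMI_Symm.dll.1 temp file if present.

    Mirrors step b above; returns True if the file existed and was removed.
    """
    stale = os.path.join(plugin_dir, "CE_WMI_Symm.dll.1")
    if os.path.isfile(stale):
        os.remove(stale)
        return True
    return False

# Demo against an empty scratch directory: nothing to clean.
demo_dir = tempfile.mkdtemp()
print(remove_stale_plugin_temp(demo_dir))  # False
```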

Troubleshooting and getting help


EMC support, product, and licensing information can be obtained as follows:
Product information: For documentation, release notes, software updates, or information about EMC products, licensing, and service, go to the EMC Online Support website (registration required) at:
https://support.EMC.com

Technical support: For technical support, go to EMC Online Support and select Support. On the Support page, you will see several options, including one to create a service request. Note that to open a service request, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA.
Published March 16, 2015
EMC believes the information in this publication is accurate as of its publication date. The information is subject to
change without notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any
kind with respect to the information in this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in
this publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and
other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories
section on the EMC online support website.
