
The Host Server

VMware ESXi Configuration Guide

May 2017

This guide provides configuration settings and considerations for SANsymphony


Hosts running VMware ESX/ESXi.

Basic VMware administration skills are assumed including how to connect to iSCSI
and/or Fibre Channel Storage Array target ports as well as the processes of
discovering, mounting and formatting a disk device.

The Data Infrastructure Software Company


Table of contents

Changes made to this document 3


VMware ESXi compatibility lists 4
The DataCore Server's settings 9
Running a DataCore Server in a Virtual Machine 11
The VMware ESXi Host's settings 12
VMware Path Selection Policies 16
Configuring the Round Robin Path Selection Policy 18
Configuring the Fixed Path Selection Policy 20
Configuring the Most Recently Used Path Selection Policy 22
Known issues 24
ESXi 6.x (includes 6.0.x and 6.5.x) 25
Converged Network Adaptors 25
When connecting ESXi Hosts to DataCore Servers 25
When running Microsoft Clusters in a Virtual Machine 27
ESXi 5.x (includes 5.0.x, 5.1.x and 5.5.x) 28
Converged Network Adaptors 28
When connecting ESXi Hosts to DataCore Servers 28
When running Microsoft Clusters in a Virtual Machine 29
ESX 4.x (includes 4.0.x, and 4.1.x) 30
Converged Network Adaptors 30
When connecting ESXi Hosts to DataCore Servers 30
When running Microsoft Clusters in a Virtual Machine 30
Appendix A 31
Preferred Server & Preferred Path settings 31
Appendix B 33
Configuring Disk Pools 33
Appendix C 34
Reclaiming storage 34
SANsymphony's Automatic Reclamation feature 35
SANsymphony's Manual Reclamation feature 36
How much storage will be reclaimed? 37
Appendix D 38
Moving from Most Recently Used to either Round Robin or Fixed Path Selection Policy 38
Previous Changes 39



Changes made to this document
The most recent version of this document is available from here:
http://datacore.custhelp.com/app/answers/detail/a_id/838

All changes since April 2017

Added
Known Issues (all ESX versions) - 'When connecting ESXi Hosts to DataCore Servers':
After upgrading to VMware ESXi 6.0 Update 3, ESX paths will only report as 'Active'. No paths
will report as 'Active (I/O)', regardless of the Path Selection Policy.

All previous changes


Please see page 39



VMware ESXi compatibility lists
Operating system versions

SANsymphony (1)     9.0 PSP 4 Update 4               10.0 (all versions)
ESXi Version        With ALUA       Without ALUA     With ALUA       Without ALUA
3.x and earlier     Not Supported   Not Supported    Not Supported   Not Supported
4.0.x               Not Qualified   Not Qualified    Not Supported   Not Supported
4.1.x               Qualified       Not Qualified    Not Supported   Not Supported
5.x                 Qualified       Not Qualified    Qualified       Not Qualified
6.x                 Qualified       Not Qualified    Qualified       Not Qualified

Regarding VMware's own Hardware Compatibility List


Please see the official statement from DataCore here:
DataCore Software and VMware's Hardware Compatibility List (HCL)
http://datacore.custhelp.com/app/answers/detail/a_id/1131

Fibre Channel and iSCSI Connections


DataCore Software supports VMware ESXi Hosts using either Fibre Channel or iSCSI Connections
to SANsymphony Front-End (FE) Ports for any Virtual Disk type.

SCSI UNMAP
See the VStorage API for Array Integration (VAAI) compatibility table on page 7.

(1) SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life.
Please see: End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329


Operating system version compatibility notes


Qualified
These VMware/SANsymphony combinations have been tested using all the host-specific
settings listed in this document against all Virtual Disk types. Mirrored and Dual Virtual Disks
have been tested for 'high availability' in all possible failure scenarios.

Not Qualified
These VMware/SANsymphony combinations have not been tested against any Mirrored or Dual
Virtual Disk types. DataCore therefore cannot guarantee 'high availability' in any failure
scenario (even if all host-specific settings listed in this document are followed); however,
self-qualification may be possible. For more information on this please see:
http://datacore.custhelp.com/app/answers/detail/a_id/1506

Support for any ESXi version that is considered End of Life by VMware and is listed as 'Not
Qualified' can still be self-qualified, but only if there is an agreed support contract with
VMware. In this case, DataCore Technical Support will not do root-cause analysis for
SANsymphony for any future issues, but will offer 'best effort' support to get Hosts
accessing any SANsymphony Virtual Disks.

Note: Non-mirrored Virtual Disks are always considered as qualified even for 'Not Qualified'
combinations of VMware/SANsymphony.

Not Supported
These VMware/SANsymphony combinations have usually failed one or more of our 'high
availability' tests when using Mirrored or Dual Virtual Disk types, but may also simply be
combinations where an Operating System's own requirements (or limitations) due to its age make
it impractical to test. Entries marked as 'Not Supported' can never be self-qualified. Mirrored
or Dual Virtual Disk types are configured at the end-user's own risk.

Note: Non-mirrored Virtual Disks are always considered as qualified even for 'Not Supported'
combinations of VMware/SANsymphony.

End of Life VMware versions


Support for any VMware version that is considered End of Life by VMware or has no active
development/Long Term Support can still be self-qualified but only if there is an agreed
support contract with VMware.

In this case, DataCore Technical Support will help the customer to get the Host Operating
system accessing Virtual Disks, but will not then do any root-cause analysis.


vSphere Metro Storage Clusters (vMSC)


vMSC is qualified for all SANsymphony/VMware ESXi combinations that are listed as 'Qualified'
in the VMware ESXi compatibility list above, where Virtual Disks have been formatted using the
VMFS5 file system. Virtual Disks whose file systems have been 'upgraded' to VMFS5 (from earlier
versions of VMFS) are not supported, even if the SANsymphony/VMware ESXi combination is
listed as 'Qualified' on the previous page.

VMware 'Fault Tolerant' or 'High Available' Clusters


When setting up a VMware Fault Tolerant or High Available Cluster where Virtual Disks are
to be shared between two or more of the ESX Hosts, make sure that all Host connections to
any DataCore Server Front End (FE) Port do not share any 'physical links' with the DataCore
Server's own Mirror (MR) connections between the DataCore Servers - for example, a
single 'Dark Fibre' link or 'Inter Switch Link' between the DataCore Servers across two site
locations.

Should a failure occur on that single physical link in this configuration, then both the DataCore
Mirror IO and the Host IO will fail. While the DataCore Server will send the correct SCSI
notification to the Hosts (LUN_NOT_AVAILABLE), ESX does not interpret this SCSI response as
either a Permanent Device Loss (PDL) or an All-Paths-Down (APD) event.

The Host will therefore continue to try to access the Virtual Disk on the failed path and will not
attempt to 'failover' (HA) or move the VM (fault tolerant). This will result in a loss of access to
the Virtual Disk(s) and DataCore cannot support this type of configuration.

Make sure all Hosts access any DataCore Server FE ports via separate physical links to the
DataCore Server's own MR port connections.


VStorage API for Array Integration (VAAI)(1)

SANsymphony (2)     9.0 PSP 4 Update 4    10.0 (all versions)
ESXi Version        VAAI                  VAAI
3.x and earlier     N/A                   N/A
4.x (3)             Does not work         Does not work
5.x                 Qualified             Qualified
6.x                 Qualified             Qualified

Note: For more information on using SCSI UNMAP to reclaim storage from Disk Pools, please
refer to Appendix C: 'Reclaiming Storage' on page 34 for specific instructions.

VMware VVOL VASA API 2.0

SANsymphony         9.0 PSP 4 Update 4       10.0 PSP3 and earlier    10.0 PSP 4 and greater
ESXi Version        VASA support for VVOL    VASA support for VVOL    VASA support for VVOL
5.x and earlier     N/A                      N/A                      Not Supported
6.x                 N/A                      N/A                      Qualified

Note: this also includes vMotion over iSCSI. Also see the 'Getting Started with the DataCore
VASA Provider' section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Getting_Started_with_VASA_Provider.htm

(1) The following VAAI-specific commands are supported by the DataCore Server:
Atomic Test & Set (ATS), Clone Blocks/Full Copy/XCOPY, Zero Blocks/Write Same and Block Delete/SCSI UNMAP.
(2) SCSI UNMAP support was included in SANsymphony-V 9.0 PSP 4 Update 4. SANsymphony-V 8.x and all versions
of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications
http://datacore.custhelp.com/app/answers/detail/a_id/1329
(3) For VMware ESXi 4.1 Hosts, VAAI must be disabled on the Host otherwise it will cause unexpected behaviour.
Please refer to VMware's own knowledgebase article: "Disabling the VAAI functionality in ESX/ESXi (1033665)"
http://kb.vmware.com/kb/1033665


VMware ESXi Path Selection Policies (PSP)

                    VMware Path Selection Policy
ESXi Version        Most Recently Used (MRU)    Fixed                Round Robin (RR)
                    (without ALUA only)         (with ALUA only)     (with ALUA only)
4.x                 Qualified                   Qualified            Qualified
5.x                 Qualified                   Qualified            Qualified
6.x                 Not Qualified               Qualified            Qualified

Path Selection Policy compatibility notes


General
Any SANsymphony/ESXi version combinations that are listed as 'Not Supported' on page 4 must
only use non-mirrored Virtual Disks.

ESX 6.x
Fixed and RR PSPs have both been tested by DataCore Software and are both listed on VMware's
Hardware Compatibility List (HCL).

MRU has not been tested by DataCore Software and is considered 'Not Qualified'; it is not listed
on VMware's HCL.

ESX 5.x
Fixed and RR PSPs have both been tested by DataCore Software, but only the RR PSP is listed on
VMware's HCL.

MRU has not been tested by DataCore Software and is considered 'Not Qualified'; it is not listed
on VMware's HCL.



The DataCore Server's settings
Also see:
Video: Configuring ESX Hosts in the DataCore Management Console
http://datacore.custhelp.com/app/answers/detail/a_id/1637

These are the Host-specific settings that need to be configured directly on the DataCore Server.

Operating system type


See the Registering Hosts section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Hosts.htm

When registering the Host choose the 'VMware ESXi' menu option.

Port roles
Ports used for serving Virtual Disks to Hosts should only have the Front End (FE) role enabled.
Mixing other Port Role types may cause unexpected results, as Ports that only have the FE role
enabled will be turned off when the DataCore Server software is stopped (even if the physical
server remains running). This helps to guarantee that Hosts do not still try to access FE
Ports, for any reason, once the DataCore Software is stopped but the DataCore Server
remains running. Any Port with the Mirror and/or Back End role enabled does not shut off when
the DataCore Server software is stopped but remains active.

Multipathing support
The Multipathing Support option should be enabled so that Mirrored Virtual Disks or Dual
Virtual Disks can be served to Hosts from all available DataCore FE ports. Also see the
Multipathing Support section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Hosts.htm

Non-mirrored Virtual Disks and Multipathing


Non-mirrored Virtual Disks can still be served to multiple Hosts and/or multiple Host Ports from
one or more DataCore Server FE Ports if required; in this case the Host can use its own
multipathing software to manage the multiple Host paths to the Single Virtual Disk as if it was a
Mirrored or Dual Virtual Disk.

Note: Hosts that have non-mirrored Virtual Disks served to them do not need Multipathing
Support enabled unless they have other Mirrored or Dual Virtual Disks served as well.


Asymmetrical Logical Unit Access (ALUA) support


The ALUA support option should be enabled if required and if Multipathing Support has also
been enabled (see above). Please refer to the Operating system compatibility table on page
4 to see which combinations of VMware ESXi and SANsymphony support ALUA.

More information on Preferred Servers and Preferred Paths used by the ALUA function can be
found in Appendix A on page 31.

Serving Virtual Disks to the Hosts for the first time


DataCore recommends that, before serving Virtual Disks to a Host for the first time, all
DataCore Front-End ports on all DataCore Servers are correctly discovered by the Host.
Then, from within the SANsymphony Console, verify that the Virtual Disk is 'Online, up to date'
and that its storage sources have a host access status of 'Read/Write'.

Virtual Disk LUNs and serving to more than one Host or Port
DataCore Virtual Disks always have their own unique Network Address Authority (NAA)
identifier that a Host can use to manage the same Virtual Disk being served to multiple Ports on
the same Host Server and the same Virtual Disk being served to multiple Hosts.

See the SCSI Standard Inquiry Data section from the online Help for more information on this:
http://www.datacore.com/SSV-Webhelp/Changing_Virtual_Disk_Settings.htm

While DataCore cannot guarantee that a disk device's NAA is used by a Host's operating system
to identify a disk device served to it over different paths, generally we have found that it is.
And while there is sometimes a convention that all paths to the same disk device should always
use the same LUN 'number' to guarantee consistency for device identification, this may not
be technically required. Always refer to the Host Operating System vendor's own documentation for
advice on this.

DataCore's Software does, however, always try to create mappings between the Host's ports
and the DataCore Server's Front-end (FE) ports for a Virtual Disk using the same LUN number(5)
where it can. The software will first find the next available (lowest) LUN 'number' for the Host-
DataCore FE mapping combination being applied and will then try to apply that same LUN
number to all other mappings that are being attempted when the Virtual Disk is being served.
If any Host-DataCore FE port combination being requested at that moment is already using that
same LUN number (e.g. if a Host has had other Virtual Disks served to it previously), then the
software will find the next available LUN number and apply that to those specific Host-
DataCore FE mappings only.

(5) The software will also try to match a LUN 'number' for all DataCore Server Mirror Port mappings of a Virtual Disk
too, although the Host does not 'see' these mirror mappings and so this does not technically need to be the same
as the Front End port mappings (or indeed as other Mirror Path mappings for the same Virtual Disk). Having Mirror
mappings using different LUNs has no functional impact on the Host or DataCore Server at all.


Running a DataCore Server in a Virtual Machine
See the article Hyperconverged and Virtual SAN Best Practices guide:
http://datacore.custhelp.com/app/answers/detail/a_id/1155



The VMware ESXi Host's settings
The following are the Host-specific settings that need to be configured directly on the Host
Server.

Note: Older versions of VMware ESXi may require different Host settings when compared to
newer versions. When a setting or configuration change is listed for one version but not another,
then it is only required for that specific version of VMware ESXi. If you have upgraded from an
older version and a specific setting is no longer documented for your newer version, then no
further changes are needed for that setting and it should be left as it was.

ISCSI Connections
TCP Ports
Make sure TCP Port 3260 is opened for all iSCSI Communication to the DataCore Server.

See the 'TCP and UDP Ports' section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/windows_security_settings_disclosure.htm

ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server
Front-end port are not supported (this also includes ESXi 'Port Binding').

The Front End port will only accept the first connection from a given IQN that attempts to
log in to it, and a unique iSCSI Session ID (ISID) is created for that connection. Any subsequent
connection that then comes from a different NIC which happens to share the same IQN as the
first login will cause an ISID conflict and will be rejected by the DataCore Server. After that, no
further iSCSI logins will be possible for this IQN. This may cause unexpected disconnects
between the Host and the DataCore Server for those connections.

It is important to note that if the first successful connection gets disconnected for any reason
(e.g. by a SCSI reset), then one of the other NICs sharing the same IQN may re-attempt a
login and, if successful, will take the session for itself. This will then block the previously-
connected NIC from being able to reconnect, and it will remain disconnected.

See the following pages for examples of qualified and not-supported configurations:


Example 1 - A qualified configuration


An ESX Host (ESX1) has four different Network Interfaces; each with its own IP address but all
with the same IQN:

192.168.1.1 (iqn.esx1)
192.168.2.1 (iqn.esx1)
192.168.1.2 (iqn.esx1)
192.168.2.2 (iqn.esx1)

There are, in this example, two DataCore Servers, each with two Front-end ports with their own
corresponding IP addresses and IQNs:

192.168.1.101 (iqn.dcs1-1)
192.168.2.101 (iqn.dcs1-2)
192.168.1.102 (iqn.dcs2-1)
192.168.2.102 (iqn.dcs2-2)

Each Network Interface of the ESX Host should connect to a separate Front-end Port on both
DataCore Servers;

(iqn.esx1) 192.168.1.1 -> ISCSI Fabric 1 -> 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.1 -> ISCSI Fabric 2 -> 192.168.2.101 (iqn.dcs1-2)
(iqn.esx1) 192.168.1.2 -> ISCSI Fabric 1 -> 192.168.1.102 (iqn.dcs2-1)
(iqn.esx1) 192.168.2.2 -> ISCSI Fabric 2 -> 192.168.2.102 (iqn.dcs2-2)

Also note that this kind of setup will make things simpler to manage and troubleshoot if
connection problems occur in the future. There is no case in the above example where another
Network Interface on ESX1 is also trying to connect to the same Front-end port on the
same DataCore Server (i.e. there are no multiple ISCSI session connections).
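
If connection problems do occur, the iSCSI adapters and the sessions currently established from
the Host can be listed directly on the ESXi console (ESXi 5.x and later); an illustrative sketch
only, to be verified against VMware's documentation for your ESXi version:

esxcli iscsi adapter list
esxcli iscsi session list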


Example 2 - A non-supported configuration


Using the same values as the qualified example above;

(iqn.esx1) 192.168.1.1 -> ISCSI Fabric 1 -> 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.1 -> ISCSI Fabric 2 -> 192.168.2.101 (iqn.dcs1-2)
(iqn.esx1) 192.168.1.2 -> ISCSI Fabric 1 -> 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.2 -> ISCSI Fabric 2 -> 192.168.2.102 (iqn.dcs2-2)

In this case, two of the Network Interfaces from ESX1 have been configured to connect to the
same Front-end port on DataCore Server 1 (iqn.dcs1-1), which will not work as expected.

DataCore Server 1 will accept only one of the connections and the other will be rejected; any
subsequent interruption of that iSCSI connection may then result in either of the two ESX
Network Interfaces being able to (re)connect to iqn.dcs1-1, forcing the other ESX connection to
be rejected. In other words, there is no guarantee that the ESX1 connection that was previously
logged into iqn.dcs1-1 will be able to reconnect if disconnected for any reason and if the other
Network Interface logs in before it. A solution in this case may be teaming the NICs together, as
the teamed connections will then only have a single IP address and be recognized by the
DataCore Server as a single NIC.

Also See the Important Notes section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Configuring_iSCSI_Connections.htm


Advanced Settings
Note: A reboot may not be needed if any of these settings are changed from a previous value;
please check with VMware first.

ESX 6.x and 5.x


From within the ESXi Configuration Tab under Advanced Settings change and/or verify the
following values are set:

Disk.DiskMaxIOSize = 512
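
This value can also be checked or set from the ESXi console; a sketch only (the option path
below is assumed from the setting name above and should be verified against VMware's
documentation for your ESXi version):

esxcli system settings advanced list -o /Disk/DiskMaxIOSize
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 512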

ESX 4.1.x
From within the ESXi Configuration Tab under Advanced Settings change and/or verify the
following values are set:

Disk.DiskMaxIOSize = 512
Disk.QFullSampleSize = 32
Disk.QFullThreshold = 8
Disk.UseLunReset = 1
Disk.UseDeviceReset = 0
SCSI.CRTimeoutDuringBoot = 10000

ESX 4.0.x
From within the ESXi Configuration Tab under Advanced Settings change and/or verify the
following values are set:

Disk.DiskMaxIOSize = 512
Disk.QFullSampleSize = 32
Disk.QFullThreshold = 8
Disk.UseLunReset = 1
Disk.UseDeviceReset = 0
SCSI.CRTimeoutDuringBoot = 1
SCSI.ConflictRetries = 200



VMware Path Selection Policies
Which Path Selection Policies (PSP) are qualified?
Please refer to VMware ESXi Path Selection Policies (PSP) compatibility list on page 8.

Which PSP does DataCore Software recommend?


DataCore does not recommend one particular policy over another; one user's installation and
configuration of SANsymphony will be different from another's.

Note: Some PSPs are not supported by VMware themselves for certain types of Virtual Machine
Operating Systems. DataCore cannot take responsibility for these VMware-unsupported Virtual
Machine/PSP combinations should any issues occur.

See http://kb.vmware.com/kb/1011340

Changing the PSP type on an already-served Virtual Disk


As long as the Storage Array Type Plug-in (SATP) being used on the Host is the same for
the new PSP, there is nothing that needs to be done on the DataCore Server.

If the current SATP is different from what the new PSP requires (for example, moving from 'Most
Recently Used' to 'Round Robin'), then DataCore recommends that you unserve the Virtual Disks
first, delete the old SATP rule, add the new one, and then serve the Virtual Disks back again.

Note: Changing the SATP type may also require that the ALUA option be changed on the Host,
within the SANsymphony Console, from its current setting. In that case see the 'After changing
the settings' section of 'Changing multipath or ALUA support settings for hosts' from the
SANsymphony Help: http://www.datacore.com/SSV-Webhelp/Hosts.htm

Using different PSPs for the same Virtual Disk on multiple Hosts
While this is technically possible, it is not supported and DataCore cannot guarantee the
behavior of the VMware ESXi Hosts in this case. Always use the same PSP for the same Virtual
Disk on all VMware ESXi Hosts that it is served to.


Which Storage Array Type Plug-in (SATP) should I use?


Please refer to the following pages to determine which SATP to use and how to configure it for
the particular PSP you wish to use.

Note: Auto-detection by VMware ESXi to choose the correct Path Selection Policy and/or
Storage Array Type Plugin to use for a given Virtual Disk can be inconsistent in older versions,
and VMware ESXi may, for example, default to Most Recently Used for any Virtual Disk mapped
to the Host regardless of whether the ALUA option has been enabled or not. This type of
mistake will cause unexpected results during any failover event.

It is therefore important to always verify manually that both the correct PSP and SATP have
been selected. This can be done directly in the VMware vSphere client GUI or by running one of
the following two commands on the VMware ESXi console:

esxcli storage nmp device list | grep -C 3 DataCore

esxcli storage nmp device list | grep -C 3 0030d9

The first command lists all disk devices served to the VMware ESXi Host that have the
SANsymphony 'DataCore' SCSI Vendor string; the second command does the same thing but
uses DataCore's unique NAA identifier. Both of these are part of any SANsymphony Virtual Disk's
SCSI Standard Inquiry Data.

See the SCSI Standard Inquiry Data section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Changing_Virtual_Disk_Settings.htm

The additional -C switch for the grep command will display 3 lines of output above and below
the line on which the searched-for string appears, which should then include the Path Selection
Policy and the Storage Array Type Plugin. Increase this number to display more of the Virtual
Disk's properties as required.

If using SANsymphony's own VMware vCenter Integration, searching by the NAA identifier is
the only way to list the Virtual Disks on the command line.


Configuring the Round Robin Path Selection Policy


Use the SATP 'VMW_SATP_ALUA' with the claim option 'tpgs_on'.

Round Robin can be configured using either the 'default' SATP type or by configuring a custom
SATP rule.

Using the default SATP type


It is possible to use VMware ESXi's built-in, generic 'VMW_SATP_ALUA' rule:

VMW_SATP_ALUA system tpgs_on Any array with ALUA support

Using a custom SATP rule


To create a custom rule, run the following command on the ESXi Host's console:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA
-c tpgs_on -P VMW_PSP_RR

Verify the custom rule has been set correctly

esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep DataCore

The response should look something like this(1):

VMW_SATP_ALUA DataCore Virtual Disk user tpgs_on VMW_PSP_RR

This custom SATP rule can be used for all Virtual Disks from any DataCore Servers when using
the Round Robin PSP.
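
Note that a claim rule normally only applies to devices claimed after it has been added. For
Virtual Disks that are already presented to the Host, the PSP can also be set per device; a
sketch only, using a hypothetical device identifier (verify the exact syntax for your ESXi
version):

esxcli storage nmp device set -d naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx -P VMW_PSP_RR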

Note: Round Robin is only qualified with the ALUA option enabled on the VMware Host from
within the DataCore Server's Console.

See Changing multipath or ALUA support settings for hosts from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Multipath_Support.htm

(1) This example is taken from VMware ESXi version 5.5.


Which Preferred Server setting on the DataCore Server should I use with Round Robin?
DataCore recommends, when using the Round Robin PSP and configuring your Hosts for the
first time, setting an explicit DataCore Server as the Preferred Server or leaving the 'Auto select'
setting configured, and not using the 'All' setting.

When the Host's Preferred Server setting is either 'Auto Select', or is using an explicitly named
DataCore Server, only the Host paths that are connected to either the first DataCore Server
listed in the Virtual Disk's properties (for 'Auto Select') or the named DataCore Server
respectively are set as Active Optimized. The Host's paths connected to the other DataCore
Server are set as 'Active Non-optimized'.

VMware Hosts will only send IO to 'Active Optimized' paths when there is a choice between
that or 'Active Non-optimized'.

Caution is therefore advised when using the 'All' setting, as this allows the VMware Host to send
I/O to all paths on all DataCore Servers for any served Virtual Disk. While this may seem
preferable, in configurations where there are significant path distances between servers (e.g.
across remote sites), or where the speed of links between remote servers is significantly slower
than links between local servers on the same site, longer I/O wait times may be encountered on
paths to remote servers. These will then cause additional delays for I/O within the same request
sent via paths to local servers, resulting in significant overall I/O latency.

Testing is advised.

Please see Appendix A 'Preferred Server & Preferred Path settings' on page 31 for a
more detailed explanation of the 'All' setting.

Also see the Preferred Servers section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Port_Connections_and_Paths.htm


Configuring the Fixed Path Selection Policy


Use the SATP 'VMW_SATP_ALUA' with the claim option 'tpgs_on'.

The Fixed PSP can be configured using either the 'default' SATP type or by configuring a custom
SATP rule.

Using the default SATP type


It is possible to use VMware ESXi's built-in, generic 'VMW_SATP_ALUA' rule:

VMW_SATP_ALUA system tpgs_on Any array with ALUA support

Using a custom SATP rule


To create a custom rule, run the following command on the ESXi Host's console:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA
-c tpgs_on -P VMW_PSP_FIXED

Verify the custom rule has been set correctly

esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep DataCore

The response should look something like this(1):

VMW_SATP_ALUA DataCore Virtual Disk user tpgs_on VMW_PSP_FIXED

This custom SATP rule can be used for all Virtual Disks from any DataCore Servers when using
the Fixed PSP.

Note: Fixed PSP is only qualified with the ALUA option enabled on the VMware Host from within
the DataCore Server's Console.

See Changing multipath or ALUA support settings for hosts from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Multipath_Support.htm

(1) This example is taken from VMware ESXi version 5.5.


Which Preferred Server setting on the DataCore Server should I use with Fixed PSP?
Unlike Round Robin, where DataCore recommends (initially) not using the 'All' Preferred Server
setting, when using the Fixed PSP the 'All' setting is mandatory and no other Preferred Server
setting is qualified.

This is because the Fixed PSP always requires an 'Active Optimized' path to fail over or fail back
to in order to work as expected.

Note: The Fixed PSP will not send IO to all 'Active Optimized' paths like the Round Robin PSP
does. The actual 'active' path used by the VMware Host that is using the Fixed PSP is configured
on the ESX Host directly and is not controlled by the DataCore Server. Please refer to VMware's
own documentation on how to configure the 'active' path when using the Fixed PSP.
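
As an illustration only, the preferred ('active') path of a device using the Fixed PSP can be
inspected and set from the ESXi console; the device and path names below are hypothetical
placeholders, and the syntax should be checked against VMware's documentation for your ESXi
version:

esxcli storage nmp psp fixed deviceconfig get -d naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx
esxcli storage nmp psp fixed deviceconfig set -d naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx -p vmhba2:C0:T1:L0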

Please see Appendix A 'Preferred Server & Preferred Path settings' on page 31 for a
more detailed explanation of the 'All' setting.

Also see the Preferred Servers section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Port_Connections_and_Paths.htm


Configuring the Most Recently Used Path Selection Policy


Use the SATP 'VMW_SATP_DEFAULT_AA' with no claim option set.

The Most Recently Used PSP can be configured using either the 'default' SATP type or by
configuring a custom SATP rule.

Using the default SATP type


It is possible to use VMware ESXi's built-in, generic 'VMW_SATP_DEFAULT_AA' rule:

VMW_SATP_DEFAULT_AA fc system Fibre Channel Devices

Using a custom SATP rule


To create a custom rule, run the following command on the ESXi Host's console:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_DEFAULT_AA -P VMW_PSP_MRU

Verify the custom rule has been set correctly

esxcli storage nmp satp rule list -s VMW_SATP_DEFAULT_AA | grep DataCore

The response should look something like this(8):

VMW_SATP_DEFAULT_AA DataCore Virtual Disk user VMW_PSP_MRU

This custom SATP rule can be used for all Virtual Disks from any DataCore Servers when using
the Most Recently Used PSP.

Note: Most Recently Used is only qualified without the ALUA option enabled on the VMware
Host from within the DataCore Server's Console.

See Changing multipath or ALUA support settings for hosts from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Multipath_Support.htm

(8) This example is taken from VMware ESXi version 5.5.


Which Preferred Server setting on the DataCore Server should I use with Most Recently Used?
Because the ALUA option is not supported when using the Most Recently Used PSP it must
never be enabled on the Host. The Preferred Server setting, which controls the ALUA state of a
given path to a Host from a DataCore Server, will therefore be ignored by the Host.

Note: The actual 'active' path used by the VMware Host that is using the Most Recently Used
PSP is configured on the ESX Host directly and is not controlled by the DataCore Server. Please
refer to VMware's own documentation on how to configure the 'active' path when using the
Most Recently Used PSP.

Also see the Preferred Servers section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Port_Connections_and_Paths.htm



Known issues
The following is intended to make DataCore Software customers aware of any issues that may
affect performance, access or generally give unexpected results under certain conditions when
VMware ESXi is used with SANsymphony.

Some of the issues here have been found during DataCore's own testing, but many others are
issues reported by DataCore Software customers, where a specific problem had been identified
and then subsequently resolved.

DataCore cannot be held responsible for incorrect information regarding VMware products. No
assumption should be made that DataCore has direct communication with VMware regarding
the issues listed here, and we always recommend that users contact VMware directly to see
if there are any updates or fixes since they were reported to us.

For known issues with DataCore's own Software products, please refer to the relevant DataCore
Software Component release notes.


ESXi 6.x (includes 6.0.x and 6.5.x)


Converged Network Adaptors

When using QLogic's Dual-Port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor


Disable both the adaptor's BIOS and the 'Select a LUN to Boot from' option.

When connecting ESXi Hosts to DataCore Servers

After upgrading to VMware ESXi 6.0 Update 3 ESX paths will only report as 'Active'. No paths will report as
'Active (I/O)' - regardless of the Path Selection Policy.


VMware has verified this as a cosmetic bug in ESXi that does not affect IO on either the ESX Hosts or VMs, and
their engineering team is currently working on a solution.

The ESXi 'esxtop' command (e.g. using the 'd' or 'u' switches) will show activity on the expected paths and/or
devices.

As of the end of April 2017 there was no workaround from VMware.


Configurations of VMware Clusters (HA or FT) that have Host connections to any DataCore Server Front End (FE)
Port which share a 'physical link' with the DataCore Server's own Mirror (MR) connections (between the
DataCore Servers) are not supported.
Please see the section 'VMware 'Fault Tolerant' or 'High Available' Clusters' on page 6 for more specific
information.

ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port are
not supported (this also includes ESXi 'Port Binding')
Please see the ISCSI Connections section on page 12 for more specific information, with examples.

Storage PDL responses may not trigger path failover in vSphere 6.0.0 and 6.0 Update 1.
This has now been fixed by VMware. See http://kb.vmware.com/kb/2144657.

VHBAs and other PCI devices may stop responding when using Interrupt Remapping.
See http://kb.vmware.com/kb/1030265.

Under heavy load the VMFS heartbeat may fail with 'false' ATS miscompare message.

The ESXi VMFS 'heartbeat' used to use normal SCSI reads and writes to perform its function. A change in the
heartbeat method, released in ESXi 5.5 Update 2 and ESXi 6.0, uses ESXi's VAAI ATS commands instead, sent directly
to the storage array (i.e. the DataCore Server). DataCore Servers do not require (and so do not support) these ATS
commands. DataCore therefore recommends disabling the VAAI ATS heartbeat setting; see
http://kb.vmware.com/kb/2113956.
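
The KB article above describes disabling the ATS heartbeat through the ESXi advanced setting
VMFS3.UseATSForHBOnVMFS5; a sketch of the esxcli form is shown below, but the exact option name and value should
be taken from the KB for your ESXi version:

esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5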

If your ESXi Hosts are connected to other storage arrays contact VMware to see if it is safe to disable this setting
for these arrays.


Significant numbers of Virtual Machines all running on the same Virtual Disk may result in excessive SCSI
reservation requests leading to reservation conflicts between Hosts sharing the Virtual Disk which may lead to
increased I/O latency.

This only affects ESXi Hosts not using VAAI.

Reduce the number of running Virtual Machines on a single Virtual Disk, and ensure that ESX Hosts with the closest
IO path to the DataCore Server all access the same shared Virtual Disk; this will also help to reduce the potential
for excessive SCSI Reservation conflicts. Also see: http://kb.vmware.com/kb/1005009

DataCore Software recommends using VAAI, where the 'Atomic Test and Set (ATS) primitive' is used instead, as this
is a much better method for locking VMFS Datastores on Virtual Disks when compared to the normal SCSI
Reservation process.

When running Microsoft Clusters in a Virtual Machine

Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access.
A fix is available from VMware. See https://kb.vmware.com/kb/2145663 for more information.

Unable to access filesystem for MSCS cluster nodes after vMotion.


This has now been fixed by VMware. See https://kb.vmware.com/kb/2144153.

The SCSI-3 Persistent Reserve tests fail for Windows 2012 Microsoft Clusters running in VMware ESXi Virtual
Machines.
This is expected. See http://kb.vmware.com/kb/1037959 specifically read the 'additional notes' (under the
section 'VMware vSphere support for running Microsoft clustered configurations').

ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start
or during LUN rescan.
See http://kb.vmware.com/kb/1016106.


ESXi 5.x (includes 5.0.x, 5.1.x and 5.5.x)


Converged Network Adaptors

When using QLogic's Dual-Port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor


Disable both the adaptor's BIOS and the 'Select a LUN to Boot from' option.

When connecting ESXi Hosts to DataCore Servers

Configurations of VMware Clusters (HA or FT) that have Host connections to any DataCore Server Front End (FE)
Port which share a 'physical link' with the DataCore Server's own Mirror (MR) connections (between the
DataCore Servers) are not supported.
Please see the section 'VMware 'Fault Tolerant' or 'High Available' Clusters' on page 6 for more specific
information.

ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port are
not supported (this also includes ESXi 'Port Binding')
Please see the ISCSI Connections section on page 12 for more specific information, with examples.

VHBAs and other PCI devices may stop responding when using Interrupt Remapping.
See http://kb.vmware.com/kb/1030265.

Under heavy load the VMFS heartbeat may fail with 'false' ATS miscompare message.

The ESXi VMFS 'heartbeat' used to use normal SCSI reads and writes to perform its function. A change in the
heartbeat method, released in ESXi 5.5 Update 2 and ESXi 6.0, uses ESXi's VAAI ATS commands instead, sent directly
to the storage array (i.e. the DataCore Server). DataCore Servers do not require (and so do not support) these ATS
commands. DataCore therefore recommends disabling the VAAI ATS heartbeat setting; see
http://kb.vmware.com/kb/2113956.

If your ESXi Hosts are connected to other storage arrays contact VMware to see if it is safe to disable this setting
for these arrays.


Significant numbers of Virtual Machines all running on the same Virtual Disk may result in excessive SCSI
reservation requests leading to reservation conflicts between Hosts sharing the Virtual Disk which may lead to
increased I/O latency.

This only affects ESXi Hosts not using VAAI.

Reduce the number of running Virtual Machines on a single Virtual Disk, and ensure that ESX Hosts with the closest
IO path to the DataCore Server all access the same shared Virtual Disk; this will also help to reduce the potential
for excessive SCSI Reservation conflicts. Also see: http://kb.vmware.com/kb/1005009

DataCore Software recommends using VAAI, where the 'Atomic Test and Set (ATS) primitive' is used instead, as this
is a much better method for locking VMFS Datastores on Virtual Disks when compared to the normal SCSI
Reservation process.

When running Microsoft Clusters in a Virtual Machine

Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access.
A fix is available from VMware. See https://kb.vmware.com/kb/2145663 for more information.

The SCSI-3 Persistent Reserve tests fail for Windows 2012 Microsoft Clusters running in VMware ESXi Virtual
Machines.
This is expected. See http://kb.vmware.com/kb/1037959 specifically read the 'additional notes' (under the
section 'VMware vSphere support for running Microsoft clustered configurations').

ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start
or during LUN rescan.
See http://kb.vmware.com/kb/1016106.


ESX 4.x (includes 4.0.x, and 4.1.x)


Converged Network Adaptors

When using QLogic's Dual-Port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor


Disable both the adaptor's BIOS and the 'Select a LUN to Boot from' option.

When connecting ESXi Hosts to DataCore Servers

ISCSI Patches required for ESXi 4.0 Hosts connected to DataCore Servers
VMware ESXi 4.0, Patch ESXi400-200906413-BG: http://kb.vmware.com/kb/1012232
VMware ESXi 4.0, Patch ESXi400-201003401-BG: http://kb.vmware.com/kb/1019492

ESXi does not support LUNs (i.e. SANsymphony Virtual Disks) greater than 2-terabyte.
See: http://kb.vmware.com/kb/3371739

ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port are
not supported (this also includes ESXi 'Port Binding')
Please see the ISCSI Connections section on page 12 for more specific information, with examples.

VHBAs and other PCI devices may stop responding when using Interrupt Remapping.
See http://kb.vmware.com/kb/1030265.

When running Microsoft Clusters in a Virtual Machine

ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start
or during LUN rescan.
See http://kb.vmware.com/kb/1016106.

Significant numbers of Virtual Machines all running on the same Virtual Disk may result in excessive SCSI
reservation requests leading to reservation conflicts between Hosts sharing the Virtual Disk which may lead to
increased I/O latency.

Reduce the number of running Virtual Machines on a single Virtual Disk, and ensure that ESX Hosts with the closest
IO path to the DataCore Server all access the same shared Virtual Disk; this will also help to reduce the potential
for excessive SCSI Reservation conflicts. Also see: http://kb.vmware.com/kb/1005009



Appendix A
Preferred Server & Preferred Path settings
See the Preferred Servers and Preferred Paths sections from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Port_Connections_and_Paths.htm

Without ALUA enabled


If Hosts are registered without ALUA support, the Preferred Server and Preferred Path settings
serve no function. All DataCore Servers and their respective Front End (FE) paths are
considered equal.

It is up to the Host's own Operating System or Failover Software to determine which DataCore
Server is its preferred server.

With ALUA enabled


Setting the Preferred Server to 'Auto' (or an explicit DataCore Server) determines the DataCore
Server that is designated 'Active Optimized' for Host IO. The other DataCore Server is
designated 'Active Non-Optimized'.

If for any reason the Storage Source on the preferred DataCore Server becomes unavailable,
and the Host Access for the Virtual Disk is set to 'Offline' or 'Disabled', then the other DataCore
Server will be designated the 'Active Optimized' side. The Host will be notified by both
DataCore Servers that there has been an ALUA state change, forcing the Host to re-check the
ALUA state of both DataCore Servers and act accordingly.

If the Storage Source on the preferred DataCore Server becomes unavailable but the Host
Access for the Virtual Disk remains 'Read/Write' (for example, if only the Storage behind the
DataCore Server is unavailable but the FE and MR paths are all connected, or if the Host
physically becomes disconnected from the preferred DataCore Server, e.g. by a Fibre Channel or
iSCSI cable failure), then the ALUA state will not change for the remaining 'Active Non-
optimized' side. In this case the DataCore Server will not prevent access to the Host,
nor will it change the way READ or WRITE IO is handled compared to the 'Active Optimized'
side, but the Host will still register this DataCore Server's paths as 'Active Non-Optimized', which
may (or may not) affect how the Host behaves generally.


In the case where the Preferred Server is set to 'All', both DataCore Servers are designated
'Active Optimized' for Host IO.

All IO requests from a Host will use all Paths to all DataCore Servers equally, regardless of the
distance that the IO has to travel to the DataCore Server. For this reason, the 'All' setting is not
normally recommended. If a Host has to send a WRITE IO to a remote DataCore Server (where
the IO Path is significantly distant compared to the other, local DataCore Server), then the
WAIT times accrued can be significant: the IO has to be sent across the SAN to the remote
DataCore Server, the remote DataCore Server has to mirror it back to the local DataCore Server,
the mirror write has to be acknowledged from the local DataCore Server to the remote
DataCore Server, and finally the acknowledgement has to be sent back to the Host across the
SAN.

The benefits of being able to use all Paths to all DataCore Servers for all Virtual Disks are not
always clear cut. Testing is advised.

For Preferred Path settings it is stated in the SANsymphony Help:


A preferred front-end path setting can also be set manually for a particular virtual disk. In this
case, the manual setting for a virtual disk overrides the preferred path created by the preferred
server setting for the host.

So, for example, if the Preferred Server is designated as DataCore Server A and the Preferred
Paths are designated as DataCore Server B, then DataCore Server B will be the 'Active
Optimized' side, not DataCore Server A.

In a two-node Server group there is usually nothing to be gained by making the Preferred Path
setting different from the Preferred Server setting, and it may also cause confusion when trying
to diagnose path problems, or when redesigning your DataCore SAN with regard to Host IO Paths.

For Server Groups that have three or more DataCore Servers, and where one (or more) of these
DataCore Servers shares Mirror Paths with other DataCore Servers, setting the Preferred
Path makes more sense.

If, for example, DataCore Server A has two mirrored Virtual Disks, one with DataCore Server B
and one with DataCore Server C, and DataCore Server B also has a mirrored Virtual Disk with
DataCore Server C, then using just the Preferred Server setting to designate the 'Active
Optimized' side for the Host's Virtual Disks becomes more complicated. In this case the
Preferred Path setting can be used to override the Preferred Server setting for a much more
granular level of control.



Appendix B
Configuring Disk Pools
See Creating Disk Pools and Adding Physical Disks from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/About_Disk_Pools.htm

The smaller the SAU size, the larger the number of indexes required by the Disk Pool driver
to keep track of the equivalent amount of allocated storage compared to a Disk Pool with a
larger SAU size; e.g. there are potentially four times as many indexes required in a Disk Pool
using a 32MB SAU size compared to one using the default 128MB SAU size.
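
For example, assuming 1 TB of allocated storage in a Disk Pool: at the default 128MB SAU size
this corresponds to 8,192 SAUs for the driver to index, whereas at a 32MB SAU size it
corresponds to 32,768.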

As SAUs are allocated for the very first time, the Disk Pool needs to update these indexes and
this may cause a slight delay for IO completion and might be noticeable on the Host. However
this will depend on a number of factors such as the speed of the physical disks, the number of
Hosts accessing the Disk Pool and their IO READ/WRITE patterns, the number of Virtual Disks in
the Disk Pool and their corresponding Storage Profiles.

Therefore, DataCore usually recommends using the default SAU size (128MB) as it is a good
compromise between physical storage allocation and the IO overhead during the initial SAU
allocation index update. Should a smaller SAU size be preferred, the configuration should be
tested to make sure that a potentially increased number of initial SAU allocations does not
impact the overall Host performance.



Appendix C
Reclaiming storage
Using VMware's Block Delete/SCSI UNMAP VAAI primitive
As of SANsymphony 9.0 PSP4 there is support for the Block Delete/SCSI UNMAP VAAI primitive
which, when used in conjunction with either the VMware ESXi vmkfstools or esxcli command
(depending on the version of ESXi used), allows Hosts to trigger the Automatic Reclamation
function on served Virtual Disks:

For ESXi 5.0.x and 5.1.x Hosts:


Using vmkfstools to reclaim VMFS deleted blocks on thin-provisioned LUNs (2014849)
http://kb.vmware.com/kb/2014849
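
The KB above describes changing into the datastore's directory and running vmkfstools with the
-y option and the percentage of free space to reclaim; a sketch with a hypothetical datastore
name:

cd /vmfs/volumes/Datastore1
vmkfstools -y 60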

For ESXi 5.5.x Hosts:


vSphere 5.5 Command Line Documentation > vSphere Command-Line Interface
Documentation > vSphere Command-Line Interface Concepts and Examples > Managing Files
> Reclaiming Unused Storage Space
https://pubs.vmware.com/vsphere-55/topic/com.vmware.vcli.examples.doc/cli_manage_files.5.6.html

For ESXi 6.x Hosts:


vSphere 6.0 Command Line Documentation > vSphere Command-Line Interface
Documentation > vSphere Command-Line Interface Concepts and Examples > Managing Files
> Reclaiming Unused Storage Space
https://pubs.vmware.com/vsphere-60/topic/com.vmware.vcli.examples.doc/cli_manage_files.5.6.html

Or alternatively, see:
Using esxcli in vSphere 5.5 and 6.0 to reclaim VMFS deleted blocks on thin-provisioned LUNs
(2057513) http://kb.vmware.com/kb/2057513
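
For the esxcli method, the KB describes reclaiming unused blocks by VMFS volume label (or
UUID); a sketch with a hypothetical datastore name, where the optional reclaim unit is the
number of VMFS blocks unmapped per iteration:

esxcli storage vmfs unmap -l Datastore1 -n 200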


SANsymphony's Automatic Reclamation feature


DataCore Servers keep track of any 'all-zero' write I/O requests sent to Storage Allocation Units
(SAUs) in all Disk Pools. When enough 'all-zero' writes have been detected to cover an entire
SAU's logical address space, that SAU will be immediately assigned as 'free'
(as if it had been manually reclaimed) and made available to the entire Disk Pool for future
(re)use.

No additional 'zeroing' of the Physical Disk or 'scanning' of the Disk Pool is required.

Important technical notes on Automatic Reclamation


The Disk Pool driver has a small amount of system memory that it uses to keep a list of all address
spaces in a Disk Pool that are sent 'all-zero' writes; all other (non-zero) write requests are
ignored by the Automatic Reclamation feature and not included in the in-memory list.

Where all-zero write addresses are detected to be physically 'adjacent' to each other from a
block address point of view, the Disk Pool driver will 'merge' these requests together in the list
so as to keep its size as small as possible. Also, as entire 'all-zeroed' SAUs are re-assigned
back to the Disk Pool, the records of their address spaces are removed from the in-memory list,
making space available for future all-zero writes to other SAUs that are still allocated.

However, if the write I/O pattern of the Hosts means that the Disk Pool receives all-zero writes to
many non-adjacent block addresses, the list will require more space to keep track of them
compared to all-adjacent block addresses. In extreme cases, where the in-memory list can no
longer hold any new all-zero writes (because all the system memory allocated to the
Automatic Reclamation feature has been used), the Disk Pool driver will discard the oldest
records of all-zero writes to accommodate newer records of all-zero write I/O.

Likewise, if a DataCore Server is rebooted for any reason, then the in-memory list is completely
lost and any knowledge of SAUs that were already partially detected as having been written
with all-zeroes will no longer be remembered.

In both of these cases this can mean that, over time, even though an SAU may technically have
been completely overwritten with all-zero writes, the Disk Pool driver does not have a record
covering the entire address space of that SAU in its in-memory list. The SAU will therefore not
be made available to the Disk Pool but will remain allocated to the Virtual Disk until future
all-zero writes happen to re-write the same address spaces that were previously forgotten by
the Disk Pool driver. In these scenarios, a Manual Reclamation will force the Disk Pool to re-read
all SAUs and may detect those now-missing all-zero address spaces.

See the section 'Manual Reclamation' on the next page for more information.


Reclaiming storage by sending all-zero writes to a Virtual Machine's own filesystem
For any VMware ESXi Hosts 4.1 or earlier, or any VMware ESXi Hosts connected to DataCore
Servers running SANsymphony-V 9.0 PSP3 Update 2 or earlier (including all versions of
SANsymphony-V 8.x), a suggestion would be to create a new, dummy virtual machine of an
appropriate size (if there is enough free space available in the VMFS) and then zero-fill it using
the vmkfstools 'eagerzeroedthick' option:

vmkfstools -c [size] -d eagerzeroedthick /vmfs/volumes/[mydummydir]/[mydummy.vmdk]
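
For example, with hypothetical names and size (the target directory must already exist on the
datastore and the size chosen must fit within the free space available):

mkdir /vmfs/volumes/Datastore1/reclaim-dummy
vmkfstools -c 200G -d eagerzeroedthick /vmfs/volumes/Datastore1/reclaim-dummy/reclaim-dummy.vmdk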

Or, if using VMware's own UI, format using the Thick Provision Eager Zeroed option. Refer to
VMware's own documentation on how to do this. Once the formatting has completed, delete
the dummy virtual machine. Then either wait for Automatic Reclamation to take place or run
Manual Reclamation.

See the Performing Reclamation section from the SANsymphony Help:


http://www.datacore.com/SSV-Webhelp/Reclaiming_Virtual_Disk_Space.htm

Note that it is also possible to script manual reclamation using the
Start-DcsVirtualDiskReclamation PowerShell cmdlet.

Reclaiming storage by sending all-zero writes when using Raw Device Mapped Virtual Disks
For Virtual Machines which directly access Virtual Disks via VMware ESXi's Raw Device
Mapping technology, it may be possible to zero-fill the Virtual Machine's own file system
using third-party tools (e.g. sdelete for Microsoft Windows NTFS/FAT file systems, or the
dd command for UNIX/Linux file systems) without having to create a dummy virtual machine.
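
As an illustration, inside a Linux Virtual Machine the dd command could be used along these
lines, with a hypothetical mount point; the zero-fill file is removed afterwards and enough
free space should be left for the guest to keep operating:

dd if=/dev/zero of=/mnt/rdm-volume/zerofill bs=1M
rm /mnt/rdm-volume/zerofill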

SANsymphony's Manual Reclamation feature


Manual reclamation forces the Disk Pool driver to 'read' all SAUs currently assigned to a Virtual
Disk looking for SAUs that contain only all-zero IO data. Once detected, that SAU will be
immediately assigned as 'free' and made available to the entire Disk Pool for future (re)use.

No additional 'zeroing' of the Physical Disk is required.

Note that manual reclamation will create additional 'read' I/O on the Storage Array used by the
Disk Pool. As this process runs at 'low priority', it should not interfere with normal I/O
operations. However, caution is advised, especially when scripting the manual reclamation
process.


Manual Reclamation may still be required even when Automatic Reclamation has taken place (see the
'Automatic Reclamation' section on the previous page for more information).

How much storage will be reclaimed?


It is impossible to predict exactly how many Storage Allocation Units (SAUs) will be reclaimed.
For reclamation of an SAU to take place, it must contain only all-zero block data over the entire
SAU; otherwise it will remain allocated. This is entirely dependent on how and where the Host has
written its data on the DataCore LUN, so DataCore Software can offer no guarantees.

For example, if the Host has written the data in such a way that every allocated SAU contains a
small amount of non-zero block data then no (or very few) SAUs can be reclaimed, even if the
total amount of data is much less than the total amount of assigned SAUs.

It may be possible to use the Host operating system's own defragmentation tools to move any data
that is spread out over the DataCore LUN so that it ends up as one or more large areas of
contiguous non-zero block addresses. This might then leave the DataCore LUN with SAUs that contain
only all-zero data and that can then be reclaimed.

However, care should be taken that the act of defragmenting the data does not itself cause more
SAU allocation as the block data is moved around (i.e. re-written to new areas on the DataCore
LUN) during the re-organization.

Appendix D
Moving from Most Recently Used to either Round
Robin or Fixed Path Selection Policy
This will require Virtual Disks to be unserved from the Host, so planning for the appropriate
downtime will be needed if Virtual Machines have to be stopped. It may be possible to move Virtual
Machines over to different Hosts (using vMotion, for example) before the Virtual Disks are
unserved, allowing the PSP to be changed for one Host at a time. However, please refer to VMware
directly regarding mixing different PSPs on different VMware ESXi Hosts for the same LUN before
attempting this, in case it is not appropriate to your VMware configuration.

1. Unserve all Virtual Disks from the Host from within the SANsymphony Console.
2. At the VMware ESXi Host, rescan all disk devices so that the DataCore Virtual Disks are
   removed, and then remove the Storage Array Type Claim Rule and Storage Array as described on
   page 22 (a hedged command-line sketch is shown after this list).
3. From within the SANsymphony Console, enable the ALUA option on the Host. See 'Changing
   multipath or ALUA support settings for hosts' from the SANsymphony Help:
   http://www.datacore.com/SSV-Webhelp/Multipath_Support.htm
4. Re-serve all Virtual Disks to the Host from within the SANsymphony Console. Note that you
   may need to use the same LUN number and Initiator/Target paths as before.
5. At the VMware ESXi Host, rescan to re-detect the Virtual Disks with ALUA enabled and
   proceed to the appropriate page in this document for either the Fixed or Round Robin Path
   Selection Policy.
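
As a sketch only of the rescan and rule removal in step 2, and assuming the custom claim rule was
created with the values described on page 22 (the SATP, vendor, model and PSP strings below are
assumptions), the ESXi shell commands might look like the following. Always confirm the exact
values configured on your Host with the rule list command before removing anything:

# Rescan all adapters so that the unserved DataCore Virtual Disks are removed
esxcli storage core adapter rescan --all
# List the existing SATP claim rules to confirm the exact values used when the custom rule was created
esxcli storage nmp satp rule list | grep -i DataCore
# Remove the custom claim rule (the values below are assumptions - substitute the values reported above)
esxcli storage nmp satp rule remove -s VMW_SATP_DEFAULT_AA -V DataCore -M "Virtual Disk" -P VMW_PSP_MRU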

Note: Moving from Fixed or Round Robin to Most Recently Used uses the same steps as above, except
that the ALUA option must be unchecked in Step 3.

Remember that no versions of VMware ESXi have been qualified with SANsymphony without the
ALUA option set, and so the Most Recently Used PSP is considered unqualified by DataCore.

Please refer to page 4 regarding all unqualified configurations of VMware ESXi Hosts.

Previous Changes
2017
April
Added
Known Issues all ESX versions Converged Network Adaptors
When using QLogic's Dual-Port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor (CNA)
Disable both the adaptor's BIOS and the 'Select a LUN to Boot from' option.

This was previously documented in 'Known Issues - Third Party Hardware and Software'
http://datacore.custhelp.com/app/answers/detail/a_id/1277

Updated
VMware ESXi Compatibility lists VMware ESXi Path Selection Policies (PSP)
The information regarding the Most Recently Used (MRU) PSP and ESXi 6.x was incorrectly listed as 'Supported'. It
has been corrected to 'Not Qualified'.

February
Added
VMware ESXi compatibility notes
VMware 'Fault Tolerant' or 'High Available' Clusters
Explained a specific configuration set up that DataCore cannot support when using VMware FT or HA clusters and
the reasons for that. This is also referred to again in the 'Known Issues' section.

2016
November
Updated
Appendix C - Reclaiming storage
Automatic and Manual reclamation
These two sections have been re-written with more detailed explanations and technical notes.

October
Updated
The VMware ESXi Host's settings - ISCSI Connections
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port is not
supported (this also includes ESXi 'Port Binding'). The supported configuration example has been updated to make
it more obvious as to what is required (along with the same, corresponding changes made to the unsupported
example so that the comparison is easy to spot).

September
Added
Known Issues - general
There has been a general re-organization of this section separating all issues into subsections determined by the
version of ESXi that the known issue refers to.

Known Issues - 6.x


Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access
This has now been fixed in VMware ESXi 6.0, Patch Release ESXi600-201608001
(https://kb.vmware.com/kb/2145663) and was previously documented in VMware's own internal SR#15597438602.

Known Issues - 5.5
Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access
This affects ESX 5.5 and is documented in VMware's own internal SR#15597438602. Please contact VMware
directly about this, as DataCore is not aware of any fix for ESXi 5.5 at this time.

Updated
The VMware ESXi Host's settings ISCSI Connections
The information that was previously in the 'Known Issues' section regarding connections from multiple NICs
sharing the same IQN has been moved to this section, as it affects all versions of ESX and is not so much a
'Known Issue' as a configuration requirement.

Known Issues ESX 6.x


Unable to access filesystem for MSCS cluster nodes after vMotion
A fix (as well as a workaround) is available from the VMware knowledge base article:
https://kb.vmware.com/kb/2144153

August
Added
Known Issues
ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start or
during LUN rescan. Applies to ESX 6.x, 5.x and 4.x. Please see: http://kb.vmware.com/kb/1016106

July
Added
The DataCore Server's settings
Added link:
Video: Configuring ESX Hosts in the DataCore Management Console
http://datacore.custhelp.com/app/answers/detail/a_id/1637

Updated
This document has been reviewed for SANsymphony 10.0 PSP 5.

VMware ESXi compatibility lists


ESX 4.1 is now 'not supported' for SANsymphony 10.x; it was previously listed as 'unqualified'.
Because ESX 4.x is considered by VMware to be 'End of Availability' (https://kb.vmware.com/kb/2039567)
DataCore would not be able to get assistance from VMware if it were needed for any issues that were found
during 'Self Qualification'.

Known Issues
vMotion causing loss of access to filesystem for MSCS cluster nodes (2144153)
This was previously listed as "Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have
more than one Front End mapping to each DataCore Server may cause unexpected loss of access". A
Knowledgebase article has now been released by VMware https://kb.vmware.com/kb/2144153

April
Updated
Known Issues - VMware 6.0
Storage PDL responses may not trigger path failover in vSphere 6.0
http://kb.vmware.com/kb/2144657.
Note: This affects both vSphere 6.0 and 6.0 U1 customers. A fix is available in 6.0 U2.

February

Updated
List of qualified VMware Versions - Qualification notes on VMware-specific functions
Removed references specific to 'End of Life' versions of SANsymphony-V; this includes all versions of
SANsymphony-V 8.x and any version of 9.x at PSP 3 or earlier.

2015
December
Updated
List of qualified VMware Versions - Qualification notes on VMware-specific functions
Path Selection Policies and VMware ESX 6.x
For ESX 6.x, Fixed and Round Robin Path Selection Policies are both tested and supported by DataCore and both
are also listed on VMware's own Hardware Compatibility List.

VSphere APIs for Storage Awareness (VASA)


For ESX 6.x, VASA is tested and supported by DataCore and is also listed in VMware's own Compatibility Guide.

VSphere APIs for Virtual Volumes (VVOL)


For ESX 6.x, VVOL is tested and supported by DataCore and is also listed in VMware's own Compatibility Guide.

November
Updated
SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see:
End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329

October
Updated
Known Issues VMware ESXi 5.x and 6.x
DataCore have been informed that there is now a hotfix from VMware for the previously documented known
issue Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access (VMware's own SR#15597438602).
Contact VMware for more information.

July
Added
List of qualified VMware ESXi Versions - Notes on qualification
This section has been updated and new information added regarding the definitions of all qualified, unqualified
and not supported labels. A new section on Linux distributions that are no longer in development has also been
added at the end of this section.

Known Issues
Moved some of the information from the Host Configuration section, where problems can arise, into the Known
Issues section. ISCSI Port Binding is no longer considered supported: even if it is configured to use
different subnets (as previously recommended), the sharing of IQNs between different iSCSI Initiators on the
ESXi Hosts cannot be avoided, and this can lead to situations where different IP addresses with the same IQN
try to log into the same DataCore FE Port and will not be able to. Please read the Known Issues section for more detail.

May
Added
Known Issues VMware ESXi 5.x and 6.x
An issue has been identified by VMware regarding Microsoft Clusters in Virtual Machines using SANsymphony-V
Virtual Disks served to more than one path on the same ESX host, which can lead to an unexpected loss of
access. Under heavy load, the new VMFS heartbeat process used by ESXi 5.5 Update 2 and 6.x may fail with a
false ATS miscompare message.

Updated
VMware ESXi 6.x - generally
Sections that apply to only VMware ESXi 6.x have been explicitly labelled to avoid ambiguity.

April
Added
VMware ESXi Path Selection Policies (all)
It has been observed that different versions of ESXi may or may not auto-configure the correct SATP claim rule for
Round Robin or Fixed Path Selection Policies when presented with Virtual Disks from SANsymphony-V. Therefore,
more explicit instructions on how to create custom rules have been added.

Note: Existing SANsymphony-V installations probably do not need to worry about this new information as it does
not conflict with what was stated previously; but DataCore recommend that you review the section just to make
sure that your Virtual Disks are correctly configured.

Updated
List of qualified VMware ESXi Versions
Added VMware ESXi 6.x

Host Settings - VMware ESXi All versions


ISCSI Port binding
Clarified the statement regarding using same subnets as the VMKERNEL port.

Configuring VMware ESXi Path Selection Policies


General Notes: this section has been re-ordered; no new information has been added.

2014 and earlier


December
Added
DataCore Server Settings
Installing a DataCore Server inside a VMware ESXi Virtual Machine
VMware ESXi Path Selection Policies
Which Path Selection Policy does DataCore Software Recommend?
Added some explanation on a frequently asked question based on the differences between Fixed and Round Robin
Path Selection Policies.

The Preferred Server setting when using the PSP


Added more detailed explanation regarding the SANsymphony-V Preferred Server setting and how it applies to
each of the three supported Path Selection Policies (Round Robin, Fixed and MRU).

Updated
Appendix D - Moving from Most Recently Used to Round Robin or Fixed Path Selection Policies
Added more information about how to reduce the likelihood for downtime (by using vMotion).

November
Added
Known Issues
Most of the information was moved from the Known Issues: Third Party Hardware and Software document:
http://datacore.custhelp.com/app/answers/detail/a_id/1277


Updated
List of qualified VMware ESXi versions
Not Supported has now been changed to mean explicitly Not Supported for Mirrored or Dual Virtual Disks.
Single Virtual Disks are now always considered supported.

Appendix B: Reclaiming Storage from Disk Pools


For ESXi 5.5 Hosts, the command to reclaim VMFS deleted blocks has changed since earlier versions of ESXi 5.x. A
link to the appropriate VMware KB article for the later version of ESXi has therefore been added.

July
Updated
VMware ESXi Path Selection Policies all types
The command to verify that a given SATP type had been set was incorrect for the later versions of VMware ESXi. It
was listed as:
esxcli nmp satp listrules -s [SATP_Type]
and should have been listed as:
esxcli storage nmp satp rule list -s [SATP_Type]

VMware ESXi Path Selection Policies Fixed


Added clarifying notes at the start of this section, as the specific requirements for the Host Settings within the
SANsymphony-V Management Console, using the Fixed Path Selection Policy with VMware ESXi, contradict the
general statement (for all other Host Operating Systems) in the SANsymphony-V Release Notes regarding use of
the 'All' setting for the Preferred Server setting.

June
Updated
List of qualified VMware ESXi Versions
Updated to include SANsymphony-V 10.x

May
This document combines all of DataCore's VMware information from older Technical Bulletins into a single
document, including:

Technical Bulletin 5b: VMware ESXi vSphere 4.0.x Hosts.


Technical Bulletin 5c: VMware ESXi vSphere 4.1.x Hosts.
Technical Bulletin 5d: VMware ESXi vSphere 5.x Hosts.
Note: Technical Bulletin 5a: VMware ESXi 2.x and 3.x Hosts contains versions not supported with SANsymphony-
V, so the information is not relevant to this document and has not been included.
Technical Bulletin 8: Formatting Hosts File Systems on Virtual Disks created from Disk Pools.
Technical Bulletin 11: Disk Timeout Settings on Hosts.
Technical Bulletin 16: Reclaiming Space in Disk Pools.

Added
Host Settings: VMware ESXi All Versions:
Notes on VMware iSCSI Port Binding

VMware ESXi Path Selection Policies:


Fixed AP is no longer included as this is not a supported Path Selection Policy with SANsymphony-V.

Fixed is supported (this was inconsistently documented across the different Technical Bulletins) but only with the
Preferred Server setting set to All.

Most Recently Used must only be used without the ALUA option set on the Host. However, no versions of
VMware ESXi, without the ALUA option set, have been qualified with SANsymphony-V, so this Path Selection Policy
is considered unqualified.

Appendix A: This section gives more detail on the Preferred Server and Preferred Path settings with regard to how
they may affect a Host.

Appendix B: This section incorporates information regarding Reclaiming Space in Disk Pools (from Technical
Bulletin 16) that is specific to VMware Hosts.

Appendix C: This section adds additional information regarding VMware's vStorage APIs for Array Integration
(VAAI) with SANsymphony-V.

Appendix D: This section adds more comprehensive steps for Moving from Most Recently Used to Fixed or Round
Robin Path Selection Policy.

Updated
DataCore Server Settings: VMware ESXi 4.0.x Hosts: Regarding Virtual Disk Names.
Host Settings: SCSI Reservation locking between VMware ESXi Hosts.

VMware ESXi Path Selection Policies: Previously, the Preferred Server setting of All was explicitly stated not to be
used within the SANsymphony-V Management Console. However, Fixed requires that the Host's Preferred Server
setting is set to All. Round Robin may use the All setting, although caution is advised; more explanation of why
it may not be advisable is provided in Appendix A.

An overall improvement of the explanations to most of the required Host Settings and DataCore Server Settings.

Technical Bulletin 5d: VMware ESXi vSphere 5.x Hosts

January 2014
Updated
The note on how to move from Most Recently Used (with the ALUA option not checked) to Fixed/RR (with
the ALUA option checked) for a DataCore Disk, with regard to SANsymphony-V 9.0 PSP 3 and later versions.

December 2013
Added
VSphere ESXi 5.5 is qualified and no additional settings (from all previous 5.x versions) are needed. The SCSI
UNMAP primitive is supported from SANsymphony-V 9.0 PSP4.

Updated
DataCore Server configuration settings section (Virtual Disks mapped to more than one Host may need to use the
same LUN number) for SANsymphony-V. Added a warning note at the start of each Path Selection Policy
(PSP), cautioning the user that a VM's Operating System configuration may not be supported by VMware for a
particular PSP (at the time of publication, VMware state that MSCS VMs are not supported for the Round Robin PSP).

April 2013
Removed
All references to SANmelody as this product is now End of Life as of December 31, 2012

March 2013
Added
Use VMFS5 for VSphere Metro Storage Clusters (vMSC).

February 2013
Updated
The 'General notes on path selection policies' section, to allow for different behavior with the VMware vCenter
Integration function of SANsymphony-V.

October 2012
Removed
All but one of the Advanced Settings; all other settings are no longer needed and can be ignored (there is no
requirement to reset or change the existing values for these other settings and they can be left as they are).

July 2012
Added
Support for SANsymphony-V 9.x; no new technical information. Added extra steps to set the default path selection
policy to Fixed instead of MRU under the Fixed/Round Robin path selection policy section. Added note under
General section that:
i. VAAI is now supported - with SANsymphony-V 9.x and ESXi 5.x.
ii. Strengthened warning that MRU is not supported with ALUA

June 2012
Added
Two new settings to be applied under the General section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).

May 2012
Updated
The DataCore Server and Host minimum requirements.

Removed
All references to End of Life versions that are no longer supported as of December 31 2011. Updated notes at the
start of General notes for Path Selection Policies. Updated copyright. Added note to General notes on path
selection policies for ESXi 5.x on selecting the preferred path of Virtual Disk with multiple connections for
VMW_PSP_FIXED to the same DataCore Server.

December 2011
Initial publication of Technical Bulletin.

Technical Bulletin 5c: VMware ESXi vSphere 4.1.x Hosts

June 2013
Added
A warning note at the start of each Path Selection Policy (PSP), cautioning the user that a VM's Operating System
configuration may not be supported by VMware for a particular PSP (at the time of publication, VMware state that
MSCS VMs are not supported for the Round Robin PSP).

April 2013
Removed
All references to SANmelody as this product is now End of Life as of December 31, 2012. Updated the DataCore
Server Configuration Settings; added Preferred Server notes.

July 2012
Added
Support for SANsymphony-V 9.x. No new settings required. Added notes under General section that:
i. VAAI is not supported with SANsymphony-V and ESXi 4.1.
ii. Strengthened warning that MRU is not supported with ALUA


June 2012
Added
Two new settings to be applied under the General section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).

May 2012
Updated
The DataCore Server and Host minimum requirements. Removed all references to End of Life SANsymphony and
SANmelody versions that are no longer supported as of December 31 2011. Added notes at the start of General
notes for Path Selection Policies. Updated copyright. Updated Fixed AP and Round Robin Path Selection Policy
with regard to preferred paths. Existing users should re-check their configurations and make any appropriate
changes as necessary.

November 2011
Updated
URL to VMware SAN Configuration guides changed.

October 2011
Removed
All references to End of Life SANsymphony and SANmelody versions that are no longer supported as of July 31
2011. Moved known issues out of this Technical Bulletin and into the Known Issues: Third Party
Software/Hardware with DataCore Servers document. Added MRU path policy. Added important note on how to
verify path selection policy in each case. For SANsymphony-V, the first 12 characters of the Virtual Disk name no
longer need to be unique.

February 2011
Added
Support for SANsymphony-V 8.x.

September 2010
Initial publication of Technical Bulletin.

Technical Bulletin 5b: VMware ESXi vSphere 4.0.x Hosts

June 2013
Added
A warning note at the start of each Path Selection Policy (PSP), cautioning the user that a VM's Operating System
configuration may not be supported by VMware for a particular PSP (at the time of publication, VMware state that
MSCS VMs are not supported for the Round Robin PSP).

April 2013
Removed
All references to SANmelody as this product is now End of Life as of December 31, 2012

July 2012
Added
Support for SANsymphony-V 9.x. No new settings required. Corrected option for SCSI.CRTimeoutDuringBoot and
added back SCSI.ConflictRetries in ESX(i) Host configuration settings - General.

June 2012
Added
Two new settings to be applied under the General section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).

November 2011
Updated
URI to VMware SAN Configuration guides changed.

October 2011
Removed
All references to End of Life versions that are no longer supported as of July 31 2011. Moved all issues not specific
to configuring Hosts or DataCore Servers out of this Technical Bulletin and into the Known Issues: Third Party
Software/Hardware with DataCore Servers document. Added important note on how to verify path selection
policy in each case. Changed the requirement for the 'Most Recently Used' managed path policy: do not use the
ALUA option.

March 2011
Added
Support for SANsymphony-V 8.x

June 2010
Added
Support for 'Round-Robin' path selection policy with SANsymphony 7.0 PSP 3 Update 4 and SANmelody 3.0 PSP 3
update 4.

December 2009
Added
Support for 'Fixed Path' path selection policy with SANsymphony 7.0 PSP 3 and SANmelody 3.0 PSP 3. Previously
only MRU was supported.

October 2009
Initial publication of Technical Bulletin

COPYRIGHT

Copyright 2017 by DataCore Software Corporation. All rights reserved.


DataCore, the DataCore logo and SANsymphony are trademarks of DataCore Software Corporation. Other DataCore product or service names
or logos referenced herein are trademarks of DataCore Software Corporation. All other products, services and company names mentioned
herein may be trademarks of their respective owners.

ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED AS IS AND USERS MUST TAKE ALL
RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND THE INFORMATION CONTAINED IN THIS DOCUMENT.
NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND
SHALL HAVE NO LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER INFORMATION
REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED WARRANTIES OF MERCHANTABILITY, NON-
INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE
FULLEST EXTENT PERMITTED BY LAW.

No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-readable form without the
prior written consent of DataCore Software Corporation
