May 2017
Basic VMware administration skills are assumed, including how to connect to iSCSI
and/or Fibre Channel Storage Array target ports, as well as the processes of
discovering, mounting and formatting a disk device.
Added
Known Issues (all ESX versions): When connecting ESXi Hosts to DataCore Servers
After upgrading to VMware ESXi 6.0 Update 3, ESX paths will only report as 'Active'. No paths
will report as 'Active (I/O)', regardless of the Path Selection Policy.
SANsymphony (1)

                      9.0 PSP 4 Update 4               10.0 (all versions)
ESXi Version          With ALUA      Without ALUA      With ALUA      Without ALUA
3.x and earlier       Not Supported  Not Supported     Not Supported  Not Supported
SCSI UNMAP
See the vStorage API for Array Integration (VAAI) compatibility table on page 7.
1
SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life.
Please see: End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329
Not Qualified
These VMware/SANsymphony combinations have not been tested against any Mirrored or Dual
Virtual Disk types. DataCore therefore cannot guarantee 'high availability' in any failure
scenario (even if all host-specific settings listed in this document are followed); however,
self-qualification may be possible. For more information on this please see:
http://datacore.custhelp.com/app/answers/detail/a_id/1506
Support for any ESXi versions that are considered End of Life by VMware and are listed as 'Not
Qualified' can still be self-qualified, but only if there is an agreed support contract with
VMware. In this case, however, DataCore Technical Support will not perform root-cause analysis
for SANsymphony for any future issues, but will offer 'best effort' support to get Hosts
accessing any SANsymphony Virtual Disks.
Note: Non-mirrored Virtual Disks are always considered as qualified even for 'Not Qualified'
combinations of VMware/SANsymphony.
Not Supported
These VMware/SANsymphony combinations have usually failed one or more of our 'high
availability' tests when using Mirrored or Dual Virtual Disk types; but they may also simply be
combinations where an Operating System's own requirements (or limitations), due to its age,
make it impractical to test. Entries marked as 'Not Supported' can never be self-qualified.
Mirrored or Dual Virtual Disk types are configured at the end-user's own risk.
Note: Non-mirrored Virtual Disks are always considered as qualified even for 'Not Supported'
combinations of VMware/SANsymphony.
In this case, DataCore Technical Support will help the customer to get the Host Operating
system accessing Virtual Disks, but will not then do any root-cause analysis.
Should a failure occur in this configuration, on that single physical link, then both the DataCore
Mirror IO and the Host IO will fail. While the DataCore Server will send the correct SCSI
notification to the Hosts (LUN_NOT_AVAILABLE), ESX does not interpret this SCSI response as
either a Permanent Device Loss (PDL) or an All-Paths-Down (APD) event.
The Host will therefore continue to try to access the Virtual Disk on the failed path and will not
attempt to 'failover' (HA) or move the VM (fault tolerant). This will result in a loss of access to
the Virtual Disk(s) and DataCore cannot support this type of configuration.
Make sure all Hosts access any DataCore Server FE ports via separate physical links to the
DataCore Server's own MR port connections.
SANsymphony (2)

ESXi Version       9.0 PSP 4 Update 4       10.0 (all versions)
4.x (3)            Does not work            Does not work
Note: For more information on using SCSI UNMAP to reclaim storage from Disk Pools, please
refer to Appendix C: 'Reclaiming Storage' on page 34 for specific instructions.
SANsymphony

                   9.0 PSP 4 Update 4       10.0 PSP 3 and earlier     10.0 PSP 4 and greater
ESXi Version       VASA support for VVOL    VASA support for VVOL      VASA support for VVOL
Note: this also includes vMotion over iSCSI. Also see the 'Getting Started with the DataCore
VASA Provider' section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Getting_Started_with_VASA_Provider.htm
1
The following VAAI specific commands are supported by the DataCore Server:
Atomic Test & Set (ATS), Clone Blocks/Full Copy/XCOPY, Zero Blocks/Write Same and Block Delete/SCSI UNMAP
2
SCSI UNMAP support was included in SANsymphony-V 9.0 PSP 4 Update 4. SANsymphony-V 8.x and all versions
of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see: End of Life Notifications
http://datacore.custhelp.com/app/answers/detail/a_id/1329
3
For VMware ESXi 4.1 Hosts, VAAI must be disabled on the Host otherwise it will cause unexpected behaviour.
Please refer to VMware's own knowledgebase article: "Disabling the VAAI functionality in ESX/ESXi (1033665)"
http://kb.vmware.com/kb/1033665
ESX 6.x
Fixed and RR PSPs have both been tested by DataCore Software and are both listed on VMware's
Hardware Compatibility List (HCL).
MRU has not been tested by DataCore Software and is considered 'Not Qualified'; it is not listed
on VMware's HCL.
ESX 5.x
Fixed and RR PSPs have both been tested by DataCore Software, but only the RR PSP is listed on
VMware's HCL.
MRU has not been tested by DataCore Software and is considered 'Not Qualified'; it is not listed
on VMware's HCL.
These are the Host-specific settings that need to be configured directly on the DataCore Server.
When registering the Host choose the 'VMware ESXi' menu option.
Port roles
Ports used for serving Virtual Disks to Hosts should only have the Front End (FE) role enabled.
Mixing other Port Role types may cause unexpected results, as Ports that only have the FE role
enabled will be turned off when the DataCore Server software is stopped (even if the physical
server remains running). This helps to guarantee that Hosts do not still try to access FE
Ports, for any reason, once the DataCore Software is stopped but the DataCore Server itself
remains running. Any Port with the Mirror and/or Back End role enabled does not shut off when
the DataCore Server software is stopped, but remains active.
Multipathing support
The Multipathing Support option should be enabled so that Mirrored Virtual Disks or Dual
Virtual Disks can be served to Hosts from all available DataCore FE ports. Also see the
Multipathing Support section from the SANsymphony Help: http://www.datacore.com/SSV-
Webhelp/Hosts.htm
Note: Hosts that have non-mirrored Virtual Disks served to them do not need Multipathing
Support enabled unless they have other Mirrored or Dual Virtual Disks served as well.
More information on Preferred Servers and Preferred Paths used by the ALUA function can be
found in Appendix A on page 31.
Virtual Disks LUNs and serving to more than one Host or Port
DataCore Virtual Disks always have their own unique Network Address Authority (NAA)
identifier that a Host can use to manage the same Virtual Disk being served to multiple Ports on
the same Host Server and the same Virtual Disk being served to multiple Hosts.
See the SCSI Standard Inquiry Data section from the online Help for more information on this:
http://www.datacore.com/SSV-Webhelp/Changing_Virtual_Disk_Settings.htm
While DataCore cannot guarantee that a disk device's NAA is used by a Host's operating system
to identify a disk device served to it over different paths, generally we have found that it is.
And while there is sometimes a convention that all paths to the same disk device should always
use the same LUN 'number' to guarantee consistent device identification, this may not be
technically required. Always refer to the Host Operating System vendor's own documentation for
advice on this.
DataCore's Software does, however, always try to create mappings between the Host's ports
and the DataCore Server's Front-end (FE) ports for a Virtual Disk using the same LUN number(5)
where it can. The software first finds the next available (lowest) LUN 'number' for the Host-to-
DataCore-FE mapping combination being applied, and then tries to apply that same LUN
number to all other mappings being attempted while the Virtual Disk is being served.
If any Host-DataCore FE port combination requested at that moment is already using that
same LUN number (e.g. if a Host already has other Virtual Disks served to it), then the
software finds the next available LUN number and applies it to those specific Host-DataCore
FE mappings only.
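The selection logic described above can be sketched as a small shell function. This is a hypothetical illustration of the "lowest LUN number free on every mapping" rule, not DataCore's actual implementation:

```shell
# Hypothetical sketch: given one argument per Host-to-FE-port mapping, where each
# argument is a space-separated list of LUN numbers already in use on that mapping,
# print the lowest LUN number that is free on all of them.
lowest_common_free_lun() {
    lun=0
    while :; do
        free=1
        for used in "$@"; do
            for u in $used; do
                [ "$u" -eq "$lun" ] && free=0
            done
        done
        [ "$free" -eq 1 ] && { echo "$lun"; return; }
        lun=$((lun + 1))
    done
}

# Mapping A already serves LUNs 0 and 1; mapping B serves LUN 0.
lowest_common_free_lun "0 1" "0"   # prints 2
```

If the common number is already taken on one mapping, the real software falls back to the next free number for that specific mapping only, as described above.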
5
The software will also try to match a LUN 'number' for all DataCore Server Mirror Port mappings of a Virtual Disk
too, although the Host does not 'see' these mirror mappings and so this does not technically need to be the same
as the Front End port mappings (or indeed as other Mirror Path mappings for the same Virtual Disk). Having Mirror
mappings using different LUNs has no functional impact on the Host or DataCore Server at all.
Note: Older versions (of VMware ESXi) may require different Host settings when compared to
newer versions. When a setting or configuration change is listed for one version but not another,
then it is only required for that specific version of VMware ESXi. If you have upgraded from an
older version and a specific setting is no longer documented for your newer version, assume
that no further changes are needed for those settings and leave them as they were.
iSCSI Connections
TCP Ports
Make sure TCP Port 3260 is opened for all iSCSI Communication to the DataCore Server.
See the 'TCP and UDP Ports' section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/windows_security_settings_disclosure.htm
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server
Front-end port are not supported (this also includes ESXi 'Port Binding').
The Front End port will only accept the first connection from a given IQN that attempts to log
in to it, where a unique iSCSI Session ID (ISID) is created for that connection. All subsequent
connections that then come from a different NIC that happens to share the same IQN as the
first login will cause an ISID conflict and will be rejected by the DataCore Server. After that, no
further iSCSI logins will be possible for this IQN. This may cause unexpected disconnects
between the Host and the DataCore Server for those connections.
It is important to note that if the first successful connection gets disconnected for any reason
(e.g. by a SCSI reset), then one of the other NICs - sharing the same IQN - may re-attempt a
login and, if successful, will take the session for itself. This will now block the previously-
connected NIC from being able to re-connect and it will now remain disconnected.
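To confirm which IQN each iSCSI adapter on a Host is using (and so check that no two interfaces present the same IQN to the same FE port), the standard esxcli iSCSI namespace can be queried. A sketch; the adapter name below is a placeholder:

```shell
# List the Host's iSCSI adapters (software and hardware):
esxcli iscsi adapter list
# Show the configuration - including the IQN ('Name' field) - of one adapter.
# vmhba64 is a placeholder; substitute your own adapter name from the list above:
esxcli iscsi adapter get -A vmhba64
```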
See the following pages for examples of qualified and not-supported configurations:
192.168.1.1 (iqn.esx1)
192.168.2.1 (iqn.esx1)
192.168.1.2 (iqn.esx1)
192.168.2.2 (iqn.esx1)
There are, in this example, two DataCore Servers, each with two Front-end ports with their own
corresponding IP addresses and IQNs:
192.168.1.101 (iqn.dcs1-1)
192.168.2.101 (iqn.dcs1-2)
192.168.1.102 (iqn.dcs2-1)
192.168.2.102 (iqn.dcs2-2)
Each Network Interface of the ESX Host should connect to a separate Front-end Port on both
DataCore Servers;
Also note that this kind of set up will make things simpler to manage and troubleshoot if
connection problems occur in the future. There is no case in the above example where the
other Network Interface on ESX1 is also trying to connect to the other Front-end port on the
same DataCore Server (i.e. there are no multiple ISCSI session connections).
In this case, both of the Network Interfaces from ESX1 have been configured to connect to the
same Network Interface on DataCore Server 1, in this case iqn.dcs1-1, which will not work as
expected.
DataCore Server 1 will accept only one of the connections and the other will be rejected; any
subsequent interruption of that iSCSI connection may then result in either of the two ESX
Network Interfaces being able to (re)connect to iqn.dcs1-1, forcing the other ESX connection to
be rejected. In other words, there is no guarantee that the ESX1 connection that was previously
logged in to iqn.dcs1-1 will be able to reconnect if it is disconnected for any reason and the
other Network Interface logs in before it. A solution in this case may be to team the NICs
together, as the teamed connections will then have a single IP address and be recognized by
the DataCore Server as a single NIC.
Also See the Important Notes section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Configuring_iSCSI_Connections.htm
Advanced Settings
Note: A reboot may not be needed if any of these settings are changed from a previous value;
please check with VMware first.
Disk.DiskMaxIOSize = 512
ESX 4.1.x
From within the ESXi Configuration Tab under Advanced Settings change and/or verify the
following values are set:
Disk.DiskMaxIOSize = 512
Disk.QFullSampleSize = 32
Disk.QFullThreshold = 8
Disk.UseLunReset = 1
Disk.UseDeviceReset = 0
SCSI.CRTimeoutDuringBoot = 10000
ESX 4.0.x
From within the ESXi Configuration Tab under Advanced Settings change and/or verify the
following values are set:
Disk.DiskMaxIOSize = 512
Disk.QFullSampleSize = 32
Disk.QFullThreshold = 8
Disk.UseLunReset = 1
Disk.UseDeviceReset = 0
SCSI.CRTimeoutDuringBoot = 1
SCSI.ConflictRetries = 200
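The Advanced Settings above can also be applied from the console. A sketch: on ESX/ESXi 4.x the classic esxcfg-advcfg tool is used; on ESXi 5.x and later the equivalent is `esxcli system settings advanced set`. Verify the option paths against your own build before applying:

```shell
# ESX/ESXi 4.x console equivalents of the Advanced Settings listed above:
esxcfg-advcfg -s 512 /Disk/DiskMaxIOSize
esxcfg-advcfg -s 32  /Disk/QFullSampleSize
esxcfg-advcfg -s 8   /Disk/QFullThreshold
esxcfg-advcfg -s 1   /Disk/UseLunReset
esxcfg-advcfg -s 0   /Disk/UseDeviceReset
# ESXi 5.x and later syntax for the same kind of change:
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 512
```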
Note: Some PSPs are not supported by VMware themselves for certain types of Virtual Machine
Operating Systems. DataCore cannot take responsibility for these VMware-unsupported Virtual
Machine/PSP combinations should any issues occur.
See http://kb.vmware.com/kb/1011340
If the current SATP is different from what the new PSP requires (for example, moving from 'Most
Recently Used' to 'Round Robin'), then DataCore recommends that you unserve the Virtual Disks
first, delete the old SATP rule, add the new one, and then serve the Virtual Disks back again.
Note: Changing the SATP type may also require that the ALUA option be changed on the Host,
within the SANsymphony Console, from its current setting. In that case, see the 'After changing
the settings' section of 'Changing multipath or ALUA support settings for hosts' from the
SANsymphony Help: http://www.datacore.com/SSV-Webhelp/Hosts.htm
Using different PSPs for the same Virtual Disk on multiple Hosts
While this is technically possible, it is not supported and DataCore cannot guarantee the
behavior of the VMware ESXi Hosts in this case. Always use the same PSP for the same Virtual
Disk on all VMware ESXi Hosts that it is served to.
Note: Auto-detection by VMware ESXi to choose the correct Path Selection Policy and/or
Storage Array Type Plugin for a given Virtual Disk can be inconsistent in older versions;
VMware ESXi may, for example, default to Most Recently Used for any Virtual Disk mapped to
the Host, regardless of whether the ALUA option has been enabled or not. This type of
mistake will cause unexpected results during any failover event.
It is, therefore, important to always verify manually, that both the correct PSP and SATP have
been selected. This can be done directly on the VMware vSphere client GUI or by running one of
the following two commands on the VMware ESXi console:
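These two commands can be sketched as follows; the grep -C context switch is described below, and the NAA value is a placeholder to be replaced with your own Virtual Disk's identifier from its SCSI Standard Inquiry Data:

```shell
# List devices carrying the DataCore SCSI Vendor string, with surrounding
# context so the PSP and SATP lines are shown:
esxcli storage nmp device list | grep -C 3 DataCore
# Or search by the Virtual Disk's NAA identifier (substitute your own value):
esxcli storage nmp device list | grep -C 3 naa.<identifier>
```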
The first command lists all disk devices served to the VMware ESXi Host that have the
SANsymphony 'DataCore' SCSI Vendor string; the second command does the same thing but
uses DataCore's unique NAA identifier. Both of these are part of any SANsymphony Virtual
Disk's SCSI Standard Inquiry Data.
See the SCSI Standard Inquiry Data section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Changing_Virtual_Disk_Settings.htm
The additional -C switch for the grep command will display an additional 3 lines of output
above and below the line on which the searched-for string appears, which should then include
the Path Selection Policy and the Storage Array Type Plugin. Increase this number to display
more of the Virtual Disk's properties as required.
If using SANsymphony's own VMware vCenter Integration, searching by the NAA identifier is
the only way to list the Virtual Disks on the command line.
Round Robin can be configured using either the 'default' SATP type or by configuring a custom
SATP rule.
esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA
-c tpgs_on -P VMW_PSP_RR
This custom SATP rule can be used for all Virtual Disks from any DataCore Servers when using
the Round Robin PSP.
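As an alternative to a SATP-wide rule, the PSP can also be set per device and then verified; a sketch, where the device identifier is a placeholder for your own Virtual Disk's NAA value:

```shell
# Set Round Robin on one device only (substitute your own NAA identifier):
esxcli storage nmp device set -d naa.<identifier> -P VMW_PSP_RR
# Verify the device's current PSP and SATP:
esxcli storage nmp device list -d naa.<identifier>
```

A per-device change takes effect immediately but does not cover devices served later, which is why the custom SATP rule above is the more common approach.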
Note: Round Robin is only qualified with the ALUA option enabled on the VMware Host from
within the DataCore Server's Console.
See Changing multipath or ALUA support settings for hosts from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Multipath_Support.htm
1
This example is taken from VMware ESXi version 5.5
Which Preferred Server setting on the DataCore Server should I use with Round Robin?
DataCore recommends, when using the Round Robin PSP and configuring your Hosts for the
first time, to set an explicit DataCore Server as the Preferred Server or leave the 'Auto select'
setting configured, and not to use the 'All' setting.
When the Host's Preferred Server setting is either 'Auto Select', or is using an explicitly named
DataCore Server, only the Host paths that are connected to either the first DataCore Server
listed in the Virtual Disk's properties (for 'Auto Select') or the named DataCore Server
respectively are set as Active Optimized. The Host's paths connected to the other DataCore
Server are set as 'Active Non-optimized'.
VMware Hosts will only send IO to 'Active Optimized' paths when there is a choice between
that or 'Active Non-optimized'.
Caution is therefore advised when using the 'All' setting, as this allows the VMware Host to
send I/O to all paths on all DataCore Servers for any served Virtual Disk. While this may seem
preferable, in configurations where there are significant path distances between servers (e.g.
across remote sites), or where the speed of links between remote servers is significantly slower
than links between local servers on the same site, longer I/O wait times may be encountered
on paths to remote servers. These delays then hold up I/O within the same request that was
sent via paths to local servers, resulting in significant overall I/O latency.
Testing is advised.
Please see Appendix A 'Notes on Preferred Server and Preferred Path settings' on page 31 for a
more detailed explanation when using the 'All' setting.
Also see the Preferred Servers section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Port_Connections_and_Paths.htm
The Fixed PSP can be configured using either the 'default' SATP type or by configuring a custom
SATP rule.
esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA
-c tpgs_on -P VMW_PSP_FIXED
This custom SATP rule can be used for all Virtual Disks from any DataCore Servers when using
the Fixed PSP.
Note: Fixed PSP is only qualified with the ALUA option enabled on the VMware Host from within
the DataCore Server's Console.
See Changing multipath or ALUA support settings for hosts from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Multipath_Support.htm
1
This example is taken from VMware ESXi version 5.5
Which Preferred Server setting on the DataCore Server should I use with Fixed PSP?
Unlike Round Robin, where DataCore recommends (initially) not using the 'All' Preferred Server
setting, when using the Fixed PSP the 'All' setting is mandatory and no other Preferred Server
setting is qualified.
This is because the Fixed PSP always requires an 'Active Optimized' path to fail over or fail
back to for it to work as expected.
Note: The Fixed PSP will not send IO to all 'Active Optimized' paths like the Round Robin PSP
does. The actual 'active' path used by the VMware Host that is using the Fixed PSP is configured
on the ESX Host directly and is not controlled by the DataCore Server. Please refer to VMware's
own documentation on how to configure the 'active' path when using the Fixed PSP.
Please see Appendix A 'Notes on Preferred Server and Preferred Path settings' on page 31 for a
more detailed explanation when using the 'All' setting.
Also see the Preferred Servers section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Port_Connections_and_Paths.htm
The Most Recently Used PSP can be configured using either the 'default' SATP type or by
configuring a custom SATP rule.
This custom SATP rule can be used for all Virtual Disks from any DataCore Servers when using
the Most Recently Used PSP.
Note: Most Recently Used is only qualified with the ALUA option disabled on the VMware
Host from within the DataCore Server's Console.
See Changing multipath or ALUA support settings for hosts from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Multipath_Support.htm
1
This example is taken from VMware ESXi version 5.5
Which Preferred Server setting on the DataCore Server should I use with Most Recently Used?
Because the ALUA option is not supported when using the Most Recently Used PSP it must
never be enabled on the Host. The Preferred Server setting, which controls the ALUA state of a
given path to a Host from a DataCore Server, will therefore be ignored by the Host.
Note: The actual 'active' path used by the VMware Host that is using the Most Recently Used
PSP is configured on the ESX Host directly and is not controlled by the DataCore Server. Please
refer to VMware's own documentation on how to configure the 'active' path when using the
Most Recently Used PSP.
Also see the Preferred Servers section from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Port_Connections_and_Paths.htm
Some of the issues here have been found during DataCore's own testing, but many others are
issues reported by DataCore Software customers, where a specific problem had been identified
and then subsequently resolved.
DataCore cannot be held responsible for incorrect information regarding VMware products. No
assumption should be made that DataCore has direct communication with VMware regarding
the issues listed here, and we always recommend that users contact VMware directly to see
if there are any updates or fixes since they were reported to us.
For known issues with DataCore's own Software products, please refer to the relevant DataCore
Software Components release notes.
After upgrading to VMware ESXi 6.0 Update 3, ESX paths will only report as 'Active'. No paths will report as
'Active (I/O)', regardless of the Path Selection Policy.
VMware has verified this as a cosmetic bug in ESXi that does not affect IO of either the ESX Hosts or VMs, and
their engineering team is currently working on a solution.
The ESXi 'esxtop' command (e.g. using the 'd' or 'u' switches) will show activity on the expected paths and/or
devices.
Configurations of VMware Clusters (HA or FT) that have Host connections to any DataCore Server Front End (FE)
Port which share a 'physical link' with the DataCore Server's own Mirror (MR) connections (between the
DataCore Servers) are not supported.
Please see the section 'VMware 'Fault Tolerant' or 'High Available' Clusters' on page 6 for more specific
information.
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port are
not supported (this also includes ESXi 'Port Binding').
Please see the iSCSI Connections section on page 12 for more specific information, with examples.
Storage PDL responses may not trigger path failover in vSphere 6.0.0 and 6.0 Update 1.
This has now been fixed by VMware. See http://kb.vmware.com/kb/2144657.
VHBAs and other PCI devices may stop responding when using Interrupt Remapping.
See http://kb.vmware.com/kb/1030265.
Under heavy load the VMFS heartbeat may fail with a 'false' ATS miscompare message.
The ESXi VMFS 'heartbeat' used to use normal SCSI reads and writes to perform its function. A change in the
heartbeat method, released in ESXi 5.5 Update 2 and ESXi 6.0, uses ESXi's VAAI ATS commands instead, sent
directly to the storage array (i.e. the DataCore Server). DataCore Servers do not require (and so do not support)
these ATS commands. DataCore therefore recommends disabling the VAAI ATS heartbeat setting; see
http://kb.vmware.com/kb/2113956.
If your ESXi Hosts are connected to other storage arrays, contact VMware to see if it is safe to disable this setting
for these arrays.
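Disabling the ATS-based heartbeat is done through the advanced option named in VMware KB 2113956; a sketch, to be checked against the KB article for your exact ESXi build:

```shell
# Disable ATS-based VMFS heartbeating (per VMware KB 2113956);
# a value of 0 reverts the Host to SCSI read/write heartbeating:
esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5
# Verify the current value:
esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5
```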
Significant numbers of Virtual Machines all running on the same Virtual Disk may result in excessive SCSI
reservation requests leading to reservation conflicts between Hosts sharing the Virtual Disk which may lead to
increased I/O latency.
Reduce the number of Virtual Machines running on a single Virtual Disk, and ensure that ESX Hosts with the
closest IO path to the DataCore Server all access the same shared Virtual Disk, as this will also help to reduce the
potential for excessive SCSI Reservation conflicts. Also see: http://kb.vmware.com/kb/1005009
DataCore Software recommends using VAAI, where the 'Atomic Test and Set (ATS) primitive' is used instead, as
this is a much better method for locking VMFS Datastores on Virtual Disks when compared to the normal SCSI
Reservation process.
Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access.
A fix is available from VMware. See https://kb.vmware.com/kb/2145663 for more information.
The SCSI-3 Persistent Reserve tests fail for Windows 2012 Microsoft Clusters running in VMware ESXi Virtual
Machines.
This is expected. See http://kb.vmware.com/kb/1037959 specifically read the 'additional notes' (under the
section 'VMware vSphere support for running Microsoft clustered configurations').
ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start
or during LUN rescan.
See http://kb.vmware.com/kb/1016106.
Configurations of VMware Clusters (HA or FT) that have Host connections to any DataCore Server Front End (FE)
Port which share a 'physical link' with the DataCore Server's own Mirror (MR) connections (between the
DataCore Servers) are not supported.
Please see the section 'VMware 'Fault Tolerant' or 'High Available' Clusters' on page 6 for more specific
information.
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port are
not supported (this also includes ESXi 'Port Binding').
Please see the iSCSI Connections section on page 12 for more specific information, with examples.
VHBAs and other PCI devices may stop responding when using Interrupt Remapping.
See http://kb.vmware.com/kb/1030265.
Under heavy load the VMFS heartbeat may fail with a 'false' ATS miscompare message.
The ESXi VMFS 'heartbeat' used to use normal SCSI reads and writes to perform its function. A change in the
heartbeat method, released in ESXi 5.5 Update 2 and ESXi 6.0, uses ESXi's VAAI ATS commands instead, sent
directly to the storage array (i.e. the DataCore Server). DataCore Servers do not require (and so do not support)
these ATS commands. DataCore therefore recommends disabling the VAAI ATS heartbeat setting; see
http://kb.vmware.com/kb/2113956.
If your ESXi Hosts are connected to other storage arrays, contact VMware to see if it is safe to disable this setting
for these arrays.
Significant numbers of Virtual Machines all running on the same Virtual Disk may result in excessive SCSI
reservation requests leading to reservation conflicts between Hosts sharing the Virtual Disk which may lead to
increased I/O latency.
Reduce the number of Virtual Machines running on a single Virtual Disk, and ensure that ESX Hosts with the
closest IO path to the DataCore Server all access the same shared Virtual Disk, as this will also help to reduce the
potential for excessive SCSI Reservation conflicts. Also see: http://kb.vmware.com/kb/1005009
DataCore Software recommends using VAAI, where the 'Atomic Test and Set (ATS) primitive' is used instead, as
this is a much better method for locking VMFS Datastores on Virtual Disks when compared to the normal SCSI
Reservation process.
Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access.
A fix is available from VMware. See https://kb.vmware.com/kb/2145663 for more information.
The SCSI-3 Persistent Reserve tests fail for Windows 2012 Microsoft Clusters running in VMware ESXi Virtual
Machines.
This is expected. See http://kb.vmware.com/kb/1037959 specifically read the 'additional notes' (under the
section 'VMware vSphere support for running Microsoft clustered configurations').
ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start
or during LUN rescan.
See http://kb.vmware.com/kb/1016106.
iSCSI Patches required for ESXi 4.0 Hosts connected to DataCore Servers
VMware ESXi 4.0, Patch ESXi400-200906413-BG: http://kb.vmware.com/kb/1012232
VMware ESXi 4.0, Patch ESXi400-201003401-BG: http://kb.vmware.com/kb/1019492
ESXi does not support LUNs (i.e. SANsymphony Virtual Disks) greater than 2 terabytes.
See: http://kb.vmware.com/kb/3371739
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port are
not supported (this also includes ESXi 'Port Binding').
Please see the iSCSI Connections section on page 12 for more specific information, with examples.
VHBAs and other PCI devices may stop responding when using Interrupt Remapping.
See http://kb.vmware.com/kb/1030265.
ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start
or during LUN rescan.
See http://kb.vmware.com/kb/1016106.
Significant numbers of Virtual Machines all running on the same Virtual Disk may result in excessive SCSI
reservation requests leading to reservation conflicts between Hosts sharing the Virtual Disk which may lead to
increased I/O latency.
Reduce the number of Virtual Machines running on a single Virtual Disk, and ensure that ESX Hosts with the
closest IO path to the DataCore Server all access the same shared Virtual Disk, as this will also help to reduce the
potential for excessive SCSI Reservation conflicts. Also see: http://kb.vmware.com/kb/1005009
It is up to the Host's own Operating System or Failover Software to determine which DataCore
Server is its preferred server.
If for any reason the Storage Source on the preferred DataCore Server becomes unavailable,
and the Host Access for the Virtual Disk is set to Offline or Disabled, then the other DataCore
Server will be designated the Active Optimized side. The Host will be notified by both
DataCore Servers that there has been an ALUA state change, forcing the Host to re-check the
ALUA state of both DataCore Servers and act accordingly.
If the Storage Source on the preferred DataCore Server becomes unavailable but the Host
Access for the Virtual Disk remains Read/Write (for example, if only the Storage behind the
DataCore Server is unavailable but the FE and MR paths are all connected, or if the Host
physically becomes disconnected from the preferred DataCore Server, e.g. a Fibre Channel or
iSCSI cable failure), then the ALUA state will not change for the remaining, Active Non-
optimized side. However, in this case, the DataCore Server will not prevent access to the Host,
nor will it change the way READ or WRITE IO is handled compared to the Active Optimized
side; but the Host will still register this DataCore Server's Paths as Active Non-Optimized, which
may (or may not) affect how the Host behaves generally.
In the case where the Preferred Server is set to All, then both DataCore Servers are designated
Active Optimized for Host IO.
All IO requests from a Host will use all Paths to all DataCore Servers equally, regardless of the
distance that the IO has to travel to the DataCore Server. For this reason, the All setting is not
normally recommended. If a Host has to send a WRITE IO to a remote DataCore Server (where
the IO Path is significantly distant compared to the other, local DataCore Server), then
significant WAIT times can accrue: the IO must be sent across the SAN to the remote DataCore
Server; the remote DataCore Server must mirror it back to the local DataCore Server; the
mirror write must be acknowledged from the local DataCore Server to the remote DataCore
Server; and finally the acknowledgement must be sent back to the Host across the SAN.
The benefits of being able to use all Paths to all DataCore Servers for all Virtual Disks are not
always clear cut. Testing is advised.
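As a rough illustration of why the All setting can hurt WRITE latency, the following sketch adds up hypothetical one-way link latencies for the two cases described above. All figures are invented for the example; real differences depend entirely on the actual SAN link distances.

```shell
# Hypothetical one-way latencies in microseconds (illustration only).
local_link=100     # Host <-> local DataCore Server
remote_link=2000   # Host <-> remote DataCore Server (also used for the mirror link)

# WRITE sent to the local server:
# Host -> local, local -> remote (mirror), remote -> local (ack), local -> Host (ack)
local_write=$((local_link + remote_link + remote_link + local_link))

# WRITE sent to the remote server:
# Host -> remote, remote -> local (mirror), local -> remote (ack), remote -> Host (ack)
remote_write=$((remote_link + remote_link + remote_link + remote_link))

echo "WRITE via local DataCore Server:  ${local_write}us"
echo "WRITE via remote DataCore Server: ${remote_write}us"
```

With these example figures, sending the WRITE to the remote DataCore Server nearly doubles the round trip, which is why testing is advised before using the All setting across distant sites.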
So, for example, if the Preferred Server is designated as DataCore Server A but the Preferred
Paths are designated as DataCore Server B, then DataCore Server B will be the Active
Optimized side, not DataCore Server A.
In a two-node Server group there is usually nothing to be gained by making the Preferred Path
setting different from the Preferred Server setting, and doing so may also cause confusion when
trying to diagnose path problems, or when redesigning your DataCore SAN with regard to Host
IO Paths.
For Server Groups that have three or more DataCore Servers, and where one (or more) of these
DataCore Servers shares Mirror Paths with other DataCore Servers, setting the Preferred Path
makes more sense.
If, for example, DataCore Server A has two mirrored Virtual Disks, one with DataCore Server B
and one with DataCore Server C, and DataCore Server B also has a mirrored Virtual Disk with
DataCore Server C, then using just the Preferred Server setting to designate the Active
Optimized side for the Host's Virtual Disks becomes more complicated. In this case the
Preferred Path setting can be used to override the Preferred Server setting for a much more
granular level of control.
The smaller the SAU size, the larger the number of indexes required by the Disk Pool driver to
keep track of the equivalent amount of allocated storage compared to a Disk Pool with a
larger SAU size; e.g. a Disk Pool using a 32MB SAU size potentially requires four times as many
indexes as one using the default 128MB SAU size.
As SAUs are allocated for the very first time, the Disk Pool needs to update these indexes, which
may cause a slight delay in IO completion that might be noticeable on the Host. However, this
will depend on a number of factors, such as the speed of the physical disks, the number of
Hosts accessing the Disk Pool and their IO READ/WRITE patterns, and the number of Virtual
Disks in the Disk Pool and their corresponding Storage Profiles.
Therefore, DataCore usually recommends using the default SAU size (128MB), as it is a good
compromise between physical storage allocation and the IO overhead of the initial SAU
allocation index updates. Should a smaller SAU size be preferred, the configuration should be
tested to make sure that the potentially increased number of initial SAU allocations does not
impact overall Host performance.
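The index-count arithmetic above can be sketched as follows; the 1TB allocation figure is just an example:

```shell
# Illustration only: indexes needed to track 1TB of allocated storage
# at two different SAU sizes.
allocated_mb=$((1024 * 1024))           # 1TB expressed in MB

indexes_default=$((allocated_mb / 128)) # default 128MB SAU size
indexes_small=$((allocated_mb / 32))    # smaller 32MB SAU size

echo "128MB SAUs: ${indexes_default} indexes"  # 8192
echo "32MB SAUs:  ${indexes_small} indexes"    # 32768 (four times as many)
```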
Or alternatively, see:
Using esxcli in vSphere 5.5 and 6.0 to reclaim VMFS deleted blocks on thin-provisioned LUNs
(2057513) http://kb.vmware.com/kb/2057513
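From an ESXi 5.5 or 6.0 shell, the reclaim described in KB 2057513 can be run with esxcli; the datastore label below is a placeholder:

```shell
# Reclaim deleted blocks on a thin-provisioned VMFS datastore (ESXi 5.5/6.0).
# "MyDatastore" is a placeholder; -n sets the number of VMFS blocks
# unmapped per iteration (optional).
esxcli storage vmfs unmap -l MyDatastore -n 200
```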
No additional 'zeroing' of the Physical Disk or 'scanning' of the Disk Pool is required.
Where all-zero write addresses are detected to be physically 'adjacent' to each other from a
block address point of view, the Disk Pool driver will 'merge' these requests together in the list
so as to keep its size as small as possible. Also, as entire 'all-zeroed' SAUs are re-assigned
back to the Disk Pool, the records of their address spaces are removed from the in-memory list,
making space available for future all-zero writes to other SAUs that are still allocated.
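The merging behaviour can be pictured with a simple range-coalescing sketch. This is an illustration only: the `start end` block ranges are hypothetical and this is not the driver's actual data structure.

```shell
# Coalesce contiguous/overlapping block ranges, analogous to how the
# Disk Pool driver merges adjacent all-zero write records in its list.
printf '%s\n' '0 128' '128 256' '512 640' '640 768' '1024 1152' |
sort -n |
awk '
  NR == 1 { s = $1; e = $2; next }
  $1 <= e { if ($2 > e) e = $2; next }    # adjacent/overlapping: extend
          { print s, e; s = $1; e = $2 }  # gap: emit the merged range
  END     { print s, e }
'
# Five records collapse into three merged records:
#   0 256
#   512 768
#   1024 1152
```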
However, if the write I/O patterns of the Hosts mean that the Disk Pool receives all-zero writes
to many non-adjacent block addresses, the list will require more space to keep track of them
than it would for all-adjacent block addresses. In extreme cases, where the in-memory list can
no longer hold any new all-zero writes (because all the system memory allocated for the
Automatic Reclamation feature has been used), the Disk Pool driver will discard the oldest
records of all-zero writes to accommodate newer records of all-zero write I/O.
Likewise, if a DataCore Server is rebooted for any reason, the in-memory list is completely lost,
and any knowledge of SAUs that had already been partially detected as written with all-zeroes
is no longer remembered.
In both of these cases this can mean that, over time, even though an SAU may technically have
been completely overwritten with all-zero writes, the Disk Pool driver does not have records
that cover the entire address space of that SAU in its in-memory list. The SAU will therefore not
be made available to the Disk Pool but will remain allocated to the Virtual Disk until future all-
zero writes happen to re-write the same address spaces that were previously forgotten by the
Disk Pool driver. In these scenarios, a Manual Reclamation will force the Disk Pool to re-read all
SAUs and perhaps detect those now-missing all-zero address spaces.
See the section 'Manual Reclamation' on the next page for more information.
Or, if using VMware's own UI, format using the Thick Provision Eager Zeroed option; refer to
VMware's own documentation on how to do this. Once the formatting has completed, delete
the dummy virtual machine, then either wait for an Automatic Reclamation to take place or run
a Manual Reclamation.
Note that it is also possible to script manual reclamation using the Start-
DcsVirtualDiskReclamation PowerShell cmdlet.
Note that manual reclamation will create additional 'read' I/O on the Storage Array used by the
Disk Pool; as this process runs at 'low priority', it should not interfere with normal I/O
operations. However, caution is advised, especially when scripting the manual reclamation
process.
Manual Reclamation may still be required even when Automatic Reclamation has taken place
(see the 'Automatic Reclamation' section on the previous page for more information).
For example, if the Host has written the data in such a way that every allocated SAU contains a
small amount of non-zero block data then no (or very few) SAUs can be reclaimed, even if the
total amount of data is much less than the total amount of assigned SAUs.
It may be possible to use the Host operating system's own defragmentation tools to move any
data that is spread out over the DataCore LUN so that it ends up as one or more large areas of
contiguous non-zero block addresses. This might then leave the DataCore LUN with SAUs that
only contain all-zero data, which can then be reclaimed.
However, care should be taken that the act of defragmenting the data does not itself cause
more SAU allocation as the block data is moved around (i.e. re-written to new areas on the
DataCore LUN) during the re-organization.
1. Unserve all Virtual Disks from the Host from within the SANsymphony Console.
2. At the VMware ESXi Host, rescan all disk devices so that the DataCore Virtual Disks are
removed, and then remove the Storage Array Type Claim Rule and Storage Array as
described on page 22.
3. From within the SANsymphony Console, enable the ALUA option on the Host. See
Changing multipath or ALUA support settings for hosts from the SANsymphony Help:
http://www.datacore.com/SSV-Webhelp/Multipath_Support.htm
4. Re-serve all Virtual Disks to the Host from within the SANsymphony Console. Note: you
may need to use the same LUN numbers and Initiator/Target paths as before.
5. At the VMware ESXi Host, rescan to re-detect the Virtual Disks with ALUA enabled and
proceed to the appropriate page in this document for either the Fixed or Round Robin
Path Selection Policy.
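The host-side parts of steps 2 and 5 can be performed from the ESXi shell. The claim rule parameters below are assumptions: use the SATP name and vendor/model strings from the rule that was actually created (see page 22).

```shell
# Step 2: rescan so the unserved DataCore Virtual Disks are removed ...
esxcli storage core adapter rescan --all
# ... then remove the existing (non-ALUA) claim rule. The SATP name and
# vendor/model strings shown here are assumptions, not definitive values.
esxcli storage nmp satp rule remove -s VMW_SATP_DEFAULT_AA -V DataCore -M "Virtual Disk"

# Steps 3 and 4 are carried out in the SANsymphony Console.

# Step 5: rescan again to re-detect the Virtual Disks with ALUA enabled.
esxcli storage core adapter rescan --all
```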
Note: To move from Fixed or Round Robin to Most Recently Used uses the same steps as above
but the ALUA option must be unchecked in Step #3.
Remember that no versions of VMware ESXi have been qualified with SANsymphony without the
ALUA option set, and so the Most Recently Used PSP is considered unqualified by DataCore.
Please refer to page 4 regarding all unqualified configurations of VMware ESXi Hosts.
This was previously documented in 'Known Issues - Third Party Hardware and Software'
http://datacore.custhelp.com/app/answers/detail/a_id/1277
Updated
VMware ESXi Compatibility lists VMware ESXi Path Selection Policies (PSP)
The information regarding the Most Recently Used (MRU) PSP and ESXi 6.x was incorrectly listed as 'Supported'. It
has been corrected to 'Not Qualified'.
February
Added
VMware ESXi compatibility notes
VMware 'Fault Tolerant' or 'High Available' Clusters
Explained a specific configuration set up that DataCore cannot support when using VMware FT or HA clusters and
the reasons for that. This is also referred to again in the 'Known Issues' section.
2016
November
Updated
Appendix C - Reclaiming storage
Automatic and Manual reclamation
These two sections have been re-written with more detailed explanations and technical notes.
October
Updated
The VMware ESXi Host's settings - ISCSI Connections
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port is not
supported (this also includes ESXi 'Port Binding'). The supported configuration example has been updated to make
it more obvious as to what is required (along with the same, corresponding changes made to the unsupported
example so that the comparison is easy to spot).
September
Added
Known Issues - general
There has been a general re-organization of this section separating all issues into subsections determined by the
version of ESXi that the known issue refers to.
Updated
The VMware ESXi Host's settings ISCSI Connections
The information that was previously in the 'Known Issues' section regarding connections from multiple NICs
sharing the same IQN has been moved to this section, as it affects all versions of ESX and is not so much a 'Known
Issue' as a configuration requirement.
August
Added
Known Issues
ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to start or
during LUN rescan. Applies to ESX 6.x, 5.x and 4.x. Please see: http://kb.vmware.com/kb/1016106
July
Added
The DataCore Server's settings
Added link:
Video: Configuring ESX Hosts in the DataCore Management Console
http://datacore.custhelp.com/app/answers/detail/a_id/1637
Updated
This document has been reviewed for SANsymphony 10.0 PSP 5.
Known Issues
vMotion causing loss of access to filesystem for MSCS cluster nodes (2144153)
This was previously listed as "Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have
more than one Front End mapping to each DataCore Server may cause unexpected loss of access". A
Knowledgebase article has now been released by VMware https://kb.vmware.com/kb/2144153
April
Updated
Known Issues - VMware 6.0
Storage PDL responses may not trigger path failover in vSphere 6.0
http://kb.vmware.com/kb/2144657.
Note: This affects both vSphere 6.0 and 6.0 U1 customers. A fix is available in 6.0 U2.
February
2015
December
Updated
List of qualified VMware Versions - Qualification notes on VMware-specific functions
Path Selection Policies and VMware ESX 6.x
For ESX 6.x, Fixed and Round Robin Path Selection Policies are both tested and supported by DataCore and both
are also listed on VMware's own Hardware Compatibility List.
November
Updated
SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now End of Life. Please see:
End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329
October
Updated
Known Issues VMware ESXi 5.x and 6.x
DataCore have been informed that there is now a hotfix from VMware for the previously documented known
issue 'Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access' (VMware's own SR#15597438602).
Contact VMware for more information.
July
Added
List of qualified VMware ESXi Versions - Notes on qualification
This section has been updated and new information added regarding the definitions of all qualified, unqualified
and not supported labels. A new section on Linux distributions that are no longer in development has also been
added at the end of this section.
Known Issues
Moved some of the information from the Host Configuration section, where problems can arise, into the Known
Issues section. iSCSI Port Binding is no longer considered supported: even if it is configured to use different
subnets (as previously recommended), the sharing of IQNs for different iSCSI Initiators on the ESXi Hosts cannot
be avoided, and this can lead to situations where different IP Addresses with the same IQN try to log in to the
same DataCore FE Port and will not be able to. Please read the Known Issues section for more detail.
May
Added
Known Issues VMware ESXi 5.x and 6.x
An issue has been identified by VMware regarding Microsoft Clusters in Virtual Machines using SANsymphony-V
Virtual Disks served to more than one path on the same ESX host, which can lead to unexpected loss of access.
Updated
VMware ESXi 6.x - generally
Sections that apply to only VMware ESXi 6.x have been explicitly labelled to avoid ambiguity.
April
Added
VMware ESXi Path Selection Policies (all)
It has been observed that different versions of ESXi may or may not auto-configure the correct SATP claim rule for
Round Robin or Fixed Path Selection Policies when presented with Virtual Disks from SANsymphony-V. Therefore,
more explicit instructions on how to create custom rules have been added.
Note: Existing SANsymphony-V installations probably do not need to worry about this new information as it does
not conflict with what was stated previously; but DataCore recommend that you review the section just to make
sure that your Virtual Disks are correctly configured.
Updated
List of qualified VMware ESXi Versions
Added VMware ESXi 6.x
Updated
Appendix D - Moving from Most Recently Used to Round Robin or Fixed Path Selection Policies
Added more information about how to reduce the likelihood for downtime (by using vMotion).
November
Added
Known Issues
Most of the information was moved from the Known Issues: Third Party Hardware and Software document:
http://datacore.custhelp.com/app/answers/detail/a_id/1277
Updated
List of qualified VMware ESXi versions
Not Supported has now been changed to mean explicitly Not Supported for Mirrored or Dual Virtual Disks.
Single Virtual Disks are now always considered supported.
July
Updated
VMware ESXi Path Selection Policies all types
The command to verify that a given SATP type had been set was incorrect for the later versions of VMware ESXi. It
was listed as:
esxcli nmp satp listrules -s [SATP_Type]
and should have been listed as:
esxcli storage nmp satp rule list -s [SATP_Type]
June
Updated
List of qualified VMware ESXi Versions
Updated to include SANsymphony-V 10.x
May
This document combines all of DataCore's VMware information from older Technical Bulletins into a single
document, including:
Added
Host Settings: VMware ESXi All Versions:
Notes on VMware iSCSI Port Binding
Fixed is supported (this was inconsistently documented across the different Technical Bulletins) but only with the
Preferred Server setting set to All.
Appendix A: This section gives more detail on the Preferred Server and Preferred Path settings with regard to how
it may affect a Host.
Appendix B: This section incorporates information regarding Reclaiming Space in Disk Pools (from Technical
Bulletin 16) that is specific to VMware Hosts.
Appendix C: This section adds additional information regarding VMware's vStorage APIs for Array Integration
(VAAI) with SANsymphony-V.
Appendix D: This section adds more comprehensive steps for Moving from Most Recently Used to Fixed or Round
Robin Path Selection Policy.
Updated
DataCore Server Settings: VMware ESXi 4.0.x Hosts: Regarding Virtual Disk Names.
Host Settings: SCSI Reservation locking between VMware ESXi Hosts.
VMware ESXi Path Selection Policies: Previously, it was explicitly stated that the Preferred Server setting of All
should not be used within the SANsymphony-V Management Console. However, Fixed requires that the Host's
Preferred Server setting is set to All. Round Robin may also use the All setting, although caution is advised; more
explanation of why it may not be advisable is provided in Appendix A.
An overall improvement of the explanations to most of the required Host Settings and DataCore Server Settings.
January 2014
Updated
The note on how to move from Most Recently Used (with the ALUA option not checked) to a Fixed/RR Path (with
the ALUA option checked) for a DataCore Disk, with regard to SANsymphony-V 9.0 PSP3 and later versions.
December 2013
Added
vSphere ESXi 5.5 is qualified and no additional settings (beyond all previous 5.x versions) are needed. The SCSI
UNMAP primitive is supported from SANsymphony-V 9.0 PSP4.
Updated
DataCore Server configuration settings section ('Virtual Disks mapped to more than one Host may need to use the
same LUN number...') for SANsymphony-V. Added a warning note at the start of each Path Selection Policy
(PSP), cautioning the user that a VM's Operating System configuration may not be supported by VMware for a
particular PSP (i.e. at the time of publication, VMware state that MSCS VMs are not supported for the Round
Robin PSP).
April 2013
Removed
All references to SANmelody as this product is now End of Life as of December 31, 2012
March 2013
Added
Use VMFS5 for vSphere Metro Storage Clusters (vMSC).
October 2012
Removed
All but one of the Advanced Settings; all other settings are no longer needed and can be ignored (there is no
requirement to reset or change the existing values for these other settings and they can be left as they are).
July 2012
Added
Support for SANsymphony-V 9.x; no new technical information. Added extra steps to set the default path selection
policy to Fixed instead of MRU under the Fixed/Round Robin path selection policy section. Added a note under the
General section that:
i. VAAI is now supported - with SANsymphony-V 9.x and ESXi 5.x.
ii. Strengthened warning that MRU is not supported with ALUA
June 2012
Added
Two new settings to be applied under the General section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).
May 2012
Updated
The DataCore Server and Host minimum requirements.
Removed
All references to End of Life versions that are no longer supported as of December 31, 2011. Updated the notes at
the start of the General notes for Path Selection Policies. Updated copyright. Added a note to the General notes on
path selection policies for ESXi 5.x about selecting the preferred path of a Virtual Disk with multiple connections to
the same DataCore Server for VMW_PSP_FIXED.
December 2011
Initial publication of Technical Bulletin.
June 2013
Added
A warning note at the start of each Path Selection Policy (PSP), cautioning the user that a VM's Operating System
configuration may not be supported by VMware for a particular PSP (i.e. at the time of publication, VMware state
that MSCS VMs are not supported for the Round Robin PSP).
April 2013
Removed
All references to SANmelody as this product is now End of Life as of December 31, 2012. Updated the DataCore
Server Configuration Settings; added Preferred Server notes.
July 2012
Added
Support for SANsymphony-V 9.x. No new settings required. Added notes under General section that:
i. VAAI is not supported with SANsymphony-V and ESXi 4.1.
ii. Strengthened warning that MRU is not supported with ALUA
June 2012
Added
Two new settings to be applied under the General section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).
May 2012
Updated
The DataCore Server and Host minimum requirements. Removed all references to End of Life SANsymphony and
SANmelody versions that are no longer supported as of December 31 2011. Added notes at the start of General
notes for Path Selection Policies. Updated copyright. Updated Fixed AP and Round Robin Path Selection Policy
with regard to preferred paths. Existing users should re-check their configurations and make any appropriate
changes as necessary.
November 2011
Updated
URL to VMware SAN Configuration guides changed.
October 2011
Removed
All references to End of Life SANsymphony and SANmelody versions that are no longer supported as of July 31,
2011. Moved known issues out of this Technical Bulletin and into the 'Known Issues: Third Party
Software/Hardware with DataCore Servers' document. Added the MRU path policy. Added an important note on
how to verify the path selection policy in each case. For SANsymphony-V, the first 12 characters of the Virtual Disk
name no longer need to be unique.
February 2011
Added
Support for SANsymphony-V 8.x.
September 2010
Initial publication of Technical Bulletin.
June 2013
Added
A warning note at the start of each Path Selection Policy (PSP), cautioning the user that a VM's Operating System
configuration may not be supported by VMware for a particular PSP (i.e. at the time of publication, VMware state
that MSCS VMs are not supported for the Round Robin PSP).
April 2013
Removed
All references to SANmelody as this product is now End of Life as of December 31, 2012
July 2012
Added
Support for SANsymphony-V 9.x. No new settings required. Corrected the option for SCSI.CRTimeoutDuringBoot and
added back SCSI.ConflictRetries in the ESX(i) Host configuration settings - General.
June 2012
Added
Two new settings to be applied under the General section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).
October 2011
Removed
All references to End of Life versions that are no longer supported as of July 31, 2011. Moved all issues not specific
to configuring Hosts or DataCore Servers out of this Technical Bulletin and into the 'Known Issues: Third Party
Software/Hardware with DataCore Servers' document. Added an important note on how to verify the path
selection policy in each case. Changed the requirement for the Most Recently Used managed path policy: do not
use the ALUA option.
March 2011
Added
Support for SANsymphony-V 8.x
June 2010
Added
Support for 'Round-Robin' path selection policy with SANsymphony 7.0 PSP 3 Update 4 and SANmelody 3.0 PSP 3
update 4.
December 2009
Added
Support for the 'Fixed Path' path selection policy with SANsymphony 7.0 PSP 3 and SANmelody 3.0 PSP 3.
Previously, only MRU was supported.
October 2009
Initial publication of Technical Bulletin
ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED AS IS AND USERS MUST TAKE ALL
RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND THE INFORMATION CONTAINED IN THIS DOCUMENT.
NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND
SHALL HAVE NO LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER INFORMATION
REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED WARRANTIES OF MERCHANTABILITY, NON-
INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE
FULLEST EXTENT PERMITTED BY LAW.
No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-readable form without the
prior written consent of DataCore Software Corporation