
Datacore Test Lab

1. Basic Test Lab:


The entire Datacore Test Lab runs inside a virtualization environment using VMware
Workstation 10, with details as below.
This is the network diagram:

The Datacore system includes 2 Datacore servers running Windows Server 2012 R2 and
SANsymphony-V (latest trial version). The Datacore system is built using the High Availability Pairs template as below:

The 2 Datacore servers synchronize with each other and use mirrored virtual
disks to keep 2 identical copies. Each Datacore server has 1 public NIC (connected to
external networks), 2 frontend NICs (connected to the ESX hosts), 3 mirror teamed NICs
(connected to the other Datacore server), and 3 iSCSI teamed NICs (connected to the iSCSI
storage device).

The vSphere system includes 2 ESXi 5.5 hosts and 1 instance of vCenter 5.5. Each ESX
host has 2 iSCSI vmkernel ports (belonging to 2 separate IP networks), each
mapped to 1 physical NIC, to connect to the Datacore servers.

The backend storage devices (Openfiler) use iSCSI; each has 3 NICs connecting to a
Datacore server.

Each Datacore server combines local storage (SAS) and remote storage (iSCSI, Openfiler)
in a Disk Pool as below:

On each Datacore server, 2 virtual disks in Mirrored mode are created as below:

The virtual disk is up to date, connected (by the ESX hosts), and the mirror path is available, as
below:

From the vSphere Client we can see that the ESX hosts recognize the 2 virtual disks
presented by Datacore:

From the above screenshot, we see that multipathing is controlled by NMP (the VMware
Native Multi-Pathing plugin), but I don't know what "Hardware Acceleration: Supported"
means here. Maybe Datacore advertises some advanced storage offload functions (likely
the VAAI primitives) which Openfiler doesn't have.
Below are the active paths and the path policy seen from the vSphere Client:

Round Robin is used to load-balance traffic over all active paths. LUN 0 can be
accessed (from this ESX host) via 4 active paths (corresponding to the 2 Datacore
servers dc1 and dc2; each Datacore server listens on 2 iSCSI frontend NICs). Because
LUN access is enabled via both Datacore servers at the same time, it seems that Datacore
supports failover and even Active/Active in this case. I am also not sure about the
difference between the Active and Active (I/O) statuses in the above screenshot;
Active (I/O) probably means that the corresponding Datacore server (dc1) owns the LUN
(LUN 0) or is the preferred/active Datacore server, i.e. its paths are the ones currently
carrying I/O, while the plain Active paths are available but not used for I/O.
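
As a side note, here is a small conceptual sketch (plain Python, not DataCore's or
VMware's actual code; all path names, targets and states below are made-up examples)
of how a Round Robin policy would rotate I/O over the Active (I/O) paths while keeping
the remaining Active paths as failover candidates:

# Conceptual sketch of Round Robin path selection over ALUA-style path states.
# Path names, targets and states below are hypothetical, not taken from the lab.
from itertools import cycle

paths = [
    {"name": "vmhba33:C0:T0:L0", "target": "dc1", "state": "active-io"},  # Active (I/O)
    {"name": "vmhba33:C1:T0:L0", "target": "dc1", "state": "active-io"},  # Active (I/O)
    {"name": "vmhba33:C0:T1:L0", "target": "dc2", "state": "active"},     # Active
    {"name": "vmhba33:C1:T1:L0", "target": "dc2", "state": "active"},     # Active
]

def usable_paths(paths):
    # Prefer the paths currently carrying I/O; if none are left (e.g. dc1 is
    # shut down), fail over to the remaining active paths.
    optimized = [p for p in paths if p["state"] == "active-io"]
    return optimized or [p for p in paths if p["state"] == "active"]

rr = cycle(usable_paths(paths))
for io_number in range(4):            # issue four I/Os in round-robin fashion
    print(io_number, next(rr)["name"])
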
Now I will shut down Datacore server 1 (dc1, the one whose paths have Active (I/O)
status in the screenshot above) to see if there is automatic failover. DC1 is now
shut down. While dc1 is being shut down, there is nothing abnormal from the user's
perspective, and I can run applications inside the VM normally. Now we check the status
of the paths in the screenshot below:

As we expect, only Datacore server 2 (dc2) is currently online and presents the LUN
with 2 active paths. Note that the status is Active (I/O), not just Active, which may mean
that dc2 now really owns and presents the LUN.
Now I turn on Datacore server 1 (dc1) again. One thing to note here is that when dc1
has just come back up, its data (stored on the virtual disks) is not up to date, so it needs to
synchronize with dc2. DC1 reports that it needs Log Recovery and it temporarily
blocks access from hosts, as below:

The Log Recovery process will take some time, depending on the size of the changed
data. Until synchronization completes successfully, hosts cannot access the affected
virtual disk(s) on DC1, so here comes the risk: what happens if DC2 suddenly becomes
unavailable while DC1 has not finished the Log Recovery process? Datacore may then
report a critical alert, the LUN may not be available until an administrator intervenes
correctly, and some data may be lost! Also, if the mirror path is not available at this
time (due to a network connection issue or the iSCSI target not being recognized), the
Log Recovery process will stay pending until the mirror path comes back to work,
which adds more downtime.
One more thing I see from the lab is that the speed of Full Recovery is quite low,
even though the amount of changed data is very small:

As can be seen from the above screenshot, virtual disk 1 from DC1 is still synchronizing
with DC2. The mirror link uses a NIC team (a combination of 3 member NICs) at both ends,
but the recovery rate is quite slow, only about 10 MB/s. This means it takes a long
time before virtual disk 1 becomes up to date and available, and during this time
host access is blocked.
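
To put this rate into perspective, here is a rough estimate (the 200 GB figure below is
only a hypothetical example, not a value measured in this lab) of how long a recovery
would take at about 10 MB/s:

# Rough recovery-time estimate at the observed mirror throughput.
# The amount of data to resynchronize (200 GB) is a hypothetical example.
data_to_resync_gb = 200
rate_mb_per_s = 10

seconds = data_to_resync_gb * 1024 / rate_mb_per_s
print(f"Estimated recovery time: {seconds / 3600:.1f} hours")   # about 5.7 hours
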
Regarding NIC aggregation, we have at least 2 choices: Windows NIC Teaming and
Windows MPIO (Multipath I/O). In this lab I chose NIC Teaming for the mirror link
and did not see any improved performance (the recovery rate is slow, as seen above). I
tried using MPIO but I ran into a problem: the Windows iSCSI Initiator did not see the
iSCSI targets (hosted by Datacore), so the mirror path was unavailable. From the Datacore
console, I do not see any interface that helps monitor and manage the iSCSI target
(Datacore recommends not using the Windows iSCSI Target, but I do not know how to manage
Datacore's own implementation of the iSCSI target).
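
The slow mirror rate is most likely explained by how NIC Teaming distributes traffic:
common teaming modes hash each TCP flow to a single team member, so a single iSCSI
mirror connection can never use more than one NIC, whereas MPIO would spread I/O over
several independent sessions. The sketch below (a simplified model in plain Python, not
the real Windows teaming algorithm; addresses and ports are made up) illustrates the idea:

# Simplified model of per-flow hashing in a NIC team (not the real Windows algorithm).
# A flow is identified by its source/destination address and port; every packet of
# the same flow hashes to the same team member NIC.
NUM_TEAM_MEMBERS = 3

def team_member_for(flow):
    return hash(flow) % NUM_TEAM_MEMBERS

single_mirror_session = ("10.0.1.1", 50001, "10.0.1.2", 3260)                  # one iSCSI connection
mpio_sessions = [("10.0.1.1", 50000 + i, "10.0.1.2", 3260) for i in range(3)]  # three MPIO sessions

print("NIC Teaming, one mirror session -> NIC", team_member_for(single_mirror_session))
print("MPIO, three sessions -> NICs", sorted({team_member_for(f) for f in mpio_sessions}))
# The single teamed session always lands on one NIC, so its throughput is capped at
# one NIC's bandwidth; multiple MPIO sessions can spread across several NICs.
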

2. Advanced Test Lab:


The aim of the advanced lab is to modify, upgrade and fine-tune the existing basic lab
in order to increase functionality, availability, manageability and performance. For
example, the following functions can be considered or configured: NIC load
balancing, LUN access, hardware acceleration, Continuous Data Protection,
Replication, Snapshot, backup integration, storage tiering, and so on.
