
Reference Architecture

SimpliVity OmniStack for Citrix XenDesktop 7.6


Table of Contents
1. Executive Summary
   Audience
2. Introduction
   SimpliVity Hyperconvergence: Simplifying VDI
      Superior user experience through unmatched VDI performance
      Linear scalability from pilot to production with cost-effective VDI deployments
      Enterprise-grade data protection and resiliency for VDI workloads
3. Technology Overview
4. Solution Overview
   Citrix XenApp and XenDesktop Technology Overview
   Citrix XenDesktop Components
      Citrix Receiver
      StoreFront
      Delivery Controller
      Virtual Delivery Agent (VDA)
      NetScaler Gateway
      Director EdgeSight
      Studio
   Provisioning methods
      Citrix Provisioning Services 7.6
      Machine Creation Services
   Desktop types
5. Solution Architecture
   Management Infrastructure Design
   Desktop Infrastructure Design
      2000 Office Worker Block
6. Login VSI
   Testing Methodology
   Office Worker Workload Definition
   Test Environment
      Results
7. Summary/Conclusion
8. References and Additional Resources
9. Appendix
   Design Guidelines
      SimpliVity OmniStack Design Guidelines
      Citrix XenDesktop 7.6 Design Guidelines
      Supporting Infrastructure Design Guidelines


1. Executive Summary

Virtual Desktop Infrastructure (VDI) initiatives are a top priority for many IT organizations, driven in part by the promise of a flexible, mobile computing experience for end users, and consolidated management for IT. Organizations are looking to VDI solutions like Citrix XenDesktop to reduce software licensing, distribution and administration expenses, and to improve security and compliance.

Too often, however, VDI deployments are plagued by sluggish and unpredictable desktop performance and higher than expected costs. IT organizations are often forced to make compromises and tradeoffs between desktop performance, availability, and costs.

SimpliVity’s market-leading hyperconverged infrastructure platform is an ideal solution for addressing unique VDI challenges. It provides the superior end-user experience organizations require, without sacrificing economics or resiliency. SimpliVity provides:

• Simplified deployment with hyperconverged, x86 building blocks.

• Ability to start small and scale out in affordable increments—from pilot to production.

• Highest density of desktops per node in the hyperconverged infrastructure category.

• Independently validated, unmatched VDI performance for a superb end-user experience.

• Deployment of full-clone desktops with the same data efficiency as linked clones.

• Enterprise-class data protection and resiliency.

This Reference Architecture provides evidence of these capabilities and showcases third-party-validated Login VSI performance testing. It provides a reference configuration for implementing Citrix XenDesktop 7.6 Hosted and Hosted Shared Desktops on SimpliVity hyperconverged infrastructure, and describes tests performed by SimpliVity to validate and measure the operation and performance of the recommended solution.

The performance testing demonstrates SimpliVity’s ability to consistently deliver a high quality end-user experience in VDI deployments as the environment scales. Highlights include:

1. Performance at scale: In Login VSI testing, consistently low latency of less than 2000ms average response was observed for both Hosted and Hosted Shared Desktop implementations.

2. Data optimization at scale: Native always-on inline deduplication and compression provided a data reduction rate of above 20:1.

3. 2000 user sessions (800 Hosted Desktops and 1200 Hosted Shared Desktop sessions using PVS) running on only 10 OmniStack nodes, including a resilient N+1 design.

Audience

This document is intended for IT planners, managers and administrators; channel partner engineers and professional services personnel; and other IT professionals who plan to deploy the SimpliVity hyperconverged infrastructure solution to support Citrix XenDesktop 7.6.

2. Introduction

SimpliVity Hyperconverged Infrastructure: Simplifying VDI

Many businesses are constrained by legacy IT infrastructure that isn’t well suited for VDI initiatives. Siloed data centers, composed of independent compute, storage, network and data protection platforms with distinct administrative interfaces, are inherently inefficient, cumbersome and costly. Each platform requires support, maintenance, licensing, power and cooling—not to mention a set of dedicated resources capable of administering and maintaining the elements. Rolling out a new application like VDI is a manually intensive, time-consuming proposition involving a number of different technology platforms, management interfaces, and operations teams. Expanding system capacity can take days or even weeks, and require complex provisioning and administration. Troubleshooting problems and performing routine data backup, replication and recovery tasks can be just as inefficient.

While grappling with this complexity, organizations also need to address challenges that are unique to VDI, including:

1. Difficulty sizing VDI workloads, due to the inherent randomness and unpredictability of user behavior.

2. Periodic spikes in demand, such as “login storms” and “boot storms” that may significantly degrade performance if not properly handled.

3. Loss of user productivity or revenue in the event of an outage.

SimpliVity addresses each of these challenges by providing a scalable, building block-style approach to deploying infrastructure for VDI, offering predictable cost, and delivering a high-performing desktop experience with continued availability.

Superior user experience through unmatched VDI performance

The SimpliVity solution enables high performance at very high desktop density. It absorbs VDI login storms, delivering 1,000 logins in 1,000 seconds – nearly 3x faster than the standard Login VSI benchmark, a provisioning speed unparalleled in the hyperconverged infrastructure solution market.

Linear scalability from pilot to production with cost-effective VDI deployments

SimpliVity’s scale-out architecture minimizes initial capital outlays and tightly aligns investments with business requirements; SimpliVity building blocks are added incrementally, providing a massively scalable pool of shared resources.

Enterprise-grade data protection and resiliency for VDI workloads

SimpliVity provides built-in backup and disaster recovery capabilities for the entire virtual desktop infrastructure. The solution also ensures resilient, highly available desktop operations, with the ability to withstand node failures with no loss of desktops and minimal increase in latency.

3. Technology Overview

SimpliVity’s hyperconverged infrastructure solution is designed from the ground up to meet the increased performance, scalability and agility demands of today’s data-intensive, highly virtualized IT environments. The SimpliVity solution transforms IT by virtualizing data and incorporating all IT infrastructure and services below the hypervisor into compact x86 building blocks. With 3x total cost of ownership (TCO) reduction, SimpliVity delivers the best of both worlds: the enterprise-class performance, protection and resiliency that today’s organizations require, with the cloud economics businesses demand.

Designed to work with any hypervisor or industry-standard x86 server platform, the SimpliVity solution provides a single, shared resource pool across the entire IT stack, eliminating point products and inefficient siloed IT architectures. The solution is distinguished from other converged infrastructure solutions by three unique attributes: accelerated data efficiency, built-in data protection functionality and global unified management capabilities.

• Accelerated Data Efficiency: Performs inline data deduplication, compression and optimization on all data at inception, across all phases of the data lifecycle, all handled with fine data granularity of just 4KB-8KB. On average, SimpliVity customers achieve 40:1 data efficiency while simultaneously increasing application performance.

• Built-In Data Protection: Includes native data protection functionality, enabling business continuity and disaster recovery for critical applications and data, while eliminating the need for special-purpose backup and recovery hardware or software. The solution’s inherent data efficiencies minimize I/O and WAN traffic, reducing backup and restore times from hours to minutes, while obviating the need for special-purpose WAN optimization products.

• Global Unified Management: A VM-centric approach to management eliminates manually intensive, error-prone administrative tasks. System administrators are no longer required to manage LUNs and volumes; instead, they can manage all resources and workloads centrally, using familiar interfaces such as VMware vCenter Server.

The SimpliVity solution includes its OmniStack software and related technologies, packaged on popular x86 platforms—either on 2U servers marketed as SimpliVity OmniCube, or with partner systems from Cisco or Lenovo, marketed as OmniStack with Cisco UCS and OmniStack with Lenovo System x, respectively.

An individual OmniStack node includes:

• A compact hardware platform – a 2U industry-standard virtualized x86 platform containing compute, memory, performance-optimized SSDs and capacity-optimized HDDs protected in hardware RAID configurations, and 10GbE network interfaces.

• A hypervisor such as VMware vSphere/ESXi.

• OmniStack virtual controller software running on the hypervisor.

• An OmniStack Accelerator Card – a special-purpose PCIe card with an FPGA, flash, and DRAM, protected with super capacitors; the accelerator card offloads CPU-intensive functions such as data compression, deduplication and optimization from the x86 processors.


[Figure: Legacy infrastructure stack – four servers with VMware, a storage switch, two HA shared storage arrays, backup and deduplication appliances, WAN optimization, a cloud gateway, an SSD array, storage caching, and cloud data protection – compared with a single SimpliVity building block delivering the same capabilities with 3x TCO savings, global unified management, and operational efficiency.]

4. Solution Overview

The solution outlined in this document provides guidance for implementing SimpliVity hyperconverged infrastructure to enable a single VDI building block supporting 2,000 office workers. This architecture can be used to scale up to many thousands of users, by replicating the building blocks as outlined below.

This solution leverages SimpliVity hyperconverged infrastructure as the foundational element of the design. SimpliVity OmniStack nodes are combined to form a pool of shared compute (CPU and memory), storage, and storage network resources. VMware vSphere and Citrix XenDesktop provide a high-performance VDI environment that is highly available and highly scalable.

The building block includes:

• SimpliVity OmniStack nodes with Haswell-based Intel Xeon E5-2697 CPUs for desktop workloads

• SimpliVity OmniStack nodes with Haswell-based Intel Xeon E5-2680 CPUs for management workloads

• 199GB – 455GB usable memory per OmniStack node

• 2TB datastores for all workloads

• 10GbE networking

• Windows 7 SP1 for Hosted Desktops

• Windows Server 2012 R2 for Hosted Shared Desktops and server workloads

• N+1 design for management workloads and infrastructure where possible

[Figure: Solution architecture – external users connect through a pair of NetScalers in the DMZ to infrastructure services (vSphere 5.5 U2, vCenter, Active Directory, SQL, Delivery Controllers, Provisioning Services, StoreFront) running on two 5-node SimpliVity OmniStack clusters.]


Citrix XenApp and XenDesktop Technology Overview

Citrix XenApp and XenDesktop are application and desktop virtualization solutions that control virtual machines, applications, licensing, and security while providing anywhere access for any device. Citrix FlexCast Management Architecture (FMA) is a unified architecture that integrates XenApp and XenDesktop to centralize management. Different types of workers across the enterprise need different types of desktops. Some require simplicity and standardization, and others require high performance and personalization. XenDesktop can meet these requirements in a single solution using Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, each specifically tailored to meet the performance, security, and flexibility requirements of each individual user.

XenApp and XenDesktop allow:

• End users to run applications and desktops independently of the device’s operating system and interface.

• Administrators to manage the network and provide or restrict access from selected devices or from all devices.

• Administrators to manage an entire network from a single data center.

Citrix XenDesktop Components

Citrix FlexCast Management Architecture key components:

[Figure: FMA component diagram – Citrix Receiver connects from any location over HDX, through NetScaler and StoreFront, to Windows apps and server hosted desktops, all managed by the Delivery Controller, Studio, and Director EdgeSight.]

Citrix Receiver

Users access their applications and desktops through Citrix Receiver, a universal client that runs on virtually any device operating platform, including iOS and Android in addition to Windows, Mac® and Linux®. This software client supplies the connection to the virtual machine via TCP port 80 or 443, and communicates with StoreFront using the StoreFront Service API.

StoreFront

The interface that authenticates users, manages applications and desktops, and hosts the application store. StoreFront communicates with the Delivery Controller using XML.

Delivery Controller

The central management component of a XenApp or XenDesktop Site. It consists of services that manage resources, applications, and desktops, and that optimize and balance the loads of user connections.

Virtual Delivery Agent (VDA)

An agent installed on machines running Windows Server or Windows desktop operating systems that allows these machines and the resources they host to be made available to users. VDA-installed machines running a Windows Server OS can host multiple connections for multiple users, and are connected to users on one of the following ports:

• TCP port 80, or port 443 if SSL is enabled

• TCP port 2598, if Citrix Gateway Protocol (CGP) is enabled, which enables session reliability

• TCP port 1494, if CGP is disabled or if the user is connecting with a legacy client

NetScaler Gateway

A data-access solution that provides secure access inside or outside the LAN’s firewall with additional credentials.

Director EdgeSight

Director provides helpdesk staff with real-time trend and diagnostic information on users, applications and desktops for troubleshooting. It is a web-based tool that gives administrators access to real-time data from the Broker agent, historical data from the Site database, and HDX data from NetScaler for troubleshooting and support.


Studio

A management console that allows administrators to configure and manage Sites, and gives access to real-time data from the Broker agent. Studio communicates with the Controller on TCP port 80.

Provisioning methods

XenDesktop 7.6 Feature Pack 2 has two integrated provisioning solutions, Provisioning Services and Machine Creation Services, each providing different benefits for different business needs.

Citrix Provisioning Services 7.6

Traditional image solutions cost time and money to set up, update, support and decommission on each computer. Citrix Provisioning Services is based on software-streaming technology, which allows computers to be provisioned and reprovisioned in real time from a single shared disk image (vDisk). In doing so, administrators can eliminate the need to manage and patch individual systems. Instead, all image management is done on the master image vDisk. This centralized management enables organizations to reduce operational and storage costs.

After PVS components are installed and configured, a vDisk is created from a device’s hard drive by taking a snapshot of the OS and application image and then storing that image as a vDisk file on the network. A device used for this process is referred to as a master target device. The devices that use the vDisks are called target devices. Citrix PVS streams the single shared disk image (vDisk) to individual machines. The write cache holds the data written by each target device.

The streamed vDisk is read-only, and target devices cannot change the image, so consistency is ensured. Any patches, updates and other configuration changes can be delivered to the end devices in real time when they reboot. Because Citrix PVS is part of Citrix XenDesktop, desktop administrators can use PVS’s streaming technology to simplify, consolidate, and reduce the costs of both physical and virtual desktop delivery.

When using SimpliVity OmniStack, the best practice is to create a local partition on each PVS server so that each PVS server has a local vDisk copy. This improves performance by caching vDisk contents in PVS system cache memory.

vDisks can be assigned to a single target device in private-image mode, or to multiple target devices in standard-image mode.

vDisk attributes:

• Read only during streaming

• Shared by many VMs, simplifying updates

Write cache attributes:

• One VM, one write cache file

• Write cache file is empty after VM reboots

• Recommended storage protocol: NFS for write cache

• The write cache file size is 4GB to 10GB for Hosted Desktops and 20GB to 60GB for Hosted Shared Desktops; if PvDisk is used in the deployment, the write cache size will be smaller.

PVS File Layout

[Figure: PVS file layout – the master vDisk (Windows OS) is streamed to each virtual desktop, which sees it as drive C:\; each desktop’s write cache is a visible file on another disk, typically D:\, stored on OmniStack.]

Provisioning Server RAM Cache

A RAM write cache option (cache on device RAM with overflow on hard disk) is available in Provisioning Server 7.6. The write cache can seamlessly overflow to a differencing disk should the RAM cache become full. In the PVS console, select the vDisk properties and choose the cache type “Cache in device RAM with overflow on hard disk.” This uses device RAM first and then overflows to hard disk. Choose the RAM size based on the OS type.

RAM Cache sizing considerations:

• 256MB for Hosted Windows 7 32-bit

• 512MB for Hosted Windows 7 64-bit

• 2GB–4GB for Hosted Shared Windows Server 2012 R2
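
To make the memory impact of these recommendations concrete, the short sketch below computes a VM’s total memory from its base OS allocation plus the PVS RAM cache sizes listed above. It is an illustrative helper based on the figures in this document, not part of any Citrix or SimpliVity tooling; the function name and OS-type keys are hypothetical.

```python
# Illustrative sizing helper for PVS "cache in device RAM with overflow on
# hard disk". The cache sizes mirror the recommendations above; the helper
# itself is hypothetical, not a Citrix or SimpliVity API.

RAM_CACHE_MB = {
    "win7_32_hosted": 256,     # Hosted Windows 7 32-bit
    "win7_64_hosted": 512,     # Hosted Windows 7 64-bit
    "win2012r2_shared": 2048,  # Hosted Shared Windows Server 2012 R2 (2GB-4GB range)
}

def vm_memory_mb(base_os_mb: int, os_type: str) -> int:
    """Total VM memory: the RAM cache is carved out of guest memory
    before overflowing to disk, so it adds to the base OS allocation."""
    return base_os_mb + RAM_CACHE_MB[os_type]

# Example: the Hosted Desktop VMs later in this document are configured with
# 2048MB total, of which 512MB is PVS RAM cache (1536MB base + 512MB cache).
print(vm_memory_mb(1536, "win7_64_hosted"))  # -> 2048
```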


Machine Creation Services

Machine Creation Services (MCS) is an image delivery technology, integrated within XenDesktop, that utilizes hypervisor APIs to create a unique, read-only, thin-provisioned clone of a master image, where all writes are stored within a differencing disk.

When you provision desktops using MCS, a master image is copied to each datastore. This copy is made using a hypervisor snapshot clone. Within minutes of the master image copy process, MCS creates a differential disk and an identity disk for each VM. The differential disk, which hosts the session data, can grow to the same size as the master image. The identity disk is normally 16MB and is hidden by default; it holds machine identity information such as the host name and password.

MCS File Layout

[Figure: MCS file layout – the master image (Windows OS) is linked to each virtual desktop through a VHD chain consisting of a differencing disk (what the user sees as drive C:\) and an identity disk (hidden from the user’s view), all stored on OmniStack.]

Desktop types
• Hosted Shared Desktops (XenApp/RDSH): Many users share one server. Users get a desktop interface, which can look like Windows 7, but it is a published desktop on one XenApp server. Because every user shares the XenApp server, restrictions and redirections can be configured to reduce the impact users have on each other. These are inexpensive, locked-down Windows virtual desktops hosted from Windows Server operating systems. They are well suited for users, such as call center employees, who perform a standard set of tasks.
• Hosted Virtual Desktops (XenDesktop/VDI): A Windows 7/XP desktop running as a virtual machine to which a single user connects remotely. One user’s desktop is not impacted by another user’s desktop configuration. Think of this as one user to one desktop. There are many flavors of the hosted virtual desktop model (existing, installed, pooled, dedicated and streamed), but they are all located within the data center. These virtual desktops each run a Microsoft Windows desktop operating system rather than running in a shared, server-based environment. They can provide users with their own desktops that they can fully personalize.
[Figure: Comparison of the two models – Hosted Shared Desktops (XenApp/RDSH) run many user sessions on one Remote Desktop Session Host OS in a single virtual machine on Windows Server 2012, while Pooled Hosted Virtual Desktops (XenDesktop/VDI) give each user session its own Windows 7 OS in its own virtual machine.]

For more information, see the Citrix XenDesktop 7.6 release documentation.


5. Solution Architecture

Management Infrastructure Design


This section details the OmniStack environment dedicated to running the management workloads required to support
2000 users in a Citrix XenDesktop implementation. A separate, dedicated OmniStack environment is also used for the
XenDesktop Hosted Desktops and Hosted Shared Desktops and is detailed in Desktop Infrastructure Design. The man-
agement workloads considered in this document are detailed in the table below.

Workload Version vCPUs RAM Disk OS

vCenter Server – Desktop 5.5 Update 2e 8 32GB 100GB Windows Server 2012 R2
vCenter Server – Mgmt 5.5 Update 2e 4 16GB 100GB Windows Server 2012 R2
Microsoft SQL Server 2012 SP1 4 8GB 100GB Windows Server 2012 R2
AD DC/DHCP/DNS x2 N/A 2 4GB 40GB Windows Server 2012 R2
Citrix XenDesktop Controller Server x2 7.6 4 8GB 40GB Windows Server 2012 R2
Citrix XenDesktop StoreFront Server x2 7.6 4 4GB 40GB Windows Server 2012 R2
Citrix Provisioning Server x3 7.6 4 8GB 500GB Windows Server 2012 R2
Citrix XenDesktop Licensing Server 7.6 4 4GB 40GB Windows Server 2012 R2

vSphere Design

Attribute Value Rationale

Number of vCenter Servers 1 A vCenter Server instance will support 2000 virtual desktops.
Number of vSphere Clusters 1 Given the number of OmniStack systems required to support the given workload, there is no need to split hosts into separate vSphere Clusters.
Number of vSphere Datacenters 1 A single vSphere Cluster means only a single vSphere Datacenter is required.
vSphere HA Configuration:
1. HA enabled – Enabled to restart VMs in the event of an ESXi host failure.
2. Admission Control enabled – Ensures VM resources will not become exhausted in the case of a host failure.
3. Percentage of cluster resources reserved: 50% – Set to the percentage of the cluster a single host represents.
4. Isolation Response: Leave Powered On – Ensures a host isolation event does not needlessly power off desktops.


vSphere HA – Advanced Settings das.vmmemoryminmb – 9137MB; das.vmcpuminmhz – 1000MHz Both are set to averages of the workloads in the cluster (116GB of configured memory across the 13 management VMs ≈ 9137MB); this sets the percentage of cluster resources used in the HA calculation to that of an average VM.
ESXi – Advanced Settings
ESXi – Advanced Settings
SunRPC.MaxConnPerIP 256 (max) Avoid hitting NFS connection limit
NFS.MaxVolumes 256 (max) Increase number of NFS volumes per host
NFS.MaxQueueDepth 256 Performance consideration
NFS.SendBufferSize 512 Performance consideration
NFS.ReceiveBufferSize 512 Performance consideration
Net.TcpipHeapSize 32 Performance consideration
Net.TcpipHeapMax 512 Performance consideration
Misc.APDHandlingEnable 1 Turn on All Paths Down handling in ESXi
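
For environments where host configuration is scripted, the advanced settings above can be applied programmatically. The sketch below uses pyVmomi, the open-source Python SDK for the vSphere API; the vCenter address and credentials are placeholders, and the option keys and values simply mirror the table above. This is a minimal sketch under those assumptions, not a SimpliVity-provided tool; verify each setting against your ESXi version before applying it.

```python
# Sketch: applying the ESXi advanced settings above to every host via pyVmomi.
# The vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ADVANCED_SETTINGS = {
    "SunRPC.MaxConnPerIP": 256,
    "NFS.MaxVolumes": 256,
    "NFS.MaxQueueDepth": 256,
    "NFS.SendBufferSize": 512,
    "NFS.ReceiveBufferSize": 512,
    "Net.TcpipHeapSize": 32,
    "Net.TcpipHeapMax": 512,
    "Misc.APDHandlingEnable": 1,
}

ctx = ssl._create_unverified_context()  # lab only; use verified certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Assumes datacenters sit directly under the root folder and clusters
    # directly under each datacenter's host folder.
    for dc in content.rootFolder.childEntity:
        for cluster in dc.hostFolder.childEntity:
            for host in getattr(cluster, "host", []):
                changes = [vim.option.OptionValue(key=k, value=v)
                           for k, v in ADVANCED_SETTINGS.items()]
                host.configManager.advancedOption.UpdateOptions(changedValue=changes)
finally:
    Disconnect(si)
```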

These workloads are visually represented in the figure below:

• (2) OmniStack Integrated Solution with Cisco UCS C240 M4

• Intel Xeon E5-2680 v3 (Haswell 12-core, 2 sockets per server)

• 199GB usable memory each

• 2TB datastores x2

• 10GbE interconnect between systems (no 10GbE switch required, but may be used)

[Figure: Management Datacenter/Cluster – the management vCenter Server manages a 2-node OmniStack with Cisco UCS C240 M4 cluster hosting 2 Desktop Delivery Controllers, 2 StoreFront Servers, 1 Licensing Server, 3 Provisioning Servers, vCenter Server (Mgmt), vCenter Server (Desktop), Microsoft SQL Server, and 2 Active Directory servers on 2x 2TB NFS OmniStack datastores.]

OmniStack Servers – To support the management workloads outlined in this document, a 2-host vSphere Cluster, comprised of OmniStack Integrated Solution with Cisco UCS C240 M4 systems, is recommended. Unlike other HCI vendors, SimpliVity fully supports a 2-host cluster in its minimum configuration. Using OmniStack from SimpliVity allows you to start small, with only the infrastructure you need, and scale out as your VDI environment grows.

vCenter Servers – All roles were installed onto a single virtual machine, including the vCenter Server Service, vCenter Single Sign-On (SSO), Inventory Service, and Update Manager. No CPU or memory pressure was observed during testing, so dedicating servers to each service was unnecessary. If an embedded database server had been utilized in this infrastructure, it would be advisable to have dedicated servers for the SSO and Inventory services to avoid resource contention.

The vCenter Server appliance was not used in these tests.

It is a perfectly acceptable alternative to the Windows version of vCenter Server, with the caveat that any Windows-based features will require a stand-alone Windows server to support them. Please see kb.vmware.com/kb/2005086 for more details.

XenDesktop Delivery Controllers – A single Delivery Controller supports up to 5000 users. Two Delivery Controllers were
deployed in an N+1 configuration for high availability.

XenDesktop StoreFront Servers – A single StoreFront Server supports up to 10000 users. Two StoreFront Servers were
deployed in an N+1 configuration for high availability.

XenDesktop License Server – Only a single License Server is required.

Provisioning Servers – A single PVS server can support up to around 500 virtual machines. Given our environment of 800
Hosted Desktop VMs and 40 Hosted Shared Desktop VMs, three PVS servers were deployed in an N+1 configuration for
high availability.

For the vDisk Store, local disk was leveraged on the PVS servers. This was done to support vDisk RAM cache.

Infrastructure Services (Domain Controllers/DNS/DHCP) – These services were all co-located on the same virtual
machines. While no CPU or memory pressure was observed during testing, in-depth Active Directory design and recom-
mendations are outside the scope of this document. Please see msdn.microsoft.com/en-us/library/bb727085.aspx for
more information and best practices.

When PVS PXE boot is used, DHCP options 66 and 67 must be configured to enable TFTP boot. Please see docs.citrix.com/en-us/provisioning/7-6.html for more details.

Microsoft SQL Server – All supporting databases for this reference design were run on a single virtual machine. These databases are referenced in the table below.

Database Authentication Size Recovery Mode

Desktop vCenter Server Windows authentication 5GB Simple
Management vCenter Server Windows authentication 1GB Simple
Desktop Update Manager SQL authentication 100MB Simple
Management Update Manager SQL authentication 100MB Simple
XenDesktop Delivery Controller Windows authentication Default Simple
Provisioning Server Windows authentication Default Simple

Sizing – Compute, Storage, and Network resources for each infrastructure VM were selected using Citrix best practices as
a baseline and modified based on their observed performance on the OmniStack systems.

SimpliVity Arbiter Placement – Please refer to the SimpliVity OmniCube Deployment Guide for further guidance.

vStorage API for Array Integration (VAAI) – VAAI is a vSphere API that allows storage vendors to offload some common
storage tasks from ESXi to the storage itself. The VAAI plugin for OmniStack is installed during deployment, so no manual
intervention is required.

Datastores – A single datastore per server is recommended to ensure even SimpliVity storage distribution across cluster
members. This is less important in a 2 OmniStack server configuration; however, following this best practice guideline will
ensure a smooth transition to a 3+ node OmniStack environment, should the environment grow over time. This best prac-
tice has been proven to deliver better storage performance and is highly encouraged.


Networking – The following best practices were utilized in the vSphere networking design:

• Segregate OVC networking from ESXi host and virtual machine network traffic

• Leverage 10GbE where possible for OVC and virtual machine network traffic

These best practices offer the highest network performance to VMs running on OmniStack 3.0. Taking this into consider-
ation, a single vSphere Standard Switch is deployed for management traffic, and a single vSphere Distributed Switch is
deployed for the remaining traffic, including:

• Virtual Machines

• SimpliVity Federation

• SimpliVity Storage

• vMotion

vSphere Standard Switch Configuration

Parameter Setting
Load balancing Route based on Port ID
Failover detection Link status only.
Notify switches Enabled.
Failback No.
Failover order Active/Active
Security Promiscuous Mode – Reject
MAC Address Changes – Reject
Forged Transmits – Reject
Traffic Shaping Disabled
Maximum MTU 1500
Number of Ports 128
Number of Uplinks 2
Network Adapters 1GbE NICs on each host
VMkernel Adapters/VM Networks vmk0 – ESXi Management – Active/Active – MTU 1500
VM – vCenter Server – Active/Active – MTU 1500


vSphere Distributed Switch Configuration

Parameter Setting
Load balancing Route based on physical NIC load.
Failover detection Link status only.
Notify switches Enabled.
Failback No.
Failover order Active/Active
Security Promiscuous Mode – Reject
MAC Address Changes – Reject
Forged Transmits – Reject
Traffic Shaping Disabled
Maximum MTU 9000
Number of Ports 4096
Number of Uplinks 2
Network Adapters 10GbE NICs on each host
Network I/O Control Disabled
VMkernel ports/VM Networks vmk1 – vMotion
vmk2 – Storage
vMotion – Active/Standby – MTU 9000
Federation – Standby/Active – MTU 9000
Storage – Standby/Active – MTU 9000
Management VMs – Active/Active – MTU 9000
Port Binding Static

[Figure: Management or resource cluster networking – vSwitch0 carries the Management VLAN (A) over vmnic0 and vmnic1 (active/active) to Switch1; dvSwitch0 carries the vMotion VLAN (B), the SVT Federation and SVT Storage VLANs (C), and the VM Network VLANs (D, E) over dvUplink1 and dvUplink2 to Switch1 and Switch2, with active/standby placement per port group.]

Desktop Infrastructure Design


This desktop block is sized to support 2000 users provisioned by Citrix Provisioning Server with 800 users on Hosted
Desktops and 1200 users on Hosted Shared Desktops OR 1400 users provisioned by Citrix Machine Creation Service with
600 users on Hosted Desktops and 800 users on Hosted Shared Desktops. The sizing of the infrastructure supporting this
desktop block is dependent on the workload profile defined for the use case supported by that block. In this case, we
defined a single block supporting 2000 office workers.

2000 Office Worker Block


The desktop block for 2000 office workers is two 5-node vSphere Clusters that are contained within separate vSphere
Datacenter objects (5+5 Federation). This configuration has been tested and validated to support the workload as defined,
including N+1 design. Results of these tests are available in this document.

Office Worker Virtual Machine Configuration – Hosted Desktops

Attribute Specification
Operating System Windows 7 SP1 64-bit
Virtual Hardware VM virtual hardware version 10
VMware Tools Latest
Number of vCPUs 1
Memory – including PVS RAM cache 2048MB
PVS RAM cache size 512MB
Virtual Disk – VMDK 25GB
NTFS Cluster Alignment 8KB
SCSI Controller VMware Paravirtual
Virtual Floppy Drive Removed
Virtual CD/DVD Drive Removed
NIC vendor and model VMXNET3
Number of ports/NIC x speed 1x 10 Gigabit Ethernet
OS Page file 1.5GB starting and max
Number deployed 800


Office Worker Virtual Machine Configuration – Hosted Shared Desktops

Attribute Specification
Operating System Windows Server 2012 R2
Virtual Hardware VM virtual hardware version 10
VMware Tools Latest
Number of vCPUs 6
Memory – including PVS RAM cache 22528MB
PVS RAM cache size 2048MB
Virtual Disk – VMDK 60GB
NTFS Cluster Alignment 8KB
SCSI Controller VMware Paravirtual
Virtual Floppy Drive Removed
Virtual CD/DVD Drive Removed
NIC vendor and model VMXNET3
Number of ports/NIC x speed 1x 10 Gigabit Ethernet
OS Page file 20GB starting and max
Number deployed 40
Users per server 30

The following infrastructure was used to support these workloads:

• (10) OmniStack Integrated Solution with Cisco UCS C240 M4

• Intel Xeon E5-2697 v3 (Haswell 14-core, 2 sockets per server)

• 455GB usable memory per OmniStack system for Hosted Desktops

• 327GB usable memory per OmniStack system for Hosted Shared Desktops

• 2TB datastores x10

• 10GbE networking

SimpliVity Federation and vSphere Cluster/Datacenter Sizing – The decision was made to split the workload into multiple
vSphere Clusters, with the 800 Hosted Desktop workloads in one vSphere Cluster and the 1200 Hosted Shared Desktop
workloads in the other.

To support multiple vSphere Datacenters in a single vCenter Server, both Datacenters must belong to a single SimpliVity
Federation, as a vCenter Server supports a single Federation. A Federation can span multiple vCenter Server instances,
but that configuration is outside the scope of this document.


[Figure: Desktop vCenter Server managing two vSphere Datacenters/Clusters – Desktop Datacenter/Cluster1 hosts the PVS-provisioned Hosted Desktops (800 office workers) with the template, master image, and per-VM write cache files; Desktop Datacenter/Cluster2 hosts the PVS-provisioned Hosted Shared Desktops (1200 office workers) with the template, PVS vDisk, and per-VM write cache files. Each cluster runs on five C240-M4SX OmniStack nodes (2x Intel 14-core 2.6GHz, 384GB RAM each) with five OmniStack datastores.]

Note: This solution architecture was designed based on a standard workload size. When sizing a production environment,
proper assessment and use case definition should be done to accurately size the environment.

vSphere Design

Attribute Value Rationale

Number of vCenter Servers 1 A vCenter Server instance will support 2000 users.
Number of vSphere Clusters 2 Given the number of OmniStack systems required to support the given workload, the decision was made to split out into separate vSphere Clusters.
Number of vSphere Datacenters 2 With OmniStack 3.0, the fault domain is at the vSphere Datacenter level. Desktops will not cross back and forth between vSphere Clusters, so each vSphere Cluster should have its own vSphere Datacenter.
vSphere HA Configuration:
1. HA enabled – Enabled to restart VMs in the event of an ESXi host failure.
2. Admission Control enabled – Ensures VM resources will not become exhausted in the case of a host failure.
3. Percentage of cluster resources reserved: 20% – Set to the percentage of the cluster a single host represents.
4. Isolation Response: Leave Powered On – Ensures a host isolation event does not needlessly power off desktops.
vSphere HA – Advanced Settings (Hosted Desktop Cluster) das.vmmemoryminmb – 2048MB; das.vmcpuminmhz – 300MHz Both are set to averages of the workloads in the cluster; this sets the percentage of cluster resources used in the HA calculation to that of an average VM.
vSphere HA – Advanced Settings (Hosted Shared Desktop Cluster) das.vmmemoryminmb – 22528MB; das.vmcpuminmhz – 2000MHz Both are set to averages of the workloads in the cluster; this sets the percentage of cluster resources used in the HA calculation to that of an average VM.
Reservations and Limits Full memory reservation for all desktop workloads Ensures all desktop workloads have access to memory resources. Also avoids creation of VMkernel swap files on storage.
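
The full memory reservation called for in the last row of the table can also be enforced in code. The sketch below uses the same pyVmomi style as the management-cluster example; the desktop-VM name prefix used to select machines is a hypothetical convention, not something defined in this document.

```python
# Sketch: setting a full memory reservation on desktop VMs with pyVmomi,
# matching the "Reservations and Limits" row above.
from pyVmomi import vim

def reserve_all_memory(vm):
    """Reserve the VM's entire configured memory so that no VMkernel swap
    file is created and desktop workloads never contend for memory."""
    spec = vim.vm.ConfigSpec(
        memoryAllocation=vim.ResourceAllocationInfo(
            reservation=vm.config.hardware.memoryMB))
    return vm.ReconfigVM_Task(spec=spec)

# Usage, given a container view of VMs (connection as in the earlier example):
# for vm in vm_view.view:
#     if vm.name.startswith("HVD-"):  # hypothetical desktop naming convention
#         reserve_all_memory(vm)
```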

ESXi – Advanced Settings


SunRPC.MaxConnPerIP 256 (max) Avoid hitting NFS connection limit

NFS.MaxVolumes 256 (max) Increase number of NFS volumes per host


NFS.MaxQueueDepth 256 Performance consideration
NFS.SendBufferSize 512 Performance consideration
NFS.ReceiveBufferSize 512 Performance consideration
Net.TcpipHeapSize 32 Performance consideration
Net.TcpipHeapMax 512 Performance consideration
Misc.APDHandlingEnable 1 Turn on All Paths Down handling in ESXi

OmniStack Servers – Two 5-host vSphere Clusters, comprised of OmniStack Integrated Solution with Cisco UCS C240 M4 systems, were deployed to support the Office Worker desktop workload.

The following design patterns were observed:

• Limit physical CPU to virtual CPU oversubscription

• Do not overcommit memory

Limit physical CPU to virtual CPU oversubscription

The table below shows how to calculate usable physical CPU:

Step Desktop CPU Calculation Rationale

Per-node usable CPU 28 – 4 (OVC) = 24 Each node has 28 physical CPU cores, and one OVC consumes 4 physical cores.
Total usable CPU, 4 nodes 24 x 4 nodes = 96 Multiply usable desktop cores by the number of nodes; N+1 is accounted for, so 4 nodes are used instead of 5.
Total desktop vCPU requirement Hosted Desktops: 1 x 800 VMs = 800 vCPUs; Hosted Shared Desktops: 6 x 40 VMs = 240 vCPUs Each Hosted Desktop VM has a single vCPU, and each Hosted Shared Desktop VM has 6 vCPUs.
Check CPU oversubscription Hosted Desktops: 800 vCPUs / 96 pCPUs = 8.33 vCPU:pCPU; Hosted Shared Desktops: 240 vCPUs / 96 pCPUs = 2.5 vCPU:pCPU In both cases the oversubscription ratio is within acceptable limits for an office worker workload.

Both this calculation and the memory calculation below are restated as a short script after the memory overcommitment note.


Do not overcommit memory

In this configuration, each OmniStack system has 384GB or 512GB of available physical memory. We used 384GB of memory for Hosted Shared Desktops and 512GB for Hosted Desktops. The table below shows how to calculate usable physical memory:

Calculation Step Desktop Memory Calculation Rationale

Per-node usable memory 384GB – 57GB (OVC) = 327GB; 512GB – 57GB (OVC) = 455GB Each node has 384GB or 512GB of physical memory, and one OVC consumes 57GB.
Total usable memory, 4 nodes Hosted Shared Desktops: 327GB x 4 nodes = 1308GB; Hosted Desktops: 455GB x 4 nodes = 1820GB Multiply usable desktop memory by the number of nodes; N+1 is accounted for, so 4 nodes are used instead of 5.
Total desktop memory requirement Hosted Shared Desktops: 22GB x 40 VMs = 880GB; Hosted Desktops: 2GB x 800 VMs = 1600GB Each Hosted Desktop VM has 2GB of memory, and each Hosted Shared Desktop VM has 22GB (22528MB, including PVS RAM cache).
Check memory overcommitment Hosted Shared Desktops: 1308GB per cluster – 880GB required = 428GB spare capacity; Hosted Desktops: 1820GB per cluster – 1600GB required = 220GB spare capacity In both cases, there is no memory overcommitment.

NOTE: For our testing, we used OmniStack systems with 384GB of memory, so there was some memory overcommitment
with Hosted Desktops. We also did not set memory reservations for desktop workloads. We did not notice any adverse
effects, e.g., VMkernel swap to disk; however, we recommend 512GB per OmniStack system for Hosted Desktops in this
case to avoid overcommitment of memory.
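
As noted after the CPU table, the arithmetic in both sizing tables is easy to re-run for other node counts, VM counts, or memory configurations. The snippet below is a plain restatement of the document's numbers, not a SimpliVity sizing tool.

```python
# Re-running the N+1 CPU and memory sizing arithmetic from the tables above.
OVC_CPU, OVC_MEM_GB = 4, 57  # resources consumed by one OmniStack Virtual Controller
USABLE_NODES = 4             # 5-node cluster planned for N+1, so size against 4 nodes

def cpu_ratio(vms, vcpus_per_vm, pcores_per_node=28):
    usable = (pcores_per_node - OVC_CPU) * USABLE_NODES
    return vms * vcpus_per_vm / usable

def spare_memory_gb(vms, gb_per_vm, node_gb):
    usable = (node_gb - OVC_MEM_GB) * USABLE_NODES
    return usable - vms * gb_per_vm

print(cpu_ratio(800, 1))             # Hosted Desktops: 8.33 vCPU:pCPU
print(cpu_ratio(40, 6))              # Hosted Shared Desktops: 2.5 vCPU:pCPU
print(spare_memory_gb(800, 2, 512))  # Hosted Desktops: 220GB spare
print(spare_memory_gb(40, 22, 384))  # Hosted Shared Desktops: 428GB spare
```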

vStorage API for Array Integration (VAAI) – VAAI is a vSphere API that allows storage vendors to offload some common
storage tasks from ESXi to the storage itself. The VAAI plugin for OmniStack is installed during deployment, so no manual
intervention is required.

Datastores – The number of SimpliVity datastores deployed matched the number of OmniStack systems in each vSphere Cluster. In this 5+5 Federation configuration, five SimpliVity datastores were created for each vSphere Cluster. This more evenly distributes storage load across the OmniStack systems in the vSphere Cluster, and increases the likelihood that any given desktop has locality with its VMDK disk.

Each datastore contains a virtual machine template and write cache files for every virtual machine. The write cache file con-
tains all disk writes of a target device when using a write-protected vDisk (Standard Image).

Networking – The following design patterns were observed in the design of the vSphere networking for the solution:

• Segregate OVC networking from ESXi host and virtual machine network traffic

• Leverage 10GbE where possible for OVC and virtual machine network traffic


With those ideals in mind, a single vSphere Standard Switch is deployed for management traffic, and a single vSphere Dis-
tributed Switch is deployed for the rest of our network needs, including

• Virtual Machines

• SimpliVity Federation

• SimpliVity Storage

• vMotion

vSphere Standard Switch Configuration

Parameter Setting
Load balancing Route based on Port ID
Failover detection Link status only.
Notify switches Enabled.
Failback No.
Failover order Active/Active
Security Promiscuous Mode – Reject
MAC Address Changes – Reject
Forged Transmits – Reject
Traffic Shaping Disabled
Maximum MTU 1500
Number of Ports 128
Number of Uplinks 2
Network Adapters 1GbE NICs on each host
VMkernel Adapters/VM Networks vmk0 – ESXi Management – Active/Active – MTU 1500

vSphere Distributed Switch Configuration

Parameter Setting
Load balancing Route based on physical NIC load.
Failover detection Link status only.
Notify switches Enabled.
Failback No.
Failover order Active/Active
Security Promiscuous Mode – Reject
MAC Address Changes – Reject
Forged Transmits – Reject
Traffic Shaping Disabled
Maximum MTU 9000
Number of Ports 4096
Number of Uplinks 2


Network Adapters 10GbE NICs on each host


Network I/O Control Disabled
VMkernel ports/VM Networks vmk1 – vMotion
vmk2 – Storage
vMotion – Active/Standby – MTU 9000
Federation – Standby/Active – MTU 9000
Storage – Standby/Active – MTU 9000
Desktop VMs – Active/Active – MTU 9000
Port Binding Static, except Desktop VMs, which is ephemeral

6. Login VSI
All performance testing documented here utilized the Login VSI (http://www.loginvsi.com) benchmarking tool. Login VSI is the industry-standard load testing solution for centralized virtualized desktop environments. When used for benchmarking, the software measures the total response time of several specific user operations performed within a desktop workload in a scripted loop. The baseline is the response time of those operations under little or no load, measured in milliseconds (ms).

There are two values in particular that are important to note: VSIbase and VSImax.

• VSIbase: A score reflecting the response time of specific operations performed in the desktop workload when there is little or no stress on the system. A low baseline indicates a better user experience, with applications responding faster in the environment.

• VSImax: The maximum number of desktop sessions attainable on the host before experiencing degradation in host and desktop performance. In Login VSI 4.1, the VSImax threshold is derived from the baseline (approximately VSIbase + 1000ms).

SimpliVity used Login VSI 4.1.4 to perform the tests. The VMs were balanced across each of the servers, maintaining a
consistent number of VMs on each node. For the Login VSImax test, a Login VSI launcher was used per 500 desktops. The
Login VSI launcher was configured to launch a new session every 2.88 seconds. All Hosted and Hosted Shared Desktops
were powered on, registered, and idle prior to starting the actual test sessions.

Testing Methodology
For the tests, SimpliVity used the new Login VSI Office Worker workload. The workload simulates the following applications, found in almost every environment:

Login VSI Workload Applications

• Microsoft Word 2010

• Microsoft Excel 2010

• Microsoft PowerPoint 2010

• Microsoft Outlook 2010

• Internet Explorer

• Mind Map

• Flash Player

• Doro PDF Printer

• Photo Viewer

All tests were executed in Login VSI’s Direct Desktop Mode, in which all sessions are started as console sessions. Because no specific remoting protocol is used, the results are relevant regardless of the protocol deployed: comparisons are not influenced by changes at the remoting protocol level, making the results a “pure” comparison of the tests in a VDI context.


Office Worker Workload Definition


(from http://www.loginvsi.com/documentation/Changes_old_and_new_workloads)

The Office Worker is a newly added workload with no direct precursor; it is based on the Knowledge Worker (previously “Medium”) workload. The main goal of the Office Worker workload is to be deployed in environments that use only 1 vCPU in their VMs. Overall, the Office Worker workload has lower resource usage than the Knowledge Worker workload.

Test Environment
The test environment is as described in the Solution Architecture section of this document. It includes both the management infrastructure design for the management workloads used, with the addition of the Login VSI launchers, and the desktop infrastructure design for the desktop workloads tested.

Results
The following results are representative of multiple Login VSI 4.1 runs for office worker users on the infrastructure
described above.

Provisioned by Citrix Provisioning Server

1200 Hosted Shared Desktop sessions – Citrix PVS using cache in device RAM with overflow disk

VSIbase for the environment was 549ms, and VSImax was not reached in any run. VSImax average was 1147ms, and VSImax
threshold was 1550ms.


800 Hosted Desktops deployed with Citrix PVS using cache in device RAM with overflow disk

VSIbase for the environment was 842ms, and VSImax was not reached in any run. VSImax average was 1630ms, and VSImax
threshold was 1842ms.

Provisioned by Machine Creation Services

800 Hosted Shared Desktop sessions deployed with Citrix MCS

VSIbase for the environment was 586ms, and VSImax was hit at 840 sessions. VSImax average was 1481ms, and VSImax
threshold was 1856ms.


600 Hosted Desktops deployed with Citrix MCS

VSIbase for the environment was 847ms, and VSImax was not reached in any run. VSImax average was 1145ms, and VSImax
threshold was 1847ms.

Data Efficiency
One of the key components of SimpliVity hyperconverged infrastructure is data efficiency. By using inline deduplication and compression to optimize data before it hits the disk, SimpliVity reduces I/O and space usage and leaves as much CPU as possible available to run business applications. The results for our small-scale (600-1200 users per vSphere Datacenter) testing were above 20:1 data efficiency.


7. Summary/Conclusion
This Reference Architecture provides guidance to organizations implementing Citrix XenDesktop 7.6 on SimpliVity hyperconverged infrastructure, and describes tests performed by SimpliVity to validate and measure the operation and performance of the recommended solution, including third-party-validated performance testing with Login VSI, the industry-standard benchmarking tool for virtualized workloads.

Login VSI office worker test results showed that two 5-node SimpliVity OmniStack clusters can support 2000 XenDesktop and XenApp seats when deployed with the PVS RAM cache option, and 1400 seats when deployed with MCS. In Login VSI testing, consistently low latency of less than 2000ms average response was observed for both Hosted and Hosted Shared Desktop implementations. Native always-on inline deduplication and compression provided a data reduction rate of above 20:1.

PVS RAM cache is critical to eliminating I/O on OmniStack storage and lowering storage latency. MCS tests showed a lighter I/O footprint compared to PVS without RAM cache.

Utilizing SimpliVity OmniStack hyperconverged infrastructure dramatically simplifies IT systems management. OmniStack’s Data Virtualization Platform delivers industry-leading data efficiency, global unified management and built-in data protection. For VDI environments, SimpliVity provides an unmatched user experience without compromising desktop density or resiliency.

8. References and Additional Resources


Performance Whitepaper: VDI without Compromise with SimpliVity OmniStack and Citrix XenDesktop 7.6

ESG Lab Review Preview: SimpliVity Hyperconverged Infrastructure for VDI Environments

Citrix Product Documentation - XenDesktop 7.6 Long Term Service Release

Citrix Product Documentation - XenApp 7.6 Feature Pack 2 Blueprint

Citrix Product Documentation - XenDesktop 7.6 Feature Pack 2 Blueprint

Citrix Product Documentation - Provisioning Services 7.6

Minimum requirements for the VMware vCenter Server 5.x Appliance (2005086)

Best Practice Active Directory Design for Managing Windows Networks

Login VSI - Changes old and new workloads


9. Appendix

Design Guidelines
With SimpliVity OmniStack, you can start small with as few as two OmniStack nodes (for storage HA) and grow the environment as needed. This provides the flexibility of starting with a small-scale proof of concept and growing to large-scale production, without having to guess at workloads and purchase capacity up front.

The following section covers the OmniStack, XenDesktop, infrastructure and network design guidelines for Citrix Hosted
and Hosted Shared Desktop deployments on SimpliVity OmniStack.

SimpliVity OmniStack Design Guidelines

Item Detail Rationale

Minimum Size 2 OmniStack nodes Minimum size requirement with storage HA. A cluster of a single OmniStack node is possible without storage HA.
Scale Approach Use modular blocks Scale out from proof of concept to thousands of desktops.
Scale Unit Nodes, then clusters Granular scaling to precisely meet capacity demands. Scale in node increments up to the vSphere Cluster and Datacenter maximums.
Federation Maximum 32 OmniStack nodes Combine compute nodes to offload CPU and memory usage.
Cluster Size Up to 5 OmniStack nodes per vSphere Cluster OmniStack version 3.0 supports up to 5 nodes in a single vSphere Cluster for desktop workloads.
Management Infrastructure Dedicated OmniStack infrastructure is recommended for all XenDesktop deployments Separation of management workloads from desktops is key to ensuring performance, manageability, and security of both.


Citrix XenDesktop 7.6 Design Guidelines

Item Detail Rationale

Citrix XenDesktop
Desktop Delivery Controller N+1 One Citrix XenDesktop Delivery Controller can support up to 5000 users. Use an N+1 configuration for high availability.
StoreFront Server N+1 The StoreFront server provides users a list of resources. One StoreFront server can support 10,000 connections. Redundant StoreFront servers should be deployed to provide N+1 high availability.
License Server 1 A single license server was used because the environment will continue to function in a 30-day grace period if the license server is offline.
SQL Database 2 + witness SQL Server is a key resource in XenDesktop, so availability is paramount. The best practice is to use SQL mirroring with a witness.

Citrix Provisioning Services
PVS Server N+1 One PVS server can support around 500 VMs; N+1 availability is required for production deployments. Best practice is to hold vDisks in PVS server RAM to reduce hard disk reads, thereby improving performance. Size PVS server RAM as 2GB + (number of vDisks x 2GB).
vDisk Store PVS local disk To enable the PVS vDisk RAM cache, PVS local disk is recommended.
Write Cache Write cache on RAM with overflow to disk Using host RAM almost eliminates I/O and has become the new best practice. RAM cache size recommendations for the office worker workload: Hosted Desktop with Windows 7 32-bit, 256MB; Hosted Desktop with Windows 7 64-bit, 512MB; Hosted Shared Desktop with Windows Server 2012 R2, 2GB. The best practice is also to defragment the vDisk.

Citrix NetScaler (Optional)
NetScaler N+1 NetScaler provides load balancing, including the global site load balancing required for active/active multisite configurations, and disaster recovery capabilities between sites.


Supporting Infrastructure Design Guidelines

Item Detail Rationale

DNS N+1 High availability for DNS.
DHCP N+1 High availability and load balancing for DHCP. DHCP is required for PXE boot with PVS, with options 66 and 67 configured. Check the Citrix PVS 7.6 product documentation for details.
File Services N+1 The best practice is to tune the Windows volume where the profiles are stored to use an 8KB cluster size, rather than the default.

For more information, visit:


www.simplivity.com
© 2016, SimpliVity. All rights reserved. Information described herein is furnished for
informational use only and is subject to change without notice. SimpliVity, the SimpliVity
logo, OmniCube, OmniStack, and Data Virtualization Platform are trademarks or
registered trademarks of SimpliVity Corporation in the United States and certain
other countries. All other trademarks are the property of their respective owners.

J966-Citrix-RA-EN-0716

