STICK IT TO THE MAN:
BUILDING A PRODUCTION VDI WITH FOSS
CHETAN VENKATESH, CTO & FOUNDER, ATLANTIS COMPUTING
BRIFORUM 2012, CHICAGO
VIVA LA REVOLUCION
Tweet me at
@chetan
Tweet from this session
#BRIFORUM-STIM
This presentation is licensed under Creative Commons.
The Creative Commons copyright licenses and tools forge a balance inside the traditional "all rights reserved" setting that copyright law creates. Our tools give everyone from individual creators to large companies and institutions a simple, standardized way to grant copyright permissions to their creative work. The combination of our tools and our users is a vast and growing digital commons, a pool of content that can be copied, distributed, edited, remixed, and built upon, all within the boundaries of copyright law.
VDI AHEAD
TOP 3 VDI MYTHS
VDI is a form of STD that originated in IBM
VDI is transmitted through toilet seats
VDI is the accidental love child of a VMware engineer and Microsoft Bob (and/or Windows 98/ME)
ANATOMY OF VDI
WAN Optimization
Thin Client Platform
Windows & App Virtualization
Broker
Hypervisor
Network
Storage
DESIGN GOALS
Must be practical to deploy
Cost-effective
Perform within expectations
Equivalent analogs to commercial components
Support both persistent & non-persistent desktops
Must support HA for persistent desktops at the storage level
Must support thin provisioning for cloning VMs (a la linked clones/MCS)
Nice to have: a vBlock-like POD architecture for 500 users
STORAGE
VMs use a virtualized hard drive that needs storage that is:
Consistent: what you write is what you read back
Highly available: should be able to read and write all the time, any time
Adequate performance: should be able to drive IOPS-hungry VMs and Windows operations
Windows desktop VMs are very I/O hungry & need very high-performance storage
CAP THEOREM & STORAGE
Consistency
Partition/Availability
Performance
WINDOWS DESKTOPS & VIRTUALIZATION
Write intensive
70%-80% of I/O is writes; worse in the case of Linked Clone/Thin Provisioned VMs
Small I/O
Majority of I/O averages at 4KB.
Lots of small direct I/O writes issued by workloads
Mostly Random
Hypervisors perform very poor I/O scheduling
Hypervisors unaware of underlying Storage Layout
Blend different types of I/O from different VMs into a variable, highly random access pattern
High Bursts
Will read and write at the queue depth of the virtual disk driver, not the physical hardware
Cumulative queue depth is an order of magnitude higher than the physical path
10X more VMs per host than with server virtualization
Interactive Writes cause Read starvation challenges
Reads are synchronous and block application/UI on interactive workloads
Stateless/Stateful separation challenges
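This profile (small, mostly random, write-heavy I/O at high queue depth) can be approximated with a synthetic benchmark to size your own hardware. A minimal fio job sketch, assuming fio is installed on the test host; the job name and sizes are illustrative, not from the original deck:

```ini
; vdi-sim.fio : rough approximation of a Windows desktop VM's I/O profile
[vdi-sim]
ioengine=libaio
direct=1            ; bypass the page cache, like direct I/O from the guest
rw=randrw           ; mostly random access pattern
rwmixwrite=75       ; ~70-80% writes, per the slide
bs=4k               ; majority of I/O averages 4KB
iodepth=32          ; virtual-disk-driver queue depth, not physical hardware
size=1G
runtime=60
time_based
```

Run it against the candidate datastore with `fio vdi-sim.fio` and compare the reported IOPS to your per-VM budget.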
WINDOWS IS OIL / STORAGE IS WATER
Windows is built for physical hardware, not virtual
Windows has many Optimizations for Physical PCs that run on spinning media (HDD)
Optimizations eat I/O capacity because Windows is unaware that its storage is virtualized
A/V scanning
Access optimization
Sector latency optimization
File layout optimization
WINDOWS & CAP
Consistency
Partition/Availability
Performance
STORAGE HARDWARE
Hardware for SAN/NAS
4U rack server chassis with dual redundant power supplies
Single-socket dual-core 2GHz processor, 48GB DDR3 RAM
Battery backed caching RAID card
24 Drive Bays with 10K SAS drives 300GB Each
RAW Capacity of 6.7 TB
Usable Capacity of 4.4 TB RAID 50
Average Persistent VM size 40GB = 102 VMs
Average NP VM size 5GB = 819 VMs
Usable IOPS capacity 1100 4KB IOPS
Average VM requires 30 IOPS = 36 VMs*
So for 500 VMs I would need 333 drives, or 13 more storage servers like this one.
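The sizing arithmetic above can be checked directly from the slide's figures (1,100 usable 4KB IOPS per 24-drive server, 30 IOPS per VM); a quick shell sketch:

```shell
# Sizing check using the slide's figures
total_iops=$(( 500 * 30 ))                 # 500 VMs at 30 IOPS each
per_server=1100                            # usable 4KB IOPS per 24-drive server
per_drive=$(( per_server / 24 ))           # ~45 IOPS per 10K SAS drive
drives=$(( total_iops / per_drive ))       # drives needed for 15,000 IOPS
servers=$(( (total_iops + per_server - 1) / per_server ))   # round up
echo "Need $total_iops IOPS -> $drives drives, or $servers servers total"
```

That reproduces the slide's numbers: 333 drives, i.e. 13 servers beyond the first.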
STORAGE OPTIONS
            SAN/NAS   Local Disk
Available   Yes       No
IOPS        No        No
Centralized storage? Snapshots/clones? Highly available? Using local disk?
INTRODUCING SHEEPDOG
Stupid name, great technology
One of the most exciting Open Source projects today
A Virtualization Specific Storage Infrastructure
A distributed Block Level Storage system
Runs on local storage
Aggregates each server/node's local storage into a collective pool
Scales to Hundreds of Nodes
Supports advanced volume management features
Snapshots, cloning, thin provisioning
STORAGE OPTIONS
SAN/NAS vs. Local Disk vs. Local Disk with Sheepdog
Sheepdog adds automatic VM replication on local disk
HOW SD WORKS ..
C-A OF CAP
HOW SD WORKS ..
ADD NODES DYNAMICALLY
SD PERFORMANCE: 3 HOSTS & 1 VM
SD PERFORMANCE: 64 HOSTS & 256 VMS
Linear scalability in terms of performance
C.A.P NOW AGREES
Consistency
Performance
Availability
*sort of
SD FEATURES OF INTEREST
Create a VM
$ qemu-img create sheepdog:<vmname> 256G
Enable Local node based I/O caching
$ qemu-system-x86_64 -drive file=sheepdog:<vmname>,cache=writeback
Boot a VM
$ qemu-system-x86_64 sheepdog:<vmname>
Create a Snapshot or linked clone
$ qemu-img snapshot -c <snap-name> sheepdog:<vmname>
How to manage sheepdog
Bash scripting
Write your own code using libvirt
Use OpenStack (bindings in alpha stage)
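For the libvirt route, a sheepdog volume is attached as a network disk in the domain XML rather than a local file. A minimal sketch of the disk element, assuming libvirt's sheepdog protocol support; the volume name, host, and port (7000 is sheepdog's usual default) are illustrative:

```xml
<!-- Illustrative libvirt disk definition for a sheepdog-backed volume -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='sheepdog' name='vmname'>
    <host name='localhost' port='7000'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```

With the disk defined this way, the usual libvirt tooling (virsh, virt-manager, or your own libvirt code) manages the VM like any other domain.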
WHERE TO GET SHEEPDOG
Google Sheepdog KVM (github.com/collie/sheepdog/wiki)
Requirements
2 or more Linux Machines (or VMs)
Linux 2.6.27 or later
Corosync and the Corosync libs
QEMU 0.13 or later
Ubuntu or Debian distributions are recommended since the packages already exist
Caution
Might have to recompile Corosync to make it work
If you have a problem, check the corosync conf file and make sure the IPs are correct
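The part of /etc/corosync/corosync.conf that usually needs attention is the totem interface block, where bindnetaddr must match the storage network. A sketch for corosync 1.x; the addresses here are placeholders, not values from the original deck:

```
totem {
  version: 2
  secauth: off
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.1.0   # your storage subnet (network address, not a host IP)
    mcastaddr: 226.94.1.1
    mcastport: 5405
  }
}
```

If nodes fail to join the cluster, a wrong bindnetaddr is the first thing to rule out.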
SD BASED VDI BOM
Node Configuration
Dual Socket Quad Core 2Ghz or better
64 GB DDR3 RAM
4 x 10K SAS drives in RAID 0 = 1.2 TB / 600 IOPS
4 x 1GbE NICs; 3 x 1GbE LACP links for storage
Infrastructure Services
Windows server for AD
Crossroads broker server
LTSP network for diskless thin clients
Linux KVM hypervisor
Sheepdog for storage
VM cloning
Stay tuned!!
[Closing slide images: balloons, a BeagleBoard, a satellite, string, a super uber magnificent cloud. Open source, of course.]
You are very welcome