Tim Freeman
A Glossary of Terms:
VMM (Virtual Machine Monitor) - a third-party tool providing the interface between a Virtual Machine and the host machine. Some examples of VMMs are VMware and Xen.
VMManager - Grid service interface that allows a remote client to interact with the VMM.
VMRepository - Grid service which catalogues the VM images of a VO (Virtual Organization) and stores them for retrieval and deployment.
Authorization Service - Grid service which the VMManager and VMRepository services call to check whether a user is authorized to perform the requested operation.
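To make the division of responsibilities concrete, here is a minimal sketch of the authorization pattern these services share. The class and method names are assumptions for illustration only, not the actual Grid service interfaces.

```python
# Sketch of the authorization pattern from the glossary: VMManager and
# VMRepository consult the Authorization Service before every operation.
# Class and method names are illustrative assumptions, not the real API.
class AuthorizationService:
    def __init__(self, grants):
        self._grants = grants                  # set of (user, operation)

    def is_authorized(self, user, operation):
        return (user, operation) in self._grants

def check(authz, user, operation):
    """Raise unless the Authorization Service approves the operation."""
    if not authz.is_authorized(user, operation):
        raise PermissionError(f"{user} may not perform {operation!r}")

authz = AuthorizationService({("alice", "deploy"), ("alice", "migrate")})
check(authz, "alice", "deploy")                # passes silently
```

In this pattern the manager and repository services never store policy themselves; they delegate every decision to the Authorization Service, which is what lets a single access-control list govern both deployment and migration.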
Performance Implications
Instead of running Grid software within VMs, we integrated VM deployment into the Grid infrastructure: mapping a client credential to a Unix account was replaced by deploying a VM and starting the client's environment within it.
The performance of applications running in a VM depends on the third-party VMM and on the applications themselves. A purely CPU-bound program will see almost no performance degradation, as all of its instructions are executed directly on hardware. Typically, virtual machines intercept privileged instructions (such as I/O), resulting in a performance hit for those instructions, although newer methods, such as those implemented by Xen, improve this factor. In our implementation, we experimented with VMware Workstation and Xen; in our experience the slowdown was modest, often only a few percent, and the Xen slowdown was considerably smaller than VMware's.
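Slowdown figures of this kind come from timing the same benchmark natively and inside the VM; the percentage is computed as below (the timings used here are made-up illustrative values, not our measurements).

```python
# Relative slowdown of a benchmark run inside a VM versus on the host.
# The example timings are invented for illustration.
def slowdown_pct(native_s, vm_s):
    """Percentage slowdown of a VM run relative to a native run."""
    return 100.0 * (vm_s - native_s) / native_s

print(slowdown_pct(100.0, 105.0))  # 5.0
```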
Describing VM Properties
Migration
A VM constitutes a virtual workspace configured to meet the requirements of Grid computations. We use an XML Schema to describe various aspects of such a workspace, including the virtual hardware (RAM size, disk size, virtual CD-ROM drives, serial ports, parallel ports), the installed software including the operating system (e.g. kernel version, distribution type) and library signature, and other properties such as the image name and VM owner. Based on these descriptions VMs can be selected, duplicated, or further configured.
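As an illustration only, a workspace descriptor conforming to such a schema might look like the XML below; the element and attribute names are invented for this sketch and are not the project's actual XML Schema.

```python
# Hypothetical workspace descriptor; element and attribute names are
# illustrative only, not the project's actual XML Schema.
import xml.etree.ElementTree as ET

descriptor = """
<workspace>
  <image name="debian-sarge-grid" owner="alice"/>
  <hardware>
    <ram mb="512"/>
    <disk gb="2"/>
    <cdrom count="1"/>
    <serial-ports count="2"/>
  </hardware>
  <software>
    <os distribution="Debian" kernel="2.4.26"/>
    <library-signature>glibc-2.3.2</library-signature>
  </software>
</workspace>
"""

root = ET.fromstring(descriptor)
ram = int(root.find("./hardware/ram").get("mb"))
print(root.find("image").get("name"), ram)  # debian-sarge-grid 512
```

A repository can match descriptors like this against client criteria (e.g. minimum RAM, required kernel version) to select a workspace for duplication or further configuration.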
Legend:
- VMManager
- VMRepository
Integrating Virtual Machines with Grid technology allows easy migration of applications from one node to another. The steps are as follows:
1. Using Grid software, the client freezes execution of the VM.
2. The client then sends the "migrate" command to the VMManager, specifying the new host node as a parameter.
3. After checking for the proper authorization, the VM is registered with the new host and a GridFTP call transfers the image.
In terms of performance this is on a par with deployment: it is mainly bound by the length of the transfer. In our tests, we migrated a 2 GB VM image between two identical nodes over a Fast Ethernet connection.
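A quick back-of-the-envelope estimate shows why the transfer dominates: moving a 2 GB image over Fast Ethernet (100 Mbit/s) takes a few minutes even under optimistic assumptions. The 80% link-efficiency factor below is an assumption for the sketch, not a measured value.

```python
# Back-of-the-envelope check that migration is dominated by image transfer:
# estimate the time to move a 2 GB image over Fast Ethernet (100 Mbit/s).
# The 0.8 link-efficiency factor is an assumed value, not a measurement.
def transfer_seconds(image_bytes, link_mbps, efficiency=0.8):
    """Time to transfer an image over a link of the given nominal speed."""
    usable_bytes_per_s = link_mbps * 1e6 * efficiency / 8
    return image_bytes / usable_bytes_per_s

t = transfer_seconds(2 * 1024**3, link_mbps=100)
print(round(t))  # 215
```

At roughly three and a half minutes for the raw transfer alone, the registration and authorization steps (a few seconds each) are negligible by comparison.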
VM Deployment
The VM deployment process has 3 major steps:
1. The client queries the VMRepository, sending a list of criteria describing a workspace. The repository returns a list of VM descriptors that match them.
2. The client contacts the VMManager, sending it the descriptor of the VM they want to deploy, along with an identifier and a lifetime for the VM. The VMManager authorizes the request using an access control list.
3. The VM instance is registered with the VMManager and the VM image is copied from the VMRepository. The VMManager then interfaces with the VMM on the resource to power on the VM.
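The three steps above can be sketched in code as follows; this is a minimal in-memory illustration assuming hypothetical class and method names, not the real Grid service interfaces.

```python
# Hypothetical sketch of the three-step deployment flow. The service names
# follow the poster's glossary, but the methods and data layout are
# illustrative assumptions, not the actual API.
from dataclasses import dataclass

@dataclass
class VMDescriptor:
    image_name: str
    ram_mb: int
    disk_gb: int
    os_kernel: str

class VMRepository:
    """Catalogues VM images and serves them for deployment."""
    def __init__(self, images):
        self._images = images

    def query(self, **criteria):
        # Step 1: return descriptors matching all of the given criteria.
        return [d for d in self._images
                if all(getattr(d, k) == v for k, v in criteria.items())]

class VMManager:
    """Deploys VMs onto a resource after an authorization check."""
    def __init__(self, repository, acl):
        self._repo, self._acl = repository, acl
        self.registered = {}           # identifier -> (descriptor, lifetime)

    def deploy(self, user, descriptor, identifier, lifetime_s):
        # Step 2: authorize the request against an access control list.
        if user not in self._acl:
            raise PermissionError(f"{user} may not deploy VMs")
        # Step 3: register the instance; a real implementation would now
        # copy the image from the repository and power on the VM via the VMM.
        self.registered[identifier] = (descriptor, lifetime_s)
        return identifier

repo = VMRepository([VMDescriptor("debian-sarge-grid", 512, 2, "2.4.26")])
mgr = VMManager(repo, acl={"alice"})
workspace = repo.query(ram_mb=512)[0]                      # step 1
mgr.deploy("alice", workspace, "vm-001", lifetime_s=3600)  # steps 2-3
print(sorted(mgr.registered))  # ['vm-001']
```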
The graph to the right shows the proportion of time taken by the constituents of the migration process, measured in seconds. Note that the graph does not include authorization time, but it is comparable to registration time. The actual migration time depends on network latency and bandwidth; the pause and resume times depend on the third-party VMM.
The graph to the right shows the proportion of time taken by the constituents of the deployment process, measured in seconds. The authorization time is not included, but it is comparable to registration time. The dominant factor in overall deployment time is the image transfer, which depends on network latency and bandwidth.
After a scientist has deployed a VM onto the resource, they may run an application in it. For this purpose, each of our VMs was configured with the Globus Toolkit. This picture shows a scientist running the TOPO program, creating an image of a transmembrane protein.
The low-level features of our architecture are detailed in the diagram to the right. The diagram shows four nodes, each running a (potentially different) host OS. Each node runs a VMM and a VMManager Grid Service. On top of that layer run the actual VMs, which have Grid software installed, allowing them to be run as Grid nodes. The VMs could also be used as independent execution environments, without Grid middleware installed on them (instead, they would run applications directly).