
GridRAM: A Software Suite Providing User Level GRID Functionality for University Computer Labs

Ronald Marsh, Ph.D.
Computer Science Department
University of North Dakota
Grand Forks, ND 58202
rmarsh@cs.und.edu

Abstract
This paper describes gridRAM, a program developed in the Computer Science Department at the University of North Dakota (UND) that provides students with a simple mechanism for using a collection of computers (e.g., the computers in a student-accessible lab) as a computational cluster. GridRAM was originally developed to provide students in CSci-446 (Introduction to Computer Graphics) with a simple mechanism for using a dedicated cluster as a computer animation rendering farm. GridRAM proved itself useful, and users expressed interest in using it for other applications and in open/student-accessible computer labs. Thus, the capabilities of the current version of gridRAM are significantly extended from the original. By dividing gridRAM into a pair of programs, gridRAM allows almost any set of networked computers to function as a computational grid or rendering farm. However, unlike traditional GRID systems, gridRAM functions at the user level and does not require system administration support or installation. GridRAM supports the parallel processing of files of any type via any associated stand-alone package, provides help in configuring passwordless SSH connections between computers, allows the user to specify the maximum number of processes per machine as well as the maximum total number of processes to be used overall, provides user-configurable levels of verbosity and of processing statistics, and can automatically reprocess files that failed to process (due to a computer failure, for example). GridRAM assumes that the computers are unreliable and conducts a series of checks before processing begins to ensure that the computers intended for the processing actually exist (some may be turned off, for example), that the package to be used for processing the files exists on all available computers, and that the PVM daemons were successfully started on each available computer. Once these checks have been completed, gridRAM employs a naive master-slave load-balancing algorithm to dispatch the various files to the various computers for processing.

1 Introduction
In prior years, students in CSci-446 (Introduction to Computer Graphics) in the Computer Science Department at the University of North Dakota (UND) were required to produce a 3D computer game as their final project. However, creating a game is risky: aside from developing a rich graphical environment, one has to ensure the game logic is correct. Thus, in the spring 2005 CSci-446 offering, students were given the option of developing a game or an animation. Due to the success and popularity of animation development, in the spring 2006 offering the decision was made to allow only animation projects. Furthermore, to reinforce software engineering and management concepts, all animations required teams of at least four students and a play time of approximately four minutes.

As the course progressed, the students became more adept with OpenGL[1] and the storyboards became more ambitious. While the movies could be played directly in OpenGL (i.e., like a video game), OpenGL does not provide a very sophisticated lighting model, so the decision was made to use the UND-developed PovGL[2] library to convert each OpenGL scene to a POV-Ray[3] scene description file and to use POV-Ray to render (raytrace) the scenes, creating more realistic-looking images that would later be combined to form the final movie.

OpenGL test scenes were converted to the POV-Ray scene description format and rendered on computers in the Advanced Computing Lab¹. Depending on the image quality desired, POV-Ray could take up to 45 minutes per scene. Through experimentation, we settled on rendering parameters that produced good-quality images and reduced rendering time to an average of a few minutes per scene. Since each 4-minute movie would require rendering approximately 7,000 scenes, it became clear that the computer time required to render the movies would still be substantial and that some mechanism to parallelize the task would be desirable.

Since the Computer Science Department possesses a dedicated Beowulf cluster (called the CrayOwulf²), the obvious choice was to obtain or develop software that could automatically spread the rendering across the CrayOwulf, in effect turning the CrayOwulf into a computer animation rendering farm[4]. The rendering of many scenes (or the processing of many scene description files) is an embarrassingly parallel problem well suited to a master-slave environment, and since we had already written a similar program for generating fractals, the obvious choice was to revise the existing fractal generator.

GridRAM, the resulting program, proved itself useful, and users expressed interest in using it for other applications and in open/student-accessible computer labs. Thus, the question arose as to whether it was worth the effort to enhance gridRAM so that it could function as a computational GRID[5]. Before committing the time required to expand gridRAM's capabilities, we investigated some of the currently (and freely) available GRID packages such as Condor[6], Sun's Grid Engine[7], and the Globus Toolkit[8]. While these packages provide a rich feature set and have been used successfully by many, they require system administration support and incur substantial overhead. We also reviewed the adaptable GRID system developed by two UND graduate students[9, 10]. While the UND-developed system was simpler and closer to what was desired, its port-knocking security mechanism was not needed and its adaptability (which moved jobs from computers in use to computers not in use) incurred too much overhead. What we wanted was an even simpler system that would operate strictly at the user level, and so the revision of gridRAM was begun.

¹ Advanced Computing Lab computers have 3 GHz Pentium CPUs, 1 GB RAM, and 160 GB SATA hard drives.
² The CrayOwulf consists of 10 4U rack-mount computers, each with dual 2.4 GHz Xeon CPUs, 512 MB RAM (the master has 4 GB), and 40 GB IDE hard drives (the NFS server has 1 TB), all housed in a Cray J90 rack.

2 Usage
2.1 Initial Setup
Before we delve into the design of gridRAM, we will look at how one would use it. GridRAM supports the parallel processing of files of any type via any associated stand-alone package; however, gridRAM does not parallelize the processing of any single file. To make effective use of gridRAM, many files should be slated for processing.

First, gridRAM requires the user to copy the following files into a common directory:
- The data files to be processed.
- The gridRAM and gridRAMSlave binaries.
- The gridRAM.ini file.
- A nodes file (copy or create).
Note that the filenames must be exactly as shown above.

Second, the nodes file must be edited so that it contains the names of the computers you want to include in your grid. Note that only one computer can be listed per line, and you must use the name as required by an SSH connection. For example:
Chinchilla-15.cs.und.edu
Chinchilla-16.cs.und.edu
Chinchilla-17.cs.und.edu
Penguin-20.cs.und.edu
Penguin-25.cs.und.edu

Lastly, the gridRAM.ini file must be modified to match your environment and processing requirements. An example gridRAM.ini is shown below and discussed in the following paragraphs:
Input_filename_extension__: .pov
Output_filename_extension_: .png
Processes_per_node__(-1,N): -1
No_processes_wanted_(-1,N): -1
Wait_time_multiplier_(N)__: 2
Verbosity_level_(0,1,2)___: 2
Show_statistics_(0,1,2)___: 1
Reprocess_lost_files_(0,1): 1
#
#Data_process_script:
#
/usr/local/bin/povray +L/usr/local/share/povray-3.6/include +I$1 +O$2 +A0.3 +W640 +H480 +FN -D

The first eight lines allow the user to customize the behavior of gridRAM. Note that it is imperative that the user not change any text to the left of the colons. The first eight options are as follows:

1. The Input_filename_extension line allows the user to specify the input filename extension. Any and all files in the common directory with the specified extension will be processed (whether intended or not). The maximum allowed length of the input extension is 9 characters.

2. The Output_filename_extension line allows the user to specify the output filename extension. All resulting output files will be placed in the common directory and will have this extension appended to the original input filename. For example, if the input filename is dataset1.pov, the resulting output file will be named dataset1.pov.png. The maximum allowed length of the output extension is 9 characters.

3. The Processes_per_node__(-1,N) line allows the user to specify the maximum number of concurrent processes allowed per computer (or node). A value of -1 instructs gridRAM to use as many processes as there are CPUs on each individual computer. A value of N (N > 0) instructs gridRAM to use at most N processes per computer (limited to the number of CPUs on each individual computer). Note that in some cases (processing that involves a lot of I/O) throughput may be enhanced by limiting the number of concurrent processes per machine to 1. Finally, while N must be an integer, there is no upper bound on N.

4. The No_processes_wanted_(-1,N) line allows the user to specify the maximum number of concurrent processes allowed on the entire grid. A value of -1 instructs gridRAM to use as many processes as there are computers and CPUs (limited by the Processes_per_node option) available in the grid. A value of N (N > 0) instructs gridRAM to use at most N processes. Note that gridRAM prioritizes the use of computers by the order in which they are listed in the nodes file. Depending on the task, the user may not need or want all of the computers in the GRID, or may want to restrict use to certain computers. Finally, while N must be an integer, there is no upper bound on N.

5. The Wait_time_multiplier_(N) line allows the user to specify an upper bound on the processing time in an effort to detect failed processes or computers. More details on how the Wait_time_multiplier is applied can be found in section 3. Finally, while N must be an integer, there is no upper bound on N.

6. The Verbosity_level_(0,1,2) line allows the user to control the amount of run-time information provided to the user. A value of 0 specifies that no run-time information be displayed. A value of 1 specifies that only summary run-time information be displayed. A value of 2 specifies that all run-time information be displayed. For example, gridRAM pings each computer listed in the nodes file to check its availability. If the verbosity level is set to 1, the user will be informed that this step is taking place but will not be shown any specific ping results. If the verbosity level is set to 2, the user will be shown the ping results for each computer.

7. The Show_statistics_(0,1,2) line allows the user to control the amount of processing statistics displayed. A value of 0 specifies that no statistics are to be displayed. A value of 1 specifies that only summary statistics are to be displayed. The summary statistics are:
   - The number of data files processed.
   - The total processing time (wall clock).
   - The total processor time (the sum of all per-process wall-clock times).
   - The average time to process a data file.
   - The maximum time required to process a data file.
   - The minimum time required to process a data file.
   A value of 2 specifies that the summary statistics be displayed as well as statistics for each computer. The individual computer statistics are:
   - The number of data files processed on each computer.
   - The average time to process a data file on each computer.

8. The Reprocess_lost_files_(0,1) line allows the user to instruct gridRAM to automatically reprocess files that failed to process due to a hardware failure. A value of 0 specifies that files that failed to process will not be automatically reprocessed. A value of 1 specifies that they will be. Note that if 1 has been specified and there are files that failed to process, the entire GRID is reinitialized before reprocessing is attempted; this is done in an effort to exclude failed nodes. Note also that a setting of 1 forces the Show_statistics option to have a minimum value of 1.

The next three lines (starting with #) are simply placeholders and must not be changed or removed. The last line (shown wrapped in the example) specifies the actual program/command used to process the data files. Depending on the program and its installation, this line will vary widely. In the example provided, the full path to the program (/usr/local/bin/povray) is required, as is the path to the POV-Ray library (+L/usr/local/share/povray-3.6/include). Also included are POV-Ray options (+A0.3 +W640 +H480 +FN -D) to control file processing. Most importantly, the example data-processing script includes the variables $1 and $2 to specify the input filename (+I$1) and the output filename (+O$2). $1 and $2 are required in all data-processing scripts, regardless of the actual program used to process the data files, because gridRAM parses the data-processing script and replaces $1 and $2 with actual input and output filenames. For example, given an input data file called dataset1.pov and an output target filename of dataset1.pov.png, +I$1 and +O$2 would become +Idataset1.pov and +Odataset1.pov.png, respectively. Essentially, the data-processing script should be, verbatim, whatever Linux/UNIX command is required to process a single data file (with the exception of replacing the input filename with $1 and the output filename with $2). Note that the location of $1 and $2 in the data-processing script is irrelevant; a rough sketch of this substitution is given below.

Once the nodes file has been created and the gridRAM.ini file has been modified, gridRAM is ready for use.
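To make the $1/$2 substitution concrete, a minimal sketch in C is shown below. The function name and buffer handling are illustrative assumptions; this is not the actual gridRAM parser.

#include <stdio.h>
#include <string.h>

/* Illustrative only: replace every $1 in 'script' with 'in' and every $2
   with 'out', writing the resulting command into 'cmd' (of size 'len'). */
static void build_command(const char *script, const char *in,
                          const char *out, char *cmd, size_t len)
{
    size_t pos = 0;
    cmd[0] = '\0';
    while (*script != '\0' && pos + 1 < len) {
        if (script[0] == '$' && (script[1] == '1' || script[1] == '2')) {
            const char *sub = (script[1] == '1') ? in : out;
            size_t n = strlen(sub);
            if (pos + n >= len) break;        /* avoid overflowing cmd */
            memcpy(cmd + pos, sub, n);
            pos += n;
            script += 2;
        } else {
            cmd[pos++] = *script++;
        }
    }
    cmd[pos] = '\0';
}

/* Example: with in = "dataset1.pov" and out = "dataset1.pov.png",
   "+I$1 +O$2" becomes "+Idataset1.pov +Odataset1.pov.png". */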

2.2 Using GridRAM


There are four command-line options for gridRAM; if you do not include a command-line option, gridRAM will list them (and terminate).

The first command-line option is the setup option. The setup option was designed to assist with the creation and distribution of the public/private key pair that allows passwordless SSH connections (considered by some to be a security risk, but very convenient nevertheless). The setup option first removes any old SSH DSA[11] key files from the user's .ssh directory. The command "ssh-keygen -t dsa" is then executed, creating a new SSH DSA³ public/private key pair, and the public key is copied into the user's .ssh/authorized_keys file. The availability of each computer listed in the nodes file is then verified by pinging each computer; computers that do not respond to the ping are assumed to be unavailable. Secure copy (SCP) is then used to copy an empty file (called test) to each available computer (answer yes when prompted to continue connecting). The purpose of this seemingly pointless step is to ensure that the identity of each computer is registered in the user's .ssh/known_hosts file, allowing future passwordless connections. The final step is to adjust the user's environment so that it references PVM. Please note that the environment variables are set assuming an RPM installation on a RedHat system with bash or zsh as the shell. The setup option is invoked by the Linux/UNIX command:

./gridRAM setup

The second command-line option is the cleanup option. If a PVM-based process terminates unexpectedly, it frequently leaves lock files in the /tmp directory which prevent further PVM daemons from starting. The cleanup option removes all PVM lock files from all available computers listed in the nodes file. The cleanup option is invoked by the Linux/UNIX command:

./gridRAM cleanup

The third command-line option is the localhost option. The localhost option instructs gridRAM to run only on the localhost and is intended for testing purposes only. The localhost option is invoked by the Linux/UNIX command:

./gridRAM localhost
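As an illustration of the setup steps described above, the key generation and host registration could be approximated with a few system() calls, roughly as follows. This is only a sketch under assumptions (a single hard-coded hostname, DSA key filenames, and shell tilde expansion); it is not the actual gridRAM implementation.

#include <stdio.h>
#include <stdlib.h>

/* Rough sketch of the setup steps described above (illustrative only). */
int main(void)
{
    /* Remove any old DSA key files and generate a new key pair. */
    system("rm -f ~/.ssh/id_dsa ~/.ssh/id_dsa.pub");
    system("ssh-keygen -t dsa -N '' -f ~/.ssh/id_dsa");

    /* Authorize the new public key for passwordless logins. */
    system("cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys");

    /* For each node in the nodes file (one hypothetical host shown),
       ping it and, if it answers, scp an empty file so that the host
       is registered in ~/.ssh/known_hosts. */
    const char *node = "chinchilla-11.cs.und.edu";
    char cmd[256];
    snprintf(cmd, sizeof(cmd), "ping %s -w 2 -c 1 -q > /dev/null", node);
    if (system(cmd) == 0) {
        system("touch test");
        snprintf(cmd, sizeof(cmd), "scp test %s:~/", node);
        system(cmd);
    }
    return 0;
}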
³ The DSA standard is more commonly supported by SSH implementations than the RSA standard.

The fourth command-line option is the pvm option. The pvm option instructs gridRAM to behave as a GRID, spreading the workload across the available computers listed in the nodes file. One of the assumptions made in designing gridRAM was that the computers making up the GRID would be unreliable: at times they would be powered down, disconnected from the network, or locked up; at times the program required to process the data files would be missing (erased or never installed); and at times the PVM daemon would not be able to start or to spawn the required processes. Therefore, when started with the pvm option, gridRAM conducts several tests in an effort to eliminate problematic computers from the GRID. If you specify the highest level of verbosity (2) in the gridRAM.ini file, you will see the tests and their results. The pvm option is invoked by the Linux/UNIX command:

./gridRAM pvm

See Appendix 1 for a sample run of gridRAM with the pvm option.
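The kind of per-node pre-flight check performed by the pvm option can be sketched as follows. The ping command is the one gridRAM itself uses (see section 3); testing for the binary over SSH is an assumption made here for illustration, and the PVM daemon check is omitted.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch of pre-flight node checks (not the actual gridRAM
   source).  A node is kept only if it answers a ping and the processing
   binary exists at the expected path. */
static int node_usable(const char *node, const char *binary)
{
    char cmd[512];

    /* Ping test (same command gridRAM uses; see section 3). */
    snprintf(cmd, sizeof(cmd), "ping %s -w 2 -c 1 -q > /dev/null 2>&1", node);
    if (system(cmd) != 0)
        return 0;                      /* no reply: assume unavailable */

    /* Binary test: assumes passwordless SSH has already been set up. */
    snprintf(cmd, sizeof(cmd), "ssh %s test -x %s", node, binary);
    if (system(cmd) != 0)
        return 0;                      /* processing program missing   */

    return 1;                          /* node can join the GRID       */
}

/* Example: node_usable("chinchilla-11.cs.und.edu", "/usr/local/bin/povray") */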

3 Design
GridRAM is designed for the Linux/UNIX environment and requires Parallel Virtual Machine (PVM)[12] (a standard RPM package in RedHat[13] Linux and freely available for other operating systems), Secure Shell (SSH) for interconnectivity (a standard package in Linux/UNIX), and that users' home directories be mounted via a network file system (NFS). An obvious question is: why design a package requiring PVM, SSH, and NFS? PVM was selected as the message-passing mechanism because it is simple (more so than MPI), commonly available for many platforms, heterogeneous, and allows for the dynamic allocation of nodes (extremely important in an unreliable GRID). Originally, gridRAM did not require NFS or SSH, as it was assumed that gridRAM would only be executed on a dedicated/secure cluster (where PVM over RSH could be used and where RCP could be used to transfer the applicable data files between the master and slave computers). However, as gridRAM was extended to include less-secure/student-accessible computers, security concerns made SSH imperative. The use of SSH complicated both the transfer of the applicable data files and the use of PVM. The reliance on NFS to mount home directories greatly simplified the transfer of the applicable data files, as NFS handles the file transfers automatically. The complication of PVM over SSH was resolved by setting up passwordless SSH connections.

Since we are assuming an unreliable GRID, gridRAM conducts several tests to characterize the availability of each computer listed in the nodes file. The first test pings each computer (via the command "ping computer_name -w 2 -c 1 -q") to determine its existence. Computers that reply to the ping are then tested to see whether the PVM daemon can be started. If the PVM daemon can be started, the availability of the processing program is then verified (the program must reside at the location specified in the gridRAM.ini file). If a computer fails any of these three tests, it is removed from the list of available computers making up the GRID. The remaining computers are then queried (via the command "egrep -c '^cpu[0-9]+' /proc/stat") to determine the number of CPUs on each computer (note that hyperthreaded CPUs report 2 CPUs). GridRAM then queries the localhost (via the command "uname -n > masterNodeFile") to obtain its name. GridRAM operates as a traditional master-slave environment and employs the localhost as the master; by convention, the master is not used for data processing but is reserved for managing the workload allocation. Aside from reserving the localhost for the master's duties, having gridRAM detect the localhost provides some portability, as any computer listed in the nodes file can act as the master.

One of the issues that arose during the generation of the CSci-446 movies was how to scan through thousands of files located in a single directory. Executing a system call using the command "ls *.xxx > list" does not always work, as the shell expands the wildcard (*) only up to the point where the total number of characters in the list of filenames equals or exceeds the character limit of the command line. Since the character limit of the command line is much smaller than the number of files that may be present in a directory, another method must be employed. Fortunately, there is a set of C/C++ library functions for Linux/UNIX (opendir, readdir, and closedir) that allows one to read the directory entries directly. Hence, it is relatively easy to extract all of the filenames in a directory, regardless of their number (a minimal sketch using these calls appears below).

Once gridRAM has verified the availability of the computers and of the processing program on those computers, the gridRAM master sends each slave computer a filename to process (or, in the case of multi-CPU computers and depending on the parameters specified in gridRAM.ini, multiple filenames to process). As slave computers report the completion of file processing, the master records the processing time of each file and sends a new filename to that slave for processing. This process repeats until all of the files have been allocated to computers for processing. Once all of the files have been allocated, the master initializes the wait time and waits for all outstanding files to be processed. As the processing of the outstanding files completes, the master updates the maximum time required to process any file as well as the wait time. While the master waits for outstanding files to complete, it also compares the current wait time with the maximum time required to process any file multiplied by the Wait_time_multiplier (specified in gridRAM.ini). If the current wait time exceeds this product, gridRAM assumes that there has been a hardware failure and terminates the run. If the Reprocess_lost_files option has been set to 1, gridRAM will reinitialize the computers (starting with the ping test) and attempt to reprocess the files that failed. With this option set, the overall processing is wrapped in a loop and gridRAM will continually reinitialize itself and process files until all failed files have been successfully processed. The assumption is that any failed processing is due to a computer that has just recently failed, and we do not want to waste time by attempting to reuse that computer.
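A minimal sketch of this directory-scanning approach, using the opendir/readdir/closedir calls just mentioned, is shown below (illustrative only; the function shown is not the actual gridRAM source).

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Print every filename in 'dir' that ends with 'ext' (e.g. ".pov"),
   reading the directory entries directly so that the number of files
   is not limited by the shell's command-line length. */
int list_files(const char *dir, const char *ext)
{
    DIR *dp = opendir(dir);
    if (dp == NULL)
        return -1;                        /* directory could not be opened */

    struct dirent *entry;
    int count = 0;
    while ((entry = readdir(dp)) != NULL) {
        size_t nlen = strlen(entry->d_name);
        size_t elen = strlen(ext);
        if (nlen > elen && strcmp(entry->d_name + nlen - elen, ext) == 0) {
            printf("%s\n", entry->d_name);
            count++;
        }
    }
    closedir(dp);
    return count;                         /* number of matching data files */
}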
Once all of the files have been processed, they are moved from the user's home directory into the directory from which the original data files originated. This step is required due to an idiosyncrasy of using PVM. Finally, the load-balancing mechanism used in gridRAM is naive: the file-processing order is strictly the order of the filenames as retrieved from the directory, and no attempt is made to reorder files based on a priori information. A flowchart of gridRAM is shown in figure 1.

Figure 1: GridRAM flowchart.
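To make the wait-time bound concrete, the following sketch shows roughly how such a check could be written on top of PVM. The message tag, the packed per-file time, and the use of pvm_trecv() are illustrative assumptions and are not taken from the gridRAM source.

#include <sys/time.h>
#include <pvm3.h>

#define TAG_DONE 2   /* hypothetical "file finished" message tag */

/* Wait for 'outstanding' completion messages, growing the allowed wait
   as slower files finish.  Returns the number of files still missing
   (0 if everything completed, >0 if a timeout suggests a failed node). */
int wait_for_outstanding(int outstanding, double max_file_time,
                         int wait_multiplier)
{
    while (outstanding > 0) {
        struct timeval tmout;
        tmout.tv_sec  = (long)(max_file_time * wait_multiplier);
        tmout.tv_usec = 0;

        /* Block until a slave reports completion or the bound expires. */
        int bufid = pvm_trecv(-1, TAG_DONE, &tmout);
        if (bufid <= 0)
            return outstanding;   /* assume a hardware failure; the caller
                                     may reinitialize and reprocess        */

        double elapsed;
        pvm_upkdouble(&elapsed, 1, 1);     /* per-file processing time */
        if (elapsed > max_file_time)
            max_file_time = elapsed;       /* stretch the allowed wait */
        outstanding--;
    }
    return 0;
}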

4 Conclusion
GridRAM is a user-level program that allows almost any open (i.e., student-accessible) computer cluster to perform as a computational farm or GRID. Being a user-level program, gridRAM allows multiple concurrent instances of itself to run (even from the same computer). GridRAM also requires very little (if any) system administration support. GridRAM has been successfully used at UND to parallelize the processing of thousands of POV-Ray files across 46 computers in two student computer labs (12 computers in the Advanced Computing Lab and 34 computers in the Penguin Lab⁴).

⁴ Penguin Lab computers have 1.8 GHz Celeron CPUs, 256 MB RAM, and 40 GB IDE hard drives.

Of course, no software package is ever complete, and gridRAM is no exception. For example:
- With passwordless SSH connections, it is possible to mimic NFS functionality by using SCP to move files around the cluster. This would eliminate the need for NFS.
- It is possible to use SSH to execute programs on remote computers and to use file locks as semaphores to synchronize the load balancing. This would eliminate the need for PVM.
- It is possible to check for users logged on at a slave computer before a file is sent to that computer for processing (a rough sketch is given below). If someone is logged in, no additional files would be sent for processing (processing of the current file would be completed). Since this mechanism has already been explored at UND[10], it would be possible to implement.
- It should be possible to extend gridRAM to include computers running the Microsoft Windows operating system.

The biggest challenge left is how to determine whether a file has failed to process correctly. The current fault-tolerance scheme simply assumes that if an output file with the expected filename exists, the processing was successful. However, in testing, I have confirmed that at times output files are written that are, in fact, garbage.
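As an illustration of the logged-in-user check mentioned above, a test along the following lines would suffice. The use of ssh and who here is an assumption for illustration; it is not part of the current gridRAM.

#include <stdio.h>
#include <stdlib.h>

/* Return 1 if anyone appears to be logged in at 'node', 0 otherwise.
   Illustrative sketch only: runs 'who' remotely over SSH and counts
   the lines of output. */
int node_in_use(const char *node)
{
    char cmd[256];
    snprintf(cmd, sizeof(cmd), "ssh %s who | wc -l", node);

    FILE *fp = popen(cmd, "r");
    if (fp == NULL)
        return 1;                 /* be conservative: treat as in use */

    int users = 0;
    if (fscanf(fp, "%d", &users) != 1)
        users = 1;
    pclose(fp);

    return users > 0;
}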

Finally, gridRAM is licensed under the terms of the GNU General Public License as published by the Free Software Foundation. If you are interested in using gridRAM, please contact the author.

Appendix
GridRAM.ini file used for example run. Note that gridRAM is configured to use as many concurrent processes as there are CPUs and computers and that gridRAM is configured for maximum verbosity and to show all statistics.
Input_filename_extension__: .pov
Output_filename_extension_: .png
Processes_per_node__(-1,N): -1
No_processes_wanted_(-1,N): -1
Wait_time_multiplier_(N)__: 2
Verbosity_level_(0,1,2)___: 2
Show_statistics_(0,1,2)___: 2
Reprocess_lost_files_(0,1): 1
#
#Data_process_script:
#
/usr/local/bin/povray +L/usr/local/share/povray-3.6/include +I$1 +O$2 +A0.3 +W640 +H480 +FN -D

Nodes file used for example run. Note that there are 10 computers listed.
chinchilla-11.cs.und.edu
chinchilla-12.cs.und.edu
chinchilla-13.cs.und.edu
chinchilla-14.cs.und.edu
chinchilla-15.cs.und.edu
chinchilla-16.cs.und.edu
chinchilla-17.cs.und.edu
chinchilla-18.cs.und.edu
chinchilla-19.cs.und.edu
chinchilla-20.cs.und.edu

Directory contents before example run. Note that there are only 19 files to process.
desk01.pov   desk02.pov   desk03.pov   desk04.pov   desk05.pov   desk06.pov   desk07.pov
desk08.pov   desk09.pov   desk10.pov   desk11.pov   desk12.pov   desk13.pov   desk14.pov
desk15.pov   desk16.pov   desk17.pov   desk18.pov   desk19.pov
gridRAM   gridRAM.ini   gridRAMSlave   nodes

Output of gridRAM during the sample run. Note that only 6 of the 10 machines responded to the ping and that those 6 machines each reported having 2 processors and all 6 had the required POV-Ray binary. Thus, 12 concurrent processes on 6 computers were used to process the 19 files.
grid-R.A.M. (V. 2.2) by R. Marsh.
*************************************************
* Defining paths to data and slave processor. *
* Scanning directory for data files. *
* Identifying localhost (ie master). *
* Building node list from 'nodes' file. *
* Pinging all machines in 'nodes' file. *
* - - - - - - - - - - - - - - - - - - - - - - - *
MESSAGE - OK ping from node: chinchilla-11.cs.und.edu
MESSAGE - OK ping from node: chinchilla-12.cs.und.edu
MESSAGE - OK ping from node: chinchilla-13.cs.und.edu
MESSAGE - OK ping from node: chinchilla-14.cs.und.edu
NOTICE - No ping from node: chinchilla-15.cs.und.edu
NOTICE - No ping from node: chinchilla-16.cs.und.edu
NOTICE - No ping from node: chinchilla-17.cs.und.edu
MESSAGE - OK ping from node: chinchilla-18.cs.und.edu
NOTICE - No ping from node: chinchilla-19.cs.und.edu
MESSAGE - OK ping from node: chinchilla-20.cs.und.edu
* - - - - - - - - - - - - - - - - - - - - - - - *
* Starting PVM deamons. *
* - - - - - - - - - - - - - - - - - - - - - - - *
libpvm [t40001]: pvm_addhosts(): Already in progress
libpvm [t40001]: pvm_addhosts(): Already in progress
* - - - - - - - - - - - - - - - - - - - - - - - *
* Querying slave nodes to verify binaries. *
* - - - - - - - - - - - - - - - - - - - - - - - *
MESSAGE - Binary found on: chinchilla-11.cs.und.edu
MESSAGE - Binary found on: chinchilla-12.cs.und.edu
MESSAGE - Binary found on: chinchilla-13.cs.und.edu
MESSAGE - Binary found on: chinchilla-14.cs.und.edu
MESSAGE - Binary found on: chinchilla-18.cs.und.edu
MESSAGE - Binary found on: chinchilla-20.cs.und.edu
* - - - - - - - - - - - - - - - - - - - - - - - *
* MESSAGE - Binaries found on 6 nodes. *
* Querying slave nodes to count CPUs. *
* - - - - - - - - - - - - - - - - - - - - - - - *
MESSAGE - 2 CPUs found on node: chinchilla-11.cs.und.edu
MESSAGE - 2 CPUs found on node: chinchilla-12.cs.und.edu
MESSAGE - 2 CPUs found on node: chinchilla-13.cs.und.edu
MESSAGE - 2 CPUs found on node: chinchilla-14.cs.und.edu
MESSAGE - 2 CPUs found on node: chinchilla-18.cs.und.edu
MESSAGE - 2 CPUs found on node: chinchilla-20.cs.und.edu
* - - - - - - - - - - - - - - - - - - - - - - - *
* MESSAGE - 12 CPUs available on 6 nodes. *
* Processing data using 12 processes. *
* - - - - - - - - - - - - - - - - - - - - - - - *
Package desk01.pov > chinchilla-11.cs.und.edu.
Package desk02.pov > chinchilla-11.cs.und.edu.
Package desk03.pov > chinchilla-12.cs.und.edu.
Package desk04.pov > chinchilla-12.cs.und.edu.
Package desk05.pov > chinchilla-13.cs.und.edu.
Package desk06.pov > chinchilla-13.cs.und.edu.
Package desk07.pov > chinchilla-14.cs.und.edu.
Package desk08.pov > chinchilla-14.cs.und.edu.
Package desk09.pov > chinchilla-18.cs.und.edu.
Package desk10.pov > chinchilla-18.cs.und.edu.
Package desk11.pov > chinchilla-20.cs.und.edu.
Package desk12.pov > chinchilla-20.cs.und.edu.
Package desk12.pov < chinchilla-20.cs.und.edu (processed in 12.608923 seconds).
Package desk13.pov > chinchilla-20.cs.und.edu.
Package desk06.pov < chinchilla-13.cs.und.edu (processed in 12.699614 seconds).
Package desk14.pov > chinchilla-13.cs.und.edu.
Package desk10.pov < chinchilla-18.cs.und.edu (processed in 12.733061 seconds).
Package desk15.pov > chinchilla-18.cs.und.edu.
Package desk09.pov < chinchilla-18.cs.und.edu (processed in 12.735747 seconds).
Package desk16.pov > chinchilla-18.cs.und.edu.
Package desk08.pov < chinchilla-14.cs.und.edu (processed in 12.755130 seconds).
Package desk17.pov > chinchilla-14.cs.und.edu.
Package desk04.pov < chinchilla-12.cs.und.edu (processed in 12.788071 seconds).
Package desk18.pov > chinchilla-12.cs.und.edu.
Package desk03.pov < chinchilla-12.cs.und.edu (processed in 12.790466 seconds).
Package desk19.pov > chinchilla-12.cs.und.edu.
Package desk01.pov < chinchilla-11.cs.und.edu (processed in 12.813492 seconds).
Package desk11.pov < chinchilla-20.cs.und.edu (processed in 12.794922 seconds).
Package desk02.pov < chinchilla-11.cs.und.edu (processed in 12.812033 seconds).
Package desk07.pov < chinchilla-14.cs.und.edu (processed in 12.930891 seconds).
Package desk05.pov < chinchilla-13.cs.und.edu (processed in 13.776563 seconds).
Package desk13.pov < chinchilla-20.cs.und.edu (processed in 7.345762 seconds).
Package desk14.pov < chinchilla-13.cs.und.edu (processed in 7.286570 seconds).
Package desk17.pov < chinchilla-14.cs.und.edu (processed in 7.331951 seconds).
Package desk16.pov < chinchilla-18.cs.und.edu (processed in 12.361622 seconds).
Package desk19.pov < chinchilla-12.cs.und.edu (processed in 12.380969 seconds).
Package desk15.pov < chinchilla-18.cs.und.edu (processed in 12.497977 seconds).
Package desk18.pov < chinchilla-12.cs.und.edu (processed in 12.513354 seconds).
* - - - - - - - - - - - - - - - - - - - - - - - *
* Moving results into working directory. *
* Generating report from log file. *
* - - - - - - - - - - - - - - - - - - - - - - - *
Execution statistics:
------------------------
Data files processed...: 19
Total clock time.......: 12.571506
Total processor time...: 225.957118
Average processing time: 11.892480
Minimum processing time: 7.286570 [desk14.pov]
Maximum processing time: 13.776563 [desk05.pov]
------------------------
Statistics for node....: chinchilla-11.cs.und.edu
Data files processed...: 2
Average processing time: 12.812762
------------------------
Statistics for node....: chinchilla-12.cs.und.edu
Data files processed...: 4
Average processing time: 12.618215
------------------------
Statistics for node....: chinchilla-13.cs.und.edu
Data files processed...: 3
Average processing time: 11.254249
------------------------
Statistics for node....: chinchilla-14.cs.und.edu
Data files processed...: 3
Average processing time: 11.005991
------------------------
Statistics for node....: chinchilla-18.cs.und.edu
Data files processed...: 4
Average processing time: 12.582102
------------------------
Statistics for node....: chinchilla-20.cs.und.edu
Data files processed...: 3
Average processing time: 10.916536
*************************************************
zsh: terminated  ./gridRAM pvm

Directory contents after example run (all 19 files were successfully processed).
desk01.pov   desk02.pov   desk03.pov   desk04.pov   desk05.pov   desk06.pov   desk07.pov
desk08.pov   desk09.pov   desk10.pov   desk11.pov   desk12.pov   desk13.pov   desk14.pov
desk15.pov   desk16.pov   desk17.pov   desk18.pov   desk19.pov
desk01.pov.png   desk02.pov.png   desk03.pov.png   desk04.pov.png   desk05.pov.png
desk06.pov.png   desk07.pov.png   desk08.pov.png   desk09.pov.png   desk10.pov.png
desk11.pov.png   desk12.pov.png   desk13.pov.png   desk14.pov.png   desk15.pov.png
desk16.pov.png   desk17.pov.png   desk18.pov.png   desk19.pov.png
gridRAM   gridRAM.ini   gridRAM.log   gridRAMSlave   nodes   ping

References
[1] OpenGL, http://www.opengl.org/, retrieved October 23, 2006.
[2] Kris Zarns and Ronald Marsh, "RAYGL: An OpenGL to POVRAY API," Proceedings of the 39th MICS, April 7-8, 2006, Iowa Wesleyan College, Mt. Pleasant, IA.
[3] Persistence of Vision Raytracer, http://www.povray.org/, retrieved October 23, 2006.
[4] Matthew Tait, "Build Your Own Render Farm," ExtremeTech, http://www.extremetech.com/article2/0,1697,1847365,00.asp, retrieved October 23, 2006.
[5] Ian Foster and Carl Kesselman (Eds.), The Grid: Blueprint for a Future Computing Infrastructure, Morgan Kaufmann Publishers, San Francisco, 1998.

[6] Condor, http://www.cs.wisc.edu/condor/, retrieved October 23, 2006.
[7] Sun Grid Engine, http://gridengine.sunsource.net/, retrieved October 23, 2006.
[8] The Globus Toolkit, http://www.globus.org/, retrieved October 23, 2006.
[9] Raghuveer Maan, An Adaptive GRID Security Architecture, UND MS thesis, 2005.
[10] Jyotsna Singh Mann, The Prototype Adaptive GRID, UND MS thesis, 2006.
[11] FIPS 186, Digital Signature Standard, http://www.itl.nist.gov/fipspubs/fip186.htm, retrieved October 24, 2006.
[12] Parallel Virtual Machine (PVM), http://www.csm.ornl.gov/pvm/pvm_home.html, retrieved October 23, 2006.
[13] RedHat Linux, http://www.redhat.com/, retrieved October 23, 2006.
