Dr. M.V.L.N. Raja Rao (2), Professor & Head, Dept. of Information Technology, Gudlavalleru Engineering College, Gudlavalleru, Krishna Dist., A.P., India.
Dr. G.V.S.N.R.V. Prasad (3), Professor & Head, Dept. of Computer Science & Engineering, Gudlavalleru Engineering College, Gudlavalleru, Krishna Dist., A.P., India.
Abstract
A computational grid has two characteristics: it must allow resource providers, and likewise resource consumers, to make autonomous scheduling decisions. In this paper, we formulate this intuition of optimizing incentives for both parties as a dual-objective scheduling problem. The two objectives identified are to maximize the successful-execution rate of jobs and to minimize the fairness deviation among resources. The challenge is to develop a grid scheduling scheme that enables individual participants to make autonomous decisions while producing a desirable emergent property in the grid system; that is, the two system-wide objectives are achieved simultaneously. We present a dual-objective scheduling scheme, which utilizes a P2P decentralized scheduling framework, a set of Griddy local heuristic scheduling algorithms, and three market instruments: job announcement, price, and competition degree. The performance of this scheme is evaluated via extensive simulation using a grid simulator. The results show that our approach outperforms other scheduling schemes in optimizing incentives for both consumers and providers, leading to highly successful job execution and fair profit allocation.

Index Terms - Computational grid, scheduling, incentive, peer-to-peer, grid simulator.
each participant makes decisions on its own behalf, and the individual economic behaviors of all participants work together to accomplish resource scheduling, with optimized incentives being an emergent property of the grid system. We formulate the above scheduling problem and investigate market instruments and algorithms to solve it. We identify the successful-execution rate of jobs as the incentive for consumers and the inverse of the fairness deviation as the incentive for providers. As even a subproblem of the formulated scheduling problem is NP-complete (see Section 4.2), we develop a dual-objective scheduling scheme using the Griddy heuristic local scheduling algorithm. Job announcement, competition degree (CD), and price are defined and used as market instruments. Performance evaluation is conducted via extensive simulations, utilizing both statistically generated workloads and real workloads. The results show that the proposed dual-objective scheduling scheme outperforms other schemes in optimizing incentives for both consumers and providers. The rest of this paper is organized as follows: Section 2 gives a formal problem statement. Section 3 contrasts our approach with related work. Section 4 presents the dual-objective scheduling scheme in detail. Section 5 evaluates the performance of our scheme.
ISSN: 2231-2803
http://www.ijcttjournal.org
Page 744
International Journal of Computer Trends and Technology (IJCTT) - volume 4, Issue 4, April 2013
Fig. 1. A dual-objective scheduling scheme in computational grids.

2. Problem Formulation
We define a computational grid as a four-tuple G = (R, S, J, M), as depicted in Fig. 1. The grid G consists of a set of m resource providers R = {R0, ..., Rm-1} and a set of k resource consumers S = {S0, ..., Sk-1}. Over a time period T, a set of n jobs J = {J0, ..., Jn-1} is submitted to the grid by the consumers, scheduled by the scheduling scheme M, and executed by the resources of the providers. The scheduling scheme M should employ market instruments that allow each provider and each consumer to make scheduling decisions autonomously. That is, each provider Ri can decide whether to offer its resource, and each consumer Sj can decide whether to use a certain resource to execute its jobs.

Consumers and jobs. In this paper, we consider only computation-intensive jobs, for which all communication/networking overheads can be ignored. All jobs are independent of one another. The k consumers altogether have n jobs to execute in the time period T. The consumers first submit job announcements to the computational grid. A job announcement includes the job length and the job deadline. Job length is an empirical value assessed as the execution time of the job on a designated standard platform. Job deadline is a wall-clock time by which a consumer desires a job to be finished, expressed as a number between 0 and T. Thus, a job with length = 10 and deadline = 100 takes 10 time units to execute on the designated standard computer, and it must be finished 100 time units after the common base time 0.

Providers and resources. From the scheduling viewpoint, each resource provider is modeled with three parameters: capability, job queue, and unit price. Capability is the computational speed of the underlying resource, expressed as a multiple of the speed of the standard platform. The job queue of a resource provider keeps an ordered set of jobs scheduled but not yet executed. Each job, once started on a resource, runs in a dedicated mode on that resource, without time-sharing or preemption. A provider charges for a job according to its unit price and the job length. Unit price refers to the price that the resource offers for executing a job of unit length. For example, when a provider with capability 5 bids to execute a job of length 20 at a unit price of 2, and the consumer accepts the bid and sends the job to run there, the job takes 20/5 = 4 units of time to complete and generates a profit of 2 x 20 = 40 for the provider.

Incentives for consumers and providers. Intuitively, consumers are attracted to a grid because it offers a high quality of computational service at low cost. This suggests many potential metrics of consumer incentive. However, a fundamental incentive requirement is that a grid should have a high successful-execution rate of jobs, where a successful job execution means that a job is executed without missing its deadline. When this rate is too low, even if the cost is zero (as when a grid is advertising-funded), the consumers will lose faith in the grid and quit it. Therefore, we choose the successful-execution rate γ of the grid system as the incentive for consumers. It is formally defined as

    γ = |{ J_i : T_i^C ≤ T_i^D }| / n,

i.e., the fraction of the n submitted jobs that complete by their deadlines.
Here, T_i^D and T_i^C denote the deadline and the completion time of job J_i, respectively. As the incentive for providers, we formally define a quantity called the fairness deviation of the grid system (reconstructed here as the deviation of per-capability profits):

    σ = sqrt( (1/m) Σ_{j=0}^{m-1} ( P_j / C_j − μ )^2 ),  where  μ = (1/m) Σ_{j=0}^{m-1} P_j / C_j.
Here, C_j and P_j denote the capability and the profit of resource provider R_j, respectively. The scheduling problem for a computational grid can now be stated as follows: find an autonomous scheduling scheme M that schedules the set of jobs J = {J0, ..., Jn-1} onto the set of providers R = {R0, ..., Rm-1} so as to maximize the successful-execution rate and minimize the fairness deviation.

3. RELATED WORK
Much attention has been devoted to scheduling in distributed computing [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18]. However, to the best of our knowledge, no prior work investigates effective scheduling that optimizes incentives for both consumers and providers by utilizing market information. Many previous research projects (for example, [2], [3], [4], [5], and [6]) focused on optimizing traditional performance metrics, such as system utilization, system load balance, and application response time, in controlled grids. They did not consider market-like grids, where providing sufficient incentives for participants is a key issue. Many projects [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18] have investigated the effectiveness of introducing economic models and theories into distributed resource scheduling. The studies in [12] and [17] examine incentives for
participants to behave honestly. Our work borrows from these studies, for example, the bidding model and the price mechanism. However, these studies consider only consumer objectives (for example, shorter response time and lower payment) or only provider objectives (for example, greater profit and higher utilization), whereas we focus on optimizing incentives for both consumers and providers. The willingness of both parties to stay and play is critical for building a sustainable market.
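To make the two objectives of Section 2 concrete, the following minimal Python sketch computes both. It assumes the fairness deviation is the population standard deviation of the per-capability profits P_j/C_j, an assumption consistent with the definitions of C_j and P_j in the text rather than the paper's exact formula:

```python
from statistics import pstdev

def success_rate(deadlines, completions):
    """Fraction of jobs finishing no later than their deadlines."""
    ok = sum(1 for d, c in zip(deadlines, completions) if c <= d)
    return ok / len(deadlines)

def fairness_deviation(capabilities, profits):
    """Population std. dev. of per-capability profit P_j / C_j across providers."""
    ratios = [p / c for c, p in zip(capabilities, profits)]
    return pstdev(ratios)

# Two of three jobs meet their deadlines -> rate = 2/3
print(success_rate([10, 20, 30], [8, 25, 30]))
# Profits exactly proportional to capability -> deviation 0
print(fairness_deviation([5, 10], [50, 100]))  # 0.0
```

A deviation of zero means every provider earns profit exactly in proportion to its capability, which is the fairness ideal the scheduler aims for.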
4. DUAL-OBJECTIVE SCHEDULING SCHEME

Fig. 2. The steps that a single job goes through in the dual-objective scheduling scheme.

Step 1. A consumer submits a job announcement to the computational grid, and the job announcement is broadcast to all the providers.
Step 2. Each provider, upon receiving a job announcement, estimates whether it is able to meet the deadline of the job. If so, the provider sends a bid that contains the price for the job directly back to the consumer; otherwise, the provider ignores the job announcement.
Step 3. After waiting for a certain time, the consumer processes all the bids received, chooses the provider that charges the least, and sends the job to the selected provider.
Step 4. The provider that receives the job inserts it into its job queue. When the job is finished, the provider sends the result to the consumer.

We design four dual-objective scheduling algorithms to model the behavior of providers. The job competing algorithm describes how a provider bids when receiving a job announcement in Step 2 of job scheduling. The Griddy heuristic local scheduling algorithm arranges the execution order of jobs in the job queue of a provider; it starts when a provider receives a job offer in Step 3 of job scheduling. The price-updating algorithm helps a provider dynamically adjust its unit price over the period of its participation in the computational grid, and the CD-adjusting algorithm likewise helps a provider dynamically adjust its CD.

4.2.1 Job competing algorithm

Fig. 3. Job queue of a provider.

Step 1. The provider estimates whether it is able to meet the job deadline. As Fig. 3 shows, there are q jobs in the job queue. Call the potential new job s; place_0, place_1, ..., place_q represent the q + 1 possible places into which s may be inserted. T_A, the available time, is the time instant at which all the jobs in the job queue are completed. T_L, calculated by subtracting the execution time of s from the deadline of s, is the latest time at which the execution of s can begin if the provider does not want s to miss its deadline. The estimation is described by the following pseudocode:

// T_L is the deadline minus the execution time; compare it to T_A
if (T_L > available_time) {
    can_meet = true;         // the provider can meet the job deadline
    reordered = false;       // no need to rearrange the job queue
    insert_place = place_q;  // append the job at the tail of the queue
} else {
    // T_L is covered by the execution of some job J_i in the queue
    if (inserting s at place_{i-1} makes none of J_i, ..., J_q miss its deadline) {
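The four steps above can be sketched in Python. The dictionary fields (capability, unit_price, backlog) and the simplified feasibility test (no queue reordering; the current backlog runs first) are illustrative assumptions, not the paper's full job competing algorithm:

```python
def provider_bid(provider, job_length, deadline, now=0):
    """Step 2: bid only if the job can finish before its deadline."""
    finish = now + provider["backlog"] + job_length / provider["capability"]
    if finish <= deadline:
        return provider["unit_price"] * job_length
    return None  # ignore the announcement

def schedule_job(providers, job_length, deadline):
    """Steps 1-3: broadcast, collect bids, choose the cheapest bidder."""
    bids = {name: bid for name, p in providers.items()
            if (bid := provider_bid(p, job_length, deadline)) is not None}
    if not bids:
        return None  # the job fails: nobody can meet the deadline
    return min(bids, key=bids.get)

providers = {
    "R0": {"capability": 5, "unit_price": 2, "backlog": 0},
    "R1": {"capability": 10, "unit_price": 3, "backlog": 1},
}
print(schedule_job(providers, job_length=20, deadline=6))  # R0: cheaper bid (40 vs 60)
```

With a tighter deadline of 3, R0 can no longer finish in time and the faster but costlier R1 wins instead, which is exactly the price/feasibility trade-off the protocol is designed to expose.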
        can_meet = true;
        reordered = true;
        insert_place = place_{i-1};
    } else {
        can_meet = false;
    }
}
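The estimation pseudocode can be rendered as runnable Python. Here jobs are (execution_time, deadline) pairs with execution times already scaled by the provider's capability, and the scan over earlier places is an assumption about how the pseudocode resolves the case of T_L falling inside the queue:

```python
def try_insert(queue, new_exec, new_deadline, now=0):
    """Find a queue position for the new job such that it meets its deadline
    and no queued job newly misses its own.
    queue: list of (exec_time, deadline) on this resource, in run order.
    Returns (can_meet, reordered, insert_place) like the pseudocode."""
    available = now + sum(e for e, _ in queue)   # T_A: when the queue drains
    latest_start = new_deadline - new_exec       # T_L
    if latest_start >= available:
        return True, False, len(queue)           # append at the tail
    # T_L falls inside the queue: try earlier places, earliest first
    for place in range(len(queue)):
        start = now + sum(e for e, _ in queue[:place])
        if start + new_exec > new_deadline:
            continue                             # the new job itself would be late
        finish = start + new_exec
        ok = True
        for e, d in queue[place:]:               # jobs pushed back by new_exec
            finish += e
            if finish > d:
                ok = False
                break
        if ok:
            return True, True, place
    return False, False, None

print(try_insert([(4, 5)], 2, 10))         # (True, False, 1): append at the tail
print(try_insert([(4, 20)], 2, 3))         # (True, True, 0): jump the queue
print(try_insert([(4, 5), (4, 20)], 2, 4)) # (False, False, None): no feasible place
```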
The estimation algorithm is based on the principle that inserting a new job must never cause a new deadline miss. If the variable can_meet is set to true, the algorithm goes on to Step 2.

Step 2. The provider sets a price for the job. The pseudocode is given below:

price = unit_price * job_length;
if (reordered) {
    price = α * price;
}

Here, α is a decimal larger than 1. When the variable reordered is set to true, the price is raised. Generally, jobs are queued in the order of their arrival. To meet a job deadline, some jobs may be inserted into the job queue ahead of earlier jobs, which indicates that the deadlines of these jobs are somewhat tight and that they need to be given higher priority.

Step 3. The provider sends the price as a bid and, with probability CD, inserts the job at the place the variable insert_place indicates. If the provider chooses to insert the job and the job offer does not arrive after a certain time, it deletes the job from its job queue. The duration for keeping an unconfirmed job should be as short as possible, but long enough to guarantee that offered jobs are not deleted.

4.2.2 Griddy heuristic local scheduling algorithm

The heuristic knowledge consists of the metric weights of the computing resources and the metric workload impact factors. Assume that ResList is the list of filtered available resources; PROCESSCOUNT is the process count of the job, which is defined by the user; and SelectedList is the list of selected resources, i.e., the result. The data type of ResList is a list of resource classes. There are two important properties in the resource class: processcount and cpucount. The processcount is the process count of the job allocated to the resource, and its initial value is zero. Therefore, if the processcount of a resource is greater than zero at the end of the GHSA, the resource is selected. The cpucount is the CPU count of the resource. If the PROCESSCOUNT value is more than 1, the job is parallel.
ResList and PROCESSCOUNT are the inputs, while SelectedList is the output of the algorithm. The GHSA is described by the following pseudocode:
SelectedList = NULL;
for (i = 0; i < PROCESSCOUNT; ++i) {
    for each resource Res in ResList {
        if (Res.processcount < Res.cpucount)
            calculate Rank(Res);
    }
    select the resource with the max Rank value and add 1 to its processcount;
}
for each resource Res in ResList {
    if (Res.processcount > 0)
        SelectedList = SelectedList + Res;
}
return SelectedList;
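The same greedy loop in runnable Python; the rank function here, which simply favors the resource with the most free CPUs, is a stand-in assumption for the paper's weighted metric Rank(Res):

```python
def ghsa(res_list, process_count, rank):
    """Greedy Griddy heuristic: place each of process_count processes on the
    resource with the highest rank that still has a free CPU."""
    for res in res_list:
        res["processcount"] = 0          # initial allocation is zero
    for _ in range(process_count):
        candidates = [r for r in res_list if r["processcount"] < r["cpucount"]]
        if not candidates:
            break                        # every CPU already holds a process
        max(candidates, key=rank)["processcount"] += 1
    # a resource is selected iff it received at least one process
    return [r for r in res_list if r["processcount"] > 0]

# Illustrative rank: prefer the resource with the most free CPUs
rank = lambda r: r["cpucount"] - r["processcount"]
res_list = [{"name": "A", "cpucount": 2}, {"name": "B", "cpucount": 1}]
selected = ghsa(res_list, 3, rank)
print([(r["name"], r["processcount"]) for r in selected])  # [('A', 2), ('B', 1)]
```

Recomputing the rank inside the loop matters: a resource's attractiveness drops as processes are assigned to it, so a parallel job naturally spreads across resources instead of piling onto one.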
4.2.3 Price-updating algorithm

In the price-updating rule, β is a decimal above 1 and δ is a positive decimal under 1. Offered job length is the aggregated length
of jobs offered to the provider. Total job length is the aggregated length of jobs whose announcements are received by the provider. Total capability refers to the aggregated capability of all the providers. Offered job length and total job length are reset when total capability is updated. Our price-adjusting mechanism is simple and intuitive: it makes prices differ so as to differentiate the chances of providers being chosen, and thereby eventually realizes a fair allocation of benefits. Providers can choose not to run the price-updating algorithm every time a job is offered, but only once every several jobs. In that case, however, the providers are slow to react to the market, and fairness is degraded as a consequence.
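The exact update formula did not survive extraction, so the sketch below is one plausible reconstruction consistent with the surrounding description: a provider winning more than its capability-proportional share of announced work raises its price by β > 1, and otherwise cuts it by 0 < δ < 1:

```python
def update_price(price, offered_len, total_len, capability, total_capability,
                 beta=1.1, delta=0.9):
    """Hypothetical price-updating rule: compare the provider's share of the
    offered work with its share of the total capability."""
    if total_len == 0:
        return price                               # nothing announced yet
    work_share = offered_len / total_len           # fraction of announced work won
    fair_share = capability / total_capability     # capability-proportional share
    return price * (beta if work_share > fair_share else delta)

# Winning 60% of the announced work with only 20% of the capability -> raise the price
print(round(update_price(2.0, 60, 100, 10, 50), 2))  # 2.2
# Winning only 10% -> cut the price to attract more offers
print(round(update_price(2.0, 10, 100, 10, 50), 2))  # 1.8
```

Over repeated rounds, over-earning providers price themselves out of some bids and under-earning ones win more, which is the self-correcting pressure toward fair profit allocation the text describes.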
4.2.4 CD-adjusting algorithm

Like human beings, providers exhibit diverse behavior, so providers with various CDs coexist in a computational grid. The more conservative ones are relatively less competitive than the more aggressive ones: they always keep unconfirmed jobs in their job queues and tend to lose potential jobs because they are unable to bid. Most likely, these jobs are offered to the more aggressive providers instead. As a result, fairness among all the providers is hard to achieve. Moreover, the jobs that could have been done by the conservative providers may bring the aggressive ones not only profit but also penalty, resulting from deadline misses. A wise provider, whether conservative or aggressive, should not rigidly hold its attitude toward competition when this happens; it will adjust its CD according to the situation it perceives. Thus, we design the CD-adjusting algorithm. The following pseudocode describes the algorithm, whose time complexity is O(1):

// Every time the penalty increases
if (R_P > TH_P and CD > ε) then
    CD ← CD − ε;
endif

// At a certain interval, such as one day
if (R_P < TH_P and R_J > TH_J and CD < 1 − ε) then
    CD ← CD + ε;
endif

Here, R_P is the ratio of penalty to profit, and R_J is the ratio of jobs that the provider does not bid for. TH_P and TH_J are their respective thresholds. If a ratio exceeds its threshold, CD is adjusted accordingly in steps of ε. As can be seen, the check of R_P is both timelier and of higher priority, because the ratio of penalty to profit is a more conspicuous index to providers. Thus, R_P is checked every time the penalty increases, to head off any further increase in penalty in time, whereas R_J can be checked regularly at a somewhat longer interval, such as one day.

5. EXPERIMENTAL RESULTS

We use the Grid Simulator, developed in the Java programming language, to simulate the computational grid. Consumers and providers are modeled as two kinds of entities in the simulation system, and the communications between them are performed by event delivery. The simulation time advances mainly through two drivers: the network delay of communications and job execution. We ignore network delay in the simulator and focus on implementing the algorithms and evaluating them.

Job lengths average 100, and the capabilities of providers average 10. α is assigned 1.05, β 1.1, and δ 0.9. The market price is 1. The system load of the simulations varies from 0.1 to 0.7 in steps of 0.1. Every simulation runs for as long as 100 days of simulation time, producing results for three different CD configurations: 0, 0.5, and 1. The metrics are

    failure_rate = n_j_fail / n_j_submitted,
    deadline_missing_rate = n_j_miss / n_j_finished,

where n_j_fail is the number of jobs that fail to be executed, n_j_submitted is the number of jobs submitted, n_j_miss is the number of finished jobs that miss their deadlines, and n_j_finished is the number of jobs finished.
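The CD-adjusting rule of Section 4.2.4 can be sketched as two small update functions; the step ε = 0.05 and the threshold values in the example are illustrative assumptions:

```python
def adjust_cd_on_penalty(cd, r_p, th_p, eps=0.05):
    """Run every time a penalty is incurred: a high penalty-to-profit ratio
    R_P means the provider is over-competing, so lower its CD."""
    if r_p > th_p and cd > eps:
        cd -= eps
    return cd

def adjust_cd_periodic(cd, r_p, r_j, th_p, th_j, eps=0.05):
    """Run at a longer interval (e.g. daily): little penalty but a high ratio
    R_J of jobs never bid for means the provider is under-competing."""
    if r_p < th_p and r_j > th_j and cd < 1 - eps:
        cd += eps
    return cd

print(round(adjust_cd_on_penalty(0.5, r_p=0.3, th_p=0.2), 2))                   # 0.45
print(round(adjust_cd_periodic(0.5, r_p=0.1, r_j=0.4, th_p=0.2, th_j=0.3), 2))  # 0.55
```

The guards cd > ε and cd < 1 − ε keep CD inside (0, 1), so a provider never becomes permanently unable to compete or unconditionally aggressive.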
Because the deadline missing rate is much smaller than the failure rate in most cases, it has less impact on the successful-execution rate. As Fig. 4 shows, a conservative attitude toward competing for jobs is not a desirable one from the standpoint of the successful-execution rate.

6. CONCLUSIONS

We formulate job scheduling in a computational grid as a dual-objective optimization problem that optimizes incentives for both consumers and providers, and we develop a dual-objective scheduling scheme using a P2P decentralized scheduling framework. Each consumer or provider autonomously makes scheduling decisions. All scheduling algorithms are local to a resource provider, and three market instruments, namely, job announcement, price, and CD, are employed, the former two circulating in the grid. Although each participant makes only local, autonomous decisions, desirable properties emerge in the grid system as a whole, including a high successful-job-execution rate, a fair allocation of profits, and a balanced utilization of resources. Our scheme achieves the dual objectives better than other methods.

REFERENCES

1. L. Xiao, Y. Zhu, L.M. Ni, and Z. Xu, "Incentive-Based Scheduling for Market-Like Computational Grids," IEEE Trans. Parallel and Distributed Systems, vol. 19, no. 7, July 2008.
2. R. Buyya, D. Abramson, and S. Venugopal, "The Grid Economy," Proc. IEEE, vol. 93, no. 3, pp. 698-714, 2005.
3. R. Buyya, D. Abramson, and J. Giddy, "Nimrod/G: An Architecture of a Resource Management and Scheduling System in a Global Computational Grid," Proc. Fourth Int'l Conf. High-Performance Computing in the Asia-Pacific Region (HPC Asia), 2000.
4. O. Regev and N. Nisan, "The POPCORN Market: An Online Market for Computational Resources," Proc. First Int'l Conf. Information and Computation Economies (ICE '98), pp. 148-157, 1998.
5. R. Wolski, J.S. Plank, T. Bryan, and J. Brevik, "G-Commerce: Market Formulations Controlling Resource Allocation on the Computational Grid," Proc. 15th Int'l Parallel and Distributed Processing Symp. (IPDPS '01), p. 8, 2001.
6. K. Lai, L. Rasmusson, E. Adar, L. Zhang, and B.A. Huberman, "Tycoon: An Implementation of a Distributed, Market-Based Resource Allocation System," Multiagent and Grid Systems, vol. 1, no. 3, pp. 169-182, 2005.
7. L. Xiao, Y. Zhu, L.M. Ni, and Z. Xu, "GridIS: An Incentive-Based Grid Scheduling," Proc. 19th IEEE Int'l Parallel and Distributed Processing Symp. (IPDPS '05), p. 65, 2005.
8. I. Foster and C. Kesselman, "Computational Grids," Mathematics and Computer Science Division, Argonne Nat'l Laboratory, Argonne, IL, and Information Sciences Institute, Univ. of Southern California, Marina Del Rey, CA.
Authors
V. Daya Sagar Ketaraju received the M.Tech. in Computer Science and Engineering from Jawaharlal Nehru Technological University, Kakinada, Andhra Pradesh, India, in 2010. He is presently working as an associate professor in the Dept. of Computer Science and Engineering at Nalanda Institute of Engineering & Technology, Kantipudi, Sattenapalli, Guntur, Andhra Pradesh, India. His current research areas are database management systems, grid computing, and computer networks.

Dr. M.V.L.N. Raja Rao received the Ph.D. in Computer Science from Andhra University, Visakhapatnam, Andhra Pradesh, India, in 2007. He is presently working as Professor & Head of the Dept. of Information Technology, Gudlavalleru Engineering College, Gudlavalleru, Krishna Dist.