
Term paper of

DATA STRUCTURES
CSE:205

TERM PAPER TOPIC:- Simulation of Scheduling Algorithms

SUBMITTED TO:                          SUBMITTED BY:
Mr. Vijay Kumar                        Ashish Butani
                                       Roll No.: B2802B33
                                       Reg. No.: 10811098

ACKNOWLEDGEMENT

First and foremost, I thank my teacher Mr. Vijay Kumar,
who assigned me this term paper to bring out my creative
capabilities.
I express my gratitude to my parents for being a continuous
source of encouragement and for all their financial aid.
I would like to acknowledge the assistance provided to me
by the library staff of LOVELY PROFESSIONAL
UNIVERSITY.
My heartfelt gratitude also goes to my class-mates for
helping me complete my work in time.
Index

1. Introduction
2. Proposal
3. Background
3.1 What is a Process
4. Life cycle of a Process
5. Scheduling Algorithms
6. Simulation
6.1 A simple class diagram
7. Analysis of algorithms
7.1 First Come First Served
7.2 Round Robin
7.3 Shortest Process First
7.4 Highest Response Ratio Next
7.5 Shortest Remaining Time
8. Graphical representation
9. Conclusion
10. Further work
1. Introduction
Scheduling is a fundamental operating-system function. Whenever the CPU becomes idle, the
operating system must select one of the processes in the ready queue to be executed. The
selection process is carried out by the short-term scheduler. The scheduler selects from
among the processes in memory that are ready to execute, and allocates the CPU to one of
them.
All processes in the ready queue are lined up waiting for a chance to run on the CPU.
The records in this queue are generally the PCBs (Process Control Blocks) of the processes.

Another important component involved in the CPU scheduling function is the dispatcher. The
dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. This function involves:
Switching context
Jumping to the proper location in the user program to restart that program.

Our goal is to simulate the process scheduling algorithms to get a more accurate evaluation
of how the choice of a particular scheduling algorithm can affect CPU utilization, and of how a
scheduler decides when processors should be assigned, and to which processes. Different
CPU scheduling algorithms have different properties and may favor one class of processes
over another.

We have programmed a model of the computer system and implemented the scheduling
algorithms using software data structures that represent the major components of the
system, as discussed in section 6.

2. Proposal
When a system has a choice of processes to execute, it must have a strategy - called a
Process Scheduling Policy - for deciding which process to run at a given time. A scheduling
policy should attempt to satisfy certain performance criteria, such as:
Maximizing throughput
Minimizing latency
Preventing indefinite postponement of processes
Maximizing processor utilization
It is the job of the scheduler or dispatcher to assign a processor to the selected process.
In our project we simulate various Process Scheduling Algorithms that determine at runtime
which process runs next. These algorithms decide when and for how long each process runs;
they make choices about:
Preemptibility
Priorities
Running time
Time-to-completion
Fairness
We will be simulating these scheduling algorithms and comparing them against the
parameters mentioned above.
3. Background
3.1. What is a Process?
A process is the locus of control of a procedure in execution, manifested by the
existence of a data structure called the Process Control Block (PCB).

Each process has its own address space, which typically consists of Text region, Data region
and Stack region. The Text region stores the code that the processor executes. The Data
region stores the variables and dynamically allocated memory that the process uses during
execution. The Stack region stores return addresses and local variables for active procedure calls.
The contents of the Stack grow as the process issues nested procedure calls and shrink as
procedures return.

4. Life Cycle of a Process

During its lifetime, a process moves through a series of discrete process states. These
different states are as follows:

Running State: A process is said to be in running state when it is executing on a processor.

Ready State: A process is said to be in ready state if it could execute on a processor if one
were available.

Blocked State: A process is said to be in blocked state if it is waiting for some event to
happen e.g. I/O completion event.

Suspended State: A process is said to be suspended if it is indefinitely removed
from contention for time on a processor without being destroyed.

Suspended Ready: A ready process may be suspended, either by itself or by another process.
A suspended-ready process waits in the suspended-ready queue; when it is resumed, it
re-enters contention for processor time.

Suspended Blocked: A blocked process that is suspended is placed in the suspended-blocked
state. When the event it was waiting for completes, it moves to the suspended-ready state.
Figure: Life cycle of a process - transitions among the Ready, Running and Blocked states
(Dispatch, Timer runout, Block and Wakeup), grouped into Awake and Asleep.
What is a Thread?
A thread is a lightweight process that shares many resources of its heavyweight parent
process, such as the address space, to improve the efficiency with which it performs its
tasks. A thread represents a single stream of instructions, or thread of control. Threads
within a process execute concurrently to attain a common goal.

Need for scheduling


The main objective of multiprogramming is to have some process running at all times, so as
to maximize CPU utilization. In a uniprocessor system only one process runs at a time; the
others wait until the CPU is free.

If the process being executed requires I/O, the processor remains idle for that period, and
all this waiting time is wasted. With multiprogramming, we try to use this time productively.
Several processes are kept in memory. When one process has to wait, the operating system
can take the CPU away from that process and give it to another process. CPU scheduling is the
foundation of multiprogramming. Scheduling helps reduce the waiting time and response time
of the processes, and it also increases the throughput of the system.

What is Processor Scheduling Policy?


When a system has a choice of processes to execute, it must have a strategy for deciding
which process to run at a given time. This strategy is known as the Processor Scheduling
Policy. Different process scheduling algorithms have different properties and may favor one
class of processes over another. In choosing which algorithm to use in a particular situation,
we compare the algorithms on the following characteristics.

CPU Utilization
We want to keep the CPU as busy as possible. It ranges from 0 to 100%. In real systems it
ranges from 40% to 90%. For the purpose of this simulation we have assumed that CPU
utilization is 100%.

Throughput
The work done by the CPU is directly proportional to the CPU utilization. The number of
processes completed per unit time, called throughput, is the measure of work done by the
CPU. Algorithms should try to maximize the throughput.

Turnaround time
The time interval from the submission of a job to its completion is termed the turnaround
time. It includes the waiting time and the service time of the process.

Waiting time
The amount of time process spent waiting in the ready queue is termed as Waiting time. Any
algorithm does not affect the service time of the process but does affect the waiting time of the
process. Waiting time should be kept to the minimum.

Response time
The time interval from the submission of the process to the ready queue until the process
receives the first response is known as Response time. Response time should always be kept
minimum.
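
As a small illustration of how these three measures relate, the sketch below derives them
from the recorded times of one process. The variable names are ours, and the values are
taken from process P1 of the FCFS dataset in section 7.1.

// Illustrative only: derive the three metrics defined above for one process.
public class MetricsSketch {
    public static void main(String[] args) {
        // Values match process P1 of the FCFS run in section 7.1.
        int arrival = 2, service = 4, finish = 10, firstDispatch = 6;

        int turnaround = finish - arrival;        // waiting time + service time
        int waiting    = turnaround - service;    // time spent in the ready queue
        int response   = firstDispatch - arrival; // delay until the first run on the CPU

        System.out.println(turnaround + " " + waiting + " " + response); // prints: 8 4 4
    }
}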
Besides the above features, a scheduling algorithm must also have the following properties:
Fairness
Predictability
Scalability

5. Scheduling Algorithms
First Come First Served
This is the simplest process-scheduling algorithm. The process that requests the CPU
first is allocated the CPU first. The implementation of this algorithm consists of a FIFO queue.
A process enters the ready queue and gradually moves towards the head of the queue; when
it reaches the head, it is allocated the processor as soon as the processor becomes free.
This algorithm generally gives a long average waiting time.
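
A minimal FCFS sketch in Java is given below. The Proc record and other names are our own
illustrations, not the simulator's classes from section 6; processes are simply served in
arrival order from a FIFO queue and run to completion.

import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative FCFS: serve processes strictly in arrival order, non-preemptively.
public class FcfsSketch {
    record Proc(int id, int arrival, int service) {}

    public static void main(String[] args) {
        Queue<Proc> ready = new ArrayDeque<>();        // the FIFO ready queue, in arrival order
        ready.add(new Proc(0, 1, 5));
        ready.add(new Proc(1, 2, 4));
        ready.add(new Proc(2, 3, 6));

        int clock = 0;
        while (!ready.isEmpty()) {
            Proc p = ready.poll();                     // head of the queue runs next
            clock = Math.max(clock, p.arrival());      // CPU idles until the process arrives
            clock += p.service();                      // non-preemptive: run to completion
            System.out.println("P" + p.id() + " finishes at " + clock);
        }
    }
}

Run on the first three processes of the dataset in section 7.1, this prints finish times 6, 10
and 16, matching the FCFS table there.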

Round Robin
Round Robin is best suited for time-sharing systems. It is very similar to FCFS, except that
preemption is added to switch between processes. A time slice called the quantum is
introduced in this algorithm; it is the maximum time for which a process runs on the processor
at a stretch. When its quantum expires the process is preempted, and the next process takes
control of the processor for the following quantum.
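
The core of the quantum mechanism can be sketched as follows. The names are illustrative and,
unlike the full simulation in section 6, arrivals are not modelled: all processes are assumed
to be in the ready queue from the start.

import java.util.ArrayDeque;

// Illustrative round robin: run each process for at most one quantum, then requeue it.
public class RoundRobinSketch {
    static class Proc {
        int id, remaining;
        Proc(int id, int service) { this.id = id; this.remaining = service; }
    }

    public static void main(String[] args) {
        ArrayDeque<Proc> ready = new ArrayDeque<>();
        ready.add(new Proc(0, 5));
        ready.add(new Proc(1, 4));
        ready.add(new Proc(2, 6));
        int quantum = 3, clock = 0;

        while (!ready.isEmpty()) {
            Proc p = ready.pollFirst();
            int slice = Math.min(quantum, p.remaining);  // run for one quantum or until done
            clock += slice;
            p.remaining -= slice;
            if (p.remaining > 0) ready.addLast(p);       // preempted: back of the queue
            else System.out.println("P" + p.id + " finishes at clock " + clock);
        }
    }
}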

Shortest Process First


This algorithm associates with each process the length of its next CPU burst. When
the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two
processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie.
The SPF scheduling algorithm is optimal in that it gives the minimum average waiting time for a
given set of processes. It does so by moving short processes ahead of long processes, which
decreases the waiting time of the short processes more than it increases the waiting time of
the long processes. Consequently, the average waiting time is reduced.
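
A sketch of the non-preemptive selection step is shown below, using an illustrative Proc
record rather than the simulator's classes: among the processes that have arrived by the
current clock, the one with the smallest service time runs next.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative non-preemptive SPF: run the shortest arrived job to completion.
public class SpfSketch {
    record Proc(int id, int arrival, int service) {}

    public static void main(String[] args) {
        List<Proc> pending = new ArrayList<>(List.of(
                new Proc(0, 1, 5), new Proc(1, 2, 4), new Proc(2, 3, 6),
                new Proc(3, 4, 2), new Proc(4, 5, 7)));
        pending.sort(Comparator.comparingInt(Proc::arrival));       // earliest arrival first
        int clock = 0;
        while (!pending.isEmpty()) {
            final int now = clock;
            Proc next = pending.stream()
                    .filter(p -> p.arrival() <= now)                // only processes that have arrived
                    .min(Comparator.comparingInt(Proc::service))    // smallest next CPU burst wins
                    .orElse(pending.get(0));                        // CPU idle: wait for earliest arrival
            clock = Math.max(clock, next.arrival()) + next.service();
            pending.remove(next);
            System.out.println("P" + next.id() + " finishes at " + clock);
        }
    }
}

With the dataset of section 7.3 this prints finish times 6, 8, 12, 18 and 25, matching the SPF
table there.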

Highest-Response-Ratio-Next
This algorithm corrects some of the weaknesses of SPF. The SPF algorithm is biased
towards processes with short service times. This keeps longer processes waiting in the
ready queue for a longer time, despite arriving in the ready queue before the short jobs. HRRN
is a non-preemptive scheduling algorithm in which the priority is a function of not only the
service time but also the time spent by the process waiting in the ready queue. Once a
process obtains control of the processor, it runs to completion. The priority is
calculated by the formula

Priority = (Waiting Time + Service Time)/Service Time

In this algorithm too, short processes receive preference, but longer processes that have
been waiting in the ready queue are also given favorable treatment.
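
As a small worked sketch of this rule (the loop and names are ours; the waiting and service
times are those of P1-P4 at clock = 6 in the analysis of section 7.4):

// Illustrative HRRN selection: compute (waiting + service) / service for every
// candidate and pick the largest ratio.
public class HrrnSketch {
    public static void main(String[] args) {
        int[] waiting = {4, 3, 2, 1};      // time already spent in the ready queue (P1..P4)
        int[] service = {4, 6, 2, 7};      // service time of each candidate (P1..P4)

        int best = 0;
        double bestRatio = 0;
        for (int i = 0; i < waiting.length; i++) {
            double ratio = (waiting[i] + service[i]) / (double) service[i];
            // Strict '>' keeps the earlier-arrived process on a tie, i.e. FCFS tie-breaking.
            if (ratio > bestRatio) { bestRatio = ratio; best = i; }
        }
        System.out.println("select candidate " + best + " with ratio " + bestRatio); // 0, i.e. P1, ratio 2.0
    }
}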

Shortest Remaining Time


This is a preemptive algorithm which acts on the principle of SPF. It gives preference to
processes with smaller remaining service time. If a process is using the processor and in the
meantime a new process arrives whose service time is less than the remaining time of the
currently running process, the scheduler preempts the currently running process and gives the
processor to the new process. This algorithm is no longer widely used in today's operating systems.
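
An illustrative SRT sketch is given below; it steps the clock one unit at a time, and at every
tick the arrived process with the least remaining time holds the CPU, so a newly arrived
shorter job preempts the current one. The class and names are ours; the dataset is the one
used in section 7.5.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative SRT: one-tick time steps, least remaining time always wins the CPU.
public class SrtSketch {
    static class Proc {
        int id, arrival, remaining;
        Proc(int id, int arrival, int service) {
            this.id = id; this.arrival = arrival; this.remaining = service;
        }
    }

    public static void main(String[] args) {
        List<Proc> procs = new ArrayList<>(List.of(
                new Proc(0, 1, 5), new Proc(1, 2, 4), new Proc(2, 3, 6),
                new Proc(3, 4, 2), new Proc(4, 5, 7)));
        int clock = 0;
        while (!procs.isEmpty()) {
            final int now = clock;
            Proc run = procs.stream()
                    .filter(p -> p.arrival <= now)                           // arrived processes only
                    .min(Comparator.comparingInt((Proc p) -> p.remaining))   // least remaining time wins
                    .orElse(null);                                           // nobody here yet: idle tick
            if (run != null && --run.remaining == 0) {                       // consume one time unit
                System.out.println("P" + run.id + " finishes at " + (clock + 1));
                procs.remove(run);
            }
            clock++;
        }
    }
}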
6. Simulation
We have programmed a model of the computer system using software data structures which
represent the major components of the system discussed above. The ready queue and the
memory are simulated using Vectors in which we store objects of class Process. A Process
object contains information about the process, which is updated as the process runs.
In a real system this entity is called the PCB (Process Control Block).

The ready queue contains the list of processes that are ready to execute. The ready queue
is maintained in a priority order, which depends on the algorithm we are using to calculate the
priority. In our simulation the ready queue has been programmed to serve the processes in
First In First Out, Round Robin, Shortest Process First, Highest Response Ratio Next and
Shortest Remaining Time order.

The simulator has a variable representing a clock; as this variable's value is increased, the
simulator modifies the system state to reflect the activities of the devices, the processes, and
the scheduler. Our system has a function called ProcessReady which checks which
processes are ready to enter the system depending on the current clock. Preemption is also
performed based on the current clock: if, according to the algorithm, the next process in the
ready queue should get the CPU, the current process is pushed back into the queue and the
process chosen by the priority calculation of the ready queue is given the CPU. In real
systems this is called a context switch. We model its overhead with a simple variable which
we add to a process when it is preempted.
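
The loop below is a hedged, self-contained sketch of such a clock-driven simulation, shown
here for a plain FCFS ready queue. Names such as Proc and notArrived are ours, and the
context-switch overhead and preemption logic of the real simulator are omitted for brevity.

import java.util.ArrayList;
import java.util.List;

// Illustrative clock-driven loop: admit arrivals, dispatch, run one tick, repeat.
public class ClockLoopSketch {
    static class Proc {
        int id, arrival, remaining;
        Proc(int id, int arrival, int service) {
            this.id = id; this.arrival = arrival; this.remaining = service;
        }
    }

    public static void main(String[] args) {
        List<Proc> notArrived = new ArrayList<>(List.of(
                new Proc(0, 1, 5), new Proc(1, 2, 4), new Proc(2, 3, 6)));
        List<Proc> readyQueue = new ArrayList<>();
        Proc running = null;
        int clock = 0;

        while (running != null || !notArrived.isEmpty() || !readyQueue.isEmpty()) {
            // The ProcessReady step: admit processes whose arrival time has come.
            for (Proc p : new ArrayList<>(notArrived))
                if (p.arrival <= clock) { notArrived.remove(p); readyQueue.add(p); }

            // Dispatch: if the CPU is idle, the head of the ready queue gets it.
            if (running == null && !readyQueue.isEmpty()) running = readyQueue.remove(0);

            // The running process consumes one tick of its service time.
            if (running != null && --running.remaining == 0) {
                System.out.println("P" + running.id + " finishes at " + (clock + 1));
                running = null;                          // CPU idle; re-dispatch on the next tick
            }
            clock++;                                     // advance the simulated clock
        }
    }
}

For the first three processes of the dataset it prints finish times 6, 10 and 16, consistent
with the FCFS analysis in section 7.1.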

The scheduler is an abstract class in which we have defined the basic components needed by
every scheduler, such as the ready queue. FIFO, RR, SPF, SRT and HRRN are the classes
which extend this Scheduler class and implement the ready queue for their specific policies.
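
A sketch of how that class structure might look in Java is given below. The method names
follow the class diagram in section 6.1, but the bodies are our own placeholders, and the
small Process stub only stands in for the fuller class sketched after that diagram.

import java.util.Vector;

// Minimal stand-in for the Process class sketched after the diagram in section 6.1.
class Process {
    private final int arrival;
    Process(int arrival) { this.arrival = arrival; }
    int getArrival() { return arrival; }
}

// The abstract Scheduler holds the shared queues; each policy supplies its own selection rule.
abstract class Scheduler {
    protected Vector<Process> readyQ  = new Vector<>();   // processes waiting for the CPU
    protected Vector<Process> finishQ = new Vector<>();   // processes that have completed

    abstract Process selectNext(int clock);               // policy-specific choice of next process

    void processReady(Vector<Process> arrivals, int clock) {
        for (Process p : arrivals)                         // admit processes whose arrival time has come
            if (p.getArrival() <= clock && !readyQ.contains(p)) readyQ.add(p);
    }
}

// FIFO simply takes the head of the ready queue, i.e. arrival order.
class FIFO extends Scheduler {
    Process selectNext(int clock) {
        return readyQ.isEmpty() ? null : readyQ.remove(0);
    }
}

RR, SPF, HRRN and SRT would extend Scheduler in the same way, differing only in how
selectNext orders or scans the ready queue.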

As we run the simulations, the statistics that indicate each algorithm's performance are
gathered and printed. The analysis is shown in section 7.

The data that we use to drive the simulation is generated using a random-number
generator. The generator is programmed to generate the processes along with their CPU-burst
(service) times and arrival times; finish times are produced by the simulation itself.
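
A sketch of such a generator is shown below; the ranges and the fixed seed are assumptions
for illustration, the fixed seed being one simple way to feed the same set of processes to
every algorithm, as noted later in this section.

import java.util.Random;

// Illustrative workload generator: random arrival and service times per process.
public class WorkloadSketch {
    public static void main(String[] args) {
        Random rng = new Random(42);                      // fixed seed: same set for every algorithm
        for (int id = 0; id < 5; id++) {
            int arrival = 1 + rng.nextInt(10);            // arrival time in [1, 10]
            int service = 1 + rng.nextInt(8);             // CPU-burst (service) time in [1, 8]
            System.out.println("P" + id + " arrival=" + arrival + " service=" + service);
        }
    }
}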

The process PCB in our simulation consists of the following attributes:


Process Id
Process ServiceTime
Process ArrivalTime
Process FinishTime
Process ResponseTime

The same set of processes is fed into each scheduling algorithm to evaluate the algorithm's
effect on the processes and the CPU. These attributes are initialized for all the processes that
we randomly generate. Once a process gets the CPU, its remaining service time is updated; if
the simulation performs a context switch, the currently running process is preempted and put
at the back of the ready queue, i.e. its PCB is saved. After this the first process in the ready
queue is given the CPU. In the end the system outputs the Arrival Time, Service Time,
Turnaround Time, Waiting Time and Response Time for each process executed by the
system. The output format, the input and the analysis using this simulation model are shown
in the sections that follow.
6.1. A simple Class Diagram

Scheduler                         Process
---------                         -------
ReadyQ  : Vector                  Id           : Integer
FinishQ : Vector                  ServiceTime  : Integer
ProcessReady()                    ArrivalTime  : Integer
Report()                          FinishTime   : Integer
                                  ResponseTime : Integer
                                  getId()
                                  getArrival()
                                  getServiceTime()
                                  getTimeLeft()
                                  setFinishTime()
                                  getFinishTime()
                                  setResponseTime()
                                  getResponseTime()
                                  servicing()

FIFO, RR, SPF, HRRN and SRT extend Scheduler.
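
A minimal Java rendering of the Process side of this diagram might look as follows; it is a
sketch in which the field types and method bodies are our assumptions based on the attribute
and method names shown above.

// Illustrative Process (PCB) class matching the names in the diagram above.
class Process {
    private final int id, arrivalTime;
    private int serviceTime, timeLeft, finishTime = -1, responseTime = -1;

    Process(int id, int arrivalTime, int serviceTime) {
        this.id = id; this.arrivalTime = arrivalTime;
        this.serviceTime = serviceTime; this.timeLeft = serviceTime;
    }
    int  getId()                { return id; }
    int  getArrival()           { return arrivalTime; }
    int  getServiceTime()       { return serviceTime; }
    int  getTimeLeft()          { return timeLeft; }
    void setFinishTime(int t)   { finishTime = t; }
    int  getFinishTime()        { return finishTime; }
    void setResponseTime(int t) { if (responseTime < 0) responseTime = t; } // only the first dispatch counts
    int  getResponseTime()      { return responseTime; }
    void servicing()            { timeLeft--; }          // consume one unit of CPU time
}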


7. Analysis
7.1. First Come First Served
Dataset for simulation

Process  Arrival  Service  Finish  Turnaround  Waiting  Response
Name     Time     Time     Time    Time        Time     Time
0        1        5        6       5           0        0
1        2        4        10      8           4        4
2        3        6        16      13          7        7
3        4        2        18      14          12       12
4        5        7        25      20          13       13

Schedule: 0-1 idle | 1-6 P0 | 6-10 P1 | 10-16 P2 | 16-18 P3 | 18-25 P4

In the First In First Out scheduling algorithm, process P0 arrives in the ready queue first.
Since FCFS allocates the processor to the processes on the basis of their arrival into the
ready queue, P0 is allocated the processor first. P0 executes to completion and is then added
to the finish queue. The processor is next allocated to P1, which is next in the ready queue.
After P1 completes its execution, it is added to the finish queue and the processor is
allocated to the next process in the ready queue. Thus, after P1, the processes P2, P3 and P4
are allocated the processor in turn. Hence the processor is allocated to the processes in the
order in which they arrive in the ready queue.

Limitations:
In FCFS, the average waiting time is quite long. Suppose we have a processor-bound job
(generally with a longer service time) and several I/O-bound jobs. If the processor-bound job
is allocated the processor, it will hold the CPU for a long time. As a result, the I/O-bound
jobs keep waiting in the ready queue and the I/O devices remain idle.

In our test case we observed that process P3, despite having a very short service time, had
to wait a long time until all the processes ahead of it ran to completion.

Average Turnaround Time: 12
Average Waiting Time: 7.2
Average Response Time: 7.2
7.2. Round Robin
Dataset for simulation

Process  Arrival  Service  Finish  Turnaround  Waiting  Response
Name     Time     Time     Time    Time        Time     Time
3        4        2        12      8           6        6
0        1        5        14      13          8        0
1        2        4        18      16          12       2
2        3        6        21      18          12       4
4        5        7        25      20          13       9

Schedule (quantum = 3): 0-1 idle | 1-4 P0 | 4-7 P1 | 7-10 P2 | 10-12 P3 | 12-14 P0 |
14-17 P4 | 17-18 P1 | 18-21 P2 | 21-25 P4

Round Robin is basically FCFS with preemption. In this run, P0 enters the ready queue when
no other process is there to compete for the processor, so P0 is given the processor and
starts its execution. The quantum has been set to 3 time units. When P0 has run on the
processor for one quantum, it relinquishes the processor; at this point the value of the clock
is 4, and by this time P1, P2 and P3 have entered the ready queue, so P0 re-enters the ready
queue after P3. Since P1 came into the ready queue right after P0, the processor is allocated
to P1, which runs for one quantum. By the time P1 completes its quantum, P4 has entered the
ready queue behind P0. When its quantum expires, P1 relinquishes the processor and enters
the ready queue after P4. The processor is then allocated in turn to P2, P3, P0 and P4 as
shown in the schedule above: a process that still has service time left re-enters the ready
queue at the back when its quantum expires, while a process that completes its execution is
removed from the ready queue and enters the finish queue.

Advantages:
The Round Robin algorithm exhibits fairness: all the processes are treated equally and are
given equal shares of processor time.
Compared with FCFS, the average response time is considerably reduced in Round Robin.
For example, process P3, which waited 12 time units in FCFS, had to wait only 6 time units
to gain access to the processor and then ran to completion within its quantum. This reduces
the number of processes waiting in the ready queue.

Limitations:
The performance of a system implementing Round Robin depends mainly on the value of the
quantum. If we set the quantum to a very high value, the algorithm behaves like FCFS and
system responsiveness becomes sluggish. If we keep the quantum value low, more overhead is
incurred because of frequent context switches. Round Robin with a low quantum is generally
suitable for interactive systems. However, determining the optimal quantum is a tedious task.
Average Turnaround Time: 15
Average Waiting Time: 10.2
Average Response Time: 4.2

7.3. Shortest Process First


Dataset for simulation

Process  Arrival  Service  Finish  Turnaround  Waiting  Response
Name     Time     Time     Time    Time        Time     Time
0        1        5        6       5           0        0
3        4        2        8       4           2        2
1        2        4        12      10          6        6
2        3        6        18      15          9        9
4        5        7        25      20          13       13

Schedule: 0-1 idle | 1-6 P0 | 6-8 P3 | 8-12 P1 | 12-18 P2 | 18-25 P4

Process P0 arrives in the ready queue when no other process is in the queue to compete for
the processor, so P0 is given the processor immediately. Since this is a non-preemptive
algorithm, P0 executes to completion and is added to the finish queue. By the time P0
completes, the other processes, namely P1, P2, P3 and P4, have entered the ready queue.
When P0 completes its execution, the scheduler searches for the process with the minimum
service time. Of the four processes in the ready queue, P3 has the minimum service time (2),
so P3 is allocated the processor. When P3 executes to completion, the process with the next
smallest service time, in this case P1, is allocated the processor. This continues until all
the processes have finished. This clearly demonstrates that SPF gives preference to processes
with short service times. As a result, processes with longer service times have to wait longer
for execution, despite entering the queue before the shorter processes. This can cause
indefinite postponement of processes with long service times. To prevent this, there is
another scheduling algorithm in this series which gives favorable treatment to processes
that have been waiting longer in the ready queue: HRRN, discussed next.

Advantages:
Shorter processes are given preference. If the ready queue contains processor-bound
processes and some I/O-bound processes, the I/O-bound processes will be given preference;
as a result, system throughput increases.
The average waiting time of the processes decreases. In our test case, process P3 waited
only 2 time units, compared to 6 time units in RR and 12 time units in FCFS.

Limitations:
The algorithm is biased towards processes with shorter service times. As a result, processes
with longer service times are often kept waiting in the ready queue for a long time, which
may cause indefinite postponement.
Since it is a non-preemptive algorithm, it does not exhibit fairness.
Average Turnaround Time: 10.8
Average Waiting Time: 6
Average Response Time: 6

7.4. Highest Response Ratio Next


Dataset for simulation

Process  Arrival  Service  Finish  Turnaround  Waiting  Response
Name     Time     Time     Time    Time        Time     Time
0        1        5        6       5           0        0
1        2        4        10      8           4        4
3        4        2        12      8           6        6
2        3        6        18      15          9        9
4        5        7        25      20          13       13

Schedule: 0-1 idle | 1-6 P0 | 6-10 P1 | 10-12 P3 | 12-18 P2 | 18-25 P4

In the Highest Response Ratio Next (HRRN) algorithm, P0 enters the ready queue when no
other process is there to compete for the processor, so P0 is given the processor and starts
its execution. Up to this point the algorithm works like FCFS. However, by the time P0
completes its execution, the other processes P1, P2, P3 and P4 have arrived in the ready
queue. The HRRN algorithm now determines the priority of the processes based on their
service times and waiting times, and the process with the highest priority is given the
processor. When P0 relinquishes the processor, the value of the clock is 6. The priority
calculation proceeds in the following steps:

At clock = 6,

              P1    P2    P3    P4
Arrival time  2     3     4     5
Waiting time  4     3     2     1
Service time  4     6     2     7
Priority      2.0   1.5   2.0   1.1

At this point P1 and P3 have the highest (and equal) priority, while P2 and P4 have lower
priorities, so P2 and P4 are out of the competition for the processor this time. Since P1 and
P3 have equal priority, the algorithm falls back to the FCFS rule, so P1, which arrived first,
is given the processor. When P1 relinquishes the processor, the value of the clock is 10.
The calculations at this time are as follows:
              P2    P3    P4
Arrival time  3     4     5
Waiting time  7     6     5
Service time  6     2     7
Priority      2.2   4.0   1.7

Based upon the priorities calculated at this time, P3 is allocated the processor. When P3
relinquishes the processor, the value of the clock is 12, so the new calculations are:

              P2    P4
Arrival time  3     5
Waiting time  9     7
Service time  6     7
Priority      2.5   2.0

Hence, this time P2 is allocated the processor. When P2 relinquishes the processor, P4 is
finally allocated the processor.

Advantages:
HRRN overcomes the limitation of SPF by giving favorable treatment to longer processes. In
our test case, process P1 came into the queue before P3. Because SPF gives preference to the
shorter process, P3 was allocated the processor ahead of P1 there. In HRRN, however, P1 had
to wait only 4 time units to get access to the processor.
HRRN also prevents indefinite postponement.

Average Turnaround Time: 11.2
Average Waiting Time: 6.4
Average Response Time: 6.4

7.5. Shortest Remaining Time


Dataset for analysis

Process  Arrival  Service  Finish  Turnaround  Waiting  Response
Name     Time     Time     Time    Time        Time     Time
0        1        5        6       5           0        0
3        4        2        8       4           2        2
1        2        4        12      10          6        6
2        3        6        18      15          9        9
4        5        7        25      20          13       13

Schedule: 0-1 idle | 1-6 P0 | 6-8 P3 | 8-12 P1 | 12-18 P2 | 18-25 P4

In Shortest Remaining Time, when P0 enters the ready queue it is the only process in the
running for processor time, so P0 is allocated the processor. At clock = 2, process P1 enters
the ready queue with service time 4. At this moment the remaining service time of P0 is also
4, equal to that of P1, so FCFS is followed: since P0 arrived in the ready queue before P1,
P0 retains the processor. At clock = 3, process P2 enters the ready queue with service time
6. By this time the remaining service time of P0 is 3, the minimum of the three, so P0 retains
the processor. At clock = 4, process P3 enters the ready queue with service time 2. At this
moment the remaining service time of P0 is also 2; since both processes have the same
remaining time, the FCFS rule is again applied and P0 retains the processor. At clock = 5,
P4 enters the ready queue with service time 7. P0 continues to hold the processor, executes
to completion and is added to the finish queue. After P0, the processor is allotted to the
process with the shortest remaining service time, i.e. P3, which executes to completion. This
is followed by P1, P2 and P4 respectively. Upon completion these processes are added to the
finish queue.

Advantages:
It offers a low waiting time for short processes. Process P3, for example, waited only 2 time
units before getting the processor. For this dataset the waiting times are the same as in SPF,
because no arriving process ever has a shorter remaining time than the running one; in
general, however, being preemptive, SRT can achieve an average waiting time no worse than
SPF (ignoring context-switch overhead).

Average Turnaround Time: 10.8
Average Waiting Time: 6
Average Response Time: 6
8. Graphical Representation
The graphs below compare the turnaround time, waiting time and response time of processes
P0-P4 under the FCFS, RR, SPF, HRRN and SRT algorithms.

Figure: Turnaround Time Comparison - turnaround time (time units) of P0-P4 under each algorithm.

Figure: Waiting Time Comparison - waiting time (time units) of P0-P4 under each algorithm.

Figure: Response Time Comparison - response time (time units) of P0-P4 under each algorithm.
9. Conclusion
From the analysis of the algorithms, we conclude that RR has the best average response time
and, being a preemptive algorithm, it exhibits fairness. However, the performance of the RR
algorithm depends heavily on the size of the quantum. At one extreme, if the time quantum is
very large, the RR algorithm behaves the same as the FCFS policy. If the time quantum is
fairly small, RR exhibits fairness, but considerable overhead is added to the turnaround time
due to frequent context switches. This is reflected in RR having the highest average
turnaround time of all the algorithms. Hence we observe that if the CPU bursts of the
majority of processes are shorter than the time quantum, RR gives better response time.

Further, SPF has the least average turnaround time and average waiting time of the
algorithms compared (SRT ties with it on this dataset, since it produces the same schedule).
This reflects that SPF is provably optimal, in that it gives the minimum average waiting time
for a set of processes by moving a short process ahead of a long one: the waiting time of the
short process decreases more than the waiting time of the long process increases, so the
average waiting time decreases. However, this algorithm requires an estimate of each
process's service time in advance, is biased towards short processes and is unfavorable to
longer ones, which may lead to indefinite postponement of longer processes.

HRRN has approximately the same average turnaround, waiting and response times as SPF. It
overcomes the limitation of SPF by giving favorable treatment to processes that have been
waiting for a longer time, and thereby prevents indefinite postponement.

SRT exhibits approximately the same average response time, waiting time and turnaround time
as SPF, and may seem to be an effective algorithm for interactive processes if the tasks
performed before issuing I/O are short in duration. However, SRT determines priority based
on the run time to completion, not the run time to the next I/O. Some interactive processes,
such as a shell, execute for the lifetime of the session, which would place the shell at the
lowest priority level.

10. Further work


We have successfully simulated the scheduling algorithms on which a scheduler bases its
decision about which process to allocate to the processor, and we have gathered data for
analysis. We still have to take into consideration the overhead which real-world operating
systems incur due to context switches when using the preemptive algorithms.
