
Operating Systems

10EC65

Unit 7

Scheduling

Reference Book:
Operating Systems - A Concept-Based Approach
D. M. Dhamdhere, TMH, 3rd Edition, 2010
Shrishail Bhat, AITM Bhatkal
Introduction

Scheduling is the activity of selecting the next request to be serviced by a server
In an OS, a request is the execution of a job or a process, and the server is the CPU
A user submits a request and waits for its completion
Four events related to scheduling are
Arrival
Scheduling
Preemption
Completion



Scheduling



Scheduling Terminology and Concepts



Scheduling Terminology and Concepts (continued)



Scheduling Terminology and Concepts (continued)



Fundamental Techniques of Scheduling

Schedulers use three fundamental techniques:


Priority-based scheduling
Provides high throughput of the system
Reordering of requests
Implicit in preemption
Enhances user service and/or throughput
Variation of time slice
Smaller values of time slice provide better response times, but lower CPU
efficiency
Use larger time slice for CPU-bound processes



The Role of Priority

Priority: tie-breaking rule employed by scheduler when many requests await attention of server
May be static or dynamic
Some process reorderings could be obtained through priorities
E.g., Short processes serviced before long ones
Some reorderings would need complex priority functions
What if processes have the same priority?
Use round-robin scheduling
May lead to starvation of low-priority requests
Solution: aging of requests
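To make the priority-plus-aging idea concrete, here is a minimal Python sketch (not from the textbook; the AGING_STEP value and the request fields are assumptions for illustration). The effective priority of a waiting request grows with its waiting time, so a low-priority request is eventually selected ahead of newer high-priority ones.

```python
# Minimal sketch (illustrative): priority selection with aging.
# Each waiting request gains AGING_STEP of effective priority per unit of
# waiting time, so low-priority requests cannot starve indefinitely.

AGING_STEP = 0.1  # assumed rate at which waiting requests gain priority

def select_next(requests, now):
    """Pick the request with the highest effective priority.
    Ties are broken by arrival order (FIFO), giving round-robin-like behaviour."""
    def effective_priority(r):
        return r["priority"] + AGING_STEP * (now - r["arrival"])
    # Sorting by arrival first means max() keeps the earliest arrival on ties.
    return max(sorted(requests, key=lambda r: r["arrival"]),
               key=effective_priority)

requests = [
    {"name": "P1", "priority": 1, "arrival": 0},   # low priority, waiting long
    {"name": "P2", "priority": 5, "arrival": 90},  # high priority, just arrived
]
print(select_next(requests, now=100)["name"])      # P1 overtakes P2 through aging
```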
Scheduling Policies

A scheduling policy determines the quality of service provided to the user.
Two categories based on preemption:
Nonpreemptive Scheduling Policies
Preemptive Scheduling Policies



Nonpreemptive Scheduling Policies

A server always services a scheduled request to completion


Preemption never occurs
Scheduling is performed only when processing of the
previously scheduled request gets completed
Attractive because of its simplicity
Some nonpreemptive scheduling policies:
First-come, first-served (FCFS) scheduling
Shortest request next (SRN) scheduling
Highest response ratio next (HRN) scheduling



First-Come, First-Served (FCFS) Scheduling

Requests are scheduled in the order in which they arrive in the system
The list of pending requests is organized as a queue
The scheduler always schedules the first request in the list
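The following is a small illustrative FCFS sketch in Python (the workload at the bottom is hypothetical, not one of the book's examples). It services requests strictly in arrival order and reports each request's turnaround time and weighted turnaround (turnaround / service time).

```python
# FCFS sketch (illustrative): serve requests in arrival order, run to completion.

def fcfs(processes):
    """processes: list of (name, arrival_time, service_time)."""
    time = 0
    results = []
    for name, arrival, service in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)        # CPU may sit idle until the request arrives
        time += service                  # run the request to completion
        turnaround = time - arrival
        results.append((name, turnaround, turnaround / service))
    return results

# Hypothetical workload for illustration
for name, ta, wta in fcfs([("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 2)]):
    print(name, ta, round(wta, 2))       # turnaround and weighted turnaround
```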



FCFS Scheduling: An Example

Short requests may suffer high weighted turnarounds



Shortest Request Next (SRN) Scheduling

It always schedules the request with the smallest service time


A request remains pending until all shorter requests have been
serviced
The SRN policy offers poor service to long processes, because a
steady stream of short processes arriving in the system can
starve a long process
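A rough SRN sketch follows (illustrative only; the three processes are assumptions): among the requests that have already arrived, the scheduler repeatedly picks the one with the smallest service time and runs it to completion.

```python
# SRN sketch (illustrative): nonpreemptive shortest-request-next selection.

def srn(processes):
    """processes: list of (name, arrival_time, service_time)."""
    pending = sorted(processes, key=lambda p: p[1])
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU is idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])   # smallest service time among ready
        pending.remove(job)
        time += job[2]                         # nonpreemptive: run to completion
        order.append((job[0], time - job[1]))  # (name, turnaround time)
    return order

print(srn([("P1", 0, 5), ("P2", 1, 2), ("P3", 2, 1)]))
# [('P1', 5), ('P3', 4), ('P2', 7)]: short requests overtake the pending long one
```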



SRN Scheduling: An Example

May cause starvation of long processes



Highest Response Ratio Next (HRN) Scheduling

It computes the response ratios of all processes in the system and selects the process with the highest response ratio
Response ratio = (time since arrival + service time) / service time

The response ratio of a newly arrived process is 1


The response ratio of a short process increases more rapidly than
that of a long process, so shorter processes are favoured for
scheduling
However, the response ratio of a long process eventually becomes
large enough for the process to get scheduled
This does not cause starvation of long processes
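A small sketch of the HRN selection rule (the ready list below is hypothetical): the response ratio of every ready process is recomputed at each scheduling decision, and the process with the highest ratio is picked.

```python
# HRN sketch (illustrative): the response ratio starts at 1 for a newly arrived
# request and grows faster for shorter requests, yet eventually becomes large
# for a long waiting request as well.

def response_ratio(arrival, service, now):
    return ((now - arrival) + service) / service

def hrn_pick(ready, now):
    """ready: list of (name, arrival_time, service_time) tuples."""
    return max(ready, key=lambda p: response_ratio(p[1], p[2], now))

ready = [("long", 0, 10), ("short", 4, 2)]
print(hrn_pick(ready, now=6))   # ('short', 4, 2): ratio 2.0 beats the long job's 1.6
```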
HRN Scheduling: An Example

Use of response ratio counters starvation



Nonpreemptive Scheduling Policies (continued)

Do it yourself
Calculate the mean turnaround time and mean weighted
turnaround for the following set of processes using
FCFS Scheduling
SRN Scheduling
HRN Scheduling
Process       P1  P2  P3  P4  P5
Arrival time   0   2   3   5   9
Service time   3   3   2   5   3



Preemptive Scheduling Policies

The server can be switched to the processing of a new request before completing the current request
Preempted request is put back into pending list
Its servicing is resumed when it is scheduled again
A request may be scheduled many times before it is completed
Larger scheduling overhead than with non-preemptive scheduling
Used in multiprogramming and time-sharing OSs



Preemptive Scheduling Policies (continued)

Some preemptive scheduling policies:


Round-robin (RR) scheduling with time slicing
Least completed next (LCN) scheduling
Shortest time to go (STG) scheduling



Round-Robin (RR) Scheduling

The time slice, which is designated as δ, is the largest amount of CPU time a request may use when scheduled
A request is preempted at the end of a time slice
A timer interrupt is raised when the time slice elapses
It provides good response times to all requests
It provides comparable service to all CPU-bound processes
It does not fare well on measures of system performance like
throughput because it does not give a favored treatment to
short processes
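Below is a rough round-robin sketch (illustrative; the workload and the convention that newly arrived processes enter the ready queue ahead of the preempted one are assumptions): each scheduled process runs for at most one time slice δ and is then moved to the back of the ready queue.

```python
from collections import deque

# Round-robin sketch (illustrative): preempt at the end of each time slice.

def round_robin(processes, delta=1):
    """processes: list of (name, arrival_time, service_time)."""
    processes = sorted(processes, key=lambda p: p[1])
    remaining = {name: service for name, _, service in processes}
    arrivals = deque(processes)
    ready, time, completed = deque(), 0, {}
    while arrivals or ready:
        while arrivals and arrivals[0][1] <= time:   # admit newly arrived work
            ready.append(arrivals.popleft()[0])
        if not ready:                                # CPU idle until next arrival
            time = arrivals[0][1]
            continue
        name = ready.popleft()
        run = min(delta, remaining[name])
        time += run
        remaining[name] -= run
        while arrivals and arrivals[0][1] <= time:   # arrivals during the slice
            ready.append(arrivals.popleft()[0])
        if remaining[name] > 0:
            ready.append(name)                       # preempted: back of the queue
        else:
            completed[name] = time                   # record completion time
    return completed

print(round_robin([("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 2)], delta=1))
```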
Round-Robin (RR) Scheduling: An Example

In this example, δ = 1



Least Completed Next (LCN) Scheduling

It schedules the process that has so far consumed the least amount of CPU time
All processes will make approximately equal progress in terms
of the CPU time consumed by them
Guarantees that short processes will finish ahead of long processes
It starves long processes of CPU attention
It neglects existing processes if new processes keep arriving in
the system
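A minimal LCN sketch (illustrative; running the selected process for one time unit per scheduling decision is an assumption to keep the code short): at each decision, the ready process that has consumed the least CPU time so far is run.

```python
# LCN sketch (illustrative): equalise CPU time consumed so far.

def lcn(processes):
    """processes: list of (name, arrival_time, service_time)."""
    consumed = {name: 0 for name, _, _ in processes}
    remaining = {name: service for name, _, service in processes}
    time, completion = 0, {}
    while len(completion) < len(processes):
        ready = [name for name, arrival, _ in processes
                 if arrival <= time and name not in completion]
        if not ready:                                  # nothing has arrived yet
            time += 1
            continue
        name = min(ready, key=lambda n: consumed[n])   # least CPU time so far
        consumed[name] += 1
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            completion[name] = time
    return completion

print(lcn([("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 2)]))   # completion times
```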



LCN Scheduling: An Example


Issues:
Short processes will finish
ahead of long processes
Starves long processes of
CPU attention
Neglects existing processes
if new processes keep
arriving in the system
Shortest Time to Go (STG) Scheduling

It schedules a process whose remaining CPU time requirements are the smallest in the system
It favors short processes over long ones and provides good
throughput
It also favors a process that is nearing completion over short
processes entering the system
Helps to improve the turnaround times and weighted turnarounds of
processes
Since it is analogous to the SRN policy, long processes might face
starvation
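A rough STG sketch (illustrative workload; one time unit per decision is assumed): at every time unit the ready process with the least remaining CPU requirement is run, which makes STG the preemptive counterpart of SRN.

```python
# STG sketch (illustrative): preemptive shortest-remaining-time selection.

def stg(processes):
    """processes: list of (name, arrival_time, service_time)."""
    remaining = {name: service for name, _, service in processes}
    time, completion = 0, {}
    while len(completion) < len(processes):
        ready = [name for name, arrival, _ in processes
                 if arrival <= time and name not in completion]
        if not ready:                                   # CPU idle
            time += 1
            continue
        name = min(ready, key=lambda n: remaining[n])   # smallest time to go
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            completion[name] = time
    return completion

print(stg([("P1", 0, 5), ("P2", 1, 2), ("P3", 3, 6)]))
# {'P2': 3, 'P1': 7, 'P3': 13}: the short arrival P2 preempts P1
```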



STG Scheduling: An Example

Since it is analogous to the SRN policy, long processes might face starvation



Preemptive Scheduling Policies (continued)

Do it yourself
Calculate the mean turnaround time and mean weighted
turnaround for the following set of processes using
RR Scheduling with time slicing (δ = 1)
LCN Scheduling
STG Scheduling
Process       P1  P2  P3  P4  P5
Arrival time   0   2   3   5   9
Service time   3   3   2   5   3



Scheduling in Practice

An OS has to provide a suitable combination of user-centric and system-centric features
It also has to adapt its operation to the nature and number of user requests and the availability of resources
A single scheduler and a single scheduling policy cannot address all of these concerns
So an OS has an arrangement consisting of three schedulers, called the long-term scheduler, medium-term scheduler and short-term scheduler, to address different user-centric and system-centric issues
Scheduling in Practice (continued)

Long-term scheduler: Decides when to admit an arrived process for scheduling, depending on its nature (whether CPU-bound or I/O-bound) and on availability of resources like kernel data structures and disk space for swapping
Medium-term scheduler: Decides when to swap out a process from memory and when to load it back, so that a sufficient number of ready processes would exist in memory
Short-term scheduler: Decides which ready process to service next on the CPU and for how long
Scheduling in Practice (continued)

The short-term scheduler is the one that actually selects a process for operation
Hence it is also called the process scheduler



Event Handling and Scheduling



Event Handling and Scheduling (continued)

The kernel operates in an interrupt-driven manner
Every event that requires the kernel's attention causes an interrupt
The interrupt handler performs context save and invokes an
event handler
The event handler processes the event and changes the state
of the process affected by the event
It then invokes the long-term, medium-term or short-term
scheduler as appropriate



Event Handling and Scheduling (continued)

For example, the event handler that creates a new process invokes the long-term scheduler
The medium-term scheduler is invoked when a process is to be
suspended or resumed
The short-term scheduler gains control and selects a process
for execution



Scheduling in Time-Sharing Systems



Scheduling in Time-Sharing Systems (continued)

The long-term scheduler admits a process when resources can be allocated to it
It allocates swap space for the process on a disk, copies the code of
the process into the swap space, and adds the process to the list of
swapped-out processes
The medium-term scheduler controls swapping of processes and
moves processes between the swapped-out, blocked and ready lists
The short-term scheduler selects one process from the ready list
for execution
A process may shuttle between the medium-term and short-term
schedulers many times due to swapping
Process Scheduling

The short-term scheduler is often called the process scheduler


It uses a list of ready processes and decides which process to
execute and for how long



Process Scheduling (continued)

The list of ready processes is maintained as a list of their PCBs


When the process scheduler receives control, it selects one process for execution
The process dispatching mechanism loads two PCB fields, the PSW and the CPU registers, into the CPU to resume execution of the selected process
The context save mechanism is invoked when an interrupt occurs to
save the PSW and CPU registers of the interrupted process
The priority computation and reordering mechanism recomputes
the priority of requests and reorders the PCB lists to reflect the new
priorities
Priority-based Scheduling

It offers good response times to high-priority processes and good throughput.
Main features:
A mix of CPU-bound and I/O-bound processes exist in the system
An I/O-bound process has a higher priority than a CPU-bound process.
Process priorities are static.
Process scheduling is preemptive.



Priority-based Scheduling (continued)



Priority-based Scheduling (continued)



Round-Robin Scheduling



Round-Robin Scheduling: An Example



Multilevel Scheduling



Multilevel Scheduling (continued)



Multilevel Adaptive Scheduling

Also called multilevel feedback scheduling


Scheduler varies priority of process so it receives a time slice
consistent with its CPU requirement
Scheduler determines correct priority level for a process by
observing its recent CPU and I/O usage
Moves the process to this level
Example: CTSS, a time-sharing OS for the IBM 7094 in the
1960s
Eight-level priority structure
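The level-adjustment rule can be sketched as below (a simplification; the eight levels and the doubling time slices echo the CTSS structure, but the exact promotion/demotion rule here is an assumption): a process that exhausts its slice is treated as CPU-bound and demoted to a level with a longer slice, while a process that blocks early is promoted.

```python
# Multilevel adaptive (feedback) sketch: adjust a process's level from its
# recent behaviour. Level 0 is the highest priority with the shortest slice.

NUM_LEVELS = 8                                            # CTSS-style eight levels
TIME_SLICE = [2 ** level for level in range(NUM_LEVELS)]  # 1, 2, 4, ... time units

def adjust_level(level, used_full_slice):
    """Return the new priority level after one scheduling round."""
    if used_full_slice:
        return min(level + 1, NUM_LEVELS - 1)   # demote CPU-bound behaviour
    return max(level - 1, 0)                    # promote interactive behaviour

level = 0
for used_full in [True, True, False]:           # two full CPU bursts, then an I/O wait
    level = adjust_level(level, used_full)
    print("level", level, "next slice", TIME_SLICE[level])
```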



Fair Share Scheduling

Fair share: fraction of CPU time to be devoted to a group of processes from the same user or application
Ensures an equitable use of the CPU by processes belonging to
different users or different applications
Lottery scheduling is a technique for sharing a resource in a
probabilistically fair manner
Tickets are issued to applications (or users) on the basis of their fair
share of CPU time
Actual share of the resources allocated to the process depends on
contention for the resource
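A minimal lottery-scheduling sketch (the ticket counts are assumed fair shares, not from the book): each draw picks a winner with probability proportional to the tickets it holds, so over many decisions each group receives roughly its fair share of the CPU.

```python
import random

# Lottery scheduling sketch (illustrative): probabilistically fair sharing.

def lottery_pick(tickets):
    """tickets: dict mapping group/process name -> number of tickets held."""
    draw = random.randrange(sum(tickets.values()))
    for name, count in tickets.items():
        if draw < count:
            return name
        draw -= count

tickets = {"app_A": 60, "app_B": 30, "app_C": 10}   # assumed fair shares 60/30/10
wins = {name: 0 for name in tickets}
for _ in range(10000):
    wins[lottery_pick(tickets)] += 1
print(wins)   # win counts roughly proportional to the ticket counts
```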
Kernel Preemptibility

Helps ensure effectiveness of a scheduler


With a noninterruptible kernel, event handlers have mutually
exclusive access to kernel data structures without having to use data
access synchronization
If handlers have large running times, noninterruptibility causes large kernel
latency
May even cause a situation analogous to priority inversion
Preemptible kernel solves these problems
A high-priority process that is activated by an interrupt would start executing sooner



Scheduling Heuristics

Scheduling heuristics reduce overhead and improve user service
Use of a time quantum
After exhausting quantum, process is not considered for scheduling unless
granted another quantum
Done only after active processes have exhausted their quanta
Variation of process priority
Priority could be varied to achieve various goals
Boosted while process is executing a system call
Vary to more accurately characterize the nature of a process



Power Management

Idle loop used when no ready processes exist


Wastes power
Bad for power-starved systems
E.g., embedded systems
Solution: use special modes in CPU
Sleep mode: CPU does not execute instructions but accepts interrupts
Some computers provide several sleep modes
Light or heavy
OSs like Unix and Windows have generalized power
management to include all devices
Real-Time Scheduling

Real-time scheduling must handle two special scheduling constraints while trying to meet the deadlines of applications
First, processes within real-time applications are interacting
processes
Deadline of an application should be translated into appropriate deadlines for
the processes
Second, processes may be periodic
Different instances of a process may arrive at fixed intervals and all of them
have to meet their deadlines



Process Precedences and Feasible Schedules

Dependences between processes (e.g., Pi → Pj) are considered while determining deadlines and scheduling




Response requirements are guaranteed to be met (hard real-time systems) or are met probabilistically (soft real-time systems), depending on the type of RT system

RT scheduling focuses on implementing a feasible schedule for an application, if one exists
Process Precedences and Feasible Schedules (continued)

Another dynamic scheduling policy: optimistic scheduling


Admits all processes; may miss some deadlines
Deadline Scheduling

Two kinds of deadlines can be specified:


Starting deadline: latest instant of time by which operation of the process
must begin
Completion deadline: time by which operation of the process must
complete
We consider only completion deadlines in the following
Deadline estimation is done by considering process precedences
and working backward from the response requirement of the
application
Di = Dapplication − Σk ∈ descendant(i) xk, where descendant(i) is the set of descendants of Pi in the PPG and xk is the service time of Pk
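A sketch of this backward computation over a PPG (the chain and service times below are hypothetical, not Figure 7.13): each process's deadline is the application deadline minus the total service time of its descendants.

```python
# Deadline estimation sketch: Di = Dapplication - sum of service times of
# all descendants of Pi in the process precedence graph (PPG).

def process_deadlines(successors, service_time, d_application):
    """successors: dict mapping each process to the processes that depend on it."""
    def descendants(p, seen=None):
        seen = set() if seen is None else seen
        for q in successors.get(p, []):
            if q not in seen:
                seen.add(q)
                descendants(q, seen)
        return seen
    return {p: d_application - sum(service_time[d] for d in descendants(p))
            for p in service_time}

successors = {"P1": ["P2"], "P2": ["P3"], "P3": []}   # simple chain P1 -> P2 -> P3
service = {"P1": 5, "P2": 8, "P3": 12}
print(process_deadlines(successors, service, d_application=25))
# {'P1': 5, 'P2': 13, 'P3': 25}
```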



Example: Determining Process Deadlines

Total of service times of processes is 25 seconds


If the application has to produce a response in 25 seconds, the
deadlines of the processes would be:



Earliest Deadline First (EDF) Scheduling

It always selects the process with the earliest deadline


If pos(Pi) is the position of Pi in the sequence of scheduling decisions, a deadline overrun does not occur for Pi if the total service time of the processes occupying the first pos(Pi) positions does not exceed Di
This condition holds when a feasible schedule exists


Advantages: Simplicity and nonpreemptive nature
Good policy for static scheduling
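A minimal EDF sketch under the assumptions above (all processes ready at time 0 and completion deadlines only; the workloads are made up): processes are serviced nonpreemptively in order of earliest deadline, and any deadline overruns are reported.

```python
# EDF sketch (illustrative): nonpreemptive earliest-deadline-first servicing.

def edf(processes):
    """processes: list of (name, service_time, deadline), all ready at time 0."""
    time, overruns = 0, []
    for name, service, deadline in sorted(processes, key=lambda p: p[2]):
        time += service                  # run the next earliest-deadline process
        if time > deadline:
            overruns.append(name)        # deadline overrun
    return overruns

print(edf([("P1", 2, 5), ("P2", 3, 6), ("P3", 4, 12)]))   # []: feasible schedule
print(edf([("P1", 2, 5), ("P2", 3, 6), ("P3", 4, 8)]))    # ['P3'] misses its deadline
```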



Earliest Deadline First (EDF) Scheduling (continued)

EDF policy for the deadlines of Figure 7.13:

P4 : 20 indicates that P4 has the deadline 20


P2,P3 and P5,P6 have identical deadlines
Three other schedules are possible
None of them would incur deadline overruns
Example: Problems of EDF Scheduling

PPG of Figure 7.13 with the edge (P5,P6) removed
Two independent applications: P1-P4 and P6, and P5
If all processes are to complete by 19 seconds
Feasible schedule does not exist
Deadlines of the processes:

EDF scheduling may schedule the processes as follows:


P1,P2,P3,P4,P5,P6, or P1,P2,P3,P4,P6,P5
Hence number of processes that miss their deadlines is unpredictable



Feasibility of Schedule for Periodic Processes

Fraction of CPU time used by Pi = xi / Ti


In the following example, fractions of CPU time used add up to
0.93

If CPU overhead of OS operation is negligible, it is feasible to service these three processes
In general, a set of periodic processes P1, . . . , Pn that do not perform I/O can be serviced by a hard real-time system that has a negligible overhead if:
Σi (xi / Ti) ≤ 1
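A small sketch of this feasibility test (the service times and periods below are assumed, not the book's example):

```python
# Feasibility test for periodic processes that perform no I/O:
# total CPU utilisation sum(xi / Ti) must not exceed 1.

def feasible(processes):
    """processes: list of (service_time_xi, period_Ti)."""
    utilisation = sum(x / T for x, T in processes)
    return utilisation, utilisation <= 1.0

print(feasible([(10, 25), (8, 40), (15, 50)]))   # (0.9, True): feasible
```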



Rate Monotonic (RM) Scheduling

Determines the rate at which a process has to repeat


Rate of Pi = 1 / Ti
Assigns the rate itself as the priority of the process
A process with a smaller period has a higher priority
Employs a priority-based scheduling
Can complete its operation early



Rate Monotonic (RM) Scheduling (continued)

Rate monotonic scheduling is not guaranteed to find a feasible schedule in all situations
For example, if P3 had a period of 27 seconds
If application has a large number of processes, may not be able
to achieve more than 69 percent CPU utilization if it is to meet
deadlines of processes
The deadline-driven scheduling algorithm dynamically assigns
process priorities based on their current deadlines
Can achieve 100 percent CPU utilization
Practical performance is lower because of the overhead of dynamic
priority assignment
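An illustrative RM sketch (the periods and service times are assumptions): the priority of each process is its rate 1/Ti, and the n(2^(1/n) − 1) utilisation bound, which tends to about 0.69 for a large number of processes, gives a sufficient (not necessary) schedulability test.

```python
# Rate monotonic sketch (assumed workload): priority = rate = 1/Ti, plus the
# n*(2**(1/n) - 1) utilisation bound as a sufficient schedulability test.

def rm_priorities(processes):
    """processes: list of (name, service_time_xi, period_Ti)."""
    return {name: 1 / T for name, _, T in processes}   # higher rate = higher priority

def rm_utilisation_test(processes):
    n = len(processes)
    utilisation = sum(x / T for _, x, T in processes)
    bound = n * (2 ** (1 / n) - 1)                     # about 0.69 for large n
    return utilisation, bound, utilisation <= bound

procs = [("P1", 2, 10), ("P2", 3, 15), ("P3", 5, 30)]
print(rm_priorities(procs))          # P1 has the smallest period, hence highest priority
print(rm_utilisation_test(procs))    # utilisation 0.567 <= bound 0.780: schedulable
```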
Scheduling in Unix

Pure time-sharing operating system


In Unix 4.3 BSD, priorities are in the range 0 to 127
Processes in user mode have priorities between 50 and 127
Processes in kernel mode have priorities between 0 and 49
Uses a multilevel adaptive scheduling policy
Process priority = base priority for user processes
+ f (CPU time used recently) + nice value
For fair share
Add f (CPU time used by processes in group)
Process priority = base priority for user processes
+ f (CPU time used by process)
+ f (CPU time used by processes in group) + nice value
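A rough sketch of this recomputation (the base value, the decay function f and the weights are assumptions for illustration; 4.3 BSD uses its own decay of recent CPU usage). Note that in Unix a larger numerical value means a lower scheduling priority.

```python
# Unix-style priority recomputation sketch (illustrative values only).

USER_BASE_PRIORITY = 50          # user-mode priorities lie in the range 50..127

def recompute_priority(recent_cpu_ticks, nice, group_cpu_ticks=0):
    """Larger numerical value = lower scheduling priority, as in Unix."""
    f = lambda ticks: ticks // 4                  # assumed decay/weighting function
    priority = USER_BASE_PRIORITY + f(recent_cpu_ticks) + nice
    priority += f(group_cpu_ticks)                # fair-share variant adds the group term
    return min(priority, 127)

print(recompute_priority(recent_cpu_ticks=40, nice=0))                        # 60
print(recompute_priority(recent_cpu_ticks=40, nice=0, group_cpu_ticks=20))    # 65
```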
Example: Process Scheduling in Unix

Process       P1  P2  P3  P4  P5
Arrival time   0   2   3   4   8
Service time   3   3   5   2   3



Example: Fair Share Scheduling in Unix

