
Operating Systems:
Internals and Design Principles, 6/E
William Stallings

Uniprocessor Scheduling
Readings:
Ch 09 (Stallings) & Ch 05 (Silberschatz)

Dave Bremer
Otago Polytechnic, N.Z.
©2008, Prentice Hall
Roadmap
• Types of Processor Scheduling
• Scheduling Algorithms
• Traditional UNIX Scheduling
Scheduling
• An OS must allocate resources amongst
competing processes.
• The resource provided by a processor is
execution time
– The resource is allocated by means of a
schedule
Overall Aim
of Scheduling
• The aim of processor scheduling is to
assign processes to be executed by the
processor over time,
– in a way that meets system objectives, such
as response time, throughput, and processor
efficiency.
Scheduling Objectives
• The scheduling function should
– Share time fairly among processes
– Prevent starvation of a process
– Use the processor efficiently
– Have low overhead
– Prioritise processes when necessary (e.g. real-time deadlines)
Types of Scheduling
Two Suspend States
• Remember this diagram from Chapter 3 / the last lecture
Scheduling and
Process State Transitions
Nesting of Scheduling Functions
Queuing Diagram
Medium-Term
Scheduling
• Part of the swapping function
• Swapping-in decisions are based on the
need to manage the degree of
multiprogramming
Short-Term Scheduling
• Executes most frequently
• Invoked when an event occurs
– Clock interrupts
– I/O interrupts
– System calls
– Signals, e.g., Semaphores
Roadmap
• Types of Processor Scheduling
• Scheduling Algorithms
• Traditional UNIX Scheduling
Scheduling Criteria/ Metrics
• Different CPU scheduling algorithms have different properties
• The choice of a particular algorithm may favor one class of processes over
another
• In choosing which algorithm to use, the properties of the various algorithms
should be considered
• Criteria for comparing CPU scheduling algorithms may include the following
– CPU utilization – percent of time that the CPU is busy executing a process
– Throughput – number of processes that are completed per time unit
– Response time – amount of time it takes from when a request was
submitted until the first response occurs (but not the time it takes to
output the entire response)
– Waiting time – the total amount of time a process spends waiting in the
ready queue before it runs. Time spent blocked, waiting for I/O, is not part
of the waiting time.
– Turnaround time – amount of time to execute a particular process from the
time of submission through the time of completion
Aim of Short
Term Scheduling
• Main objective is to allocate processor
time to optimize certain aspects of system
behaviour.
• A set of criteria is needed to evaluate the
scheduling policy as discussed in the
previous slide.
Short-Term Scheduling
Criteria: User vs System
• We can differentiate between user and
system criteria
• User-oriented
– Response Time
• Elapsed time from the submission of a request until the first output
appears.
• System-oriented
– Effective and efficient utilization of the
processor
Short-Term Scheduling
Criteria: Performance
• We could differentiate between
performance related criteria, and those
unrelated to performance
• Performance-related
– Quantitative, easily measured
– E.g. response time and throughput
• Non-performance related
– Qualitative
– Hard to measure
Interdependent
Scheduling Criteria
Interdependent
Scheduling Criteria cont.
Priorities
• Scheduler will always choose a process of
higher priority over one of lower priority
• Have multiple ready queues to represent
each level of priority
Priority Queuing
Starvation
• Problem:
– Lower-priority processes may suffer starvation if there is
a steady supply of higher-priority processes.

• Solution
– Allow a process to change its priority based
on its age or execution history
Decision Mode
• Specifies the instants in time at which the
selection function is exercised.
• Two categories:
– Nonpreemptive
– Preemptive
Nonpreemptive vs
Preemptive
• Non-preemptive
– Once a process is in the running state, it will
continue until it terminates or blocks itself for
I/O
• Preemptive
– Currently running process may be interrupted
and moved to ready state by the OS
– Preemption may occur when new process
arrives, on an interrupt, or periodically.
Single Processor Scheduling Algorithms
Single Processor Scheduling Algorithms
• First Come, First Served (FCFS)
• Shortest Job First (SJF)
• Priority Based Scheduling
• Round Robin (RR)
First Come, First Served
(FCFS) Scheduling
First-Come, First-Served (FCFS) Scheduling

Process  Burst Time
P1       24
P2        3
P3        3

• With FCFS, the process that requests the CPU first is allocated the CPU first
• Case #1: Suppose that the processes arrive in the order (assuming arrival at time 0): P1, P2, P3
  The Gantt chart for the schedule is:

  | P1 | P2 | P3 |
  0    24   27   30

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
• Average turn-around time: (24 + 27 + 30)/3 = 27
FCFS Scheduling (Cont.)
• Case #2: Suppose that the processes arrive in the order: P2 , P3 , P1

• The Gantt chart for the schedule is:

  | P2 | P3 | P1 |
  0    3    6    30

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3 (Much better than Case #1)
• Average turn-around time: (3 + 6 + 30)/3 = 13
• Case #1 is an example of the convoy effect; all the other processes wait for one
long-running process to finish using the CPU
– This problem results in lower CPU and device utilization; Case #2 shows that
higher utilization might be possible if the short processes were allowed to run
first
• The FCFS scheduling algorithm is non-preemptive
– Once the CPU has been allocated to a process, that process keeps the CPU
until it releases it either by terminating or by requesting I/O
– It is a troublesome algorithm for time-sharing systems
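
A minimal sketch of how these FCFS figures can be computed, assuming arrival at time 0 and the burst times from the two cases above (the fcfs_metrics helper name is illustrative, not from the slides):

    # FCFS sketch: all processes arrive at time 0 and run in list order.
    def fcfs_metrics(bursts):
        """Return (waiting_times, turnaround_times) for FCFS with arrival time 0."""
        waiting, turnaround, clock = [], [], 0
        for burst in bursts:
            waiting.append(clock)       # time spent in the ready queue before running
            clock += burst              # the process runs to completion
            turnaround.append(clock)    # completion time minus arrival (arrival = 0)
        return waiting, turnaround

    # Case #1: P1=24, P2=3, P3=3  -> average waiting 17, average turn-around 27
    w, t = fcfs_metrics([24, 3, 3])
    print(sum(w) / len(w), sum(t) / len(t))    # 17.0 27.0

    # Case #2: P2=3, P3=3, P1=24  -> average waiting 3, average turn-around 13
    w, t = fcfs_metrics([3, 3, 24])
    print(sum(w) / len(w), sum(t) / len(t))    # 3.0 13.0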
Shortest Job First (SJF)
Scheduling
Shortest-Job-First (SJF) Scheduling

• The SJF algorithm associates with each process the length of its next CPU burst
• When the CPU becomes available, it is assigned to the process that has the
smallest next CPU burst (in the case of matching bursts, FCFS is used)
• Two schemes:
– Non-preemptive – once the CPU is given to the process, it cannot be
preempted until it completes its CPU burst
– Preemptive – if a new process arrives with a CPU burst length less than the
remaining time of the currently executing process, preempt. This scheme is
known as Shortest-Remaining-Time-First (SRTF)
Example #1: Non-Preemptive SJF
(simultaneous arrival)

Process  Arrival Time  Burst Time
P1       0.0           6
P2       0.0           4
P3       0.0           1
P4       0.0           5

• SJF (non-preemptive, simultaneous arrival)

  | P3 | P2 | P4 | P1 |
  0    1    5    10   16
• Average waiting time = (0 + 1 + 5 + 10)/4 = 4
• Average turn-around time = (1 + 5 + 10 + 16)/4 = 8
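
A short sketch of non-preemptive SJF for the simultaneous-arrival case: sort by burst length and run each job to completion (the sjf_nonpreemptive helper name is illustrative):

    # Non-preemptive SJF with all processes arriving at time 0:
    # run the jobs in order of increasing burst length.
    def sjf_nonpreemptive(bursts):
        """bursts: dict name -> burst length. Returns {name: (waiting, turnaround)}."""
        clock, results = 0, {}
        for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
            results[name] = (clock, clock + burst)   # arrival = 0 for every process
            clock += burst
        return results

    res = sjf_nonpreemptive({"P1": 6, "P2": 4, "P3": 1, "P4": 5})
    print(sum(w for w, _ in res.values()) / 4)   # 4.0  (average waiting time)
    print(sum(t for _, t in res.values()) / 4)   # 8.0  (average turn-around time)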
Example #2: Non-Preemptive SJF
(varied arrival times)

• SJF (non-preemptive, varied arrival times; same processes as in Example #3 below)

  | P1 | P3 | P2 | P4 |
  0    7    8    12   16

Waiting time: sum of time that a process has spent waiting in the ready queue
Example #3: Preemptive SJF
(Shortest-remaining-time-first)
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

Waiting time: sum of time that a process has spent waiting in the ready queue

• SJF (preemptive, varied arrival times)

  | P1 | P2 | P3 | P2 | P4 | P1 |
  0    2    4    5    7    11   16

• Average waiting time
  = ( [(0 - 0) + (11 - 2)] + [(2 - 2) + (5 - 4)] + (4 - 4) + (7 - 5) )/4
  = (9 + 1 + 0 + 2)/4
  = 3
• Average turn-around time = ((16 - 0) + (7 - 2) + (5 - 4) + (11 - 5))/4 = 28/4 = 7
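
A sketch of the preemptive variant (SRTF), simulated one time unit at a time on the Example #3 workload; the srtf helper name is illustrative:

    # Shortest-Remaining-Time-First: at each time unit, run the arrived process
    # with the least remaining burst, preempting the current one if necessary.
    def srtf(procs):
        """procs: list of (name, arrival, burst). Returns {name: (waiting, turnaround)}."""
        remaining = {name: burst for name, _, burst in procs}
        finish, clock = {}, 0
        while remaining:
            ready = [name for name, arrival, _ in procs
                     if name in remaining and arrival <= clock]
            if not ready:
                clock += 1                      # CPU idle until the next arrival
                continue
            name = min(ready, key=lambda n: remaining[n])
            remaining[name] -= 1                # run the chosen process for one unit
            clock += 1
            if remaining[name] == 0:
                del remaining[name]
                finish[name] = clock
        return {name: (finish[name] - arrival - burst, finish[name] - arrival)
                for name, arrival, burst in procs}

    res = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
    print(sum(w for w, _ in res.values()) / 4)   # 3.0 (average waiting time)
    print(sum(t for _, t in res.values()) / 4)   # 7.0 (average turn-around time)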
Priority Scheduling
Priority Scheduling
• The SJF algorithm is a special case of the general priority scheduling
algorithm
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest integer
= highest priority)
• Priority scheduling can be either preemptive or non-preemptive
– A preemptive approach will preempt the CPU if the priority of the newly-
arrived process is higher than the priority of the currently running
process
– A non-preemptive approach will simply put the new process (with the
highest priority) at the head of the ready queue
• SJF is a priority scheduling algorithm where priority is the predicted next CPU
burst time
• The main problem with priority scheduling is starvation, that is, low priority
processes may never execute
• A solution is aging; as time progresses, the priority of a process in the ready
queue is increased
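
A brief sketch of the aging idea: the effective priority of a waiting process improves over time, so it cannot starve indefinitely. The numbers and the aging rate below are illustrative assumptions, and, as on this slide, a smaller value means a higher priority:

    # Aging sketch: every AGING_STEP time units spent waiting raises a process
    # one priority level (smaller number = higher priority).
    AGING_STEP = 10   # illustrative value

    def effective_priority(base_priority, waiting_time):
        return max(0, base_priority - waiting_time // AGING_STEP)

    def pick_next(ready):
        """ready: list of (name, base_priority, waiting_time); best effective priority wins."""
        return min(ready, key=lambda p: effective_priority(p[1], p[2]))[0]

    # A long-waiting low-priority process eventually beats a fresh high-priority one.
    print(pick_next([("old_batch_job", 7, 60), ("new_job", 3, 0)]))   # old_batch_job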
Round Robin (RR) Scheduling
Round Robin (RR) Scheduling
• In the round robin algorithm, each process gets
a small unit of CPU time (a time quantum),
usually 10-100 milliseconds. After this time has
elapsed, the process is preempted and added
to the end of the ready queue.

• Performance of the round robin algorithm


– q large ⇒ behaves like FCFS
– q small ⇒ q must be greater than the context
switch time; otherwise, the overhead is too
high
Example of RR with Time Quantum = 20

Process  Burst Time
P1       53
P2       17
P3       68
P4       24

• The Gantt chart is:

  | P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
  0    20   37   57   77   97   117  121  134  154  162


• Typically, higher average turnaround than SJF, but better response time
• Average waiting time
  = ( [(0 - 0) + (77 - 20) + (121 - 97)] + (20 - 0) + [(37 - 0) + (97 - 57) + (134 - 117)] + [(57 - 0) + (117 - 77)] ) / 4
  = ( (0 + 57 + 24) + 20 + (37 + 40 + 17) + (57 + 40) ) / 4
  = (81 + 20 + 94 + 97)/4
  = 292 / 4 = 73
• Average turn-around time = (134 + 37 + 162 + 121) / 4 = 113.5
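
A sketch of a round-robin simulator that reproduces the quantum-20 figures above, assuming all four processes arrive at time 0 (the round_robin helper name is illustrative):

    from collections import deque

    # Round robin: each process runs for at most `quantum` time units, then
    # rejoins the back of the ready queue if it still has work left.
    def round_robin(bursts, quantum):
        """bursts: dict name -> burst. Returns {name: (waiting, turnaround)}."""
        queue = deque(bursts)
        remaining = dict(bursts)
        finish, clock = {}, 0
        while queue:
            name = queue.popleft()
            run = min(quantum, remaining[name])
            clock += run
            remaining[name] -= run
            if remaining[name] == 0:
                finish[name] = clock
            else:
                queue.append(name)
        return {n: (finish[n] - bursts[n], finish[n]) for n in bursts}

    res = round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, quantum=20)
    print(sum(w for w, _ in res.values()) / 4)   # 73.0  (average waiting time)
    print(sum(t for _, t in res.values()) / 4)   # 113.5 (average turn-around time)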
Multi-level Queue Scheduling
Multi-level Queue Scheduling
• Multi-level queue scheduling is used when processes can be classified into groups
• For example, foreground (interactive) processes and background (batch) processes
– The two types of processes have different response-time requirements and so
may have different scheduling needs
– Also, foreground processes may have priority (externally defined) over
background processes
• A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues
• The processes are permanently assigned to one queue, generally based on some
property of the process such as memory size, process priority, or process type
• Each queue has its own scheduling algorithm
– The foreground queue might be scheduled using an RR algorithm
– The background queue might be scheduled using an FCFS algorithm
• In addition, there needs to be scheduling among the queues, which is commonly
implemented as fixed-priority pre-emptive scheduling
– The foreground queue may have absolute priority over the background queue
Multi-level Queue Scheduling
• One example of a multi-level queue is the set of five queues shown below
• Each queue has absolute priority over lower-priority queues
• For example, no process in the batch queue can run unless the queues above it
are empty
• Do you see any problem with this approach?
• This can result in starvation for the processes in the lower priority queues
Multilevel Queue Scheduling
• Another possibility is to time slice among the queues
• Each queue gets a certain portion of the CPU time, which it can then
schedule among its various processes
– The foreground queue can be given 80% of the CPU time for RR
scheduling
– The background queue can be given 20% of the CPU time for FCFS
scheduling
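
A small sketch of time-slicing among the queues in the 80/20 split described above; the queue names and the share table are illustrative assumptions:

    import itertools

    # Time-slice among queues: of every 5 dispatcher slots, 4 go to the
    # foreground (RR) queue and 1 to the background (FCFS) queue -> 80% / 20%.
    SHARES = {"foreground": 4, "background": 1}

    def queue_slots(n):
        """Return which queue owns each of the next n dispatcher slots."""
        pattern = [q for q, share in SHARES.items() for _ in range(share)]
        return list(itertools.islice(itertools.cycle(pattern), n))

    print(queue_slots(10))
    # ['foreground', 'foreground', 'foreground', 'foreground', 'background',
    #  'foreground', 'foreground', 'foreground', 'foreground', 'background']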
Multi-level Feedback Queue
Scheduling
Multilevel Feedback Queue
Scheduling
• In multi-level feedback queue scheduling, a process can move between the
various queues; aging can be implemented this way
• A multilevel-feedback-queue scheduler is defined by the following
parameters:
– Number of queues
– Scheduling algorithms for each queue
– Method used to determine when to promote a process
– Method used to determine when to demote a process
– Method used to determine which queue a process will enter when that
process needs service
Example of Multilevel Feedback Queue
Scheduling
– A new job enters queue Q0 (RR) and is placed at the end. When it
gains the CPU, the job receives 8 milliseconds. If it does not finish in 8
milliseconds, the job is moved to the end of queue Q1.
– A Q1 (RR) job receives 16 milliseconds. If it still does not complete, it is
preempted and moved to queue Q2 (FCFS).

Q0

Q1

Q2
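
A sketch of the three-queue feedback scheme above (Q0 with an 8 ms quantum, Q1 with 16 ms, Q2 FCFS). The simulation assumes all jobs are CPU-only and arrive at time 0, and the mlfq helper name is illustrative:

    from collections import deque

    # Multilevel feedback queue: a job that exhausts its quantum is demoted to
    # the next lower queue; the lowest queue (Q2) is FCFS and runs to completion.
    QUANTA = [8, 16, None]            # None = no preemption (FCFS)

    def mlfq(bursts):
        """bursts: dict name -> burst. Returns [(name, completion_time), ...]."""
        queues = [deque(bursts), deque(), deque()]
        remaining = dict(bursts)
        done, clock = [], 0
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
            name = queues[level].popleft()
            quantum = QUANTA[level]
            run = remaining[name] if quantum is None else min(quantum, remaining[name])
            clock += run
            remaining[name] -= run
            if remaining[name] == 0:
                done.append((name, clock))
            else:
                queues[level + 1].append(name)                   # demote one level
        return done

    print(mlfq({"short": 5, "medium": 20, "long": 40}))
    # [('short', 5), ('medium', 33), ('long', 65)]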
Alternative Scheduling
Policies
Selection Function
• Determines which process is selected for
execution
• If based on execution characteristics then
important quantities are:
• w = time spent in system so far, waiting
• e = time spent in execution so far
• s = total service time required by the process,
including e;
Process Scheduling
Example
• Example set of
processes,
consider each a
batch job

– Service time represents total execution time


First-Come-
First-Served
• Each process joins the Ready queue
• When the currently running process ceases to
execute, the process that has been in the Ready
queue the longest is selected
First-Come-
First-Served
• A short process may have to wait a very
long time before it can execute
• Favors CPU-bound processes
– I/O-bound processes have to wait until the CPU-bound
process completes
Round Robin
• Uses preemption based on a clock
– also known as time slicing, because each
process is given a slice of time before being
preempted.
Round Robin
• Clock interrupt is generated at periodic
intervals
• When an interrupt occurs, the currently
running process is placed in the ready
queue
– Next ready job is selected
Effect of Size of
Preemption Time Quantum
Effect of Size of
Preemption Time Quantum
‘Virtual Round Robin’
Shortest Process Next
• Nonpreemptive policy
• Process with shortest expected processing
time is selected next
• Short process jumps ahead of longer
processes
Shortest Process Next
• Predictability of longer processes is
reduced
• If the estimated time for a process is not correct,
the operating system may abort it
• Possibility of starvation for longer
processes
Calculating
Program ‘Burst’
• Where:
– Ti = processor execution
time for the ith instance of
this process
– Si = predicted value for
the ith instance
– S1 = predicted value for
first instance; not
calculated
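
The averaging formula itself does not survive the slide export; in the notation above, the simple-averaging estimate of the next burst, consistent with Stallings' presentation, is:

    S_{n+1} = (1/n) \sum_{i=1}^{n} T_i = (1/n) T_n + ((n-1)/n) S_n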
Exponential Averaging
• A common technique for predicting a
future value on the basis of a time series
of past values is exponential averaging
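
In formula form, exponential averaging predicts S_{n+1} = \alpha T_n + (1 - \alpha) S_n with 0 < \alpha < 1, so recent bursts carry geometrically more weight. A small numeric sketch (the burst values and the value of alpha are made up):

    # Exponential averaging: S_{n+1} = alpha * T_n + (1 - alpha) * S_n
    def predict_next_burst(observed_bursts, alpha=0.8, initial_estimate=10.0):
        estimate = initial_estimate          # S_1 is guessed, not calculated
        for t in observed_bursts:            # T_1, T_2, ...
            estimate = alpha * t + (1 - alpha) * estimate
        return estimate

    # The prediction follows the recent jump in burst length fairly quickly.
    print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))   # about 12.9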
Exponential Smoothing
Coefficients
Use Of Exponential
Averaging
Use Of
Exponential Averaging
Shortest Remaining
Time
• Preemptive version of shortest process
next policy
• Must estimate processing time and choose
the shortest
Highest Response
Ratio Next
• Choose next process with the greatest
ratio
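
The ratio referred to here is the response ratio; using the quantities defined on the Selection Function slide (w = time spent waiting, s = expected service time), HRRN selects the ready process with the largest

    R = (w + s) / s

Because R grows while a process waits, long jobs are not starved indefinitely.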
Feedback Scheduling
• Penalize jobs that
have been running
longer
• Don’t know
remaining time
process needs to
execute
Feedback Performance
• Variations exist, simple version pre-empts
periodically, similar to round robin
– But can lead to starvation
Performance
Comparison
• Any scheduling discipline that chooses the
next item to be served independent of
service time obeys the relationship:
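
The relationship itself is lost in the export; a reconstruction of the result given in Stallings (with T_r the turnaround/residence time, T_s the service time, and \rho the processor utilization) is:

    T_r / T_s = 1 / (1 - \rho)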
Formulas
Overall Normalized
Response Time
Normalized Response
Time for Shorter Process
Normalized Response
Time for Longer Processes
Normalized
Turnaround Time
Fair-Share Scheduling
• User’s application runs as a collection of
processes (threads)
• User is concerned about the performance
of the application
• Need to make scheduling decisions based
on process sets
Fair-Share Scheduler
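
The figure on this slide shows the fair-share priority recomputation; a reconstruction based on Stallings' description (Base_j is the base priority of process j, CPU_j its decayed processor usage, GCPU_k the decayed usage of its group k, and W_k the group's weight) is:

    CPU_j(i)  = CPU_j(i-1) / 2
    GCPU_k(i) = GCPU_k(i-1) / 2
    P_j(i)    = Base_j + CPU_j(i) / 2 + GCPU_k(i) / (4 * W_k)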
Roadmap
• Types of Processor Scheduling
• Scheduling Algorithms
• Traditional UNIX Scheduling
Traditional UNIX
Scheduling
• Multilevel feedback using round robin
within each of the priority queues
• If a running process does not block or
complete within 1 second, it is preempted
• Priority is based on process type and
execution history.
Scheduling Formula
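
The formula on this slide does not survive the export; a reconstruction of the once-per-second recomputation as presented in Stallings (CPU_j is the decayed processor usage of process j and nice_j its nice value) is:

    CPU_j(i) = CPU_j(i-1) / 2
    P_j(i)   = Base_j + CPU_j(i) / 2 + nice_j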
Bands
• Priorities are recomputed once per second
• Base priority divides all processes into
fixed bands of priority levels
– Swapper (highest)
– Block I/O device control
– File manipulation
– Character I/O device control
– User processes (lowest)
Example of Traditional
UNIX Process Scheduling
