
Operating Systems

Prof. Navneet Goyal


Department of Computer Science & Information Systems
BITS, Pilani
Topics for Today
Thrashing
Working Set

Thrashing
Swapping out a piece of a process just before that piece is needed
The processor spends most of its time swapping pieces rather than executing user instructions

Principle of Locality
Program and data references within a process tend to cluster
Only a few pieces of a process will be needed over a short period of time
Possible to make intelligent guesses about which pieces will be needed in the future
This suggests that virtual memory may work efficiently
Locality In A Memory-Reference Pattern
Thrashing
If a process does not have enough pages, the page-fault rate is very high. This leads to:
low CPU utilization
the operating system thinks that it needs to increase the degree of multiprogramming
another process is added to the system

Thrashing: a process is busy swapping pages in and out
Thrashing (Cont.)
Demand Paging and Thrashing
Why does demand paging work?
Locality model
Process migrates from one locality to another
Localities may overlap

Why does thrashing occur?
Σ size of localities > total memory size
Working-Set Model
Δ ≡ working-set window ≡ a fixed number of page references
Example: 10,000 instructions
WSS_i (working set size of Process P_i) = total number of pages referenced in the most recent Δ (varies in time)
if Δ too small, will not encompass entire locality
if Δ too large, will encompass several localities
if Δ = ∞, will encompass entire program
D = Σ WSS_i ≡ total demand frames
if D > m ⇒ thrashing
Policy: if D > m, then suspend one of the processes
Working-set model
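As a concrete illustration, here is a minimal C sketch of the working-set computation above: it counts the distinct pages referenced in the most recent Δ references of a reference string and checks the D > m condition. The reference string, Δ, and m below are made-up values chosen only for illustration, not part of the slides.

```c
/* Minimal sketch: working-set size over the last DELTA references. */
#include <stdio.h>

#define MAX_PAGES 64

/* Working-set size = number of distinct pages in the last `delta` references. */
static int wss(const int *refs, int n, int delta)
{
    int seen[MAX_PAGES] = {0};
    int size = 0;
    int start = (n > delta) ? n - delta : 0;
    for (int i = start; i < n; i++) {
        if (!seen[refs[i]]) {
            seen[refs[i]] = 1;
            size++;
        }
    }
    return size;
}

int main(void)
{
    /* Hypothetical reference string showing two localities. */
    int refs[] = {1, 2, 3, 1, 2, 3, 1, 2,   7, 8, 9, 7, 8, 9, 7, 8};
    int n = sizeof(refs) / sizeof(refs[0]);
    int delta = 6;                 /* working-set window (assumed) */
    int m = 4;                     /* total frames available (assumed) */

    int D = wss(refs, n, delta);   /* with a single process, D = WSS */
    printf("WSS over last %d references = %d\n", delta, D);
    if (D > m)
        printf("D > m: thrashing likely; suspend a process\n");
    else
        printf("D <= m: no thrashing\n");
    return 0;
}
```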
Keeping Track of the Working Set
Approximate with interval timer + a reference
bit
Example: A = 10,000
Timer interrupts after every 5000 time units
Keep in memory 2 bits for each page
Whenever a timer interrupts copy and sets the
values of all reference bits to 0
If one of the bits in memory = 1 page in
working set
Why is this not completely accurate?
Improvement = 10 bits and interrupt every 1000
time units
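A rough C sketch of the timer-plus-reference-bit approximation described above. The page structure, constants, and function names are illustrative assumptions, not taken from any real kernel.

```c
/* Sketch of the interval-timer + reference-bit approximation. */
#include <stdio.h>

#define NPAGES 8
#define HISTORY_BITS 2          /* "keep in memory 2 bits for each page" */

struct page {
    unsigned ref : 1;           /* hardware reference bit */
    unsigned history;           /* history bits maintained in memory */
};

static struct page pages[NPAGES];

/* Called on every timer interrupt (e.g. every 5,000 time units for Δ = 10,000):
 * copy the reference bit into the history and clear it. */
static void timer_interrupt(void)
{
    for (int i = 0; i < NPAGES; i++) {
        pages[i].history = ((pages[i].history << 1) | pages[i].ref)
                           & ((1u << HISTORY_BITS) - 1);
        pages[i].ref = 0;
    }
}

/* A page is considered in the working set if it was referenced in any of the
 * last HISTORY_BITS intervals, or since the last interrupt. */
static int in_working_set(int i)
{
    return pages[i].ref || pages[i].history;
}

int main(void)
{
    pages[0].ref = 1;           /* simulate references in this interval */
    pages[3].ref = 1;
    timer_interrupt();          /* interval boundary */
    pages[3].ref = 1;           /* page 3 referenced again */
    for (int i = 0; i < NPAGES; i++)
        printf("page %d: %s\n", i, in_working_set(i) ? "in WS" : "not in WS");
    return 0;
}
```

The approximation is inexact because a set bit only says the page was referenced somewhere within an interval, not exactly when; more bits and more frequent interrupts narrow that uncertainty.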
Page Size
Smaller page size means less internal fragmentation
Smaller page size means more pages required per process
More pages per process means larger page tables
Larger page tables mean a larger portion of the page tables resides in virtual memory
Secondary memory is designed to efficiently transfer large blocks of data, so a larger page size is better
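A small worked computation of this trade-off, assuming an 8 MB process and 4-byte page-table entries (both made-up figures): shrinking the page size multiplies the number of pages and the page-table size, while reducing the average internal fragmentation (about half of the last page).

```c
/* Worked example of the page-size trade-off; all sizes are assumed values. */
#include <stdio.h>

int main(void)
{
    long proc_size = 8L * 1024 * 1024;      /* assume an 8 MB process */
    long pte_size  = 4;                     /* assume 4-byte page-table entries */
    long page_sizes[] = {1024, 4096, 16384};

    for (int i = 0; i < 3; i++) {
        long ps = page_sizes[i];
        long npages   = (proc_size + ps - 1) / ps;  /* pages per process */
        long pt_bytes = npages * pte_size;          /* page-table size */
        long frag     = ps / 2;                     /* expected internal fragmentation */
        printf("page size %6ld B: %6ld pages, page table %7ld B, "
               "avg internal fragmentation %5ld B\n",
               ps, npages, pt_bytes, frag);
    }
    return 0;
}
```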
Page Size
Small page size, large number of pages
will be found in main memory
As time goes on during execution, the
pages in memory will all contain portions
of the process near recent references.
Page faults low.
Increased page size causes pages to
contain locations further from any recent
reference. Page faults rise.
Frame Allocation: Issues
Fewer frames per process ⇒ more processes can reside in main memory ⇒ higher probability that the OS will find at least one ready process
But with too few frames, despite the principle of locality, the page-fault rate will be high
Beyond a certain size, additional allocation has no noticeable effect on the page-fault rate, because of the principle of locality
Frame Allocation Schemes
Fixed Allocation
Equal Allocation
Proportional Allocation
Priority Allocation
Fixed Allocation
Equal allocation: e.g., if there are 100 frames and 5 processes, give each process 20 frames.
Proportional allocation: allocate according to the size of the process.
s_i = size of process p_i
S = Σ s_i
m = total number of frames
a_i = allocation for p_i = (s_i / S) × m

Example:
m = 64
s_1 = 10
s_2 = 127
a_1 = (10 / 137) × 64 ≈ 5
a_2 = (127 / 137) × 64 ≈ 59
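A short C sketch of the proportional-allocation formula, using the example values above (m = 64, s_1 = 10, s_2 = 127).

```c
/* Proportional allocation: a_i = (s_i / S) * m. */
#include <stdio.h>

int main(void)
{
    int s[] = {10, 127};                 /* process sizes (in pages) */
    int n = 2;
    int m = 64;                          /* total number of frames */

    int S = 0;
    for (int i = 0; i < n; i++)
        S += s[i];                       /* S = sum of s_i = 137 */

    for (int i = 0; i < n; i++) {
        /* round to the nearest frame count */
        int a = (int)((double)s[i] / S * m + 0.5);
        printf("a_%d = %d frames\n", i + 1, a);   /* prints 5 and 59 */
    }
    return 0;
}
```

Note that rounding each a_i independently can make the total differ slightly from m; in this example 5 + 59 = 64, so the allocation is exact.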
Priority Allocation
Use a proportional allocation scheme
using priorities rather than size.

If process P_i generates a page fault,
select for replacement one of its own frames, or
select for replacement a frame from a process with a lower priority number.
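A sketch of the victim-selection choice described above; the frame table, process IDs, and the convention that a lower number means higher priority are assumptions for illustration only.

```c
/* Sketch of victim selection under priority allocation. */
#include <stdio.h>

#define NFRAMES 8

struct frame {
    int owner;        /* process that currently holds the frame */
    int owner_prio;   /* owner's priority number (lower = higher priority) */
};

static struct frame frames[NFRAMES] = {
    {1, 1}, {1, 1}, {2, 3}, {2, 3}, {3, 5}, {3, 5}, {1, 1}, {2, 3}
};

/* Choose a victim frame when process `pid` (priority `prio`) faults:
 * prefer one of its own frames, otherwise steal from a lower-priority owner. */
static int pick_victim(int pid, int prio)
{
    for (int f = 0; f < NFRAMES; f++)
        if (frames[f].owner == pid)
            return f;                    /* replace one of its own frames */
    for (int f = 0; f < NFRAMES; f++)
        if (frames[f].owner_prio > prio)
            return f;                    /* frame of a lower-priority process */
    return -1;                           /* no frame available */
}

int main(void)
{
    printf("victim for P2 (prio 3): frame %d\n", pick_victim(2, 3));
    printf("victim for P4 (prio 2): frame %d\n", pick_victim(4, 2));
    return 0;
}
```

Restricting the search to the faulting process's own frames corresponds to local replacement, while taking a frame from a lower-priority process is a form of global replacement, as discussed on the next slide.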
Global vs. Local Allocation
Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another.
Local replacement: each process selects from only its own set of allocated frames.
