
Introduction To Operating System

[Diagram: components of a computer system. A computer system consists of hardware, software, and the user. Hardware: input units, the CPU (a processor with a control unit and cache memory, plus memory: RAM and ROM), output units, and secondary storage (magnetic tapes, magnetic disks such as floppy disks and hard disks, and compact disks). Software: the operating system, application programs, compilers and assemblers, and programming languages (high level and low level).]

Q: Explain what is meant by Application Programs? Sol:


[Diagram: layered view of a computer system. The user interacts with application programs; application programs and the operating system form the software layer, which runs on the hardware.]

Application programs are programs produced by software companies to help computer users perform useful tasks.

Q: Draw a block diagram showing the computer internal structure. Sol:

[Block diagram: the input units deliver instructions and data to memory (RAM). The CPU consists of the control unit, the ALU, and cache memory: the control unit fetches instructions and issues control signals, while the ALU loads data, computes, and stores results back to memory. Results are sent to the output units. ROM and the secondary storage are also connected to memory.]

Q: Define the following: RAM, ROM, Cache Memory. Sol:

RAM:
Random Access Memory. Volatile (loses its contents when power is off). Can be modified by the user.

ROM:
Read Only Memory. Non-volatile. Contains fixed programs used by the computer.

Cache Memory:

A memory inside the processor chip. Used to store frequently used data. Increases the processor speed.


Q: Write short notes about: Secondary Storage Units Sol:

Definition: Secondary storage units are the units used to store data permanently.

[Diagram: secondary storage is divided into magnetic tapes, magnetic disks (floppy disks and hard disks), and compact disks.]

Magnetic Tapes:
Used to store large volumes of data for long periods in large computers such as mainframes. A tape consists of a plastic film coated with a magnetic material (iron oxide).
Advantages: compact (can store a huge amount of data), economical (low cost), no loss of data. Disadvantage: sequential access to the stored data.

Magnetic Disks:
A surface of metal (in hard disks) or plastic (in floppy disks) coated with a magnetic material. It rotates at high speed and is divided into tracks and sectors.

Compact Disks:
CD-ROM (Compact Disk / Read Only Memory). A plastic surface coated with a reflective material; a laser beam is used to write on the CD-ROM. Can store up to about 600 megabytes.

Q: Define OS, then what are the different OS goals? Sol: Operating System (OS):
The program that runs at all times on the computer and coordinates all computer components (its core is usually called the kernel).
The OS does not perform a useful task by itself; rather, it creates a suitable environment so that other programs can operate efficiently.

OS goals: convenience for the user, and efficient use of the system components.

Q: What are the different types of OS? Sol:

Different types of OS are:

1. Batch System.
2. Multi-Programming System.
3. Multi-Tasking (Time Sharing) System.
4. Multi-Processor (Parallel) System.
5. Network Operating Systems.
6. Real Time Systems.

Batch System:
Users send their jobs to the computer operator. The operator organizes the jobs into a set of batches (each containing similar jobs). Each batch is then run separately as a set of jobs.

[Diagram: the operator groups incoming jobs into batches; the CPU runs the jobs of one batch, then takes the next batch.]

Multi-Programming System:
A number of processes are kept in memory in the ready queue waiting for the CPU (there is one user). Windows uses this concept.

[Diagram: several processes sit in the ready queue in memory while the CPU switches between them.]

Multi-Tasking (Time Sharing) System:

Allows a number of users to share the CPU at the same time. This concept is used in mainframe computers.

[Diagram: each user (user 1, 2, 3) has its own ready queue of processes in memory; the CPU switches between the users' processes.]

Multi-Processor System:
A system with more than one processor, used to maximize system speed.

[Diagram: one user, one ready queue in memory, and several CPUs (CPU 1, 2, 3) executing the processes in parallel.]

Network System:
Systems that operate networks in order to achieve: resource sharing, computation speedup, load balancing, and communication between hosts. Common network topologies are star, bus, and ring.

Real Time System:

Systems that perform critical tasks. Real time systems are divided into hard real time systems and soft real time systems.

Hard real time systems: critical tasks must be completed on time. Soft real time systems: critical tasks get priority over other tasks.

Q: What are the different activities supported by a modern OS? Sol:

Different OS functions are:

1. Process Management.
2. Memory Management.
3. File Management.
4. Storage Management.
5. I/O Management.
6. Protection Management.
7. Networking Management.

Process Management

Q: Define the following: Process, Resource. Sol: Process: a program in execution; it is an active entity in memory, while the program is the passive copy on the hard disk.

[Diagram: the program is a passive file on the hard disk; processes 1 and 2 are active copies of it running in memory.]

Resource: anything in the system that may be used by active processes. Resources may be hardware (printers, tape drives, memory) or software (files, databases).

Resource types:

Preemptive resource: a resource that can be taken away from the process currently using it when a higher priority process arrives. Example: memory.

Non-preemptive resource: a resource that cannot be taken away from the process currently using it, even when a higher priority process arrives. Example: a CD recorder.

Q: Explain why? Memory is a preemptive resource. Sol:

Because a low priority process can be swapped out of memory to the disk when a higher priority process arrives and needs a larger amount of memory than is available. The low priority process can be swapped in again later and resume execution.

Assume a system with a 32 KB memory size. 5 KB are used for the OS and 10 KB for a low priority process, so the available space is 17 KB. A higher priority process arrives and needs 20 KB.

[Diagram: the low priority process (10 KB) is swapped out to disk, raising the available space from 17 KB to 27 KB; the high priority process (20 KB) is swapped in, leaving 7 KB available; when it finishes, the low priority process is swapped back in to resume execution.]

Q: Explain why? The CD recorder is a non-preemptive resource. Sol:

Because: If a process has begun to burn a CD-ROM, suddenly taking the CD recorder away from it and giving it to another process will result in a bad CD.

Q: What are the different steps to utilize a resource by a process? Sol:


A process may utilize a resource in only the following sequence as shown below:

Request → Use → Release

Q: Explain the main differences between the input (job) queue and the ready queue.
Sol: Input (job) queue: holds the programs that will be loaded soon (located on the disk). Ready queue: contains the active processes that are ready to be executed (located in memory).

Q: Explain what is meant by process states, and then draw the process state diagram. Sol: As a process executes, it changes its state. The process state may be:

New: The process is being created. Ready: The process is waiting for the processor. Running: The process instructions are being executed. Waiting: The process is waiting for an event to happen (such as I/O completion).

Terminated: The process has finished execution.

The process state diagram is:

[Diagram: new → ready → running (the dispatcher selects a ready process and gives it the CPU) → terminated; running → waiting when the process requests I/O or an event, waiting → ready when the event completes, and running → ready when the process is preempted.]

Q: Explain why? Each process must have a process control block (PCB) in memory. Sol:

Because:
Process control block (PCB) is the way used to represent the process in the operating system.

It contains information about the process such as: the process state (new, ready, ...), the address of the next instruction to be executed, the process priority, the memory assigned to the process, and I/O information (such as I/O devices and opened files).
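To make these fields concrete, here is a minimal sketch (not part of the original notes) of the kind of record an OS might keep per process; the field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal, illustrative process control block."""
    pid: int
    state: str = "new"             # new, ready, running, waiting, terminated
    program_counter: int = 0       # address of the next instruction to execute
    priority: int = 0
    memory_base: int = 0           # start of the memory assigned to the process
    memory_limit: int = 0          # size of the assigned memory
    open_files: list = field(default_factory=list)
    io_devices: list = field(default_factory=list)

pcb = PCB(pid=1, priority=3)
pcb.state = "ready"                # the OS updates the PCB whenever the process changes state
```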

Q: Show graphically how OS uses the PCB to switch between active processes Sol:

When an interrupt happens, the OS saves the state of the currently running process into its PCB so that it can continue correctly when it resumes execution. To resume execution, the OS reloads the process state from its PCB and continues from where it stopped.

Q: Give a suitable definition for the "Context Switch", and then explain why context switch is a pure overhead? Sol: Context switch is the time needed by OS to switch the processor from a process to another.

Context switch(Px Py) = TX + TY , where

TX : time needed to save the state of Px in PCBx TY : time needed to load the state of PY in PCBY
Context switch is a pure overhead because the system

does no useful work while switching.

Q: What is meant by schedulers? Discuss their different types. Sol: The long-term scheduler (or job scheduler) selects which processes should be brought from the job queue into the ready queue (it determines the degree of multi-programming).

The short-term scheduler (or CPU scheduler) selects which process should be executed next and allocates the CPU to it.

The medium-term scheduler swaps a process out of memory (out of the ready queue) and swaps it in again later (it decreases the degree of multiprogramming).

[Diagram: passive programs on the disk enter the job (input) queue; the long-term scheduler selects processes from the job queue and loads them into the ready queue in memory; the short-term scheduler assigns the CPU to a process from the ready queue; the medium-term scheduler swaps a process from the ready queue back to the disk.]

Q: Explain why?
Long term scheduler increases the degree of multiprogramming. Medium term scheduler decreases the degree of multiprogramming.

Sol: Degree of multi-programming is the number of processes that are placed in the ready queue waiting for execution by the CPU.
[Diagram: the processes currently in memory (the ready queue), here Process 1 to Process 5, define the degree of multi-programming.]

Since the long-term scheduler selects which processes are brought from the job queue on the disk into the ready queue in memory, it increases the degree of multiprogramming.

Since the medium-term scheduler picks some processes from the ready queue and swaps them out of memory back to the job queue, it decreases the degree of multiprogramming.

CPU Scheduling

Q: Explain what is meant by CPU scheduling, and then discuss the difference between the loader and the dispatcher. Sol:
CPU scheduling is the method of selecting a process from the ready queue to be executed by the CPU whenever the CPU becomes idle. The loader brings a program from the disk into memory, while the dispatcher gives the CPU to the process selected by the CPU scheduler from the ready queue.

Some examples: First Come First Served (FCFS) scheduling, Shortest Job First (SJF) scheduling, and Priority scheduling.

[Diagram: the scheduling algorithm (FCFS, SJF, priority) selects a process from the ready queue in memory and the dispatcher hands it to the CPU.]

Q: Explain the main differences between preemptive and non-preemptive scheduling?

Sol:
Preemptive scheduling: the currently executing process can be removed from the CPU when another process with a higher priority arrives and needs to execute.

Non-preemptive scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it releases it.

Q: Discuss in detail what is meant by the following parameters:
CPU utilization.
System throughput.
Turnaround time.
Waiting time.
Response time.

Then discuss which parameters should be maximized and which should be minimized.

Sol:

CPU Utilization:
The percentage of time during which the CPU is busy, out of the total time (time the CPU is busy + time it is idle). Hence, it measures how much benefit we get from the CPU:

CPU Utilization = (Time CPU is busy / Total time) × 100

To maximize utilization, keep the CPU as busy as possible. CPU utilization ranges from about 40% (for lightly loaded systems) to 90% (for heavily loaded systems). (Explain why CPU utilization cannot reach 100%: because of the context switches between active processes.)

System Throughput:
The number of processes that are completed per unit of time (for example, per hour).

Turnaround time:
For a particular process, the total time needed for its execution, from the time of submission to the time of completion. It is the sum of the process execution time and its waiting times (to get memory, perform I/O, ...).

Waiting time:
The waiting time for a specific process is the sum of all the periods it spends waiting in the ready queue.

Response time:
The time from the submission of a process until the first response is produced (the time the process takes to start responding).

It is desirable to:
Maximize: CPU utilization and system throughput.
Minimize: turnaround time, waiting time, and response time.

First Come First Served (FCFS) algorithm

The process that comes first will be executed first. Not preemptive.


Consider the following set of processes, with the length of the CPU burst (Execution) time given in milliseconds: The processes arrive in the order P1, P2, P3. All at time 0.
Process   Burst Time
P1        24
P2        3
P3        3

Gantt chart: | P1 | P2 | P3 |  with boundaries at 0, 24, 27, 30.

Waiting times and turnaround times for each process are:

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        0                   24
P2        24                  27
P3        27                  30

Hence, average waiting time = (0+24+27)/3 = 17 milliseconds.

Note: Turnaround Time = Waiting Time + Execution Time.
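As a sanity check on the arithmetic above, here is a small sketch (not part of the original notes) that computes FCFS waiting and turnaround times for processes that all arrive at time 0; the helper function and process order are illustrative.

```python
def fcfs(bursts):
    """Return (waiting, turnaround) lists for FCFS when all processes arrive at time 0."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # a process waits until all earlier arrivals finish
        clock += burst
        turnaround.append(clock)     # turnaround = waiting time + execution time
    return waiting, turnaround

wt, tat = fcfs([24, 3, 3])           # arrival order P1, P2, P3
print(wt, tat, sum(wt) / len(wt))    # [0, 24, 27] [24, 27, 30] 17.0
```

Calling fcfs([3, 3, 24]) reproduces the 3 millisecond average of the next example, where the short processes arrive first.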

Repeat the previous example, assuming that the processes arrive in the order P2, P3, P1. All at time 0.
Process   Burst Time
P1        24
P2        3
P3        3

Gantt chart: | P2 | P3 | P1 |  with boundaries at 0, 3, 6, 30.

Waiting times and turnaround times for each process are:

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        6                   30
P2        0                   3
P3        3                   6

Hence, average waiting time= (6+0+3)/3=3 milliseconds

Q: Explain why? The FCFS CPU scheduling algorithm introduces a long average waiting time.

Sol: Because it suffers from the convoy effect: if a big process comes first, all other processes must wait for it to finish executing. This results in a long waiting time for the small processes, and accordingly increases the average waiting time.

Shortest-Job-First (SJF) scheduling


When the CPU is available, it is assigned to the process with the smallest CPU burst (non-preemptive). If two processes have the same next CPU burst, FCFS is used.

[Diagram: SJF picks the process with the smallest execution time from the ready queue, e.g. the process needing 5 before those needing 7, 10, and 18; the numbers indicate process execution times.]

Consider the following set of processes, with the length of the CPU burst time given in milliseconds: The processes arrive in the order P1, P2, P3, P4. All at time 0.
Process   Burst Time
P1        6
P2        8
P3        7
P4        3

1. Using FCFS

Gantt chart: | P1 | P2 | P3 | P4 |  with boundaries at 0, 6, 14, 21, 24.

Waiting times and turnaround times for each process are:

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        0                   6
P2        6                   14
P3        14                  21
P4        21                  24

Hence, average waiting time= (0+6+14+21)/4=10.25 milliseconds

2. Using SJF

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

Gantt chart: | P4 | P1 | P3 | P2 |  with boundaries at 0, 3, 9, 16, 24.

Waiting times and turnaround times for each process are:

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        3                   9
P2        16                  24
P3        9                   16
P4        0                   3

Hence, average waiting time= (3+16+9+0)/4=7 milliseconds
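A rough sketch (my own, under the same all-arrive-at-time-0 assumption) of non-preemptive SJF; it sorts by burst time and reproduces the 7 millisecond average above.

```python
def sjf(bursts):
    """Non-preemptive SJF with all processes arriving at time 0.
    bursts: dict of process name -> burst time; ties fall back to dictionary (arrival) order."""
    waiting, clock = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waiting[name] = clock        # the process starts once all shorter jobs are done
        clock += burst
    return waiting

wt = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(wt, sum(wt.values()) / len(wt))    # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16} 7.0
```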

Q: Explain why? SJF CPU scheduling algorithm introduces the minimum average waiting time for a set of processes? Give an example.
Sol: Because: by moving a short process before a long one, the waiting time of the short process decreases more than it increases the waiting time of the long process. Hence, the average waiting time decreases. Example: assuming two processes P1 and P2
Process   Burst Time
P1        30
P2        2

Using FCFS:
Gantt chart: | P1 | P2 |  with boundaries at 0, 30, 32.
Waiting time(P1) = 0, Waiting time(P2) = 30. Average waiting time = (0+30)/2 = 15 milliseconds.

Using SJF:
Gantt chart: | P2 | P1 |  with boundaries at 0, 2, 32.
Waiting time(P1) = 2, Waiting time(P2) = 0. Average waiting time = (2+0)/2 = 1 millisecond.

Shortest-Remaining-Time-First (SRTF)

It is a preemptive version of Shortest Job First. It allows a newly arriving process to take the processor if its execution time is less than the remaining time of the currently running process.

Consider the following set of processes, with the length of the CPU burst time given in milliseconds: The processes arrive in the order P1, P2, P3, P4. as shown in table.
Process   Burst Time   Arrival Time
P1        7            0
P2        4            2
P3        1            4
P4        4            5

1. Using SJF

Gantt chart: | P1 | P3 | P2 | P4 |  with boundaries at 0, 7, 8, 12, 16.

Waiting times and turnaround times for each process are:

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        0                   7
P2        6                   10
P3        3                   4
P4        7                   11

Hence, average waiting time= (0+6+3+7)/4=4 milliseconds

2. Using SRTF

Process   Burst Time   Arrival Time
P1        7            0
P2        4            2
P3        1            4
P4        4            5

Gantt chart: | P1 | P2 | P3 | P2 | P4 | P1 |  with boundaries at 0, 2, 4, 5, 7, 11, 16.

Waiting times and turnaround times for each process are:

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        9                   16
P2        1                   5
P3        0                   1
P4        2                   6

Hence, average waiting time= (9+1+0+2)/4=3 milliseconds
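A rough millisecond-by-millisecond simulation of SRTF (my own sketch, not from the notes); it reproduces the 3 millisecond average above.

```python
def srtf(procs):
    """Shortest Remaining Time First, simulated in 1 ms steps.
    procs: dict of name -> (arrival, burst). Returns the waiting time of each process."""
    remaining = {name: burst for name, (arrival, burst) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = {n: r for n, r in remaining.items() if procs[n][0] <= clock}
        if not ready:                         # nothing has arrived yet: let time pass
            clock += 1
            continue
        current = min(ready, key=ready.get)   # smallest remaining time gets the CPU
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            finish[current] = clock
            del remaining[current]
    return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

wt = srtf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)})
print(wt, sum(wt.values()) / len(wt))         # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2} 3.0
```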

Priority scheduling

A priority number (an integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer). There are two types: preemptive and non-preemptive.

[Diagram: priority scheduling picks the process with the highest priority from the ready queue; the numbers indicate process priorities.]

Problems with Priority scheduling


Problem: starvation (indefinite blocking): low priority processes may never execute.
Solution: aging: as time progresses, the priority of a waiting process is gradually increased.

[Diagram: without aging, a very low priority process starves; with aging, its priority improves over time until it is eventually scheduled.]

Consider the following set of processes, with the length of the CPU burst time given in milliseconds:
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

The processes arrive in the order P1, P2, P3, P4, P5, all at time 0.

1. Using priority scheduling

Gantt chart: | P2 | P5 | P1 | P3 | P4 |  with boundaries at 0, 1, 6, 16, 18, 19.

Waiting times and turnaround times for each process are:

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        6                   16
P2        0                   1
P3        16                  18
P4        18                  19
P5        1                   6

Hence, average waiting time= (6+0+16+18+1)/5=8.2 milliseconds
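A small sketch (mine, not from the notes) of non-preemptive priority scheduling with all arrivals at time 0; a smaller number means a higher priority, and it reproduces the 8.2 millisecond average.

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all processes arriving at time 0.
    procs: dict of name -> (burst, priority); the smallest priority number runs first."""
    waiting, clock = {}, 0
    for name, (burst, prio) in sorted(procs.items(), key=lambda kv: kv[1][1]):
        waiting[name] = clock
        clock += burst
    return waiting

wt = priority_schedule({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
                        "P4": (1, 5), "P5": (5, 2)})
print(wt, sum(wt.values()) / len(wt))
# {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18} 8.2
```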

Round Robin scheduling


Allocate the CPU for one quantum of time (also called a time slice) Q to each process in the ready queue. This scheme is repeated until all processes are finished. A new process is added to the end of the ready queue.

Consider the following set of processes, with the length of the CPU burst time given in milliseconds. The processes arrive in the order P1, P2, P3, all at time 0. Use RR scheduling with Q=4 and Q=2.

Process   Burst Time
P1        24
P2        3
P3        3

RR with Q=4

Gantt chart: | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |  with boundaries at 0, 4, 7, 10, 14, 18, 22, 26, 30.

Waiting times and turnaround times for each process are:

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        6                   30
P2        4                   7
P3        7                   10

Hence, average waiting time= (6+4+7)/3=5.66 milliseconds

RR with Q=2

Process   Burst Time
P1        24
P2        3
P3        3

Gantt chart: | P1 | P2 | P3 | P1 | P2 | P3 | P1 | ... | P1 |  with boundaries at 0, 2, 4, 6, 8, 9, 10, 12, ..., 30.

Waiting times and turnaround times for each process are:

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        6                   30
P2        6                   9
P3        7                   10

Hence, average waiting time= (6+6+7)/3=6.33 milliseconds
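A rough Round Robin simulation (my own sketch) that reproduces both averages above; the quantum is a parameter, and all processes are assumed to arrive at time 0.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin for processes that all arrive at time 0.
    bursts: dict of name -> burst time. Returns the waiting time of each process."""
    queue = deque(bursts.items())
    finish, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)               # run one quantum or until the process ends
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # back to the end of the ready queue
        else:
            finish[name] = clock
    return {n: finish[n] - bursts[n] for n in bursts}

for q in (4, 2):
    wt = round_robin({"P1": 24, "P2": 3, "P3": 3}, q)
    print(q, wt, round(sum(wt.values()) / len(wt), 2))
# 4 {'P1': 6, 'P2': 4, 'P3': 7} 5.67
# 2 {'P1': 6, 'P2': 6, 'P3': 7} 6.33
```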

Explain why? If the quantum time decreases, this slows down the execution of the processes.
Sol:
Because decreasing the quantum time increases the number of context switches (the time the processor spends switching between the processes in the ready queue), which increases the time needed to finish the execution of the active processes; hence, it slows down the system.

Multi-level queuing scheduling


There are two types: without feedback (processes cannot move between queues) and with feedback (processes can move between queues).

Multi-level queuing without feedback:

Divide the ready queue into several queues, from a high priority queue down to a low priority queue.
Each queue has a specific priority and its own scheduling algorithm (FCFS, ...).

Multi-level queuing with feedback:

Divide the ready queue into several queues (queue 0, queue 1, queue 2, ...). Each queue has a specific quantum time, and processes are allowed to move between queues.

Deadlock

Q: Give a suitable definition for the deadlock problem. Sol:

Deadlock:
A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.

[Diagram: process A and process B each hold one resource and each asks the other for its resource ("give me your resource first"), so the two are deadlocked.]

Hence, the blocked processes will never change state (Explain why?): because the resource each has requested is held by another waiting process.

A deadlock can lead to a system breakdown.

Q: Discuss briefly the different deadlock conditions. Sol: Deadlock arises if four conditions hold simultaneously:

The four deadlock conditions are: mutual exclusion, hold and wait, no preemption, and circular wait.

1. Mutual exclusion: only one process can use a resource at a time.

2. Hold and wait: a process holding at least one resource is waiting for additional resources held by other processes.

3. No preemption: a resource is released only by the process holding it, after that process has completed its task.

4. Circular wait: a set of processes in which each process waits for a resource held by another, in a circular fashion.

Note: all four conditions must hold simultaneously for a deadlock to occur. If any one of them is absent, a deadlock cannot arise.

Circular wait:
There exists a set {P0, P1, P2, ..., Pn} of waiting processes such that:
P0 is waiting for a resource held by P1,
P1 is waiting for a resource held by P2,
P2 is waiting for a resource held by P3,
..., and
Pn is waiting for a resource held by P0.

Deadlock Modeling (Resource Allocation Graph)


In order to solve the deadlock problem, we must find a method to express it. This can be achieved using a resource allocation graph.

Resource Allocation Graph

It is a graph expressing:
- all active processes in the system;
- the available system resources;
- the interconnections between the active processes and the system resources.

Contents of Resource Allocation Graph


P (process set): the set of processes in the system, P = {P1, P2, P3, ..., Pn}.

R (resource set): the set of resource types in the system, R = {R1, R2, R3, ..., Rm}.

Edges:
Pi → Rj (request edge): process Pi requests an instance of resource Rj.
Rj → Pi (assignment edge): an instance of resource Rj is assigned to process Pi.

E (edge set): the set of all edges in the resource allocation graph, E = {Pn → Rm, Rx → Py, ...}.

Example:

Resource instances: one instance of R1 and of R3, two instances of R2, and three instances of R4.

The sets P, R, and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}

Process states: P1 holds an instance of R2 and is waiting for R1; P2 holds an instance of R1 and an instance of R2 and is waiting for R3; P3 holds an instance of R3.

Q: Explain why: although the graph contains a cycle, the system may not be in a deadlock state.
Sol: Case 1: the system has one instance per resource type.

If a cycle exists, the system is in a deadlock state, and each process involved in the cycle is deadlocked.

Cycle (deadlock): P1 → R1 → P2 → R2 → P1

As shown, there is no chance to break the cycle, because no process can finish execution and there is no way to free a resource.
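With one instance per resource type, deadlock detection therefore reduces to finding a cycle in the resource allocation graph. Below is a rough depth-first-search sketch of my own (not from the notes); the graph encodes the cycle above, with request edges P → R and assignment edges R → P.

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successor nodes]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited, on the current DFS path, finished
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:                 # back edge: a cycle exists
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[node] == WHITE and visit(node) for node in graph)

# Resource allocation graph of Case 1: P1 requests R1, R1 is assigned to P2,
# P2 requests R2, and R2 is assigned to P1.
rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(rag))   # True: with single-instance resources this cycle means deadlock
```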

Case 2: The system has more than one instance per resource type
If a cycle exists, the system may or may not be in a deadlock state.

Ex 1: a cycle with deadlock.

Two cycles exist:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2

As shown, P1, P2, and P3 are deadlocked.

Ex 2: a cycle without deadlock.

Cycle: P1 → R1 → P3 → R2 → P1

There is no deadlock because P4 may release its instance of R2; that instance can then be allocated to P3, which breaks the cycle. Likewise, P2 may release its instance of R1, which can then be allocated to P1.

Deadlock Avoidance
Q: Describe in detail the basic rules used for deadlock avoidance. Sol: Avoidance examines the system state before granting a request:

A safe state means no deadlock; an unsafe state means a possibility of deadlock.

Avoidance ensures that the system will never enter an unsafe state.

The system is in a safe state if there exists a safe sequence of all processes <P1, P2, P3, ..., Pn> such that each process Pi can be executed using (1) the resources it already holds, plus (2) the system's currently available resources, plus (3) the resources held by P1, ..., Pi-1, which will be released as those earlier processes finish.

Avoidance Algorithms: the Resource Allocation Graph Algorithm

1. A claim edge Pi --> Rj indicates that process Pi may request resource Rj in the future; it is represented by a dashed line.

2. A claim edge converts to a request edge when the process actually requests the resource.

3. A request edge converts to an assignment edge when the resource is allocated to the process.

Suppose that process Pi requests resource Rj. The request can be granted only if converting the request edge into an assignment edge does not result in a cycle in the resource allocation graph.

Case study: consider the following resource allocation graph (the figure is not reproduced here). Suppose that P2 requests R2. Although R2 is free, we cannot allocate it to P2, since doing so would create a cycle in the graph.

Consider the following resource allocation graph.

[Diagram: R1 and R3 are assigned to P1, R4 is assigned to P3, and R2 is free; the maximum needs of P1, P2, and P3 are listed in the table below.]

Is the system in a safe state? What is the safe sequence (if it exists)? Assume that at time T1, P3 requests R2; is it reasonable to grant this request?

Steps:
1. Find the available resources.
2. Construct the table: Process, Max. Need, Hold.
3. Find the safe sequence.

Available resources: R2

Process   Max. Need            Hold
P1        R1, R2, R3           R1, R3
P2        R3, R4               ----
P3        R1, R2, R3, R4       R4

Safe sequence: <P1, P3, P2>, so the system is in a safe state.
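A rough sketch (mine, not from the notes) of the safety check carried out above, with single-instance resources tracked as sets; here "need" is each process's maximum need minus what it already holds, taken from the table.

```python
def find_safe_sequence(need, hold, available):
    """Return a safe sequence of process names if one exists, otherwise None."""
    need = {p: set(r) for p, r in need.items()}
    hold = {p: set(r) for p, r in hold.items()}
    available, sequence = set(available), []
    remaining = set(need)
    while remaining:
        runnable = next((p for p in remaining if need[p] <= available), None)
        if runnable is None:
            return None                    # no process can finish with what is free: unsafe
        available |= hold[runnable]        # the process finishes and releases what it holds
        sequence.append(runnable)
        remaining.discard(runnable)
    return sequence

need = {"P1": {"R2"}, "P2": {"R3", "R4"}, "P3": {"R1", "R2", "R3"}}   # max need minus hold
hold = {"P1": {"R1", "R3"}, "P2": set(), "P3": {"R4"}}
print(find_safe_sequence(need, hold, {"R2"}))   # ['P1', 'P3', 'P2'] -> safe state
```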

[Diagram: if R2 were assigned to P3 and P1 later requested R2, a cycle would form in the graph.]

At T1 (P3 requests R2):

It is not reasonable to grant the request, because the system would then be in an unsafe state (granting it may result in a future cycle if P1 requests R2).

Recovery From Deadlock

There are two approaches:

1. Terminate deadlocked processes: either terminate all deadlocked processes, or terminate them one by one until the deadlock is eliminated.

2. Free some resources: take some resources away from the deadlocked processes and give them to other processes until the deadlock is eliminated.

Memory Management Techniques

Q: Explain what is meant by memory management, then discuss why we manage memory. Sol:

Memory management is how to organize the active processes (the processes currently in the ready queue) so that:
1. processes can be easily reached;
2. memory space utilization is maximized.

Q: Show how a loader stores an executable file into memory, assuming a file of size = 20 memory words (instructions and data) and using the contiguous allocation method. Repeat the problem three times using: first fit, best fit, and worst fit. Sol:

The loader searches memory for a contiguous block (hole) large enough for the file: with first fit it takes the first hole that is big enough, with best fit the smallest hole that is big enough, and with worst fit the largest hole. Once the file is loaded, the base register holds the start address of the process and the limit register holds its size in words, so that logical addresses can be mapped to physical addresses.

Using First Fit:

[Memory diagram: the OS occupies addresses 0-999, and several processes with free holes between them fill the rest of memory; with first fit, the 20-word file is loaded into the first hole that is large enough, starting at address 1040, so the base register = 1040 and the limit register = 20.]

Using Best Fit:

[Memory diagram: with best fit, the 20-word file is loaded into the smallest hole that is large enough, starting at address 1100, so the base register = 1100 and the limit register = 20.]

Using Worst Fit:

[Memory diagram: with worst fit, the 20-word file is loaded into the largest hole, starting at address 1150, so the base register = 1150 and the limit register = 20.]

Q: Explain what is meant by bootstrapping, then what is the difference between a loader and the bootstrap loader? Sol:

Bootstrapping:
The actions taken from the moment a computer is first powered on until it is ready to be used. The computer reads a program from a ROM (Read Only Memory) that is installed by the manufacturer and contains the bootstrap program together with some other routines that control the hardware (the BIOS).

Loader:
A part of the OS. It performs loading and relocation for user programs.

Bootstrap Loader:
An absolute loader. It is executed when the computer is turned on or restarted, and it loads the first program to be run by the computer (usually the OS). It loads that program at address 0 in memory (which is why it can be an absolute loader).

Q: Explain why? The bootstrap loader is an absolute (simple) loader. Sol: Assume the OS requires 1000 words (addresses 0 to 999).
As shown below, the range of logical addresses in the executable copy of the OS on disk is the same as the range of physical addresses in memory, so no relocation is needed.

[Diagram: the OS image on disk occupies logical addresses 0-999 and is loaded by the bootstrap loader into physical addresses 0-999 in memory; no relocation is required.]

Q: Explain in detail how to manage memory in a multi-programming environment. Sol: Memory management in a multi-programming environment uses: swapping, contiguous allocation, and paging.

Swapping:
Q: Explain what is meant by swapping, and give some examples of swapping. Sol: A process can be swapped out of memory to the disk, and then brought back into memory for continued execution.

Ex1: a multi-programming environment with priority scheduling.

Assume a system with a 32 KB memory size. 5 KB are used for the OS and 10 KB for a low priority process, so the available space is 17 KB. A higher priority process arrives and needs 20 KB.

[Diagram: the low priority process (10 KB) is swapped out to disk, raising the available space from 17 KB to 27 KB; the high priority process (20 KB) is swapped in, leaving 7 KB available; when it finishes, the low priority process is swapped back in to resume execution.]

Contiguous Allocation
Q: Explain what is meant by contiguous allocation, and what are its different types?
Sol:

In contiguous allocation, each process is contained in a single contiguous section of memory.

There are two methods for contiguous allocation: the simple method and the general method.

Simple Method
Divide memory into several fixed-sized partitions; each partition contains one process.

The degree of multiprogramming is bounded by the number of partitions.

When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.

This method is no longer used (Explain why?) because it has various drawbacks:
1. the degree of multiprogramming is bounded by the number of partitions;
2. internal fragmentation.

Assuming a memory of 30 KB divided into three partitions, as follows:

[Diagram: three 10 KB partitions hold Process 1 (7 KB), Process 2 (9 KB), and Process 3 (8 KB); the unused space inside each partition is internal fragmentation. Further processes (4 KB, 8 KB, 9 KB, 7 KB) wait in the input queue on the disk.]

As shown, this method suffers from internal fragmentation, and the degree of multiprogramming is bounded to 3 although it could be 4.

General Method
Initially, all memory is available for user processes and is considered one large block of available memory. When a process arrives and needs memory, we search for a hole large enough for this process using first fit, best fit, or worst fit.

If we find one, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests.

There are three different methods to find a suitable hole for a process:

First fit: allocate the first hole that is big enough (the fastest method).
Best fit: allocate the smallest hole that is big enough (produces the smallest leftover hole).
Worst fit: allocate the largest hole (produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach).
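A minimal sketch (mine, with hypothetical hole sizes) of how the three policies choose among the free holes.

```python
def choose_hole(holes, size, policy):
    """Return the index of the free hole chosen for a request of `size`, or None if none fits.
    holes: list of hole sizes in memory order; policy: 'first', 'best', or 'worst'."""
    candidates = [(i, h) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if policy == "first":
        return candidates[0][0]                         # first hole big enough
    if policy == "best":
        return min(candidates, key=lambda c: c[1])[0]   # smallest hole big enough
    if policy == "worst":
        return max(candidates, key=lambda c: c[1])[0]   # largest hole
    raise ValueError(policy)

holes = [30, 25, 50, 45]            # hypothetical free-hole sizes, in memory order
for policy in ("first", "best", "worst"):
    print(policy, choose_hole(holes, 20, policy))   # first -> 0, best -> 1, worst -> 2
```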

[Diagram: processes of various sizes are loaded into memory wherever a suitable hole is found, while the remaining processes wait in the input queue on the disk.]

As shown, the degree of multiprogramming changes according to the number of processes in memory (in the ready queue).

After a period of time, external fragmentation appears.

[Diagram: after some processes terminate, small free holes are scattered between the remaining processes; these holes are external fragmentation.]

As shown, this method suffers from external fragmentation.

Compaction

[Diagram: the memory contents are shuffled so that the allocated processes sit together and all free memory forms one large hole that can store a new process.]

Compaction is the movement of memory contents to place all free memory together in one large block, sufficient to store a new process. It is a solution for external fragmentation, but it is expensive and not always possible.

Paging
Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous.
It is commonly used in most operating systems. Divide physical memory into fixed-sized blocks called frames. Divide each process into blocks of the same size called pages. Use a page table, which contains the base address (frame) of each page in physical memory.

Example
A process P is divided into 4 pages (numbered 0 to 3), so the process will be loaded into 4 frames.

The page number is used as an index into the page table, and the page table contains the frame number for each page.

[Diagram: pages 0-3 of process P are mapped through the page table to frames in physical memory.]

Paging Example

A 32-byte memory, where each memory word has size 1 byte (it can store only one character), and the size of a page (and of a frame) is 4 bytes. Show how to store a 4-page process into memory using a page table. According to your page table, what are the physical addresses corresponding to the logical addresses 4 and 13?

Sol: Memory size = 32 bytes = 32 words. Page size = frame size = 4 bytes = 4 words. Number of frames = 32/4 = 8 frames (numbered 0 to 7).

[Diagram: the four pages of the process are placed into four of the eight frames; the page table maps each logical address (page number, offset) to a physical address (frame number, offset).]
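A minimal sketch of the translation the page table performs. The frame assignments below are hypothetical (the original figure is not reproduced here); with a page size of 4, logical address 4 is page 1 at offset 0, and logical address 13 is page 3 at offset 1.

```python
def translate(logical, page_table, page_size=4):
    """Translate a logical address to a physical address through the page table."""
    page, offset = divmod(logical, page_size)   # page number and offset within the page
    frame = page_table[page]                    # frame that holds this page
    return frame * page_size + offset

page_table = {0: 5, 1: 6, 2: 1, 3: 2}           # hypothetical page -> frame mapping
print(translate(4, page_table))                 # page 1, offset 0 -> frame 6 -> address 24
print(translate(13, page_table))                # page 3, offset 1 -> frame 2 -> address 9
```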

Advantages of paging:
1. No external fragmentation.
2. Allows the components of a process to be noncontiguous.

Problems in paging:
1. A possibility of internal fragmentation, i.e. space that cannot be used.

Q: Explain when internal fragmentation occurs when using the paging technique. Sol: When the contents of the last page of the process are smaller than the frame size, the remaining part of that frame is internal fragmentation that cannot be used.
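A tiny illustration of this, with hypothetical numbers: only the last frame can be partly unused.

```python
def paging_internal_fragmentation(process_size, page_size):
    """Frames needed and bytes wasted in the last frame for a process of a given size."""
    frames = -(-process_size // page_size)        # ceiling division
    waste = frames * page_size - process_size     # unused part of the last frame
    return frames, waste

print(paging_internal_fragmentation(13, 4))       # (4, 3): 4 frames, 3 bytes unused
```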

Any Questions?
