
Operating System

Concurrent Process

Presented By:- Dr. Sanjeev Sharma


Process Concept
• A process is a program in execution. A process is not the same as the program code; it is much more than that. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. Attributes held by a process include the hardware state, memory, CPU, and so on.
• To put it simply, we write a computer program in a text file, and when we execute this program, it becomes a process which performs all the tasks described in the program.

• When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data.
Process Section
• Stack:- The process stack contains temporary data such as method/function parameters, return addresses and local variables.
• Heap:- This is memory dynamically allocated to the process during its run time.
• Text:- This section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
• Data:- This section contains the global and static variables.
Process State
• When a process executes, it passes through different states. These states may differ between operating systems, and their names are also not standardized.
• In general, a process can be in one of the following five states at a time.

• New:- This is the initial state, when a process is first started/created.

• Ready:- The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run. A process may come into this state after the New state, or while running, when the scheduler interrupts it to assign the CPU to some other process.
• Running:-Once the process has been assigned to a processor by the OS
scheduler, the process state is set to running and the processor executes its
instructions.
• Waiting:- A process moves into the waiting state if it needs to wait for a resource, such as user input or a file becoming available.

• Terminated or Exit:- Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
Diagram of Process State
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process.
Information associated with each process
• Process state:-The current state of the process i.e., whether it is
ready, running, waiting, or whatever.
• Program counter:- Program Counter is a pointer to the address of
the next instruction to be executed for this process.
• CPU registers:- The contents of the various CPU registers, which must be saved when the process leaves the running state so that it can later resume execution.
• CPU scheduling information:- Process priority and other scheduling information required to schedule the process.
• Memory-management information:- This includes page tables, memory limits and segment tables, depending on the memory system used by the operating system.

• Accounting information:- This includes the amount of CPU time used for process execution, time limits, execution ID, etc.

• I/O status information:- This includes the list of I/O devices allocated to the process.
• The PCB is maintained for a process throughout its lifetime,
and is deleted once the process terminates.
Process Control Block
CPU Switch from Process to Process
Process Scheduling Queues
• The OS maintains all PCBs in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs
of all processes in the same execution state are placed in the same
queue. When the state of a process is changed, its PCB is unlinked
from its current queue and moved to its new state queue.
• The Operating System maintains the following important process
scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in
main memory, ready and waiting to execute. A new process is
always put in this queue.
• Device queues − The processes which are blocked due to
unavailability of an I/O device constitute this queue.
• Schedulers are special system software which handle process
scheduling in various ways. Their main task is to select the
jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long Term Scheduler
• It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory, where they become available for CPU scheduling.
• The primary objective of the job scheduler is to provide a
balanced mix of jobs, such as I/O bound and processor bound.
It also controls the degree of multiprogramming. If the degree
of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of
processes leaving the system.
Short Term Scheduler
• It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change from the ready state to the running state of a process. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.

• Short-term schedulers, also known as dispatchers, decide which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
• Medium-term scheduling is a part of swapping. It removes processes from memory and so reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
• A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this situation, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Ready Queue vs I/O queue
Representation of Process Scheduling
Action of Medium Term Scheduler
Context Switch
• When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process.

• Context-switch time is overhead; the system does no useful work while switching.

• The time taken depends on hardware support.


Inter Process Communication
• A process can be of two types:
• Independent process.
• Co-operating process.
• An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes.
• One might think that processes running independently would execute very efficiently, but in practice there are many situations where the co-operative nature of processes can be exploited to increase computational speed, convenience and modularity.
• Inter-process communication (IPC) is a mechanism which allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them.
• There are several reasons for providing an environment that allows process co-operation:
• Information sharing: Since a number of users may be interested in the same piece of information (for example, a shared file), an environment must be provided that allows concurrent access to that information.
• Computation speedup: If we want a particular task to run faster, we must break it into sub-tasks, each of which executes in parallel with the others. Note that such a speed-up can be achieved only if the computer has multiple processing elements, such as CPUs or I/O channels.
• Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
• Convenience: Even an individual user may work on many tasks at the same time. For example, a user may be editing, formatting, printing, and compiling in parallel.
• Co-operating processes require an inter-process communication (IPC) mechanism that allows them to exchange data and information. There are two primary models of inter-process communication:
– shared memory and
– message passing.
• In the shared-memory model, a region of memory shared by the co-operating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the co-operating processes.
Process Synchronization
• Concurrent access to shared data may result in data inconsistency.

• Maintaining data consistency requires mechanisms to ensure the orderly execution of co-operating processes.

• Suppose that we want a solution to the producer-consumer problem that can use all the buffers. We can do so by keeping an integer counter that tracks the number of full buffers. Initially, counter is set to 0. It is incremented by the producer after it produces a new buffer and decremented by the consumer after it consumes a buffer.
Producer- Consumer Problem solution
using Counter Variable
Code For Producer Process

while (true) {
    /* produce an item and put it in nextProduced */
    while (counter == BUFFER_SIZE)
        ;   // do nothing: buffer full
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Code for Consumer Process

while (true) {
    while (counter == 0)
        ;   // do nothing: buffer empty
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}
Race Condition
• A race condition is a special condition that may occur inside a
critical section. A critical section is a section of code that is executed
by multiple threads and where the sequence of execution for the
threads makes a difference in the result of the concurrent execution
of the critical section.
• When the result of multiple threads executing a critical section may
differ depending on the sequence in which the threads execute, the
critical section is said to contain a race condition. The term race
condition stems from the metaphor that the threads are racing
through the critical section, and that the result of that race impacts
the result of executing the critical section.
• This may all sound a bit complicated, so I will elaborate more on
race conditions and critical sections in the following sections.
• To prevent race conditions, we must make sure that the critical section is executed as an atomic instruction: once a single thread is executing it, no other thread can execute it until the first thread has left the critical section.

• Race conditions can be avoided by proper thread synchronization in critical sections.
Race Condition
• counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1

• counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
Consider this execution interleaving with “counter= 5” initially:
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6}
S5: consumer execute counter = register2 {counter = 4}
Critical Section
• A critical section is a region of code in which a process uses a
variable (which may be an object or some other data structure)
that is shared with another process (e.g. the “code” that read,
modified, and wrote an account balance in the example you
did.)
• Problems can arise if two processes are in critical sections
accessing the same variable at the same time.
• The critical section problem refers to the problem of how to
ensure that at most one process is executing its critical section
at a given time.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections
2. Progress - If no process is executing in its critical section and
there exist some processes that wish to enter their critical
section, then the selection of the processes that will enter the
critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of
times that other processes are allowed to enter their critical
sections after a process has made a request to enter its critical
section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the N processes
Peterson's Solution
• Peterson's Solution is a classic software-based solution to the critical
section problem.
• Peterson's solution is based on two processes, P0 and P1, which
alternate between their critical sections and remainder sections. For
convenience of discussion, "this" process is Pi, and the "other"
process is Pj. ( I.e. j = 1 - i )
• Peterson's solution requires two shared data items:
• int turn - Indicates whose turn it is to enter into the critical section.
If turn = = i, then process i is allowed into their critical section.
• boolean flag[ 2 ] - Indicates when a process wants to enter into their
critical section. When process i wants to enter their critical section,
it sets flag[ i ] to true.
Peterson’s Solution for Process i
• In the entry section, process i first raises a flag indicating its desire to enter the critical section.

• Then turn is set to j, to allow the other process to enter its critical section if process j so desires.

• The while loop is a busy loop ( notice the semicolon at the end ), which makes process i wait as long as process j has the turn and wants to enter the critical section.

• Process i lowers flag[ i ] in the exit section, allowing process j to continue if it has been waiting.
• To prove that the solution is correct, we must examine the three conditions
listed above:
– Mutual exclusion - If one process is executing their critical section when the other
wishes to do so, the second process will become blocked by the flag of the first
process. If both processes attempt to enter at the same time, the last process to
execute "turn = j" will be blocked.
– Progress - Each process can only be blocked at the while if the other process wants
to use the critical section ( flag[ j ] = = true ), AND it is the other process's turn to
use the critical section ( turn = = j ). If both of those conditions are true, then the
other process ( j ) will be allowed to enter the critical section, and upon exiting the
critical section, will set flag[ j ] to false, releasing process i. The shared variable turn
assures that only one process at a time can be blocked, and the flag variable allows
one process to release the other when exiting their critical section.
– Bounded Waiting - As each process enters its entry section, it sets the turn variable to the other process's turn. Since no process ever sets it back to its own turn, this ensures that each process will have to let the other process go first at most one time before it becomes its turn again.
• Note that the instruction "turn = j" is atomic, that is it is a single machine
instruction which cannot be interrupted.
Semaphore
• In 1965, Dijkstra proposed a new and very significant technique for managing concurrent processes, using the value of a simple integer variable to synchronize the progress of interacting processes. This integer variable is called a semaphore. It is basically a synchronizing tool, and is accessed only through two standard atomic operations, wait and signal, designated by P() and V() respectively.
• Entry to the critical section is controlled by the wait operation, and exit from the critical section is taken care of by the signal operation.
• The manipulation of a semaphore S takes place as follows:
• The wait operation P(S) decrements the semaphore value by 1.
• The signal operation V(S) increments the semaphore value by 1.
• Mutual exclusion on the semaphore is enforced within P(S) and V(S): if a number of processes attempt P(S) simultaneously, only one process will be allowed to proceed, and the other processes will wait.
Wait and Signal function
• In practice, semaphores can take on one of two forms:
• Binary semaphores can take on one of two values, 0 or 1. They can
be used to solve the critical section problem as described above, and
are sometimes known as mutexes, because they provide mutual
exclusion.
• Counting semaphores can take on any integer value, and are usually used to count the remaining number of some limited resource. The counter is initialized to the number of such resources available in the system; whenever the counting semaphore is greater than zero, a process can enter a critical section and use one of the resources. When the counter reaches zero ( or goes negative, in some implementations ), the process blocks until another process frees a resource and increments the counting semaphore with a signal call. ( The binary semaphore can be seen as the special case where the number of resources initially available is just one. )
Semaphore Implementation
• We must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.

• Thus, the implementation itself becomes a critical-section problem, in which the wait and signal code are placed in the critical section.
– This could now introduce busy waiting into the critical-section implementation
• But the implementation code is short
• There is little busy waiting if the critical section is rarely occupied

• Note that applications may spend a lot of time in critical sections, so for them this is not a good solution.
Semaphore Implementation with no Busy waiting

• With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items:
– value (of type integer)
– pointer to the next record in the list

• Two operations:
– block – place the process invoking the operation in the appropriate waiting queue.
– wakeup – remove one of the processes from the waiting queue and place it in the ready queue.
Semaphore Implementation with no Busy waiting (Cont.)

• Implementation of wait:

wait(S) {
    value--;
    if (value < 0) {
        add this process to the waiting queue
        block();
    }
}

• Implementation of signal:

signal(S) {
    value++;
    if (value <= 0) {
        remove a process P from the waiting queue
        wakeup(P);
    }
}
Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for
an event that can be caused by only one of the waiting
processes
• Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
• Starvation – indefinite blocking. A process may never be
removed from the semaphore queue in which it is suspended.
Classical Problems of Synchronization

• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem
• This is a generalization of the producer-consumer problem
wherein access is controlled to a shared group of buffers of a
limited size. In this solution, the two counting semaphores
"full" and "empty" keep track of the current number of full and
empty buffers respectively ( and initialized to 0 and N
respectively. )
• The binary semaphore mutex controls access to the critical
section.
• The producer and consumer processes are nearly identical -
One can think of the producer as producing full buffers, and
the consumer producing empty buffers Semaphore mutex
initialized to the value 1
• Semaphore full initialized to the value 0
• Semaphore empty initialized to the value N.
Bounded Buffer Problem (Cont.)
Bounded Buffer Problem (Cont.)
Readers-Writers Problem
• In the readers-writers problem there are some processes ( termed readers ) who only
read the shared data, and never change it, and there are other processes ( termed
writers ) who may change the data in addition to or instead of reading it. There is no
limit to how many readers can access the data simultaneously, but when a writer
accesses the data, it needs exclusive access.
• There are several variations to the readers-writers problem, most centered around
relative priorities of readers versus writers. The first readers-writers problem gives
priority to readers. In this problem, if a reader wants access to the data, and there is
not already a writer accessing it, then access is granted to the reader. A solution to
this problem can lead to starvation of the writers, as there could always be more
readers coming along to access the data. ( A steady stream of readers will jump
ahead of waiting writers as long as there is currently already another reader
accessing the data, because the writer is forced to wait until the data is idle, which
may never happen if there are enough readers. )
• The second readers-writers problem gives priority to the writers. In this problem,
when a writer wants access to the data it jumps to the head of the queue - All
waiting readers are blocked, and the writer gets access to the data as soon as it
becomes available. In this solution the readers may be starved by a steady stream of
writers.
Readers-Writers Problem (Cont.)
• The following code is an example of the first readers-writers problem, and involves a counter and two binary semaphores:
• readcount is used by the reader processes to count the number of readers currently accessing the data.
• mutex is a semaphore used only by the readers, for controlled access to readcount.
• rw_mutex is a semaphore used to block and release the writers. The first reader to access the data will set this lock and the last reader to exit will release it; the remaining readers do not touch rw_mutex. ( The eighth edition called this variable wrt. )
• Note that the first reader to come along will block on rw_mutex if a writer is currently accessing the data, and that all following readers will block only on mutex, waiting for their turn to increment readcount.
Readers-Writers Problem (Cont.)
Dining-Philosophers Problem
• The dining philosophers problem is a classic synchronization
problem involving the allocation of limited resources amongst a
group of processes in a deadlock-free and starvation-free manner:
Consider five philosophers sitting around a table, in which there are
five chopsticks evenly distributed and an endless bowl of rice in the
center, as shown in the diagram below. ( There is exactly one
chopstick between each pair of dining philosophers. )
• These philosophers spend their lives alternating between two
activities: eating and thinking.
• When it is time for a philosopher to eat, it must first acquire two
chopsticks - one from their left and one from their right.
• When a philosopher thinks, it puts down both chopsticks in their
original locations.
Dining-Philosophers Problem (Cont.)

• One possible solution, as shown in the following code section, is to use a set of five semaphores ( chopsticks[ 5 ] ), and to have each hungry philosopher first wait on their left chopstick ( chopsticks[ i ] ), and then wait on their right chopstick ( chopsticks[ ( i + 1 ) % 5 ] ).
• But suppose that all five philosophers get hungry at the same time, and each starts by picking up their left chopstick. They then look for their right chopstick, but because it is unavailable they wait for it forever, and eventually all the philosophers starve due to the resulting deadlock.
• Some potential solutions to the problem include: Only allow
four philosophers to dine at the same time. ( Limited
simultaneous processes. )
• Allow philosophers to pick up chopsticks only when both are
available, in a critical section. ( All or nothing allocation of
critical resources. )
• Use an asymmetric solution, in which odd philosophers pick
up their left chopstick first and even philosophers pick up their
right chopstick first. ( Will this solution always work? What if
there are an even number of philosophers? )
