
MCT510 Distributed Systems

MSc Programme
Introduction to Distributed Mutual Exclusion

page 1
Objectives

This lecture has the following objective:

- Introduce distributed mutual exclusion algorithms


 Classifications
 Central coordinator
 Ricart and Agrawala
 Suzuki and Kasami

page 2
Motivation

Concurrency & collaboration among processes is fundamental in
distributed systems

Processes need to simultaneously access shared and indivisible
resources (e.g., a printer, a data source such as a file or a database)

page 3
Motivation

Mutual exclusion is required to prevent interference & ensure
consistency
- This is the critical section problem known from operating systems
- The problem is not solved by local kernel facilities in distributed systems
- Why is this so? The processes run on different machines, with no shared
memory or common kernel to mediate access

We require a solution to distributed mutual exclusion
- The solution is based solely on message passing
page 4
Algorithm Classes

Token-Based Algorithms

Permission-Based Algorithms

Special Case

page 5
Token-Based Algorithms

Mutual exclusion is achieved by passing a special message (token)
between processes

There is only one token available in the entire system

Whoever holds the token is allowed to access the shared resource

The process releases the token after it finishes accessing the shared
resource

page 6
Token-Based Algorithms
Advantages:
- Guarantees mutual exclusion
- Ensures starvation does not occur
- Deadlock avoidance

Disadvantages:
- If the token is lost (e.g., due to a system crash), an intricate distributed
procedure must be started to ensure that:
 A new token is created
 It is the only token in the system

page 7
Permission-Based Algorithms
- The idea was first expressed by Ricart and Agrawala in 1981

- Basic principle:
 To enter a critical section, a process asks the others for permission
 The process waits for the permissions to arrive
 If a process is not interested in the critical section, it sends back its
permission as soon as it receives the request
 If it is interested, a priority is established between the conflicting
requests

page 8
Permission-Based Algorithms

Properties:
- The safety property is ensured by obtaining a sufficient number of permissions
 Mutual exclusion is guaranteed

- The liveness property is ensured by totally ordering the requests, either by
using timestamps or by managing a distributed directed acyclic graph
 Each request will eventually be granted
 The liveness property implies freedom from both deadlock & starvation
page 9
Permission-Based Algorithms
- In the case of a central coordinator, the token-based and permission-
based approaches meet

- Processes ask only the coordinator for permission before entering the
critical section

- The unique permission can be viewed as a token managed by the
coordinator

page 10
Central Server Algorithm

One process is elected as the coordinator

When a process wants to access a shared resource, it sends a

request message to the coordinator and waits for a reply

If no other process is currently accessing the resource, the

coordinator grants permission
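The coordinator's bookkeeping can be sketched as below. This is an illustrative Python sketch, not part of the lecture material; the class and method names are assumptions, and real deployments would exchange these calls as network messages.

```python
from collections import deque

class Coordinator:
    """Sketch of the central-server mutual exclusion coordinator."""

    def __init__(self):
        self.holder = None    # process currently granted the resource
        self.queue = deque()  # pending requests, served in FIFO order

    def request(self, pid):
        """Handle a request message; return True if access is granted now."""
        if self.holder is None:
            self.holder = pid
            return True           # reply OK immediately
        self.queue.append(pid)    # resource busy: queue, no reply yet
        return False

    def release(self, pid):
        """Handle a release message; grant the head of the queue, if any."""
        assert self.holder == pid, "only the holder may release"
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder        # id of the next process granted, or None
```

A short run shows the queueing behaviour described on the following slides: a second requester is queued while the first holds the resource, and is granted on release.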


page 11
Example

[Figure: processes 0, 1, 2 and coordinator 3; process 1 sends a Request and
the coordinator replies OK; the queue is empty]

In the figure, Process 3 is the coordinator

No other process is accessing (or waiting for) the shared resource,
thus Process 1 is granted access by the coordinator

page 12


Example

[Figure: process 2 sends a Request to coordinator 3; no reply is sent;
process 2 is placed in the queue]

In the figure, Process 2 asks for permission to access the resource

The coordinator knows that another process (Process 1) is accessing
the resource, so Process 2's request is queued
- Whether a reply message is sent in this case is system dependent

page 13


Example

[Figure: process 1 sends Release to coordinator 3, which sends OK to process 2]

When Process 1 releases the resource, it tells the coordinator

The coordinator then grants access to Process 2, which is at the head of
the queue

page 14


Central Server Algorithm
Advantages
- The algorithm guarantees mutual exclusion (safety property)
 The coordinator lets only one process at a time access the resource

- It is fair
 Requests are granted in the order in which they are received
 No process ever waits forever (no starvation)

- The scheme is easy to implement because of its simplicity

- Minimal complexity in terms of the number of messages (request, grant,
release) per use of the resource

page 15
Central Server Algorithm

Disadvantages
- The coordinator represents a single point of failure
- A single coordinator can be a performance bottleneck

page 16
Ring-Based Algorithm

[Figure: processes P1, P2, ..., Pn arranged in a logical ring, with a token
circulating between them]

- N processes are arranged in a logical ring

- Each process Pi has a communication channel to the next process in the
ring, P(i+1) mod N

- Exclusion is conferred by obtaining a token in the form of a message
passed from process to process in a single direction, say clockwise

- The ring topology may be unrelated to the physical interconnections
between computers
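The circulation of the token can be sketched with a small round-based simulation. This is an illustrative assumption: the function name, the `wants_cs` mapping, and the fixed round count are not part of the algorithm itself, which runs indefinitely over real channels.

```python
def ring_token_rounds(n, wants_cs, rounds):
    """Simulate a token circulating clockwise among n ring processes.

    wants_cs maps a process id to True if it wishes to enter its
    critical section; a process may enter only while holding the token.
    Returns the order in which processes entered their critical section."""
    entries = []
    holder = 0  # process currently holding the token
    for _ in range(rounds):
        if wants_cs.get(holder):
            entries.append(holder)   # holder enters and then leaves its CS
            wants_cs[holder] = False
        holder = (holder + 1) % n    # forward the token to the next process
    return entries
```

Note that the empty-interest case still forwards the token every round, which mirrors the bandwidth disadvantage discussed on the next slides.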

page 17
Ring-Based Algorithm
Advantages

+ Guarantees mutual exclusion (safety property)

+ Freedom from starvation and deadlock (liveness property)

page 18
Ring-Based Algorithm
Disadvantages

- The approach does not guarantee fairness
 The token is not obtained in the order in which requests
are made

- The algorithm continuously consumes network bandwidth
(except when a process is in a critical section)

- If the token is lost, a new token must be created, and that
token must be the only one in the system

- If a process fails, a new logical ring must be established

page 19
Ricart and Agrawala Algorithm

Specification
- The network consists of a set P of N processes

P = {p1, p2, ..., pN}

- Each process pi executes an identical algorithm, but refers to its own
unique id

- For each process pi, there are three phases:

1. Invocation of mutual exclusion by the process

2. Receiving a REQUEST message (and processing it) from another process

3. Receiving a REPLY message (and processing it)

page 20
Ricart and Agrawala Algorithm
Approach
- A process enters its critical section only after:
 all processes within the system have been notified of the request, and
 all have granted their permission

- The algorithm exhibits fully distributed control
page 21
Ricart and Agrawala Algorithm
Invoking mutual exclusion
- A process attempts to invoke mutual exclusion by broadcasting a
REQUEST message to all other processes in the system

- Each requested process either sends a REPLY or defers it until it leaves its
own critical section

- Lamport timestamps are used to order requests, thus prioritizing process
requests according to their order of occurrence

- Process Ids are used for tie-breaking
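The reply-or-defer decision above can be sketched as a single comparison. This is an illustrative sketch: encoding requests as (Lamport timestamp, process id) tuples and the state names HELD/WANTED/RELEASED are assumptions, chosen so that Python's lexicographic tuple comparison implements the timestamp ordering with id tie-breaking.

```python
def should_defer(state, my_request, incoming):
    """Decide whether a process defers its REPLY to an incoming REQUEST.

    Requests are (lamport_timestamp, process_id) tuples; the lower tuple
    has priority, with the process id breaking timestamp ties."""
    if state == "HELD":
        return True                   # inside the CS: always defer
    if state == "WANTED":
        return my_request < incoming  # defer only if our own request wins
    return False                      # RELEASED: REPLY immediately
```

With this encoding, two requests with equal timestamps are ordered by process id, exactly the tie-breaking rule used in the worked example that follows.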

page 22
Example

[Figures a) and b): three processes with Lamport clocks; REQUEST and REPLY
messages are shown between them]

Fig a)

1. Process 3 attempts to invoke mutual exclusion


2. It increments the Lamport clock to 1, and multicasts a REQUEST to 1 and 2

Fig b)

1. Before either message arrives, process 2 wishes to enter its critical section
2. Process 2 increments the clock to 1 and multicasts a REQUEST to 1 and 3

page 23
Example

[Figures c) and d): the REQUEST and REPLY exchanges continue; clock values
advance at each process]
Fig c)

1. Process 2's messages have arrived

2. At process 1 (which has not yet made a REQUEST), a REPLY is immediately generated
3. Process 3 has the same timestamp as process 2; process 2 wins by the id tie-breaking rule

Fig d)

1. Process 1 makes a request to enter its critical section; it uses timestamp 2

2. Process 1's REQUEST to process 2 overtakes a REPLY; process 2 sends no
REPLY, since process 1's request carries the higher timestamp
page 24
Example

[Figures e) and f): process 2 in its critical section; on exit it sends REPLY
messages to processes 1 and 3]
Fig e)

1. Process 2 enters critical section – it has received all REPLIES


2. Process 1’s REQUEST has arrived at 3, but is deferred due to its higher timestamp

Fig f)

1. When process 2 finishes the critical section, it sends a REPLY to 1 and 3

page 25
Example

[Figures g) and h): process 3 collects its REPLY messages and enters its
critical section]
Fig g)

1. Processes 1 and 3 have received their REPLY messages from 2, but not from
each other
2. Process 3's request has arrived at 1; a REPLY is immediately sent since the
request has the lower timestamp value
Fig h)

1. Process 3 enters its critical section after receiving all REPLY messages
page 26
Example

[Figures i) and j): process 3 returns the deferred REPLY; process 1 enters its
critical section]
Fig i)

1. Process 3 has finished its critical section and returns the deferred REPLY
message to process 1

Fig j)

1. Process 1 enters its critical section after receiving all REPLY messages
2. After finishing its critical section, it does nothing further, since it knows of
no other process wishing to enter the critical section

page 27
Ricart and Agrawala Advantages
Mutual exclusion is achieved
- No pair of processes is ever simultaneously in its critical section

Deadlock is impossible
- Deadlock would occur if no process were in its critical section and no
requesting process could ever proceed to its own critical section

Starvation is impossible
- Starvation would occur if one or more processes had to wait indefinitely to
enter its critical section even though others enter and exit their own

Message Traffic
- The algorithm requires only 2*(N-1) message exchanges per entry to
the critical section

page 28
Ricart and Agrawala Disadvantages

The identities of all processes in the system must be known by all
processes

Dynamic addition and removal of processes is complicated

A single failing process can break the whole scheme

page 29
Suzuki-Kasami Algorithm
Overview
- When process Pi wants to enter its critical section (CS), it broadcasts a
REQUEST message containing its own id as well as a sequence number n

- When process Pj receives a REQUEST message, it compares the last
seen sequence number for Pi and updates it if the received sequence
number is larger

- A process which holds the token (called the PRIVILEGE message) and has
left its CS sends the token to a process still waiting for it in the queue
page 30
Design Challenges

1. How to distinguish outdated requests from current requests?
- Due to message delays, a process may receive a token request after the
corresponding request has already been satisfied

2. How to determine which processes have outstanding requests for the CS?
- A process must be able to determine which processes have outstanding
requests so that it can send the token to one of them

page 31
Requesting the CS

If a process wants to enter the CS, it must first acquire the token by
broadcasting a REQUEST message to all processes in the system

The process currently holding the token sends it to the requesting
process
- However, if it is in the CS, it gets to finish before sending the token

A process holding the token can continuously enter the CS until another
process requests the token

page 32
Data Structures
Request vector RNi[j] at process Pi
 RNi[j] contains the largest sequence number received from process Pj

Token consists of a vector LN[j] and a queue Q
 LN[j] contains the sequence number of the latest executed request
from process Pj

- Q is the queue of requesting processes
page 33
Requesting the CS
If Pi wants to enter the CS and does not have the token, it:

- Increments the sequence number RNi[i]

- Sends a REQUEST(i, n) message containing the new sequence number to
all processes in the system

page 34
Receiving a Request

When a process Pj receives the REQUEST(i, n) message, it:

- Sets RNj[i] = Max(RNj[i], n)

- If n < RNj[i], the message is outdated

- If process Pj has the token and is not in the CS (i.e., is not using the token),
and if RNj[i] == LN[i] + 1
(indicating an outstanding request), it sends the token to process Pi
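The update and hand-off test above can be sketched as a small function. This is an illustrative sketch, not the lecture's own code; RN and LN are plain lists, and the boolean return stands in for actually transmitting the token.

```python
def on_request(RN, LN, has_token, in_cs, i, n):
    """Pj handles REQUEST(i, n): update RN and report whether the
    token should be sent to Pi."""
    RN[i] = max(RN[i], n)  # n < RN[i] would mean the request is outdated
    # Send the token only if we hold it, are not using it, and Pi's
    # request is one past its latest executed request.
    return has_token and not in_cs and RN[i] == LN[i] + 1
```

In the second call of the usage below, LN[i] has already caught up with the request's sequence number, so the (re-delivered) request is recognized as satisfied and the token is not sent again.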

page 35
Executing a Request
A process enters the CS when it acquires the token

It can keep the token as long as no one requests it, thus repeatedly
entering the CS
page 36
Releasing the Token

When a process Pi leaves the CS, it:

- Sets LN[i] of the token equal to RNi[i]

- This indicates that request RNi[i] has been executed

- For each process Pj whose id is not in the token queue Q,
it appends j to Q if
RNi[j] == LN[j] + 1

- This indicates that process Pj has an outstanding request
page 37
Releasing the Token

If the token queue Q is nonempty after this update, the process:

- Deletes the process id at the head of Q and

- Sends the token to that process

- This gives priority to others' requests over its own

- Otherwise, it keeps the token
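The release step can be sketched as follows. This is an illustrative sketch under simple assumptions: i is the releasing process, RN is its request vector, LN and Q travel with the token, and the returned id stands in for actually sending the token.

```python
from collections import deque

def release_token(i, RN, LN, Q):
    """Pi leaves its CS: record the executed request, enqueue processes
    with newly outstanding requests, then hand off the token.
    Returns the id of the next token holder, or None to keep the token."""
    LN[i] = RN[i]                             # Pi's latest request is executed
    for j in range(len(RN)):
        if j not in Q and RN[j] == LN[j] + 1:
            Q.append(j)                       # Pj has an outstanding request
    return Q.popleft() if Q else None         # head of Q gets the token
```

In the usage below, P0's release finds P1's request outstanding and hands the token over; once P1 releases in turn, no requests remain and the token is kept.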

page 38
Evaluation

Either 0 or N messages are required to enter the CS

No messages if the process already holds the token

Otherwise, (N-1) REQUEST messages plus 1 token transfer

page 39
Advantages
Guarantees mutual exclusion

Is deadlock free

Is starvation free

page 40
Disadvantages

Assumes an infinite sequence number space

The number of processes in the system must be known by all
processes

Failing processes can deadlock the system

Messages must be reliably transmitted

page 41
Example (taken from the Internet)

Initial state: process 0 holds the token (LN = [0,0,0,0,0]) and has broadcast
a REQUEST; every process records req = [1,0,0,0,0]

page 42
Example

Processes 1 & 2 send requests; every process now records req = [1,1,1,0,0]

page 43
Example

Process 0 prepares to exit the CS: the token now carries LN = [1,0,0,0,0]
and queue Q = (1, 2); req = [1,1,1,0,0] at every process

page 44
Example

Process 0 passes the token (with Q and LN) to process 1; Q = (2);
LN = [1,0,0,0,0]; req = [1,1,1,0,0] at every process

page 45
Example

Processes 0 and 3 send new requests; every process records req = [2,1,1,1,0];
the token queue grows to Q = (2, 0, 3); LN = [1,0,0,0,0]

page 46
Example

Process 1 sends the token to process 2; the token carries LN = [1,1,0,0,0]
and Q = (0, 3); req = [2,1,1,1,0] at every process

page 47
References
Dijkstra, E.W. (1968) Cooperating Sequential Processes. In: Programming
Languages.

Ricart, G. & Agrawala, A.K. (1981) An Optimal Algorithm for Mutual
Exclusion in Computer Networks. Communications of the ACM, 24(1):9-17.

Suzuki, I. & Kasami, T. (1985) A Distributed Mutual Exclusion Algorithm.
ACM Transactions on Computer Systems, 3(4):344-349.

Tanenbaum, A.S. & van Steen, M. (2007) Distributed Systems: Principles and
Paradigms, Second Edition. Prentice Hall.

page 48
