
a. Devise a queuing implementation of semaphores. Definition of a semaphore and its components.

A semaphore (William Stallings, 2008) is an integer value used for signaling among processes. Only three operations may be performed on a semaphore, all of which are atomic: initialize, decrement, and increment.

The decrement operation may result in the blocking of a process, and the increment operation may result in the unblocking of a process. Devising a queuing implementation of semaphores involves a special signaling variable, the semaphore, together with a queue of processes waiting on it.

To transmit a signal via semaphore s, a process executes the primitive semSignal(s). To receive a signal via semaphore s, a process executes the primitive semWait(s).

If the corresponding signal has not yet been transmitted, the process is suspended until the transmission takes place. The queue operations are as follows:

1. Prepare the task to be processed
2. Schedule tasks for execution
3. Schedule subtasks for execution
4. Wait for subtask completion
5. Process and finish task execution
6. Wait for all tasks to complete
7. Destroy all task instances
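The semWait/semSignal behaviour described above can be sketched as follows. This is a minimal single-threaded simulation, not the source's implementation; the class and method names are illustrative:

```python
from collections import deque

class Semaphore:
    """Queuing semaphore: an integer value plus a FIFO queue of blocked processes."""

    def __init__(self, value=1):
        self.value = value          # initialize (the first atomic operation)
        self.queue = deque()        # FIFO queue of waiting process identifiers

    def sem_wait(self, pid):
        """Decrement; block (enqueue) the caller if no signal is available."""
        self.value -= 1
        if self.value < 0:
            self.queue.append(pid)  # process is suspended until a signal arrives
            return "blocked"
        return "running"

    def sem_signal(self):
        """Increment; unblock the longest-waiting process, if any."""
        self.value += 1
        if self.queue:
            return self.queue.popleft()  # this process becomes ready again
        return None
```

With `Semaphore(1)`, a first `sem_wait("P1")` returns "running" (the value drops to 0), a second `sem_wait("P2")` returns "blocked", and the next `sem_signal()` returns "P2", the suspended process that now receives the signal.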

There are five basic steps in allocating a semaphore structure:


1. Request a semaphore structure, using the common key

2. Initialize the semaphore structure by setting the values of each semaphore

3. Define the basic operations that are to be performed on the semaphore structure

4. Use the basic operations in the program

5. Remove the semaphore structure.

b. Describe the necessary data structures and queue servicing disciplines.

A queuing implementation of semaphores involves a particular kind of abstract data type: a collection in which the entities are kept in order, and in which the principal (or only) operations are the addition of entities at the rear terminal position and the removal of entities from the front terminal position. This makes the queue a First-In-First-Out (FIFO) data structure: the first element added to the queue will be the first one removed, so all elements added before a given element must be removed before that element can be serviced. A queue is an example of a linear data structure; data is added at one end (known as the REAR) and retrieved from the other end (known as the FRONT).

The advantage of the queue is that if a program requests a pop and no data is yet available, it enters a waiting state. As soon as a task becomes available to be popped from the queue, the program can be scheduled to resume execution with the received task.

Each node of the semaphore's waiting queue has the following structure:

Pointer to PCB - points to the process control block of the waiting process
Pointer to next - points to the node of the next waiting process

The wait(S) operation first checks whether the resource is free by inspecting the value of S; if it is free, it decreases the value of S and allows the task to enter the critical section.
If the resource is not free, the PCB of the requesting task is inserted into the queue with the help of the function insert(). The insert() function prepares a node for the new waiting process. Because insert() manipulates a shared list, it must not be preempted part-way through; therefore task switching is disabled at the beginning of this function and enabled at the end of the function.
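The node structure and the insert() operation can be sketched as follows, with a lock standing in for disabling task switching; the names are illustrative:

```python
import threading

class Node:
    """One node of the semaphore's waiting list (the structure described above)."""
    def __init__(self, pcb):
        self.pcb = pcb    # pointer to the PCB of the waiting process
        self.next = None  # pointer to the node of the next waiting process

_switch_lock = threading.Lock()  # stands in for disabling task switching

def insert(head, pcb):
    """Append a new waiting process at the rear of the queue; returns the head."""
    with _switch_lock:                 # "task switching" disabled for the insert
        newnode = Node(pcb)
        if head is None:               # queue was empty: new node is the front
            return newnode
        start = head
        while start.next is not None:  # walk to the rear terminal position
            start = start.next
        start.next = newnode
        return head
```

Appending at the rear while delete() removes from the front is what gives the waiting list its FIFO servicing discipline.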

Pseudocode and logic description of the wait(S) operation:

    wait(S) {
        if (S > 0)
            S = S - 1;          /* resource free: enter the critical section */
        else
            insert(S, PCB);     /* resource busy: enqueue the PCB and block */
    }

    insert(S, PCB) {
        Disable task switching
        newnode->PCB = PCB;
        newnode->next = NULL;
        start = S->queue;
        while (start->next != NULL)
            start = start->next;    /* walk to the rear of the queue */
        start->next = newnode;
        Enable task switching
    }

The signal(S) operation first checks whether any other process is waiting for the resource that the current process is about to free. If a process is waiting, it selects the next process in the queue with the help of the delete() function and allocates the resource to that task; the value of S is left unchanged, since the resource passes directly from one process to the next. If no process is waiting for the resource, it increases the value of S by one to mark the resource free. The delete() function returns the PCB of the next process (the one at the front of the queue). Like insert(), delete() must not be preempted while it manipulates the shared list, so task switching is disabled at the beginning of the function and enabled at the end of the function.

Pseudocode and logic description of the signal(S) operation:

    signal(S) {
        processnode = delete(S);
        if (processnode != NULL)
            allocate the resource to processnode;
        else
            S = S + 1;          /* nobody waiting: mark the resource free */
    }

    delete(S) {
        Disable task switching
        node = S->queue;
        S->queue = node->next;  /* unlink the front node */
        PCB = node->PCB;
        free(node);
        Enable task switching
        return PCB;
    }

The queuing implementation of the semaphore above is not susceptible to the preemption deadlock problem: preemption within a critical section is not allowed, and can occur only after the critical section has been completed.

On the use of the semaphore S: whenever a task wants to access a semaphore-protected shared resource, it must first request the semaphore (the P operation). When it is done accessing the shared resource, it must release the semaphore (the V operation). If a task requests the semaphore while other tasks already hold the total number of allowed accesses, the requesting task transitions to the waiting state until some other task releases the semaphore and access is obtained.

Signaling is the simplest form of synchronization between a host program and multiple tasks. The program can specify a certain task to signal; while the task waits for the signal to be received, it is in the waiting state, and when the signal arrives it becomes ready.
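Putting the pieces together, the wait/signal logic can be simulated as follows. This is a sketch, not the author's implementation: a deque replaces the hand-rolled linked list, and a lock stands in for disabling task switching inside insert() and delete():

```python
from collections import deque
import threading

_switch_lock = threading.Lock()  # stands in for disable/enable task switching

class QueuingSemaphore:
    def __init__(self, count):
        self.count = count       # number of free resource instances
        self.queue = deque()     # FIFO queue of PCBs of waiting processes

    def wait(self, pcb):
        """P operation: take the resource if free, otherwise enqueue the PCB."""
        if self.count > 0:
            self.count -= 1      # resource free: enter the critical section
            return pcb           # this process now holds the resource
        self._insert(pcb)        # resource busy: block the process
        return None

    def signal(self):
        """V operation: hand the resource to the next waiter, or free it."""
        node = self._delete()
        if node is not None:
            return node          # resource passes directly to the front waiter
        self.count += 1          # nobody waiting: mark the resource free
        return None

    def _insert(self, pcb):
        with _switch_lock:       # queue update must not be preempted
            self.queue.append(pcb)

    def _delete(self):
        with _switch_lock:
            return self.queue.popleft() if self.queue else None
```

Note that signal() does not increment the count when a waiter exists: the resource is handed over directly, exactly as in the pseudocode above.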

Question Two:

Discuss the usability of semaphores for synchronization of processes executing at different nodes of a distributed computer system that is characterized by no shared common memory and by relatively long internode communication delays of variable duration. Suggest where copies of the semaphore variable should be kept, and identify potential performance problems with using semaphores in distributed environments.

Semaphores can be used for synchronization in this setting through a global semaphore mechanism, which can be ported to various parallel programming environments. A global semaphore is a set of primitive functions that enable the synchronization of processes running on different processors, possibly on different machines connected via a separate communication network. The main reason semaphores remain usable in this environment is that an arbitrary number of processes can be synchronized by controlling their code execution, without needing to know the identities or roles of the particular participating processes. A parallel communication layer supports all processes on all processors in a transparent way, so a virtual single process is created.

Copies of the global semaphore should be kept in a special server on the network, a semaphore server, running on every machine and holding a list of the semaphores created by its own host processor. Each semaphore record holds the semaphore's name, id, reference count, current value, a pointer to the queue of processes waiting on the semaphore, and a pointer to the next semaphore in the list.

The principal problem with such a distributed system is the absence of shared memory, which forces several copies of each semaphore to be created and leads to weak coherence. Devising an efficient locking and protection scheme is not easy when memory is not shared, because several updates can be made to several copies of a semaphore on different processors. In addition, the group communication mechanism needed to keep the copies consistent generally increases the communication overhead, making the implementation less efficient.
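The semaphore-server record described above can be sketched as follows; the class and field names are assumptions chosen to mirror the description, not an actual server implementation:

```python
from collections import deque

class SemRecord:
    """One entry in a semaphore server's list (fields as described above)."""
    def __init__(self, name, sem_id, value):
        self.name = name            # semaphore name
        self.sem_id = sem_id        # semaphore id
        self.ref_count = 1          # reference count
        self.value = value          # current value
        self.wait_queue = deque()   # queue of waiting (node, process) pairs
        self.next = None            # pointer to the next semaphore in the list

class SemaphoreServer:
    """Toy in-memory server holding semaphores created by its host processor."""
    def __init__(self):
        self.head = None            # head of the linked list of semaphores

    def create(self, name, sem_id, value):
        rec = SemRecord(name, sem_id, value)
        rec.next = self.head        # insert at the head of the list
        self.head = rec
        return rec

    def lookup(self, name):
        rec = self.head
        while rec is not None and rec.name != name:
            rec = rec.next          # walk the list by name
        return rec
```

In a real system each lookup, wait, and signal from a remote node would cross the network, which is exactly where the long, variable communication delays and coherence costs discussed above come in.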

c. Explain the relationship between a semaphore list and the ready list.

d. Describe traversal of the PCB of a process executing semaphore WAIT and SIGNAL operations.

e. Discuss whether the queuing implementation of semaphores is susceptible to the preemption deadlock problem.

f. Explain how your solution behaves when a process is preempted after entering the critical section.

g. Discuss the usability of semaphores for synchronization of processes executing at different nodes of a distributed computer system that is characterized by no shared common memory and by relatively long internode communication delays of variable duration.

h. Suggest where copies of the semaphore variable should be kept, and identify potential performance problems with using semaphores in distributed environments.
