
TABLE OF CONTENTS

1.0 Introduction: Overview of the Problems of Concurrent Processes
2.0 Importance of Communication and Synchronization
3.0 Use of Synchronization Objects and Mechanisms
4.0 Use of WAIT and SIGNAL Mechanisms with Synchronization Objects
5.0 Producers and Consumers Processes
6.0 Conclusion
7.0 References

1.0

Introduction: Overview of the Problems of Concurrent Processes

Concurrency is the execution of two or more independent, interacting programs over the same period of time. Concurrency is used in many kinds of systems, both small and large. For example, on a computer a user can run a web browser, a text editor, and a music player in separate windows all at once, and the operating system interacts with each of them. Almost unnoticed, the clock and the virus scanner are also running. The operating system waits for the user to ask it to start more programs, while also handling underlying tasks such as resolving which information from the Internet goes to which program. A concurrent system can be implemented with processes and/or threads. Although the details vary by platform, the fundamental difference is that processes have separate address spaces, whereas threads share an address space. The common problems of concurrent processes are briefly explained below:

• Deadlock: When two or more threads stop and wait for each other. It is a situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.

• Livelock: When two or more threads continue to execute but make no progress toward the ultimate goal. It is a situation in which two or more processes continually change their states in response to changes in the other processes without doing any useful work.

• Fairness: The idea that each thread gets a turn to make progress. When considering fairness, one must also consider the system code implementing the threads. The implementation includes a scheduler that determines how to interleave the threads, and the scheduler might or might not provide any fairness guarantees.

• Starvation: When some thread is deferred forever. It is a situation in which a runnable process or thread is overlooked indefinitely by the scheduler; although it is able to proceed, it is never chosen.

• Race Condition: When some possible interleaving of threads results in an undesired computation result. Race conditions are notoriously hard to debug and test for because they may occur only under highly unlikely interleavings. Multiple threads or processes read and write a shared data item, and the final result depends on the relative timing of their execution (a small sketch follows this list).

• Mutual Exclusion: The requirement that while one process is in a critical section that accesses shared resources, no other process may be in a critical section that accesses those shared resources.

• Critical Section: A section of code within a process that requires access to shared resources and that must not be executed while another process is in a corresponding section of its code.

• Atomic Operation: A sequence of one or more operations that appears to be indivisible; no other process can see an intermediate state or interrupt the operation.
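As a rough illustration of the race-condition problem, the following sketch (assuming POSIX threads; the names counter and increment are chosen only for this example) lets two threads update a shared counter without mutual exclusion, so the final value is usually less than the expected 200000.

/* A minimal race-condition sketch, assuming POSIX threads. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;          /* shared data item */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;                /* not atomic: load, add, store can interleave */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}

Guarding the increment with a mutex, or making it an atomic operation, removes the race, which is exactly what the mutual-exclusion and critical-section concepts above describe.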

2.0

Importance of Communication and Synchronization

Communication: This is the cooperation among processes. Various processes participate in a common effort that links all the processes, and communication provides the way to synchronize and coordinate their activities. Communication consists of messages; sending and receiving messages may be provided by the programming language or by the OS kernel. Problems of deadlock and starvation may be experienced during the communication process: two processes may each be blocked while waiting for a communication from the other.

Synchronization: The communication of a message between two processes implies some level of synchronization between the two; the receiver cannot receive a message until it has been sent by another process. Three combinations are possible:

• Blocking Send, Blocking Receive: Both sender and receiver are blocked until the message is delivered. This is referred to as a rendezvous and allows tight synchronization between processes.

• Non-Blocking Send, Blocking Receive: The sender continues executing, but the receiver is blocked until the required message arrives. A typical example is a server process that provides a service or resource to other processes (a small sketch of this combination follows the list).

• Non-Blocking Send, Non-Blocking Receive: Neither party is required to wait. Non-blocking send is the more natural choice for many concurrent programming tasks, just as blocking receive is.
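As a rough sketch of the second combination (non-blocking send, blocking receive), the example below assumes a POSIX system and uses an ordinary pipe between a parent and a child process: the child's read() blocks until the parent's write() delivers the message, while the write itself returns without waiting for the receiver.

/* A minimal sketch of non-blocking send with blocking receive, assuming POSIX. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                     /* receiver process */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* blocks until a message arrives */
        if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    /* sender process */
    close(fd[0]);
    sleep(1);                              /* the receiver is blocked in read() meanwhile */
    const char *msg = "hello";
    write(fd[1], msg, strlen(msg));        /* send; returns without waiting for the receiver */
    close(fd[1]);
    wait(NULL);
    return 0;
}

With a blocking send as well (a rendezvous), the sender would also wait until the receiver had taken the message; the pipe here only illustrates the blocking-receive side.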

3.0

Use of Synchronization Objects and Mechanisms

The synchronization objects and mechanisms are explained below:

• Notification Event: An announcement that a system event has occurred (a manual-reset event; all waiting threads are released when it is signaled).

• Synchronization Event: An announcement that a system event has occurred (an auto-reset event; a single waiting thread is released when it is signaled).

• Mutex: A mechanism that provides mutual exclusion capabilities.

• Semaphore: A counter that regulates the number of threads that can use a resource.

• Waitable Timer: A counter that records the passage of time.

• Critical Section: Provides synchronization functionality similar to a mutex object, but it can be used only by the threads of a single process. It is a much faster, more efficient mechanism for mutual exclusion synchronization; a short sketch of its use follows this list.
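The objects listed above are the Windows synchronization objects. As a minimal sketch of the last of them, and not a definitive implementation, the code below uses a CRITICAL_SECTION to serialize access to a shared counter between two threads (the names counter and worker are chosen for this example).

/* A minimal CRITICAL_SECTION sketch, assuming the Windows API. */
#include <windows.h>
#include <stdio.h>

static CRITICAL_SECTION cs;
static long counter = 0;

static DWORD WINAPI worker(LPVOID arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        EnterCriticalSection(&cs);   /* only one thread at a time past this point */
        counter++;                   /* the critical section: access to shared data */
        LeaveCriticalSection(&cs);
    }
    return 0;
}

int main(void) {
    InitializeCriticalSection(&cs);
    HANDLE t[2];
    t[0] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    t[1] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, t, TRUE, INFINITE);
    printf("counter = %ld\n", counter);  /* always 200000 with the lock in place */
    DeleteCriticalSection(&cs);
    CloseHandle(t[0]);
    CloseHandle(t[1]);
    return 0;
}

A mutex object would work here as well; the critical-section object is preferred within a single process because, as noted above, it is the faster mechanism.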

4.0

Use of WAIT and SIGNAL Mechanisms with Synchronization Objects

This can be explained by the concept of wait functions. Wait functions allow a thread to block its own execution: a wait function does not return until the defined criterion has been met, and if the criterion has not been met, the calling thread enters the wait state. Processor time is not used while waiting for the criterion to be met. The most common wait function is WaitForSingleObject, which requires a handle to one synchronization object. The function returns when one of the following two conditions is met:

• The specified object is in the signaled state.

• The time-out interval elapses. The time-out interval can be set to INFINITE to specify that the wait will never time out.

A thread can be blocked on an object that is in the un-signaled state; the thread is released when the object enters the signaled state. In other words, once an object enters the signaled state, one or all of the threads waiting on it are released.
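The sketch below, which assumes the Windows API that WaitForSingleObject belongs to, shows the two return conditions described above: the main thread blocks on an event object until a worker thread signals it or until a five-second time-out elapses (the event and thread names are illustrative).

/* A minimal WaitForSingleObject sketch, assuming the Windows API. */
#include <windows.h>
#include <stdio.h>

static HANDLE done_event;   /* hypothetical event name chosen for this sketch */

static DWORD WINAPI worker(LPVOID arg) {
    (void)arg;
    Sleep(1000);            /* simulate some work */
    SetEvent(done_event);   /* move the object into the signaled state */
    return 0;
}

int main(void) {
    done_event = CreateEvent(NULL, TRUE, FALSE, NULL);  /* manual-reset, initially non-signaled */
    HANDLE t = CreateThread(NULL, 0, worker, NULL, 0, NULL);

    /* Wait state: the calling thread uses no processor time while blocked here. */
    DWORD rc = WaitForSingleObject(done_event, 5000);
    if (rc == WAIT_OBJECT_0)
        printf("object entered the signaled state\n");
    else if (rc == WAIT_TIMEOUT)
        printf("time-out interval elapsed\n");

    WaitForSingleObject(t, INFINITE);  /* thread handles are signaled when the thread exits */
    CloseHandle(t);
    CloseHandle(done_event);
    return 0;
}

WaitForMultipleObjects works the same way when a thread needs to wait on several synchronization objects at once.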

5.0

Producers and Consumers Processes

The producer-consumer problem (also known as the bounded-buffer problem) is a classic example of a multi-process synchronization problem. The problem describes two processes, the producer and the consumer, which share a common, fixed-size buffer. The producer's job is to generate a piece of data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time. The problem is to make sure that the producer will not try to add data into the buffer if it is full and that the consumer will not try to remove data from an empty buffer.

The solution for the producer is to either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty; the next time the producer puts data into the buffer, it wakes up the sleeping consumer. The solution can be reached by means of inter-process communication, typically using semaphores. An inadequate solution could result in a deadlock where both processes are waiting to be awakened. The problem can also be generalized to have multiple producers and consumers.

Given that emptyCount and fullCount are counting semaphores, with emptyCount initially N and fullCount initially 0, the producer does the following repeatedly:
produce:
    P(emptyCount)
    putItemIntoQueue(item)
    V(fullCount)

The consumer does the following repeatedly:

consume:
    P(fullCount)
    item ← getItemFromQueue()
    V(emptyCount)

Note that the order of operations matters. If the producer places the item in the queue after incrementing fullCount, the consumer may obtain the item before it has been written. If the producer places the item in the queue before decrementing emptyCount, the producer might exceed the size limit of the queue.
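The pseudocode above can be fleshed out into a small runnable program. The sketch below assumes POSIX threads and unnamed semaphores; the buffer size N, the loop counts, and the mutex protecting the queue indices are additions made for this example rather than part of the original pseudocode.

/* A minimal producer-consumer sketch, assuming POSIX threads and semaphores. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                       /* buffer capacity */

static int buffer[N];
static int in = 0, out = 0;       /* queue indices */
static sem_t emptyCount, fullCount;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 32; item++) {
        sem_wait(&emptyCount);            /* P(emptyCount): wait for a free slot */
        pthread_mutex_lock(&lock);
        buffer[in] = item;                /* putItemIntoQueue(item) */
        in = (in + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&fullCount);             /* V(fullCount): signal an item is available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 32; i++) {
        sem_wait(&fullCount);             /* P(fullCount): wait for an item */
        pthread_mutex_lock(&lock);
        int item = buffer[out];           /* item <- getItemFromQueue() */
        out = (out + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&emptyCount);            /* V(emptyCount): signal a slot is free */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&emptyCount, 0, N);          /* emptyCount is initially N */
    sem_init(&fullCount, 0, 0);           /* fullCount is initially 0 */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&emptyCount);
    sem_destroy(&fullCount);
    return 0;
}

On a POSIX system this would typically be compiled with something like cc -pthread, though the exact command depends on the toolchain. The mutex is needed once the code is real C rather than pseudocode, so that concurrent updates of the queue indices do not race.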

6.0

Conclusion

Concurrent processes always exist in computer systems as far as operating system design is concerned. The advantage of these processes is that they enable the computer to perform multiple tasks at the same time, but there are problems associated with them: deadlocks, livelocks, unfairness, race conditions, starvation, and so on may be experienced. The synchronization and communication mechanisms explained in the previous sections should be taken into account during OS design. Concurrent processes cannot be avoided, so the OS designer should always think about concurrency issues and how to address them, since it is natural for users to do multiple tasks at the same time.

7.0

References

1. http://cnx.org/content/m12312/latest/
2. http://student.fiit.stuba.sk/dos/lecture3.pdf
3. http://cs.gmu.edu/~menasce/osbook/psynch/P037.html
4. William Stallings, Operating Systems: Internals and Design Principles, Chapters 5 and 6.
5. http://en.wikipedia.org/wiki/Semaphore_(programming)#Example:_Producer.2FConsumer_Problem
6. http://en.wikipedia.org/wiki/Producers-consumers_problem
