
Explain in detail about data races and deadlocks.

DATA RACES: Unsynchronized access to shared memory can introduce race conditions, where the program results depend nondeterministically on the relative timings of two or more threads.

The example shows two threads trying to add to a shared variable x, which has an initial value of 0. Depending upon the relative speeds of the threads, the final value of x can be 1, 2, or 3 (one thread adding 1 and the other adding 2). Update operations such as x += 1 are normally just shorthand for temp = x; x = temp + 1, and hence can result in interleaving. Sometimes the shared location is accessed by different expressions. Sometimes the shared location is hidden by function calls. Even if each thread uses a single instruction to fetch and update the location, there could be interleaving, because the hardware might break the instruction into interleaved reads and writes. Intel Thread Checker is a powerful tool for detecting potential race conditions.
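The figure that this paragraph describes is not reproduced in the text. The following is a minimal sketch of the same situation, assuming one thread adds 1 and the other adds 2 (consistent with the possible outcomes 1, 2, or 3 above); the variable names are illustrative. It also shows how std::atomic makes the update atomic:

    #include <atomic>
    #include <iostream>
    #include <thread>

    int x = 0;                  // shared and unsynchronized: subject to a data race
    std::atomic<int> ax{0};     // shared and atomic: updates cannot interleave

    int main() {
        // Each "x += n" is really: temp = x; x = temp + n.
        // If both threads read x before either writes it back, one update is
        // lost, so x can end up as 1, 2, or 3 depending on the interleaving.
        std::thread t1([]{ x += 1; ax += 1; });
        std::thread t2([]{ x += 2; ax += 2; });
        t1.join();
        t2.join();
        std::cout << "racy x = " << x              // 1, 2, or 3
                  << ", atomic x = " << ax.load()  // always 3
                  << "\n";
        return 0;
    }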

For example, threads may be reading a location that another thread updates asynchronously with a latest current value. In such a situation, care must be taken that the writes and reads are atomic. In particular, reads and writes of structure types are often done a word at a time or a field at a time. Types longer than the natural word size, such as 80-bit floating-point, might not be read or written atomically, depending on the architecture. Data races can arise not only from unsynchronized access to shared memory, but also from synchronized access that was synchronized at too low a level. Consider, for instance, two threads inserting keys into a shared list where each individual list operation is protected by a lock: if both attempt to insert the same key at the same time, they may simultaneously determine that the key is not in the list, and then both would insert the key.

A Higher-Level Race Condition Example
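The figure referred to by this caption is not reproduced in the text. A minimal sketch of such a higher-level race follows; the class and function names are illustrative. Every individual operation is locked, yet the check-then-insert sequence as a whole is not, so two threads can both decide the key is absent and both insert it:

    #include <algorithm>
    #include <mutex>
    #include <string>
    #include <vector>

    // A list whose individual operations are synchronized -- but at too low a level.
    class KeyList {
        std::mutex m_;
        std::vector<std::string> keys_;
    public:
        bool contains(const std::string& k) {
            std::lock_guard<std::mutex> lock(m_);
            return std::find(keys_.begin(), keys_.end(), k) != keys_.end();
        }
        void insert(const std::string& k) {
            std::lock_guard<std::mutex> lock(m_);
            keys_.push_back(k);
        }
        // The cure: hold one lock across the whole check-then-insert,
        // i.e. synchronize at the higher level.
        void insertIfAbsent(const std::string& k) {
            std::lock_guard<std::mutex> lock(m_);
            if (std::find(keys_.begin(), keys_.end(), k) == keys_.end())
                keys_.push_back(k);
        }
    };

    KeyList sharedList;

    // Racy: both threads can observe "not present" before either inserts,
    // so the same key can end up in the list twice.
    void addKeyRacy(const std::string& k) {
        if (!sharedList.contains(k))
            sharedList.insert(k);
    }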

Building locks into low-level components is often a waste of time, because the high-level components that use them will need higher-level locks anyway. The lower-level locks then become pointless overhead. Fortunately, in such a scenario the high-level locking causes the low-level locks to be uncontended, and most lock implementations optimize the uncontended case. Hence the performance impact is somewhat mitigated, but for best performance the superfluous locks should be removed. There are times, however, when components should provide their own internal locking.
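As an illustration of this point (a sketch with assumed names, not taken from the original text), a low-level counter with its own internal mutex may only ever be called while the caller's coarser lock is held, so the inner lock is never contended and is pure overhead:

    #include <mutex>

    // Low-level component with built-in locking.
    class ThreadSafeCounter {
        std::mutex m_;
        long value_ = 0;
    public:
        void add(long n) {
            std::lock_guard<std::mutex> lock(m_);    // inner, low-level lock
            value_ += n;
        }
    };

    // Higher-level component that already serializes access with its own lock.
    class Statistics {
        std::mutex bigLock_;
        ThreadSafeCounter hits_;
    public:
        void recordHit() {
            std::lock_guard<std::mutex> lock(bigLock_);  // outer, high-level lock
            // While bigLock_ is held, hits_.add() can never be contended,
            // so its internal lock only adds overhead here.
            hits_.add(1);
        }
    };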

DEADLOCKS:
Race conditions are typically cured by adding a lock that protects the invariant that might otherwise be violated by interleaved operations. Unfortunately, locks have their own hazards, most notably deadlock. Deadlock can occur only if the following four conditions hold true:
1. Access to each resource is exclusive.
2. A thread is allowed to hold one resource while requesting another.
3. No thread is willing to relinquish a resource that it has acquired.
4. There is a cycle of threads trying to acquire resources, where each resource is held by one thread and requested by another.
Deadlock can be avoided by breaking any one of these conditions. Often the best way to avoid deadlock is to replicate a resource that requires exclusive access, so that each thread can have its own private copy. Each thread can access its own copy without needing a lock. The copies can be merged into a single shared copy of the resource at the end if necessary (a short sketch of this replicate-and-merge approach follows below). By eliminating locking, replication avoids deadlock and has the further benefit of possibly improving scalability, because the lock that was removed might have been a source of contention.
Another way to prevent deadlock is to always acquire locks in a consistent global order. If there is no obvious ordering of the locks, a solution is to sort the locks by address. This approach requires that a thread know all locks that it needs to acquire before it acquires any of them. For instance, perhaps a thread needs to swap two containers pointed to by pointers x and y, and each container is protected by a lock. The thread could compare x < y to determine which container comes first, and acquire the lock on the first container before acquiring a lock on the second container, as shown in the pseudocode below.
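A minimal sketch of the replicate-and-merge idea (the per-thread partial-sum example and its names are illustrative, not from the original text): each thread accumulates into its own private copy with no lock, and the copies are combined once at the end.

    #include <algorithm>
    #include <cstddef>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Each worker sums its own slice of the data into a private accumulator,
    // so no lock is needed while the threads run and no deadlock is possible.
    // nThreads is assumed to be at least 1.
    long parallelSum(const std::vector<long>& data, unsigned nThreads) {
        std::vector<long> partial(nThreads, 0);      // one private copy per thread
        std::vector<std::thread> workers;
        const std::size_t chunk = (data.size() + nThreads - 1) / nThreads;

        for (unsigned i = 0; i < nThreads; ++i) {
            workers.emplace_back([&, i] {
                std::size_t begin = i * chunk;
                std::size_t end   = std::min(data.size(), begin + chunk);
                for (std::size_t j = begin; j < end; ++j)
                    partial[i] += data[j];           // private copy: no lock
            });
        }
        for (auto& w : workers) w.join();

        // Merge the replicated copies into the single shared result at the end.
        return std::accumulate(partial.begin(), partial.end(), 0L);
    }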

Locks Ordered by their Addresses

    void AcquireTwoLocksViaOrdering( Lock& x, Lock& y ) {
        assert( &x != &y );
        if( &x < &y ) {
            acquire x
            acquire y
        } else {
            acquire y
            acquire x
        }
    }

The third condition for deadlock is that no thread is willing to give up its claim on a resource. Thus another way of preventing deadlock is for a thread to give up its claim on a resource if it cannot acquire the other resources. For this purpose, mutexes often have some kind of "try lock" routine that allows a thread to attempt to acquire a lock, and give up if it cannot be acquired. This approach is useful in scenarios where sorting the locks is impractical, as the "try and back off" pseudocode further below illustrates.
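Rendered with the standard library instead of the generic Lock type above, the same address-ordering idiom might look like the following sketch (std::mutex, std::lock_guard, and std::less are standard; the wrapper functions themselves are illustrative):

    #include <cassert>
    #include <functional>
    #include <mutex>

    // Acquire two mutexes in a globally consistent order (by address),
    // so that no cycle of waiting threads can form.
    // std::less gives a guaranteed total order over pointers.
    void acquireTwoLocksViaOrdering(std::mutex& x, std::mutex& y) {
        assert(&x != &y);
        if (std::less<std::mutex*>{}(&x, &y)) {
            x.lock();
            y.lock();
        } else {
            y.lock();
            x.lock();
        }
    }

    void releaseTwoLocks(std::mutex& x, std::mutex& y) {
        x.unlock();
        y.unlock();
    }

    // C++17's std::scoped_lock(x, y) reaches the same goal with a
    // built-in deadlock-avoidance algorithm.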

Try and back off logic

    void AcquireTwoLocksViaBackoff( Lock& x, Lock& y ) {
        for( int t=1; ; t*=2 ) {
            acquire x
            try to acquire y
            if( y was acquired ) break;
            release x
            wait for random amount of time between 0 and t
        }
    }

Try-and-back-off logic has some timing delays in it to prevent the hazard of livelock. Livelock occurs when threads continually conflict with each other and back off. The routine applies exponential backoff to avoid livelock. If a thread cannot acquire all the locks that it needs, it releases any that it acquired and waits for a random amount of time. The random time is chosen from an interval that doubles each time the thread backs off. Eventually, the threads involved in the conflict will back off sufficiently that at least one will make progress. The disadvantage of backoff schemes is that they are not fair: there is no guarantee that a particular thread will make progress. If fairness is an issue, then it is probably best to use lock ordering to prevent deadlock.
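A runnable rendering of the back-off idea using std::mutex::try_lock(), which is standard; the function name, the random-delay details, and the cap on the interval are illustrative choices, not from the original text:

    #include <algorithm>
    #include <chrono>
    #include <mutex>
    #include <random>
    #include <thread>

    // Acquire both mutexes, backing off on conflict. Releasing x when y cannot
    // be taken breaks the "never relinquish" deadlock condition; the random,
    // doubling wait desynchronizes conflicting threads and so avoids livelock.
    void acquireTwoLocksViaBackoff(std::mutex& x, std::mutex& y) {
        std::mt19937 rng{std::random_device{}()};
        for (int t = 1; ; t = std::min(t * 2, 1 << 16)) {  // doubling interval, capped
            x.lock();
            if (y.try_lock())            // got both locks: done
                return;
            x.unlock();                  // give up the claim on x
            std::uniform_int_distribution<int> wait(0, t);
            std::this_thread::sleep_for(std::chrono::microseconds(wait(rng)));
        }
    }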
