
Thread-Specific Storage: An Object Behavioral Pattern for Efficiently Accessing per-Thread State

Tim Harrison
harrison@cs.wustl.edu Dept. of Computer Science Wash. U., St. Louis This paper will be submitted to the 2nd annual European Pattern Languages of Programming conference held in Kloster Irsee, Germany, July, 1997.

Douglas C. Schmidt
schmidt@cs.wustl.edu Dept. of Computer Science Wash. U., St. Louis

1 Introduction

In theory, multi-threading an application can improve performance by executing multiple instruction streams simultaneously. In addition, multi-threading can simplify program structure by allowing each thread to execute synchronously rather than reactively or asynchronously. In practice, however, multi-threaded applications often perform no better, or even worse, than single-threaded applications due to the overhead of acquiring and releasing locks [1]. In addition, threads are hard to program correctly due to subtle concurrency control protocols (e.g., avoiding race conditions and deadlock) [2].

This paper describes the Thread-Specific Storage pattern, which helps to alleviate several problems with thread performance and complexity. The Thread-Specific Storage pattern improves performance and simplifies multi-threaded applications by allowing multiple threads to use one logical access point to retrieve thread-local data without incurring locking overhead for each access.

2 Intent

Allow multiple threads to use one logical access point to retrieve thread-local data without incurring locking overhead for each access.

3 Motivation

3.1 Context

Multi-threaded applications may require access to objects that are specific to each thread. For example, many operating systems report error information to applications using globally visible variables (such as errno). A common way to implement this error reporting scheme is to use global variables. When an error occurs, the operating system sets the error variable to report the problem and returns a failure status. When the application detects the failure status, it checks the error variable to determine what type of error occurred.

3.2 Common Traps and Pitfalls

Although the global error variable approach works reasonably1 well for single-threaded applications, problems occur in preemptive multi-threaded applications. In particular, race conditions can cause error information set by a method in one thread to be read erroneously by code running in other threads. Using conventional synchronization mechanisms to avoid these race conditions is error prone and inefficient, since application programmers must use complex locking protocols correctly.

3.3 Forces

Our solution to problems such as the error reporting mechanism is to use the Thread-Specific Storage pattern. This approach must resolve the following forces:

Efficiency: The solution must allow sequential operations within a thread to access thread-specific objects atomically without incurring locking overhead for each access.

Decoupling of threading policies from framework development: A proper implementation of the solution should allow framework developers to decouple method code from threading policies. Thus, regardless of whether an application runs within a single thread or multiple threads, there should be no additional overhead incurred and no changes to the code required.

Ease of programming and increased portability: The implementation should be simple and should not expose any platform-specific features that prevent portability.

4 Applicability

Use the Thread-Specific Storage pattern when an application has the following characteristics:

1 It is beyond the scope of this paper to discuss the downside of using error variables instead of exceptions.
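The logically global, physically thread-specific behavior of errno can be sketched with standard C++11 thread_local storage. This is an illustrative sketch, not part of the paper; the names my_errno and failing_op are hypothetical stand-ins:

```cpp
#include <cassert>
#include <thread>

// Hypothetical stand-in for errno: a logically global error
// variable that is physically unique to each thread.
thread_local int my_errno = 0;

// Simulate a failing system call that records its error code
// in the calling thread's copy of the variable.
void failing_op (int code) { my_errno = code; }
```

Because each thread reads and writes its own copy, an error recorded in one thread cannot be observed (or clobbered) by another thread, which is exactly the race that a single shared global variable would permit.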

Figure 1: Structure of Participants in the Thread-Specific Storage Pattern. (The diagram shows an Application Thread using a TS Object Proxy, with get_object(), set_object(), and a key, to reach per-thread TS Objects through a TS Object Collection offering get_object(key) and set_object(key).)

It contains multiple preemptive threads of control that can execute concurrently in an arbitrary scheduling order, and

Each thread of control contains a sequence of operations that share data common only to that thread, and

That data must be accessed through a global object that is logically shared with other threads, but is physically unique for each thread. Understanding this force is crucial to using (or not using) the pattern. For example, the UNIX errno variable is an object that is logically global but physically thread-specific.

Do not use the Thread-Specific Storage pattern when an application has the following characteristics:

Multiple threads are collaborating on a single task that requires shared data (such as multiplying a large matrix). In this case, threads must share data that is not thread-specific (e.g., rows and columns in the matrix). If Thread-Specific Storage were used for the matrix values, the threads could not share the data. Access to the matrix must be controlled with synchronization primitives (e.g., mutexes) so that the threads can collaborate on the shared data.

It is more intuitive and efficient to maintain both a physical and logical separation of data. For instance, if it is possible to have threads access data visible only within each thread, the Thread-Specific Storage pattern may be unnecessary.

5 Structure and Participants

Figure 1 illustrates the structure of the following participants in the Thread-Specific Storage pattern:

Application Thread: Application threads use TS Object Proxies to access the thread-specific storage in TS Objects. The implementation of the Thread-Specific Storage pattern can hide the TS Object Proxy and TS Object Collection so that the code looks like it is accessing the TS Object directly.

Thread-Specific Object Proxy (TS Object Proxy): The TS Object Proxy provides an interface to a TS Object. It is responsible for providing access to a unique object for each calling thread through the get_object and set_object operations. In our error handling example from Section 3, for instance, the TS Object might be of type int to store errno values. Although a TS Object Proxy instance is responsible for one type of object, it mediates access to a thread-specific TS Object for every thread that accesses the proxy. For example, multiple threads may use the same TS Object Proxy to access thread-specific errno values. The key value stored by the proxy is assigned by the TS Object Collection when the proxy is created and is required by the collection during get_object and set_object operations. The advantage of TS Object Proxies is that they hide keys and TS Object Collections. Without the proxies, the Application Threads would have to obtain the collections and use keys explicitly. As shown in Section 9, most of the details of thread-specific storage can be completely hidden via the TS Object Proxy.

Thread-Specific Object (TS Object) Collection: The TS Object Collection contains the set of all TS Objects belonging to a particular thread. In other words, every thread has its own TS Object Collection. The TS Object Collection maintains a mapping of keys to thread-specific TS Objects. A TS Object Proxy uses the key to retrieve a specific TS Object from the TS Object Collection via get_object(key) and set_object(key). In a complex multi-threaded application, for example, a thread's errno value may be one of many types of data residing in thread-specific storage. For a thread to retrieve its thread-specific error data, it must use the key that has been associated with errno to access the correct entry in the TS Object Collection.

TS Object: A TS Object is a particular thread's instance of a thread-specific object. For instance, a thread-specific errno is an object of type int. It is managed by the TS Object Collection and accessed only through a TS Object Proxy.

6 Collaborations

The interaction diagram in Figure 2 illustrates the collaboration between participants in the Thread-Specific Storage pattern. The Application Thread uses the TS Object Proxy to access TS Objects within its calling thread by invoking methods. In turn, the TS Object Proxy retrieves the thread's TS Object Collection, which is stored inside the thread or in a global structure indexed by thread ID.2 Once the TS Object Collection has been located, the TS Object Proxy uses its key to retrieve the correct TS Object from the collection.

2 Every thread in a process contains a unique identifying number called a thread ID. A thread ID is similar to the notion of a process ID.
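The relationship among the participants above can be sketched in a few lines of C++. This is a hypothetical sketch, not the paper's implementation: the names TSProxy, ts_collection, and next_key are illustrative, and thread_local stands in for the per-thread TS Object Collection:

```cpp
#include <cassert>
#include <atomic>
#include <thread>

constexpr int MAX_KEYS = 16;

// TS Object Collection: every thread gets its own table that
// maps integer keys to thread-specific objects.
thread_local void *ts_collection[MAX_KEYS] = {};

// Keys are assigned once, globally, when a proxy is created.
std::atomic<int> next_key{0};

// TS Object Proxy: stores the key assigned at creation and
// mediates get_object/set_object against the calling
// thread's collection.
struct TSProxy {
  int key;
  TSProxy () : key (next_key++) {}
  void *get_object () const { return ts_collection[key]; }
  void set_object (void *obj) { ts_collection[key] = obj; }
};
```

Note how a single proxy instance is shared by all threads, yet each call reaches the calling thread's own table: the proxy is the one logical access point, the collections are the physically separate storage.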

Figure 2: Interactions Among Participants in the Thread-Specific Storage Pattern. (The interaction diagram shows an Application Thread invoking an object operation on the TS Proxy; the proxy performs a collection lookup against thread state or a global structure, which returns the TS-Object Collection; get_object(key) then performs an object lookup that returns the TS Object, on which the original method() is invoked.)

At this point, the application thread operates on the TS Object using ordinary C++ method calls. No locking is necessary since the object is referenced by a pointer that is stored locally in the calling thread.

7 Consequences

There are several benefits of using the Thread-Specific Storage pattern:

Efficiency: The Thread-Specific Storage pattern can be implemented so that no locking is necessary for threads to access their thread-specific data. This eliminates any synchronization overhead for sharing data within a thread. For example, by placing errno into thread-specific storage, each thread can reliably set and test the status of operations within that thread without using locks or complex synchronization protocols.

Decouples threading policies: The structure and participants of the Thread-Specific Storage pattern do not restrict the implementation. As a result, Section 9 below describes how the pattern can be implemented to keep application code decoupled from threading policies.

Increased type-safety and portability: The TS Proxy participant of the Thread-Specific Storage pattern can be implemented to ensure threads only access their own data through strongly-typed interfaces. When combined with other patterns (such as Proxy and Singleton [3]) and C++ language features (such as templates and operator overloading), the TS Proxy can be implemented so that objects in thread-specific storage can be treated just like conventional objects. In addition, frameworks and applications can be shielded from the underlying implementation of thread-specific storage provided by operating system thread libraries.

There is a drawback to using the Thread-Specific Storage pattern:

It encourages the use of (thread-safe) global variables: Many applications do not require multiple threads to retrieve thread-specific data via a common access point. In this case, the data should be stored so that only the thread owning the data can access it. For example, consider a network server that uses a pool of worker threads to handle incoming service requests from clients. These threads may log the number and type of services performed. This logging mechanism could be accessed as a global Logger object utilizing Thread-Specific Storage. A simpler approach, however, represents each worker thread as an Active Object [4] with an instance of the Logger stored internally. In this case, there is no overhead required to access the Logger, as long as it is passed as a parameter to all functions in the Active Object.

8 Implementation

The Thread-Specific Storage pattern can be implemented in two steps. The first step, discussed in Sections 8.1 and 8.2, involves implementing the thread-specific storage infrastructure. The second step, discussed in Section 8.3, describes how thread-specific storage can be used in client applications.

8.1 Implementing TS Object Collections


The TS Object Collection shown in Figure 1 contains all TS Objects belonging to a particular thread. This collection can be implemented using a table of pointers to TS Objects indexed by keys. A thread must first locate its TS Object Collection before accessing thread-specific objects by their keys. The first design challenge, therefore, is determining how to locate and store TS Object Collections. TS Object Collections are typically stored either (1) externally to all threads or (2) internally to each thread. Each approach is described below:

External to all threads: This approach defines a global mapping of each thread's ID to its TS Object Collection, which is implemented as a table. Thread IDs can range from very small to very large values. Therefore, it is impractical to have an array with an entry for every possible thread ID value. It is more space efficient to have threads use a hash function based on their thread ID to obtain an offset into a hash table bucket. Each bucket contains a chain of tuples that map thread IDs to their corresponding TS Object Collections. The lookup algorithm traverses this chain to locate the correct collection (shown in Figure 3). Locating the right collection may require the use of a readers/writer lock to prevent race conditions. Once the collection is located, however, no additional locking is required since only one thread at a time can be active in a TS Object Collection.

Figure 3: External Implementation of Thread-Specific Storage. (Each thread calls Thread::getspecific(key), hashes its thread ID into a shared hash table whose buckets chain thread_ID/TS_Object_collection tuples, and then indexes the resulting thread-specific object table by key.)

Internal to each thread: This approach requires each thread in a system to maintain a TS Object Collection along with its other internal state (such as a run-time thread stack, program counter, general-purpose registers, and thread ID). When a thread accesses a thread-specific object, the object is retrieved by using the corresponding key as an index into the thread's internal TS Object Collection. Note that this requires no additional locking, as well.

Figure 4: Internal Implementation of Thread-Specific Storage. (Each thread calls Thread::getspecific(key) and indexes its own thread-specific object table directly by key.)

For both the external and internal implementations, the TS Object Collection can be stored as an array if the range of thread-specific keys is relatively small. In this case, the lookup time can be O(1) by simply indexing into the array using the object's key, as shown in Figure 4. If the range of thread-specific keys is large, a dynamic data structure (such as a hash table or resizable vector) will be required, which increases lookup time.

In general, selecting between the external and internal implementations involves a tradeoff of efficiency and flexibility. Depending on the implementation of the external table, the centralized location can allow threads to access other threads' TS Object Collections. Although this seems to defeat the whole point of thread-specific storage, it is sometimes useful (e.g., when trying to check for an unused key so that it may be recycled). However, the external table increases the access time for every thread-specific object, since synchronization schemes (such as readers/writer locks) are required to avoid race conditions when the table is modified. Keeping the TS Object Collection in each thread's state requires more state per thread. As long as this doesn't increase the cost of thread creation, context switching, and destruction, the internal approach is more efficient.
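The external approach can be sketched in portable C++, under assumptions not in the paper: std::shared_mutex plays the readers/writer lock, a std::map keyed by std::thread::id stands in for the hash table of thread-ID/collection tuples, and my_collection is a hypothetical helper name:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

// A thread's TS Object Collection: a table of object
// pointers indexed by key.
using Collection = std::vector<void *>;

// External to all threads: one global table mapping each
// thread's ID to its collection.
std::map<std::thread::id, Collection> global_table;
std::shared_mutex table_lock;  // readers/writer lock on the table

// Locate (or lazily create) the calling thread's collection.
// The common lookup path takes only a shared (read) lock;
// first-time insertion takes an exclusive (write) lock.
Collection &my_collection (std::size_t num_keys) {
  auto id = std::this_thread::get_id ();
  {
    std::shared_lock<std::shared_mutex> rd (table_lock);
    auto it = global_table.find (id);
    if (it != global_table.end ())
      return it->second;
  }
  std::unique_lock<std::shared_mutex> wr (table_lock);
  return global_table.emplace (id, Collection (num_keys)).first->second;
}
```

As the section notes, once the collection is returned no further locking is needed: only the owning thread ever indexes into its own Collection.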

8.2 Implementing Interfaces to TS Object Collections


The following code shows how thread-specific key creation, storage, and retrieval might be implemented when TS Objects are stored internally to each thread using a fixed-sized array of MAX_THREAD_KEYS keys. This example is adapted from a publicly available implementation [5] of POSIX Pthreads [6]. The thread_state structure shown below contains the state of a thread. In addition to errno and the array of keys, this structure also includes a pointer to the thread's stack, space to store context switch data (e.g., program counter), etc.

struct thread_state
{
  // ...
  // Error number.
  int errno;

  // Thread-specific data.
  void *key[MAX_THREAD_KEYS];
  // ...
};

// All threads share the same key counter.
static int total_keys = 0;

// Exit hooks to clean up thread-specific keys.
static void (*thread_exit_hook[MAX_THREAD_KEYS]) (void *);

For a particular thread-specific object, the same key value is used to set and get thread-specific values for all threads. For instance, if Logger objects are being registered to keep track of some thread-specific attribute, the thread-specific Logger may be assigned some key value N. All threads accessing their thread-specific log value would use N to get and set values. Therefore, the total number of keys can be stored globally to all threads.

An array of function pointers is also stored globally. This array contains the thread exit hooks that automatically clean up thread-specific objects when a thread exits. A different function can be registered for each thread-specific object, but for each object, the same function is called for every thread. Since registering dynamically allocated objects as thread-specific is a common tactic, a thread exit hook typically looks like the following:

static void cleanup_tss_Logger (void *ptr)
{
  // This cast is necessary to invoke
  // the destructor (if necessary).
  delete (Logger *) ptr;
}

The thr_keycreate function allocates a key value for binding thread-specific data:3

// Create a new global key and specify
// a "destructor" function callback.
int thr_keycreate (int *key,
                   void (*exit_hook) (void *))
{
  if (total_keys >= MAX_THREAD_KEYS) {
    self ()->errno = ENOMEM;
    return -1;
  }
  thread_exit_hook[total_keys] = exit_hook;
  *key = total_keys++;
  return 0;
}

The thr_setspecific function binds value to the given key for the calling thread:

// Associate a value with a data key
// for the calling thread.
int thr_setspecific (int key, void *value)
{
  if (key < 0 || key >= total_keys) {
    self ()->errno = EINVAL;
    return -1;
  }
  self ()->key[key] = value;
  return 0;
}

Likewise, thr_getspecific retrieves into value the data bound to the given key for the calling thread:

// Retrieve a value from a data key
// for the calling thread.
int thr_getspecific (int key, void **value)
{
  if (key < 0 || key >= total_keys) {
    self ()->errno = EINVAL;
    return -1;
  }
  *value = self ()->key[key];
  return 0;
}

POSIX Pthreads allow a program to specify a pointer to a function that is called when a thread exits and has a thread-specific object registered for a key. The thr_exit function below shows how thread exit hook functions can be called:

// Terminate the thread and call thread exit hooks.
void thr_exit (void *status)
{
  // ...
  for (int i = 0; i < total_keys; i++)
    if (self ()->key[i] && thread_exit_hook[i])
      (*thread_exit_hook[i]) (self ()->key[i]);
  // ...
}

Because the data is stored internally in the state of the thread, neither of these functions requires any additional locks to access thread-specific data.

3 Note that self is a macro that refers to the context of the currently active thread, i.e., it is similar to a this pointer in C++.

8.3 Using Thread-Specific Storage in Applications

One way to utilize thread-specific storage is to call the C-level OS thread-specific library functions (such as thr_getspecific and thr_setspecific shown above) directly in application code. However, these C-level interfaces have the following limitations:

Non-type-safe: The POSIX Pthreads, Solaris, and Win32 thread-specific storage interfaces store pointers to thread-specific objects as void *s. Although this approach is flexible, it is easy to make mistakes since void *s eliminate type-safety.

Non-portable: The interfaces of POSIX Pthreads, Solaris threads, and Win32 threads are very similar. However, the semantics of Win32 threads are subtly different, since they do not provide a reliable means of cleaning up objects allocated in thread-specific storage when a thread exits. This makes it hard to write code that is portable between UNIX and Win32 platforms.

Hard to use: The example below illustrates the complexity required to use thread-specific data in a C function that can be called from more than one thread without having to write special initialization code:

static mutex_t keylock;
static thread_key_t key;
static int once = 0;

void *func (void)
{
  void *ptr = 0;

  // Use the Double-Checked Locking pattern
  // to serialize key creation without
  // forcing each access to be locked.
  if (once == 0) {
    mutex_lock (&keylock);
    if (once == 0) {
      thr_keycreate (&key, free);
      once = 1;
    }
    mutex_unlock (&keylock);
  }

  thr_getspecific (key, (void **) &ptr);

  if (ptr == NULL) {
    ptr = malloc (SIZE);
    thr_setspecific (key, ptr);
  }
  return ptr;
}

Even with error checking omitted, the locking operations shown above are fairly complex and non-intuitive. Note that this is actually a C implementation of the Double-Checked Locking pattern [7]. It is instructive to compare this C implementation to the C++ version in Section 9 to observe the greater simplicity and clarity resulting from using C++.

To overcome these limitations, additional classes and C++ wrappers can be developed to program thread-specific storage robustly. The following discusses two ways to encapsulate the low-level OS thread library interfaces. One way is to define the target class (i.e., the one that we want to be thread-specific) using thread-specific library routines directly. This approach could be implemented as follows:

1. Determine the state information that must be stored or retrieved in thread-specific storage.

2. Define an external class interface to this information.

3. Define an internal structure that contains the appropriate fields.

4. Use the thread-specific storage operations provided by the thread library to define a helper operation. Typically, this helper operation behaves as follows:

(a) Initialize a key for each thread-specific object.

(b) Use this key to get/set a thread-specific pointer to dynamically allocated memory containing an instance of the internal structure. Every method in the external interface will call this helper operation to obtain a pointer to the object that is placed in thread-specific storage.

(c) Once the external interface method has the pointer, it can perform TYPE-specific operations on the thread-specific object.

The advantage of this approach is that it shields applications from knowledge of the thread-specific operations. Unfortunately, this direct approach does not promote reusability, portability, or flexibility. In particular, for every thread-specific class, the developer needs to reimplement the thread-specific operations within the class. Thus, if the application is ported to another platform, the code internal to each thread-specific class must be altered to use the new thread library. In addition, making changes directly to the thread-specific class makes it difficult to change the threading policies. In other words, changing a thread-specific class to a global class would require intrusive changes to the code. More specifically, each access to state internal to the object would require code to first retrieve the state from thread-specific storage. A more reusable, portable, and flexible approach is to implement a TS Object Proxy that is responsible for all thread-specific operations. This approach is shown in Section 9 below. It allows classes to be decoupled from the knowledge of how thread-specific storage is implemented.
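The automatic-cleanup behavior that Section 8.2's thread exit hooks provide can be approximated in portable C++ with a thread_local RAII holder whose destructor plays the role of the hook. This is an illustrative sketch under that assumption; Holder, use_ts_object, and cleanups_run are hypothetical names, not library APIs:

```cpp
#include <cassert>
#include <atomic>
#include <thread>

// Counts how many times the "exit hook" has run.
std::atomic<int> cleanups_run{0};

// RAII holder: its destructor is the moral equivalent of the
// exit hook registered via thr_keycreate, deleting the
// dynamically allocated thread-specific object on thread exit.
struct Holder {
  int *obj = nullptr;
  ~Holder () {
    if (obj) {
      delete obj;
      ++cleanups_run;
    }
  }
};

thread_local Holder holder;

// Lazily allocate the calling thread's object, then use it.
void use_ts_object () {
  if (!holder.obj)
    holder.obj = new int (0);
  ++*holder.obj;
}
```

Each thread that touches the object gets its own allocation, and the destructor runs automatically when that thread exits, so no thread can leak another thread's storage and no explicit thr_exit bookkeeping is needed.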

9 Sample Code
The following section illustrates how to encapsulate a thread-specific toolkit (such as Solaris threads, POSIX Pthreads, or Win32 threads) using C++ wrappers. The forces that our solution resolves are code reusability, portability, and flexibility. To resolve these forces, we've defined a proxy class that is parameterized by the class whose objects will become thread-specific. Applications can invoke methods on this proxy as if they were calling the target class, by overloading the C++ delegation operator (operator->). Moreover, by using other C++ features like templates, the TS Object Proxy transparently transforms classes like Error_Handler into a type-safe, thread-specific class. Consider the following interface to a thread-specific wrapper:
template <class TYPE>
class TSS
{
public:
  // Constructor.
  TSS (void);

  // Destructor.
  ~TSS (void);

  // Use a "smart pointer" to get
  // thread-specific data.
  TYPE *operator-> ();

private:
  // Key for the thread-specific data.
  thread_key_t key_;

  // "First time in" flag.
  int once_;

  // Avoid race conditions during initialization.
  Mutex keylock_;
};

The key methods in this class are described below.

9.1 The operator-> Method


Almost all the work in the TSS class is performed in the operator-> method shown below (error checking has been minimized to save space):

template <class TYPE> TYPE *
TSS<TYPE>::operator-> ()
{
  TYPE *data = 0;

  // Use the Double-Checked Locking pattern to
  // avoid locking in the common case.

  // First check.
  if (this->once_ == 0) {
    // Ensure that we are serialized (the
    // constructor of Guard acquires the lock).
    Guard<Mutex> guard (this->keylock_);

    // Double check.
    if (this->once_ == 0) {
      Thread::keycreate (&this->key_,
                         &this->cleanup);
      // *Must* come last so that other threads
      // don't use the key until it's created.
      this->once_ = 1;
    }
    // Guard destructor releases the lock.
  }

  // Get the data from thread-specific storage.
  // Note that no locks are required here...
  Thread::getspecific (this->key_,
                       (void **) &data);

  // Check to see if this is the first time in
  // for this thread.
  if (data == 0) {
    // Allocate memory off the heap...
    data = new TYPE;

    // ...and store the dynamically allocated
    // pointer in thread-specific storage.
    Thread::setspecific (this->key_,
                         (void *) data);
  }
  return data;
}

Note the use of the Double-Checked Locking pattern above, where once_ is tested twice in the code. The reason for this is that although multiple threads can access the same TSS instance, only one thread should create a key (using Thread::keycreate). All threads will then use this key to access a thread-specific instance of an object. To ensure this, operator-> uses the mutex keylock_ to keep multiple threads from executing Thread::keycreate. The first thread that acquires keylock_ will set once_ to 1, and all subsequent threads that call operator-> will find once_ != 0 and will skip the initialization step. The second test of once_ handles the case where multiple threads executing in parallel queue up at keylock_ before the first thread has set once_ to 1. In this case, when the queued threads finally obtain the mutex keylock_, they will find once_ equal to 1 and will not execute Thread::keycreate.

Once the key is created, no further locking is necessary to access the thread-specific data. This is because the Thread::getspecific and Thread::setspecific functions retrieve the TS Object Collection from within the state of the calling thread. No additional locks are needed since this thread state is independent from other threads.

In addition to reducing locking overhead, the Thread-Specific Storage implementation shown above shields application code from knowledge of the fact that objects are specific to the calling thread. To accomplish this, the implementation uses C++ features such as templates, operator overloading, and smart pointers (i.e., operator->). The smart pointer idiom is used in this implementation to control all access to a thread-specific object. The operator-> method receives special treatment from the C++ compiler. It first obtains a pointer to the appropriate TYPE from thread-specific storage and then delegates the original operation invoked on it. An example of this is shown in Section 9.3 below.

9.2 The Constructor and Destructor

The constructor for the TSS class is minimal; it simply initializes the local instance variables:

template <class TYPE>
TSS<TYPE>::TSS (void)
  : once_ (0),
    key_ (0)
{
}

In particular, note that we don't allocate the TSS key or a new TYPE in the constructor. There are several reasons for this design:

Thread-specific storage semantics: The thread that initially creates the TSS object (e.g., the main thread) is often not the same thread(s) that use this object (e.g., the worker threads). Therefore, there is often no benefit in pre-initializing a new TYPE in the constructor, since it would only be accessible by the main thread.

Deferred initialization: On some OS platforms, TSS keys are a limited resource. For instance, Windows NT only allows a total of 64 TSS keys per process. Therefore, we don't want to allocate the keys until we absolutely must use them. Instead, we defer the initialization until the first time the operator-> method is called.

The destructor for TSS presents us with several tricky design issues. The obvious solution is to release the TSS key allocated in operator->. However, there are several problems with this:

Lack of features: Both Windows NT and POSIX Pthreads define an API to release the TSS key. However, Solaris threads do not. Therefore, it is hard to write a portable wrapper.

Race conditions: One reason that Solaris threads do not provide a function to release a TSS key is that it is costly to implement. The problem is that each thread separately maintains the objects referenced by that key. Only when all these threads have exited is it safe to release the key.
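On compilers with C++11 or later (well after this paper was written), the key allocation, key release, and Double-Checked Locking machinery above can be sidestepped with a function-local thread_local. This is a modernization sketch under that assumption, not the paper's implementation; Counter is a hypothetical example class:

```cpp
#include <cassert>
#include <thread>

// Hypothetical C++11 rendering of the TSS proxy: a
// function-local thread_local gives each thread its own
// lazily constructed TYPE, with thread-safe initialization
// guaranteed by the language and automatic destruction at
// thread exit, so no explicit key management is needed.
template <class TYPE>
class TSS {
public:
  TYPE *operator-> () {
    // One instance per thread (per TYPE), constructed on
    // the first call made by each thread.
    thread_local TYPE instance;
    return &instance;
  }
};

// Example thread-specific class.
struct Counter {
  int n = 0;
  void bump () { ++n; }
};
```

One design caveat: because the instance lives in the function, all TSS<TYPE> proxies of the same TYPE share one per-thread object, unlike the key-based design in which each proxy owns a distinct key.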

9.3 Use Case

Consider our continuing example of a thread-specific Logger being used by multiple worker threads:

static TSS<Logger> logger;

static void *
worker_thread (void *vp)
{
  // Network connection stream.
  SOCK_Stream *stream =
    static_cast<SOCK_Stream *> (vp);

  // Read from the network connection
  // and process the data until the
  // connection is closed.
  for (;;) {
    char buffer[BUFSIZ];
    int result = stream->recv (buffer, BUFSIZ);

    // Record the recv results in
    // thread-specific data.
    logger->log (result);
    process_buffer (buffer);
  }

  // Report on the total work done
  // by this thread.
  cout << "Thread " << thr_self ()
       << " processed " << logger->total ()
       << " bytes." << endl;
  return 0;
}

Consider the call to logger->log above. The compiler replaces this call with two function calls. The first is a call to TSS::operator->, which returns a Logger instance. The compiler then generates a second function call to the log method of the returned logger object. In this case, TSS behaves as a proxy that allows an application to access and manipulate a thread-specific object as if it were an ordinary C++ object. Note that the C++ operator-> does not work for built-in types like int, since there are no methods to delegate to. That is why we cannot use int in place of the Logger class used above.

The Logger example above is a good illustration of where using one logical access point is advantageous. Since the worker_thread function is global, it is not straightforward for threads to manage both a physical and logical separation of Logger objects. Instead, a thread-specific Logger allows multiple threads to use a single logical access point to manipulate physically separate objects.

9.4 Evaluation

The TSS proxy design based on the C++ operator-> illustrated above has the following benefits:

It maximizes code reuse by decoupling thread-specific operations from application-specific classes (i.e., the formal parameter class TYPE).

Porting an application to another thread library would only require changing the TSS class, not any applications using the class.

Changing a class to and from a thread-specific class simply requires changing object construction (which can be decided at run-time or at compile-time).

10 Known Uses

The UNIX errno mechanism (on platforms like Solaris that support multi-threading) and the Win32 GetLastError function are widely used examples of thread-specific storage. Thread-specific storage is also used within the ACE network programming toolkit [8] to implement its error handling scheme.

11 Related Patterns

An object implemented using thread-specific storage is basically a per-thread Singleton. Not all uses of thread-specific storage are Singletons, however. For example, a thread can have multiple instances of a type allocated from thread-specific storage. The TSS template class shown in Section 9 serves as a Proxy that shields libraries, frameworks, and applications from the implementation of thread-specific storage provided by operating system thread libraries.

Acknowledgements
Thanks to Peter Sommerlad for his help with this paper.

References
[1] Paul E. McKinney, "A Pattern Language for Parallelizing Existing Programs on Shared Memory Multiprocessors," in Pattern Languages of Program Design (J. O. Coplien, J. Vlissides, and N. Kerth, eds.), Reading, MA: Addison-Wesley, 1996.

[2] J. Ousterhout, "Why Threads Are A Bad Idea (for most purposes)," in USENIX Winter Technical Conference, (San Diego, CA), USENIX, Jan. 1996.

[3] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software. Reading, MA: Addison-Wesley, 1995.

[4] R. G. Lavender and D. C. Schmidt, "Active Object: an Object Behavioral Pattern for Concurrent Programming," in Pattern Languages of Program Design (J. O. Coplien, J. Vlissides, and N. Kerth, eds.), Reading, MA: Addison-Wesley, 1996.

[5] F. Mueller, "A Library Implementation of POSIX Threads Under UNIX," in Proceedings of the Winter USENIX Conference, (San Diego, CA), pp. 29-42, Jan. 1993.

[6] IEEE, Threads Extension for Portable Operating Systems (Draft 10), February 1996.

[7] D. C. Schmidt and T. Harrison, "Double-Checked Locking: An Object Behavioral Pattern for Initializing and Accessing Thread-safe Objects Efficiently," in Submitted to the 3rd Pattern Languages of Programming Conference, September 1996.

[8] D. C. Schmidt, "ACE: an Object-Oriented Framework for Developing Distributed Applications," in Proceedings of the 6th USENIX C++ Technical Conference, (Cambridge, Massachusetts), USENIX Association, April 1994.
