
The Serialization Algorithm

By now, you should have a pretty good feel for how the serialization mechanism works for individual classes. The next step in explaining serialization is to discuss the actual serialization algorithm in a little more detail. This discussion won't handle all the details of serialization.[5] Instead, the idea is to cover the algorithm and protocol, so you can understand how the various hooks for customizing serialization work and how they fit into the context of an RMI application.

The Data Format


The first step is to discuss what gets written to the stream when an instance is serialized. Be warned: it's a lot more information than you might guess from the previous discussion. An important part of serialization involves writing out class-related metadata associated with an instance. Most instances are instances of more than one class. For example, an instance of String is also an instance of Object. Any given instance, however, is an instance of only a few classes. These classes can be written as a sequence: C1, C2...CN, in which C1 is a superclass of C2, C2 is a superclass of C3, and so on. This is actually a linear sequence because Java is a single-inheritance language for classes. We call C1 the least superclass and CN the most-derived class (see Figure 10-4).

[Figure 10-4. Inheritance diagram]

After writing out the associated class information, the serialization mechanism writes out the following information for each instance:

A description of the most-derived class.

Data associated with the instance, interpreted as an instance of the least superclass.

Data associated with the instance, interpreted as an instance of the second-least superclass.

And so on, until:

Data associated with the instance, interpreted as an instance of the most-derived class.

So what really happens is that the type of the instance is written out, and then all the serializable state is stored in discrete chunks that correspond to the class structure. But there's a question still remaining: what do we mean by "a description of the most-derived class"? This is either a reference to a class description that has already been recorded (e.g., an earlier location in the stream) or the following information:

The version ID of the class, which is an integer used to validate the .class files
A boolean stating whether writeObject()/readObject() are implemented
The number of serializable fields
A description of each field (its name and type)
Extra data produced by ObjectOutputStream's annotateClass() method
A description of its superclass, if the superclass is serializable

This should, of course, immediately seem familiar. The class descriptions consist entirely of metadata that allows the instance to be read back in. In fact, this is one of the most beautiful aspects of serialization: the serialization mechanism automatically, at runtime, converts class objects into metadata so instances can be serialized with the least amount of programmer work.
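To make the metadata concrete, here is a minimal sketch of a serializable class that exercises the hooks just described; the Account class and its fields are invented for illustration:

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class Account implements Serializable {

    // The version ID recorded in the class description and used to
    // validate the .class files on deserialization.
    private static final long serialVersionUID = 1L;

    private String owner;
    private transient int cachedHash;   // transient: not a serializable field

    // The presence of these methods is recorded as a boolean in the
    // class description written to the stream.
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();       // write the serializable fields
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();         // read the serializable fields back
        cachedHash = (owner == null) ? 0 : owner.hashCode(); // rebuild transient state
    }
}

The serialization mechanism would record for Account: its version ID (1L), the fact that writeObject()/readObject() are present, one serializable field (owner, a String), and a description of its serializable superclasses (here, none).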

A Simplified Version of the Serialization Algorithm


In this section, I describe a slightly simplified version of the serialization algorithm. I then proceed to a more complete description of the serialization process in the next section.

Writing

Because the class descriptions actually contain the metadata, the basic idea behind the serialization algorithm is pretty easy to describe. The only tricky part is handling circular references. The problem is this: suppose instance A refers to instance B, and instance B refers back to instance A. Completely writing out A requires you to write out B. But writing out B requires you to write out A. Because you don't want to get into an infinite loop, or even write out an instance or a class description more than once,[6] you need to keep track of what's already been written to the stream.
ObjectOutputStream does this by maintaining a mapping from instances and classes to handles. When writeObject() is called with an argument that has already been written to the stream, the handle is written to the stream, and no further operations are necessary. If, however, writeObject() is passed an instance that has not yet been written to the stream, two things happen. First, the instance is assigned a reference handle, and the mapping from instance to reference handle is stored by ObjectOutputStream. The handle that is assigned is the next integer in a sequence.

TIP: Remember the reset() method on ObjectOutputStream? It clears the mapping and resets the handle counter to 0x7E0000. RMI also automatically resets its serialization mechanism after every remote method call.

Second, the instance data is written out as per the data format described earlier. This can involve some complications if the instance has a field whose value is also a serializable instance. In this case, the serialization of the first instance is suspended, and the second instance is serialized in its place (or, if the second instance has already been serialized, the reference handle for the second instance is written out). After the second instance is fully serialized, serialization of the first instance resumes. The contents of the stream look a little bit like Figure 10-5.

[Figure 10-5. Contents of serialization's data stream]
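The handle mechanism is what makes circular references safe, and it is easy to see in action. A minimal sketch (classes A and B are invented for illustration): two instances refer to each other, yet serialization terminates, and deserialization restores the cycle.

import java.io.*;

class A implements Serializable {
    B partner;
}

class B implements Serializable {
    A partner;
}

public class CycleDemo {
    public static void main(String[] args) throws Exception {
        A a = new A();
        B b = new B();
        a.partner = b;
        b.partner = a;          // circular reference

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(a);     // b is serialized in turn; the back-reference
        out.close();            // to a is written as a handle, not re-serialized

        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        A copy = (A) in.readObject();
        System.out.println(copy.partner.partner == copy);   // true: cycle restored
    }
}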

Reading

From the description of writing, it's pretty easy to guess most of what happens when readObject() is called. Unfortunately, because of versioning issues, the implementation of readObject() is actually a little bit more complex than you might guess. When it reads in an instance description, ObjectInputStream gets the following information:

Descriptions of all the classes involved
The serialization data from the instance

The problem is that the class descriptions that the instance of ObjectInputStream reads from the stream may not be equivalent to the class descriptions of the same classes in the local JVM. For example, if an instance is serialized to a file and then read back in three years later, there's a pretty good chance that the class definitions used to serialize the instance have changed. This means that ObjectInputStream uses the class descriptions in two ways:

It uses them to actually pull data from the stream, since the class descriptions completely describe the contents of the stream.
It compares the class descriptions to the classes it has locally and tries to determine if the classes have changed, in which case it throws an exception. If the class descriptions match the local classes, it creates the instance and sets the instance's state appropriately.
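In code, the mismatch case surfaces as an exception from readObject(). A sketch, assuming a previously written stream in a hypothetical file "account.ser" whose class description may no longer match the local class:

import java.io.FileInputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;

public class ReadBack {
    public static void main(String[] args) throws Exception {
        ObjectInputStream in =
                new ObjectInputStream(new FileInputStream("account.ser"));
        try {
            // readObject() compares the stream's class description
            // with the local class before creating the instance.
            Object obj = in.readObject();
            System.out.println("Read: " + obj);
        } catch (InvalidClassException e) {
            // Thrown when, e.g., the serialVersionUID recorded in the stream
            // does not match the local class's version ID.
            System.err.println("Class has changed since serialization: " + e);
        } finally {
            in.close();
        }
    }
}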

ConcurrentHashMap

Key structures implemented as part of the Java Collections API are various types of maps, in particular the hash map (via the HashMap class and other related classes). Maps allow you to associate keys with values and crop up in all sorts of uses, such as:

Caches: for example, after reading the contents of a given file or database table, we could associate the file name with its contents (or database key with a representation of the row data) in a HashMap;
Dictionaries: for example, we could associate locale abbreviations with a language name;
Sparse arrays: by mapping integers to values, we in effect create an array which does not waste space on blank elements.

Frequently-accessed hash maps can be important on server applications for caching purposes. And as such, they can receive a good deal of concurrent access. Before Java 5, the standard HashMap implementation had the weakness that accessing the map concurrently meant synchronizing on the entire map on each access. This means that, for example, a frequently-used cache implemented as a hash map can encounter high contention: multiple threads attempting to access the map at the same time frequently have to block waiting for one another.

Lock striping and ConcurrentHashMap


Synchronizing on the whole map fails to take advantage of a possible optimisation: because hash maps store their data in a series of separate buckets, it is in principle possible to lock only the portion of the map that is being accessed. This optimisation is generally called lock striping. Java 5 brings a hash map optimised in this way in the form of ConcurrentHashMap. A combination of lock striping plus judicious use of volatile variables gives the class two highly concurrent properties:

Writing to a ConcurrentHashMap locks only a portion of the map;
Reads can generally occur without locking.

Throughput and scalability of ConcurrentHashMap vs. synchronized HashMap


The benefits of ConcurrentHashMap over a regular synchronized HashMap become blatantly apparent when we run a small experiment to simulate what might happen in the case of a map used as a frequently-accessed cache on a moderately busy server. On the next page, we discuss the scalability of ConcurrentHashMap in the light of such a simulation.

ConcurrentHashMap: usage and functionality


On the previous page, we saw how the ConcurrentHashMap offers a means of improving concurrency beyond that of normal hash maps. In many cases, ConcurrentHashMap can be used as a drop-in replacement for a synchronized HashMap, and offers a means of avoiding synchronization in the traditional sense. (A couple of subtle differences are that ConcurrentHashMap will generally take up more memory, and that it cannot take null as a key.) Let's consider a web server that counts the number of instances of particular queries. We'll hold a map of query strings to integers and define an incrementCount() method which we can call at the moment of serving a particular query:
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public final class MyServlet extends MyAbstractServlet {

    // HashMap wrapped so that every call synchronizes on the map.
    private Map<String,Integer> queryCounts =
        Collections.synchronizedMap(new HashMap<String,Integer>(1000));

    private void incrementCount(String q) {
        Integer cnt = queryCounts.get(q);
        if (cnt == null) {
            queryCounts.put(q, 1);
        } else {
            queryCounts.put(q, cnt + 1);
        }
    }
}

In this example, we're using a plain old HashMap wrapped up in a synchronization wrapper. Recall that wrapping the map with Collections.synchronizedMap(...) makes it safe to access the map concurrently: each call to get(), put(), size(), containsKey() etc. will synchronize on the map during the call. (One problem that we'll see in a minute is that iterating over the map does still require explicit synchronization.)

Note that this doesn't make incrementCount() atomic, but it does make it safe. That is, concurrent calls to incrementCount() will never leave the map in a corrupted state. But they might 'miss a count' from time to time. For example, two threads could concurrently read a current value of, say, 2 for a particular query, both independently increment it to 3, and both set it to 3, when in fact two queries have been made. Generally, in the context of counting queries, we'd probably live with this: it's quite unlikely that two clients are making the selfsame query at exactly the same time, and even if they were, we wouldn't really care about missing the odd count here and there in order to improve performance.

In this example, we can improve concurrency in a single line by replacing our synchronized hash map with a ConcurrentHashMap:
private Map<String,Integer> queryCounts = new ConcurrentHashMap<String,Integer>(1000);

Note that our incrementCount() will still have the same semantics: that is, it will never leave the map in an inconsistent state, but it could still miss a count in an unlucky case.

Truly atomic updates


So what if we want truly atomic updates: that is, to make incrementCount() never miss a count? To do this with a traditional HashMap, we could synchronize on the map during the entire incrementCount() method, with a potential impact on throughput. With ConcurrentHashMap, we can take advantage of its concurrent update facility. ConcurrentHashMap implements the following interface:
public interface ConcurrentMap<K, V> extends Map<K, V> {
    V putIfAbsent(K key, V value);
    boolean remove(Object key, Object value);
    boolean replace(K key, V oldValue, V newValue);
    V replace(K key, V value);
}

In our case, the interesting methods are putIfAbsent() and the replace() methods, which are effectively compare-and-set operations for a map. So we can implement our incrementCount() method as follows. Note that we now need to change the signature of our queryCounts map and declare it as a ConcurrentMap:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public final class MyServlet extends MyAbstractServlet {

    private ConcurrentMap<String,Integer> queryCounts =
        new ConcurrentHashMap<String,Integer>(1000);

    private void incrementCount(String q) {
        Integer oldVal, newVal;
        do {
            oldVal = queryCounts.get(q);
            newVal = (oldVal == null) ? 1 : (oldVal + 1);
            // replace() rejects a null expected value, so the first count
            // for a query must go in via putIfAbsent() instead.
        } while (oldVal == null
                 ? queryCounts.putIfAbsent(q, newVal) != null
                 : !queryCounts.replace(q, oldVal, newVal));
    }
}

This code is very similar to the code to update an AtomicInteger: we read the current value of the count, calculate the new count, and then say to the ConcurrentHashMap: "please map this key to this new value, if and only if the previously mapped value was this". If the call returns false to say that we were wrong about the previously mapped value, indicating in effect that another thread has "snuck in", then we simply loop round and try again. As with AtomicInteger updates, this is very efficient, because we rarely expect another thread to sneak in, and when one does, we can keep hold of the CPU rather than having to sleep while the other thread releases the lock.

Iterating over the map


In our case of counting web queries so far, you may be wondering "what's the big deal"? Of course, there is the argument that on a busy server, anything that helps improve throughput is a big deal. But in this case, most of the operations on the map are very quick and occur only once per query, so the map won't be highly contended. In this case, a bigger benefit comes when we want to iterate over the map.


Iterating over ConcurrentHashMap


On the previous page, we gave an example of ConcurrentHashMap, using one to store a record of count-per-query on a web server. Arguably, each count will be "in and out", and one might argue that the improvement in throughput over a regular synchronized hash map won't be so great, since contention won't generally be so high.

An additional benefit of ConcurrentHashMap is that we can iterate over the map without locking. (Indeed, it is not actually possible to lock a ConcurrentHashMap during this or any other operation.) Recall that with an ordinary HashMap (even one wrapped in a Collections.synchronizedMap(...) wrapper), iteration over the map must occur whilst synchronized on the map in order to be thread-safe. If, due to incorrect synchronization, one thread does update the map whilst another is iterating over it, one of the threads is liable to throw a ConcurrentModificationException. In contrast, whilst one or more threads is iterating over a ConcurrentHashMap, other threads can concurrently access the map. Such operations will never throw a ConcurrentModificationException. In this case, the thread that is iterating over the map is not guaranteed to "see" updates since the iteration began, but it will still see the map in a "safe state", reflecting at the very least the state of the map at the time iteration began. This is both good news and bad news:

Good news: it is perfect for cases where we want iteration not to affect concurrency, at the expense of possibly missing an update while iterating (e.g. in our imaginary web server, while iterating in order to persist the current query counts to a database: we probably wouldn't care about missing the odd count);
Bad news: because there's no way to completely lock a ConcurrentHashMap, there's no easy option for taking a "snapshot" of the map as a truly atomic operation.
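A small sketch (with invented names) makes the point: iterating over a ConcurrentHashMap whilst another thread writes to it never throws a ConcurrentModificationException, though the iterator may or may not see the concurrent updates.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IterationDemo {
    public static void main(String[] args) throws InterruptedException {
        final Map<String,Integer> counts = new ConcurrentHashMap<String,Integer>();
        for (int i = 0; i < 1000; i++) counts.put("query" + i, i);

        // Writer thread: keeps mutating the map during iteration.
        Thread writer = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 1000; i++) counts.put("extra" + i, i);
            }
        });
        writer.start();

        // Iteration proceeds without locking and never throws
        // ConcurrentModificationException; it may or may not see
        // the writer's updates (a weakly consistent view).
        int sum = 0;
        for (Map.Entry<String,Integer> e : counts.entrySet()) sum += e.getValue();

        writer.join();
        System.out.println("Sum seen by iterator: " + sum);
    }
}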

Garbage collection
Reference counting is a form of garbage collection whereby each object has a count of the number of references to it. Garbage is identified by having a reference count of zero. An object's reference count is incremented when a reference to it is created, and decremented when a reference is destroyed. The object's memory is reclaimed when the count reaches zero.

Compared to tracing garbage collection, reference counting guarantees that objects are destroyed as soon as they become unreachable (assuming that there are no reference cycles), and usually only accesses memory which is either in CPU caches, in objects to be freed, or directly pointed to by those; it thus tends not to have significant negative side effects on CPU cache and virtual-memory operation. There are some disadvantages to reference counting:

If two or more objects refer to each other, they can create a cycle whereby neither will be collected, as their mutual references never let their reference counts become zero. Some garbage collection systems using reference counting (like the one in CPython) use specific cycle-detecting algorithms to deal with this issue.[9] Another strategy is to use weak references for the "backpointers" which create cycles. Under reference counting, a weak reference is similar to a weak reference under a tracing garbage collector: it is a special reference object whose existence does not increment the reference count of the referent object. Furthermore, a weak reference is safe in that when the referent object becomes garbage, any weak reference to it lapses, rather than being permitted to remain dangling, meaning that it turns into a predictable value, such as a null reference.

In naive implementations, each assignment of a reference and each reference falling out of scope often require modifications of one or more reference counters. However, in the common case, when a reference is copied from an outer-scope variable into an inner-scope variable, such that the lifetime of the inner variable is bounded by the lifetime of the outer one, the reference incrementing can be eliminated; the outer variable "owns" the reference. In the programming language C++, this technique is readily implemented and demonstrated with the use of const references. Reference counting in C++ is usually implemented using "smart pointers" whose constructors, destructors and assignment operators manage the references. A smart pointer can be passed by reference to a function, which avoids the need to copy-construct a new reference (which would increase the reference count on entry into the function and decrease it on exit); instead, the function receives a reference to the smart pointer, which is produced inexpensively.

When used in a multithreaded environment, these modifications (increment and decrement) may need to be atomic operations such as compare-and-swap, at least for any objects which are shared, or potentially shared, among multiple threads. Atomic operations are expensive on a multiprocessor, and even more expensive if they have to be emulated with software algorithms. It is possible to avoid this issue by adding per-thread or per-CPU reference counts and only accessing the global reference count when the local reference counts become or are no longer zero (or, alternatively, using a binary tree of reference counts, or even giving up deterministic destruction in exchange for not having a global reference count at all), but this adds significant memory overhead and thus tends to be useful only in special cases (it is used, for example, in the reference counting of Linux kernel modules).

Naive implementations of reference counting do not in general provide real-time behavior, because any pointer assignment can potentially cause a number of objects bounded only by total allocated memory size to be recursively freed while the thread is unable to perform other work. It is possible to avoid this issue by delegating the freeing of objects whose reference count dropped to zero to other threads, at the cost of extra overhead.
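Java itself uses tracing collection rather than reference counting, but a toy sketch shows why counter updates must be atomic in a multithreaded setting; the RefCounted class and its method names are invented for illustration:

import java.util.concurrent.atomic.AtomicInteger;

public class RefCounted {

    // Atomic so that concurrent retain()/release() calls from multiple
    // threads cannot lose an increment or decrement.
    private final AtomicInteger refCount = new AtomicInteger(1);

    public void retain() {
        refCount.incrementAndGet();
    }

    public void release() {
        // When the count reaches zero, the resource can be reclaimed.
        if (refCount.decrementAndGet() == 0) {
            dispose();
        }
    }

    protected void dispose() {
        // free the underlying resource here
    }
}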

Strong and Weak references


The garbage collector can reclaim only objects that have no references. However, there can exist additional references that, in a sense, do not matter, which are called weak references. In discussions about weak references, ordinary references are sometimes called strong references. An object is eligible for garbage collection if there are no strong (i.e. ordinary) references to it, even though there still might be some weak references to it.

A weak reference is not just any pointer to the object that a garbage collector does not care about. The term is usually reserved for a properly managed category of special reference objects which are safe to use even when the object disappears, because they lapse to a safe value. An unsafe reference that is not known to the garbage collector will simply remain dangling by continuing to refer to the address where the object previously resided. This is not a weak reference. In some implementations, notably in Microsoft .NET,[3] weak references are divided into two further subcategories: long weak references (which track resurrection) and short weak references.
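In Java, this lapsing behaviour is easy to demonstrate with java.lang.ref.WeakReference. A minimal sketch; note that System.gc() is only a hint, so the final line is not strictly guaranteed to print null:

import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<Object>(strong);

        System.out.println(weak.get() != null); // true: a strong reference still exists

        strong = null;   // drop the only strong reference
        System.gc();     // request (not guarantee) a collection

        // After collection, the weak reference "lapses" to null
        // rather than dangling at the old address.
        System.out.println(weak.get());  // likely null, subject to GC timing
    }
}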

Weak Collections


Objects which maintain collections of other objects can also be devised to have weak tracking features. For instance, weak hash tables are useful: like a regular hash table, a weak hash table maintains an association between pairs of objects, where each pair is understood to be a key and value. However, the hash table does not actually maintain a strong reference on these objects. A special behavior takes place when either the key or value or both become garbage: the hash table entry is spontaneously deleted. There exist further refinements, such as hash tables which have only weak keys (value references are ordinary, strong references) or only weak values (key references are strong).

Weak hash tables are important for maintaining associations between objects, such that the objects engaged in the association can still become garbage if nothing in the program refers to them any longer (other than the associating hash table). The use of a regular hash table for such a purpose could lead to a "logical memory leak": the accumulation of reachable data which the program does not need and will not use.
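Java's WeakHashMap is exactly such a weak-keyed hash table. A small sketch; again, System.gc() is only a hint, so the timing of entry removal is not guaranteed:

import java.util.Map;
import java.util.WeakHashMap;

public class WeakMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object,String> cache = new WeakHashMap<Object,String>();
        Object key = new Object();
        cache.put(key, "metadata for key");

        System.out.println(cache.size()); // 1 while the key is strongly reachable

        key = null;      // drop the last strong reference to the key
        System.gc();     // hint; entry removal depends on an actual collection
        Thread.sleep(100);

        // Once the key is collected, its entry disappears from the map,
        // so the map never causes a "logical memory leak".
        System.out.println(cache.size()); // typically 0 after collection
    }
}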

Basic algorithm


Tracing collectors are so called because they trace through the working set of memory. These garbage collectors perform collection in cycles. A cycle is started when the collector decides (or is notified) that it needs to reclaim memory, which happens most often when the system is low on memory. The original method involves a naïve mark-and-sweep in which the entire memory set is touched several times.

Naïve mark-and-sweep

In the naïve mark-and-sweep method, each object in memory has a flag (typically a single bit) reserved for garbage collection use only. This flag is always cleared, except during the collection cycle. The first stage of collection does a tree traversal of the entire 'root set', marking each object that is pointed to as being 'in-use'. All objects that those objects point to, and so on, are marked as well, so that every object that is ultimately pointed to from the root set is marked. Finally, all memory is scanned from start to finish, examining all free or used blocks; those with the in-use flag still cleared are not reachable by any program or data, and their memory is freed. (For objects which are marked in-use, the in-use flag is cleared again, preparing for the next cycle.)

This method has several disadvantages, the most notable being that the entire system must be suspended during collection; no mutation of the working set can be allowed. This will cause programs to 'freeze' periodically (and generally unpredictably), making real-time and time-critical applications impossible. In addition, the entire working memory must be examined, much of it twice, potentially causing problems in paged memory systems.
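The algorithm is simple enough to sketch. The toy Java below uses invented structures (a real collector works on raw memory, not ordinary objects): it marks everything reachable from a root, then sweeps the unmarked blocks.

import java.util.ArrayList;
import java.util.List;

class Node {
    boolean marked;                      // the per-object GC flag
    final List<Node> refs = new ArrayList<Node>();
}

public class MarkSweepDemo {

    // Mark phase: traverse everything reachable from a root.
    static void mark(Node n) {
        if (n == null || n.marked) return;
        n.marked = true;
        for (Node child : n.refs) mark(child);
    }

    // Sweep phase: scan the whole "heap", freeing unmarked blocks and
    // clearing marks on survivors, ready for the next cycle.
    static void sweep(List<Node> heap) {
        for (int i = heap.size() - 1; i >= 0; i--) {
            Node n = heap.get(i);
            if (n.marked) n.marked = false;
            else heap.remove(i);         // unreachable: reclaim it
        }
    }

    public static void main(String[] args) {
        List<Node> heap = new ArrayList<Node>();
        Node root = new Node(), live = new Node(), garbage = new Node();
        root.refs.add(live);
        heap.add(root); heap.add(live); heap.add(garbage);

        mark(root);
        sweep(heap);
        System.out.println(heap.size()); // 2: 'garbage' was reclaimed
    }
}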

Thread Local

Managing Data with the ThreadLocal Class


by Keld H. Hansen

Recently one of my colleagues introduced me to a class from the JDK with which I was not familiar: ThreadLocal. This class allows you to put local data on a thread, so that every module running in the thread can access it. ThreadLocal has been around since JDK 1.2, but hasn't been used much, maybe because of a first, rather poor implementation, performance-wise.

The ThreadLocal Class


The first part of ThreadLocal's documentation contains this text:

"This class provides thread-local variables. These variables differ from their normal counterparts in that each thread that accesses one (via its get or set method) has its own, independently initialized copy of the variable. ThreadLocal instances are typically private static fields in classes that wish to associate state with a thread (e.g., a user ID or Transaction ID)."

An example may help clarify this concept. A servlet is executed in a thread, but since many users may use the same servlet at the same time, many threads will be running the same servlet code concurrently. If the servlet uses a ThreadLocal object, it can hold data local to each thread. The user ID is a good example of what could be stored in the ThreadLocal object. I like to think of this object as a hash map where a kind of thread ID is used as the key.
ThreadLocal contains these methods:

Object get(): returns the value for the current thread
set(Object): sets a new value for the current thread
Object initialValue(): used to return an initial value (if ThreadLocal is subclassed)
remove(): in JDK 5 only; used to delete the current thread's value (for clean-up only)

The simplest way to use a ThreadLocal object is to implement it as a singleton. Here's an example in which the value stored in the ThreadLocal is a List:
import java.util.List;

public class MyThreadLocal {

    private static ThreadLocal tLocal = new ThreadLocal();

    public static void set(List list) {
        tLocal.set(list);
    }

    public static List get() {
        return (List) tLocal.get();
    }

    . . .
}

This makes it simple to set or get the current thread's value:


MyThreadLocal.set(list);
. . .
list = MyThreadLocal.get();

The first time you use this technique, it may seem a bit like magic, but behind the scenes, the local data is simply fetched using a unique ID of the thread.

The JDK documentation quoted earlier continues with an example: the class below generates unique identifiers local to each thread. A thread's id is assigned the first time it invokes UniqueThreadIdGenerator.getCurrentThreadId() and remains unchanged on subsequent calls.
import java.util.concurrent.atomic.AtomicInteger;

public class UniqueThreadIdGenerator {

    private static final AtomicInteger uniqueId = new AtomicInteger(0);

    private static final ThreadLocal<Integer> uniqueNum =
        new ThreadLocal<Integer>() {
            @Override
            protected Integer initialValue() {
                return uniqueId.getAndIncrement();
            }
        };

    // Return this thread's copy of the id. (The original snippet mistakenly
    // returned the global counter via uniqueId.get().)
    public static int getCurrentThreadId() {
        return uniqueNum.get();
    }
} // UniqueThreadIdGenerator

Each thread holds an implicit reference to its copy of a thread-local variable as long as the thread is alive and the ThreadLocal instance is accessible; after a thread goes away, all of its copies of thread-local instances are subject to garbage collection (unless other references to these copies exist).
