This simple mechanism is similar to those used, at some level, in the Solaris-9 thread library [11], in WIN32 "consumable events", and in the Linux NPTL thread library, and so maps efficiently to each of these on the most common platforms Java runs on. (However, the current Sun Hotspot JVM reference implementation on Solaris and Linux actually uses a pthread condvar in order to fit into the existing runtime design.) The park method also supports optional relative and absolute timeouts, and is integrated with JVM Thread.interrupt support: interrupting a thread unparks it.
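This park/unpark mechanism is exposed as java.util.concurrent.locks.LockSupport. A minimal sketch of its permit semantics (the demonstration class and method names here are illustrative, not part of the framework): an unpark issued before park is not lost, because it grants a permit that the subsequent park consumes.

```java
import java.util.concurrent.locks.LockSupport;

class ParkDemo {
    // Returns true once park() returns; it will return immediately here,
    // because the permit was granted (via unpark) before park was called.
    static boolean parkWithPermit() {
        Thread t = Thread.currentThread();
        LockSupport.unpark(t);   // grant the permit in advance
        LockSupport.park();      // consumes the permit; does not block
        return true;
    }
}
```

The timed variants mentioned above correspond to LockSupport.parkNanos (relative) and LockSupport.parkUntil (absolute deadline).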
3.3 Queues
The heart of the framework is maintenance of queues of blocked threads, which are restricted here to FIFO queues. Thus, the framework does not support priority-based synchronization.
These days, there is little controversy that the most appropriate choices for synchronization queues are non-blocking data structures that do not themselves need to be constructed using lower-level locks. And of these, there are two main candidates: variants of Mellor-Crummey and Scott (MCS) locks [9], and variants of Craig, Landin, and Hagersten (CLH) locks [5][8][10]. Historically, CLH locks have been used only in spinlocks. However, they appeared more amenable than MCS for use in the synchronizer framework because they are more easily adapted to handle cancellation and timeouts, so were chosen as a basis. The resulting design is far enough removed from the original CLH structure to require explanation.
A CLH queue is not very queue-like, because its enqueuing and dequeuing operations are intimately tied to its use as a lock. It is a linked queue accessed via two atomically updatable fields, head and tail, both initially pointing to a dummy node.
[Figure: a CLH queue. The head and tail fields both point initially to a dummy node; each node holds a status field.]
A new node, node, is enqueued using an atomic operation:

    do {
        pred = tail;
    } while (!tail.compareAndSet(pred, node));

The release status for each node is kept in its predecessor node. So, the "spin" of a spinlock looks like:

    while (pred.status != RELEASED); // spin

A dequeue operation after this spin simply entails setting the head field to the node that just got the lock:

    head = node;
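Putting the three steps together, a minimal CLH spinlock can be sketched as follows. This is a hypothetical illustration of the technique, not the framework's actual code; the class, field, and constant names are invented for the example.

```java
import java.util.concurrent.atomic.AtomicReference;

class ClhLock {
    static final int WAITING = 0, RELEASED = 1;
    static final class Node { volatile int status = WAITING; }

    private final AtomicReference<Node> tail;
    private volatile Node head;
    private final ThreadLocal<Node> current = new ThreadLocal<>();

    ClhLock() {
        Node dummy = new Node();
        dummy.status = RELEASED;            // dummy node: lock starts free
        tail = new AtomicReference<>(dummy);
        head = dummy;
    }

    void lock() {
        Node node = new Node();
        current.set(node);
        Node pred;
        do {                                // atomic enqueue at the tail
            pred = tail.get();
        } while (!tail.compareAndSet(pred, node));
        while (pred.status != RELEASED)     // spin on predecessor's status
            Thread.onSpinWait();
        head = node;                        // dequeue: node now holds the lock
    }

    void unlock() {
        current.get().status = RELEASED;    // successor's spin observes this
    }
}
```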
Among the advantages of CLH locks are that enqueuing and dequeuing are fast, lock-free, and obstruction-free (even under contention, one thread will always win an insertion race so will make progress); that detecting whether any threads are waiting is also fast (just check if head is the same as tail); and that release status is decentralized, avoiding some memory contention.
In the original versions of CLH locks, there were not even links connecting nodes. In a spinlock, the pred variable can be held as a local. However, Scott and Scherer [10] showed that by explicitly maintaining predecessor fields within nodes, CLH locks can deal with timeouts and other forms of cancellation: If a node's predecessor cancels, the node can slide up to use the previous node's status field.
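The slide-up on cancellation can be sketched as follows. This is a hypothetical fragment, assuming each node carries an explicit prev link and a CANCELLED status value; the names are illustrative, not the framework's.

```java
class ClhCancel {
    static final int WAITING = 0, RELEASED = 1, CANCELLED = 2;
    static final class Node {
        volatile int status = WAITING;
        volatile Node prev;   // explicit predecessor link (Scott and Scherer)
    }

    // Spin on the predecessor's status; if the predecessor cancels,
    // slide up to spin on its predecessor's status instead.
    static Node awaitPredecessor(Node pred) {
        while (pred.status != RELEASED) {
            if (pred.status == CANCELLED) pred = pred.prev;
            else Thread.onSpinWait();
        }
        return pred;          // the node whose release was actually observed
    }
}
```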
The main additional modification needed to use CLH queues for blocking synchronizers is to provide an efficient way for one node to locate its successor. In spinlocks, a node need only change its status, which will be noticed on next spin by its successor, so links are unnecessary. But in a blocking synchronizer, a node needs to explicitly wake up (unpark) its successor.
An AbstractQueuedSynchronizer queue node contains a next link to its successor. But because there are no applicable techniques for lock-free atomic insertion of double-linked list nodes using compareAndSet, this link is not atomically set as part of insertion; it is simply assigned:

    pred.next = node;

after the insertion. This is reflected in all usages. The next link is treated only as an optimized path. If a node's successor does not appear to exist (or appears to be cancelled) via its next field, it is always possible to start at the tail of the list and traverse backwards using the pred field to accurately check if there really is one.
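This backward traversal can be sketched as follows. It is a hypothetical helper, not the framework's actual code; it assumes nodes with next, prev, and status fields named as shown.

```java
class SuccessorScan {
    static final int WAITING = 0, CANCELLED = 2;
    static final class Node {
        volatile Node next;   // optimized path; may lag behind or be stale
        volatile Node prev;   // always accurate: written before the tail CAS
        volatile int status = WAITING;
    }

    // Locate node's effective successor. Try the next hint first; if it is
    // missing or cancelled, traverse backwards from tail via prev links,
    // keeping the last non-cancelled node seen before reaching node.
    static Node successorOf(Node node, Node tail) {
        Node s = node.next;
        if (s != null && s.status != CANCELLED) return s;
        s = null;
        for (Node p = tail; p != null && p != node; p = p.prev)
            if (p.status != CANCELLED) s = p;
        return s;
    }
}
```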