an unbiased chance of winning a race with any incoming barging thread, reblocking and retrying if it loses. However, if incoming threads arrive faster than it takes an unparked thread to unblock, the first thread in the queue will only rarely win the race, so will almost always reblock, and its successors will remain blocked. With briefly-held synchronizers, it is common for multiple bargings and releases to occur on multiprocessors during the time the first thread takes to unblock. As seen below, the net effect is to maintain high rates of progress of one or more threads while still at least probabilistically avoiding starvation.
When greater fairness is required, it is a relatively simple matter to arrange it. Programmers requiring strict fairness can define tryAcquire to fail (return false) if the current thread is not at the head of the queue, checking for this using method getFirstQueuedThread, one of a handful of supplied inspection methods.
A faster, less strict variant is to also allow tryAcquire to succeed if the queue is (momentarily) empty. In this case, multiple threads encountering an empty queue may race to be the first to acquire, normally without enqueuing at least one of them. This strategy is adopted in all java.util.concurrent synchronizers supporting a "fair" mode.
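A minimal sketch of such a fairness check, assuming a non-reentrant lock; the class name FairMutexSync and the 0/1 state encoding are illustrative and not part of the framework itself:

    import java.util.concurrent.locks.AbstractQueuedSynchronizer;

    // Illustrative-only sketch: a non-reentrant lock whose tryAcquire
    // defers to the head of the wait queue.
    class FairMutexSync extends AbstractQueuedSynchronizer {
        protected boolean tryAcquire(int ignore) {
            Thread first = getFirstQueuedThread();
            // Fair mode: defer to any thread already queued ahead of us;
            // succeed if we are first in line or the queue is momentarily
            // empty (the stricter variant would also fail when the queue
            // is empty unless this thread is actually its head).
            if (first != null && first != Thread.currentThread())
                return false;
            return compareAndSetState(0, 1);   // 0 = unlocked, 1 = locked
        }
        protected boolean tryRelease(int ignore) {
            setState(0);
            return true;
        }
    }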
While they tend to be useful in practice, fairness settings have no guarantees, because the Java Language Specification does not provide scheduling guarantees. For example, even with a strictly fair synchronizer, a JVM could decide to run a set of threads purely sequentially if they never otherwise need to block waiting for each other. In practice, on a uniprocessor, such threads are likely to each run for a time quantum before being pre-emptively context-switched. If such a thread is holding an exclusive lock, it will soon be momentarily switched back, only to release the lock and block now that it is known that another thread needs the lock, thus further increasing the periods during which a synchronizer is available but not acquired. Synchronizer fairness settings tend to have even greater impact on multiprocessors, which generate more interleavings, and hence more opportunities for one thread to discover that a lock is needed by another thread.
Even though they may perform poorly under high contention when protecting briefly-held code bodies, fair locks work well, for example, when they protect relatively long code bodies and/or with relatively long inter-lock intervals, in which case barging provides little performance advantage but greater risk of indefinite postponement. The synchronizer framework leaves such engineering decisions to its users.

4.2 Synchronizers

Here are sketches of how java.util.concurrent synchronizer classes are defined using this framework:

The ReentrantLock class uses synchronization state to hold the (recursive) lock count. When a lock is acquired, it also records the identity of the current thread to check recursions and detect illegal state exceptions when the wrong thread tries to unlock. The class also uses the provided ConditionObject, and exports other monitoring and inspection methods. The class supports an optional "fair" mode by internally declaring two different AbstractQueuedSynchronizer subclasses (the fair one disabling barging) and setting each ReentrantLock instance to use the appropriate one upon construction.
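As a concrete illustration of that structure, the following is a hedged sketch under the assumption that owner tracking can be done with a plain field; the names SketchLock, Sync, NonfairSync, FairSync, and attemptAcquire are invented for illustration and this is not the actual ReentrantLock source:

    import java.util.concurrent.locks.AbstractQueuedSynchronizer;

    class SketchLock {
        abstract static class Sync extends AbstractQueuedSynchronizer {
            private transient Thread owner;   // thread currently holding the lock

            // Shared acquire logic: synchronization state holds the recursive
            // hold count; reentrant acquires by the owner just bump it.
            final boolean attemptAcquire(int acquires) {
                Thread current = Thread.currentThread();
                int c = getState();
                if (c == 0) {
                    if (compareAndSetState(0, acquires)) {
                        owner = current;
                        return true;
                    }
                } else if (current == owner) {
                    setState(c + acquires);
                    return true;
                }
                return false;
            }

            protected final boolean tryRelease(int releases) {
                if (Thread.currentThread() != owner)
                    throw new IllegalMonitorStateException();
                int c = getState() - releases;
                boolean free = (c == 0);
                if (free)
                    owner = null;
                setState(c);
                return free;
            }
        }

        // Barging (default) mode: any caller may try to grab the lock at once.
        static final class NonfairSync extends Sync {
            protected boolean tryAcquire(int acquires) {
                return attemptAcquire(acquires);
            }
        }

        // Fair mode: a newly arriving thread defers to any thread queued ahead of it.
        static final class FairSync extends Sync {
            protected boolean tryAcquire(int acquires) {
                Thread first = getFirstQueuedThread();
                if (getState() == 0 && first != null && first != Thread.currentThread())
                    return false;              // barging disabled
                return attemptAcquire(acquires);
            }
        }

        private final Sync sync;
        SketchLock(boolean fair) { sync = fair ? new FairSync() : new NonfairSync(); }
        public void lock()   { sync.acquire(1); }
        public void unlock() { sync.release(1); }
    }

Construction selects the fair or barging subclass once per instance, mirroring the per-instance mode selection described above.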
5. PERFORMANCE

While the synchronizer framework supports many other styles of synchronization in addition to mutual exclusion locks, lock performance is simplest to measure and compare. Even so, there are many different approaches to measurement. The experiments here are designed to reveal overhead and throughput.

In each test, each thread repeatedly updates a pseudo-random number computed using function nextRandom(int seed):

    int t = (seed % 127773) * 16807 -
            (seed / 127773) * 2836;
    return (t > 0) ? t : t + 0x7fffffff;
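For concreteness, here is a sketch of the kind of per-thread test loop this implies; the class name LockLoop, the iteration scheme, and the use of a shared ReentrantLock are assumptions for illustration rather than the actual test harness:

    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical per-thread test body: each iteration acquires the lock,
    // advances a shared pseudo-random seed with nextRandom, and releases.
    class LockLoop implements Runnable {
        static final ReentrantLock lock = new ReentrantLock(); // new ReentrantLock(true) for fair mode
        static int seed = 1;
        final int iterations;

        LockLoop(int iterations) { this.iterations = iterations; }

        static int nextRandom(int seed) {
            int t = (seed % 127773) * 16807 - (seed / 127773) * 2836;
            return (t > 0) ? t : t + 0x7fffffff;
        }

        public void run() {
            for (int i = 0; i < iterations; ++i) {
                lock.lock();
                try {
                    seed = nextRandom(seed);
                } finally {
                    lock.unlock();
                }
            }
        }
    }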